Just today, OpenAI released its new o1 series of models. This is a brand-new line focused on enhanced reasoning: the model performs internal, multi-step thinking before it gives its final answer, which makes it well suited to scientific research, code writing, and other scenarios that require complex reasoning.

The cost, of course, is that the model responds slowly, and because it needs multiple rounds of internal thinking it cannot support streaming either. In addition, the tokens consumed while thinking are billed to the user, so the model is, unsurprisingly, very expensive.

You can now try the o1 models in Drop Smart Chat. At this stage we provide two of them: o1-preview and o1-mini.

Pricing
o1-preview: input 0.3, output 0.8
o1-mini: input 0.06, output 0.25

Limitations: the series is still in closed beta and OpenAI allocates very little capacity; each account is limited to 20 requests per minute, so please use it sparingly. We will lower prices further once OpenAI releases more capacity.

The API is also available: you can now use o1-preview and o1-mini as the model name with the Drop Smart Chat API. Note, however, that the models do not support streaming; turning it on returns an error. Parameters for any other unsupported features are simply discarded. […]
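Below is a minimal sketch of calling the o1 models through the API's OpenAI-compatible interface. The base URL and API key are placeholders rather than the service's real values; the point is simply that streaming must stay off for these models.

```python
from openai import OpenAI

# Placeholder endpoint and key; substitute your own Drop Smart Chat credentials.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

# o1 models do not support streaming, so leave stream disabled (enabling it returns an error).
response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini"
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    stream=False,
)
print(response.choices[0].message.content)
```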
Now you can use Google's Gemini 1.5 in the Drop Smart Chat client (www.chatai.lol)... We did support this model before, but due to some technical limitations its responses were not very stable. Those difficulties are now resolved, and the Gemini models are officially available!

Google's Gemini models are somewhat less capable than GPT-4o, but they are cheap, and their context window reaches millions of tokens!

Pricing
The existing Gemini 1.5 Flash pricing has been adjusted (a price cut!): input drops from 0.05 to 0.003, and output from 0.08 to 0.009! We have also added support for Gemini 1.5 Pro: input 0.005, output 0.015.

API
Unfortunately, our API currently supports only OpenAI and Anthropic models, not Google's, and all responses are returned in OpenAI format.
Drop Smart Chat now supports vision models! Supported models include Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus, and GPT-4o (GPT-4o also lets you set the image detail level to high or low). Clients for every platform have been released; please update to version 0.6.3.

Limits
Because the models have limits on how many images they can process, each message sent from the Drop Smart Chat client may contain at most 5 images, and each image must be no larger than 3.5 MB. If you pick an image larger than 3.5 MB, a pop-up will ask whether to compress it or discard it. Note that different models accept different numbers of images per request, so follow the setting of the model you are using.

The Drop Smart Chat API also supports image input: API users can pass any of the vision-capable models above as the model name, and requests use the OpenAI format. For GPT-4o, note that the API does not support the "auto" image detail setting; if the detail field is missing or set to an unsupported value, a default detail level is applied.
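Here is a minimal sketch of an OpenAI-format vision request against the API. The base URL, key, and image path are placeholders; the request shape (a text part plus an image_url part with a detail level) follows the standard OpenAI chat format described above.

```python
import base64
from openai import OpenAI

# Placeholder endpoint and key; substitute your own Drop Smart Chat credentials.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

# Encode a local image (keep it under 3.5 MB) as a data URL.
with open("chart.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            # GPT-4o accepts an explicit "high" or "low" detail level.
            {"type": "image_url", "image_url": {"url": data_url, "detail": "high"}},
        ],
    }],
)
print(response.choices[0].message.content)
```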
Just yesterday, Anthropic released the latest model in the Claude series. Although it is only the 3.5 version of Sonnet, according to official tests its performance already surpasses the Claude 3 Opus series. You can now choose this model in Drop Smart Chat.

As for the other two Claude 3.5 models, the official release will come later this year; we will add them as soon as they are available.

API
Yes, the Drop Smart Chat API also supports the model: pass its Claude 3.5 Sonnet model name as the model parameter, and the result is still returned in OpenAI format.

Pricing
The model is priced at 0.15 points/1k tokens for input and 0.3 points/1k tokens for output.
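A minimal sketch of calling the model through the API follows. The base URL and key are placeholders, and the model identifier "claude-3-5-sonnet" is an assumption (the post does not spell out the exact string); use whatever name the service lists.

```python
from openai import OpenAI

# Placeholder endpoint and key; substitute your own Drop Smart Chat credentials.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="claude-3-5-sonnet",  # assumed identifier; check the name shown in the service
    messages=[{"role": "user", "content": "Summarize this announcement in one sentence."}],
)
# The response comes back in the usual OpenAI format.
print(response.choices[0].message.content)
```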
OpenAI recently released a new GPT-4 model, GPT-4o (that is the letter o, not the number 0). The model offers higher performance, faster responses, and better comprehension, and, better still, a lower price than GPT-4-Turbo. GPT-4o is now supported in both the Drop Smart Chat client and the API!

Model input: 128k; model output: 4k.
Pricing: input 0.1 points/1k tokens, output 0.2 points/1k tokens.

Note: Drop Smart Chat currently supports text interaction only, in both the client and the API; we are working on adding image support...
Anthropic's Claude model has reached its third generation: the medium-sized Haiku, the large Sonnet, and the extra-large Opus, of which Opus is comparable to GPT-4, especially for writing. The models accept inputs of up to 200k tokens, while output is limited to 4k.

Drop Smart Chat now lets you select the Claude 3 series models.

Pricing:
Haiku (medium): input 0.005 points/1k tokens, output 0.015 points/1k tokens
Sonnet (large): input 0.1 points/1k tokens, output 0.25 points/1k tokens
Opus (extra large): input 0.25 points/1k tokens, output 0.85 points/1k tokens

Likewise, API users can directly pass the corresponding Claude 3 model names when calling the API; the returned result will be in the same […]
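For quick budgeting, here is a small sketch that estimates the cost of one request in points from the prices listed above; the token counts in the example are made up purely for illustration.

```python
# (input, output) prices in points per 1k tokens, taken from the table above.
PRICES = {
    "haiku":  (0.005, 0.015),
    "sonnet": (0.1,   0.25),
    "opus":   (0.25,  0.85),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Approximate cost in points for a single request."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# Example: a 3,000-token prompt with a 1,000-token reply on Opus costs about 1.6 points.
print(round(request_cost("opus", 3000, 1000), 3))
```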
Just a few days ago, OpenAI released a batch of new models. Drop Smart Chat has already completed a first round of adaptation; here is what has changed:

Models
GPT-3.5: GPT-3.5-Turbo has been merged with GPT-3.5-Turbo-16k, leaving a single GPT-3.5-Turbo. The new model is input/output asymmetric: input supports up to 16k tokens, output up to 4k.

GPT-4: there are two new models, GPT-4 Turbo and GPT-4 Turbo with vision; obviously, the latter can also read images. Unfortunately, both are still in preview. The vision-capable GPT-4 model in particular allows only 100 requests per day, which is only enough for developer testing (and probably not even that). Fortunately, GPT-4 Turbo's rate limit has now been raised to 10,000 requests per day plus 500 per minute.

Besides reading images, GPT-4 Turbo also supports inputs of up to 128k tokens, although, sadly, the output has shrunk from the previous 8k to 4k. For now it even seems a little faster than GPT-3.5, though that may just be due to the limit on the number of user requests.

Price
The new models are cheaper again! Drop Smart Chat's prices have been reduced by the same proportion. Since GPT-3.5-16k has been merged into GPT-3.5, for […]
The Drop Smart Chat 618 group-buy promotion is now open: a group of one person is enough, and the group-purchase prices are a steep discount! It's time to stock up on points for the second half of the year. Recharge Drop Smart Chat, add the power of the world-leading GPT-4 to your work and life, and give your efficiency a boost!

Drop Smart Chat point group-purchase prices:
1500 points ← 618 yuan (roughly 60% off!)
200 points ← 61 yuan (roughly 70% off!)
50 points ← 18 yuan (64% off!)
18 points ← 6 yuan (67% off!)

With discounts this deep, honestly, we can only afford to run this a few times a year.
Just today! OpenAI released a new wave of model updates, and Drop Smart Chat already supports the newly added GPT-3.5-turbo-16k. The model supports a context of up to 16,000 tokens, and its low price makes many more application ideas possible! From now on you can use GPT-3.5-turbo-16k across Drop Smart Chat, including the Smart Chat platform and the API platform.

In addition, this update introduces function calling, which we also support. The API interface is now fully adapted, covering both streaming and non-streaming request modes; a minimal function-calling sketch follows below.

That's right: the full range of models supported by Drop Smart Chat has been updated to the latest 0613 versions. Unfortunately, the Drop Smart Chat API does not let you pin the older models; when you send a request, the specified version number is automatically replaced with the latest version.

Pricing
Likewise, OpenAI adjusted the prices of the GPT-3.5 series, and we have followed suit: the original GPT-3.5 model, previously priced at 0.02 points per 1,000 tokens for both input and output, has had its input reduced by 25% to 0.015 points, while output stays at 0.02 points. For the newly added 16k version of the GPT-3.5 model, input is […]
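Since the exact wire format isn't shown in the post, here is a minimal sketch of a non-streaming function-calling request in the standard OpenAI format; the endpoint, key, and the get_weather function are all placeholders for illustration.

```python
from openai import OpenAI

# Placeholder endpoint and key; substitute your own Drop Smart Chat credentials.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

# A hypothetical function declared in the OpenAI function-calling format.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    functions=functions,
)
# If the model decides to call the function, its name and JSON arguments appear here.
print(response.choices[0].message.function_call)
```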
Drop Smart Chat now supports a second networking mode. Until now, ChatAI has supported an online (Google) search on the first question of each session, which makes some answers more accurate and greatly reduces the scenarios in which the model hallucinates (fabricates answers).

From now on, ChatAI can also read web pages you pass in directly, so you can ask questions about a page's content, analyse it, summarise it, translate it, and so on. Web links are matched and extracted with regular expressions, so when you use this feature it is best to separate the link from the body text with spaces to avoid extraction failures (see the sketch after this post).

In addition, ChatAI supports reading multiple links; there is no hard limit on how many can be extracted, but for the sake of your points and the model's maximum processing length, it is better not to submit too many in one session.

Does reading a web page cost extra points? – No, but the page content itself certainly costs points.

Is this feature also limited to the first question per topic? – No, this feature is unlimited, but reading many pages within the same topic will quickly fill up the history and drive up the cost, so please use it judiciously.

Of course, the feature also has some limitations that need to be spelled out; please keep them in mind when you use it:

Due to model context-length limits (GPT-3.5) and price concerns (GPT-4), the content read from a page is roughly limited to 800 non-English words (or English words), about 1,000 tokens. If the content is longer, the rest is cropped; it is not summarised with a GPT model, again for price and efficiency reasons.

Because it is the server that fetches the page, pages that require login or authorization cannot be read, even if you supply an account and password (strongly discouraged); only publicly and anonymously accessible pages can be read by ChatAI.

Because HTML itself contains a lot of invisible functional and styling content, these invisible texts are stripped so that the roughly 800-word budget carries as much meaningful body text as possible; at present we only extract content wrapped in a small set of body tags, and links are removed as well.

Finally, we hope ChatAI improves your productivity. Good luck with your work!

Drop Studio
chat-ai.logcg.com
www.chatai.beauty
www.chatai.lol
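To illustrate why spacing matters, here is a small sketch of URL extraction with a regular expression; the actual pattern ChatAI uses is not published, so this is only an approximation.

```python
import re

# Illustrative pattern only; the service's real regex may differ.
URL_RE = re.compile(r"https?://\S+")

msg_good = "Please summarize this page: https://example.com/post/123 thanks!"
msg_bad = "Please summarize https://example.com/post/123thanks"  # no space after the link

print(URL_RE.findall(msg_good))  # ['https://example.com/post/123']
print(URL_RE.findall(msg_bad))   # ['https://example.com/post/123thanks'] – trailing text fused into the URL
```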