Just today, OpenAI released a new o1 series of models. This is a brand-new model family focused on stronger reasoning: before answering, the model runs through multiple internal reasoning steps and only then produces its answer, which makes it well suited to scientific research, code writing, and other scenarios that demand complex reasoning. The trade-off, of course, is that the model responds slowly, and because it thinks through several rounds internally it cannot support streaming either. In addition, the tokens consumed while thinking are billed to the user, so the model is, unsurprisingly, very expensive. You can now try the o1 models in Drop Smart Chat. We have added two o1 models at this stage: o1-preview and o1-mini. Pricing: o1-preview, input 0.3, output 0.8; o1-mini, input 0.06, output 0.25. Limitations: the models are still in closed beta and OpenAI grants very little capacity; each account is limited to 20 requests per minute, so please use them sparingly. We will lower the prices further as more capacity is released. The Drop Smart Chat API is also available: pass o1-preview or o1-mini as the model name. Note, however, that these models do not support streaming; turning it on returns an error. For other unsupported features, the relevant parameters are simply discarded. […]
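For API users, a minimal sketch of calling one of the o1 models through an OpenAI-compatible endpoint might look like the following. The base URL and API key are placeholders for illustration; only the model names o1-preview / o1-mini and the no-streaming restriction come from the post.

```python
# Minimal sketch: calling o1-preview through an OpenAI-compatible API.
# Assumptions: the service exposes an OpenAI-style chat-completions endpoint
# at BASE_URL (hypothetical) and accepts "o1-preview" / "o1-mini" as model names.
from openai import OpenAI

BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint for the service
client = OpenAI(base_url=BASE_URL, api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="o1-preview",                  # or "o1-mini"
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    # Do NOT set stream=True here: per the post, enabling streaming returns an
    # error for the o1 models, and other unsupported parameters are discarded.
)
print(response.choices[0].message.content)
```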
You can now use Google's Gemini 1.5 models in the Drop Smart Chat client (www.chatai.lol)...... We supported this model before, but due to some technical limitations its responses were not very stable. Those difficulties are now solved, and the Gemini models are available! Google's Gemini models are a little less capable than GPT-4o, but they are cheap, and the context window runs to millions of tokens! Pricing: the existing Gemini 1.5 Flash pricing has been adjusted (a price cut!): input drops from 0.05 to 0.003, output from 0.08 to 0.009. We have also added Gemini 1.5 Pro support: input 0.005, output 0.015. API: unfortunately, our API currently supports only the OpenAI and Anthropic models, not Google's, and all API responses are returned in OpenAI format.
Drop Smart Chat now supports vision models! The supported models are Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus, and GPT-4o (for GPT-4o you can also set the image detail level to high or low). Clients for every platform have been released; please update to version 0.6.3. Limits: because the models have limited image-processing capacity, the Drop Smart Chat client allows at most 5 images per message, each no larger than 3.5 MB. If a selected image exceeds 3.5 MB, a pop-up lets you choose whether to compress or discard it. Note that different models accept different numbers of images per request, so follow the limits of the specific model you are using. The Drop Smart Chat API also supports image vision: API users can pass any of the vision-capable model names above as the model name to use the image feature, and requests follow the OpenAI format. For GPT-4o, the API does not support the auto setting of the image detail field; if the field is not set, or is set to auto, a default detail level is applied instead.
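As a rough illustration of the OpenAI-format image request the post describes, the sketch below sends one image to GPT-4o with the detail level set explicitly. The base URL is hypothetical and the exact model-name string is assumed; the message structure follows the standard OpenAI chat-completions image format.

```python
# Sketch: sending an image in an OpenAI-format chat-completion request.
# Assumptions: OpenAI-compatible endpoint at BASE_URL (hypothetical); "gpt-4o"
# accepted as the model name; images passed as base64 data URLs.
import base64
from openai import OpenAI

BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint
client = OpenAI(base_url=BASE_URL, api_key="YOUR_API_KEY")

with open("photo.jpg", "rb") as f:       # keep each image under 3.5 MB
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": f"data:image/jpeg;base64,{image_b64}",
                    "detail": "low",     # set "high" or "low" explicitly, per the post
                },
            },
        ],
    }],
)
print(response.choices[0].message.content)
```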
Just yesterday, Anthropic released the latest model in the Claude series. Although it is only the 3.5 version of Sonnet, official benchmarks show it already outperforming the Claude 3 Opus. You can now choose this model in Drop Smart Chat. The other two Claude 3.5 models will be released officially later this year, and we will add them as soon as they become available. API: yes, the Drop Smart Chat API supports the model as well; pass the Claude 3.5 Sonnet model name to use it, and the result is still returned in OpenAI format. Pricing: input 0.15 points/1k tokens, output 0.3 points/1k tokens.
OpenAI recently released a new GPT-4 model, GPT-4o (that is the letter o, not the number 0). The model offers higher performance, faster responses, and better comprehension, and, what's more, a lower price than GPT-4-Turbo. GPT-4o is now supported in both the Drop Smart Chat client and the API! Model input: 128k tokens; model output: 4k tokens. Pricing: input 0.1 points/1k tokens, output 0.2 points/1k tokens. Note: Drop Smart Chat currently supports text interaction only, in both the client and the API; we are working on image support......
Anthropic's Claude models are now in their third generation: the medium-cup Haiku, the large-cup Sonnet, and the extra-large-cup Opus, of which the extra-large Opus is comparable to GPT-4, especially for writing. The models accept inputs of up to 200k tokens, while output is limited to 4k. Drop Smart Chat now supports the Claude 3 series. Pricing: medium-cup Haiku, input 0.25 points/1k tokens, output 0.85 points/1k tokens; large-cup Sonnet, input 0.1 points/1k tokens, output 0.25 points/1k tokens; extra-large-cup Opus, input 0.005 points/1k tokens, output 0.015 points/1k tokens. Likewise, API users can call these models directly by passing the corresponding Claude 3 model name (Haiku, Sonnet, or Opus), and the returned result will be the same as the […]
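For API users, here is a rough sketch of calling one of the Claude 3 models through the OpenAI-format API, this time with streaming. The base URL is hypothetical, the model-name string is a placeholder for whatever identifier the service expects, and streaming support for the Claude models is an assumption (the post only states that responses follow the OpenAI format).

```python
# Sketch: calling a Claude 3 model via an OpenAI-format API with streaming.
# Assumptions: OpenAI-compatible endpoint at BASE_URL (hypothetical);
# "claude-3-opus" is a placeholder model-name string; streaming support for
# the Claude models is assumed, not confirmed by the post.
from openai import OpenAI

BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint
client = OpenAI(base_url=BASE_URL, api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="claude-3-opus",               # placeholder model name
    messages=[{"role": "user", "content": "Write a short poem about autumn."}],
    max_tokens=1024,                     # output is capped at 4k tokens
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```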