Just today, OpenAI released its new o1 series of models. This is a brand-new line focused on stronger reasoning: the model does internal multi-step thinking before giving its final answer, which makes it a great fit for scientific research, code writing, and other scenarios that demand complex reasoning. The cost, of course, is that the model responds slowly, and because it needs multiple rounds of internal thinking it cannot support streaming either. In addition, the tokens consumed while thinking are billed to the user, so unsurprisingly the model is very expensive.

You can now try and use the o1 models in the Smart Chat client as well. At this stage we have brought two o1 models online: o1-preview and o1-mini.

Pricing
o1-preview: input 0.3, output 0.8
o1-mini: input 0.06, output 0.25

Limitations: o1 is still in closed beta, and OpenAI grants very little traffic. Each account is limited to 20 requests per minute, so please use it sparingly; we will lower the price further once OpenAI opens up more capacity.

API
You can now pass o1-preview and o1-mini as model names to use these two models through the Smart Chat API. A few things to note:
- The models do not support streaming; turning it on returns an error.
- Other features the models do not support have their related parameters silently discarded.
[…]
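For API users, here is a minimal sketch of calling the new models with the official OpenAI Python SDK pointed at our OpenAI-format endpoint. The base_url and API key below are placeholders (assumptions), not confirmed values; the documented parts are the two model names and the fact that streaming must stay off.

```python
# Minimal sketch: calling o1 through an OpenAI-format endpoint.
# base_url and api_key are placeholders; substitute the values from your account.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                 # hypothetical key
    base_url="https://www.chatai.lol/v1",   # assumed endpoint, check your dashboard
)

# o1 models think internally before answering, so expect higher latency and a
# larger token bill than with gpt-4o-class models.
response = client.chat.completions.create(
    model="o1-preview",   # or "o1-mini"
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."},
    ],
    stream=False,         # streaming is not supported; stream=True returns an error
)

print(response.choices[0].message.content)
```

Since the thinking tokens are billed as output, it is worth keeping prompts short while you are still experimenting.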
You can now use Google's Gemini 1.5 in the Smart Chat client (www.chatai.lol). Although we supported this model before, some technical limitations made its responses unstable. Those difficulties are now resolved, and the Gemini models are fully available!

Google's Gemini models are somewhat less capable than GPT-4o, but they are cheap, and the context window goes up to millions of tokens.

Pricing
The existing Gemini 1.5 Flash has been repriced (a price cut!): input drops from 0.05 to 0.003 and output from 0.08 to 0.009. We have also added Gemini 1.5 Pro: input 0.005, output 0.015.

API
Unfortunately, our API currently supports only OpenAI and Anthropic models, not Google's, and all responses are returned in the OpenAI format.
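Because the API speaks the OpenAI format even for Anthropic models, the same client code can be reused and only the model name changes. A hedged sketch, again with a placeholder base_url and API key, and an example Anthropic model name that may differ from what your account actually lists:

```python
# Sketch: the API returns OpenAI-format responses for both OpenAI and
# Anthropic models; Gemini is currently available only in the chat client.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                 # hypothetical key
    base_url="https://www.chatai.lol/v1",   # assumed endpoint
)

response = client.chat.completions.create(
    model="claude-3-5-sonnet-20240620",     # example Anthropic model name; adjust to your account's list
    messages=[
        {"role": "user", "content": "Summarize the Gemini 1.5 context-window advantage in one sentence."},
    ],
)
print(response.choices[0].message.content)

# Requesting a Gemini model here (e.g. "gemini-1.5-pro") would fail, since
# Google's models are not yet exposed through the API.
```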