[Drop Smart Chat] now supports the OpenAI o1 series models

Just today, OpenAI released the new o1 series of models. This is a brand-new model family focused on stronger reasoning: the model performs multi-step internal thinking before giving its final answer, which makes it well suited to scientific research, code writing, and other scenarios that require complex reasoning.

The trade-off, of course, is that the model responds slowly, and because it needs multiple rounds of internal thinking, it cannot support streaming either. Beyond that, the tokens consumed while thinking are also billed to the user, so unsurprisingly the model is very expensive.

Now you can experience and use the o1 models in Drop Smart Chat as well. At this stage we have made two o1 models available: o1-preview and o1-mini.

Pricing

o1-preview

input 0.3, output 0.8

o1-mini

input 0.06, output 0.25

Limits

The o1 models are still in a closed beta stage, and OpenAI grants very little capacity: each account is limited to 20 requests per minute, so please use them sparingly. We will further reduce prices accordingly as more capacity is released.

API

You can now use o1-preview and o1-mini as model names with the Drop Smart Chat API to access these two models. A few caveats:

  1. The models do not support streaming; enabling it returns an error;
  2. The models do not support function calling; the relevant parameters are discarded;
  3. Other features the models do not support are handled the same way: the relevant parameters are discarded;
  4. Please pass max_tokens as a substitute for max_completion_tokens; due to certain limitations, passing max_completion_tokens directly will return an error or be discarded.
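The rules above can be summarized in a small sketch that builds a request body for these models. The endpoint URL, authentication, and the `build_o1_request` helper are illustrative assumptions, not part of the actual Drop Smart Chat SDK; only the parameter rules come from the notes above.

```python
import json

def build_o1_request(prompt, model="o1-preview", max_tokens=1024):
    """Build a chat-completion request body for the o1 models.

    Hypothetical helper for illustration only. Per the caveats above:
    - no "stream": True (streaming returns an error);
    - no "functions"/"tools" or other unsupported parameters
      (they would be discarded anyway);
    - pass max_tokens, not max_completion_tokens.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_o1_request("Explain quicksort briefly.")
print(json.dumps(body, indent=2))
```

Send this body to the Drop Smart Chat chat-completions endpoint with your usual API key; the service maps max_tokens onto the model's max_completion_tokens internally.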