ChatAI Smart Chat: The Price Has Been Cut

Down to one-tenth of the old price!

OK, the main reason is that the upstream provider (OpenAI) has cut its prices. OpenAI has released a new chat-specific API model called Turbo, bumping the version up to GPT-3.5. Surprisingly, while it greatly improves compute and response speed, it also runs significantly cheaper. The direct result is that this model costs one-tenth as much as Davinci: $0.002 per 1,000 tokens.

I tested the model as soon as it was released. How much the accuracy has improved is hard to say, but the API format has certainly changed a lot 😂. My first impression, though, was: fast! In the web app I built with React, the reply often comes back before an element hidden based on state has even finished disappearing, and Chinese responses are just as fast.
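For reference, here is a minimal sketch of how the request shape changes between the old completions endpoint and the new chat endpoint, using OpenAI's public REST API. The helper names (askDavinci, askTurbo), the max_tokens value, and the API-key environment variable are illustrative assumptions, not part of ChatAI Smart Chat itself; it assumes a runtime with a global fetch (e.g. Node 18+).

```ts
// Illustrative sketch only: old vs. new OpenAI request formats.
// Assumes OPENAI_API_KEY is set in the environment.
const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
};

// Old completions endpoint: a single prompt string,
// with the answer in choices[0].text.
async function askDavinci(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers,
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      max_tokens: 256, // arbitrary example value
    }),
  });
  const data = await res.json();
  return data.choices[0].text;
}

// New chat completions endpoint: a list of role-tagged messages,
// with the answer in choices[0].message.content.
async function askTurbo(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers,
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The key difference is that the chat endpoint takes a structured list of messages (system / user / assistant roles) instead of one prompt string, and the reply lives under message.content rather than text.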

Of course, there are still some differences between this model and GPT-3. For example, it tends to ask for further instructions rather than answering right away, something GPT-3 does not do; the latter starts answering immediately.

Anyway, now that the new model is in use, ChatAI Smart Chat has cut its prices by the same proportion. By the time you read this, all the changes are already live. Have a good chat!