Following the lead of DeepSeek, OpenAI makes its reasoning model free

OpenAI has made its newest reasoning model, o3-mini, free to use in ChatGPT. Reasoning models like this one are most effective at solving complex problems, so if you have any PhD-level math problems you’re cracking away at, you can try it out. Alternatively, if previous models have struggled with your most advanced prompts, you may want to try the new reasoning model on them. To try out o3-mini, simply select “Reason” when you start a new prompt in ChatGPT.
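If you would rather reach the model programmatically than through the ChatGPT interface, a minimal sketch using the official `openai` Python package might look like the following; the example prompt is an assumption for illustration, and the model is assumed to be exposed under the identifier "o3-mini".

```python
# Minimal sketch: calling o3-mini through the OpenAI API rather than the
# ChatGPT "Reason" button. Assumes the `openai` Python package is installed
# and OPENAI_API_KEY is set; the prompt is illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",  # assumed model identifier for the API
    messages=[
        {"role": "user",
         "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)
```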

Although reasoning models possess new capabilities, they come at a cost. OpenAI’s o1-mini is 20 times more expensive to run than its equivalent non-reasoning model, GPT-4o mini. The company says its new model, o3-mini, costs 63% less than o1-mini per input token. However, at $1.10 per million input tokens, it is still about seven times more expensive to run than GPT-4o mini.
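Those ratios are easy to sanity-check. Only the $1.10 figure for o3-mini is quoted directly; the absolute prices for o1-mini and GPT-4o mini below are back-calculated from the stated "63% less" and "20 times" relationships, so treat them as approximations rather than published price-list numbers.

```python
# Back-of-the-envelope check of the pricing ratios quoted above.
# Only the $1.10-per-million-input-token figure for o3-mini is stated
# directly; the other prices are inferred from the stated ratios.
o3_mini = 1.10                  # USD per million input tokens (stated)

o1_mini = o3_mini / (1 - 0.63)  # "63% less than o1-mini"      -> ~$2.97
gpt_4o_mini = o1_mini / 20      # "o1-mini is 20x GPT-4o mini" -> ~$0.15

print(f"Implied o1-mini price:     ${o1_mini:.2f} per M input tokens")
print(f"Implied GPT-4o mini price: ${gpt_4o_mini:.2f} per M input tokens")
print(f"o3-mini vs GPT-4o mini:    {o3_mini / gpt_4o_mini:.1f}x more expensive")
# -> roughly 7x, matching the "about seven times" figure in the text
```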

The new model comes on the heels of the DeepSeek release that shook the AI world less than two weeks ago. DeepSeek’s new model performs just as well as top OpenAI models, but the Chinese company claims it cost roughly $6 million to train, as opposed to the estimated cost of over $100 million for training OpenAI’s GPT-4. (It’s worth noting that many people are questioning this claim.)

Additionally, DeepSeek’s reasoning model costs $0.55 per million input tokens, half the price of o3-mini, so OpenAI still has a way to go to bring down its costs. Reasoning models are also estimated to have much higher energy costs than other kinds of models, given the larger number of computations they require to produce an answer.

This new wave of reasoning models presents new safety challenges as well. OpenAI used a technique called deliberative alignment to train its o-series models, essentially having them reference OpenAI’s internal policies at each step of their reasoning to make sure they aren’t ignoring any rules.
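OpenAI has described deliberative alignment only at a high level, so the following is just a toy sketch of the underlying idea (checking each reasoning step against a written policy), not OpenAI’s actual method, which bakes this behavior into the model during training. The policy text, prompts, and `generate` placeholder are all hypothetical.

```python
# Toy sketch of the idea behind deliberative alignment: before a reasoning
# step is accepted, the model is asked to quote the relevant policy rule and
# judge whether the step complies. Illustration only, not OpenAI's pipeline.

POLICY = """\
1. Refuse requests for instructions that enable physical harm.
2. Do not reveal private personal data."""

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    return "Rule 1. COMPLIES"  # placeholder so the sketch runs end to end

def deliberate_over_step(step: str) -> bool:
    """Ask the model to check a single reasoning step against the policy."""
    verdict = generate(
        f"Policy:\n{POLICY}\n\nProposed reasoning step:\n{step}\n\n"
        "Quote the most relevant rule, then answer COMPLIES or VIOLATES."
    )
    return "COMPLIES" in verdict

def filtered_reasoning(steps: list[str]) -> list[str]:
    """Keep only the steps the model itself judges to be policy-compliant."""
    return [s for s in steps if deliberate_over_step(s)]

print(filtered_reasoning(["Outline the proof.", "List the user's home address."]))
```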

But the company has found that o3-mini, like the o1 model, is significantly better than non-reasoning models at jailbreaking and “challenging safety evaluations”—essentially, it’s much harder to control a reasoning model given its advanced capabilities. o3-mini is the first model to score as “medium risk” on model autonomy, a rating given because it’s better than previous models at specific coding tasks—indicating “greater potential for self-improvement and AI research acceleration,” according to OpenAI. That said, the model is still bad at real-world research. If it were better at that, it would be rated as high risk, and OpenAI would restrict the model’s release.
