Mint Explainer: What OpenAI o1 ‘reasoning’ model means for the future of generative AI


“We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes,” the company said in a blog post.

The move is a marked departure from today's established models, including OpenAI's GPT-4o, Meta's Llama 3.1 and Google's Gemini 1.5 Pro.

So far, the evolution of generative AI has seen Big Tech work to shrink model sizes, take operations offline, and offer near-instant responses at affordable prices in order to replace humans at repetitive tasks, such as checking a piece of computer code for bugs, going through an essay for grammatical errors, or tallying correct answers on a maths answer sheet in examinations.

Elaborating on this, OpenAI said, “These enhanced reasoning capabilities may be particularly useful if you’re tackling complex problems in science, coding, math, and similar fields. For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.”


Longer time, more cost

To break things down in simple terms, OpenAI o1 is designed to take its time, cross-reference sources, and work through complex problems that would typically require researchers to access high-performance computers. The company has so far capped o1’s decision-making time at one minute of processing, which shows that its working model is the opposite of how generative AI has evolved so far.

“We’ve so far seen scaling of generative AI in terms of model size, but now we’ll see it from the inference side,” said Kashyap Kompella, AI analyst and founder of tech consultancy RPA2AI Research. “It isn’t always that we need an instant, one-shot response to a search query, which is why this OpenAI o1 model can prove to be important for domain-specific applications. But it won’t be helpful in tracing misinformation or being applied in general-purpose search and internet use cases.”


Kompella’s analysis is on point. OpenAI o1 is up to four times more expensive than its fellow flagship AI model, GPT-4o. The latter costs $15 (around ₹1,300) per 1 million tokens (roughly 500,000 words) of text output, while o1 costs $60 (around ₹5,000), making it considerably more resource-heavy for enterprises.
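For a rough sense of what that gap means in practice, here is a minimal back-of-the-envelope sketch using the output-token rates quoted above; the monthly workload size is a hypothetical assumption for illustration, not a figure from OpenAI.

```python
# Back-of-the-envelope cost comparison at the quoted output-token rates.
# Rates are dollars per 1 million output tokens, as cited in the article.
GPT_4O_RATE_USD = 15.0
O1_RATE_USD = 60.0

def output_cost(tokens: int, rate_per_million: float) -> float:
    """Cost in USD for generating `tokens` output tokens at the given rate."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical workload: 10 million output tokens a month (an assumption).
monthly_tokens = 10_000_000
print(f"GPT-4o: ${output_cost(monthly_tokens, GPT_4O_RATE_USD):,.2f}")
print(f"o1:     ${output_cost(monthly_tokens, O1_RATE_USD):,.2f}")
# GPT-4o: $150.00 vs o1: $600.00 — four times the bill for the same volume of text.
```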

This can prove to be a challenge, Kompella said, adding that the cost of running reasoning-first models will be high, will mainly benefit computing firms such as Nvidia and Cerebras, and will increase the data load.

“This may also help OpenAI’s efforts to raise more capital, showcasing it as a still-innovative company. Llama and the like are also likely to catch up, but this may take them six months to do so,” he added.

Jim Fan, senior research manager and lead of embodied AI at Nvidia, said on X, “Productionizing o1 is much harder than nailing academic benchmarks. For reasoning problems in the wild, how to decide when to stop searching? What’s the reward function? Success criterion? When to call tools like code interpreters in the loop? How to factor in the compute cost of those CPU processes? Their research post didn’t share much.”

Usability in the real world?

Others, meanwhile, have flagged OpenAI for likening the model to human thinking. 

“An AI system is not ‘thinking’—it’s ‘processing’ and ‘running predictions’, just like Google or computers do. Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it’s more clever than it is,” said Clement Delangue, chief executive of AI firm Hugging Face. “It’s not the same as thinking, since the results are process-oriented and not the same as thought.”

Jayanth Kolla, partner at tech consultancy Convergence Catalyst, said OpenAI’s new GenAI model was in line with how Silicon Valley had planned the evolution of artificial intelligence.

“The next generation after reasoning in AI models will bring memory, where AI will get stronger remembering capacity and historical context. Then, we’ll see perception, cognition, advanced cognition and finally artificial general intelligence,” he said. 

As more companies follow suit, computation costs will initially be high but eventually drop, given the volume of research aimed at advancing the technology. “However, reasoning models haven’t been proven yet, and OpenAI is known for launching models prematurely, so this launch may be taken with a pinch of salt,” he said.


 


