OpenAI's next big model could drop any minute – what we know




ZDNET’s key takeaways

  • OpenAI felt the squeeze of recent Google and Anthropic releases.
  • CEO Sam Altman has reportedly initiated a “code red.”
  • GPT-5.2 could arrive any minute. 

Following Google’s release of Gemini 3, which quickly rose to the top of the LMArena AI leaderboard, OpenAI CEO Sam Altman reportedly declared a “code red” to fast-track the company’s latest model, or perhaps several, according to a report by The Information. New reports indicate that OpenAI could release something as soon as this week. 

GPT-5.2

According to The Verge, which cited insider sources, OpenAI is set to release a new GPT-5.2 model that aims to close the gap in the AI race that Google created with its latest release last month. As for a timeline, the model is anticipated early this week. 

Also: Stop using ChatGPT for everything: The AI models I use for research, coding, and more (and which I avoid)

As The Information reported, Altman said the model expected to ship this week performed better than Google’s Gemini 3 in internal evaluations, but he also noted that more work was needed to improve the overall ChatGPT experience. This is particularly important because both Gemini 3 and Anthropic’s Opus 4.5, released last month, set new industry standards, with the former leading in reasoning and the latter in coding. 

‘Garlic’

Another report from The Information published last week revealed that OpenAI was also developing a new model, codenamed Garlic. OpenAI’s Chief Research Officer Mark Chen informed colleagues that Garlic has performed well in company evaluations compared to Gemini 3 and Anthropic’s Opus 4.5 in tasks involving coding and reasoning, according to the report. 

It’s unclear how separate Garlic and the anticipated GPT-5.2 are, but The Information referred to GPT-5.2 (as well as yet another forthcoming release, GPT-5.5) as potential versions of Garlic. 

Chen added that when developing Garlic, OpenAI addressed issues with pretraining, the initial phase of training in which the model begins learning from a massive dataset. The company focused the model on broader connections before training it for more specific tasks. 
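To put that two-phase approach in concrete terms, here is a minimal, illustrative sketch of the general “pretrain broadly, then fine-tune for specific tasks” pattern, written in PyTorch. This is not OpenAI’s code or method; the model, data, and names used here (TinyLM, pretrain_batches, finetune_batches) are hypothetical and serve only to show the pattern the report describes.

```python
# Illustrative sketch of the generic pretrain-then-fine-tune pattern.
# Not OpenAI's code; all names and data here are hypothetical.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A toy next-token predictor: embedding layer followed by a linear head."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.embed(tokens))

def train(model, batches, lr):
    # Standard next-token objective: predict token t+1 from tokens up to t.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for tokens in batches:
        logits = model(tokens[:, :-1])
        loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                       tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

model = TinyLM()
# Phase 1: "pretraining" on broad, generic data (random tokens stand in here).
pretrain_batches = [torch.randint(0, 1000, (8, 33)) for _ in range(10)]
train(model, pretrain_batches, lr=1e-3)
# Phase 2: fine-tuning the same model on a narrower, task-specific dataset.
finetune_batches = [torch.randint(0, 1000, (8, 33)) for _ in range(3)]
train(model, finetune_batches, lr=1e-4)
```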

Also: Gemini vs. Copilot: I tested the AI tools on 7 everyday tasks, and it wasn’t even close

These changes in pretraining allow OpenAI to pack a smaller model with the same amount of knowledge previously reserved for larger models, according to Chen’s remarks cited in the report. Smaller models are cheaper for the company behind them to build and run, and typically cheaper and easier for developers to deploy, something French AI lab Mistral emphasized with its own release last week. 

Garlic is not to be confused with Shallotpeat, a model Altman announced to staff in October, according to an earlier report, also from The Information. That model, too, aimed to fix bugs in the pretraining process. 

As for when to expect Garlic, Chen kept the details vague, saying only “as soon as possible,” according to the report. The advances made while building Garlic have already allowed the company to move on to its next, bigger and better model, Chen said. 

Also: ChatGPT saves the average worker nearly an hour each day, says OpenAI – here’s how

OpenAI did not immediately respond to a request for comment.

A battle for users

This fierce race between Google and OpenAI can be partly attributed to the fact that both are vying for the same market: consumers. 

As Anthropic’s CEO, Dario Amodei, noted in conversation with Andrew Ross Sorkin at The New York Times’ DealBook Summit last week, Anthropic isn’t in the same race as its competitors, or facing the same “code red” panic, because it is focused on serving enterprises rather than consumers. The company just announced that its Claude Code agentic coding tool reached $1 billion in run-rate revenue only six months after becoming available to the public. 

