OpenAI Launches GPT-4.5, its Largest Model to Date
OpenAI announced on Thursday it is launching GPT-4.5, the much-anticipated AI model code-named Orion. GPT-4.5 is OpenAI’s largest model to date, trained using more computing power and data than any of the company’s previous releases.
Mixed Performance
OpenAI emphasizes that GPT-4.5 is not meant to be a drop-in replacement for GPT-4o, the company’s workhorse model that powers most of its API and ChatGPT. While GPT-4.5 supports features like file and image uploads and ChatGPT’s canvas tool, it currently lacks capabilities like support for ChatGPT’s realistic two-way voice mode.
In the plus column, GPT-4.5 is more performant than GPT-4o, and many other models besides. On OpenAI’s SimpleQA benchmark, which tests AI models on straightforward, factual questions, GPT-4.5 outperforms GPT-4o and OpenAI’s reasoning models, o1 and o3-mini, in terms of accuracy. According to OpenAI, GPT-4.5 also hallucinates less frequently than most models, meaning it should be less likely to fabricate information.
OpenAI did not list one of its top-performing AI reasoning models, deep research, on SimpleQA. An OpenAI spokesperson tells TechCrunch the company has not publicly reported deep research’s performance on this benchmark, and claimed it’s not a relevant comparison. Notably, AI startup Perplexity’s Deep Research model, which performs similarly to OpenAI’s deep research on other benchmarks, outperforms GPT-4.5 on this test of factual accuracy.
Scaling Laws Challenged
OpenAI claims that GPT-4.5 is “at the frontier of what is possible in unsupervised learning.” That may be true, but the model’s limitations also appear to confirm speculation from experts that pre-training “scaling laws” won’t continue to hold.
OpenAI co-founder and former chief scientist Ilya Sutskever said in December that “we’ve achieved peak data,” and that “pre-training as we know it will unquestionably end.” His comments echoed concerns AI investors, founders, and researchers shared with TechCrunch for a feature in November.
In response to the pre-training hurdles, the industry — including OpenAI — has embraced reasoning models, which take longer than non-reasoning models to perform tasks but tend to be more consistent. By increasing the amount of time and computing power that AI reasoning models use to “think” through problems, AI labs are confident they can significantly improve models’ capabilities.
Implementation
Subscribers to ChatGPT Pro, OpenAI’s $200-a-month plan, will gain access to GPT-4.5 in ChatGPT starting Thursday as part of a research preview. Developers on paid tiers of OpenAI’s API will also be able to use GPT-4.5 starting today. As for other ChatGPT users, customers signed up for ChatGPT Plus and ChatGPT Team should get the model sometime next week, an OpenAI spokesperson told TechCrunch.
Conclusion
OpenAI’s GPT-4.5 is a significant step forward in AI model development, but its limitations also highlight the challenges facing the industry. As OpenAI and other AI labs continue to push the boundaries of what is possible, it remains to be seen whether GPT-4.5 will live up to its promise or remain a one-off achievement.
FAQs
Q: What is GPT-4.5?
A: GPT-4.5 is OpenAI’s largest AI model to date, trained using more computing power and data than any of the company’s previous releases.
Q: What are the benefits of GPT-4.5?
A: GPT-4.5 is more performant than GPT-4o and many other models, and it hallucinates less frequently, meaning it is less likely to fabricate information.
Q: What are the limitations of GPT-4.5?
A: GPT-4.5 is not a drop-in replacement for GPT-4o, and it lacks capabilities like support for ChatGPT’s realistic two-way voice mode. Its performance is also challenged by the limitations of pre-training “scaling laws.”
Q: What’s next for GPT-4.5?
A: OpenAI plans to combine its GPT series with its o reasoning series, beginning with GPT-5 later this year. GPT-4.5 may not take the AI benchmark crown on its own, but it will serve as a stepping stone toward something more powerful.