
OpenAI says it's working toward catastrophe or utopia – just not sure which



ZDNET’s key takeaways

  • AI could enable “widely distributed abundance,” OpenAI said.
  • It could also be “potentially catastrophic.”
  • Experts warn against rapidly developing superintelligence. 

OpenAI is warning about the dangers of runaway AI systems even as it races other major tech developers to build "superintelligence," a still-theoretical form of machine intelligence that would surpass the capabilities of the human brain.

In a blog post titled “AI Progress and Recommendations,” published Thursday, the company outlined its vision for the broad-scale social benefit that such an advanced AI could confer upon humanity, the risks that could be encountered along the way, and some suggestions for mitigating them. Here’s what it all means. 

Also: OpenAI and Microsoft finally have a new deal – and it’s all about AGI

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

An AI utopia 

According to OpenAI, the development of superintelligent AI could democratize human well-being.

“We expect the future to provide new and hopefully better ways to live a fulfilling life, and for more people to experience such a life than do today,” the company wrote, before going on to say that the world would likely need to crack a few eggs in order to make an AI omelette. “It is true that work will be different, the economic transition may be very difficult in some ways, and it is even possible that the fundamental socioeconomic contract will have to change. But in a world of widely-distributed abundance, people’s lives can be much better than they are today.” 

The message echoed a recent personal blog post from the company’s CEO Sam Altman, which portrayed superintelligent AI as an inevitability that would admittedly cause some major social disruptions (eliminating some categories of jobs, for example), but that would nevertheless turn out in the long run to be a major historical boon for humanity. 

Also: A new Chinese AI model claims to outperform GPT-5 and Sonnet 4.5 – and it’s free

Thursday’s blog made some vague predictions about how AI could lead to such “widely-distributed abundance” — helping to speed up novel scientific discovery, for example (an effort that the company has already begun working on). The company added that “AI systems will help people understand their health, accelerate progress in fields like materials science, drug development, and climate modeling, and expand access to personalized education for students around the world.”

Possible risks

"Superintelligence" is the latest and greatest marketing buzzword in Silicon Valley, for better or worse. Most famously, Meta announced in June that it had launched an internal R&D division devoted to building superintelligence. Microsoft has similarly formed a "Superintelligence Team"; according to a recent X post from the company's AI lead, Mustafa Suleyman, describing its "Humanist" approach, the team is geared toward building "incredibly advanced AI capabilities that always work for, in service of, people and humanity."

Ironically, the term “superintelligence” was popularized by a 2014 book of the same name that largely served as a warning about the dangers of runaway, self-improving AI.

Also: Microsoft researchers tried to manipulate AI agents – and only one resisted all attempts

Last month, a statement published by the nonprofit Future of Life Institute and signed by tech luminaries including Geoffrey Hinton and Steve Wozniak warned that superintelligent AI could escape human control and pose an existential threat to civilization. The statement advised that all industry efforts to build superintelligent AI should therefore be paused until labs can chart a safe pathway forward.

Even with today's AI tools, many experts worry about what's known in the field as the alignment problem: the challenge of ensuring that these black-box systems don't work against human interests. Broadly speaking, the major fear around superintelligence is that it would be so much more advanced than human intelligence, and so much more inscrutable than the systems we interact with today, that it could manipulate or mislead us in dangerously subtle ways.

Others dismiss these fears as AI “doomerism,” and insist that even if superintelligent AI were to somehow go off the rails, we could always just turn it off. Boom, problem solved.

Proposed solutions 

OpenAI seemed to be nodding toward the Future of Life Institute statement about the risks of superintelligent AI when it wrote in its Thursday blog post that the technology was “potentially catastrophic,” and that one potential solution was for the industry to “slow development to more carefully study these systems as we get closer to systems capable of recursive self-improvement.” 

Also: AI models know when they’re being tested – and change their behavior, research shows

The company also made the case that the industry should collaborate closely with federal lawmakers to create the AI equivalent of building codes and fire standards, ensuring standardized compliance with safety and oversight protocols.

Of course, this could be another tactic from OpenAI to boost its influence over federal AI policy at a time when states like California and Colorado have started regulating the technology more explicitly. And suggesting the industry "slow development" just weeks after confirming its corporate restructuring and its agreement with Microsoft around the goal of achieving AGI (a stop along the way to, or often conflated with, superintelligence) appears somewhat contradictory.

Also: Is AI a career killer? Not if you have these skills, McKinsey research shows

The blog made OpenAI's position on state-by-state AI regulation clear: "Most developers and open-source models, and almost all deployments of today's technology, should have minimal additional regulatory burdens relative to what already exists," the company wrote. "It certainly should not have to face a 50-state patchwork." OpenAI has not embraced regulation the way Anthropic has, but it has consistently expressed a preference for federal rules over state-by-state legislation.

The ROI motivator 

Despite its insistence on the importance of safety and comprehensive governance frameworks, OpenAI has garnered a reputation within the AI industry as a speedy and often reckless company. 

Many of its early employees, including siblings Dario and Daniela Amodei, the founders of rival AI lab Anthropic, parted ways with the company and publicly criticized what they regarded as a culture that prioritized rapid development over safety. Those concerns were also at the center of the board's short-lived ousting of Altman in late 2023, a move that backfired spectacularly.

Also: OpenAI used to test its AI models for months – now it’s days. Why that matters

OpenAI and its competitors have another reason to paint the future of AI so optimistically: to keep investor dollars flowing amid fears of an “AI bubble.”

For the past three years, tech companies have spent tens of billions on AI, justified by the idea that the technology will soon assist with major scientific breakthroughs and revolutionize productivity, to name a few hopes. But many of those companies (including OpenAI) still aren’t profitable; most businesses using AI haven’t achieved any material gains from the technology; and the promise of AI-assisted scientific discovery remains mostly hypothetical. 

Also: As OpenAI hits 1 million business customers, could the AI ROI tide finally be turning?

That said, OpenAI recently announced it had reached one million business customers, many of whom reported bigger AI-driven profits than the industry at large has seen thus far, a possible turning point in the ROI narrative.
