
AI agents are already causing disasters – and this hidden threat could derail your safe rollout

ZDNET’s key takeaways   

  • Company experiments with AI agents are already causing disasters.  
  • Zero-day issues are principally governance issues.
  • FOMO will push companies to iterate with agents in the next year.

Although artificial intelligence agents are all the rage these days, the world of enterprise computing is already seeing disasters in fledgling attempts to build and deploy the technology.

Understanding why this happens, and how to prevent it, will require a great deal of planning in what some are calling the zero-day deliberation.

“You might have hundreds of AI agents running on a user’s behalf, taking actions, and, inevitably, agents are going to make mistakes,” said Anneka Gupta, chief product officer for data protection vendor Rubrik. 


Gupta cited recent high-profile disasters involving agentic AI technology, such as an incident in July in which the AI coding tool Replit deleted a company’s entire code database.

Replit’s story is an example of “well-intentioned” automations, said Gupta. The Replit system was trying to carry out a code generation task for its user when it deleted everything. (ZDNET’s Steven Vaughan-Nichols has the details.) 

“It was trying to achieve an objective, and it took the shortest path to achieve that objective,” she said. “And that’s what agents are programmed to do, right?”

Also: After coding catastrophe, Replit says its new AI agent checks its own work – here’s how to try it 

Despite pledges by Replit and others to fix issues in agents, “Those kinds of well-intentioned incidents are only going to proliferate as you have more agents in your organization,” said Gupta.

Gupta’s company, Rubrik, makes tools to ameliorate such incidents. Rubrik has been in the data protection market for a dozen years, selling tools that can, for example, roll back systems to the last “known good state” after a ransomware attack. 

In August, the company unveiled a new product called Agent Rewind. It is built to examine what changes agents make, evaluate whether those changes were correct, and reverse them if not.
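Rubrik has not published how Agent Rewind works under the hood, but the general record-evaluate-reverse pattern Gupta describes can be sketched in a few lines of Python. Everything below (the ChangeLog class, the toy data store, the evaluator) is a hypothetical illustration of the idea, not Rubrik’s code.

```python
# Illustrative sketch only: the record / evaluate / reverse pattern described above.
# All names here are hypothetical; this is not Rubrik's Agent Rewind.
from dataclasses import dataclass, field

@dataclass
class Change:
    target: str   # the table, file, or record an agent touched
    before: str   # state captured before the agent acted
    after: str    # state after the agent acted

@dataclass
class ChangeLog:
    store: dict = field(default_factory=dict)    # stands in for a real system of record
    history: list = field(default_factory=list)  # every change, oldest first

    def record(self, target: str, new_value: str) -> None:
        """Capture the prior state before an agent's change lands."""
        self.history.append(Change(target, self.store.get(target, ""), new_value))
        self.store[target] = new_value

    def rewind(self, is_correct) -> None:
        """Walk changes newest-first and reverse any the evaluator rejects."""
        for change in reversed(self.history):
            if not is_correct(change):
                self.store[change.target] = change.before

log = ChangeLog(store={"orders_table": "1,000 customer orders"})
log.record("orders_table", "")           # an agent wipes the table while "helping"
log.rewind(lambda c: c.after != "")      # the evaluator rejects the empty result
print(log.store["orders_table"])         # prints the restored contents
```

The hard parts in production are, of course, capturing the before state of real databases and files, and deciding automatically which changes were wrong; the sketch only shows the shape of the workflow.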

The zero-day issue

Gupta talked about more than just a product pitch. Fixing well-intentioned disasters is not the biggest agent issue, she said. The big picture is that agentic AI is not moving forward as it should because of zero-day issues.

“Agent Rewind is a day-two issue,” said Gupta. “How do we solve for these zero-day issues to start getting people moving faster — because they are getting stuck right now.”

The phrase zero day is typically used in cybersecurity circles to mean security vulnerabilities that only become apparent when an application is put into service. And, indeed, cybersecurity firms have been warning that companies are unprepared for the havoc that rogue agentic AI can unleash.

Also: Enterprises are not prepared for a world of malicious AI agents

However, Gupta used the term in a different sense, to refer to all the deliberations that have to take place before any AI agents are even created.

According to Gupta, the real work of agent deployment begins with the chief information security officer (CISO), the chief information officer (CIO), and other senior managers figuring out the scope of agents.

AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the AI program to carry out a wider variety of actions. 

Also: What are AI agents? How to access a team of personalized assistants

That could include a chatbot such as ChatGPT having access to a corporate database via a method such as “retrieval-augmented generation,” or RAG.
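To make that concrete, here is a minimal, hypothetical Python sketch of the retrieval step behind RAG: pick the company documents most relevant to a question and bundle them into the prompt an agent would send to a model. The documents and function names are invented for illustration, and no vendor’s actual API is shown.

```python
# A deliberately tiny illustration of retrieval-augmented generation (RAG):
# find relevant internal documents, then pass them to a model as context.
import re

CORPORATE_DOCS = [
    "Refund policy: customers may return hardware within 30 days.",
    "Security policy: production databases require change approval.",
    "Travel policy: economy class for flights under six hours.",
]

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into simple word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question; keep the top k."""
    q = tokens(question)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Bundle the retrieved context with the user's question for a language model."""
    context = "\n".join(retrieve(question, CORPORATE_DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the refund policy for hardware?"))
```

A real deployment would swap the keyword match for vector search over an embedding index and send the prompt to a hosted model; which documents are allowed into that context at all is precisely the governance question Gupta raises.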

To build agents, data quality and availability are certainly important, but they are, again, not the zero-day issue, Gupta insisted.

“I’ve heard people say because our data is a mess to begin with, we’re going to spend years right-sizing it,” Gupta said. “Data is an issue, but it’s a day-one or two issue.”

The real zero-day obstacle is how to understand what agents are supposed to be doing, and how to measure what success or failure would look like. 

“A zero-day issue is just getting through the governance challenges,” she said, “such as, is that data able to be compliantly exposed” to an agent. 

The CISO wants to know what data you are giving agents access to, and what controls are around that.

“If you have a lack of visibility into what agents are running in your environment, and what data and applications they have access to, that’s a zero-day problem,” she said. “That’s going to keep the CISO up [at night], and they’re going to say, ‘You can’t use our most valuable data, you have to use a subset.’”

Using a subset of data is suboptimal because “you won’t have the right data that you actually want to run these new applications,” said Gupta.

What to do? Be proactive and start the governance conversation with the CISO.

“Any sort of governance and visibility you can help provide to the CISO can accelerate that journey,” Gupta said. “These AI governance committees, internally, are often where these [AI] projects go to die or get really blocked from going from prototype to production; that’s the first gauntlet you have to go through.”

The FOMO around agents is real

Although companies are stuck, there’s no turning back from agentic AI, said Gupta. 

“Every single day there’s this FOMO [fear of missing out],” she observed, in which companies “feel they’re behind” the rest of the industry. “In the enterprise, the FOMO is that my competitor is going to figure out how to extract the value of the AI faster than I am — I think that’s really driving a lot of it.”

AI startups have benefited the most from employing AI to automate their code writing, compared to other companies, Gupta observed. “They have five people and they’re using co-pilots to do the work of a hundred people.”

Despite that evident advantage, no company has “cracked the code” in AI productivity, she said. 

Also: Bad vibes: How an AI agent coded its way to disaster

FOMO will keep all companies trying hard with agents in spite of the zero-day issues. 

“You have to start somewhere, you have to iterate and try,” she said. “You’re going to hit tons of obstacles, tons of things that don’t work, tons of things that you have to solve. Five years from now is not the time to jump into it.”

Gupta is optimistic her clients will tackle the zero-day issues sooner rather than later. 

“Our hypothesis is that over the next six to 12 months, this will really start to gain prevalence,” she said of agent deployment. “I’m hopeful that in a year, there will be much more adoption, because not only will the tools get better, but people will have gone through a few iteration cycles to figure it out.”
