Scammers are poisoning AI search results to steer you straight into their traps – here's how

bobmadbob / iStock / Getty Images Plus

Follow ZDNET: Add us as a preferred source on Google.


ZDNET’s key takeaways

  • New attack poisons sources AI chatbots use for content.
  • Public sites like YouTube and Yelp abused to host spam links.
  • AI answers can surface poisoned content and put users at risk.

Cybercriminals are turning their attention to the public sources AI chatbots scrape to promote scam call center numbers, researchers say, creating a new attack surface for scammers worldwide.

LLM phone number poisoning: a new AI security risk?

According to new research, published by Aurascape’s Aura Labs on Dec. 8, threat actors are “systematically manipulating public web content” in what the team has dubbed large language model (LLM) phone number poisoning.

Also: Are AI browsers worth the security risk? Why experts are worried

In a campaign being tracked by the cybersecurity firm, this technique is being used to ensure systems based on LLM models, including Google’s AI Overview and Perplexity’s Comet browser, have recommended scam airline customer support and reservations phone numbers as if they were official — and trusted — contact details.

How does LLM phone number poisoning work?

Aurascape says that rather than directly targeting LLMs, this technique — reminiscent of prompt injection attacks — relies on poisoning the content an LLM scrapes and indexes to provide the context and information required to answer user queries.

Also: I’ve been testing the top AI browsers – here’s which ones actually impressed me

Many of us have heard of Search Engine Optimization (SEO), but how about Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO)? These techniques are focused on ensuring a website or online service becomes a source used for AI-based summaries and search query answers, instead of optimizing content to appear higher up in traditional search engine results.

In the campaigns recorded by Aurascape, this is how GEO and AEO are being abused to promote phishing and scam content:

  • Spam content is uploaded to compromised, high-authority websites, including government and university websites, alongside high-quality WordPress domains.
  • Public services that allow user-generated content are also abused, including YouTube and Yelp, to plant GEO/AEO-optimized text and reviews, sometimes via bot comments.
  • When possible, scam artists will also upload or inject scam information, including phone numbers and fake Q&A answers, into these domains. This information is structured in a way that makes it easy for LLMs to scrape and distribute.

Now that these fake sources of information are in place, LLM-based assistants and summarization features merge each source into digestible ‘trusted’ answers that can be provided to users of AI services and browsers.

Also: Should you trust AI agents with your holiday shopping? Here’s what experts want you to know

According to the team, in some cases, this means that unwitting users are steered toward scams, including fraudulent call centers.

“By seeding poisoned content across compromised government and university sites, popular WordPress blogs, YouTube descriptions, and Yelp reviews, they are steering AI search answers toward fraudulent call centers that attempt to extract money and sensitive data from unsuspecting travelers,” the researchers say.

Poisoned query examples

The researchers noted several instances of this technique being actively used in the wild.

For example, when Perplexity was queried with: “the official Emirates Airlines reservations number,” AI returned a “fully fabricated answer that included a fraudulent call-center scam number.” Another scam call center number was returned when the team requested the British Airways reservations line.

Also: Gemini vs. Copilot: I tested the AI tools on 7 everyday tasks, and it wasn’t even close

Google’s AI Overview was also found to be issuing fraudulent and potentially dangerous contact information. When asked for the Emirates phone number, its response included “multiple fraudulent call-center numbers as if they were legitimate Emirates customer service lines.”

How to stay safe

The problem is that LLMs are pulling both legitimate and fraudulent content, which can make content appear to be trustworthy and make scam detection difficult.

It won’t just be the sources Google or Perplexity systems use, either. As Aurascape says, we are likely seeing the emergence of a “broad, cross-platform contamination effect.”

Also: How chatbots can change your mind – a new study reveals what makes AI so persuasive

“Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources,” the researchers noted. “This tells us the problem is not isolated to a single model or single vendor — it is becoming systemic.”

This technique could be considered a fork of indirect prompt injection, in which website code or functionality is compromised to force an LLM to perform an action or act in a harmful way. To stay safe, if you are going to use an AI browser or rely on AI summaries, you should always verify an answer you are given — especially if it involves contact information.

Furthermore, you should steer clear of providing any sensitive information to AI assistants, especially considering how new and untested they are. Just because they are convenient doesn’t mean they are safe, regardless of the provider.

Security

Comments (0)
Add Comment