AI Browser Prompt Injection: Comet Risks & Security Tips

The promise of AI browsers is seductive: let an intelligent agent summarise, automate, and even transact on your behalf as you surf the web. But with great power comes a new class of risks. In 2025, the most pressing of these is prompt injection—a vulnerability that can turn your AI assistant into an unwitting accomplice for attackers. Here’s what you need to know to stay safe.

Key Findings

  • AI browsers are uniquely vulnerable to prompt injection attacks that can lead to financial loss and data theft.
  • The Comet AI browser case demonstrates how malicious sites can inject prompts to control browser behaviour, including draining bank accounts.
  • Security best practices and browser updates are essential to mitigate these risks in 2025.

Real-world scenario: The Comet AI browser prompt injection incident

In July 2025, security researchers at Brave discovered a critical vulnerability in the Perplexity Comet AI browser. By hiding malicious instructions in a Reddit comment, they demonstrated how an attacker could trick the browser’s AI assistant into exfiltrating sensitive data—such as email addresses and one-time passwords—simply by asking it to “summarise this page.”

The attack required no user malice or technical skill: the AI agent, unable to distinguish between genuine content and hidden instructions, obediently followed the attacker’s commands. The implications? Anything from leaking your private emails to draining your bank account, all with a single click.

For a technical breakdown and demonstration, see: Brave Blog: Comet Prompt Injection and Hacker News Discussion

What is prompt injection? (with technical explanation)

Prompt injection is the LLM-era equivalent of SQL injection. It occurs when an attacker embeds instructions in data that an AI model will process, causing the model to execute unintended actions. Unlike classic code injection, prompt injection exploits the fact that LLMs treat all input—user prompts, web content, even their own prior output—as context for generating responses.

There are two main types:

  • Direct prompt injection: The attacker manipulates the user’s prompt directly.
  • Indirect prompt injection: The attacker hides instructions in content the AI will process (e.g., a web page, PDF, or email). The Comet incident is a textbook case of the latter.

The fundamental problem? LLMs cannot reliably distinguish between data and instructions. Once malicious content enters the context window, all bets are off.

How AI browsers process prompts and why they’re at risk

AI browsers like Comet work by feeding both the user’s instructions and the content of the current web page to the LLM. The model is then asked to summarise, extract, or act on this combined context. But there’s no robust separation between trusted (user) and untrusted (web) input. If a malicious instruction is hidden in the page, the LLM may treat it as a command.

Traditional browser security boundaries—like the same-origin policy—are powerless here. The AI agent operates with the user’s full privileges, across all logged-in sessions. This means a prompt injection can access your emails, banking, or cloud storage, regardless of which tab you’re in.

How the Comet exploit worked

  1. Setup: Attacker embeds hidden instructions in a Reddit comment (e.g., using white text on a white background or a spoiler tag).
  2. Trigger: User visits the page and clicks “Summarise this page” in Comet.
  3. Injection: The AI assistant reads the entire page—including the hidden instructions.
  4. Exploit: The instructions tell the AI to navigate to sensitive sites, extract data, and exfiltrate it (e.g., by replying to the Reddit comment).
  5. Impact: The attacker gains access to the victim’s email and one-time password, potentially taking over accounts or draining funds.

For a detailed attack demonstration, see the Brave blog:
https://brave.com/blog/comet-prompt-injection/

Security best practices for users and developers

For users:

  • Never use AI browsers for sensitive tasks (banking, email, etc.).
  • Use a separate browser profile for AI browsing and keep it logged out of important accounts.
  • Minimise permissions and avoid granting access to sensitive data.
  • Treat all AI browser output with caution—assume it could be manipulated.

For developers:

  • Isolate agentic browsing from regular browsing. Make it obvious when the user is in “AI mode.”
  • Require explicit user confirmation for any security- or privacy-sensitive action (e.g., sending emails, making payments).
  • Treat all web content as untrusted. Never mix user instructions and page content in the same context window.
  • Implement defence-in-depth: use multiple, overlapping security controls, not just LLM guardrails.
  • Stay up to date with the latest research and patch known vulnerabilities quickly.

For more, see: Brave Blog: Mitigations

AI browser security features (2025)

FeatureComet (Perplexity)Brave Leo (planned)Traditional Browser
Separation of user/web input
User confirmation for sensitive actions
Isolation of agentic browsing
Publicly disclosed security incidentsN/AN/A

Sources: Brave Blog, Hacker News

Staying safe with AI browsers in 2025

AI browsers are powerful, but their security model is fundamentally different—and riskier—than anything that’s come before. Until the industry develops robust defences against prompt injection, the safest approach is caution: keep AI agents away from your sensitive data, and don’t trust them with anything you wouldn’t post on a public forum.

Security is a journey, not a destination. As AI browsers evolve, so too must our defences—and our scepticism.