Malicious AI Browser Extensions Caught Stealing ChatGPT Prompts and Corporate Data

Malicious AI-Themed Browser Extensions Harvest Sensitive Data from Users and LLMs

HIGH
March 13, 2026
5m read
Malware · Data Breach · Cloud Security

Related Entities

Products & Tech

Chromium, ChatGPT, DeepSeek

Full Report

Executive Summary

A large-scale data exfiltration campaign has been identified, leveraging malicious browser extensions for Chromium-based browsers (like Google Chrome and Microsoft Edge) that posed as AI assistant tools. These extensions were downloaded nearly 900,000 times and were found active in over 20,000 corporate environments. The malware was designed to capture and exfiltrate sensitive user data, with a specific focus on harvesting the content of user prompts and conversations with Large Language Models (LLMs) such as ChatGPT and DeepSeek. This campaign exposes a critical new attack surface where employees, seeking to improve productivity with AI, inadvertently leak proprietary information, source code, and strategic plans to malicious actors. The findings underscore the urgent need for enterprises to implement governance and security controls around both browser extensions and the use of public AI services.

Threat Overview

The threat involves malicious browser extensions distributed through official channels like the Chrome Web Store, making them appear legitimate to users. Once installed, these extensions operate as spyware, monitoring the user's browsing activity. Their primary objective is to act as a data siphon for interactions with popular LLM services.

When a user interacts with a service like ChatGPT, the extension captures the entire exchange—including the user's prompts, any pasted code or documents, and the AI's response. This data is then exfiltrated to an attacker-controlled server. The danger lies in the type of data employees often use with LLMs: drafting internal emails, summarizing confidential reports, debugging proprietary code, or brainstorming strategic initiatives. This creates a 'shadow data-plane' where sensitive intellectual property leaves the organization's secure perimeter without any traditional data loss prevention (DLP) alerts being triggered.

Technical Analysis

The attack leverages the trust users place in browser extensions and the growing adoption of AI tools.

  1. Initial Access: The vector is user-driven installation of a malicious extension from an official browser store (T1176 - Browser Extensions). The extensions are marketed as productivity enhancers for AI.
  2. Collection: The extension uses its permissions to read content from web pages. It specifically targets the DOM elements of LLM chat interfaces to collect user prompts and AI responses (direct content scraping, or T1115 - Clipboard Data when material is pasted in).
  3. Data Staging & Exfiltration: The collected data, including browsing history (T1217 - Browser Information Discovery) and LLM conversations, is bundled and sent to a remote C2 server (T1041 - Exfiltration Over C2 Channel). The exfiltration likely occurs over standard HTTPS to blend in with normal traffic.
  4. Defense Evasion: By residing within the browser as an extension, the malware operates in a space that is often less scrutinized by traditional endpoint security solutions compared to standalone executables.
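The exfiltration pattern in step 3 can be hunted for at the network layer: sustained upload volume from browser traffic to domains outside a known-good set. A minimal sketch of that idea, where the log format, domain allowlist, and byte threshold are all illustrative assumptions rather than values from the source report:

```python
from collections import defaultdict

def flag_possible_exfil(records, known_domains, byte_threshold=5_000_000):
    """Flag unknown destination domains receiving unusually large uploads.

    records: iterable of (dest_domain, bytes_sent) tuples, e.g. parsed from
    proxy or firewall logs. Domains in known_domains are treated as benign.
    """
    totals = defaultdict(int)
    for domain, bytes_sent in records:
        if domain not in known_domains:
            totals[domain] += bytes_sent
    # Report any unknown domain whose cumulative upload volume crosses the threshold.
    return sorted(d for d, total in totals.items() if total >= byte_threshold)

# A burst of large POSTs to an unrecognized host stands out against normal traffic.
logs = [
    ("chat.openai.com", 40_000),
    ("sync.evil-cdn.example", 6_200_000),  # hypothetical C2 domain
    ("docs.google.com", 90_000),
]
print(flag_possible_exfil(logs, {"chat.openai.com", "docs.google.com"}))
# → ['sync.evil-cdn.example']
```

Because the exfiltration rides HTTPS, payload inspection is usually impossible; volume-per-destination baselining of this kind is one of the few signals that survives encryption.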

Impact Assessment

The potential impact on the 20,000+ affected enterprises is severe. The exfiltrated data could include:

  • Intellectual Property: Source code, product designs, and research data.
  • Business Strategy: Marketing plans, financial forecasts, and merger and acquisition details.
  • Personally Identifiable Information (PII): Employee or customer data pasted into the LLM for summarization or analysis.
  • Credentials: API keys or passwords accidentally included in code snippets.

This stolen information can be sold on dark web markets, used for corporate espionage, or leveraged for future, more targeted attacks against the compromised organizations. The incident demonstrates a significant failure in corporate governance regarding the use of both browser extensions and public AI tools.

IOCs

No specific extension names or C2 domains were provided in the source material.

Detection & Response

  • Extension Auditing: Security teams must be able to audit all browser extensions installed across their fleet of devices. Tools for browser enterprise management can provide this visibility.
  • Network Traffic Analysis: Monitor for workstations sending unusually large amounts of data to unknown or suspicious domains. While often encrypted, the volume and destination of traffic from a browser process can be an indicator of data exfiltration. This aligns with D3FEND's D3-UDTA - User Data Transfer Analysis.
  • DLP for Web: Implement Data Loss Prevention policies that can inspect and block the submission of sensitive data patterns (e.g., source code, project names, PII) to public websites, including LLMs.
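The extension-auditing step above amounts to enumerating installed extensions and reviewing their permission grants. A rough sketch of that review, assuming a directory of unpacked extensions laid out like Chrome's profile Extensions folder (the RISKY permission set is an illustrative starting point, not an authoritative list):

```python
import json
from pathlib import Path

# Permissions broad enough to read or scrape page content, including LLM chats.
# Illustrative selection; tune to your organization's risk tolerance.
RISKY = {"<all_urls>", "tabs", "webRequest", "scripting", "clipboardRead"}

def audit_extensions(extensions_dir):
    """Walk an unpacked-extensions directory and flag risky permission grants.

    Returns a sorted list of (extension_name, [risky_permissions]) pairs.
    """
    findings = []
    for manifest in Path(extensions_dir).glob("**/manifest.json"):
        data = json.loads(manifest.read_text())
        # Manifest V3 splits host access into host_permissions; check both.
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        risky = sorted(perms & RISKY)
        if risky:
            findings.append((data.get("name", manifest.parent.name), risky))
    return sorted(findings)
```

At fleet scale the same review is better driven through enterprise browser management telemetry, but a script like this is a quick way to triage a single suspect endpoint.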

Mitigation

  1. Browser Extension Governance: Implement a strict policy for browser extensions. Use enterprise controls to create an allowlist of approved, vetted extensions and block all others. This is a direct application of M1033 - Limit Software Installation.
  2. Acceptable Use Policy for AI: Develop and enforce a clear policy on the use of public AI tools. Prohibit employees from submitting any confidential, proprietary, or customer data to public LLMs.
  3. User Training: Educate employees about the risks of browser extensions and the dangers of inputting sensitive information into public AI services. This corresponds to M1017 - User Training.
  4. Enterprise AI Solutions: For business use cases, invest in private or enterprise-grade AI solutions that can be run in a secure, isolated environment and do not use customer data for training.
  5. Data Loss Prevention (DLP): Deploy modern DLP solutions that have visibility into web and browser traffic to detect and prevent the leakage of sensitive information to unauthorized destinations.
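The DLP control in item 5 ultimately reduces to pattern-matching outbound text before it reaches a public LLM. A minimal sketch of such a pre-submission check; the pattern names and regexes are illustrative examples only, and a production policy would cover far more data classes:

```python
import re

# Illustrative sensitive-data patterns; extend with project names, internal
# hostnames, customer identifiers, etc. for a real deployment.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text):
    """Return the names of sensitive patterns found in an outbound LLM prompt."""
    return sorted(name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text))

def allow_submission(text):
    """Policy hook: permit the request only if no sensitive pattern matches."""
    return not scan_prompt(text)
```

In practice this check would sit in a secure web gateway or browser-isolation layer, so the prompt is inspected before it leaves the corporate perimeter rather than after.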

Timeline of Events

March 13, 2026: This article was published.

MITRE ATT&CK Mitigations

  • M1033 - Limit Software Installation: Use enterprise browser management to enforce an allowlist of approved extensions and block all others.
  • M1017 - User Training: Educate employees on the risks of untrusted browser extensions and the proper handling of corporate data with public AI tools.
  • M1057 - Data Loss Prevention: Use Data Loss Prevention (DLP) solutions to monitor and block sensitive information from being submitted to external websites, including LLMs.

Sources & References

  • "Top 5 Cybersecurity News Stories March 13, 2026," DieSec (diesec.com), March 13, 2026.
  • "Malicious AI Assistant Extensions Harvest LLM Chat Histories," Microsoft Security (microsoft.com), March 13, 2026.

Article Author

Jason Gomes


• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

Browser Extension · Spyware · Data Exfiltration · AI Security · ChatGPT · LLM · Shadow IT
