A large-scale data exfiltration campaign has been identified, leveraging malicious browser extensions for Chromium-based browsers (like Google Chrome and Microsoft Edge) that posed as AI assistant tools. These extensions were downloaded nearly 900,000 times and were found active in over 20,000 corporate environments. The malware was designed to capture and exfiltrate sensitive user data, with a specific focus on harvesting the content of user prompts and conversations with Large Language Models (LLMs) such as ChatGPT and DeepSeek. This campaign exposes a critical new attack surface where employees, seeking to improve productivity with AI, inadvertently leak proprietary information, source code, and strategic plans to malicious actors. The findings underscore the urgent need for enterprises to implement governance and security controls around both browser extensions and the use of public AI services.
The threat involves malicious browser extensions distributed through official channels like the Chrome Web Store, making them appear legitimate to users. Once installed, these extensions operate as spyware, monitoring the user's browsing activity. Their primary objective is to act as a data siphon for interactions with popular LLM services.
When a user interacts with a service like ChatGPT, the extension captures the entire exchange—including the user's prompts, any pasted code or documents, and the AI's response. This data is then exfiltrated to an attacker-controlled server. The danger lies in the type of data employees often use with LLMs: drafting internal emails, summarizing confidential reports, debugging proprietary code, or brainstorming strategic initiatives. This creates a 'shadow data-plane' where sensitive intellectual property leaves the organization's secure perimeter without any traditional data loss prevention (DLP) alerts being triggered.
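Closing that gap means applying DLP-style inspection to AI-bound traffic itself. As a minimal sketch (the pattern set and the `scan_prompt` helper are hypothetical illustrations, not from the source), a proxy or browser control might flag obvious secrets before a prompt leaves the perimeter:

```python
import re

# Hypothetical detection patterns; a production DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Please debug this: key = 'AKIAABCDEFGHIJKLMNOP'  # INTERNAL ONLY"
hits = scan_prompt(prompt)
if hits:
    print(f"Blocked outbound prompt; matched: {hits}")
```

A check like this sits at the egress point (proxy or managed browser), so it fires regardless of which AI service the employee is using.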
The attack leverages the trust users place in browser extensions and the growing adoption of AI tools.
The attack chain maps to several MITRE ATT&CK techniques:
- Delivery and persistence: the extensions are marketed as productivity enhancers for AI and installed voluntarily by users (T1176 - Browser Extensions).
- Collection: user input is captured from the clipboard when data is pasted (T1115 - Clipboard Data), or scraped directly from page content.
- Staging and exfiltration: the collected data, including browsing history (T1213.002 - Web Browsing History) and LLM conversations, is bundled and sent to a remote C2 server (T1041 - Exfiltration Over C2 Channel). The exfiltration likely occurs over standard HTTPS to blend in with normal traffic.

The potential impact on the 20,000+ affected enterprises is severe. The exfiltrated data could include proprietary source code submitted for debugging, drafts of internal emails, summaries of confidential reports, and strategic plans discussed with AI assistants.
This stolen information can be sold on dark web markets, used for corporate espionage, or leveraged for future, more targeted attacks against the compromised organizations. The incident demonstrates a significant failure in corporate governance regarding the use of both browser extensions and public AI tools.
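On the detection side, defenders can inventory installed Chromium extensions and flag those requesting the broad permissions this kind of spyware depends on. A minimal sketch, assuming a standard Chrome profile layout; the permission list is an illustrative heuristic, not an indicator from the source:

```python
import json
from pathlib import Path

# Permissions enabling the collection behaviors described above
# (page scraping, clipboard capture, history access). Illustrative only.
RISKY_PERMISSIONS = {"<all_urls>", "clipboardRead", "webRequest", "tabs", "history"}

def audit_manifest(manifest: dict) -> set[str]:
    """Return the risky permissions an extension manifest requests."""
    requested = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
    return requested & RISKY_PERMISSIONS

def audit_chrome_profile(extensions_dir: Path) -> dict[str, set[str]]:
    """Scan a Chrome profile's Extensions directory and flag risky extensions."""
    findings = {}
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        risky = audit_manifest(manifest)
        if risky:
            findings[manifest.get("name", manifest_path.parent.name)] = risky
    return findings

# Example: a manifest resembling the spyware described in this campaign.
suspicious = {"name": "AI Helper", "permissions": ["tabs", "clipboardRead"],
              "host_permissions": ["<all_urls>"]}
print(audit_manifest(suspicious))  # the three risky permissions requested
```

Run fleet-wide via an EDR script task, this produces an extension inventory that can be diffed against the corporate allowlist.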
No specific extension names or C2 domains were provided in the source material.
Mapped D3FEND techniques and ATT&CK mitigations:
- D3-UDTA - User Data Transfer Analysis: monitor the volume and destination of outbound data transfers to detect exfiltration.
- M1033 - Limit Software Installation: use enterprise browser management to enforce an allowlist of approved extensions and block all others.
- M1017 - User Training: educate employees on the risks of untrusted browser extensions and the proper handling of corporate data with public AI tools.

In addition, use Data Loss Prevention (DLP) solutions to monitor and block sensitive information from being submitted to external websites, including LLMs.
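The extension allowlist recommended above can be enforced through Chromium's managed policies (`ExtensionInstallBlocklist` / `ExtensionInstallAllowlist`). A hedged sketch for a Linux fleet, where Chrome reads policy JSON from `/etc/opt/chrome/policies/managed/`; the file name and the example extension ID are placeholders:

```python
import json
from pathlib import Path

# ExtensionInstallBlocklist / ExtensionInstallAllowlist are Chrome enterprise
# policies; the allowlisted ID below is a placeholder for an approved extension.
policy = {
    "ExtensionInstallBlocklist": ["*"],  # block every extension by default
    "ExtensionInstallAllowlist": [
        "aapbdbdomjkkjkaonfhkkikfgjllcleb",  # placeholder extension ID
    ],
}

def write_policy(policy_dir: Path) -> Path:
    """Write the allowlist policy where Chrome on Linux picks it up."""
    policy_dir.mkdir(parents=True, exist_ok=True)
    path = policy_dir / "extension_allowlist.json"
    path.write_text(json.dumps(policy, indent=2), encoding="utf-8")
    return path

# write_policy(Path("/etc/opt/chrome/policies/managed"))  # requires root
```

On Windows the same policies are delivered via Group Policy rather than JSON files, but the deny-by-default structure is identical.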

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.