Malicious ChatGPT Chrome Extensions Discovered Stealing Session Tokens and User Data

MEDIUM
February 2, 2026
4m read
Malware · Phishing · Cloud Security

Executive Summary

Security researchers have discovered a malicious campaign targeting users of OpenAI's ChatGPT service through deceptive Google Chrome extensions. At least 16 extensions, advertised as legitimate enhancers for ChatGPT, were found to contain malicious code. Upon installation, these extensions inject scripts into the ChatGPT web interface to steal user session tokens and authorization details. This information enables attackers to hijack user sessions, providing them with complete access to the victim's account, including their entire chat history. This could lead to the exposure of sensitive personal or corporate information that users may have discussed with the AI. The incident underscores the security risks associated with browser extensions and the need for user vigilance.

Threat Overview

The attack preys on the popularity of ChatGPT and the desire of users to enhance its functionality. Attackers publish extensions on the Chrome Web Store that promise useful features but secretly harbor malicious intent. The core of the attack is the abuse of the permissive security model for browser extensions, which often allows them to read and modify data on websites the user visits. In this case, the extensions specifically target chat.openai.com.

Technical Analysis

The attack mechanism is a form of session hijacking facilitated by a malicious browser extension.

  1. Distribution: The malicious extensions are distributed via the Google Chrome Web Store, often mimicking the functionality of legitimate tools. This initial step relies on social engineering to convince users to install the extension.
  2. Execution: Once installed, the extension uses its permissions to inject a malicious JavaScript payload into the ChatGPT web application (T1176 - Browser Extensions). The manifest permissions that make this possible are illustrated in the sketch after this list.
  3. Credential Access: The injected script monitors the page's activity, looking for authorization data in HTTP requests or local storage. It intercepts the user's session token, which the browser uses to maintain an authenticated session with the ChatGPT service (T1539 - Steal Web Session Cookie).
  4. Exfiltration: The stolen session token and other authorization details are then sent to a remote server controlled by the attacker (T1041 - Exfiltration Over C2 Channel).
  5. Impersonation: With the session token, the attacker can now make requests to the ChatGPT API as the victim, effectively hijacking their session and gaining access to all their data.
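
For defenders reviewing a suspicious extension, the enabling ingredient for steps 2 and 3 is a content script or host permission scoped to chat.openai.com combined with sensitive APIs such as cookies or storage. Below is a minimal Python sketch, not taken from the researchers' report, that flags this pattern in an unpacked extension's manifest.json; the host patterns and permission names treated as risky are illustrative assumptions.

```python
# Illustrative sketch: flag Chrome extension manifests that can script chat.openai.com.
# The "risky" host and permission lists are assumptions for illustration, not indicators
# published by the researchers.
import json
from pathlib import Path

RISKY_HOSTS = ("chat.openai.com", "chatgpt.com", "<all_urls>", "*://*/*")
RISKY_PERMS = {"cookies", "webRequest", "storage", "scripting"}

def review_manifest(manifest_path: Path) -> list[str]:
    """Return human-readable findings for a single extension manifest."""
    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    findings = []

    # Hosts the extension can script: MV3 host_permissions, MV2 host patterns kept in
    # "permissions", and content_script match patterns.
    hosts = list(manifest.get("host_permissions", []))
    hosts += [p for p in manifest.get("permissions", []) if "://" in p or p == "<all_urls>"]
    for cs in manifest.get("content_scripts", []):
        hosts.extend(cs.get("matches", []))
    if any(risky in h for h in hosts for risky in RISKY_HOSTS):
        findings.append(f"can inject into ChatGPT or all sites: {hosts}")

    # API permissions that make token theft straightforward once a script is injected.
    sensitive = {p for p in manifest.get("permissions", []) if p in RISKY_PERMS}
    if sensitive:
        findings.append(f"requests sensitive APIs: {sorted(sensitive)}")

    return findings

if __name__ == "__main__":
    # Point this at one or more unpacked extension directories.
    for path in Path(".").rglob("manifest.json"):
        for finding in review_manifest(path):
            print(f"{path.parent}: {finding}")
```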

Impact Assessment

The impact of a hijacked ChatGPT session can be severe. Many users input sensitive, confidential, or proprietary information into ChatGPT, including source code, business plans, personally identifiable information (PII), and internal company documents. An attacker with access to this chat history could leverage it for extortion, corporate espionage, or identity theft. They could also continue conversations as the victim, potentially tricking the victim's colleagues or contacts. The breach of privacy is significant, and for corporate users it could represent a major data leak.

Cyber Observables for Detection

  • Extension Audit: The primary observable is the presence of one of the 16 known malicious extensions installed in the browser.
  • Network Traffic: Anomalous outbound requests from the chat.openai.com web page to unknown domains can indicate data exfiltration by an injected script (a log-triage sketch follows this list).
  • Account Activity: Unusual activity in the user's ChatGPT account, such as new chats they don't recognize, could be a sign of a hijacked session.
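
The Network Traffic observable can be triaged from proxy or HTTP logs. The following is a minimal Python sketch that assumes a CSV export with referer and dest_host columns; the expected-destination suffixes are an illustrative baseline, not a vetted allowlist, and the field names should be adjusted to your proxy or Zeek/HTTP log schema.

```python
# Illustrative sketch: triage proxy logs for exfiltration from the ChatGPT web app.
# Assumes a CSV export with "timestamp", "referer", and "dest_host" columns.
import csv

# Destinations the ChatGPT page is expected to talk to (assumed baseline; extend as needed).
EXPECTED_SUFFIXES = (".openai.com", ".oaistatic.com", ".oaiusercontent.com")

def suspicious_requests(log_path: str):
    """Yield rows where a request from a chat.openai.com page goes to an unexpected host."""
    with open(log_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            referer = row.get("referer", "")
            dest = row.get("dest_host", "")
            if "chat.openai.com" in referer and not dest.endswith(EXPECTED_SUFFIXES):
                yield row

if __name__ == "__main__":
    for row in suspicious_requests("proxy_export.csv"):
        print(row.get("timestamp", ""), row.get("dest_host", ""))
```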

Detection & Response

  1. Review Browser Extensions: All users, especially those who use ChatGPT, should immediately audit their installed Chrome extensions. Navigate to chrome://extensions, carefully review each extension and its permissions, and remove any that are unfamiliar, unnecessary, or overly permissive.
  2. Monitor for Malicious Extensions: Enterprise security teams can use browser management tools or EDR solutions to inventory installed extensions across their fleet and compare them against a blocklist of known malicious extension IDs (a minimal inventory sketch follows this list).
  3. Log Out of Sessions: If a user suspects their session may have been hijacked, they should immediately log out of their OpenAI account on all devices. This will invalidate the existing session tokens, including any that may have been stolen.
  4. Review Account History: Users should review their ChatGPT chat history for any conversations they did not initiate.
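
For step 2, the sketch below shows the idea for a single endpoint: enumerate Chrome extension directories in common per-OS profile locations and compare the IDs against a blocklist. It is an illustrative Python sketch, not an official tool; the blocklist entry is a placeholder, and enterprise deployments would normally rely on Chrome Browser Cloud Management or an EDR inventory instead.

```python
# Illustrative sketch: inventory locally installed Chrome extensions and compare their
# IDs against a blocklist. The blocklist ID below is a placeholder; substitute the
# extension IDs published for this campaign.
from pathlib import Path

BLOCKLIST = {"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"}  # placeholder, not a real indicator

PROFILE_ROOTS = [
    Path.home() / ".config/google-chrome",                       # Linux
    Path.home() / "Library/Application Support/Google/Chrome",   # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data",       # Windows
]

def installed_extension_ids():
    """Yield (profile, extension_id) pairs found on this machine."""
    for root in PROFILE_ROOTS:
        if not root.exists():
            continue
        for ext_dir in root.glob("*/Extensions/*"):
            # Extension IDs are 32-character strings; a length check filters stray entries.
            if len(ext_dir.name) == 32:
                yield ext_dir.parent.parent.name, ext_dir.name

if __name__ == "__main__":
    for profile, ext_id in installed_extension_ids():
        status = "BLOCKLISTED" if ext_id in BLOCKLIST else "ok"
        print(f"{profile}: {ext_id} [{status}]")
```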

Mitigation

  1. Limit Extension Installation: Be highly selective about installing browser extensions. Only install extensions from well-known, reputable developers. Read reviews and carefully check the requested permissions before installation (M1033 - Limit Software Installation).
  2. Principle of Least Privilege: When installing an extension, check if it asks for permissions that seem excessive for its stated function (e.g., a simple color-changing extension should not need to read data on all websites).
  3. Corporate Policy: Enterprises should establish a policy that either blocks the installation of all browser extensions by default or only allows installation from a pre-approved allowlist.
  4. Data Minimization: Treat AI chatbots like any other public cloud service. Avoid inputting highly sensitive or confidential information that could cause significant damage if exposed (M1017 - User Training).

Timeline of Events

  • February 2, 2026: This article was published.

MITRE ATT&CK Mitigations

  • M1033 - Limit Software Installation: Use enterprise policies to restrict or block the installation of browser extensions, or maintain an allowlist of approved extensions.
  • M1017 - User Training: Train users to be cautious about the extensions they install and the data they input into public AI services.
  • M1047 - Audit: Regularly audit installed browser extensions across the enterprise to identify and remove unauthorized or malicious add-ons.

D3FEND Defensive Countermeasures

In a corporate environment, the most effective way to prevent threats like malicious ChatGPT extensions is to implement a browser extension denylist (or a more secure allowlist). Using browser management policies (e.g., Google Chrome's ExtensionInstallBlocklist), security administrators can centrally prevent users from installing known-malicious extensions. As security researchers publish the IDs of the 16 malicious extensions, these should be immediately added to the denylist. For a more robust security posture, organizations should default to blocking all extensions and maintain a small allowlist of vetted, business-approved extensions (Executable Allowlisting, D3-EAL). This prevents not only this specific threat but also future, similar attacks that leverage rogue browser add-ons.
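
As a concrete illustration of the default-deny approach, the sketch below writes the ExtensionInstallBlocklist and ExtensionInstallAllowlist policies to the managed-policy directory used by Google Chrome on Linux; the allowlisted ID is a placeholder, and on Windows or macOS the same policies would typically be delivered via GPO, Intune, or Chrome Browser Cloud Management.

```python
# Illustrative sketch: write a Chrome managed policy (Linux path shown) that blocks all
# extensions by default and allows only a vetted set. The allowlisted ID is a placeholder.
import json
from pathlib import Path

POLICY_DIR = Path("/etc/opt/chrome/policies/managed")   # Google Chrome on Linux
POLICY = {
    "ExtensionInstallBlocklist": ["*"],                  # deny everything by default
    "ExtensionInstallAllowlist": [
        "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",              # placeholder: approved extension ID
    ],
}

if __name__ == "__main__":
    POLICY_DIR.mkdir(parents=True, exist_ok=True)        # requires root privileges
    (POLICY_DIR / "extension_policy.json").write_text(json.dumps(POLICY, indent=2))
    print("Policy written; restart Chrome or reload policies from chrome://policy.")
```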

To detect hijacked ChatGPT sessions, organizations can perform Web Session Activity Analysis. This involves monitoring access logs for a user's OpenAI account. If a session token stolen from a user in one geographic location (e.g., New York) is suddenly used to access the account from a completely different and unexpected location (e.g., an IP address in Eastern Europe) within an impossible travel time, it is a strong indicator of session hijacking. Cloud Access Security Brokers (CASB) or identity providers can often be configured to detect and alert on such impossible travel scenarios or other session anomalies, such as changes in user-agent strings. Upon detecting such activity, the system should be configured to automatically terminate the suspicious session and force a re-authentication for the user.
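
The impossible-travel check can be expressed compactly. The following Python sketch assumes login events with timestamps and geolocations are already available (for example, from an identity provider or CASB export); the 900 km/h threshold is an illustrative approximation of commercial flight speed, not a vendor default.

```python
# Illustrative sketch of the impossible-travel check described above.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_KMH = 900.0  # roughly commercial flight speed (assumed threshold)

@dataclass
class Login:
    user: str
    time: datetime
    lat: float
    lon: float

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login) -> bool:
    """True if the implied speed between consecutive logins exceeds the threshold."""
    hours = (curr.time - prev.time).total_seconds() / 3600
    if hours <= 0:
        return True  # simultaneous logins from two places are also suspicious
    return haversine_km(prev, curr) / hours > MAX_PLAUSIBLE_KMH

if __name__ == "__main__":
    ny = Login("alice", datetime(2026, 2, 2, 9, 0), 40.71, -74.01)     # New York
    kyiv = Login("alice", datetime(2026, 2, 2, 10, 30), 50.45, 30.52)  # Eastern Europe
    print(impossible_travel(ny, kyiv))  # True: ~7,500 km in 1.5 hours
```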

Sources & References

  • 2nd February – Threat Intelligence Report, Check Point Research (research.checkpoint.com), February 2, 2026
  • Malwarebytes Makes ChatGPT Smarter About Scams, Malware and Online Risk, Malwarebytes (malwarebytes.com), February 2, 2026

Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

browser extension · session hijacking · ChatGPT · data theft · Chrome · malware
