Researchers from HiddenLayer discovered a session hijacking vulnerability in the personal version of Microsoft Copilot, which they dubbed the "Reprompt" attack. The flaw allowed an attacker to craft a malicious URL that, when clicked by a victim, would inject hidden prompts into their active Copilot session. This bypassed initial prompt safety checks and enabled the attacker to execute commands within the user's authenticated context, potentially leading to data exfiltration. The vulnerability was based on the ability to chain commands via a server-controlled loop, hiding the malicious activity from the user. Microsoft has addressed this vulnerability in its January 2026 Patch Tuesday updates. There is no evidence of in-the-wild exploitation.
The "Reprompt" attack exploited how Microsoft Copilot processed and handled prompts passed through URL parameters. The core of the vulnerability was that Copilot's security and data leakage protections were primarily focused on the user's initial prompt, but not on subsequent, programmatically generated prompts within the same session.
The attack worked as follows:
q URL parameter.This technique effectively turned the victim's browser into a proxy for the attacker to interact with the AI, using the victim's own account and data context.
The vulnerability did not affect the enterprise-grade Microsoft 365 Copilot, which is protected by more robust security controls like Microsoft Purview auditing and tenant-level Data Loss Prevention (DLP) policies.
There is no evidence that this vulnerability was exploited in the wild. The researchers at HiddenLayer responsibly disclosed the flaw to Microsoft, who subsequently developed and released a patch.
Had this vulnerability been exploited, it could have had significant privacy implications for users of the personal Copilot assistant. An attacker could have potentially:
The incident serves as a crucial case study in the emerging security challenges of Large Language Models (LLMs) and AI assistants, particularly around prompt injection and the processing of untrusted external input.
Detecting this specific attack post-patch is not relevant, but hunting for similar prompt injection techniques would involve:
| Type | Value | Description |
|---|---|---|
| URL Pattern | copilot.microsoft.com/?q=[encoded_prompt] |
Analyze web proxy or DNS logs for unusually long or complex URL parameters being passed to AI assistant domains. |
| Network Traffic Pattern | Repetitive requests from an AI assistant's domain to a single, non-Microsoft domain. | This could indicate a chained prompt attack where the AI is fetching instructions from an attacker's server in a loop. |
| Log Source | Microsoft 365 Audit Logs (for enterprise) | For M365 Copilot, audit logs can show all prompts and AI activity, which can be analyzed for anomalies. |
Applying the January 2026 security update from Microsoft is the direct remediation for this vulnerability.
Mapped D3FEND Techniques:
Educating users to be cautious of clicking unsolicited links, even those appearing to lead to trusted sites, is a key preventative measure.
Using web filtering solutions to analyze and block malicious URLs can prevent users from reaching the attacker's crafted link.
Mapped D3FEND Techniques:
The primary and most effective countermeasure is to ensure that all systems have the January 2026 Microsoft security updates applied. This patch directly addresses the root cause of the 'Reprompt' vulnerability within the Copilot service. For individual users, this means running Windows Update. For enterprise environments, this involves using centralized patch management systems like WSUS or Microsoft Intune to deploy the updates across all managed endpoints. Verifying patch compliance is crucial to ensure the vulnerability is fully remediated and no longer exploitable in the environment.
To defend against this and similar prompt injection attacks initiated via a malicious link, organizations should leverage web security gateways and endpoint protection solutions that perform deep URL analysis. These tools can be configured to inspect the structure and parameters of URLs, flagging those that are abnormally long, contain obfuscated code, or exhibit other signs of malicious intent. Specifically for AI services, rules can be created to monitor the content of parameters like the 'q' parameter in the Copilot URL. Alerting on or blocking URLs with suspicious prompt content can prevent the initial stage of the attack from succeeding.
For organizations using enterprise AI assistants like Microsoft 365 Copilot, it is essential to ingest and analyze the associated audit logs. By establishing a baseline of normal user interaction patterns, security teams can use User and Entity Behavior Analytics (UEBA) to detect anomalies indicative of a hijacked session. For example, a sudden change in the complexity or frequency of prompts, or prompts that instruct the AI to communicate with external, untrusted domains, could trigger an alert. This allows for the detection of a compromised session even if the initial injection vector was missed, providing an opportunity to respond by terminating the session and investigating the user's account.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.
Help others stay informed about cybersecurity threats