A new proof-of-concept attack method called "AI-in-the-Middle" demonstrates how threat actors can abuse popular, web-connected AI assistants for covert command-and-control (C2) communications. Researchers showed that malware can send requests to an enterprise AI assistant like Microsoft Copilot or Grok, which is then instructed to fetch attacker commands from an external source like a Pastebin page. The AI assistant acts as a trusted proxy, relaying the commands back to the malware. This technique camouflages malicious traffic as legitimate enterprise activity, making it exceptionally difficult to detect with traditional network-based security controls.
The "AI-in-the-Middle" technique represents a significant evolution in C2 tactics, moving away from direct connections to attacker-controlled servers and instead leveraging trusted, third-party web services—in this case, AI platforms.
The attack flow is as follows:

1. Malware on a compromised endpoint sends a request to a web-connected enterprise AI assistant such as Microsoft Copilot or Grok.
2. The request instructs the assistant to fetch content from an attacker-controlled external source, such as a Pastebin page.
3. The assistant retrieves the page and relays the attacker's commands back to the malware in its response.
From a network security perspective, the only traffic observed from the corporate network is a legitimate, encrypted connection from an endpoint to a trusted AI provider's domain (e.g., copilot.microsoft.com). There is no direct connection from the victim's machine to the attacker's server.
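That traffic asymmetry can be sketched as a toy model. Everything below is illustrative: `mock_assistant` stands in for the AI platform, and the domain and drop-page values are hypothetical placeholders, not real attacker infrastructure or a real API.

```python
# Toy model of the "AI-in-the-Middle" relay. No network calls are made;
# the mock only illustrates which hop each party sees.

ASSISTANT_DOMAIN = "copilot.microsoft.com"      # trusted AI provider (the only domain the network sees)
DROP_PAGE = "https://pastebin.example/raw/abc"  # hypothetical attacker command page

def mock_assistant(prompt: str) -> str:
    """Stands in for the AI assistant: if asked to fetch the external
    page, it returns that page's content (the attacker's command)."""
    if DROP_PAGE in prompt:
        return "CMD: collect-hostnames"  # content "fetched" from DROP_PAGE
    return "I can't help with that."

def malware_beacon() -> str:
    # The victim's only outbound connection is to ASSISTANT_DOMAIN.
    prompt = f"Summarize the text at {DROP_PAGE} verbatim."
    return mock_assistant(prompt)  # attacker command, relayed by the assistant

if __name__ == "__main__":
    print(f"Endpoint talked only to {ASSISTANT_DOMAIN}; received: {malware_beacon()}")
```

The point of the sketch is the traffic shape: the victim process contacts only the assistant's domain, while the hop to the attacker's page originates from the AI provider's infrastructure.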
This technique is a modern implementation of several established MITRE ATT&CK techniques:
- T1102 - Web Service: This is the primary technique. The attacker is using a legitimate external web service (the AI assistant) to relay C2 traffic. This is a form of C2 proxying.
- T1071.001 - Application Layer Protocol: Web Protocols: The communication between the malware and the AI assistant, as well as between the assistant and the attacker's command page, all occurs over standard HTTPS.
- T1027 - Obfuscated Files or Information: The attacker's commands can be easily obfuscated or hidden within a larger body of text on the external page, making it harder for automated systems to identify them as malicious.

The widespread adoption of AI assistants in corporate environments makes this technique particularly dangerous.
Detection is very challenging and shifts the focus from the network to the endpoint and user behavior.
| Type | Value | Description |
|---|---|---|
| api_endpoint | copilot.microsoft.com | High volume of automated, non-interactive requests to AI assistant APIs from a single process or endpoint could be suspicious. |
| process_name | powershell.exe, cscript.exe | Look for scripting engines making frequent, programmatic calls to AI assistant web endpoints. |
| log_source | EDR Telemetry | Endpoint Detection and Response tools are best positioned to see a non-browser process making web requests to AI platforms. |
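These indicators can be combined into a simple endpoint-side heuristic. The event schema, domain list, and threshold below are hypothetical stand-ins for real EDR telemetry, not any vendor's field names.

```python
# Heuristic sketch: flag non-browser processes making repeated web
# requests to AI assistant domains. Event dicts model EDR network
# telemetry; all field names and values are illustrative.
from collections import Counter

AI_DOMAINS = {"copilot.microsoft.com", "grok.com"}          # example AI endpoints
BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe"}      # expected interactive clients
THRESHOLD = 5  # requests per time window before alerting (tune per environment)

def flag_suspicious(events: list[dict]) -> set[str]:
    """Return process names that repeatedly contact AI domains but are not browsers."""
    hits = Counter(
        e["process"]
        for e in events
        if e["dest"] in AI_DOMAINS and e["process"].lower() not in BROWSERS
    )
    return {proc for proc, count in hits.items() if count >= THRESHOLD}

# Example: a scripting engine beaconing through the assistant, alongside
# normal browser traffic to the same domain.
events = (
    [{"process": "powershell.exe", "dest": "copilot.microsoft.com"}] * 6
    + [{"process": "msedge.exe", "dest": "copilot.microsoft.com"}] * 6
)
print(flag_suspicious(events))  # {'powershell.exe'}
```

Browser traffic to the same AI domain is ignored; only the non-browser process crossing the request threshold is flagged, which mirrors the EDR-centric detection guidance above.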
- D3-PA - Process Analysis
- D3-WSAA - Web Session Activity Analysis

Since network detection is difficult, endpoint-based behavioral analysis is key to detecting a non-browser process making calls to an AI service.
While difficult to implement comprehensively, organizations could enforce strict egress filtering and TLS inspection to monitor or block traffic to non-essential web services, including AI platforms that are not sanctioned for business use.
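A minimal sketch of that default-deny egress decision, assuming a hypothetical domain-to-category mapping rather than any specific proxy product's configuration:

```python
# Default-deny egress policy sketch: only domains in sanctioned
# categories are allowed out. The category names and mappings are
# illustrative assumptions, not a real proxy ruleset.
ALLOWED_CATEGORIES = {"business-saas", "update-services"}

DOMAIN_CATEGORIES = {
    "copilot.microsoft.com": "ai-assistant",          # blocked unless explicitly sanctioned
    "login.microsoftonline.com": "business-saas",     # allowed
}

def egress_allowed(domain: str) -> bool:
    """Allow traffic only to domains in a sanctioned category; deny unknowns."""
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    return category in ALLOWED_CATEGORIES

print(egress_allowed("copilot.microsoft.com"))  # False under this policy
```

Treating uncategorized domains as denied is what makes this a default-deny posture; organizations that do sanction an AI platform would move its category into the allowed set and rely on the endpoint-side detections above instead.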
If feasible, disabling the AI assistant's ability to access external web content would mitigate this specific C2 vector.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.