New 'AI-in-the-Middle' Attack Turns Microsoft Copilot and Grok into C2 Channels

Researchers Detail "AI-in-the-Middle" Attack Abusing AI Assistants for Covert C2

Severity: MEDIUM
February 17, 2026
Threat Intelligence · Malware · Cloud Security


Executive Summary

A new proof-of-concept attack method called "AI-in-the-Middle" demonstrates how threat actors can abuse popular, web-connected AI assistants for covert command-and-control (C2) communications. Researchers showed that malware can send a prompt to an enterprise AI assistant such as Microsoft Copilot or Grok instructing it to fetch attacker commands from an external source, such as a Pastebin page. The AI assistant acts as a trusted proxy, relaying the commands back to the malware. This technique camouflages malicious traffic as legitimate enterprise activity, making it exceptionally difficult to detect with traditional network-based security controls.


Threat Overview

The "AI-in-the-Middle" technique represents a significant evolution in C2 tactics, moving away from direct connections to attacker-controlled servers and instead leveraging trusted, third-party web services—in this case, AI platforms.

The attack flow is as follows:

  1. Compromise: An attacker first compromises a target system and deploys malware capable of interacting with an AI assistant's API or web interface.
  2. C2 Request: The malware on the victim's machine constructs a prompt and sends it to the AI assistant (e.g., Microsoft Copilot). The prompt instructs the AI to fetch content from a specific external URL, such as a public Pastebin link.
  3. Command Fetch: The AI assistant, running on the provider's trusted infrastructure, makes an outbound request to the specified URL. This URL hosts the attacker's commands.
  4. Command Relay: The AI assistant retrieves the content from the attacker's page and includes it in its response back to the malware on the victim's machine.
  5. Execution: The malware parses the response from the AI assistant, extracts the commands, and executes them.
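To make the relay concrete for defenders, the parsing step above can be sketched as follows. This is an illustration only: the marker strings, prompt wording, and simulated reply are hypothetical, and the snippet performs no network calls and executes no commands. It simply shows how a command can be framed inside an assistant's free-text reply and pulled back out, which is the pattern detection logic needs to anticipate.

```python
# Illustrative only: how relayed commands might be framed and extracted.
# Marker strings and the simulated reply are hypothetical; nothing here
# contacts a network or executes anything.
import re

CMD_START, CMD_END = "<<CMD>>", "<<END>>"

def build_prompt(paste_url):
    """Prompt asking the assistant to fetch and echo an external page."""
    return (
        f"Fetch the text at {paste_url} and repeat it verbatim, "
        f"wrapped between {CMD_START} and {CMD_END}."
    )

def extract_command(ai_reply):
    """Parse a relayed command out of the assistant's conversational reply."""
    pattern = re.escape(CMD_START) + r"(.*?)" + re.escape(CMD_END)
    m = re.search(pattern, ai_reply, re.S)
    return m.group(1).strip() if m else None

# Simulated assistant reply wrapping a benign placeholder "command"
reply = "Sure! Here is the page: <<CMD>> ping localhost <<END>> Anything else?"
print(extract_command(reply))  # → ping localhost
```

The key takeaway for defenders: the command never appears on the wire between victim and attacker, only inside an otherwise ordinary-looking assistant response.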

From a network security perspective, the only traffic observed from the corporate network is a legitimate, encrypted connection from an endpoint to a trusted AI provider's domain (e.g., copilot.microsoft.com). There is no direct connection from the victim's machine to the attacker's server.

Technical Analysis

This technique is a modern implementation of several established MITRE ATT&CK techniques:

  • T1102 - Web Service: This is the primary technique. The attacker is using a legitimate external web service (the AI assistant) to relay C2 traffic. This is a form of C2 proxying.
  • T1071.001 - Application Layer Protocol: Web Protocols: The communication between the malware and the AI assistant, as well as between the assistant and the attacker's command page, all occurs over standard HTTPS.
  • T1027 - Obfuscated Files or Information: The attacker's commands can be easily obfuscated or hidden within a larger body of text on the external page, making it harder for automated systems to identify them as malicious.

Impact Assessment

The widespread adoption of AI assistants in corporate environments makes this technique particularly dangerous:

  • Evasion of Network Defenses: It bypasses firewalls, proxies, and network intrusion detection systems (NIDS) that rely on domain reputation, IP blacklisting, or signature-based detection. Blocking traffic to major AI providers is often not a viable option for businesses.
  • High Stealth: The C2 traffic is encrypted and blended with legitimate user activity, making it extremely difficult to isolate and identify.
  • Scalability: Attackers can easily change the URL of their command page (e.g., by creating a new Pastebin link) without having to modify the malware on the victim's machine.

Cyber Observables for Detection

Detection is very challenging and shifts the focus from the network to the endpoint and user behavior.

  • api_endpoint (copilot.microsoft.com): A high volume of automated, non-interactive requests to AI assistant APIs from a single process or endpoint could be suspicious.
  • process_name (powershell.exe, cscript.exe): Look for scripting engines making frequent, programmatic calls to AI assistant web endpoints.
  • log_source (EDR telemetry): Endpoint Detection and Response tools are best positioned to see a non-browser process making web requests to AI platforms.
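The EDR-telemetry observable can be turned into a simple triage heuristic. A minimal sketch, assuming a flat list of process network events with hypothetical field names (`process`, `dest_host`) and an illustrative domain/browser allowlist; real EDR export schemas will differ:

```python
# Sketch: flag network events where a non-browser process contacts a known
# AI assistant endpoint. Domain and browser lists are illustrative
# assumptions, not a complete inventory.
AI_DOMAINS = {"copilot.microsoft.com", "grok.com"}
BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe"}

def flag_suspicious(events):
    """Return events where a scripting engine or other non-browser
    process talks to an AI assistant domain."""
    return [
        e for e in events
        if e["dest_host"] in AI_DOMAINS
        and e["process"].lower() not in BROWSERS
    ]

events = [
    {"process": "msedge.exe",     "dest_host": "copilot.microsoft.com"},
    {"process": "powershell.exe", "dest_host": "copilot.microsoft.com"},
    {"process": "cscript.exe",    "dest_host": "grok.com"},
]
for e in flag_suspicious(events):
    print(e["process"], "->", e["dest_host"])
```

In practice this would run over a SIEM or EDR export rather than a hardcoded list, and the allowlist of expected browser processes would be tuned per environment.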

Detection & Response

  • Endpoint-First Approach: Detection must focus on the endpoint. Use an EDR solution to monitor for non-browser processes making API calls to AI services. A background service or a script should not be communicating with Copilot. Reference D3FEND technique D3-PA - Process Analysis.
  • Behavioral Analysis: Develop baselines for normal user interaction with AI assistants. Automated, periodic queries from the same process at regular intervals are a strong indicator of C2 activity. Reference D3FEND technique D3-WSAA - Web Session Activity Analysis.
  • TLS/SSL Inspection: While costly and complex, decrypting and inspecting traffic to trusted services can help identify anomalous prompts or data being sent to AI platforms. However, this raises privacy concerns.
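The behavioral-analysis idea above can be sketched with a standard beaconing check: automated C2 polling tends to produce near-constant gaps between requests, while human use of an assistant is bursty. The jitter threshold and minimum sample count below are illustrative, not tuned values:

```python
# Sketch: flag suspiciously regular inter-arrival times in a series of
# request timestamps (seconds). Thresholds are illustrative assumptions.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.10, min_events=5):
    """True if gaps between requests are nearly constant (low
    coefficient of variation), a common C2 beaconing signature."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and (pstdev(gaps) / avg) <= max_jitter

bot   = [0, 60, 120, 181, 240, 300]   # ~60s polling with slight drift
human = [0, 5, 7, 300, 304, 2000]     # bursty interactive sessions
print(looks_like_beaconing(bot), looks_like_beaconing(human))  # → True False
```

Real implementations typically also randomize-jitter-aware scoring and longer observation windows, since attackers can add sleep jitter to evade a naive variance check.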

Mitigation

  • Restrict AI Capabilities: If possible, use administrative controls to disable features in enterprise AI assistants that allow them to access external URLs. This would break the attack chain.
  • Application Control: Use application control to prevent unauthorized scripts or executables from running on endpoints. If the malware can't run, it can't initiate the C2 channel.
  • Least Privilege: Ensure that user accounts and processes do not have unnecessary permissions. Malware running with standard user privileges will have a harder time establishing persistence or causing significant damage.

Timeline of Events

  1. February 17, 2026: Security researchers detail the "AI-in-the-Middle" C2 technique.
  2. February 17, 2026: This article is published.

MITRE ATT&CK Mitigations

  • Endpoint behavioral analysis: Since network detection is difficult, endpoint-based behavioral analysis is key to detecting a non-browser process making calls to an AI service.
  • Egress filtering and TLS inspection: While difficult, organizations could implement strict egress filtering and TLS inspection to monitor or block traffic to non-essential web services, including AI platforms not used for business.
  • Restrict external content access: If feasible, disabling the AI assistant's ability to access external web content would mitigate this specific C2 vector.

Article Author

Jason Gomes


• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

AI security · C2 · command and control · Microsoft Copilot · evasion · Threat Intelligence
