A groundbreaking report from Google's Threat Intelligence Group (GTIG) reveals that state-sponsored threat actors are actively weaponizing generative AI and Large Language Models (LLMs) to enhance their cyber operations. Groups linked to China, Iran, North Korea, and Russia have been observed using LLMs, including Google's own Gemini, to accelerate and scale their attacks. This marks a significant evolution in adversary tradecraft, where AI is used to automate reconnaissance, improve social engineering, and assist in malware development. The findings indicate that defenders must now account for AI-augmented threats that can operate at a pace and level of sophistication previously unattainable.
The GTIG report details a systematic adoption of LLMs by multiple state-backed Advanced Persistent Threat (APT) groups. These actors are not merely experimenting with AI but are integrating it into core operational workflows, with use cases spanning the full attack lifecycle, from reconnaissance and social engineering to malware development.
The report specifically names UNC2970 (Lazarus Group), a North Korean APT, as using LLMs for target reconnaissance, demonstrating that these techniques are already being applied in real-world operations by top-tier adversaries.
Threat actors are interacting with LLMs through various means, including public web interfaces and APIs. They often employ prompt engineering techniques to bypass the safety filters built into these models. For example, attackers might frame a malicious request within a fabricated, benign scenario, such as asking the AI to act as an expert cybersecurity analyst and generate a vulnerability testing plan for a fictional company. This allows them to extract sensitive information and code that would otherwise be blocked.
Key TTPs enhanced by AI include:
- T1566 (Phishing): AI generates highly personalized and grammatically flawless phishing emails, making them harder to detect.
- T1592 (Gather Victim Host Information): LLMs can rapidly parse vast amounts of public data to build a detailed picture of a target's network and software stack.
- T1588.002 (Obtain Capabilities: Tool): Actors use AI to write or refine code for custom malware, droppers, and C2 communication modules.

The primary advantage AI provides to these actors is not the creation of entirely new capabilities, but the dramatic increase in the speed, scale, and stealth of existing TTPs.
The weaponization of AI by state-sponsored actors represents a paradigm shift in the threat landscape. Organizations can expect to face a higher volume of more sophisticated and harder-to-detect attacks, and should plan for the corresponding business impact on detection, response, and user-awareness programs.
Detecting AI-augmented threats requires a shift towards behavioral analysis, as traditional signatures will be less effective.
- Monitor for connections to generative AI API endpoints (e.g., generativelanguage.googleapis.com) from sensitive network segments or by unusual user accounts.
- Train users to identify and report sophisticated, AI-generated phishing attempts.
- Filter or monitor outbound traffic to known public AI/LLM service APIs to detect potential misuse (a minimal sketch follows this list).
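As a starting point, the filtering and monitoring recommendation above can be expressed as a simple egress-evaluation check. The sketch below is illustrative only: the monitored domain list, the "authorized" developer subnet, and the connection fields are assumptions to be replaced with values from your own environment and proxy or firewall logs.

```python
import ipaddress

# Domains associated with public generative AI APIs to monitor
# (illustrative, not exhaustive; extend for your environment).
MONITORED_AI_DOMAINS = {
    "generativelanguage.googleapis.com",
    "api.openai.com",
}

# Hypothetical subnets where AI API access is expected (e.g., developer VLANs).
AUTHORIZED_SOURCE_SUBNETS = [
    ipaddress.ip_network("10.20.0.0/16"),  # assumed developer workstation range
]

def evaluate_egress(src_ip: str, dest_host: str) -> str:
    """Return 'allow', 'alert', or 'ignore' for a single outbound connection."""
    if dest_host not in MONITORED_AI_DOMAINS:
        return "ignore"                    # not an AI/LLM endpoint we track
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in AUTHORIZED_SOURCE_SUBNETS):
        return "allow"                     # expected source; log for baselining
    return "alert"                         # AI API access from an unexpected segment

if __name__ == "__main__":
    print(evaluate_egress("10.20.5.14", "generativelanguage.googleapis.com"))  # allow
    print(evaluate_egress("10.50.3.7", "api.openai.com"))                      # alert
```

In practice this decision logic would live in a proxy policy, SIEM rule, or SOAR playbook rather than a standalone script, but the allow/alert split is the same.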
Mapped D3FEND Techniques:
Use EDR and UEBA tools to detect anomalous behaviors indicative of post-compromise activity, regardless of how the initial access was achieved.
Mapped D3FEND Techniques:
To counter the threat of state actors using LLMs for malicious purposes, organizations must enhance their network traffic analysis capabilities. Specifically, security teams should establish a baseline of normal outbound traffic to known generative AI platforms like Google's Gemini API endpoints. Implement monitoring and alerting for anomalous connections to these services. Key indicators of malicious activity include: API calls originating from servers or non-developer workstations, a sudden spike in the volume of data sent to these endpoints, or connections occurring outside of normal business hours. Using a combination of NetFlow analysis, proxy logs, and deep packet inspection (where feasible) can help identify these suspicious patterns. By focusing on the network behavior associated with AI misuse, defenders can create an effective detection layer that is agnostic to the specific malicious code or phishing lure used by the attacker.
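A minimal sketch of that baseline-and-alert logic is shown below, assuming proxy-log records have already been parsed into simple dictionaries. The field names, host-naming convention, business-hours window, and byte threshold are all placeholders that would come from your own proxy, SIEM, and baselining process.

```python
from datetime import datetime

# Assumed proxy-log records, already parsed; real field names depend on your proxy/SIEM.
SAMPLE_RECORDS = [
    {"ts": "2024-05-02T03:14:00", "src_host": "build-server-01",
     "dest": "generativelanguage.googleapis.com", "bytes_out": 4_800_000},
    {"ts": "2024-05-02T10:05:00", "src_host": "dev-laptop-17",
     "dest": "generativelanguage.googleapis.com", "bytes_out": 35_000},
]

BUSINESS_HOURS = range(8, 19)          # 08:00-18:59 local; adjust per site
DEVELOPER_HOST_PREFIXES = ("dev-",)    # hypothetical host-naming convention
BYTES_OUT_THRESHOLD = 1_000_000        # assumed cap derived from your baseline

def flag_reasons(rec: dict) -> list[str]:
    """Return the anomaly indicators matched by one proxy-log record."""
    reasons = []
    if not rec["dest"].endswith("generativelanguage.googleapis.com"):
        return reasons                                   # not an endpoint of interest
    hour = datetime.fromisoformat(rec["ts"]).hour
    if hour not in BUSINESS_HOURS:
        reasons.append("outside business hours")
    if not rec["src_host"].startswith(DEVELOPER_HOST_PREFIXES):
        reasons.append("non-developer source host")
    if rec["bytes_out"] > BYTES_OUT_THRESHOLD:
        reasons.append("outbound volume above baseline")
    return reasons

if __name__ == "__main__":
    for rec in SAMPLE_RECORDS:
        reasons = flag_reasons(rec)
        if reasons:
            print(f"ALERT {rec['src_host']} -> {rec['dest']}: {', '.join(reasons)}")
```

The same indicators (unexpected source, volume spike, off-hours timing) can be encoded as SIEM correlation rules; the script simply makes the detection logic explicit.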
Since AI-augmented attacks often begin with successful social engineering, detecting anomalous user behavior post-compromise is critical. Deploy a User Behavior Analysis (UBA) solution to baseline the typical activities of employees, especially developers, system administrators, and executives. The system should monitor for deviations from this baseline, such as an account suddenly accessing sensitive repositories it has never touched before, executing unusual scripting commands (e.g., reconnaissance scripts), or attempting to access internal resources in a pattern inconsistent with their job function. In the context of this threat, a UBA system could flag a user who, shortly after a logged interaction with a public AI tool, begins performing activities indicative of internal reconnaissance. This behavioral approach helps to detect compromised accounts being used by attackers, even when the attackers are using legitimate credentials.
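The following sketch illustrates the core of that behavioral approach in simplified form: build a per-user baseline of resources accessed, then flag first-time access to resources labeled sensitive. The event schema, resource labels, and sensitivity list are hypothetical; a production UBA/UEBA platform would derive these from identity, repository, and endpoint telemetry.

```python
from collections import defaultdict

# Hypothetical historical access events used to build per-user baselines.
HISTORICAL_EVENTS = [
    {"user": "alice", "resource": "repo:web-frontend"},
    {"user": "bob",   "resource": "repo:infra-scripts"},
]

# Resources considered sensitive (assumed labels; source from your CMDB/IAM).
SENSITIVE_RESOURCES = {"repo:payment-service", "repo:infra-scripts"}

def build_baseline(events):
    """Map each user to the set of resources they normally access."""
    baseline = defaultdict(set)
    for ev in events:
        baseline[ev["user"]].add(ev["resource"])
    return baseline

def score_event(event, baseline):
    """Flag first-time access to a sensitive resource as a behavioral deviation."""
    known = baseline.get(event["user"], set())
    if event["resource"] in known:
        return None                                      # within normal behavior
    if event["resource"] in SENSITIVE_RESOURCES:
        return f"DEVIATION: {event['user']} first-time access to {event['resource']}"
    return None

if __name__ == "__main__":
    baseline = build_baseline(HISTORICAL_EVENTS)
    new_event = {"user": "alice", "resource": "repo:payment-service"}
    alert = score_event(new_event, baseline)
    if alert:
        print(alert)   # alice has never touched this sensitive repo before
```

A real deployment would also weight signals such as unusual scripting commands or a recent interaction with a public AI tool, but the baseline-then-deviation pattern is the same.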

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.