Nation-State Hackers from China, Russia, and Iran Weaponize Google's Gemini AI for Attacks

Google Report Confirms State-Sponsored Hackers Using Gemini for Recon, Malware Dev, and Phishing

HIGH
February 14, 2026
6m read
Threat Actor · Threat Intelligence · Malware

Related Entities

China · Iran · North Korea · Russia

Full Report

Executive Summary

A groundbreaking report from Google's Threat Intelligence Group (GTIG) reveals that state-sponsored threat actors are actively weaponizing generative AI and Large Language Models (LLMs) to enhance their cyber operations. Groups linked to China, Iran, North Korea, and Russia have been observed using LLMs, including Google's own Gemini, to accelerate and scale their attacks. This marks a significant evolution in adversary tradecraft, where AI is used to automate reconnaissance, improve social engineering, and assist in malware development. The findings indicate that defenders must now account for AI-augmented threats that can operate at a pace and level of sophistication previously unattainable.


Threat Overview

The GTIG report details a systematic adoption of LLMs by multiple state-backed Advanced Persistent Threat (APT) groups. These actors are not merely experimenting with AI but are integrating it into core operational workflows. The use cases span the full attack lifecycle:

  1. Reconnaissance: Automating open-source intelligence (OSINT) gathering to profile targets, identify key personnel, and map technical infrastructure.
  2. Social Engineering: Crafting highly convincing, context-aware phishing emails and social media lures in fluent, idiomatic language, free of the telltale errors users are trained to spot.
  3. Malware and Tool Development: Generating code snippets, creating polymorphic malware, and developing custom tools for specific attack phases.
  4. Vulnerability Research: Using AI to analyze code for vulnerabilities and generate proof-of-concept exploit code.
  5. Post-Compromise Activity: Assisting with lateral movement by identifying misconfigurations, generating commands, and developing scripts for data exfiltration.

The report specifically calls out UNC2970 (Lazarus Group), a North Korean APT, as using LLMs for target reconnaissance, demonstrating that top-tier adversaries are already applying these techniques in real-world operations.


Technical Analysis

Threat actors are interacting with LLMs through various means, including public web interfaces and APIs. They often employ prompt engineering techniques to bypass the safety filters built into these models. For example, attackers might frame a malicious request within a fabricated, benign scenario, such as asking the AI to act as an expert cybersecurity analyst and generate a vulnerability testing plan for a fictional company. This allows them to extract sensitive information and code that would otherwise be blocked.

Key TTPs enhanced by AI include:

  • Initial Access: T1566 - Phishing: AI generates highly personalized and grammatically perfect phishing emails, making them harder to detect.
  • Reconnaissance: T1592 - Gather Victim Host Information: LLMs can rapidly parse vast amounts of public data to build a detailed picture of a target's network and software stack.
  • Resource Development: T1587.001 - Develop Capabilities: Malware: Actors use AI to write or refine code for custom malware, droppers, and C2 communication modules.

The primary advantage AI provides to these actors is not the creation of entirely new capabilities, but the dramatic increase in the speed, scale, and stealth of existing TTPs.

Impact Assessment

The weaponization of AI by state-sponsored actors represents a paradigm shift in the threat landscape. Organizations can expect to face a higher volume of more sophisticated and harder-to-detect attacks. The business impact includes:

  • Increased Phishing Success: More employees are likely to fall for AI-crafted phishing lures, leading to more initial compromises.
  • Faster Breach Timelines: Attackers can move from initial access to data exfiltration more quickly, reducing the time for defenders to detect and respond.
  • Evasive Malware: AI can help generate polymorphic or custom malware that evades signature-based detection tools.
  • Overwhelmed Security Teams: The increased scale of attacks could overwhelm security operations centers (SOCs) with alerts.

Detection & Response

Detecting AI-augmented threats requires a shift towards behavioral analysis, as traditional signatures will be less effective.

  • Monitor API Usage: Watch network traffic for unusual or high-volume API calls to public LLM services (e.g., generativelanguage.googleapis.com) originating from sensitive network segments or unusual user accounts.
  • User and Entity Behavior Analytics (UEBA): Implement UEBA to baseline normal user activity and detect anomalies. An employee who suddenly starts running complex scripts or querying unusual data after interacting with an AI service could be a red flag.
  • Enhanced Email Security: Use email security gateways with advanced sandboxing and behavioral analysis capabilities to detect sophisticated phishing attempts that traditional filters might miss.
  • Endpoint Detection and Response (EDR): Focus on detecting post-compromise TTPs like lateral movement, credential dumping, and suspicious script execution, regardless of the initial access vector.
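As an illustration of the first bullet, the sketch below flags proxy-log entries where hosts in a sensitive segment reach known public LLM API endpoints. The domain list, log schema, and the `10.20.` "sensitive" prefix are assumptions for the example, not values from the report:

```python
# Hypothetical denylist of public LLM API hostnames to watch.
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",
    "api.openai.com",
    "api.anthropic.com",
}

# Hypothetical prefix marking a sensitive segment (e.g., a server VLAN).
SENSITIVE_PREFIXES = ("10.20.",)

def flag_llm_traffic(proxy_log_rows):
    """Yield (src_ip, dest_host) pairs where a sensitive host hit an LLM API."""
    for row in proxy_log_rows:
        src, dest = row["src_ip"], row["dest_host"]
        if dest in LLM_API_DOMAINS and src.startswith(SENSITIVE_PREFIXES):
            yield src, dest

rows = [
    {"src_ip": "10.20.1.5", "dest_host": "generativelanguage.googleapis.com"},
    {"src_ip": "10.30.2.9", "dest_host": "generativelanguage.googleapis.com"},
    {"src_ip": "10.20.1.5", "dest_host": "example.com"},
]
hits = list(flag_llm_traffic(rows))
print(hits)  # only the sensitive-segment call to an LLM API is flagged
```

In practice the same membership check would run over exported proxy or DNS logs, with the domain set maintained as threat intelligence rather than hard-coded.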

Mitigation

  1. User Training: Educate users about the existence of AI-powered phishing and social engineering attacks. Emphasize skepticism towards any unsolicited communication, even if it appears well-written and legitimate.
  2. Restrict Access to AI Services: For high-security environments, consider restricting or monitoring access to public generative AI services from corporate networks, especially for users with privileged access.
  3. Assume Breach Mentality: Given the increased sophistication of threats, adopt an assume-breach mindset. Focus on rapid detection and response capabilities rather than relying solely on prevention.
  4. Zero Trust Architecture: Implement a Zero Trust architecture to limit an attacker's ability to move laterally after an initial compromise. Enforce strict access controls and micro-segmentation.
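Mitigation 2 can be approximated in an egress proxy as a simple policy check: block privileged accounts outright, and log (rather than block) everyone else. This is a minimal sketch; the role names and domain denylist are illustrative assumptions, not part of the report:

```python
# Hypothetical denylist of public AI service domains.
BLOCKED_FOR_PRIVILEGED = {
    "generativelanguage.googleapis.com",
    "api.openai.com",
}

# Hypothetical roles considered privileged in this example.
PRIVILEGED_ROLES = {"admin", "developer-prod"}

def allow_ai_request(user_role, dest_host, audit_log):
    """Deny privileged accounts access to AI services; log all other access."""
    if dest_host in BLOCKED_FOR_PRIVILEGED:
        if user_role in PRIVILEGED_ROLES:
            return False  # hard block for privileged users
        audit_log.append((user_role, dest_host))  # monitored, not blocked
    return True

audit = []
print(allow_ai_request("admin", "api.openai.com", audit))    # blocked
print(allow_ai_request("analyst", "api.openai.com", audit))  # allowed, logged
print(audit)
```

The same allow/deny-plus-audit pattern maps directly onto proxy ACLs or a CASB policy in a production deployment.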

Timeline of Events

February 14, 2026: This article was published.

MITRE ATT&CK Mitigations

  • Train users to identify and report sophisticated, AI-generated phishing attempts.
  • Filter or monitor outbound traffic to known public AI/LLM service APIs to detect potential misuse.
  • Use EDR and UEBA tools to detect anomalous behaviors indicative of post-compromise activity, regardless of how the initial access was achieved.

D3FEND Defensive Countermeasures

To counter the threat of state actors using LLMs for malicious purposes, organizations must enhance their network traffic analysis capabilities. Specifically, security teams should establish a baseline of normal outbound traffic to known generative AI platforms like Google's Gemini API endpoints. Implement monitoring and alerting for anomalous connections to these services. Key indicators of malicious activity include: API calls originating from servers or non-developer workstations, a sudden spike in the volume of data sent to these endpoints, or connections occurring outside of normal business hours. Using a combination of NetFlow analysis, proxy logs, and deep packet inspection (where feasible) can help identify these suspicious patterns. By focusing on the network behavior associated with AI misuse, defenders can create an effective detection layer that is agnostic to the specific malicious code or phishing lure used by the attacker.
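The volume-spike indicator described above reduces to a baseline-and-threshold check: compute the mean and standard deviation of a host's historical daily bytes sent to AI endpoints, then alert when a new day's volume exceeds the baseline by several standard deviations. A minimal sketch; the byte counts and 3-sigma threshold are illustrative:

```python
import statistics

def baseline_stats(daily_bytes):
    """Mean and sample stdev of historical daily bytes to AI endpoints."""
    return statistics.mean(daily_bytes), statistics.stdev(daily_bytes)

def is_anomalous(today_bytes, mean, stdev, threshold=3.0):
    """Flag if today's volume exceeds the baseline by `threshold` stdevs."""
    if stdev == 0:
        return today_bytes > mean
    return (today_bytes - mean) / stdev > threshold

# Illustrative week of per-day byte counts for one host.
history = [1200, 900, 1100, 1000, 950, 1050, 980]
mean, stdev = baseline_stats(history)
print(is_anomalous(250_000, mean, stdev))  # sudden spike -> alert
print(is_anomalous(1_020, mean, stdev))    # normal day -> no alert
```

A real deployment would compute these statistics per host from NetFlow or proxy-log aggregates and tune the threshold against observed false-positive rates.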

Since AI-augmented attacks often begin with successful social engineering, detecting anomalous user behavior post-compromise is critical. Deploy a User Behavior Analysis (UBA) solution to baseline the typical activities of employees, especially developers, system administrators, and executives. The system should monitor for deviations from this baseline, such as an account suddenly accessing sensitive repositories it has never touched before, executing unusual scripting commands (e.g., reconnaissance scripts), or attempting to access internal resources in a pattern inconsistent with their job function. In the context of this threat, a UBA system could flag a user who, shortly after a logged interaction with a public AI tool, begins performing activities indicative of internal reconnaissance. This behavioral approach helps to detect compromised accounts being used by attackers, even when the attackers are using legitimate credentials.
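The "repository never touched before" signal described above can be sketched as a set-membership check against a per-user baseline. The event tuples and repository names are illustrative:

```python
from collections import defaultdict

def build_baseline(events):
    """Map each user to the set of repositories seen in the baseline window."""
    baseline = defaultdict(set)
    for user, repo in events:
        baseline[user].add(repo)
    return baseline

def novel_access(baseline, user, repo):
    """True if `user` touches a repository absent from their baseline."""
    return repo not in baseline.get(user, set())

# Illustrative baseline window of (user, repo) access events.
history = [("alice", "payments-api"), ("alice", "billing-ui"), ("bob", "infra")]
baseline = build_baseline(history)
print(novel_access(baseline, "alice", "secrets-vault"))  # never seen -> flag
print(novel_access(baseline, "alice", "payments-api"))   # normal -> no flag
```

Commercial UBA platforms generalize this idea with peer-group comparisons and time-decayed baselines, but the core anomaly test is the same membership check.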

Sources & References

Cyber News Roundup – February 13th 2026
Integrity360 (integrity360.com) February 13, 2026

Article Author

Jason Gomes


• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

Generative AI · LLM · APT · Cyber Warfare · Google · Gemini · Threat Intelligence
