Chinese Hackers Used ChatGPT for Influence Operations, OpenAI Confirms

OpenAI Confirms Chinese-Linked Actors Abused ChatGPT for Cyber Influence and Disinformation Campaigns

MEDIUM
February 27, 2026
4m read
Threat Actor · Phishing · Other

Related Entities

Threat Actors

Chinese-linked threat actors

Products & Tech

ChatGPT

Full Report

Executive Summary

OpenAI has confirmed in a report that threat actors associated with the Chinese government have been using its ChatGPT large language model (LLM) to augment their cyber and influence operations. The report clarifies that the AI was not used for sophisticated technical tasks like malware creation or exploit development. Instead, its primary use was to improve the quality, efficiency, and scale of their social engineering and propaganda efforts. The actors used the LLM for content generation, language translation, and operational planning. In response, OpenAI has terminated the accounts linked to this activity and is collaborating with industry partners to combat such misuse.


Threat Overview

  • Threat Actor: Unspecified threat actors linked to the government of China.
  • Tool: OpenAI ChatGPT, a powerful generative AI model.
  • Objective: The goal was not direct system compromise via AI, but to use AI as a force multiplier for influence operations and the initial stages of cyberattacks.

Technical Analysis (TTPs)

The misuse of ChatGPT focused on the informational and psychological aspects of cyber operations:

  1. Content Generation for Propaganda: The actors used the LLM to generate articles, social media posts, and comments in multiple languages to support disinformation campaigns. The AI's ability to produce fluent, contextually appropriate text makes the resulting propaganda more convincing and harder to detect than poorly translated content.
  2. Spear-Phishing Email Crafting: ChatGPT was used to draft highly personalized and grammatically correct spear-phishing emails. This increases the likelihood of a victim clicking a malicious link or opening a malicious attachment, which is the first step in many network intrusions (T1566 - Phishing).
  3. Operational Planning: The report notes that the actors used the AI to brainstorm and draft operational plans for social media manipulation campaigns, essentially using it as a strategic assistant.
  4. Specific Campaigns Observed:
    • Operation Date Bait: Romance scams using AI-generated personas and messages.
    • Operation False Witness: Fake legal fee fraud schemes.
    • Operation Silver Lining Playbook: Targeted outreach to U.S. officials with persuasive, AI-generated content.

This represents a shift in TTPs, where adversaries are outsourcing the creative and linguistic labor of their operations to generative AI.


Impact Assessment

The use of LLMs by state-sponsored actors has several significant implications:

  • Increased Scale and Speed: AI allows threat actors to generate vast amounts of content for disinformation or phishing campaigns in a fraction of the time it would take human operators.
  • Improved Quality: LLMs can eliminate the grammatical errors and awkward phrasing that often serve as red flags in phishing emails and propaganda, making them more effective.
  • Lowered Barrier to Entry: Less-skilled operators can now produce high-quality malicious content, effectively democratizing advanced social engineering.
  • Hyper-Personalization: AI can quickly tailor phishing messages to individual targets based on their publicly available information, scaling spear-phishing to a degree that was previously impractical for human operators.

Detection & Response

  • OpenAI's Response: OpenAI has banned the accounts associated with the malicious activity, enhanced its internal abuse detection models, and is sharing indicators of compromise (IOCs) with law enforcement and industry partners.
  • Detection for Defenders:
    • Since the content is high-quality, traditional detection based on poor grammar is no longer reliable.
    • Defenders must focus more on other indicators: the origin of the email, the reputation of links and attachments, and the unusual nature of the request.
    • AI-powered email security gateways are being developed to detect AI-generated phishing content, looking for subtle patterns in tone, style, and structure.
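Because grammar-based heuristics no longer work against AI-generated phishing, defenders can weight sender-authentication signals instead, as the points above suggest. The sketch below is a minimal, illustrative example of reading an email's `Authentication-Results` header and flagging messages that fail SPF, DKIM, or DMARC checks; the sample message and the simple substring matching are assumptions for demonstration, not a production parser.

```python
# Minimal sketch: flag emails whose authentication checks did not pass.
# Substring matching on Authentication-Results is a simplification;
# real gateways parse the header per RFC 8601.
from email.parser import Parser

def auth_signals(raw_message: str) -> dict:
    """Extract pass/fail signals from the Authentication-Results header."""
    msg = Parser().parsestr(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    return {
        "spf_pass": "spf=pass" in results,
        "dkim_pass": "dkim=pass" in results,
        "dmarc_pass": "dmarc=pass" in results,
    }

def is_suspicious(raw_message: str) -> bool:
    """Treat a message as suspicious when any check is missing or failed."""
    return not all(auth_signals(raw_message).values())

# Hypothetical message for illustration only.
SAMPLE = """\
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=attacker.example; dkim=none; dmarc=fail
From: "IT Support" <support@attacker.example>
Subject: Urgent: verify your account

Please review the attached invoice.
"""
```

Even a fluent, well-written lure cannot forge a passing DMARC result for a domain it does not control, which is why these signals remain useful after grammar cues disappear.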

Mitigation Recommendations

Combating AI-enhanced influence operations requires a focus on human resilience and technical controls.

  1. Enhanced User Training: Security awareness training is more critical than ever. Users must be taught to be skeptical of unsolicited communications, regardless of how well-written they are. Training should focus on verifying the sender and the request through a separate, trusted communication channel (e.g., calling a known phone number). This is the core of M1017 - User Training.
  2. Email Security Gateways: Use advanced email security solutions that employ sandboxing for attachments and link protection to analyze payloads before they reach the user.
  3. Digital Literacy: Broader societal initiatives to improve digital literacy can help individuals critically evaluate information they encounter on social media and recognize the hallmarks of propaganda.
  4. Platform Responsibility: Tech companies like OpenAI have a responsibility to continue investing in robust safety systems to detect and prevent the malicious use of their models, as demonstrated by their response here.
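The "link protection" idea in recommendation 2 can be sketched simply: extract URLs from a message body and flag any whose domain falls outside a trusted set. The allowlist and regex below are illustrative assumptions; real gateways add reputation feeds, sandbox detonation, and rewriting of links.

```python
# Illustrative sketch of link protection: flag URLs whose host is not
# on a trusted-domain allowlist. The allowlist is a hypothetical example.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "corp.example.com"}  # assumed allowlist

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def flag_untrusted_links(body: str) -> list:
    """Return URLs whose host is neither a trusted domain nor a subdomain of one."""
    flagged = []
    for url in URL_RE.findall(body):
        host = urlparse(url).hostname or ""
        trusted = host in TRUSTED_DOMAINS or any(
            host.endswith("." + d) for d in TRUSTED_DOMAINS
        )
        if not trusted:
            flagged.append(url)
    return flagged
```

A check like this catches lookalike domains (e.g. `examp1e-login.net`) that a hurried reader, faced with an otherwise flawless AI-written email, is unlikely to notice.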

Timeline of Events

1. February 27, 2026 — OpenAI confirms in a report that its ChatGPT model has been used by Chinese-linked actors for influence operations.
2. February 27, 2026 — This article was published.

MITRE ATT&CK Mitigations

  • M1017 - User Training: The primary defense against AI-enhanced phishing and disinformation is a well-educated and skeptical user base.
  • Platform Safeguards: Platforms like OpenAI must continue to build safety and policy enforcement into their models to prevent malicious use.

Sources & References

Top 5 Cybersecurity News Stories February 27, 2026
DIESEC (diesec.com) February 27, 2026
Daily Cybersecurity Roundup, February 27, 2026
Cyware (cyware.com) February 27, 2026

Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

Generative AI · ChatGPT · OpenAI · Influence Operations · Disinformation · China · Phishing
