OpenAI has confirmed in a report that threat actors associated with the Chinese government have been using its ChatGPT large language model (LLM) to augment their cyber and influence operations. The report clarifies that the AI was not used for sophisticated technical tasks like malware creation or exploit development. Instead, its primary use was to improve the quality, efficiency, and scale of their social engineering and propaganda efforts. The actors used the LLM for content generation, language translation, and operational planning. In response, OpenAI has terminated the accounts linked to this activity and is collaborating with industry partners to combat such misuse.
The misuse of ChatGPT focused on the informational and psychological aspects of cyber operations, most notably phishing (T1566). This represents a shift in TTPs, in which adversaries outsource the creative and linguistic labor of their operations to generative AI.
The use of LLMs by state-sponsored actors has significant implications for defenders.
Combating AI-enhanced influence operations requires a focus on human resilience and technical controls.
M1017 - User Training. The primary defense against AI-enhanced phishing and disinformation is a well-educated and skeptical user base.
Platforms like OpenAI must continue to build safety and policy enforcement into their models to prevent malicious use.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.
Every tactic, technique, and sub-technique used in this threat has been identified and mapped to the MITRE ATT&CK framework for consistent, actionable threat language.
Observables and indicators of compromise (IOCs) have been extracted and cataloged. Risk has been assessed and correlated with known threat actors and historical campaigns.
Detection rules, incident response steps, and D3FEND-aligned mitigation strategies are included so your team can act on this intelligence immediately.
Structured threat data is packaged as a STIX 2.1 bundle and can be visualized as an interactive graph — relationships between actors, malware, techniques, and indicators.
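As a minimal sketch of what such a bundle looks like (the object names and identifiers below are hypothetical illustrations, not data from the OpenAI report or from this article's bundle), a STIX 2.1 bundle relating a threat actor to an ATT&CK technique can be assembled with nothing beyond the standard library:

```python
import json
import uuid
from datetime import datetime, timezone

def stix_id(obj_type: str) -> str:
    """STIX 2.1 identifiers take the form '<type>--<UUIDv4>'."""
    return f"{obj_type}--{uuid.uuid4()}"

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

# Hypothetical objects illustrating the bundle layout only.
actor = {
    "type": "threat-actor",
    "spec_version": "2.1",
    "id": stix_id("threat-actor"),
    "created": now,
    "modified": now,
    "name": "Example China-linked influence operator",
}

technique = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": stix_id("attack-pattern"),
    "created": now,
    "modified": now,
    "name": "Phishing",
    "external_references": [
        {"source_name": "mitre-attack", "external_id": "T1566"}
    ],
}

# A relationship object is what turns flat objects into a graph.
relationship = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": stix_id("relationship"),
    "created": now,
    "modified": now,
    "relationship_type": "uses",
    "source_ref": actor["id"],
    "target_ref": technique["id"],
}

# In STIX 2.1, the bundle itself carries no spec_version; the objects do.
bundle = {
    "type": "bundle",
    "id": stix_id("bundle"),
    "objects": [actor, technique, relationship],
}

print(json.dumps(bundle, indent=2))
```

The `relationship` objects are what make the interactive graph view possible: each edge in the visualization corresponds to one `source_ref`/`target_ref` pair.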
Sigma detection rules are derived from the threat techniques in this article and can be converted for deployment across any major SIEM or EDR platform.
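For readers unfamiliar with the format, a generic illustration follows (this is a sketch of the Sigma rule layout, not one of the rules shipped with this article): a detection for an Office client spawning a script interpreter, a common follow-on to phishing (T1566).

```yaml
title: Office Application Spawning Script Interpreter
status: experimental
description: Hypothetical example of the Sigma rule format; flags Outlook launching a script host, a common post-phishing execution pattern.
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    ParentImage|endswith: '\OUTLOOK.EXE'
    Image|endswith:
      - '\powershell.exe'
      - '\wscript.exe'
      - '\cscript.exe'
  condition: selection
falsepositives:
  - Legitimate automation launched from Outlook
level: high
```

Because the rule is backend-agnostic, converters such as pySigma translate the `logsource` and `detection` blocks into the query language of the target SIEM or EDR.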