AI Supply Chain Attack: Hundreds of Malicious 'Skills' on ClawHub Marketplace Steal Credentials

Supply Chain Attack Hits OpenClaw AI Ecosystem via Malicious 'Skills' on ClawHub

HIGH
February 9, 2026
5m read
Supply Chain AttackMalwareCloud Security

Related Entities

Organizations

KOI SecuritySlowMist

Products & Tech

OpenClawClawHubVirusTotal

Full Report

Executive Summary

A novel software supply chain attack is exploiting the open-source ecosystem of the OpenClaw AI assistant. Threat actors have flooded the ClawHub marketplace with hundreds of malicious "skills," which are community-contributed plugins that extend the AI's capabilities. These skills, discovered by researchers at KOI Security and SlowMist, appear legitimate but contain malicious code designed to steal credentials, cryptocurrency wallets, and other sensitive information. The attack works by tricking users into downloading and executing malware, such as the Atomic Stealer infostealer, as part of the skill's installation prerequisites. The incident highlights a new frontier for supply chain attacks within the burgeoning AI agent ecosystem, exploiting user trust in open platforms. In response, OpenClaw has partnered with VirusTotal to implement automated security scanning for all marketplace submissions.

Threat Overview

  • Attack Type: Software Supply Chain Attack.
  • Targeted Platform: The ClawHub marketplace for the OpenClaw AI assistant (formerly Clawdbot/Moltbot).
  • Vector: Malicious "skills" published on the open marketplace. Attackers leverage social engineering within the skill's documentation.
  • Payloads:
    • Atomic Stealer: An information-stealing malware targeting macOS.
    • Backdoors and Keyloggers: Custom malicious code designed to exfiltrate credentials and capture keystrokes on Windows.
  • Scale: Researchers identified at least 472 malicious skills out of approximately 2,857 on the marketplace.
  • Modus Operandi: The attack is not a sophisticated code injection. It relies on simple deception: the Prerequisites section of a malicious skill's documentation instructs the user to download and run a malicious file from an external source like GitHub.

Technical Analysis

The attack chain leverages the user's trust in the AI assistant's ecosystem and their desire to add new functionality.

  1. Publication of Malicious Skill: A threat actor creates a skill with an appealing name and description, such as solana-wallet-tracker or youtube-summarize-pro, and publishes it on the open ClawHub marketplace. (T1195.002 - Compromise Software Supply Chain: Compromise Software Distribution)
  2. Social Engineering via Documentation: The skill's installation instructions or README file contains a step that instructs the user to download a supposed dependency. This is presented as a normal part of the setup process.
  3. User-Assisted Execution: The user, following the instructions, downloads the malicious file (e.g., from a GitHub repository controlled by the attacker) and executes it on their system. This manual execution by the user bypasses many automated security controls. (T1204.002 - Malicious File)
  4. Payload Deployment: The executed file installs the malware, such as Atomic Stealer. This infostealer is designed to scour the victim's machine for sensitive data, including browser passwords, system credentials, and cryptocurrency wallet files.
  5. Data Exfiltration: The stolen information is then exfiltrated to an attacker-controlled C2 server. (T1041 - Exfiltration Over C2 Channel)

This attack vector is particularly insidious because it exploits the open and collaborative nature of modern AI platforms. The lack of a mandatory security review process for published skills created a significant vulnerability that threat actors were quick to exploit.

Impact Assessment

  • Credential and Crypto Theft: Users who install the malicious skills are at high risk of having their credentials for various online services, as well as their cryptocurrency assets, stolen. This can lead to direct and significant financial loss.
  • System Compromise: The installation of backdoors and keyloggers provides attackers with persistent access to the victim's machine, which can be used for further attacks, espionage, or inclusion in a botnet.
  • Erosion of Trust in AI Ecosystems: This incident damages user trust in the OpenClaw platform and serves as a warning for the entire AI assistant ecosystem. Users will become more hesitant to install third-party skills, potentially stifling innovation and community growth.

Detection & Response

  • Skill Vetting: Before installing any new skill, users must perform due diligence. Scrutinize the skill's publisher, check for reviews or community feedback, and be extremely wary of any skill that requires downloading and running executables from external, unofficial sources.
  • Process Monitoring: Monitor for suspicious processes being spawned by the AI assistant or related tools. Look for unexpected network connections to unknown domains.
  • IOC Scanning: Use security tools to scan for indicators of compromise associated with Atomic Stealer and other payloads distributed in this campaign.
  • Marketplace Scanning: As OpenClaw is now doing, platform owners must implement automated security scanning (e.g., using VirusTotal APIs) for all submissions to their marketplaces to detect malicious code before it becomes available to users.

Mitigation

  • User Education: Users of AI assistants must be educated about the risks of installing third-party skills. The number one rule should be to never download and execute files from untrusted sources as part of a skill's installation process.
  • Sandboxing: Run AI assistants and their skills in a sandboxed or containerized environment to limit their access to the underlying operating system and user data. This can prevent an information stealer from accessing sensitive files outside of its sandbox.
  • Secure Marketplace Policies: AI platforms must enforce strict security policies for their marketplaces. This should include mandatory static and dynamic analysis of all submitted skills, publisher identity verification, and a clear process for reporting and removing malicious content.
  • Principle of Least Privilege: Configure AI assistants to run with the minimum necessary permissions. They should not have broad access to the user's file system or credentials by default.

Timeline of Events

1
February 9, 2026
This article was published

MITRE ATT&CK Mitigations

Implementing application allow-listing would prevent the unauthorized malware downloaded by the user from running.

Mapped D3FEND Techniques:

Running the AI assistant and its skills in a sandbox would limit its ability to access and steal sensitive files from the host system.

Mapped D3FEND Techniques:

Educating users not to download and execute arbitrary files from the internet, even if instructed by a seemingly legitimate application, is a crucial defense.

D3FEND Defensive Countermeasures

For a platform like ClawHub, implementing automated dynamic analysis (sandboxing) for every submitted skill is a critical security gate. Before a skill is made public, it should be automatically installed and run in an isolated, instrumented environment. The sandbox would monitor the skill's behavior, such as file system access, network connections, and process creation. For the malicious skills in this campaign, this process would immediately flag suspicious activity. For example, a 'youtube-summarize-pro' skill should not be attempting to access keychain files, read browser cookies, or make outbound connections to unknown IP addresses. By analyzing these behaviors, the platform can automatically reject malicious skills before they ever pose a risk to users. This D3FEND technique shifts the security burden from the end-user to the platform provider, creating a much safer ecosystem.

End-users of AI assistants like OpenClaw must adopt a defensive posture through application hardening. The core principle is to run the AI assistant with the least privilege necessary for it to function. This can be achieved by running the application in a container (e.g., Docker) or a dedicated, non-privileged user account with restricted file system access. By default, the AI assistant should be denied access to sensitive user directories such as ~/Documents, ~/Downloads, and especially cryptocurrency wallet locations or browser profile folders. When a skill requires access to a specific file or folder, the user should have to explicitly grant that permission for that session only. This configuration would prevent an information stealer like Atomic Stealer, delivered via a malicious skill, from being able to find and exfiltrate the valuable data it is designed to steal.

Sources & References

ClawHub hosts supply chain attacks through AI agent skills
Cryptopolitan (cryptopolitan.com) February 9, 2026
Cybersecurity News
WIU Cybersecurity (wiu.edu) February 9, 2026
OpenClaw AI Agent Skills Abused by Threat Actors to Distribute Malware
GBHackers on Security (gbhackers.com) February 8, 2026

Article Author

Jason Gomes

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & AnalysisSecurity Orchestration (SOAR/XSOAR)Incident Response & Digital ForensicsSecurity Operations Center (SOC)SIEM & Security AnalyticsCyber Fusion & Threat SharingSecurity Automation & IntegrationManaged Detection & Response (MDR)

Tags

Supply Chain AttackAIArtificial IntelligenceOpenClawClawHubAtomic StealerMalware

📢 Share This Article

Help others stay informed about cybersecurity threats

Continue Reading