Google Patches Critical Prompt Injection Flaw in Antigravity IDE

Severity: HIGH
April 22, 2026
4 min read
Vulnerability | Cloud Security | Threat Intelligence

Related Entities

Organizations

Google, Pillar Security

Products & Tech

Antigravity IDE

Full Report

Executive Summary

Google has patched a significant vulnerability in Antigravity, its agentic Integrated Development Environment (IDE). The flaw, discovered by researcher Dan Lisichkin of Pillar Security, allowed arbitrary code execution via prompt injection: a crafted prompt could bypass the IDE's "Strict Mode" sandbox and run attacker-chosen code on the underlying system. The vulnerability stemmed from insufficient input sanitization in a native file-searching tool, which could be abused to execute a staged malicious file. This incident underscores the complex security risks associated with AI-powered development tools and the novel attack vectors they introduce.

Vulnerability Details

The vulnerability was a chain of two weaknesses within the Antigravity IDE:

  1. Permitted File Creation: The IDE allows AI agents to create files within the workspace.
  2. Input Sanitization Failure: The native file-searching tool, find_by_name, did not properly sanitize its input parameters before passing them to the underlying fd command-line utility.

The attack chain works as follows:

  1. An attacker convinces a user to input a malicious prompt.
  2. Following the prompt's instructions, the AI agent creates a file containing a malicious script (e.g., malicious_script.sh) in the workspace.
  3. The prompt then instructs the agent to search for a file using the find_by_name tool, injecting the -X (or --exec-batch) flag into the search pattern.
  4. fd is invoked with the injected flag, which forces it to execute malicious_script.sh against the search results.

This entire sequence bypasses the IDE's "Strict Mode," which is designed to prevent network access and out-of-workspace file writes.
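The flag-injection step can be illustrated with a minimal sketch. The wrapper functions below are hypothetical, not Antigravity's actual code; they only show how placing untrusted input where fd still parses options turns a "pattern" into an exec flag, and how an end-of-options separator defeats that:

```python
def build_fd_argv_vulnerable(pattern: str, path: str) -> list[str]:
    # Vulnerable: the attacker-controlled pattern lands where fd still
    # parses options, so a value like "-X" is read as --exec-batch.
    return ["fd", pattern, path]

def build_fd_argv_hardened(pattern: str, path: str) -> list[str]:
    # Hardened: "--" ends option parsing in most argument parsers
    # (including clap, which fd is built on), so everything after it
    # is treated as a positional argument, never a flag.
    return ["fd", "--", pattern, path]

# The injected "pattern" from step 3 of the attack chain:
injected = "-X"
print(build_fd_argv_vulnerable(injected, "."))  # fd sees an exec flag
print(build_fd_argv_hardened(injected, "."))    # fd sees a literal pattern
```

Note that neither argument vector passes through a shell: this is argument injection, which needs no shell metacharacters at all, only a value the downstream tool parses as a flag.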

Affected Systems

The vulnerability affected versions of Google's Antigravity IDE prior to the patch. This tool is used by developers for AI-assisted coding, making it a potentially high-value target.

Exploitation Status

The vulnerability was discovered by a security researcher and responsibly disclosed to Google, which then patched it. There is no indication that it was exploited in the wild.

Impact Assessment

A successful exploit would grant an attacker arbitrary code execution within the context of the IDE's environment. This could lead to:

  • Theft of Intellectual Property: The attacker could exfiltrate source code, API keys, and other sensitive data from the developer's workspace.
  • Supply Chain Attack: The attacker could use their access to inject malicious code into the software being developed, leading to a downstream supply chain attack.
  • Pivoting to other Systems: Depending on the environment, the attacker might be able to pivot from the compromised IDE to other systems on the developer's machine or network.

This vulnerability is a prime example of how prompt injection is evolving from a novelty to a serious security threat, capable of bridging the gap between the AI model and the underlying system to achieve code execution.

Cyber Observables — Hunting Hints

Detecting this specific attack would be difficult without direct access to the prompts, but similar attacks could be hunted by looking for:

  • Command-Line Pattern: fd ... -X ... or fd ... --exec-batch ... Monitor command-line logs for use of the exec-batch flag in the fd tool, especially when combined with unusual search patterns.
  • Process Activity: an IDE or code editor process spawning unexpected child processes, e.g., antigravity_ide spawning sh or bash to execute a script.
  • File Monitoring: creation of executable script files (.sh, .py) followed by a file search operation; this sequence of events could indicate a staged attack.
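The command-line observable can be approximated with a simple log filter. The regex below is a hunting sketch over raw command lines, not a production detection; it will miss obfuscated invocations and may flag legitimate fd usage:

```python
import re

# fd invocations carrying an exec-style flag (-x/--exec, -X/--exec-batch).
FD_EXEC = re.compile(r"\bfd\b.*(?:\s-[xX]\b|--exec(?:-batch)?\b)")

def is_suspicious_cmdline(cmdline: str) -> bool:
    """Weak-but-useful hunting signal: fd asked to execute a command."""
    return bool(FD_EXEC.search(cmdline))

for line in [
    "fd -X sh malicious_script.sh pattern .",  # injected exec flag
    "fd main.py src/",                         # ordinary file search
]:
    print(line, "->", is_suspicious_cmdline(line))
```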

Detection Methods

  • Input Sanitization and Validation: The primary defense is on the application side. All user-supplied input, including prompts that are passed to underlying tools, must be strictly sanitized to prevent command injection.
  • Behavioral Monitoring: EDR tools on developer workstations should monitor for IDE processes spawning unexpected shells or executing files. D3FEND's D3-PA - Process Analysis can help model normal behavior and detect deviations.
  • Least Privilege for AI Agents: The AI agents themselves should run in a highly constrained environment with the absolute minimum privileges necessary, a concept aligned with M1048 - Application Isolation and Sandboxing.
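The behavioral-monitoring idea can be prototyped over process telemetry exported as (parent, child) name pairs. The process names below are illustrative assumptions, not documented Antigravity binaries:

```python
SHELLS = {"sh", "bash", "zsh", "dash"}
IDE_PROCESSES = {"antigravity", "antigravity_ide"}  # assumed names

def suspicious_spawns(pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (parent, child) pairs where an IDE process spawned a shell,
    the core signal behind the behavioral-monitoring recommendation."""
    return [
        (parent, child)
        for parent, child in pairs
        if parent.lower() in IDE_PROCESSES and child.lower() in SHELLS
    ]

telemetry = [
    ("antigravity_ide", "bash"),  # IDE spawning a shell: flag it
    ("bash", "fd"),               # a shell running a search tool: normal
]
print(suspicious_spawns(telemetry))
```

In a real deployment the same predicate would run inside the EDR's query language rather than in Python, but the parent/child rule is the same.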

Remediation Steps

  1. Update Immediately: Developers using Antigravity IDE should ensure they have updated to the latest version that includes the patch for this vulnerability.
  2. Audit Prompts: Be cautious of prompts from untrusted sources. Treat prompts with the same suspicion as you would any other user-supplied input.
  3. Secure AI Development Practices: This incident should serve as a lesson for all organizations building or using agentic AI systems. Robust input sanitization, strict sandboxing, and least-privilege execution are not optional; they are essential security requirements.

Timeline of Events

April 22, 2026: This article was published.

MITRE ATT&CK Mitigations

  • Ensure that AI agents and the tools they call run in a strictly sandboxed environment with no access to the underlying host system.
  • Apply the patch from Google to fix the input sanitization flaw.
  • Developers of AI tools must implement robust input sanitization for any user-provided content that is passed to system commands.

D3FEND Defensive Countermeasures

The root cause of the Antigravity IDE vulnerability was the failure to sanitize user-controlled input (the prompt) before passing it to a backend command-line tool (fd). To prevent this entire class of vulnerability, developers of AI agentic systems must treat all output from the Large Language Model (LLM) as untrusted user input. Before the IDE's code passes the 'Pattern' parameter to the fd tool, it must be strictly sanitized. This involves stripping any characters that have special meaning to the shell, such as flags (-X), pipes (|), and command separators (;). By implementing a robust sanitization layer between the AI's output and any system call, Google could have prevented the malicious -X flag from ever reaching the fd process, thus breaking the exploit chain at its source.
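A sanitization layer of the kind described above could look like the following sketch. This is an assumed validator for illustration, not Google's actual fix; a production version would likely use an allowlist of pattern characters rather than a denylist:

```python
SHELL_METACHARS = set("|;&$`<>\n")

def sanitize_search_pattern(pattern: str) -> str:
    """Validate an LLM-supplied search pattern before it reaches fd.
    Rejects flag injection (a leading '-') and shell metacharacters."""
    if not pattern:
        raise ValueError("empty pattern")
    if pattern.startswith("-"):
        raise ValueError("pattern may not start with '-': flag injection")
    if SHELL_METACHARS & set(pattern):
        raise ValueError("pattern contains shell metacharacters")
    return pattern

print(sanitize_search_pattern("main.py"))  # passes through unchanged
```

Rejecting bad input outright, rather than stripping the offending characters, avoids the classic filter-evasion problem where stripped input reassembles into a new payload.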

For a strong defense-in-depth posture, the environment where the Antigravity IDE and its AI agents operate should be heavily sandboxed using system call filtering. Technologies like seccomp-bpf on Linux can be used to create a policy that defines exactly which system calls a process is allowed to make. For the Antigravity IDE, a policy could be created that explicitly denies the execve system call (used to execute programs) for any process spawned by the AI agent, except for a very narrow list of approved tools. This would mean even if an attacker successfully injects a command via a prompt, the underlying operating system would block the attempt to execute a malicious binary like malicious_script.sh, providing a fail-safe that prevents sandbox escape and code execution.
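As one illustrative form of such a policy, Docker's seccomp profile format can deny the exec family of syscalls for a containerized agent. This fragment is a sketch: blanket-denying execve would also block legitimate tool launches, so a real policy would scope the denial to the agent's worker processes and allowlist approved tools.

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["execve", "execveat"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```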


Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis | Security Orchestration (SOAR/XSOAR) | Incident Response & Digital Forensics | Security Operations Center (SOC) | SIEM & Security Analytics | Cyber Fusion & Threat Sharing | Security Automation & Integration | Managed Detection & Response (MDR)

Tags

AI Security | Prompt Injection | Google | Vulnerability | RCE | Sandbox Escape
