On March 9, 2026, the Foundation for Defense of Democracies (FDD) submitted a formal public comment to the U.S. National Institute of Standards and Technology (NIST), raising alarms about the significant and novel security risks associated with agentic artificial intelligence (AI). The FDD's submission, in response to a NIST Request for Information (RFI), argues that deploying autonomous AI agents within the federal government without robust security frameworks could create severe vulnerabilities. The document emphasizes that adversaries like China, Russia, and Iran are already leveraging similar techniques against conventional AI, and the move to agentic systems will amplify these threats.
The FDD's comment was directed at NIST's Center for AI Standards and Innovation, which is seeking input to develop a future AI Agent Security Framework. The core of the FDD's argument is that existing cybersecurity frameworks, such as the NIST Cybersecurity Framework, are insufficient to address the unique attack surfaces presented by autonomous AI agents. These agents can act on their own, interact with other systems, and make decisions without direct human oversight, creating new pathways for compromise.
The submission detailed several potent attack vectors specific to agentic AI:
Indirect Prompt Injection: This is a primary concern. Unlike direct attacks, an adversary doesn't need to access a government system. Instead, they can embed malicious instructions into external data that an AI agent might process (e.g., an email, a web page, a document). When the agent ingests this data, the hidden prompt can force it to take unauthorized actions, such as exfiltrating data or executing commands, effectively hijacking the agent's logic. This aligns with emerging threat models beyond traditional MITRE ATT&CK techniques.
Data Poisoning: Adversaries could corrupt the training data of an AI model to introduce subtle backdoors or biases that can be triggered later.
Multi-Agent Interaction Risk: As AI agents begin to interact with each other, the complexity of securing these interactions grows exponentially. It becomes difficult to predict emergent behaviors and to perform attribution if a compromise occurs within a chain of agent-to-agent communications.
The FDD warns that a failure to address these risks could have severe consequences. An adversary could use a prompt injection attack to turn a government AI agent into an insider threat, leaking sensitive information or manipulating government processes. The autonomous nature of these agents means that a single, successful attack could be scaled rapidly, causing widespread damage. This could undermine public trust in government AI initiatives and provide a strategic advantage to U.S. adversaries.
The FDD provided several key recommendations for NIST:
This public comment serves as a critical input for U.S. policymakers and standards bodies as they grapple with how to safely harness the power of advanced AI technologies.
Implement strict input validation and output encoding for AI agents to mitigate prompt injection attacks.
Mapped D3FEND Techniques:
Run AI agents in sandboxed environments with the principle of least privilege, limiting their access to only the data and tools necessary for their function.
Mapped D3FEND Techniques:

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.
Help others stay informed about cybersecurity threats