FDD Warns NIST of "Agentic AI" Security Risks, Highlighting Prompt Injection and Multi-Agent Dangers

Foundation for Defense of Democracies Submits Public Comment to NIST on Securing Agentic AI Systems

INFORMATIONAL
March 10, 2026
5m read
Policy and ComplianceRegulatoryThreat Intelligence

MITRE ATT&CK Techniques

Full Report

Executive Summary

On March 9, 2026, the Foundation for Defense of Democracies (FDD) submitted a formal public comment to the U.S. National Institute of Standards and Technology (NIST), raising alarms about the significant and novel security risks associated with agentic artificial intelligence (AI). The FDD's submission, in response to a NIST Request for Information (RFI), argues that deploying autonomous AI agents within the federal government without robust security frameworks could create severe vulnerabilities. The document emphasizes that adversaries like China, Russia, and Iran are already leveraging similar techniques against conventional AI, and the move to agentic systems will amplify these threats.


Regulatory Details

The FDD's comment was directed at NIST's Center for AI Standards and Innovation, which is seeking input to develop a future AI Agent Security Framework. The core of the FDD's argument is that existing cybersecurity frameworks, such as the NIST Cybersecurity Framework, are insufficient to address the unique attack surfaces presented by autonomous AI agents. These agents can act on their own, interact with other systems, and make decisions without direct human oversight, creating new pathways for compromise.

New Attack Vectors Highlighted

The submission detailed several potent attack vectors specific to agentic AI:

  • Indirect Prompt Injection: This is a primary concern. Unlike direct attacks, an adversary doesn't need to access a government system. Instead, they can embed malicious instructions into external data that an AI agent might process (e.g., an email, a web page, a document). When the agent ingests this data, the hidden prompt can force it to take unauthorized actions, such as exfiltrating data or executing commands, effectively hijacking the agent's logic. This aligns with emerging threat models beyond traditional MITRE ATT&CK techniques.

  • Data Poisoning: Adversaries could corrupt the training data of an AI model to introduce subtle backdoors or biases that can be triggered later.

  • Multi-Agent Interaction Risk: As AI agents begin to interact with each other, the complexity of securing these interactions grows exponentially. It becomes difficult to predict emergent behaviors and to perform attribution if a compromise occurs within a chain of agent-to-agent communications.

Impact Assessment

The FDD warns that a failure to address these risks could have severe consequences. An adversary could use a prompt injection attack to turn a government AI agent into an insider threat, leaking sensitive information or manipulating government processes. The autonomous nature of these agents means that a single, successful attack could be scaled rapidly, causing widespread damage. This could undermine public trust in government AI initiatives and provide a strategic advantage to U.S. adversaries.

Compliance Guidance and Recommendations

The FDD provided several key recommendations for NIST:

  1. Update Core Standards: NIST should update its fundamental systems engineering and development standards to incorporate security considerations for the entire lifecycle of agentic AI, from design and training to deployment and decommissioning.
  2. Accelerate AI Security Initiatives: NIST must prioritize and accelerate its work on securing AI systems, with a specific focus on the unique challenges of agentic AI.
  3. Develop a New Framework: The creation of the AI Agent Security Framework is critical and must account for threats like prompt injection, model theft, and emergent behavior.
  4. Focus on Real-World Threats: The framework must be grounded in the understanding that nation-state actors are actively developing and deploying these attack techniques.

This public comment serves as a critical input for U.S. policymakers and standards bodies as they grapple with how to safely harness the power of advanced AI technologies.

Timeline of Events

1
March 9, 2026
The Foundation for Defense of Democracies (FDD) submitted its public comment to NIST. The public comment period for the RFI also closed on this day.
2
March 10, 2026
This article was published

MITRE ATT&CK Mitigations

Implement strict input validation and output encoding for AI agents to mitigate prompt injection attacks.

Mapped D3FEND Techniques:

Run AI agents in sandboxed environments with the principle of least privilege, limiting their access to only the data and tools necessary for their function.

Mapped D3FEND Techniques:

Audit

M1047enterprise

Maintain comprehensive, immutable logs of all AI agent actions to enable monitoring for anomalous behavior and to support forensic investigation.

Article Author

Jason Gomes

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & AnalysisSecurity Orchestration (SOAR/XSOAR)Incident Response & Digital ForensicsSecurity Operations Center (SOC)SIEM & Security AnalyticsCyber Fusion & Threat SharingSecurity Automation & IntegrationManaged Detection & Response (MDR)

Tags

Artificial IntelligenceAI SecurityNISTFDDPrompt InjectionAgentic AIPolicy

📢 Share This Article

Help others stay informed about cybersecurity threats