The U.S. National Institute of Standards and Technology (NIST) has published a preliminary draft of a "Cybersecurity Framework Profile for Artificial Intelligence." This document provides tailored guidance for organizations to manage the unique cybersecurity risks posed by Artificial Intelligence (AI) systems. Released on January 6, 2026, the profile is designed to complement the NIST Cybersecurity Framework (CSF) 2.0 and the AI Risk Management Framework (AI RMF). It offers specific outcomes and subcategories to help organizations secure AI systems during development and deployment, defend against AI-enabled attacks, and respond to incidents involving AI. NIST is soliciting feedback from the public to refine the guidance before its final publication.
The draft Cyber AI Profile is not a new regulation but voluntary guidance intended to be a practical tool for risk management. It extends the principles of the CSF to the specific context of AI. The profile is organized around three primary focus areas, which diverge slightly from the standard CSF functions to address AI-specific challenges: securing AI systems during development and deployment, defending against AI-enabled attacks, and responding to incidents involving AI.
The guidance is intended for a broad audience, including any organization that develops, deploys, or uses AI systems. This spans nearly every industry, from technology and finance to healthcare and manufacturing. It is designed to be adaptable for organizations at all levels of cybersecurity and AI maturity.
As the profile is a voluntary framework, there are no direct compliance requirements. However, its adoption is likely to become a de facto standard for demonstrating due care in securing AI systems, and regulators, partners, and customers may come to expect organizations to align their AI security programs with the NIST Cyber AI Profile. In practice, alignment will involve familiar risk-management work: identifying where AI systems are developed, deployed, or used; mapping the profile's outcomes and subcategories to existing security controls; and addressing the gaps that assessment reveals.
The release of this profile signals a formalization of AI security as a distinct and critical discipline within cybersecurity. For businesses, it provides a much-needed structure for conversations about AI risk. It will drive investment in new security tools and expertise focused on AI model security, data integrity, and threat detection for AI systems. It will also likely influence future regulations and contractual requirements related to AI, making early adoption a competitive advantage.
Organizations working with AI should review the preliminary draft, assess their current AI security practices against its outcomes and subcategories, and submit feedback to NIST during the public comment period before the guidance is finalized.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.