NIST Releases New Guidance to Help Organizations Secure AI Systems

The National Institute of Standards and Technology (NIST) has introduced a new resource aimed at helping organizations safely adopt and manage artificial intelligence. The draft, titled the Cybersecurity Framework Profile for Artificial Intelligence, expands on NIST’s widely used Cybersecurity Framework (CSF) to address the growing security challenges tied to AI technologies.

The new profile outlines practical guidance for managing AI-related risks, strengthening cyber defenses with AI, and protecting systems from AI-driven threats. NIST organizes the guidance into three corresponding focus areas: secure (managing the risks AI introduces into an organization’s systems), defend (using AI to strengthen cybersecurity operations), and thwart (countering AI-driven attacks). These categories are designed to help organizations understand how AI can both introduce new risks and enhance cybersecurity capabilities.

According to NIST, the framework recognizes that organizations are engaging with AI at different levels of maturity. “The three focus areas reflect the fact that AI is entering organizations’ awareness in different ways,” said Barbara Cuthill, one of the authors of the framework. “But ultimately, every organization will need to address all three.”

The new AI profile builds directly on the existing Cybersecurity Framework, offering detailed guidance on how AI intersects with traditional security practices such as intrusion detection, vulnerability management, supply chain protection, and incident response. It also provides recommendations on how organizations can responsibly integrate AI into their cybersecurity operations while managing emerging risks.

NIST developed the draft through extensive collaboration with the cybersecurity community. More than 6,500 contributors provided input, helping shape guidance that reflects real-world challenges and use cases. The agency is now seeking public feedback on the draft through January 30 and plans to host a virtual workshop on January 14 to gather additional input.

This release builds on earlier NIST efforts in the AI space. In recent years, the agency introduced the AI Risk Management Framework and later expanded it with guidance focused on generative AI. NIST has also published recommendations to help organizations secure AI systems using established security controls.

The updated guidance reflects growing federal attention to AI governance. While previous administrations directed NIST to develop standards for AI safety and security, recent policy updates have expanded the agency’s role in helping organizations evaluate and manage AI technologies more effectively.

Together, these efforts signal a broader push to ensure that as AI adoption accelerates, security practices evolve alongside it—helping organizations harness innovation without increasing risk.
