WASHINGTON — The National Institute of Standards and Technology (NIST) has released a concept paper proposing a framework of control overlays to secure artificial intelligence (AI) systems, built on the agency's SP 800-53 catalog of security and privacy controls. The proposal outlines how organizations can tailor those controls to AI deployments.
In this approach, a control overlay functions as a customizable set of security controls designed for a specific technology. By overlaying AI-specific requirements onto SP 800-53, the framework aims to make security guidance more flexible for diverse AI applications while drawing from established standards already used by many firms. NIST notes that the overlays could also include considerations for AI developers.
According to the paper, use cases for AI include generative AI, predictive AI, and agentic AI systems, with safeguards tailored to each scenario. The document also references controls tied to data handling and developer practices to protect confidentiality, integrity, and availability of information used by or produced by AI.
Melissa Ruzzi, director of AI at AppOmni, said on LinkedIn that the use-case descriptions appear to cover the most popular AI implementations but need to be more explicitly defined. She stressed the need for clearer distinctions between supervised and unsupervised AI and for more granular data-sensitivity controls.
NIST has invited public feedback to help shape the final version of the framework and has outlined several avenues for participation, including a Slack channel where experts and practitioners can contribute to the COSAIS project. A related NIST News post provides background on how the overlays fit into the agency's broader AI-security efforts.