Exploring the Effects of NIST's Latest Recommendations on Cybersecurity, Privacy, and Artificial Intelligence
The National Institute of Standards and Technology (NIST) has unveiled a comprehensive Cybersecurity, Privacy, and AI program, designed to help organizations manage the unique risks associated with Artificial Intelligence (AI) technologies. This initiative aims to harmonize AI risk management with established cybersecurity and privacy standards.
The program focuses on three key areas of AI-related risk:
- Cybersecurity and privacy risks from organizations’ use of AI: This includes securing AI systems and underlying machine learning infrastructures, preventing data leakage, and updating governance and risk management to handle AI-specific vulnerabilities.
- Defending against AI-enabled attacks: As adversaries leverage AI to enhance their attack methods, the program offers guidance on building effective defenses against these AI-powered threats.
- Using AI for cyber defense and privacy protection: Leveraging AI capabilities to strengthen defensive measures and improve privacy safeguards in cybersecurity operations.
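To make the third area more concrete, defenders might apply simple anomaly-detection models to security telemetry. The sketch below is not part of the NIST program; it assumes scikit-learn's IsolationForest and a small, hypothetical feature matrix of login events, and is intended only as an illustration of AI-assisted defense.

```python
# Hypothetical sketch: flagging anomalous login events with an unsupervised model.
# The feature matrix, feature choices, and contamination rate are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a login event: [hour_of_day, failed_attempts, bytes_transferred_mb]
login_events = np.array([
    [9, 0, 12.4],
    [10, 1, 8.1],
    [11, 0, 15.0],
    [3, 7, 940.2],   # unusual: 3 a.m., many failures, large transfer
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(login_events)

# predict() returns -1 for anomalies and 1 for normal points.
labels = model.predict(login_events)
for event, label in zip(login_events, labels):
    if label == -1:
        print("Flag for review:", event)
```

In practice such a detector would feed an analyst queue rather than act autonomously, which keeps it consistent with the governance emphasis elsewhere in the program.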
NIST is developing a specialized community profile—called the Cyber AI Profile—within the NIST Cybersecurity Framework (CSF) 2.0. This profile is tailored to address the unique cybersecurity and privacy challenges introduced by AI technologies and helps organizations harmonize AI risk management with existing security standards.
Additional elements of the program include guidance on updating access control and authorization policies to fit AI environments, revising employee training and service agreements to address AI-related security considerations, and considering supply chain risks introduced by third-party AI providers.
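The access-control guidance above is stated at a policy level; the sketch below shows one hypothetical way such a policy might be expressed in code. The roles, resources, and permission table are illustrative assumptions, not part of the NIST guidance.

```python
# Hypothetical role-based authorization check for AI assets.
# Roles, actions, and the permission table are illustrative assumptions.
from typing import Dict, Set

# Map each role to the AI-related actions it may perform.
PERMISSIONS: Dict[str, Set[str]] = {
    "ml_engineer": {"read:training_data", "write:model_repo", "invoke:inference_api"},
    "analyst": {"invoke:inference_api"},
    "auditor": {"read:training_data", "read:model_repo"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the role's policy grants the requested action."""
    return action in PERMISSIONS.get(role, set())

# Example: an analyst may call the inference API but not modify the model repository.
assert is_authorized("analyst", "invoke:inference_api")
assert not is_authorized("analyst", "write:model_repo")
```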
The new guidance also focuses on three main areas of AI data security: risks in the data supply chain, maliciously modified or "poisoned" data, and data drift. Maintaining data integrity during storage and transport requires robust cryptographic measures, including cryptographic hashes, checksums, and digital signatures.
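To make the integrity controls above concrete, the sketch below checks dataset files against a manifest of SHA-256 digests. The file names and manifest format are hypothetical; hashing alone detects accidental or malicious modification but does not prove origin, which is why the guidance also calls for digital signatures.

```python
# Hypothetical integrity check: verify dataset files against a SHA-256 manifest.
# File paths and the manifest contents are illustrative assumptions.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: dict) -> list:
    """Return the names of files whose current digest does not match the manifest."""
    return [name for name, expected in manifest.items()
            if sha256_of(Path(name)) != expected]

# Example manifest, e.g. recorded when the training data was first approved.
manifest = {"train.csv": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
tampered = verify_manifest(manifest)
if tampered:
    print("Integrity check failed for:", tampered)
```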
Secure infrastructure and access controls become paramount when protecting AI model repositories and APIs, and the complexity of AI supply chains compounds these vulnerabilities. Organizations must establish comprehensive systems to track data transformations throughout the data lifecycle, backed by cryptographically signed records.
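One way to realize cryptographically signed transformation records is to sign each provenance entry with an asymmetric key, as sketched below using the Ed25519 primitives from the Python cryptography package. The record fields are illustrative assumptions, not a NIST-defined schema.

```python
# Hypothetical signed provenance record for one data transformation step.
# The record schema is an illustrative assumption; signing uses Ed25519
# from the 'cryptography' package.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

record = {
    "dataset": "train.csv",
    "step": "deduplicate_and_normalize",
    "input_sha256": "e3b0c442...",   # digest of the data before the step (truncated)
    "output_sha256": "a1b2c3d4...",  # digest of the data after the step (truncated)
    "actor": "pipeline/etl-v2",
    "timestamp": "2025-01-15T12:00:00Z",
}

payload = json.dumps(record, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Downstream consumers verify the record before trusting the transformed data;
# verify() raises InvalidSignature if the record or signature was altered.
verify_key.verify(signature, payload)
print("Provenance record verified")
```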
The Cyber AI Profile will be implemented as a community profile within NIST's Cybersecurity Framework (CSF) 2.0. The broader initiative reflects a strategic government effort to help organizations safely and effectively integrate AI technologies amid rapidly evolving cybersecurity and privacy landscapes.
Organizations can leverage the NIST Cybersecurity Framework Implementation Tiers to assess their current cybersecurity maturity and guide their journey toward enhanced AI security. The National Security Agency's Artificial Intelligence Security Center (AISC) has released the joint Cybersecurity Information Sheet (CSI) "AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems," which provides additional guidance on AI data security.
Adopting quantum-resistant cryptographic standards helps future-proof AI systems against emerging threats. AI-specific incident response procedures represent a critical gap in many organizations' security postures; these procedures must address AI-specific threats such as model extraction and data poisoning attacks.
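As a starting point for closing that gap, an organization might extend its existing incident-response runbooks with AI-specific scenarios. The minimal playbook structure below is a hypothetical sketch: the incident types come from the text above, but the response steps are illustrative assumptions, not NIST-prescribed procedures.

```python
# Hypothetical AI incident-response playbook entries keyed by incident type.
# The response steps are illustrative assumptions, not NIST-prescribed procedures.
AI_INCIDENT_PLAYBOOK = {
    "model_extraction": [
        "Throttle or suspend the affected inference API keys",
        "Review query logs for high-volume, systematic probing patterns",
        "Rotate credentials and assess whether the model must be retrained or retired",
    ],
    "data_poisoning": [
        "Quarantine the suspect training data and freeze affected model versions",
        "Verify dataset digests and signed provenance records against known-good values",
        "Retrain from the last verified-clean snapshot and document the incident",
    ],
}

def respond(incident_type: str) -> None:
    """Print the ordered response steps for a recognized AI incident type."""
    for step in AI_INCIDENT_PLAYBOOK.get(incident_type, ["Escalate to the security team"]):
        print("-", step)

respond("model_extraction")
```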
In summary, the NIST Cybersecurity, Privacy, and AI program empowers organizations by providing:
- adaptable, industry-specific frameworks;
- risk management guidance covering organizations' use of AI, AI-powered attacks, and defense leveraging AI;
- a shared taxonomy and consensus-based approach for managing AI risks uniformly across sectors; and
- alignment of AI risk management with privacy principles to safeguard data and decision integrity.