NIST AI Risk Management Framework

NIST is developing a framework to better manage risks to individuals, organisations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is for voluntary use. It improves the ability to incorporate trustworthiness considerations into designing, developing, using, and evaluating AI products, services, and systems.

NIST has issued the second draft of the AI RMF, along with a draft of the companion Playbook to the AI RMF. According to the second draft of the AI RMF:

The AI RMF is intended for voluntary use to address risks in the design, development, use, and evaluation of AI products, services, and systems. AI research and development, as well as the standards landscape, is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time and reflect new knowledge, awareness, and practices. NIST intends to continue its engagement with stakeholders to keep the Framework up to date with AI trends and reflect experience based on the use of the AI RMF. Ultimately, the AI RMF will be offered in multiple formats, including online versions, to provide maximum flexibility.

Part 1 of the AI RMF draft explains the motivation for developing and using the Framework, its audience, and the framing of AI risk and trustworthiness.

Part 2 includes the AI RMF Core and a description of Profiles and their use.

Along with the draft framework, NIST has also provided a Playbook. The Playbook provides actions framework users could take to implement the AI RMF by incorporating trustworthiness considerations into the design, development, use, and evaluation of AI systems.

The NIST AI RMF focuses on trustworthy and responsible AI, which NIST defines as:

Trustworthy AI is valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced.

Responsible use and practice of AI systems is a counterpart to AI system trustworthiness. AI systems are not inherently bad or risky, and it is often the contextual environment that determines whether or not negative impact will occur. The AI Risk Management Framework (AI RMF) can help organizations enhance their understanding of how the contexts in which the AI systems they build and deploy may interact with and affect individuals, groups, and communities. Responsible AI use and practice can:

  • assist AI designers, developers, deployers, evaluators, and users to think more critically about context and potential or unexpected negative and positive impacts;

  • be leveraged to design, develop, evaluate, and use AI systems with impact in mind; and

  • prevent, preempt, detect, mitigate, and manage AI risks.

According to the NIST AI Risk Management Framework, the core attributes of the AI RMF are that it strives to:

  1. Be risk-based, resource-efficient, pro-innovation, and voluntary.

  2. Be consensus-driven and developed and regularly updated through an open, transparent process. All stakeholders should have the opportunity to contribute to the AI RMF’s development.

  3. Use clear and plain language that is understandable by a broad audience, including senior executives, government officials, non-governmental organization leadership, and those who are not AI professionals – while still of sufficient technical depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, between organizations, with customers, and to the public at large.

  4. Provide common language and understanding to manage AI risks. The AI RMF should offer taxonomy, terminology, definitions, metrics, and characterizations for AI risk.

  5. Be easily usable and fit well with other aspects of risk management. Use of the Framework should be intuitive and readily adaptable as part of an organization’s broader risk management strategy and processes. It should be consistent or aligned with other approaches to managing AI risks.

  6. Be useful to a wide range of perspectives, sectors, and technology domains. The AI RMF should be universally applicable to any AI technology and to context-specific use cases.

  7. Be outcome-focused and non-prescriptive. The Framework should provide a catalog of outcomes and approaches rather than prescribe one-size-fits-all requirements.

  8. Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks – as well as illustrate the need for additional, improved resources.

  9. Be law- and regulation-agnostic. The Framework should support organizations’ abilities to operate under applicable domestic and international legal or regulatory regimes.

  10. Be a living document. The AI RMF should be readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework in particular.

The AI RMF is not a compliance mechanism. It is law- and regulation-agnostic, as AI policy discussions are live and evolving. While risk management practices should incorporate and align with applicable laws and regulations, this framework (NIST AI RMF) is not intended to supersede existing regulations, laws, or other mandates.

The NIST AI Risk Management Framework defines risk as a composite measure of an event's probability of occurring and the magnitude (or degree) of the consequences of the corresponding event.

The impacts, or consequences, of AI systems can be positive, negative, or both and can result in opportunities or threats (Adapted from: ISO 31000:2018).

When taking into account the negative impact of a potential event, the risk is a function of:

  • the negative impact, or magnitude of harm, that would arise if the circumstance or event occurs and

  • the likelihood of occurrence (Adapted from: OMB Circular A-130:2016).
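
To illustrate this composite view of risk, here is a minimal sketch in Python; the 1–5 ordinal scales, the multiplicative combination, and all names are illustrative assumptions rather than anything prescribed by the AI RMF:

```python
from dataclasses import dataclass

@dataclass
class RiskEstimate:
    """Illustrative risk record: likelihood and impact on simple 1-5 ordinal scales."""
    event: str
    likelihood: int  # probability of occurrence, 1 (rare) to 5 (almost certain)
    impact: int      # magnitude of harm if the event occurs, 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # One common composite measure: risk = likelihood x impact.
        # The AI RMF does not prescribe this formula; it only describes risk
        # as a composite of probability and magnitude of consequences.
        return self.likelihood * self.impact


# Hypothetical examples, not taken from the framework.
risks = [
    RiskEstimate("Undetected bias in loan-approval model", likelihood=3, impact=5),
    RiskEstimate("Model outage during peak traffic", likelihood=2, impact=3),
]

for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.event}: score {r.score}")
```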

Risk management refers to "coordinated activities to direct and control an organisation with regard to risk" (Source: ISO 31000:2018).

AI Trustworthiness and AI Risks

Trustworthy AI is valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced. Activities that focus on enhancing AI trustworthiness also contribute to the reduction of AI risks.

These characteristics are inextricably tied to human social and organisational behaviour, the datasets used by AI systems, the decisions made by those who build them, and the interactions with the humans who provide insight into and oversight of such systems.

Addressing AI trustworthiness characteristics individually will not assure AI system trustworthiness, and tradeoffs are always involved. Trustworthiness is greater than the sum of its parts. According to the NIST AI RMF, all of the AI trustworthiness characteristics are interrelated, if not interdependent.

Trustworthiness characteristics explained in this document are interrelated. Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. Trustworthy AI systems should achieve a high degree of control over risk while retaining a high level of performance quality. Achieving this difficult goal requires a comprehensive approach to risk management, with tradeoffs among the trustworthiness characteristics.

The following table shows the mapping of AI RMF taxonomy to global AI policy documents:

| AI RMF | OECD AI Principles | EU AIA | EO 13960 |
| --- | --- | --- | --- |
| Valid and reliable | Robustness | Technical robustness | Purposeful and performance-driven; Accurate, reliable, and effective; Regularly monitored |
| Safe | Safety | Safety | Lawful and respectful of our Nation's values |
| Fair and bias is managed | Human-centred values and fairness | Non-discrimination; Diversity and fairness; Data governance | |
| Secure and resilient | Secure and resilient | Security | Security & resilience |
| Transparent and accountable | Transparency and responsible disclosure; Accountability | Transparency; Accountability; Human agency and oversight | Transparent; Accountable; Lawful and respectful of our Nation's values; Responsible and traceable; Regularly monitored |
| Explainable and interpretable | Explainability | | Understandable by subject matter experts, users, and others, as appropriate |
| Privacy-enhanced | Human values; Respect for human rights | Privacy; Data governance | Lawful and respectful of our Nation's values |

  • Valid and reliable: Validity and reliability for deployed AI systems are often assessed by ongoing audits or monitoring that confirm a system is performing as intended. Measurements of accuracy, reliability, and robustness contribute to trustworthiness and should consider that certain types of failures can cause greater harm – and risks should be managed to minimise the negative impact of those failures (a minimal monitoring sketch follows this list).

  • Safe: AI systems “should not, under defined conditions, cause physical or psychological harm or lead to a state in which human life, health, property, or the environment is endangered” (Source: ISO/IEC TS 5723:2022). Safe operation of AI systems requires responsible design and development practices, clear information to deployers on how to use a system appropriately, and responsible decision-making by deployers and end-users.

  • Fair and bias are managed: NIST has identified three major categories of AI bias to be considered and managed: systemic, computational, and human, all of which can occur in the absence of prejudice, partiality, or discriminatory intent. While bias is not always a negative phenomenon, certain biases exhibited in AI models and systems can perpetuate and amplify negative impacts on individuals, groups, communities, organisations, and society – and at a speed and scale far beyond the traditional discriminatory practices that can result from implicit human or systemic biases. Bias is tightly associated with the concepts of transparency as well as fairness in society.

  • Secure and resilient: AI systems that can withstand adversarial attacks or, more generally, unexpected changes in their environment or use, maintain their functions and structure in the face of internal and external change, and degrade gracefully when necessary (Adapted from: ISO/IEC TS 5723:2022) may be said to be resilient. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorised access and use may be said to be secure.

  • Transparent and accountable: Transparency reflects the extent to which information about an AI system is available to individuals when they are interacting – or are even aware that they are interacting – with such a system. Its scope spans from design decisions and training data to model training, the structure of the model, its intended use case, and how and when deployment or end user decisions were made and by whom. Determinations of accountability in the AI context relate to expectations of the responsible party in the event that a risky outcome is realized. The shared responsibility of all AI actors should be considered when seeking to hold actors accountable for the outcomes of AI systems.

  • Explainable and interpretable: Explainability refers to a representation of the mechanisms underlying an algorithm's operation, whereas interpretability refers to the meaning of an AI system's output in the context of its designed functional purpose. Together, they assist those operating or overseeing an AI system to do so effectively and responsibly. The underlying assumption is that perceptions of risk stem from a lack of ability to make sense of, or contextualize, system output appropriately.

  • Privacy-enhanced: Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).
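
Returning to the valid and reliable characteristic above, the following sketch shows one way the ongoing-monitoring idea could be expressed in code; the metric, threshold, data, and function names are hypothetical assumptions and are not taken from the AI RMF:

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def monitoring_check(predictions: list[int], labels: list[int],
                     baseline_accuracy: float, max_drop: float = 0.05) -> bool:
    """Return True if live accuracy stays within an allowed drop from the
    accuracy measured at validation time; otherwise flag the system for review.

    This is only a sketch of 'ongoing monitoring that confirms a system is
    performing as intended'; real deployments would track several metrics,
    data slices, and failure modes with different severities.
    """
    live_accuracy = accuracy(predictions, labels)
    return live_accuracy >= baseline_accuracy - max_drop


# Hypothetical recent production sample with delayed ground truth.
recent_preds = [1, 0, 1, 1, 0, 1, 0, 0]
recent_labels = [1, 0, 0, 1, 0, 1, 1, 0]

if not monitoring_check(recent_preds, recent_labels, baseline_accuracy=0.90):
    print("Accuracy below tolerance - trigger a review of the deployed model.")
```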

In the NIST AI RMF, the Core provides the outcomes and actions related to managing AI risks. The Core comprises four functions: Map, Measure, Manage, and Govern. Each of these high-level functions is broken down into categories and subcategories. The Seclea Risk Management template for the NIST AI RMF is structured around these core categories and subcategories, along with relevant checks and controls where appropriate.

Subcategories in the NIST AI RMF are mapped to risk categories in the Seclea Risk Management for AI applications. The details of each Seclea risk category, with its controls and checks, are discussed in the rest of this documentation. For a quick reference, you can click on a NIST AI RMF subcategory to see the corresponding Seclea risk categories.
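
As a rough illustration of how the Core hierarchy (functions, categories, subcategories) can be represented and mapped onto an organisation's own risk categories, consider the Python sketch below; the subcategory identifiers, category names, and risk-category labels are hypothetical placeholders, and the actual Seclea mapping is the one described in the rest of this documentation:

```python
# Hypothetical, simplified representation of the AI RMF Core hierarchy:
# function -> category -> list of subcategory identifiers.
rmf_core = {
    "Govern": {"Policies and processes": ["GOVERN 1.1", "GOVERN 1.2"]},
    "Map": {"Context is established": ["MAP 1.1", "MAP 1.2"]},
    "Measure": {"Risks are assessed": ["MEASURE 1.1"]},
    "Manage": {"Risks are prioritised and acted upon": ["MANAGE 1.1"]},
}

# Hypothetical mapping from AI RMF subcategories to internal risk categories
# (the real Seclea mapping is documented separately).
subcategory_to_risk_category = {
    "GOVERN 1.1": "Accountability & Governance",
    "MAP 1.1": "Context & Impact Assessment",
    "MEASURE 1.1": "Model Performance & Bias",
    "MANAGE 1.1": "Risk Treatment & Monitoring",
}

for function, categories in rmf_core.items():
    for category, subcategories in categories.items():
        for sub in subcategories:
            risk_cat = subcategory_to_risk_category.get(sub, "unmapped")
            print(f"{function} / {category} / {sub} -> {risk_cat}")
```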
