AI Risk Management is the process of identifying and mitigating potential risks associated with the development, deployment, and use of AI systems. Effective AI Risk Management is a core goal of AI Governance.
While work is underway to further align with international AI standards (including ISO/IEC 22989 and ISO/IEC 23894), the United States National Institute of Standards and Technology (NIST) defines AI risk management in the context of the AI Risk Management Framework (RMF) as: "coordinated activities to direct and control an organization with regard to risk" (Source: ISO 31000:2018).