While work is underway to further align with international AI standards (including ISO/IEC 22989 and ISO/IEC 23894), the United States National Institute of Standards and Technology (NIST) defines risk in the context of the AI Risk Management Framework (RMF) as: the composite measure of an event’s probability of occurring and the magnitude (or degree) of the consequences of the corresponding events. The impacts or consequences of AI systems can be positive, negative, or both and can result in opportunities or threats (Adapted from ISO 31000:2018).
At Credo AI, risk generally refers to the potential for loss, harm, or negative consequences resulting from an action, decision, or event. It involves uncertainty and the possibility of outcomes that may not be desired or anticipated.
AI risk refers to the potential for negative consequences resulting from the development and deployment of AI systems, including unintended consequences, bias, discrimination, and cybersecurity threats. The goal of managing AI risk is to identify and mitigate these potential harms while still realizing AI's potential benefits.