Fairness

Fairness is one of the six key principles of Responsible AI recognized by Credo AI.

Fairness takes seriously that AI systems ultimately automate the distribution of benefits and harms among people, and that the nature of that distribution matters. There is a lot to this topic, but here we focus on the highest-level distinction: individual versus group fairness.

  1. Individual Fairness: Give similar predictions to similar individuals.
  2. Group Fairness: Treat different groups similarly.

Group fairness is the more common approach, and it requires answering a basic question: what is a group? For most applications, “groups” are taken to correspond to legally protected characteristics. For instance, “race,” as defined by the census, may define a group. While legal definitions are relevant, it is prudent to think carefully about which groups matter for a particular use case.
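
To make the group-fairness idea concrete, here is a minimal sketch (not Credo AI's implementation) of one common group metric, the demographic parity difference: the gap in positive-prediction rates across groups. The predictions and group labels are made up for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = favorable outcome) and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, group))  # 0.6 - 0.4 = ~0.2
```

A value of 0 means every group receives the favorable outcome at the same rate; larger values indicate larger disparities under this particular definition of fairness.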

An important finding is that, outside of special circumstances, it is impossible to satisfy individual and group fairness at the same time. Each machine learning use case therefore requires thought about what fairness means in that specific context; there is no single fairness metric that can be optimized in every situation.
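
The same tension shows up even among group metrics. Below is a small sketch with made-up data in which the same predictions look fair under demographic parity (equal positive-prediction rates) but unfair under equal opportunity (equal true positive rates). Nothing here comes from a real dataset; it only illustrates why the choice of metric requires case-by-case judgment.

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])   # actual outcomes
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # model predictions
group = np.array(["a"] * 4 + ["b"] * 4)       # protected-group membership

for g in np.unique(group):
    mask = group == g
    positive_rate = y_pred[mask].mean()            # demographic parity view
    tpr = y_pred[mask & (y_true == 1)].mean()      # equal opportunity view
    print(f"group {g}: positive rate={positive_rate:.2f}, TPR={tpr:.2f}")

# group a: positive rate=0.50, TPR=1.00
# group b: positive rate=0.50, TPR=0.67
# Equal positive rates, unequal true positive rates: whether this looks
# "fair" depends on which metric the use case calls for.
```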

Researchers have described different worldviews to help practitioners develop their own fairness perspective. It is beyond the scope of this short glossary entry to say how these worldviews should affect downstream decisions, but they lay the groundwork for making them.

  1. What you see is what you get (WYSIWYG) assumes that your data is an accurate reflection of the world.

  2. We're All Equal (WAE) assumes that different outcomes are related to structural bias or unfairness in the data-generation process.
