
Risks from models and algorithms (Risks of bias and discrimination)

AI Safety Governance Framework

National Technical Committee 260 on Cybersecurity (TC260) (2024)

Risk domain / Sub-category

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"During the algorithm design and training process, personal biases may be introduced, either intentionally or unintentionally. Additionally, poor-quality datasets can lead to biased or discriminatory outcomes in the algorithm's design and outputs, including discriminatory content regarding ethnicity, religion, nationality, and region."(p. 6)
