Risks from models and algorithms (Risks of bias and discrimination)
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity (TC260) (2024)
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
"During the algorithm design and training process, personal biases may be introduced, either intentionally or unintentionally. Additionally, poor-quality datasets can lead to biased or discriminatory outcomes in the algorithm's design and outputs, including discriminatory content regarding ethnicity, religion, nationality, and region."(p. 6)
Other risks from National Technical Committee 260 on Cybersecurity (TC260) (2024) (25):
Risks from models and algorithms (Risks of explainability): 7.4 Lack of transparency or interpretability
Risks from models and algorithms (Risks of robustness): 7.3 Lack of capability or robustness
Risks from models and algorithms (Risks of stealing and tampering): 2.2 AI system security vulnerabilities and attacks
Risks from models and algorithms (Risks of unreliable output): 3.1 False or misleading information
Risks from models and algorithms (Risks of adversarial attack): 2.2 AI system security vulnerabilities and attacks
Risks from data (Risks of illegal collection and use of data): 2.1 Compromise of privacy by leaking or correctly inferring sensitive information