"Second-order risks result from the consequences of first-order risks and relate to the risks resulting from an ML system interacting with the real world, such as risks to human rights, the organization, and the natural environment."(p. 13)
Sub-categories (7)
Safety
The risk of direct or indirect physical or psychological injury resulting from interaction with the ML system.
7.3 Lack of capability or robustness
Discrimination
The risk of an ML system encoding stereotypes of, or performing disproportionately poorly for, some demographic or social groups.
1.1 Unfair discrimination and misrepresentation
Security
The risk of loss or harm from intentional subversion or forced failure.
2.2 AI system security vulnerabilities and attacks
Privacy
The risk of loss or harm from leakage of personal information via the ML system.
2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Environmental
The risk of harm to the natural environment posed by the ML system.
6.6 Environmental harm
Organizational
The risk of financial and/or reputational damage to the organization building or using the ML system.
6.0 Socioeconomic & Environmental
Other ethical risks
"Although we have discussed a number of common risks posed by ML systems, we acknowledge that there are many other ethical risks such as the potential for psychological manipulation, dehumanization, and exploitation of humans at scale."
4.1 Disinformation, surveillance, and influence at scale
Other risks from Tan, Taeihagh & Baxter (2022) (17)
First-Order Risks
7.0 AI System Safety, Failures & Limitations
First-Order Risks > Application
7.0 AI System Safety, Failures & Limitations
First-Order Risks > Misapplication
7.3 Lack of capability or robustness
First-Order Risks > Algorithm
7.3 Lack of capability or robustness
First-Order Risks > Training & validation data
7.0 AI System Safety, Failures & Limitations
First-Order Risks > Robustness
7.3 Lack of capability or robustness