"First-order risks can be generally broken down into risks arising from intended and unintended use, system design and implementation choices, and properties of the chosen dataset and learning components."(p. 4)
Sub-categories (9)

Application
"This is the risk posed by the intended application or use case. It is intuitive that some use cases will be inherently "riskier" than others (e.g., an autonomous weapons system vs. a customer service chatbot)."
Mapped to: 7.0 AI System Safety, Failures & Limitations

Misapplication
This is the risk posed by an ideal system if used for a purpose/in a manner unintended by its creators. In many situations, negative consequences arise when the system is not used in the way or for the purpose it was intended.
Mapped to: 7.3 Lack of capability or robustness

Algorithm
"This is the risk of the ML algorithm, model architecture, optimization technique, or other aspects of the training process being unsuitable for the intended application. Since these are key decisions that influence the final ML system, we capture their associated risks separately from design risks, even though they are part of the design process."
Mapped to: 7.3 Lack of capability or robustness

Training & validation data
"This is the risk posed by the choice of data used for training and validation."
Mapped to: 7.0 AI System Safety, Failures & Limitations

Robustness
"This is the risk of the system failing or being unable to recover upon encountering invalid, noisy, or out-of-distribution (OOD) inputs."
Mapped to: 7.3 Lack of capability or robustness

Design
"This is the risk of system failure due to system design choices or errors."
Mapped to: 7.3 Lack of capability or robustness

Implementation
"This is the risk of system failure due to code implementation choices or errors."
Mapped to: 7.0 AI System Safety, Failures & Limitations

Control
This is the difficulty of controlling the ML system.
Mapped to: 7.1 AI pursuing its own goals in conflict with human goals or values

Emergent behavior
"This is the risk resulting from novel behavior acquired through continual learning or self-organization after deployment."
Mapped to: 7.1 AI pursuing its own goals in conflict with human goals or values

Other risks from Tan, Taeihagh & Baxter (2022) (17)

Second-Order Risks: 6.0 Socioeconomic & Environmental
Second-Order Risks > Safety: 7.3 Lack of capability or robustness
Second-Order Risks > Discrimination: 1.1 Unfair discrimination and misrepresentation
Second-Order Risks > Security: 2.2 AI system security vulnerabilities and attacks
Second-Order Risks > Privacy: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Second-Order Risks > Environmental: 6.6 Environmental harm
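The first-order sub-category-to-domain mapping above can also be expressed as a small lookup structure, which may help if the taxonomy is used programmatically. This is a minimal illustrative sketch; the dictionary name and helper function are my own, not part of any published tooling.

```python
# Sketch (names are illustrative): the nine first-order risk
# sub-categories from Tan, Taeihagh & Baxter (2022), each keyed to
# the domain it is mapped to in the listing above.
FIRST_ORDER_RISK_MAPPING = {
    "Application": "7.0 AI System Safety, Failures & Limitations",
    "Misapplication": "7.3 Lack of capability or robustness",
    "Algorithm": "7.3 Lack of capability or robustness",
    "Training & validation data": "7.0 AI System Safety, Failures & Limitations",
    "Robustness": "7.3 Lack of capability or robustness",
    "Design": "7.3 Lack of capability or robustness",
    "Implementation": "7.0 AI System Safety, Failures & Limitations",
    "Control": "7.1 AI pursuing its own goals in conflict with human goals or values",
    "Emergent behavior": "7.1 AI pursuing its own goals in conflict with human goals or values",
}


def domain_for(sub_category: str) -> str:
    """Return the mapped domain for a first-order risk sub-category."""
    return FIRST_ORDER_RISK_MAPPING[sub_category]
```

For example, `domain_for("Robustness")` returns "7.3 Lack of capability or robustness".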