Misapplication
Sub-category
Risk Domain
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"This is the risk posed by an ideal system if used for a purpose/in a manner unintended by its creators. In many situations, negative consequences arise when the system is not used in the way or for the purpose it was intended." (p. 6)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (3)
1. "Ability to prevent misuse: The ability to prevent misuse before it occurs significantly reduces misapplication risk. In the case of autonomous vehicles, the car might be programmed to automatically slow to a stop if individuals remove their hands from the wheel or if there is a significant weight decrease in the driver’s seat while the car is in motion. However, while such failsafes significantly reduce risk, they do not entirely eliminate it since they can be bypassed [9]." (p. 6)
2.
"Ability to detect misuse: Being able to detect if the ML system is being used for unintended purposes is crucial to preventing misuse. This can take the form of a component that alerts the organization when a user tries to process inputs with features that match those belonging to prohibited applications (e.g., using a computer vision system for physiognomic purposes), or detect prohibited actions (e.g., leaving the driver’s seat when the semi-autonomous vehicle is in motion). Merely relying on whistleblowers and journalists to detect misuse will likely result in the vast majority of misuses going undetected. The detection method’s efficacy would, therefore, inversely affect the misapplication risk."(p. 7)
3. "Ability to stop misuse: Assuming it is possible to detect misapplication, the next factor in managing this risk is an organization’s ability to stop misuse once it has been detected. For example, the ability to detect if a customer is using a computer vision system for an unacceptable application (e.g., face recognition for predictive law enforcement) and terminate their access will significantly lower the likelihood of the system being used for such purposes. This is directly related to the system’s control risk (see Section 4.8). Being able to instantly shut the system down or terminate the user’s access will lower the likelihood and severity of negative consequences stemming from misuse, compared to a delayed or non-response, and could be the difference between life and death for the people affected by the system." (p. 7)
Part of: First-Order Risks
Other risks from Tan, Taeihagh & Baxter (2022) (17)
First-Order Risks
Risk Domain: 7.0 AI System Safety, Failures & Limitations; Entity: Other; Intent: Other; Timing: Other

First-Order Risks > Application
Risk Domain: 7.0 AI System Safety, Failures & Limitations; Entity: Human; Intent: Intentional; Timing: Post-deployment

First-Order Risks > Algorithm
Risk Domain: 7.3 Lack of capability or robustness; Entity: AI system; Intent: Unintentional; Timing: Pre-deployment

First-Order Risks > Training & validation data
Risk Domain: 7.0 AI System Safety, Failures & Limitations; Entity: Human; Intent: Other; Timing: Pre-deployment

First-Order Risks > Robustness
Risk Domain: 7.3 Lack of capability or robustness; Entity: AI system; Intent: Unintentional; Timing: Post-deployment

First-Order Risks > Design
Risk Domain: 7.3 Lack of capability or robustness; Entity: Human; Intent: Other; Timing: Pre-deployment