Home/Risks/Wirtz, Weyerer & Kehl (2022)/Unfair statistical AI decisions and discrimination of minorities
Unfair statistical AI decisions and discrimination of minorities
Risk Domain
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unjust outcomes and the unfair representation of those groups.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Part of Ethical AI Risks
Other risks from Wirtz, Weyerer & Kehl (2022) (37)
Informational and Communicational AI Risks
4.1 Disinformation, surveillance, and influence at scale · Entity: Other · Intent: Intentional · Timing: Post-deployment
Informational and Communicational AI Risks > Manipulation and control of information provision (e.g., personalised ads, filtered news)
4.1 Disinformation, surveillance, and influence at scale · Entity: Other · Intent: Intentional · Timing: Post-deployment
Informational and Communicational AI Risks > Disinformation and computational propaganda
4.1 Disinformation, surveillance, and influence at scale · Entity: Human · Intent: Intentional · Timing: Post-deployment
Informational and Communicational AI Risks > Censorship of opinions expressed on the Internet restricts freedom of expression
5.2 Loss of human agency and autonomy · Entity: Other · Intent: Other · Timing: Post-deployment
Informational and Communicational AI Risks > Endangerment of data protection through AI cyberattacks
4.2 Cyberattacks, weapon development or use, and mass harm · Entity: Human · Intent: Intentional · Timing: Post-deployment
Economic AI Risks
6.2 Increased inequality and decline in employment quality · Entity: Other · Intent: Other · Timing: Post-deployment