Lack of data, poor data quality, and biases in training data
Risk Domain
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Wirtz, Weyerer & Kehl (2022) (37)
Informational and Communicational AI Risks
4.1 Disinformation, surveillance, and influence at scale (Entity: Other; Intent: Intentional; Timing: Post-deployment)
Informational and Communicational AI Risks > Manipulation and control of information provision (e.g., personalised ads, filtered news)
4.1 Disinformation, surveillance, and influence at scale (Entity: Other; Intent: Intentional; Timing: Post-deployment)
Informational and Communicational AI Risks > Disinformation and computational propaganda
4.1 Disinformation, surveillance, and influence at scale (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Informational and Communicational AI Risks > Censorship of opinions expressed on the Internet restricts freedom of expression
5.2 Loss of human agency and autonomy (Entity: Other; Intent: Other; Timing: Post-deployment)
Informational and Communicational AI Risks > Endangerment of data protection through AI cyberattacks
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Economic AI Risks
6.2 Increased inequality and decline in employment quality (Entity: Other; Intent: Other; Timing: Post-deployment)