Vulnerability of AI systems to attacks and misuse
Risk Domain
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Other risks from Wirtz, Weyerer & Kehl (2022) (37)
Informational and Communicational AI Risks
4.1 Disinformation, surveillance, and influence at scale | Entity: Other | Intent: Intentional | Timing: Post-deployment
Informational and Communicational AI Risks > Manipulation and control of information provision (e.g., personalised ads, filtered news)
4.1 Disinformation, surveillance, and influence at scale | Entity: Other | Intent: Intentional | Timing: Post-deployment
Informational and Communicational AI Risks > Disinformation and computational propaganda
4.1 Disinformation, surveillance, and influence at scale | Entity: Human | Intent: Intentional | Timing: Post-deployment
Informational and Communicational AI Risks > Censorship of opinions expressed on the Internet restricts freedom of expression
5.2 Loss of human agency and autonomy | Entity: Other | Intent: Other | Timing: Post-deployment
Informational and Communicational AI Risks > Endangerment of data protection through AI cyberattacks
4.2 Cyberattacks, weapon development or use, and mass harm | Entity: Human | Intent: Intentional | Timing: Post-deployment
Economic AI Risks
6.2 Increased inequality and decline in employment quality | Entity: Other | Intent: Other | Timing: Post-deployment