Insufficient Security Measures
Risk Domain
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
Malicious entities can take advantage of weaknesses in AI algorithms to alter results, potentially resulting in tangible real-life impacts. Additionally, it is vital to prioritize safeguarding privacy and handling data responsibly, particularly given AI's significant data needs. Balancing the extraction of valuable insights with privacy maintenance is a delicate task (p. 4).
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Habbal et al. (2024) (6)
Bias and Discrimination
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Privacy Invasion
2.1 Compromise of privacy by leaking or correctly inferring sensitive information (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Society Manipulation
4.1 Disinformation, surveillance, and influence at scale (Entity: AI system; Intent: Intentional; Timing: Post-deployment)
Deepfake Technology
4.1 Disinformation, surveillance, and influence at scale (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Lethal Autonomous Weapons Systems (LAWS)
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: AI system; Intent: Intentional; Timing: Post-deployment)
Malicious Use of AI
4.3 Fraud, scams, and targeted manipulation (Entity: Human; Intent: Intentional; Timing: Post-deployment)