Accidents Are Hard to Avoid
Risk Domain
6.5 Governance failure: Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
Accidents can cascade into catastrophes, can be caused by sudden, unpredictable developments, and severe flaws and risks can take years to discover (paraphrase; not a direct quote).
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Supporting Evidence (3)
1. "When dealing with complex systems, the focus needs to be placed on ensuring accidents don’t cascade into catastrophes. In his book “Normal Accidents: Living with High-Risk Technologies,” sociologist Charles Perrow argues that accidents are inevitable and even “normal” in complex systems, as they are not merely caused by human errors but also by the complexity of the systems themselves [79]. In particular, such accidents are likely to occur when the intricate interactions between components cannot be completely planned or foreseen. For example, in the Three Mile Island accident, a contributing factor to the lack of situational awareness by the reactor’s operators was the presence of a yellow maintenance tag, which covered valve position lights in the emergency feedwater lines [80]. This prevented operators from noticing that a critical valve was closed, demonstrating the unintended consequences that can arise from seemingly minor interactions within complex systems" (p. 26)
2. "Accidents are hard to avoid because of sudden, unpredictable developments. Scientists, inventors, and experts often significantly underestimate the time it takes for a groundbreaking technological advancement to become a reality." (p. 26)
3. "It often takes years to discover severe flaws or risks. History is replete with examples of substances or technologies initially thought safe, only for their unintended flaws or risks to be discovered years, if not decades, later" (p. 26)
Other risks from Hendrycks, Mazeika & Woodside (2023) (13)
Risk | Risk Domain | Entity | Intent | Timing
Malicious Use (Intentional) | 4.0 Malicious Actors & Misuse | Human | Intentional | Post-deployment
Malicious Use (Intentional) > Bioterrorism | 4.2 Cyberattacks, weapon development or use, and mass harm | AI system | Intentional | Post-deployment
Malicious Use (Intentional) > Unleashing AI Agents | 4.2 Cyberattacks, weapon development or use, and mass harm | Human | Intentional | Pre-deployment
Malicious Use (Intentional) > Persuasive AIs | 4.1 Disinformation, surveillance, and influence at scale | AI system | Other | Post-deployment
Malicious Use (Intentional) > Concentration of Power | 6.1 Power centralization and unfair distribution of benefits | Human | Intentional | Other
AI Race (Environmental/Structural) | 6.4 Competitive dynamics | Human | Intentional | Other