Implementation
Sub-category
"This is the risk of system failure due to code implementation choices or errors." (p. 11)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (2)
1. "Reliability of external libraries: Software development is increasingly reliant on open source libraries, and machine learning is no different. Despite their benefits (e.g., lower barrier to entry), using external libraries, particularly when the development team is unfamiliar with the internals, increases the risk of failure due to bugs in the dependency chain. Additionally, over-reliance on open source libraries may result in critical systems going down if the dependencies are taken offline. The level of risk here is therefore determined by the reliability of and community support for the library in question. For example, a library that is widely used and regularly updated by a paid team will likely be more reliable than one released by a single person as a hobby project, even though both are considered open source libraries. However, this is not a given, as the recently discovered Log4j vulnerability demonstrates. Other common sources of bugs resulting from the use of external libraries are API changes that are not backward-compatible." (p. 11)
2. "Code review and testing practices: The intertwined nature of the data, model architecture, and training algorithm in ML systems poses new challenges for rigorously testing ML systems. In addition, deep learning systems often fail silently and continue to work despite implementation errors. Good code review and unit testing practices may help to catch implementation errors that may otherwise go unnoticed, lowering the implementation risk." (p. 12)
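The first quote singles out backward-incompatible API changes in external libraries as a common source of bugs. One common mitigation, sketched below with purely illustrative version bounds, is a fail-fast check that the installed dependency falls inside the version range the code was actually tested against, rather than discovering an API change at runtime:

```python
# Sketch of a fail-fast guard against backward-incompatible dependency
# upgrades. The version bounds are illustrative, not from the source.

def parse_version(text):
    """Turn a version string like '2.10.1' into a tuple (2, 10, 1)."""
    return tuple(int(part) for part in text.split(".")[:3])

def in_tested_range(installed, minimum, below):
    """True if minimum <= installed < below (a half-open version range)."""
    return parse_version(minimum) <= parse_version(installed) < parse_version(below)

# Example: code tested only against the 2.x series of a hypothetical library.
assert in_tested_range("2.10.1", "2.0.0", "3.0.0")
assert not in_tested_range("3.0.0", "2.0.0", "3.0.0")
```

In practice this kind of check is usually expressed through a package manager's pinning or version-specifier syntax; the point of the sketch is only that the tested range is stated explicitly and violated loudly.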
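The second quote argues that unit tests can catch errors that deep learning systems would otherwise mask by failing silently. A minimal Python sketch of the idea (the softmax function and its property tests are illustrative, not from the source): a buggy softmax would still return numbers, so the tests assert the mathematical properties the output must satisfy instead of merely checking that it runs.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating,
    # so large logits do not overflow math.exp.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def test_softmax_sums_to_one():
    # Property test: probabilities must sum to 1.
    out = softmax([1.0, 2.0, 3.0])
    assert abs(sum(out) - 1.0) < 1e-9

def test_softmax_handles_large_logits():
    # Without the max-subtraction trick, math.exp(1000) overflows;
    # the function would crash (or silently return junk) here.
    out = softmax([1000.0, 1000.0])
    assert abs(out[0] - 0.5) < 1e-9

test_softmax_sums_to_one()
test_softmax_handles_large_logits()
```

A softmax that skipped the max-subtraction would pass the first test and fail the second, which is exactly the kind of implementation error the quote describes going unnoticed without deliberate tests.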
Part of First-Order Risks
Other risks from Tan, Taeihagh & Baxter (2022) (17)
First-Order Risks
7.0 AI System Safety, Failures & Limitations | Entity: Other | Intent: Other | Timing: Other
First-Order Risks > Application
7.0 AI System Safety, Failures & Limitations | Entity: Human | Intent: Intentional | Timing: Post-deployment
First-Order Risks > Misapplication
7.3 Lack of capability or robustness | Entity: Human | Intent: Intentional | Timing: Post-deployment
First-Order Risks > Algorithm
7.3 Lack of capability or robustness | Entity: AI system | Intent: Unintentional | Timing: Pre-deployment
First-Order Risks > Training & validation data
7.0 AI System Safety, Failures & Limitations | Entity: Human | Intent: Other | Timing: Pre-deployment
First-Order Risks > Robustness
7.3 Lack of capability or robustness | Entity: AI system | Intent: Unintentional | Timing: Post-deployment