Value Chain and Component Integration
Risk Domain
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not processed and cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users."(p. 4)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (2)
1. "GAI value chains involve many third-party components such as procured datasets, pre-trained models, and software libraries. These components might be improperly obtained or not properly vetted, leading to diminished transparency or accountability for downstream users. While this is a risk for traditional AI systems and some other digital technologies, the risk is exacerbated for GAI due to the scale of the training data, which may be too large for humans to vet; the difficulty of training foundation models, which leads to extensive reuse of limited numbers of models; and the extent to which GAI may be integrated into other devices and services. As GAI systems often involve many distinct third-party components and data sources, it may be difficult to attribute issues in a system's behavior to any one of these sources."(p. 12)
2. "Errors in third-party GAI components can also have downstream impacts on accuracy and robustness. For example, test datasets commonly used to benchmark or validate models can contain label errors. Inaccuracies in these labels can impact the "stability" or robustness of these benchmarks, which many GAI practitioners consider during the model selection process."(p. 12)
Other risks from National Institute of Standards and Technology (2024) (11)
CBRN Information or Capabilities
Subdomain: 4.2 Cyberattacks, weapon development or use, and mass harm; Entity: Other; Intent: Other; Timing: Post-deployment

Confabulation
Subdomain: 3.1 False or misleading information; Entity: AI system; Intent: Unintentional; Timing: Post-deployment

Dangerous, Violent or Hateful Content
Subdomain: 1.2 Exposure to toxic content; Entity: AI system; Intent: Other; Timing: Post-deployment

Data Privacy
Subdomain: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information; Entity: AI system; Intent: Unintentional; Timing: Post-deployment

Environmental Impacts
Subdomain: 6.6 Environmental harm; Entity: Other; Intent: Unintentional; Timing: Pre-deployment

Harmful Bias or Homogenization
Subdomain: 1.1 Unfair discrimination and misrepresentation; Entity: Other; Intent: Unintentional; Timing: Other