Reliability issues
AI systems that fail to perform reliably or effectively under varying conditions are prone to errors and failures that can have significant consequences, especially in critical applications or in areas that require moral reasoning.
"Relying on general-purpose AI products that fail to fulfil their intended function can lead to harm. For example, general-purpose AI systems can make up facts (‘hallucination’), generate erroneous computer code, or provide inaccurate medical information. This can lead to physical and psychological harms to consumers and reputational, financial and legal harms to individuals and organisations." (p. 88)
Supporting Evidence (2)
"Such reliability issues occur because of technical shortcomings or misconceptions about the capabilities and limitations of the technology. For example, reliability issues may stem from technical challenges such as hallucinations, or from users applying systems to unsuitable tasks. Existing guardrails to contain and mitigate reliability issues are not fail-proof." (p. 88)
Types of reliability issues and examples (p. 90):
- Confabulations or hallucinations: citing non-existent precedent in legal briefs (451); citing non-existent reduced fare policies for bereaved passengers (452)
- Common-sense reasoning failures: failing to perform basic mathematical calculations (453*); failing to infer basic causal relationships (454)
- Contextual knowledge failures: providing inaccurate medical information (448); providing outdated information about events (455)
Other risks from Bengio2025 (13)
- Risks from malicious use → 4.0 Malicious Actors & Misuse
- Risks from malicious use > Harm to individuals through fake content → 4.3 Fraud, scams, and targeted manipulation
- Risks from malicious use > Manipulation of public opinion → 4.1 Disinformation, surveillance, and influence at scale
- Risks from malicious use > Cyber offence → 4.2 Cyberattacks, weapon development or use, and mass harm
- Risks from malicious use > Biological and chemical attacks → 4.2 Cyberattacks, weapon development or use, and mass harm
- Bias → 1.1 Unfair discrimination and misrepresentation