Misinformation and Privacy Violations
Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks
Maham & Küspert (2023)
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional or material harms.
"Due to their unreliability, general purpose AI models might disseminate false or misleading information, omit critical information, or convey true information that violates privacy rights."(p. 21)
Supporting Evidence (2)
"For example, Meta had to take down the public demo of Galactica, their general purpose AI model intended to support scientific work, only three days post-launch due to its tendency to spread incorrect information – making up, for example, facts, formulas and articles – while it “sounded right and authoritative”.90 Such fabricated content is often referred to as hallucinations by the model.91 Harm from misinformation92 could be particularly severe in multiple sensitive domains such as medicine or law, for example, through a misinformed medical diagnoses or false legal advice.93 It could also increase a person’s confidence in an unfounded opinion and reinforce false beliefs at scale, or harm the reputation of individuals and organizations, having already led to defamation as OpenAI’s ChatGPT accused a regional Australian mayor of being a guilty party in a foreign bribery scandal94, while in another case a law professor found that ChatGPT cited a fictional sexual harassment incident and listed the professor as one of the accused95."(p. 21)
"For example, a lawyer in New York is facing charges for using false legal research he obtained by using OpenAI’s model interface ChatGPT. He defended himself by citing that the apparent competence of the chatbot let him to believe the research was trustworthy.96 The National Eating Disorder Association in the US has taken down an AI system after reports that the chatbot was providing harmful advice.97 In another case, a man reportedly committed suicide after six weeks of intensive conversation with an AI chatbot built on an open-source general purpose AI model developed by EleutherAI.98"(p. 21)
Other risks from Maham & Küspert (2023) (10)
Misuse Risks: 4.0 Malicious Actors & Misuse
Misuse Risks > Cybercrime: 4.3 Fraud, scams, and targeted manipulation
Misuse Risks > Biosecurity Threats: 4.2 Cyberattacks, weapon development or use, and mass harm
Misuse Risks > Politically motivated misuse: 4.1 Disinformation, surveillance, and influence at scale
Systemic Risks: 6.1 Power centralization and unfair distribution of benefits
Systemic Risks > Economic Power Centralisation and Inequality: 6.1 Power centralization and unfair distribution of benefits