Meta AI released Galactica, a large language model for scientific literature, but removed the demo after three days due to criticism that it generated convincing but false scientific information and toxic content.
On November 15, 2022, Meta AI unveiled Galactica, a large language model trained on 48 million scientific papers, textbooks, and encyclopedias, designed to assist scientists with tasks such as summarizing papers, solving math problems, and generating scientific content. The company released a public demo encouraging users to try the system.

Within hours, users discovered that the model could generate authoritative-sounding but false scientific information, including fake papers attributed to real authors, incorrect scientific facts, and fictional content such as a 'history of bears in space.' Users also found they could prompt the system to generate racist and offensive content. Critics, including scientists such as Michael Black of the Max Planck Institute, pointed out that the model's output was 'wrong or biased but sounded right and authoritative,' making it dangerous. The demo's content filters were also problematic, blocking queries on topics such as 'racism' and 'AIDS' and thereby further marginalizing affected communities.

After three days of intense ethical criticism on social media, Meta removed the public demo on November 18, 2022. Meta's Chief AI Scientist, Yann LeCun, defended the system, suggesting that critics were 'casually misusing it.' The incident highlighted ongoing concerns about large language models' tendency to generate convincing misinformation and about the responsibility of companies that release such systems to the public.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed