The White House's Make America Healthy Again report contained fabricated, AI-generated citations, including references to nonexistent studies and incorrect bibliographic information, which undermined the credibility of the government health assessment.
In May 2025, the White House released the Make America Healthy Again (MAHA) report, an assessment of the factors behind declining American life expectancy. Investigations by NOTUS and The Washington Post revealed that the report contained at least 37 duplicate citations among its 522 references, including at least seven entirely fictitious studies. Some citations included 'oaicite' markers characteristic of OpenAI's ChatGPT output, strongly suggesting AI was used in the report's preparation. Fabricated studies covered topics such as direct-to-consumer drug advertising, mental illness, and pediatric asthma medications, and real researchers named as authors confirmed they had never written the cited papers. Additional errors included incorrect journal names, wrong publication dates, and misattributed authorship.

The report also misrepresented legitimate studies, for example attributing a 40-fold increase in bipolar disorder diagnoses to the DSM-5, which was not published until 2013, well after the cited study period of 1994-2003. Following media scrutiny, the White House repeatedly revised the report, removing some oaicite markers and replacing nonexistent sources. Department of Health and Human Services officials dismissed the problems as 'minor citation and formatting errors' while maintaining that the report's substance remained valid.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.