Two DOGE employees used ChatGPT to analyze National Endowment for the Humanities grants for DEI content, leading to the cancellation of 1,477 grants worth over $100 million based on the AI's determinations with minimal human oversight.
In March 2025, two employees from the Department of Government Efficiency (DOGE), Justin Fox and Nate Cavanaugh, arrived at the National Endowment for the Humanities with a mandate to cancel grants that violated President Trump's anti-DEI executive orders. Instead of conducting detailed reviews, they used ChatGPT with the prompt 'Does the following relate at all to D.E.I.? Respond factually in less than 120 characters. Begin with Yes or No.'

The AI flagged projects including building improvements at an Indigenous languages archive in Alaska, digitization of Black newspapers, a documentary about Jewish women's slave labor during the Holocaust, and even HVAC system upgrades at museums as DEI-related. The DOGE team did not question ChatGPT's judgments and sent a master list of 1,477 problematic awards to acting NEH chairman Michael McDonald, who agreed to terminate them.

The cancellations clawed back more than $100 million, nearly half of the agency's annual budget, forcing many organizations into upheaval and shuttering some projects entirely. Only 42 grants from the Biden administration were kept. Court documents reveal that McDonald yielded his authority to DOGE staff and allowed them to draft and send termination letters using unofficial email addresses.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes or unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed