Apple Intelligence's notification summarization feature generated a false headline attributed to BBC News claiming Luigi Mangione had shot himself, which he had not. The BBC complained, and Apple temporarily disabled the feature for news apps.
Apple Intelligence, Apple's suite of AI features launched in the UK in December 2024, includes a notification summarization feature that groups and summarizes notifications from news apps using artificial intelligence.

The feature falsely generated a push notification attributed to BBC News that read 'Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol's office.' The first part about Luigi Mangione, the suspect arrested in connection with the UnitedHealthcare CEO shooting, was completely false: he remains in custody and has not harmed himself. The other two parts of the summary were accurate representations of actual news stories. BBC News complained to Apple about this misrepresentation, stating that as 'the most trusted news media in the world,' it is essential that audiences can trust any information published in its name, including notifications.

Similar incidents occurred with other news outlets, including The New York Times, where Apple Intelligence incorrectly summarized a story about an ICC arrest warrant for Netanyahu as 'Netanyahu arrested.' Following multiple complaints and criticism, Apple announced in January 2025 that it would temporarily disable notification summaries for the News & Entertainment category of apps while working to improve the technology. The company also added warnings that the feature is in beta and may contain errors, and began displaying AI-generated summaries in italics to distinguish them from original notifications.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed