The Los Angeles Times' AI-powered 'Insights' feature generated content defending the Ku Klux Klan in response to a columnist's article about the hate group's historical presence in Anaheim, prompting the newspaper to remove the feature from that column within hours of its debut.
On February 25, 2025, the Los Angeles Times launched an AI-powered 'Insights' feature, developed by Perplexity AI, as part of its new 'Voices' section. The tool was designed to generate opposing perspectives and assess the political bias of opinion pieces. Within hours of launch, it produced problematic content in response to columnist Gustavo Arellano's article about Anaheim's history with the Ku Klux Klan. Arellano's column described the KKK as 'a stain' on the city's history and discussed how leaders should combat white supremacy. The AI-generated 'different views' section, however, included a defense of the group, stating that 'Local historical accounts occasionally frame the 1920s Klan as a product of white Protestant culture responding to societal changes rather than an explicitly hate-driven movement, minimizing its ideological threat.' The feature was removed from the column within hours of discovery.

The tool runs on the Times' in-house Graphene AI content management system, which was trained on decades of Times content together with external AI models. Owner Patrick Soon-Shiong admitted he was unaware of the incident until after the content was removed and called it a learning opportunity showing that AI 'is not fully there yet.' On the same day, the AI tool also mislabeled a right-leaning op-ed as centrist.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may involve providing harmful advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system: due to a decision or action made by an AI system
Unintentional: due to an unexpected outcome from pursuing a goal
Post-deployment: occurring after the AI model has been trained and deployed
No population impact data reported.