Google deployed an 'inclusive language' feature in Google Docs that automatically flags words such as 'landlord', 'mankind', and 'motherboard' as potentially non-inclusive and suggests alternatives, drawing criticism for being intrusive and overly broad in its recommendations.
Google rolled out an 'assistive writing' feature in Google Docs that includes an 'inclusive language' function designed to identify potentially discriminatory or inappropriate language and suggest more inclusive alternatives. The feature is turned on by default for enterprise users and flags words such as 'landlord' (suggesting 'property owner'), 'mankind' (suggesting 'humankind'), and 'policemen' (suggesting 'police officers'), as well as technical terms like 'motherboard'. The system uses language understanding models trained on millions of common phrases to learn communication patterns automatically, which also means it can absorb and reproduce human cognitive biases. Testing by journalists revealed inconsistent performance: the system flagged innocuous words and made questionable suggestions for famous speeches, including Martin Luther King Jr.'s 'I Have a Dream' speech and the Biblical Sermon on the Mount, while failing to flag a transcribed interview with former KKK leader David Duke that contained racial slurs. Critics have described the feature as 'speech-policing', 'deeply intrusive', and 'profoundly clumsy', arguing that it undermines privacy, freedom of expression, and individual creativity. Google acknowledged that the technology is still improving and may never fully solve the problem of identifying and mitigating unwanted word associations and biases.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed