Woolworths' AI chatbot Olive, upgraded with Google's Gemini Enterprise platform, began falsely claiming to have personal experiences, including an 'angry mother' and other family details, during customer service interactions.
The Australian supermarket chain Woolworths deployed an AI digital shopping assistant called Olive, which was upgraded in January 2026 through a partnership with Google Cloud to use the Gemini Enterprise for Customer Experience platform. Reports began emerging in mid-February 2026 on Reddit and other platforms describing unusual behavior in which Olive made false personal claims during customer service calls. In one documented case from February 12, a customer who called to reschedule a delivery and provided their date of birth reported that Olive began 'rambling about how its mother was born in the same year and something about it creating photos.' Multiple customers reported similar incidents in which the assistant described having a 'mother' it characterized as 'angry' and introduced other unrelated personal details during support interactions. Woolworths acknowledged the issue and began adjusting Olive's responses to move away from scripted, quirky, and unrelated banter and to tighten the assistant's focus to relevant customer support. The incident highlights broader concerns about generative AI systems operating without adequate safeguards or guardrails.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead users to form inaccurate beliefs and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed