Academics used Google Bard AI to generate false case studies about misconduct by major consultancy firms in a parliamentary submission, resulting in defamatory allegations that received parliamentary privilege protection.
A group of accounting academics submitted a report to a parliamentary inquiry into the ethics and professional accountability of the consultancy industry, advocating for broad regulatory changes including splitting up the big four firms. Part of their submission relied on Google Bard AI, which one academic had begun using only that same week. The AI generated several false case studies about misconduct, which were included in the submission. The submission falsely accused KPMG of complicity in a '7-Eleven wage theft scandal' and of auditing Commonwealth Bank during a financial planning scandal (KPMG never audited Commonwealth Bank). It also falsely accused Deloitte of being sued by Probuild's liquidators for audit failures (Deloitte never audited Probuild), of involvement in a 'NAB financial planning scandal', and of falsifying the accounts of Patisserie Valerie (a company Deloitte likewise never audited). The false information gained parliamentary privilege protection, preventing defamation lawsuits; this is believed to be the first time a parliamentary inquiry has dealt with AI-generated false accusations covered by parliamentary privilege. The academics issued unreserved apologies, and the false sections were removed from the submission. Emeritus professor James Guthrie took responsibility, stating he had realised that 'AI can generate authoritative-sounding output that can be incorrect, incomplete or biased'.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.