A child protection worker in Victoria, Australia used ChatGPT to draft a court report about child welfare risks, producing inaccurate information that downplayed dangers to a vulnerable child and constituting an unauthorized disclosure of sensitive personal data to OpenAI.
In December 2023, Victoria's Department of Families, Fairness and Housing (DFFH) reported to the Office of the Victorian Information Commissioner (OVIC) that a child protection worker had used ChatGPT to draft a Protection Application Report submitted to the Children's Court. The report concerned a young child whose parents had been charged with sexual offences unrelated to the child. The worker entered personal and sensitive case-specific information into ChatGPT, including the child's name, to generate the report text. OVIC's investigation found multiple indicators of ChatGPT usage, including inappropriate language and sentence structures inconsistent with child protection guidelines. Most concerningly, the report contained inaccurate personal information that downplayed risks to the child: it described a child's doll that the father had used for sexual purposes as evidence of the parents providing 'age-appropriate toys' to support the child's development. This mischaracterization had the potential to affect court decisions about the child's care, though it ultimately did not change the outcome. The use of ChatGPT constituted an unauthorized disclosure of information to OpenAI, an overseas company, releasing the data from DFFH's control. Further investigation revealed that the worker may have used ChatGPT in 100 cases over one year, and that nearly 900 DFFH employees (13% of the workforce) had accessed ChatGPT between July and December 2023. OVIC issued a compliance notice banning generative AI tools for child protection staff for two years from November 5, 2024, and the worker is no longer employed by the department.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data, or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, assist identity theft, or cause loss of confidential intellectual property.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed