South Africa's Draft National AI Policy included at least six fabricated academic journal citations that appear to have been hallucinated by an AI tool used in the document's preparation.
In early 2024, South Africa's Department of Communications and Digital Technologies published a Draft National AI Policy for public comment; the document contained 67 academic references. An investigation by News24 revealed that at least six of these citations were fictitious, either citing journals that do not exist or attributing articles to established publications that never carried them. Examples include citations to the non-existent 'AI Policy Journal' and articles falsely attributed to real journals such as the South African Journal of Philosophy and the Journal of African Law; the editors of those journals confirmed the articles were never published. The most likely explanation is that an AI tool hallucinated these references during the document's preparation.

The department acknowledged 'minor referencing discrepancies' and said it was reviewing the reference list, but downplayed the significance of the errors. Academic experts said the episode reflects irresponsible use of AI without proper human oversight, since AI systems often fabricate plausible-sounding citations to appear credible. The incident has raised concerns about the credibility of the entire policy document and illustrates the risks of using AI tools in government policy development without sufficient verification.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Pre-deployment
Occurring before the AI is deployed