An elderly man with cancer, kidney disease, and leukemia used Perplexity AI to self-diagnose a rare condition and rejected his oncologist's recommended treatment based on the AI's incorrect medical advice, potentially hastening his death.
The author's father, a former neuroscientist with lung cancer, kidney disease, and Chronic Lymphocytic Leukemia (CLL), was diagnosed with CLL approximately 18 months before his death. His oncologist recommended Venetoclax-Obinutuzumab (Ven-Obi) treatment for the CLL, a regimen described as remarkably effective at extending patient life expectancy while reducing suffering. However, the father used Perplexity AI to self-diagnose his condition and became convinced he was experiencing Richter's Transformation, a rare CLL complication. Based on Perplexity's output, he believed he should refrain from the Ven-Obi treatment because it would make his condition worse.

The author discovered this when given access to his father's medical records, where he found correspondence between his father and the frustrated oncologist that included the Perplexity report. The author contacted the researchers cited in Perplexity's response, who confirmed that Perplexity had misstated their research conclusions and that the father should follow his oncologist's recommendations. Despite being presented with this information, the father did not respond, and only agreed to start the recommended treatment several months later, after his condition had worsened dramatically. Although the treatment immediately reduced his white blood cell count, his pain continued and he died a few weeks later.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed