A mental health nonprofit called Koko used OpenAI's GPT-3 to generate responses to approximately 4,000 people seeking mental health support, sending over 30,000 AI-assisted messages without proper informed consent or ethical review.
In October 2022, Koko, a mental health nonprofit that provides peer-to-peer support, conducted an experiment using OpenAI's GPT-3 to help generate responses to people seeking mental health counseling. The AI was used in a 'co-pilot' fashion in which humans supervised and could edit the AI-generated responses. According to co-founder Rob Morris, the experiment involved approximately 4,000 users and over 30,000 messages.

Morris reported that AI-assisted messages were rated significantly higher than human-only responses and that response times fell by 50%. However, once users became aware the messages were AI-assisted, the perceived effectiveness diminished because, in Morris's words, 'simulated empathy feels weird, empty.'

The experiment drew significant criticism from AI ethicists and other experts, who raised concerns about the lack of informed consent, the absence of institutional review board oversight, and the ethics of experimenting on vulnerable people seeking mental health support. Morris initially claimed the experiment was exempt from informed consent requirements, though he later clarified that users were told messages were 'written in collaboration with kokobot.' Critics argued that deploying AI in sensitive mental health contexts without proper ethical review poses unknown risks and raises questions about accountability if the AI offers harmful suggestions.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed