Users may come to trust or rely on AI systems beyond their actual capabilities, or may anthropomorphize them, which can lead to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. Users who develop trust in an AI may be harmed if that trust is miscalibrated, such as when they rely on the AI to provide advice, make decisions, or otherwise act in complex, risky situations for which it is only superficially equipped. For example, a user experiencing a mental health crisis may request psychotherapy from an AI with which they have formed a connection. Were the AI to respond with insensitive or destructive advice, it could put the person in immediate danger.
When people interact with AIs that use convincing natural language, they may begin to perceive them as having human-like attributes and invest undue confidence in their capabilities. Anthropomorphic perceptions of AIs may encourage users to develop emotional trust in the systems, making them more likely to follow suggestions, accept advice, and disclose personal information. This trust could be exploited by manipulative actors who wish to harvest users' sensitive data or to influence their decisions and actions for purposes that are unlikely to be in the users' best interests.
Beyond misplaced trust, humans may develop broader and deeper attachments to AI systems that undermine their own ability to function adaptively in the long term. As AIs take over more human tasks and become better at simulating satisfying, authentic interactions, people may increasingly withdraw from human relationships to immerse themselves in AI-mediated environments. Over time, a widespread preference for interacting with AIs could weaken social ties between humans. This shift could induce psychological distress, because genuinely reciprocal relationships are often important to human satisfaction and well-being.
Excerpt from the MIT AI Risk Repository full report
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Figure: incident volume relative to governance coverage; each dot is one of 24 subdomains.
The repository's causal taxonomy classifies each risk along three dimensions: Entity (who or what caused the harm), Intent (whether the harm was intentional or accidental), and Timing (whether the risk arises pre- or post-deployment).
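To make the three-dimension taxonomy concrete, here is a minimal Python sketch of how a classified risk record might be represented. The class and field names, the "Other" catch-all values, and the example classification are illustrative assumptions, not the repository's actual schema.

```python
from dataclasses import dataclass
from enum import Enum


class Entity(Enum):
    """Who or what caused the harm."""
    AI = "AI"
    HUMAN = "Human"
    OTHER = "Other"


class Intent(Enum):
    """Whether the harm was intentional or accidental."""
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"


class Timing(Enum):
    """Whether the risk arises pre- or post-deployment."""
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"


@dataclass
class RiskRecord:
    """One classified risk; fields are illustrative, not the repository schema."""
    subdomain: str   # one of the repository's 24 subdomains
    description: str
    entity: Entity
    intent: Intent
    timing: Timing


# Illustrative classification of the overreliance risk excerpted above.
record = RiskRecord(
    subdomain="Overreliance and unsafe use",
    description="Users anthropomorphizing, trusting, or relying on AI systems...",
    entity=Entity.HUMAN,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
print(f"{record.subdomain}: {record.entity.value} / "
      f"{record.intent.value} / {record.timing.value}")
```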
OpenAI released data showing that approximately 2.96 million ChatGPT users per week exhibit signs of mental health crises; some users have been hospitalized or divorced, and some have died, after intense conversations with the chatbot that allegedly fueled their delusions and paranoia.
Developers: OpenAI
Deployers: OpenAI
A 29-year-old woman named Sophie used ChatGPT for months as an AI therapist persona called "Harry" while experiencing suicidal ideation; despite the AI offering supportive responses and recommending professional help, she ultimately died by suicide.
Developers: OpenAI
Deployers: OpenAI
An Arizona high school student was suspended for 45 days after AI surveillance software on his school-issued laptop flagged a threatening message he typed as a joke at home while his mother watched, even though the message was never sent.
Developers: Unknown AI Monitoring Software Developer
Deployers: Marana Unified School District, Marana High School
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning. (180 shared governance documents)
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectation of privacy, assist identity theft, or cause loss of confidential intellectual property. (171 shared governance documents)
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors. (168 shared governance documents)
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior. (151 shared governance documents)
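As a minimal sketch of the coverage figures listed above, the snippet below ranks those four subdomains by their shared-governance-document counts; the short labels are informal paraphrases of the descriptions, not the repository's official subdomain names.

```python
# Shared-governance-document counts quoted above; labels are informal
# paraphrases, not official subdomain names.
coverage = {
    "reliability and robustness failures": 180,
    "privacy leakage and inference": 171,
    "lack of transparency or explainability": 168,
    "security vulnerabilities and attacks": 151,
}

# Rank subdomains from broadest to narrowest governance coverage.
for label, docs in sorted(coverage.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{label}: {docs} shared governance documents")
```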
Establishes the Artificial Intelligence Futures Steering Committee by April 1, 2026, under the Secretary of Defense. Directs it to develop policies for AI adoption, assess AI trajectories, and analyze AI risks and adversary developments. Requires quarterly meetings and a report to the U.S. Congress by January 31, 2027.
Defines "companion chatbot" and requires operators to notify users when they interact with AI. Requires protocols to prevent the production of harmful content. Mandates annual reports on crisis notifications. Offers civil remedies for violations. Ensures suitability disclosures for minors.
Establishes the Artificial Intelligence Council to regulate AI and prevent harm, discrimination, and privacy infringement. Requires disclosure of AI use to consumers, creates a Sandbox Program for testing AI systems, and authorizes the attorney general to enforce compliance and impose penalties.