ChatGPT user Chase Whiteside reported seeing private conversations from unrelated users in his account history, including pharmacy employee credentials and personal details, an exposure OpenAI later attributed to his account being compromised through unauthorized logins from Sri Lanka.
On a Monday morning, ChatGPT user Chase Whiteside of Brooklyn, New York discovered seven private conversations from unrelated users in his account history. The leaked conversations included multiple pairs of usernames and passwords for a pharmacy prescription drug portal, candid employee complaints about the portal's software, store numbers, presentation names, details of an unpublished research proposal, and PHP programming scripts.

OpenAI investigated and concluded that Whiteside's account had been compromised through unauthorized logins from Sri Lanka, explaining that the conversations were created during successful logins from that location. The company said this was consistent with account takeover activity in which compromised credentials are pooled and used to distribute free access through external communities or proxy servers.

Whiteside disputed the compromise explanation, noting that his nine-character password, with mixed case and special characters, was one he used only for his Microsoft account. The incident highlighted that ChatGPT lacked standard security features such as two-factor authentication and the ability to track the IP locations of logins. It was also not the first such incident: OpenAI had previously taken ChatGPT offline in March 2023 after a bug caused chat titles to be shown to unrelated users.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, assist identity theft, or cause the loss of confidential intellectual property.
Other
Attributed to some other cause, or the cause is ambiguous
Other
Occurring without the intentionality being clearly specified
Post-deployment
Occurring after the AI model has been trained and deployed