OpenAI's ChatGPT macOS app stored user conversations in plain text files that could be easily accessed by other applications or malicious actors, a privacy vulnerability that was fixed after a security researcher reported the issue.
OpenAI's ChatGPT macOS app had a security vulnerability: user conversations were stored in plain text files on users' computers. Security researcher Pedro José Pereira Vieito discovered that the app saved chat data without encryption, leaving it readable by any other application, or any malicious actor, with access to the machine. Vieito demonstrated the flaw with an app that displayed ChatGPT conversations at the click of a button, and showed that the files could be reached simply by changing file names. The app escaped Apple's sandboxing requirements because OpenAI distributed it through its own website rather than the Mac App Store. After The Verge contacted OpenAI about the issue on Friday, the company released an update that encrypts stored conversations. OpenAI spokesperson Taya Christianson confirmed the company was aware of the issue and had shipped a new version with encryption. After the fix, Vieito's demonstration app no longer worked and conversations were no longer visible in plain text.
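The underlying pattern is simple to illustrate. The sketch below is a minimal, hypothetical reconstruction of the vulnerability class, not OpenAI's actual storage layout: one function stands in for an app writing a conversation to disk as unencrypted JSON, and a second stands in for any other unsandboxed process running as the same user, which can read it back verbatim.

```python
# Hypothetical sketch of the plaintext-storage vulnerability class.
# File names, paths, and the JSON format are illustrative assumptions,
# not the ChatGPT app's real on-disk layout.
import json
import tempfile
from pathlib import Path

def app_saves_conversation(store: Path, convo: dict) -> Path:
    """The vulnerable pattern: conversation written with no encryption."""
    path = store / "conversation-001.json"
    path.write_text(json.dumps(convo))
    return path

def other_process_reads(path: Path) -> dict:
    """Nothing stops an unrelated process (same user) from reading it."""
    return json.loads(path.read_text())

store = Path(tempfile.mkdtemp())
saved = app_saves_conversation(store, {"user": "a private question"})
leaked = other_process_reads(saved)
print(leaked["user"])  # the "other app" recovers the plaintext verbatim
```

Sandboxing narrows which files a process may open, and encrypting at rest (for example with a key held in the system keychain) makes a leaked file unreadable; the fix OpenAI shipped took the encryption route.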
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.