Meta's AI chatbot had a security vulnerability that allowed users to access and view other users' private prompts and AI-generated responses by manipulating the unique numbers assigned to prompts.
Meta's AI chatbot system contained a security bug that allowed logged-in users to access private prompts and AI-generated responses belonging to other users. The vulnerability was discovered on December 26, 2024, by security researcher Sandeep Hodkasia, who found that when users edited their AI prompts to regenerate content, Meta's backend servers assigned unique numbers to prompts and responses. By analyzing network traffic and changing these 'easily guessable' unique numbers, users could retrieve the prompts and responses of entirely different users. The bug occurred because Meta's servers failed to verify that the requesting user was authorized to access the requested content. Meta paid Hodkasia a $10,000 bug bounty reward for privately disclosing the vulnerability, deployed a fix on January 24, 2025, and stated it found no evidence of malicious exploitation. The flaw could have allowed malicious actors to scrape users' original prompts with automated tools that rapidly cycled through prompt numbers.
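The flaw described above is a classic insecure direct object reference (IDOR): the server trusts a client-supplied record ID without checking ownership. The sketch below illustrates the pattern with a hypothetical in-memory store; the function names, data model, and IDs are illustrative assumptions, not Meta's actual code.

```python
# Hypothetical illustration of the IDOR pattern described above.
# All names, IDs, and the data model are invented for this sketch.

PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft my resume", "response": "..."},
    1002: {"owner": "bob", "prompt": "private journal entry", "response": "..."},
}

def get_prompt_vulnerable(user: str, prompt_id: int):
    """Vulnerable: returns the record for any valid ID, with no
    check that the logged-in user actually owns it."""
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(user: str, prompt_id: int):
    """Fixed: the server verifies ownership before returning data,
    and a sequential ID guessed by another user yields nothing."""
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != user:
        return None  # authorization enforced server-side
    return record

# A user logged in as "alice" who increments the ID reaches Bob's
# private data through the vulnerable path, but not the fixed one.
leaked = get_prompt_vulnerable("alice", 1002)   # Bob's record leaks
blocked = get_prompt_fixed("alice", 1002)       # returns None
```

Because the IDs were sequential and 'easily guessable', an attacker could enumerate them in a loop, which is why the report notes the risk of automated scraping.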
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.