PayPal's generative AI chatbot falsely claimed a user had a declined transaction for $23.64 when no such transaction existed, creating unnecessary work for both the customer and support staff.
A user visited the PayPal website seeking information and engaged with PayPal's generative AI chatbot. Before the user could enter any query, the chatbot proactively stated that it had noticed a recent declined transaction for $23.64 and offered to provide more information. When the user accepted, the chatbot supplied only generic links about why transactions might be declined and refused to give specific details about the referenced transaction. The user could not find any such transaction in their account history and contacted PayPal's human customer service to verify. The representative confirmed that no such transaction existed, indicating the chatbot had fabricated it. The incident created additional work for both the customer and the human support representative. When the user attempted to report the error via the provided email address service@paypal.com, they received an automated response stating that the address was no longer active, with instructions redirecting them back to the same malfunctioning chatbot. The chatbot was described as being in a beta testing phase.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed