Prompt Leaking
Sub-category
Risk Domain
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
"By analyzing the model's output, attackers may extract parts of the system-provided prompts and thus potentially obtain sensitive information regarding the system itself." (p. 4)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Instruction Attacks
Other risks from Sun et al. (2023) (14)
Instruction Attacks
2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Goal Hijacking
2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Role Play Instruction
2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Unsafe Instruction Topic
2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Inquiry with Unsafe Opinion
2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Reverse Exposure
2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment