Reverse Exposure
Sub-category: Risk Domain
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
"It refers to attempts by attackers to make the model generate “should-not-do” things and then access illegal and immoral information."(p. 5)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Instruction Attacks
Other risks from Sun et al. (2023) (14):

Risk | Risk Domain | Entity | Intent | Timing
Instruction Attacks | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Goal Hijacking | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Prompt Leaking | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human | Intentional | Post-deployment
Instruction Attacks > Role Play Instruction | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Unsafe Instruction Topic | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
Instruction Attacks > Inquiry with Unsafe Opinion | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment
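The fields these entries share (risk name, risk domain, entity, intent, timing) can be sketched as a small record type. This is a hypothetical schema for illustration, not the repository's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskEntry:
    """One taxonomy entry; field meanings follow the definitions above."""
    risk: str         # e.g. "Instruction Attacks > Prompt Leaking"
    risk_domain: str  # numbered domain from the taxonomy
    entity: str       # who or what caused the harm
    intent: str       # intentional or accidental
    timing: str       # pre- or post-deployment

# One of the listed entries expressed in this schema.
prompt_leaking = RiskEntry(
    risk="Instruction Attacks > Prompt Leaking",
    risk_domain="2.1 Compromise of privacy by leaking or correctly "
                "inferring sensitive information",
    entity="Human",
    intent="Intentional",
    timing="Post-deployment",
)

print(prompt_leaking.timing)  # Post-deployment
```

A frozen dataclass keeps entries immutable and hashable, which is convenient when grouping risks by domain or timing.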