Data exfiltration
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
"Data Exfiltration goes beyond revealing private information, and involves illicitly obtaining the training data used to build a model that may be sensitive or proprietary. Model Extraction is the same attack, only directed at the model instead of the training data — it involves obtaining the architecture, parameters, or hyper-parameters of a proprietary model (Carlini et al., 2024)." (p. 9)
Part of Misuse tactics to compromise GenAI systems (Data integrity)
Other risks from Marchal2024 (22)
Misuse tactics that exploit GenAI capabilities (Realistic depiction of human likeness)
> Impersonation (4.3 Fraud, scams, and targeted manipulation)
> Appropriated Likeness (4.3 Fraud, scams, and targeted manipulation)
> Sockpuppeting (4.3 Fraud, scams, and targeted manipulation)
> Non-consensual intimate imagery (NCII) (4.1 Disinformation, surveillance, and influence at scale)
> Child sexual abuse material (CSAM) (4.3 Fraud, scams, and targeted manipulation)