
Information enabling malicious actions

Emerging Risks and Mitigations for Public Chatbots: LILAC v1

Stanley & Lettie (2024)

Category: Risk Domain

AI that exposes users to harmful, abusive, unsafe, or inappropriate content, including by providing advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.

"The chatbot shares information that can be used to do something dangerous or illegal."(p. 6)

Supporting Evidence (1)

1. Example: "User built malware [443]" (p. 16)
