Negative Externality Domains (Other harms from AI development and use)
AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.
Entity
Human: due to a decision or action made by humans
AI system: due to a decision or action made by an AI system
Other: due to some other reason, or ambiguous
Not coded

Intent
Intentional: due to an expected outcome from pursuing a goal
Unintentional: due to an unexpected outcome from pursuing a goal
Other: without clearly specified intentionality
Not coded

Timing
Pre-deployment: occurring before the AI is deployed
Post-deployment: occurring after the AI model has been trained and deployed
Other: without a clearly specified time of occurrence
Not coded
Sub-categories (2)
Societal inequality (individuals and companies who develop the best AIs get disproportionately powerful) → 6.1 Power centralization and unfair distribution of benefits
Geopolitical harms (potential for conflict due to power imbalances) → 6.1 Power centralization and unfair distribution of benefits

Other risks from Gipiškis2024 (144)
Direct Harm Domains (content safety harms) → 1.2 Exposure to toxic content
Direct Harm Domains (content safety harms) > Violence and extremism → 1.2 Exposure to toxic content
Direct Harm Domains (content safety harms) > Hate and toxicity → 1.2 Exposure to toxic content
Direct Harm Domains (content safety harms) > Sexual content → 1.2 Exposure to toxic content
Direct Harm Domains (content safety harms) > Child harm → 1.2 Exposure to toxic content
Direct Harm Domains (content safety harms) > Self-harm → 1.2 Exposure to toxic content