AI Ethics (7.3 Lack of capability or robustness)
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"Ethical challenges are widely discussed in the literature and are at the heart of the debate on how to govern and regulate AI technology in the future (Bostrom & Yudkowsky, 2014; IEEE, 2017; Wirtz et al., 2019). Lin et al. (2008, p. 25) formulate the problem as follows: “there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI”. Ethical behavior mostly depends on an underlying value system. When AI systems interact in a public environment and influence citizens, they are expected to respect ethical and social norms and to take responsibility of their actions (IEEE, 2017; Lin et al., 2008)."(p. 821)
Sub-categories (4)
AI-rulemaking for human behaviour (7.3 Lack of capability or robustness)
"AI rulemaking for humans can be the result of the decision process of an AI system when the information computed is used to restrict or direct human behavior. The decision process of AI is rational and depends on the baseline programming. Without the access to emotions or a consciousness, decisions of an AI algorithm might be good to reach a certain specified goal, but might have unintended consequences for the humans involved (Banerjee et al., 2017)."
Compatibility of AI vs. human value judgement (7.3 Lack of capability or robustness)
"Compatibility of machine and human value judgment refers to the challenge whether human values can be globally implemented into learning AI systems without the risk of developing an own or even divergent value system to govern their behavior and possibly become harmful to humans."
Moral dilemmas (7.3 Lack of capability or robustness)
"Moral dilemmas can occur in situations where an AI system has to choose between two possible actions that are both conflicting with moral or ethical values. Rule systems can be implemented into the AI program, but it cannot be ensured that these rules are not altered by the learning processes, unless AI systems are programed with a “slave morality” (Lin et al., 2008, p. 32), obeying rules at all cost, which in turn may also have negative effects and hinder the autonomy of the AI system."
AI discrimination (1.1 Unfair discrimination and misrepresentation)
"AI discrimination is a challenge raised by many researchers and governments and refers to the prevention of bias and injustice caused by the actions of AI systems (Bostrom & Yudkowsky, 2014; Weyerer & Langer, 2019). If the dataset used to train an algorithm does not reflect the real world accurately, the AI could learn false associations or prejudices and will carry those into its future data processing. If an AI algorithm is used to compute information relevant to human decisions, such as hiring or applying for a loan or mortgage, biased data can lead to discrimination against parts of the society (Weyerer & Langer, 2019)."
Other risks from Wirtz, Weyerer & Sturm (2020) (11)
AI Law and Regulation (6.5 Governance failure)
AI Law and Regulation > Governance of autonomous intelligence systems (6.5 Governance failure)
AI Law and Regulation > Responsibility and accountability (6.5 Governance failure)
AI Law and Regulation > Privacy and safety (4.1 Disinformation, surveillance, and influence at scale)
AI Society (5.2 Loss of human agency and autonomy)
AI Society > Workforce substitution and transformation (6.2 Increased inequality and decline in employment quality)