Unilateral voluntary commitments and safety frameworks adopted by individual organizations.
Support the development of self-regulatory codes of conduct for data and AI related professions, with specific ethical duties.
This would follow the model of other socially sensitive professions, such as medicine or law, with the attendant certification of 'ethical AI' through trust-labels, so that people understand the merits of ethical AI and demand it from providers. Such self-regulatory instruments may also help constrain current attention-manipulation techniques.
Reasoning
Industry coordination develops voluntary self-regulatory codes of conduct with ethical duties and trust-labels.
Assess capacity of existing institutions
Assess the capacity of existing institutions, such as national civil courts, to redress the mistakes made or harms inflicted by AI systems. This assessment should evaluate the presence of sustainable, majority-agreed foundations for liability from the design stage onwards, in order to reduce negligence and conflicts.
2.2.1 Risk Assessment
Assess tasks and decision-making functionalities
Assess which tasks and decision-making functionalities should not be delegated to AI systems, using participatory mechanisms to ensure alignment with societal values and understanding of public opinion. This assessment should take existing legislation into account and be supported by ongoing dialogue among all stakeholders (including government, industry, and civil society) about how AI will impact society.
3.3.1 Industry Coordination
Assess current regulations
Assess whether current regulations are sufficiently grounded in ethics to provide a legislative framework that can keep pace with technological developments. This may include a framework of key principles that would be applicable to urgent and/or unanticipated problems.
3.1.1 Legislation & Policy
Develop framework to enhance explainability
Develop a framework to enhance the explicability of AI systems that make socially significant decisions. Central to this framework is the ability for individuals to obtain a factual, direct, and clear explanation of the decision-making process, especially in the event of unwanted consequences. This is likely to require the development of frameworks specific to different industries, and professional associations should be involved in this process, alongside experts in science, business, law, and ethics.
3.2.2 Technical Standards
Develop legal procedures
Develop appropriate legal procedures and improve the IT infrastructure of the justice system to permit the scrutiny of algorithmic decisions in court. This is likely to include the creation of a framework for AI explainability, as indicated in Recommendation 4, specific to the legal system.
3.1.1 Legislation & Policy
Develop auditing mechanisms
Develop auditing mechanisms for AI systems to identify unwanted consequences, such as unfair bias, and (for instance, in cooperation with the insurance sector) a solidarity mechanism to deal with severe risks in AI-intensive sectors. Those risks could be mitigated by multistakeholder mechanisms upstream.
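To illustrate one concrete check such an auditing mechanism might include, the sketch below measures the demographic parity gap (the largest difference in favourable-decision rates between groups), a standard fairness metric. The function name, the audit data, and the 0.2 tolerance threshold are illustrative assumptions, not part of the recommendation itself.

```python
def demographic_parity_gap(outcomes):
    """outcomes: dict mapping group name -> list of binary decisions
    (1 = favourable). Returns the largest difference in
    favourable-decision rates between any two groups."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: loan approvals per demographic group.
audit = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 approved (0.750)
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 approved (0.375)
}

gap = demographic_parity_gap(audit)
print(f"demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
if gap > 0.2:  # illustrative tolerance threshold
    print("audit flag: potential unfair bias")
```

A real auditing regime would combine several such metrics (equalised odds, calibration) with process-level review; a single number like this is a trigger for scrutiny, not a verdict.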
3.2.1 Benchmarks & Evaluation
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
Floridi, Luciano; Cowls, Josh; Beltrametti, Monica; Chatila, Raja; Chazerand, Patrice; Dignum, Virginia; Luetge, Christoph; Madelin, Robert; Pagallo, Ugo; Rossi, Francesca; Schafer, Burkhard; Valcke, Peggy; Vayena, Effy (2018)
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society. © 2018, The Author(s).
Other (outside lifecycle)
Outside the standard AI system lifecycle
Unable to classify
Could not be classified to a specific actor type
Govern
Policies, processes, and accountability structures for AI risk management