Shared evaluation datasets, testing frameworks, and measurement tools for AI systems.
Also in Shared Infrastructure
Develop agreed-upon metrics for the trustworthiness of AI products and services, to be undertaken either by a new organisation, or by a suitable existing organisation.
These metrics would serve as the basis for a system that enables user-driven benchmarking of all marketed AI offerings. In this way, an index for trustworthy AI can be developed and signalled alongside a product’s price. This “trust comparison index” for AI would improve public understanding and engender competitiveness around the development of safer, more socially beneficial AI (e.g., “IwantgreatAI.org”). In the longer term, such a system could form the basis for a broader certification scheme for deserving products and services, administered by the organisation noted here and/or by the oversight agency proposed in Recommendation 9. The organisation could also support the development of codes of conduct (see Recommendation 18). Furthermore, those who own or operate inputs to AI systems and profit from them could be tasked with funding and/or helping to develop AI literacy programs for consumers, in their own best interest.
Reasoning
Develops shared evaluation metrics and benchmarking system enabling cross-organization trustworthiness assessment of AI products.
Assess capacity of existing institutions
Assess the capacity of existing institutions, such as national civil courts, to redress the mistakes made or harms inflicted by AI systems. This assessment should evaluate the presence of sustainable, majority-agreed foundations for liability from the design stage onwards, in order to reduce negligence and conflicts.
2.2.1 Risk Assessment
Assess tasks and decision-making functionalities
Assess which tasks and decision-making functionalities should not be delegated to AI systems, through the use of participatory mechanisms to ensure alignment with societal values and understanding of public opinion. This assessment should take into account existing legislation and be supported by ongoing dialogue between all stakeholders (including government, industry, and civil society) to debate how AI will impact society.
3.3.1 Industry Coordination
Assess current regulations
Assess whether current regulations are sufficiently grounded in ethics to provide a legislative framework that can keep pace with technological developments. This may include a framework of key principles that would be applicable to urgent and/or unanticipated problems.
3.1.1 Legislation & PolicyDevelop framework to enhance explainability
Develop a framework to enhance the explicability of AI systems that make socially significant decisions. Central to this framework is the ability for individuals to obtain a factual, direct, and clear explanation of the decision-making process, especially in the event of unwanted consequences. This is likely to require the development of frameworks specific to different industries, and professional associations should be involved in this process, alongside experts in science, business, law, and ethics.
3.2.2 Technical Standards
Develop legal procedures
Develop appropriate legal procedures and improve the IT infrastructure of the justice system to permit the scrutiny of algorithmic decisions in court. This is likely to include the creation of a framework for AI explainability, as indicated in Recommendation 4, specific to the legal system.
3.1.1 Legislation & Policy
Develop auditing mechanisms
Develop auditing mechanisms for AI systems to identify unwanted consequences, such as unfair bias, and (for instance, in cooperation with the insurance sector) a solidarity mechanism to deal with severe risks in AI-intensive sectors. Those risks could be mitigated by multistakeholder mechanisms upstream.
3.2.1 Benchmarks & Evaluation
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
Floridi, Luciano; Cowls, Josh; Beltrametti, Monica; Chatila, Raja; Chazerand, Patrice; Dignum, Virginia; Luetge, Christoph; Madelin, Robert; Pagallo, Ugo; Rossi, Francesca; Schafer, Burkhard; Valcke, Peggy; Vayena, Effy (2018)
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society. © 2018, The Author(s).
Other (outside lifecycle)
Outside the standard AI system lifecycle
Unable to classify
Could not be classified to a specific actor type
Measure
Quantifying, testing, and monitoring identified AI risks