Structured analysis to identify, characterize, and prioritize potential harms and risks.
3.1 Organisations that develop, deploy or use AI systems should assess the overall environmental impact of those systems throughout their implementation, including resource consumption, the energy costs of data storage and processing, and any net energy efficiencies or environmental benefits they may produce. Organisations should seek to promote and implement uses of AI systems with a view to achieving overall carbon neutrality or carbon reduction.

3.2 Governments are encouraged to adjust regulatory regimes, and/or to promote industry self-regulatory regimes, concerning the market entry and/or adoption of AI systems so that the balance of opportunities and risks arising from the public operation of such systems is reasonably reflected. Special regimes for intermediary and limited admission, enabling the testing and refinement of an AI system's operation, can help expedite the system's completion and improve its safety and reliability.

3.3 To ensure and maintain public trust in final human control, governments should consider implementing rules that require comprehensive and transparent investigation of adverse and unanticipated outcomes arising from the use of AI systems, particularly where those outcomes have lethal or injurious consequences for the humans using them. Such investigations should inform adjustments to the regulatory framework for AI systems, in particular to develop, where practicable and achievable, a more rounded understanding of how and when such systems should gracefully hand over to their human operators in a failure scenario.

3.4 AI has particular potential to reduce environmentally harmful resource waste and inefficiency, and AI research directed at these objectives should be encouraged.
In order to do so, policies must be put in place to ensure the relevant data is accessible and usable in a manner consistent with the other principles of the Policy Framework for Responsible AI, such as Fairness and Non-Discrimination; Open Data and Fair Competition; and Privacy, Lawful Access and Consent.
Reasoning
Mandates environmental impact assessment and disclosure requirements for AI system developers and operators.
Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems and any national laws that regulate such use should require the purposes of such implementation to be identified and ensure that such purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards · Ethical Purpose and Societal Benefit > Overarching principles
2.1.3 Policies & Procedures · Ethical Purpose and Societal Benefit > Work and automation
2.2.1 Risk Assessment · Ethical Purpose and Societal Benefit > Weaponised AI
3.1.3 International Agreements · Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering · Accountability
Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall respect and adopt the eight principles of this Policy Framework for Responsible AI (or other analogous accountability principles). In all instances, humans should remain accountable for the acts and omissions of AI systems.
3.2.2 Technical Standards · Other (outside lifecycle)
Outside the standard AI system lifecycle
Deployer: Entity that integrates and deploys the AI system for end users
Measure: Quantifying, testing, and monitoring identified AI risks
Primary: 6.6 Environmental harm