Unilateral voluntary commitments and safety frameworks adopted by individual organizations.
Also in Voluntary & Cooperative
3.1 Organisations that develop AI systems are normally entitled to commercialise such systems as they wish. However, governments should at a minimum advocate accessibility through open source or other similar licensing arrangements for those innovative AI systems which may be of particular societal benefit or advance the “state of the art” in the field via, for example, targeted incentive schemes.
3.2 Organisations that elect not to release their AI systems as open source software are encouraged nevertheless to license the system on a commercial basis.
3.3 To the extent that an AI system can be subdivided into constituent parts with general utility and application in other AI use cases, organisations that elect not to license the AI system as a whole (whether on an open source or commercial basis) are encouraged to license as many of such re-usable components as possible.
Reasoning
Governments advocate for organizations to voluntarily release AI systems as open source or licensed components, representing unilateral voluntary commitments without state enforcement.
Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, should require the purposes of such implementation to be identified and should ensure that those purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards · Ethical Purpose and Societal Benefit > Overarching principles
2.1.3 Policies & Procedures · Ethical Purpose and Societal Benefit > Work and automation
2.2.1 Risk Assessment · Ethical Purpose and Societal Benefit > Environmental impact
2.2.1 Risk Assessment · Ethical Purpose and Societal Benefit > Weaponised AI
3.1.3 International Agreements · Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering · Other (outside lifecycle)
Outside the standard AI system lifecycle
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management