Laws, legal frameworks, and binding policy instruments governing AI development and use.
Organisations that develop, deploy or use AI systems should take the necessary steps to protect the rights in the resulting works through appropriate and directed application of existing intellectual property laws. Governments should investigate how AI‑authored works may be further protected, without seeking to create any new IP right at this stage.
Reasoning
Organisations protect AI-authored works by applying existing intellectual property laws, supported by design standards and ethical principles governing system development.
Supporting incentivisation and protection for innovation
3.1.1 Legislation & Policy: Protection of IP rights
2.3.2 Access & Security Controls: Development of new IP laws
3.1.1 Legislation & Policy: Ethical Purpose and Societal Benefit
Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, should require the purposes of such implementation to be identified and ensure that those purposes are consistent with the overall ethical purposes of beneficence and non-maleficence, as well as the other principles of the Policy Framework for Responsible AI.
3.2.2 Technical Standards: Ethical Purpose and Societal Benefit > Overarching principles
2.1.3 Policies & Procedures: Ethical Purpose and Societal Benefit > Work and automation
2.2.1 Risk Assessment: Ethical Purpose and Societal Benefit > Environmental impact
2.2.1 Risk Assessment: Ethical Purpose and Societal Benefit > Weaponised AI
3.1.3 International Agreements: Ethical Purpose and Societal Benefit > The weaponisation of false or misleading information
1.2.1 Guardrails & Filtering: Other (multiple stages)
Applies across multiple lifecycle stages
Developer
Entity that creates, trains, or modifies the AI system
Govern
Policies, processes, and accountability structures for AI risk management