Provides eight aspirational principles for the responsible development of artificial intelligence, stressing fairness, security, privacy, and amicable cooperation.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This document is a set of aspirational principles issued by an expert committee using entirely voluntary language ('should') with no enforcement mechanisms, penalties, or binding obligations.
The document provides minimal to basic coverage across multiple risk domains. Its primary focus areas are fairness and discrimination (1.1, 1.3), privacy (2.1), misinformation risks (3.1), malicious use prevention (4.1, 4.2, 4.3), human-AI interaction (5.1, 5.2), socioeconomic impacts (6.1, 6.2, 6.3), and AI system safety (7.1, 7.3, 7.4). Coverage is aspirational and principle-based rather than detailed, with most subdomains receiving the minimal coverage score of 2.
This document provides cross-sectoral governance principles applicable to AI development and deployment across all economic sectors. It does not focus on specific industries; rather, it establishes broad principles for responsible AI that would apply to any sector that uses AI technology.
The document addresses multiple stages of the AI lifecycle with emphasis on development, deployment, and ongoing operation. It covers design principles (Plan and Design), data handling (Collect and Process Data), model development (Build and Use Model), and particularly emphasizes monitoring and governance throughout the lifecycle (Operate and Monitor).
The document uses general terminology, referring to 'AI', 'AI systems', and 'AI development' without distinguishing between specific technical categories such as AI models versus AI systems, or general-purpose versus task-specific AI. There is no mention of compute thresholds, foundation models, or open-weight models.
National New Generation Artificial Intelligence Governance Expert Committee
The document is explicitly issued by the National New Generation Artificial Intelligence Governance Expert Committee, as stated at its end, dated June 17, 2019.
No enforcement body or enforcement mechanisms are specified in this document. The principles are voluntary and rely on self-discipline rather than external enforcement.
No monitoring body or monitoring mechanisms are specified in this document. The document mentions the need for governance systems but does not designate specific monitoring entities.
No specific entities named; applies broadly to AI developers, users, and other interested parties
The document explicitly states that 'various parties related to AI development should adhere to the following principles' and later specifies 'AI developers, users, and other interested parties' as having responsibilities.
20 subdomains covered (all 20 rated Minimal)