Establishes regulations for developing, implementing, and using AI systems in Brazil, emphasizing human rights, privacy, non-discrimination, and transparency. Requires risk assessment of AI applications and prohibits excessive-risk systems. Defines governance and accountability obligations for high-risk AI and stipulates the right to contest automated decisions. Designates a competent authority for oversight and enforcement and prescribes penalties for non-compliance. Encourages innovation through regulatory sandboxes while ensuring data protection and ethical AI use.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative instrument (Federal Senate Bill) with extensive mandatory obligations, enforcement mechanisms including administrative sanctions and fines up to R$50 million, and a designated competent authority for oversight and enforcement.
The document provides good coverage of approximately 12-14 subdomains, with strong focus on discrimination and bias (1.1, 1.3), privacy and data protection (2.1), misinformation (3.1), malicious actors and misuse (4.1, 4.2, 4.3), human-computer interaction (5.1, 5.2), and AI system safety and robustness (7.1, 7.3, 7.4). Coverage is concentrated in discrimination prevention, transparency, human oversight, and governance frameworks.
This is a horizontal AI regulation that applies across all economic sectors in Brazil. The document explicitly identifies high-risk AI applications across multiple sectors including critical infrastructure, education, employment, financial services, healthcare, public administration, and national security. The regulation governs AI development and deployment comprehensively rather than being sector-specific.
The document comprehensively covers all stages of the AI lifecycle, with particularly strong emphasis on Design, Verification/Validation, Deployment, and Operation/Monitoring. It establishes requirements that span the full lifecycle, from initial planning and risk assessment through ongoing monitoring and incident reporting.
The document explicitly defines and covers AI systems broadly, with specific attention to high-risk applications. It does not explicitly mention frontier AI, general-purpose AI, foundation models, or specific compute thresholds. The focus is on AI systems generally, with risk-based categorization rather than distinctions based on technical architecture.
The bill was authored by Senator Rodrigo Pacheco and developed with input from a Commission of Jurists specifically established to draft AI legislation, which conducted extensive consultations including public hearings with over 70 specialists and received 102 written contributions.
The law establishes a central competent authority to be designated by the Executive Branch with comprehensive enforcement powers including inspection, sanctions, and regulatory authority. It also provides for coordination with sectoral regulatory bodies.
The competent authority has explicit monitoring responsibilities including inspection powers, receiving reports of serious incidents, maintaining public databases of impact assessments, and preparing annual reports on its activities.
The law explicitly defines and targets both AI system providers (developers) and operators (deployers), covering private entities as well as public-sector organizations at all levels of government. The definitions in Art. 4 clearly establish these as the regulated parties.
16 subdomains (8 Good, 8 Minimal)