Runtime monitoring, observability, performance tracking, and anomaly detection in production.
In the post-deployment phase, continuous monitoring and improvement become the focus. The governance framework supports a feedback loop for real-time adjustments to the GenAI system as it operates live. By embedding continuous monitoring, organizations ensure the model’s performance remains aligned with evolving regulations and technological advancements, reinforcing trust and safety. Additionally, this vigilant oversight helps detect and address concept drift—a phenomenon where the model’s predictions become less accurate over time due to changes in underlying data patterns or external conditions. Proactively managing concept drift not only sustains model accuracy but also aligns with previously mentioned risk-mitigation strategies, ensuring consistent and reliable outcomes in dynamic environments.
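The concept-drift monitoring described above can be sketched as a simple statistical comparison between a reference (training-time) sample and a live production sample. This is a minimal illustration using the Population Stability Index (PSI); the bucket count, the 0.2 alert threshold, and the variable names are illustrative assumptions, not prescriptions from this guide.

```python
import math

def psi(reference, live, n_buckets=10):
    """Population Stability Index between two numeric samples.

    Buckets are derived from the reference sample's range; live values
    outside that range are clamped into the edge buckets.
    """
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / n_buckets or 1.0

    def bucket_fractions(sample):
        counts = [0] * n_buckets
        for x in sample:
            idx = min(int((x - lo) / width), n_buckets - 1)
            counts[max(idx, 0)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]

    ref_f, live_f = bucket_fractions(reference), bucket_fractions(live)
    return sum((l - r) * math.log(l / r) for r, l in zip(ref_f, live_f))

# Illustrative data: model scores at training time vs. in production.
reference = [0.1 * i for i in range(100)]        # training-time scores
stable    = [0.1 * i for i in range(100)]        # same distribution
shifted   = [0.1 * i + 5.0 for i in range(100)]  # drifted scores

stable_psi = psi(reference, stable)
shifted_psi = psi(reference, shifted)
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.2 as actionable drift; in practice such a check would run on a schedule against live model outputs and feed the governance feedback loop described above.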
Reasoning
Post-deployment continuous monitoring detects concept drift and performance degradation through runtime observability.
Ideation and Planning
During ideation and planning, governance focuses on strategic alignment with ethical standards and data management principles. At this foundational phase, the organization’s core values and ethical commitments are embedded in the GenAI project’s purpose, objectives, and design. Clear guidelines on ethical GenAI practices and data governance establish a strong foundation for responsible GenAI development.
2.4.2 Design Standards
Ideation and Planning > strategic alignment
Establish strategic alignment with ethical principles, data management policies, and organizational objectives.
2.1.3 Policies & Procedures
Ideation and Planning > Embed governance
Embed governance from the outset by defining clear values, risks, and desired AI outcomes.
2.1.3 Policies & Procedures
Data Collection, Exploration, and Preparation
At the data collection and preparation stage, the governance framework prioritizes data integrity, privacy, and security, essential for responsible GenAI development. Data integrity refers to the accuracy, consistency, and reliability of data throughout its lifecycle, ensuring that information remains unaltered and trustworthy across collection, storage, processing, and analysis. Governance ensures data is representative, accurate, and responsibly sourced, particularly in regulated sectors. Robust data lineage and quality assurance practices enhance transparency and address potential biases early on, promoting equitable GenAI outcomes.
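The data-integrity and lineage practices above can be made concrete as an automated quality gate run before training data is accepted. The following is a hedged sketch under assumed conventions: records are dicts with a `text` field and a `source` provenance tag, and the required fields and flagging rules are illustrative choices, not part of the framework itself.

```python
def quality_report(records, required_fields=("text", "source")):
    """Flag records with missing/empty required fields and
    summarize the provenance mix of the dataset."""
    issues = []          # (record index, list of missing fields)
    source_counts = {}   # provenance tag -> record count
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            issues.append((i, missing))
        src = rec.get("source") or "unknown"
        source_counts[src] = source_counts.get(src, 0) + 1
    return {"issues": issues, "source_mix": source_counts}

# Illustrative batch: one record fails the empty-text check.
records = [
    {"text": "sample a", "source": "licensed"},
    {"text": "", "source": "scraped"},      # empty text -> flagged
    {"text": "sample c", "source": "licensed"},
]
report = quality_report(records)
```

The `source_mix` summary gives a first, crude signal about representativeness and sourcing balance; a production pipeline would extend this with lineage metadata and bias checks appropriate to the regulated sector.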
2.4.2 Design Standards
Data Collection, Exploration, and Preparation > privacy compliance
Prioritize data integrity, quality assurance, and privacy compliance.
2.1.3 Policies & Procedures
Data Collection, Exploration, and Preparation > data governance
Establish strong data governance, including representativeness, sourcing standards, and lineage tracking.
2.1.3 Policies & Procedures
Approaches to Responsible Governance of GenAI in Organizations
Joshi, Himanshu; Hassani, Shabnam; Gandhi, Dhari; Hartman, Lucas (2025)
The rapid evolution of Generative AI (GenAI) has introduced unprecedented opportunities while presenting complex challenges around ethics, accountability, and societal impact. This paper draws on a literature review, established governance frameworks, and industry roundtable discussions to identify core principles for integrating responsible GenAI governance into diverse organizational structures. Our objective is to provide actionable recommendations for a balanced, risk-based governance approach that enables both innovation and oversight. Findings emphasize the need for adaptable risk assessment tools, continuous monitoring practices, and cross-sector collaboration to establish trustworthy GenAI. These insights provide a structured foundation and Responsible GenAI Guide (ResAI) for organizations to align GenAI initiatives with ethical, legal, and operational best practices.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Developer
Entity that creates, trains, or modifies the AI system
Govern
Policies, processes, and accountability structures for AI risk management