Changes to the model's learned parameters, architecture, or training process, including modifications to training data that affect what the model learns.
Developers of AI models must address biases in training datasets and algorithms to ensure fairness.
- **Mitigation Strategies:** Techniques such as data augmentation, re-sampling, and adversarial training reduce biases in training datasets. Regular audits and fairness metrics can evaluate the model’s impartiality during deployment.13
- **Research Insights:** Researchers such as Bender et al.32 have highlighted the risks of biased language models and suggested approaches to mitigate these risks.
Reasoning
Data augmentation and re-sampling modify training data composition to reduce model biases.
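As a minimal sketch of the re-sampling idea, the example below naively oversamples under-represented groups until the dataset is balanced. The function name `oversample_minority` and the `group` field are illustrative assumptions, not part of any specific framework.

```python
import random
from collections import Counter

def oversample_minority(records, group_key):
    """Duplicate records from under-represented groups until each group
    appears as often as the largest one (naive re-sampling)."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Pad with random duplicates up to the target count.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Hypothetical dataset: group "B" is under-represented.
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
balanced = oversample_minority(data, "group")
print(Counter(r["group"] for r in balanced))  # each group now has 8 records
```

In practice, duplicating records is the crudest option; data augmentation instead generates varied new examples for the minority group, which reduces the overfitting risk that plain duplication carries.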
Best Practices for Organizations Deploying Generative AI
Companies should prioritize ethical considerations when deploying generative AI: mitigating bias in training data and outputs, ensuring transparency and explainability in the system, establishing clear data governance and privacy safeguards, and maintaining human oversight of AI-driven decisions.
2.1.3 Policies & Procedures
Stakeholder Engagement
Involve employees, customers, and a diverse group of stakeholders in the decision-making process so that their needs and a wide range of views are considered. Maintain open lines of communication with all stakeholders about the goals and potential risks of deploying the AI system; this builds trust. Implement robust feedback mechanisms to gather input from stakeholders, users, and developers, which helps identify and address concerns promptly.
2.1.3 Policies & Procedures
Ethical Training
Develop and implement training programs that cover the ethical implications of generative AI so that employees understand the ethical considerations and their responsibilities. Train employees to handle ethical dilemmas related to generative AI using real-world scenarios; this practical approach builds a better understanding of ethical principles. Keep the training programs updated with the latest developments in AI ethics to ensure the organization remains compliant.
2.4.4 Training & Awareness
Monitoring
To monitor the effectiveness and impact of generative AI, establish clear performance metrics and review them regularly to maintain ethical standards. Implement audit trails to track the decision-making processes of generative AI systems and identify any biases or unethical behavior. Engage third-party auditors to conduct regular assessments of the AI systems; independent audits provide a neutral view of the system’s performance.
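The audit-trail idea can be sketched as follows, assuming a deployment where each generation event is logged and periodically reviewed. The names `log_decision`, `flag_rate`, and the version string `gen-ai-v1.2` are hypothetical, not part of any specific system.

```python
import hashlib
import json
import time

# In practice this would be append-only, tamper-evident storage.
AUDIT_LOG = []

def log_decision(prompt, output, model_version, flagged=False):
    """Record one generation event so auditors can later reconstruct
    what the system produced and under which model version.
    Hashing the texts keeps raw personal data out of the log."""
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "flagged": flagged,  # set True when a reviewer marks the output as problematic
    })

def flag_rate(log):
    """Share of logged events flagged during review -- a simple metric
    to compare across audit periods."""
    return sum(1 for e in log if e["flagged"]) / len(log) if log else 0.0

log_decision("describe a typical nurse", "generated text ...", "gen-ai-v1.2")
print(json.dumps(AUDIT_LOG[0], indent=2))
```

A metric like `flag_rate`, tracked per model version, gives third-party auditors a concrete, reproducible number to review rather than anecdotal reports.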
2.2 Risk & Assurance
Establish Clear Ethical Guidelines
Governments should develop ethical guidelines for the development and deployment of generative AI.
3.1.1 Legislation & Policy
Implement Robust Data Protection Laws
Create and enforce robust laws governing the handling of personal data used by generative AI.
3.1.1 Legislation & Policy
Ethical Considerations and Responsible Governance of Generative AI: A Systematic Review
Pathan, Mohammed Karimkhan; Shah, Aman (2025)
Generative artificial intelligence (AI), a transformative technology capable of generating or creating text, images, and other content, has revolutionized industries while raising critical ethical and governance challenges. This review systematically examines key ethical considerations, such as intellectual property rights, bias, fairness, misinformation, data privacy, environmental impact, and the need for human oversight. These challenges highlight complexities in governing generative AI, requiring robust international guidelines and best practices. By analyzing existing frameworks and case studies, our review identifies significant gaps in current research and policy. Key findings emphasize the importance of multi-stakeholder collaboration among policymakers, industry leaders, and researchers in developing an adaptive governance framework that prioritizes transparency, accountability, and inclusivity to mitigate risks and promote responsible AI. The review highlights the importance of sustainable AI in addressing environmental concerns and advocates for policies that ensure equitable access while addressing societal impacts such as the spread of misinformation and the potential for exacerbating existing inequalities. By synthesizing insights from diverse sources, this study provides actionable recommendations to guide the ethical and responsible governance of generative AI technologies that align with evolving technological advancements and societal needs.
Collect and Process Data
Gathering, curating, labelling, and preprocessing training data
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks