Internal policies, content safety guidelines, and ethical design principles governing system creation.
Bias and fairness in AI require a comprehensive approach that includes diverse data, ethical AI design, continuous monitoring, and the incorporation of societal values.
Reasoning
Establishes ethical design principles incorporating fairness and societal values into AI system development.
Mitigating bias
Mitigating bias in AI systems involves several strategies. Diverse data collection ensures that data sets are representative of all relevant groups, reducing the risk of biased outcomes. Algorithmic auditing involves regularly reviewing algorithms to detect and correct biases. Human oversight is also essential, as human judgement can identify and correct biases that algorithms might overlook. These measures collectively help in developing fairer AI systems.
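The algorithmic auditing described above can be sketched in a few lines. This is a minimal illustration on hypothetical data, not a full auditing pipeline: it compares positive-outcome rates across groups and flags disparities using the common "four-fifths" rule of thumb, at which point human oversight would take over.

```python
# Minimal algorithmic-auditing sketch (hypothetical data and threshold):
# compare positive-outcome rates across groups and flag large disparities
# for human review.

def selection_rates(outcomes, groups):
    """Positive-outcome rate per group; outcomes are 0/1, groups are labels."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(outcomes[i] for i in idx) / len(idx)
    return rates

def disparate_impact(outcomes, groups):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative audit run on made-up decisions for two groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact(outcomes, groups)
if ratio < 0.8:  # four-fifths rule of thumb: escalate to human review
    print(f"audit flag: disparate impact ratio {ratio:.2f}")
```

In practice such an audit would run regularly over live decisions, with flagged disparities routed to a human reviewer rather than acted on automatically.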
2.2 Risk & Assurance
Measures for fairness
To ensure fairness in AI systems, diverse data sets should be used to train models, representing all relevant groups accurately. Fairness metrics, such as demographic parity, ensure that decisions are independent of sensitive attributes like race or gender. Techniques such as equal opportunity and equalised odds aim to provide equal predictive performance across different groups. Ethical guidelines must be established and followed to ensure AI systems are designed and used responsibly. Incorporating individual fairness considerations into decision-making processes is also crucial for achieving fairness.
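Two of the metrics named above can be computed directly from a model's predictions. The following is a hedged sketch on hypothetical binary predictions: demographic parity difference (gap in positive-prediction rates between two groups) and an equalised-odds gap (the larger of the between-group differences in true-positive and false-positive rates).

```python
# Sketch of two fairness metrics on hypothetical binary data:
# demographic parity difference and an equalised-odds gap for two groups
# labelled "a" and "b". Values of 0 indicate parity on that metric.

def rate(preds, mask):
    """Mean prediction over the rows selected by a boolean mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel) if sel else 0.0

def demographic_parity_diff(preds, groups):
    """|P(pred=1 | group a) - P(pred=1 | group b)|."""
    return abs(rate(preds, [g == "a" for g in groups])
               - rate(preds, [g == "b" for g in groups]))

def equalised_odds_gap(preds, labels, groups):
    """Max over y in {0, 1} of the between-group gap in P(pred=1 | label=y)."""
    gaps = []
    for y in (0, 1):
        gaps.append(abs(
            rate(preds, [g == "a" and l == y for g, l in zip(groups, labels)])
            - rate(preds, [g == "b" and l == y for g, l in zip(groups, labels)])))
    return max(gaps)
```

Libraries such as Fairlearn provide production-grade versions of these metrics; the point here is only that both reduce to comparing simple conditional rates across groups.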
1.1 Model
Feedback loop
A continuous feedback loop is essential for maintaining fairness in AI systems. This involves regular monitoring and adaptive adjustments to the AI models. Continuous monitoring helps detect biases as they emerge, allowing for timely interventions.
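The monitoring side of such a feedback loop can be sketched as a sliding window over recent predictions. All thresholds, window sizes, and names below are illustrative assumptions: the monitor tracks the between-group gap in positive-prediction rates and raises an alert when it drifts past a tolerance, prompting review or retraining.

```python
# Minimal continuous-monitoring sketch (window size and tolerance are
# illustrative): track the gap in positive-prediction rates between groups
# over a sliding window and alert when the gap exceeds a tolerance.
from collections import deque

class FairnessMonitor:
    def __init__(self, window=100, tolerance=0.1):
        self.window = deque(maxlen=window)  # recent (prediction, group) pairs
        self.tolerance = tolerance

    def observe(self, prediction, group):
        """Record one decision; return True when an alert should fire."""
        self.window.append((prediction, group))
        gap = self.current_gap()
        return gap is not None and gap > self.tolerance

    def current_gap(self):
        """Spread between the highest and lowest group rates in the window."""
        by_group = {}
        for pred, g in self.window:
            by_group.setdefault(g, []).append(pred)
        if len(by_group) < 2:
            return None  # need at least two groups to compare
        rates = [sum(v) / len(v) for v in by_group.values()]
        return max(rates) - min(rates)
```

An alert here would trigger the "timely interventions" described above, such as human review of recent decisions or scheduling a retraining run, rather than automatic model changes.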
2.3.3 Monitoring & Logging
Legal and regulatory compliance
As new technologies advance, they require legal and regulatory compliance frameworks to ensure ethical use, privacy, and security.
3.1 Legal & Regulatory
Legal and regulatory compliance > Domestic regulation
Nations need to establish clear ethical guidelines and standards to govern the development and use of AI. These guidelines should address various concerns, including privacy, transparency, bias, and accountability.
3.1.1 Legislation & Policy
Legal and regulatory compliance > International regulation
Establishing global standards for AI, akin to the Paris Agreement for climate change, is the next step in ensuring the safe and ethical use of AI. These standards should address issues such as the AI arms race, autonomous weapons, and global surveillance systems.
3.1.3 International Agreements
Ensuring compliance in AI and ML systems
Creating AI governance committees and conducting regular system audits can help ensure accuracy, mitigate bias, and support ethical alignment. Organisations must also comply with data privacy laws when implementing AI/ML systems. Regular assessments should be conducted to identify the risks associated with AI/ML systems, and mitigation plans should be put in place to address them.
2.2 Risk & Assurance
AI supply chain security and risk propagation
To manage these risks, regulatory frameworks must incorporate AI security standards that enforce stringent vetting of AI models, continuous adversarial robustness assessments, and secure model distribution policies. AI security capacity-building efforts should prioritise defensive mechanisms such as adversarial training, differential privacy, homomorphic encryption, and federated trust frameworks to prevent risk propagation across AI-driven supply chains.
3.1.1 Legislation & Policy
GDPR compliance in AI
The GDPR (2018) is a crucial piece of legislation in the European Union and the United Kingdom (ICO, 2018) that focuses on data protection and privacy.
3.1.1 Legislation & Policy
Frontier AI regulation: what form should it take?
Radanliev, Petar (2025)
Frontier AI systems, including large-scale machine learning models and autonomous decision-making technologies, are deployed across critical sectors such as finance, healthcare, and national security. These systems present new cyber-risks, including adversarial exploitation, data integrity threats, and legal ambiguities in accountability. The absence of a unified regulatory framework has led to inconsistencies in oversight, creating vulnerabilities that can be exploited at scale. By integrating perspectives from cybersecurity, legal studies, and computational risk assessment, this research evaluates regulatory strategies for addressing AI-specific threats, such as model inversion attacks, data poisoning, and adversarial manipulations that undermine system reliability. The methodology involves a comparative analysis of domestic and international AI policies, assessing their effectiveness in managing emerging threats. Additionally, the study explores the role of cryptographic techniques, such as homomorphic encryption and zero-knowledge proofs, in enhancing compliance, protecting sensitive data, and ensuring algorithmic accountability. Findings indicate that current regulatory efforts are fragmented and reactive, lacking the necessary provisions to address the evolving risks associated with frontier AI. The study advocates for a structured regulatory framework that integrates security-first governance models, proactive compliance mechanisms, and coordinated global oversight to mitigate AI-driven threats. The investigation recognises that most countries do not appear inclined to follow European Union ideals, and in the wake of this trend, the research presents a regulatory blueprint that balances technological advancement with decentralised security enforcement. Copyright © 2025 Radanliev.
Other (multiple stages)
Applies across multiple lifecycle stages
Deployer
Entity that integrates and deploys the AI system for end users
Other
Risk management function not captured by the standard AIRM categories