This page is still being polished. If you have thoughts, please share them via the feedback form.
Data on this page is preliminary and may change. Please do not share or cite these figures publicly.
Changes to the model's learned parameters, architecture, or training process, including modifications to training data that affect what the model learns.
Also in AI System
Reasoning
Model tuning modifies learned parameters, but lacks specificity on mechanism (fine-tuning, unlearning, objective adjustment) to determine precise L3 code.
Tampering Attack Resistance
(TAR). A method for building tamper-resistant safeguards into open-weight LLMs to prevent the removal of safeguards.
1.1.3 Capability ModificationCTRL
A data curation framework to mitigate jailbreaking aacks during pre-training or fine-tuning.
1.1.1 Training DataGLiNER
Model artifacts:
1.1.9 OtherDP-transformers
Differential privacy training libraries:
1.1.2 Learning ObjectivesOpacus
Differential privacy training libraries:
1.1.2 Learning ObjectivesStarPII
Model artifacts:
1.1 ModelHarmAug
Distills large safety guard models into a 435M-parameter model.
1.1.2 Learning ObjectivesCircuit breakers
Prevents AI systems from generating harmful content by directly altering harmful model representations
1.1.3 Capability ModificationRefusal and adversarial training methods
Example: “Aligning LLMs to Be Robust Against Prompt Injection” Link.
1.1.2 Learning ObjectivesChild safety
Child Harm (including but not limited to grooming, minor sexualization, and illegal content such as Child Sexual Abuse Material, or CSAM). Note, this category is separated from other content safety issues given CSAM is illegal to possess, share, or distribute in many jurisdictions. It poses unique challenges for testing and mitigations implementation. 35
99 OtherContent safety
Content policies are specific to an AI system and its use cases. 37 A common example is the MLCommons AILuminate taxonomy. 38 Categories of content safety include but are not limited to: Violent Crimes, Non-violent Crimes, Sex-related Crimes, Child Sexual Exploitation, Indiscriminate Weapons, Suicide & Self-Harm, Hate, Specialized Advice, Defamation.
1.2.1 Guardrails & FilteringBias / Discrimination (alternatively, Legal and Rights Related)
Generation of content and/or predictive decisions that are biased, discriminatory and/or inconsistent; related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
99.9 OtherInformation risks (Privacy infringement)
Leaking, generating, or correctly inferring private and personal information about individuals.
99.9 OtherModel Integrity risks
In-Scope for our work: Basic adversarial aacks like simple jailbreaking remain a focus of our collective work as they are a common threat faced by AI systems. This guidance from NIST reviews typical aack vectors like jailbreaks and data extraction, and includes mitigations. Some of the aack types in the NIST guidance may be out of scope, such as deliberate actions by motivated, experienced adversaries aiming to disrupt, evade, compromise, or abuse the operation of the model or its output.
1.2.1 Guardrails & FilteringData for tuning & evaluation
1.1.1 Training DataA Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety
François, Camille; Péran, Ludovic; Bdeir, Ayah; Dziri, Nouha; Hawkins, Will; Jernite, Yacine; Kapoor, Sayash; Shen, Juliet; Khlaaf, Heidy; Klyman, Kevin; Marda, Nik; Pellat, Marie; Raji, Deb; Siddarth, Divya; Skowron, Aviya; Spisak, Joseph; Srikumar, Madhulika; Storchan, Victor; Tang, Audrey; Weedon, Jen (2025)
The rapid rise of open-weight and open-source foundation models is intensifying the obligation and reshaping the opportunity to make AI systems safe. This paper reports outcomes from the Columbia Convening on AI Openness and Safety (San Francisco, 19 Nov 2024) and its six-week preparatory programme involving more than forty-five researchers, engineers, and policy leaders from academia, industry, civil society, and government. Using a participatory, solutions-oriented process, the working groups produced a research agenda at the intersection of safety and open source AI, a mapping of existing and needed technical interventions and open source tools.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks