Technical mechanisms operating on non-model components of the AI system without modifying model weights. Components include: input/output interfaces, runtime environment, guardrail/monitoring classifiers, tool chain, and hardware.
Also in AI System
Reasoning
Insufficient detail to identify mechanism or focal activity; no description provided.
Mitigations for Availability
2.3.2 Access & Security Controls
Mitigations for Availability > Leverage protections provided by model hosters
As a model integrator, leveraging the protections provided by model hosters is critical to addressing threats such as bot activity, Denial-of-Service (DoS), and Denial-of-Wallet attacks. These are of particular concern given that bot-generated traffic accounts for approximately 47% of Internet activity.
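To make the Denial-of-Wallet concern concrete, the sketch below enforces a per-API-key daily spend cap on top of whatever edge protections the hoster provides. This is a minimal sketch, not a reference implementation: the `SpendGuard` name, the budget figure, and the cost-estimation interface are all assumptions.

```python
import time
from collections import defaultdict

class SpendGuard:
    """Illustrative per-API-key spend cap to bound Denial-of-Wallet exposure.

    Complements (does not replace) the bot and DoS protections a model
    hoster applies at the network edge. All names and limits are assumptions.
    """

    def __init__(self, daily_budget_usd: float = 10.0):
        self.daily_budget_usd = daily_budget_usd
        self._spent = defaultdict(float)  # api_key -> spend in current window
        self._window_start = time.time()

    def charge(self, api_key: str, estimated_cost_usd: float) -> bool:
        """Return True if the request fits the key's remaining daily budget."""
        if time.time() - self._window_start >= 86_400:
            # New 24-hour window: reset all per-key counters.
            self._spent.clear()
            self._window_start = time.time()
        if self._spent[api_key] + estimated_cost_usd > self.daily_budget_usd:
            return False  # reject before the hoster bills for the call
        self._spent[api_key] += estimated_cost_usd
        return True
```

A caller would estimate the cost of each request (for example, from expected token counts) and reject or queue requests for which `charge` returns False.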
2.3.2 Access & Security Controls
Mitigations for Availability > Documenting protections
Documenting these protections helps meet EU AI Act requirements.
3.1.4 Compliance Requirements
Mitigations for Availability > Measuring inference costs
In addition, measuring inference costs, such as time or energy consumption, and implementing cut-off thresholds can prevent abuse [18]. This approach potentially eliminates the need for complex sponge-attack detectors while maintaining operational efficiency.
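One way to implement such cut-offs is a wall-clock and token budget around streaming generation, as in the minimal sketch below. `generate_token` is an assumed caller-supplied streaming interface, and both thresholds are placeholders to be tuned against the deployment's normal cost profile.

```python
import time

# Illustrative thresholds; tune to the deployment's normal cost profile.
MAX_SECONDS_PER_REQUEST = 5.0
MAX_TOKENS_PER_REQUEST = 512

def generate_with_cutoff(generate_token, prompt):
    """Stream tokens from the assumed `generate_token` generator and stop
    once the time or token budget is exhausted, capping sponge-attack cost."""
    start = time.monotonic()
    tokens = []
    for token in generate_token(prompt):
        if time.monotonic() - start > MAX_SECONDS_PER_REQUEST:
            break  # wall-clock budget exhausted: abort expensive inference
        tokens.append(token)
        if len(tokens) >= MAX_TOKENS_PER_REQUEST:
            break  # token budget exhausted
    return tokens
```

Measuring wall-clock time per request is the cheapest proxy for inference cost; energy-based metering would follow the same cut-off pattern with a different measurement source.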
1.2.3 Monitoring & Detection
Mitigations for Confidentiality
2.3.2 Access & Security Controls
Mitigations for Confidentiality > Query management
Query management plays a critical role in mitigating attacks such as model stealing and model inversion. Setting query rate limits denies attackers the large query volumes these attacks require, while restricting outputs to class labels, rather than confidence scores, effectively reduces the risk of membership inference attacks [19]. However, these measures may be insufficient against advanced label-only attacks [20] and require further refinement.
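The sketch below combines both measures: a per-client sliding-window rate limit and label-only responses. The `QueryManager` name and the `model_predict` interface (assumed to return a label together with confidence scores) are illustrative assumptions, not an API from the cited works.

```python
import time
from collections import defaultdict, deque

MAX_QUERIES_PER_MINUTE = 30  # illustrative limit

class QueryManager:
    """Per-client sliding-window rate limit plus label-only responses,
    raising the query cost of model stealing, model inversion, and
    membership inference. Names and limits here are assumptions."""

    def __init__(self, model_predict):
        # model_predict is assumed to return (label, confidence_scores).
        self.model_predict = model_predict
        self._history = defaultdict(deque)  # client_id -> request timestamps

    def query(self, client_id: str, x):
        now = time.monotonic()
        window = self._history[client_id]
        while window and now - window[0] > 60.0:
            window.popleft()  # drop timestamps outside the 60-second window
        if len(window) >= MAX_QUERIES_PER_MINUTE:
            raise RuntimeError("rate limit exceeded")
        window.append(now)
        label, _scores = self.model_predict(x)
        return label  # expose the class label only, never the scores
```

Rejecting over-limit queries outright is the simplest policy; a production system might instead throttle, degrade output fidelity, or require re-authentication.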
1.2.1 Guardrails & Filtering
Compliance Made Practical: Translating the EU AI Act into Implementable Security Actions
Bunzel, Niklas (2025)
The EU AI Act, along with emerging regulations in other countries, mandates that AI systems meet security requirements to prevent risks associated with AI misuse and vulnerabilities. However, for practitioners, defining and achieving a secure AI system is complex and context-dependent, posing challenges in understanding what actions they need to take and when they are sufficient. ISO/IEC TR 24028/29 and ENISA Securing Machine Learning Algorithms offer a comprehensive framework for AI security, aligning with the EU AI Act's requirements by addressing risks, threats, and mitigation strategies. However, for practical implementation, these reports lack hands-on guidance. Industry resources like the OWASP AI Exchange and OWASP LLM Top 10 fill this gap by providing accessible, actionable insights for securing AI systems effectively. This paper addresses the question of responsibility in AI risk mitigation, especially for companies utilizing pretrained or off-the-shelf models. We want to clarify how companies can practically comply with the upcoming ISO 27090 and ensure compliance with the EU AI Act through actionable security strategies tailored to this prevalent use case. © 2025 IEEE.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
User
Individual or organisation that directly uses the AI system
Measure
Quantifying, testing, and monitoring identified AI risks