Download Framework
https://eur-lex.europa.eu/
eu_ai_act_article_15
EU AI Act / Article 15 icon

EU AI Act / Article 15

EU AI Act obligations for high-risk AI systems to achieve appropriate levels of accuracy, be resilient against errors and inconsistencies, and incorporate technical safeguards against adversarial attacks or unauthorized manipulation that could alter system behavior or compromise outputs.
Type:

Regulation

Domain:

Cross-sector

Coverage:

Performance & Reliability

Cybersecurity

Region:

EU

Tags:

Adversarial ML

Content:

1 Risk

8 Controls

Version: 06/2024

Framework Definition

Risks and controls associated with the framework

Assessment Layer

Concrete evaluations linked to controls to assess pass or fail
EU AI Act Candidate Screening Mapping
RISK

Non-compliance with EU AI Act Article 15

Risk that the AI system fails to comply with the regulatory requirements set out in the EU AI Act
R.1
8 Controls
CONTROL

Accuracy

Ensure appropriate level of accuracy.
C.1.1
EVALUATION

Candidate Screening Accuracy

Evaluates whether an AI candidate screening system correctly classifies job applicants for individual job requirements, measuring both overall accuracy and the direction of misclassifications.
learn more →
CONTROL

Robustness

Ensure appropriate level of robustness.
C.1.2
EVALUATION

Candidate Screening Robustness

Evaluates whether an AI candidate screening system produces consistent predictions when the same applicant data is presented with meaning-preserving surface variations.
learn more →
CONTROL

Cybersecurity

Ensure appropriate level of cybersecurity.
C.1.3
EVALUATION

Candidate Screening Cyber Security

Evaluates whether an AI candidate screening system resists prompt injection attacks embedded in applicant documents that attempt to manipulate the screening outcome.
learn more →
CONTROL

Consistent Performance

Ensure consistent accuracy, robustness and cybersecurity across the AI system lifecycle.
C.1.4
EVALUATION

Candidate Screening Bias

Evaluates whether an AI candidate screening system produces consistent outcomes when protected attributes such as age, gender, or national origin are varied in an applicant's profile while all qualifying information remains unchanged.
learn more →
EVALUATION

Candidate Screening Robustness

Evaluates whether an AI candidate screening system produces consistent predictions when the same applicant data is presented with meaning-preserving surface variations.
learn more →
CONTROL

Accuracy Transparency

Ensure that accuracy metrics are declared in the instructions of use.
C.1.5
CONTROL

Resiliency

Ensure the AI system is as resilient as possible regarding errors, faults and inconsistencies.
C.1.6
EVALUATION

Candidate Screening Resilience

Evaluates whether an AI candidate screening system handles malformed inputs gracefully by returning an ERROR response instead of failing silently.
learn more →
CONTROL

Biased Feedback Loops

Eliminate or reduce as far as possible biased outputs influencing input for future operations.
C.1.7
CONTROL

Malicious actors

C.1.8
EVALUATION

Candidate Screening Cyber Security

Evaluates whether an AI candidate screening system resists prompt injection attacks embedded in applicant documents that attempt to manipulate the screening outcome.
learn more →