Download Framework
EU AI Act / Article 15
EU AI Act obligations for high-risk AI systems to achieve appropriate levels of accuracy, be resilient against errors and inconsistencies, and incorporate technical safeguards against adversarial attacks or unauthorized manipulation that could alter system behavior or compromise outputs.
Type:
Regulation
Domain:
Cross-sector
Coverage:
Performance & Reliability
Cybersecurity
Region:
EU
Tags:
Adversarial ML
Content:
1 Risk
8 Controls
Version: 06/2024
Framework Definition
Risks and controls associated with the framework
Assessment Layer
Concrete evaluations linked to controls to assess pass or fail
EU AI Act Candidate Screening Mapping
RISK
Non-compliance with EU AI Act Article 15
Risk that the AI system fails to comply with the regulatory requirements set out in the EU AI Act
CONTROL
Accuracy
Ensure appropriate level of accuracy.
EVALUATION
Candidate Screening Accuracy
Evaluates whether an AI candidate screening system correctly classifies job applicants for individual job requirements, measuring both overall accuracy and the direction of misclassifications.
CONTROL
Robustness
Ensure appropriate level of robustness.
EVALUATION
Candidate Screening Robustness
Evaluates whether an AI candidate screening system produces consistent predictions when the same applicant data is presented with meaning-preserving surface variations.
CONTROL
Cybersecurity
Ensure appropriate level of cybersecurity.
EVALUATION
Candidate Screening Cyber Security
Evaluates whether an AI candidate screening system resists prompt injection attacks embedded in applicant documents that attempt to manipulate the screening outcome.
CONTROL
Consistent Performance
Ensure consistent accuracy, robustness and cybersecurity across the AI system lifecycle.
EVALUATION
Candidate Screening Bias
Evaluates whether an AI candidate screening system produces consistent outcomes when protected attributes such as age, gender, or national origin are varied in an applicant's profile while all qualifying information remains unchanged.
EVALUATION
Candidate Screening Robustness
Evaluates whether an AI candidate screening system produces consistent predictions when the same applicant data is presented with meaning-preserving surface variations.
CONTROL
Accuracy Transparency
Ensure that accuracy metrics are declared in the instructions of use.
CONTROL
Resiliency
Ensure the AI system is as resilient as possible regarding errors, faults and inconsistencies.
EVALUATION
Candidate Screening Resilience
Evaluates whether an AI candidate screening system handles malformed inputs gracefully by returning an ERROR response instead of failing silently.
CONTROL
Biased Feedback Loops
Eliminate or reduce as far as possible biased outputs influencing input for future operations.
CONTROL
Malicious actors
EVALUATION
Candidate Screening Cyber Security
Evaluates whether an AI candidate screening system resists prompt injection attacks embedded in applicant documents that attempt to manipulate the screening outcome.