AIUC-1 / AI Agent Standard
AIUC-1 is a standard for AI agents. It covers six risk areas: data & privacy, security, safety, reliability, accountability, and societal risks.
Type: Standard
Domain: Agentic
Coverage: Accountability & Governance, Privacy & Data, Cybersecurity, Safety & Reputational Harm, Performance & Reliability
Content: 6 risks, 49 controls
Version: 2026-01
Framework Definition
Risks and controls associated with the framework
Assessment Layer
Concrete evaluations linked to controls that assess whether each control passes or fails
No evaluation mapping defined yet.
RISK
Data & Privacy
Inadequate data and privacy protections expose users and enterprises to unauthorized access, data leakage, intellectual property exposure, and non-compliant use of personal or confidential information in AI training and inference, resulting in regulatory penalties, reputational damage, and loss of customer trust.
CONTROL
Establish input data policy
Ensure that AI input data policies are established, documented, and communicated to customers, covering how customer data is used for model training and inference processing, data retention periods, and customer data rights.
CONTROL
Establish output data policy
Ensure that AI output ownership, usage, opt-out, and deletion policies are established, documented, and clearly communicated to customers.
CONTROL
Limit AI agent data collection
Ensure that safeguards are implemented to restrict AI agent data access to task-relevant information only, based on user roles and operational context, minimizing unnecessary data exposure.
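One way to implement this control is to strip each record down to the fields permitted for the caller's role before it enters the agent's context. A minimal sketch in Python, assuming a hypothetical role-to-fields mapping (the mapping and record layout are illustrative, not part of AIUC-1):

```python
# Minimal sketch: restrict agent-visible data to task-relevant fields by role.
# ROLE_ALLOWED_FIELDS and the record layout are illustrative assumptions.

ROLE_ALLOWED_FIELDS = {
    "support_agent": {"ticket_id", "subject", "status"},
    "billing_agent": {"ticket_id", "invoice_id", "amount_due"},
}

def minimize_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to see."""
    allowed = ROLE_ALLOWED_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"ticket_id": 42, "subject": "Refund", "status": "open", "ssn": "..."}
print(minimize_record(record, "support_agent"))
# {'ticket_id': 42, 'subject': 'Refund', 'status': 'open'} -- 'ssn' never reaches the agent
```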
CONTROL
Protect IP & trade secrets
Ensure that technical safeguards and controls are implemented to prevent AI systems from leaking company intellectual property, trade secrets, or confidential information through model outputs or interactions.
CONTROL
Prevent cross-customer data exposure
Ensure that safeguards are implemented to prevent unauthorized cross-customer data exposure when combining or processing customer data from multiple sources.
CONTROL
Prevent PII leakage
Ensure that safeguards are implemented to detect and prevent the leakage of personally identifiable information (PII) through AI system outputs.
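A common building block for this control is an output-side redaction pass. The sketch below uses simple regular expressions; the patterns are illustrative assumptions, and production systems typically layer NER models and locale-specific rules on top:

```python
# Minimal sketch: regex-based PII detection on model outputs before they are
# returned to the user. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Mask matched PII and report which categories fired."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, hits

safe, hits = redact_pii("Reach me at jane@example.com or 555-867-5309.")
print(safe, hits)
```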
CONTROL
Prevent IP violations
Ensure that technical safeguards and controls are implemented to prevent AI outputs from reproducing or violating copyrights, trademarks, or other third-party intellectual property rights.
RISK
Security
Insufficient security controls expose AI systems to unauthorized access, adversarial attacks, prompt injection, jailbreak attempts, and endpoint scraping, resulting in system compromise, data breaches, reputational harm, and potential exploitation of AI capabilities for malicious purposes.
CONTROL
Third-party testing of adversarial robustness
Ensure that an adversarial testing program is established and conducted by qualified third parties to validate AI system resilience against adversarial inputs and prompt injection attempts, aligned with a recognized adversarial threat taxonomy.
CONTROL
Detect adversarial input
Ensure that monitoring capabilities are implemented to detect, alert on, and respond to adversarial inputs and prompt injection attempts in real time.
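As a starting point, such monitoring can be approximated with lightweight heuristics that alert on known injection phrasing. The marker list below is an illustrative assumption; real deployments add trained classifiers and anomaly detection on top:

```python
# Minimal sketch: flag likely prompt-injection attempts with keyword
# heuristics and emit an alert for responders.
import logging

INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your instructions",
)

logging.basicConfig(level=logging.WARNING)

def looks_adversarial(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def screen_input(user_input: str) -> bool:
    """Return True if the input may proceed; alert and block otherwise."""
    if looks_adversarial(user_input):
        logging.warning("possible prompt injection: %r", user_input[:80])
        return False
    return True
```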
CONTROL
Manage public release of technical details
Ensure that controls are implemented to prevent over-disclosure of technical information about AI systems and organizational details that could enable adversarial targeting or system exploitation.
CONTROL
Prevent AI endpoint scraping
Ensure that safeguards are implemented to detect and prevent unauthorized probing or scraping of external AI endpoints.
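Scraping defenses usually begin with per-client rate limiting. A minimal sliding-window sketch, with illustrative thresholds; real deployments pair this with authentication and anomaly detection:

```python
# Minimal sketch: per-client sliding-window rate limiting to slow automated
# probing of an AI endpoint. Thresholds are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 30

_history: dict[str, deque] = defaultdict(deque)

def allow_request(client_id: str, now: float | None = None) -> bool:
    """Admit the request unless the client exceeded the window quota."""
    now = time.monotonic() if now is None else now
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # likely scraping; throttle and flag for review
    window.append(now)
    return True
```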
CONTROL
Implement real-time input filtering
Ensure that real-time input filtering is implemented using automated moderation tools to intercept and block malicious or policy-violating inputs before processing.
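Conceptually, the filter sits in front of the model and rejects flagged inputs before any processing. In this sketch, `classify` is a stand-in for whatever automated moderation tool the organization actually uses:

```python
# Minimal sketch: a pre-processing gate that runs each input through a
# moderation check before the model sees it. The placeholder classifier
# and category names are illustrative assumptions.

BLOCKED_CATEGORIES = {"violence", "self_harm", "malware_request"}

def classify(text: str) -> set[str]:
    """Placeholder moderation call; swap in a real moderation API or model."""
    categories = set()
    if "build a bomb" in text.lower():
        categories.add("violence")
    return categories

def filter_input(text: str) -> str | None:
    """Return the input if clean, or None if it must be blocked."""
    flagged = classify(text) & BLOCKED_CATEGORIES
    if flagged:
        return None  # block before any model processing; log for audit
    return text
```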
CONTROL
Prevent unauthorized AI agent actions
Ensure that safeguards are implemented to restrict AI agent system access and actions to those consistent with operational context and declared objectives, preventing unauthorized or unintended behaviors.
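One concrete enforcement pattern is a per-agent action allowlist checked at dispatch time. The agent names and actions below are illustrative assumptions:

```python
# Minimal sketch: only execute agent actions that appear on a per-deployment
# allowlist tied to the agent's declared objective.

AGENT_ALLOWED_ACTIONS = {
    "refund_bot": {"lookup_order", "issue_refund"},
    "faq_bot": {"search_kb"},
}

def execute_action(agent: str, action: str, dispatch: dict) -> object:
    """Run an action only if the agent is authorized for it."""
    if action not in AGENT_ALLOWED_ACTIONS.get(agent, set()):
        raise PermissionError(f"{agent} is not authorized to call {action}")
    return dispatch[action]()

dispatch = {"search_kb": lambda: "kb results"}
print(execute_action("faq_bot", "search_kb", dispatch))   # ok
# execute_action("faq_bot", "issue_refund", dispatch)     # raises PermissionError
```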
CONTROL
Enforce user access privileges to AI systems
Ensure that user access controls and administrative privileges for AI systems are established, maintained, and enforced in accordance with access control policy.
CONTROL
Protect model deployment environment
Ensure that security measures including encryption, access controls, and authorization mechanisms are implemented and maintained for AI model deployment environments.
CONTROL
Limit output over-exposure
Ensure that output limitations and obfuscation techniques are implemented to prevent AI systems from exposing sensitive or excess information through their responses.
RISK
Safety
Failure to prevent harmful, offensive, or out-of-scope AI outputs exposes customers to physical, psychological, or financial harm, and exposes the organization to reputational damage, regulatory scrutiny, and erosion of customer trust.
CONTROL
Define AI risk taxonomy
Ensure that a risk taxonomy is established that categorizes and defines harmful, out-of-scope, hallucinated, and other high-risk output types, including tool call risks, based on application-specific usage and context.
CONTROL
Conduct pre-deployment testing
Ensure that internal testing of AI systems is conducted across all defined risk categories prior to deployment, for any system change requiring formal review or approval.
CONTROL
Prevent harmful outputs
Ensure that safeguards and technical controls are implemented to detect and prevent harmful AI outputs, including distressed outputs, angry responses, high-risk advice, offensive content, biased outputs, and deceptive content.
CONTROL
Prevent out-of-scope outputs
Ensure that safeguards and technical controls are implemented to detect and prevent AI outputs that fall outside the intended scope of the system, such as political discussion or unsanctioned healthcare advice.
CONTROL
Prevent customer-defined high-risk outputs
Ensure that safeguards and technical controls are implemented to detect and prevent additional high-risk outputs as defined in the organization's risk taxonomy.
CONTROL
Prevent output vulnerabilities
Ensure that safeguards are implemented to detect and prevent security vulnerabilities embedded in AI outputs from being executed or causing harm to users.
CONTROL
Flag high-risk outputs
Ensure that an alerting system is implemented to automatically flag high-risk outputs for timely human review and intervention.
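A minimal version of such an alerting system scores each output and pushes anything above a threshold onto a human-review queue. The scoring function and threshold here are placeholders for a real per-category classifier:

```python
# Minimal sketch: route outputs whose risk score crosses a threshold into a
# human-review queue. Score and threshold are illustrative assumptions.
import queue

REVIEW_THRESHOLD = 0.8
review_queue: "queue.Queue[dict]" = queue.Queue()

def risk_score(output: str) -> float:
    """Placeholder: return a 0-1 risk score from your classifier."""
    return 0.9 if "guaranteed returns" in output.lower() else 0.1

def flag_if_risky(output: str, conversation_id: str) -> None:
    """Queue high-scoring outputs for timely human review."""
    score = risk_score(output)
    if score >= REVIEW_THRESHOLD:
        review_queue.put({"conversation": conversation_id,
                          "output": output, "score": score})
```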
CONTROL
Monitor AI risk categories
Ensure that continuous monitoring of AI system outputs is implemented across all defined risk categories to enable timely detection and response to policy violations.
CONTROL
Enable real-time feedback and intervention
Ensure that mechanisms are implemented to enable real-time user feedback collection and human intervention capabilities to address harmful or erroneous AI outputs promptly.
CONTROL
Third-party testing for harmful outputs
Ensure that qualified third parties are appointed to evaluate AI system robustness against harmful outputs—including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception—at least every three months.
CONTROL
Third-party testing for out-of-scope outputs
Ensure that qualified third parties are appointed to evaluate AI system robustness against out-of-scope outputs, such as political discussion or unsanctioned healthcare advice, at least every three months.
CONTROL
Third-party testing for customer-defined risk
Ensure that qualified third parties are appointed to evaluate AI system robustness against additional high-risk outputs as defined in the organization's risk taxonomy at least every three months.
RISK
Reliability
Unreliable AI outputs, including hallucinations and unauthorized tool calls, expose customers to misinformation, financial loss, and harm, while undermining trust in AI systems and creating organizational liability.
CONTROL
Prevent hallucinated outputs
Ensure that safeguards and technical controls are implemented to detect and prevent hallucinated or factually incorrect outputs from AI systems.
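Hallucination detection is an open problem, but one inexpensive guard is a grounding check against the retrieved context. The lexical-overlap heuristic below is deliberately crude and offered only as an illustration; stronger systems use entailment models or citation verification:

```python
# Minimal sketch: flag output sentences with little word overlap against the
# retrieved source context. A crude heuristic, not a complete solution.

def grounded(sentence: str, context: str, min_overlap: float = 0.5) -> bool:
    """True if enough of the sentence's content words appear in the context."""
    words = {w for w in sentence.lower().split() if len(w) > 3}
    if not words:
        return True
    context_words = set(context.lower().split())
    return len(words & context_words) / len(words) >= min_overlap

def ungrounded_sentences(output: str, context: str) -> list[str]:
    """Return output sentences that fail the overlap check."""
    return [s for s in output.split(". ") if not grounded(s, context)]
```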
CONTROL
Third-party testing for hallucinations
Ensure that qualified third parties are appointed to evaluate AI system susceptibility to hallucinated outputs at least every three months.
CONTROL
Restrict unsafe tool calls
Ensure that safeguards and technical controls are implemented to prevent AI system tool calls from executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope.
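Beyond allowlisting which tools exist, each call's arguments can be validated against per-tool limits before execution. The tool and limit below are illustrative assumptions:

```python
# Minimal sketch: validate tool-call arguments against per-tool limits before
# execution, so a model cannot exceed its intended decision scope.

TOOL_LIMITS = {
    "issue_refund": {"max_amount": 100.00},
}

def validate_tool_call(name: str, args: dict) -> None:
    """Reject unregistered tools and arguments beyond the agent's authority."""
    if name not in TOOL_LIMITS:
        raise PermissionError(f"tool {name!r} is not registered")
    limits = TOOL_LIMITS[name]
    if name == "issue_refund" and args.get("amount", 0) > limits["max_amount"]:
        raise ValueError("refund exceeds agent authority; escalate to a human")
```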
CONTROL
Third-party testing of tool calls
Ensure that qualified third parties are appointed to evaluate AI system tool call behavior—including unauthorized actions, restricted information access, and out-of-scope decision-making—at least every three months.
RISK
Accountability
Weak governance, undefined ownership, and insufficient oversight of AI systems increase the likelihood of undetected failures, regulatory non-compliance, and inadequate response to AI-related incidents, resulting in organizational, legal, and reputational harm.
CONTROL
AI failure plan for security breaches
Ensure that a documented AI failure plan is established for AI privacy and security breaches, assigning accountable owners and defining notification and remediation procedures with third-party support as needed, including legal, public relations, and insurance stakeholders.
CONTROL
AI failure plan for harmful outputs
Ensure that a documented AI failure plan is established for harmful AI outputs that cause significant customer harm, assigning accountable owners and defining remediation procedures with third-party support as needed, including legal, public relations, and insurance stakeholders.
CONTROL
AI failure plan for hallucinations
Ensure that a documented AI failure plan is established for hallucinated AI outputs that cause substantial customer financial loss, assigning accountable owners and defining remediation procedures with third-party support as needed, including legal, public relations, and insurance stakeholders.
CONTROL
Assign accountability
Ensure that all AI system changes across the development and deployment lifecycle that require formal review or approval are documented, assigned a named accountable lead, and that approvals are recorded with supporting evidence.
CONTROL
Assess cloud vs on-prem processing
Ensure that criteria are established for selecting cloud providers and determining circumstances for on-premises processing, considering data sensitivity, regulatory requirements, security controls, and operational needs.
CONTROL
Conduct vendor due diligence
Ensure that AI vendor due diligence processes are established and applied to foundation and upstream model providers, covering data handling practices, PII controls, security measures, and regulatory compliance.
CONTROL
Review internal processes
Ensure that regular internal reviews of key AI governance processes are conducted, with review records and approvals documented and maintained.
CONTROL
Monitor third-party access
Ensure that systems are implemented to continuously monitor and audit third-party access to AI systems and associated data.
CONTROL
Establish AI acceptable use policy
Ensure that an AI acceptable use policy is established, documented, and implemented across the organization, defining permitted and prohibited uses of AI systems.
CONTROL
Record processing locations
Ensure that all AI data processing locations are documented and maintained to support regulatory compliance, data sovereignty requirements, and incident response.
CONTROL
Document regulatory compliance
Ensure that applicable AI laws, regulations, and standards are identified and documented, along with required data protections and organizational strategies for achieving and maintaining compliance.
CONTROL
Implement quality management system
Ensure that a quality management system for AI systems is established and maintained, proportionate to the size and complexity of the organization and the risk level of deployed AI systems.
CONTROL
Log model activity
Ensure that logs of AI system processes, actions, and model outputs are maintained where permitted, to support incident investigation, auditing, and explanation of AI system behavior.
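A minimal implementation is append-only JSON-lines logging of inputs, outputs, and tool calls. Field names are illustrative; payloads should be redacted or hashed where logging raw text is not permitted:

```python
# Minimal sketch: append-only JSON-lines logging of model activity to support
# audit and incident investigation. Field names are illustrative assumptions.
import json, time, uuid

def log_model_event(path: str, event_type: str, payload: dict) -> None:
    """Append one structured event record to a JSONL audit log."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "type": event_type,   # e.g. "input", "output", "tool_call"
        "payload": payload,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_model_event("model_activity.jsonl", "output",
                {"conversation": "c-123", "text": "[redacted]"})
```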
CONTROL
Implement AI disclosure mechanisms
Ensure that clear and accessible disclosure mechanisms are implemented to inform users when they are interacting with an AI system rather than a human.
CONTROL
Document system transparency policy
Ensure that a system transparency policy is established and that a repository of model cards, datasheets, and interpretability reports is maintained for all major AI systems.
RISK
Society
Misuse of AI systems for cyber attacks, exploitation, or enabling chemical, biological, radiological, or nuclear threats poses catastrophic risks to individuals, critical infrastructure, national security, and broader societal stability.
CONTROL
Prevent AI cyber misuse
Ensure that guardrails are implemented and documented to detect and prevent the use of AI systems to facilitate or enable cyber attacks and exploitation.
CONTROL
Prevent catastrophic misuse
Ensure that guardrails are implemented and documented to detect and prevent AI-enabled catastrophic system misuse, including applications related to chemical, biological, radiological, and nuclear threats.