Download Framework
https://www.mas.gov.sg/
mindforge_risk
MindForge / Emerging Risks and Opportunities of Generative AI for Banks: A Singaporean Perspective
A risk framework developed to cover the emerging Generative AI technology for financial services industry, developed by the the MindForge consortium.
Type:
Industry
Domain:
Finance
Coverage:
Bias & Fairness
Safety & Reputational Harm
Accountability & Governance
Transparency
Legal & Compliance
Performance & Reliability
Cybersecurity
Privacy & Data
Region:
Singapore
Tags:
GenAI
Content:
40 Risks
106 Controls
Version: July 2024
Framework Definition
Risks and controls associated with the framework
Assessment Layer
Concrete evaluations linked to controls to assess pass or fail
No evaluation mapping defined yet.
Fairness and Bias
Setting fairness objectives to help identify and address unintentional bias and discrimination.
RISK
Unrepresentative or Biased Data Inputs
Risk that training or input data systematically underrepresents or misrepresents certain individuals or groups, leading to biased model outputs, discriminatory decisions, or reputational harm.
R.1.1
2 Controls
CONTROL
Data is not biased
Ensure that the data does not contain systematic, unfair or harmful representation of certain groups based on attributes like race, gender, age, etc.
C.1.1.1
CONTROL
Data Representativeness
Ensure that the data selected for training, validation and testing is sufficiently representative of certain groups based on attributes like race, gender, age, etc.
C.1.1.2
RISK
Adverse or Inappropriate Impact to Individuals and Groups
Risk of AI models generating outputs that cause harm, disadvantage, or offence to individuals or groups, resulting in discriminatory outcomes, erosion of trust, and potential legal liability.
R.1.2
1 Control
CONTROL
AI System is not biased
Ensure that the AI system outputs do not contain systematic, unfair or harmful representation of certain groups or individuals.
C.1.2.1
Ethics and Impact
Ensuring responsible and ethical outcomes in AI use against a clearly defined set of core values and practices.
RISK
Value Misalignment
Risk that AI system outputs or usage conflict with the organisation's core values or societal norms, leading to reputational damage, stakeholder distrust, and ethical harm.
R.2.1
2 Controls
CONTROL
No Prohibited AI Systems Use or Development
Ensure that a risk categorization of AI systems is defined, includes 'prohibited' category and systems classified as 'prohibited' are not developed or used.
C.2.1.1
CONTROL
Intended Use Violations
Ensure that the intended use of an AI system is documented and suitable guardrails or monitoring is put in place to ensure the actual use is aligned with the intended use.
C.2.1.2
RISK
Environmental Sustainability Impact
Risk that the high energy consumption and carbon emissions associated with training and operating large language models undermine the organisation's ESG commitments and corporate social responsibility objectives.
R.2.2
2 Controls
CONTROL
Ensure Sustainable Resource Usage
Ensure that the resources used to train and deploy AI systems are used in accordance the sustainable limits set by the organization.
C.2.2.1
CONTROL
Ensure Sustainable Resource Alternatives are Considered
Ensure that for AI models requiring high energy consumption, alternative lower resource models are considered and used if they meet the business requirements.
C.2.2.2
RISK
Dark Patterns
Risk that AI-generated content is used to deceive or manipulate users into actions they would not otherwise take, causing consumer harm, loss of informed consent, and regulatory exposure.
R.2.3
2 Controls
CONTROL
No Deceptive AI Systems Use or Development
Ensure that the AI systems that generate deceptive content, impersonations or purposely false information that may trick or mislead users are not developed or used.
C.2.3.1
CONTROL
Misinformation
Ensure that appropriate mechanisms are put in place to prevent users being tricked or mislead into taking certain actions without fully understanding the consequences.
C.2.3.2
RISK
Toxic and Offensive Outputs
Risk that AI systems produce harmful, hateful, discriminatory, violent, or otherwise offensive content, causing direct harm to individuals and reputational damage to the organisation.
R.2.4
6 Controls
CONTROL
Harmful Content
Ensure that the AI system outputs do not contain harmful content.
C.2.4.1
CONTROL
Offensive Content
Ensure that the AI system outputs do not contain offensive content.
C.2.4.2
CONTROL
Violent Content
Ensure that the AI system outputs do not contain violent content.
C.2.4.3
CONTROL
Racist Content
Ensure that the AI system outputs do not contain racist content.
C.2.4.4
CONTROL
Sexist Content
Ensure that the AI system outputs do not contain sexist content.
C.2.4.5
CONTROL
Profane Content
Ensure that the AI system outputs do not contain profane content.
C.2.4.6
Accountability and Governance
Enabling accountability and governance for outcomes and impact of data and AI systems.
RISK
Lack of Generative AI Risk Awareness
Risk that insufficient training and education on Generative AI risks leaves staff and users unable to identify, escalate, or mitigate AI-related harms, increasing the organisation's exposure to operational, reputational, and compliance failures.
R.3.1
2 Controls
CONTROL
AI Literacy Training
Ensure that AI users and staff dealing with the AI operations are trained in AI literacy.
C.3.1.1
CONTROL
AI Literacy Assessment
Ensure AI systems are accessed only by users with the necessary awareness and understanding of the unique risks involved in such systems.
C.3.1.2
RISK
Lack of Third-Party Accountability
Risk that the organisation lacks sufficient oversight or contractual control over third-party AI system or model providers, resulting in inability to govern model changes, ensure compliance, or assign liability for adverse outcomes.
R.3.2
2 Controls
CONTROL
Risk management of third-party AI systems
Impose required risk management processes on the development of third-party AI systems and models.
C.3.2.1
CONTROL
Third-party provider modifications
Ensure stable and transparent upgrade process of AI systems and models from third-party providers.
C.3.2.2
RISK
Lack of Use Case and Model Governance
Risk that the absence of AI governance frameworks, controls, and accountability structures for AI use cases leads to unmanaged risks, untraceable decisions, and inability to assign responsibility for harmful outcomes.
R.3.3
3 Controls
CONTROL
AI System Governance
Establish AI governance framework with central inventory, risk classification and role definitions
C.3.3.1
CONTROL
AI System Profile
Maintain the profile of the AI system, which includes AI system's purpose, intended use and relevant parties of the AI system, as input into the AI risk management system that controls deviations.
C.3.3.2
CONTROL
Algorithm and Model Governance
Ensure that for the algorithms and models used, the applicable legal, organizational and technical requirements are met, suitable documentation exists, and an approval process exists.
C.3.3.3
RISK
Inadequate Human Oversight
Risk that insufficient human-in-the-loop oversight prevents timely detection and correction of AI failures, particularly for high-risk outputs requiring human validation, leading to unchecked harmful outcomes.
R.3.4
2 Controls
CONTROL
Adequate Human Oversight
Ensure adequate human oversight to AI system processes, including the ability for human correction or intervention in the event of exceptions or when generating content with risk levels requiring human validation.
C.3.4.1
CONTROL
Human Oversight Monitoring
Ensure adequate monitoring exists to trigger human intervention in case anomalies, dysfunctions and/or unexpected performance is detected.
C.3.4.2
RISK
Inadequate Feedback and Recourse Mechanism
Risk that the absence of accessible feedback channels and formal recourse pathways leaves individuals harmed by AI outputs without remedy, and removes accountability from system developers and operators.
R.3.5
3 Controls
CONTROL
Feedback Mechanisms
Ensure a clearly accessible feedback mechanism within the AI system is available to all users and affected parties. Establish a defined recourse pathway — a documented, step-by-step process for individuals to contest, escalate, or seek remediation for harmful or biased outputs.
C.3.5.1
CONTROL
Feedback Monitoring
Monitor and track submitted feedback and conduct periodic audits of feedback patterns to identify systemic bias, recurring harmful outputs, or gaps in recourse effectiveness. Implement automated flagging for high-severity complaints that triggers immediate escalation.
C.3.5.2
CONTROL
Accountability
Embed consequence management into developer and operator accountability frameworks — including KPIs, performance reviews, or contractual obligations tied to harmful output rates and recourse resolution times.
C.3.5.3
Transparency and Explainability
Enabling human awareness, explainability, interpretability and auditability of data and AI systems.
RISK
Unclear Output Accuracy
Risk that undefined or unvalidated accuracy requirements for AI use cases lead to deployment of systems that produce unreliable outputs, resulting in poor decision-making, financial loss, and erosion of stakeholder confidence.
R.4.1
2 Controls
CONTROL
Manual Testing
Sufficient user testing has been performed to evaluate AI system output accuracy with respect to the intended use.
C.4.1.1
CONTROL
Intended Use Violation Monitoring
Ensure that the intended use of an AI system is monitored at a regular frequency.
C.4.1.2
RISK
Unclear Provenance for Training/Test Data
Risk that the inability to trace the origin, lineage, and rights associated with training and test data undermines auditability, regulatory compliance, and the organisation's legal right to use the data.
R.4.2
1 Control
CONTROL
Governance of Data used by the AI system
Ensure that all datasets used by the AI system have acceptable data provenance, including ownership, origin and lineage (transformations between original sources and the AI system)
C.4.2.1
RISK
Lack of Explainability
Risk that the black-box nature of AI models prevents meaningful explanation of how outputs are derived, impairing auditability, user trust, regulatory compliance, and the ability to identify and correct errors.
R.4.3
3 Controls
CONTROL
Model Explainability
Ensure that suitable explainability methods (either white-box or black-box) are deployed along with the AI system.
C.4.3.1
CONTROL
Model Explainability Evaluated
Ensure that the technical explainability methods are adequately evaluated to access their performance, limitations and whether they are fit for purpose.
C.4.3.2
CONTROL
Model Explainability Training
Ensure that the relevant parties (e.g., users) are trained in how to use and interpret model explainability, as well as, understand its limitations.
C.4.3.3
RISK
Anthropomorphism
Risk that AI systems mimicking human-like characteristics cause users to place unwarranted trust in outputs or mistakenly believe they are interacting with a human, leading to impaired judgement, manipulation, and erosion of informed consent.
R.4.4
3 Controls
CONTROL
AI System Transparency Communicated
Ensure that users are informed in understandable and accessible manner whenever they interact with an AI system, instead of a human.
C.4.4.1
CONTROL
AI System Deception Assessed
Ensure that the AI system does not intentionally deceive the users into thinking they interact with a human, instead of an AI system.
C.4.4.2
CONTROL
AI System Transparency Assessed
Ensure that users are aware that they interact with an AI system, instead of a human.
C.4.4.3
Legal and Regulatory
Identifying any legal or regulatory obligations that need to be met or may be breached by the use of AI, including issues with compliance, data protection and privacy rules.
RISK
Inability to Ensure Location Compliance for Model Hosting and Data Processing
Risk that model hosting and data processing activities occur outside permitted geographic jurisdictions, resulting in breach of data sovereignty regulations, regulatory sanctions, and loss of customer trust.
R.5.1
2 Controls
CONTROL
Jurisdictional requirements identification
Ensure that the jurisdictional requirements governing the hosting and processing of data involved in the AI system are be identified and documented before deployment.
C.5.1.1
CONTROL
Model hosting and processing location verification or contractual obligations
Where possible, ensure that the actual locations of model hosting and all data processing activities (including subprocessors) are compliant with applicable requirements. If not possible, ensure that contracts with model providers and subprocessors include binding obligations specifying the permitted geographic locations for model hosting, data processing, and storage, and must require providers to notify the organisation of any changes to those locations before they take effect.
C.5.1.2
RISK
Unclear Data Ownership
Risk that ambiguous ownership of data used to train or generated by AI models exposes the organisation to intellectual property disputes, privacy violations, and legal liability.
R.5.2
2 Controls
CONTROL
Ownership and rights to use data to train the AI system
Ensure that all datasets used by the AI system have documented data ownership, respond to relevant data regulations, rights to data for AI use, licenses and copyrights.
C.5.2.1
CONTROL
Ownership and rights to use data created by the AI system
Ensure that the necessary rights to use exists over the AI system outputs.
C.5.2.2
RISK
Unauthorised Data Transfer and Storage
Risk that data is transferred to or stored on systems not authorised under licensing terms or organisational policies, resulting in data security breaches, compliance violations, and contractual liability.
R.5.3
3 Controls
CONTROL
Transfer and Storage Data Governance
Establish data access and usage policies applicable to each AI system data asset to ensure that data compliance obligations are known, attributable to their source, and available to inform access controls, approval decisions, and audit activity.
C.5.3.1
CONTROL
Data Protection at Rest
Ensure that data assets at rest are be protected in a manner proportionate to their data policy classification.
C.5.3.2
CONTROL
Data transfer and usage approval
Ensure that the new and existing approved uses of each data asset remain current and governed over time — so that new tools, systems, or transfer requirements are assessed against applicable obligations before being permitted,
C.5.3.3
RISK
Breach of Misalignment with Regulatory or Organisational Standards
Risk that AI system behaviour or outputs fail to meet applicable legal, regulatory, or organisational requirements, exposing the organisation to enforcement actions, fines, and reputational harm.
R.5.4
1 Control
CONTROL
EU AI Act Compliance
Ensure that the AI system complies with the regulatory requirements set out in the EU AI Act
C.5.4.1
RISK
IP Infringement
Risk that AI system inputs or outputs violate intellectual property rights held by third parties, exposing the organisation to litigation, financial damages, and reputational harm.
R.5.5
2 Controls
CONTROL
Respect intellectual property
Ensure the development and use of an AI system does not violate the Intellectual Property rights owned by another individual, organisation or entity.
C.5.5.1
CONTROL
Intellectual property infringement monitoring
Ensure that suitable automated techniques are put in place to automatically detect common types of outputs that infringe third-party intellectual property.
C.5.5.2
RISK
Unavailability of IP Protection
Risk that AI-generated content cannot be protected by copyright or trademark due to legal uncertainty around AI authorship, leaving the organisation's outputs vulnerable to unrestricted use by competitors.
R.5.6
3 Controls
CONTROL
AI content classification
Ensure that content produced by an AI system without material human change is marked as AI-generated; and content produced by an AI system with material human change has a named human owner who is accountable for reviewing and approving the final output, and whose contribution establishes authorship over it.
C.5.6.1
CONTROL
Human authorship requirement and attestation
Ensure that the human authorship required to establish IP protection over AI-assisted content is genuine, identifiable, and evidenced — and that the organisation can demonstrate a defensible authorship chain for any content it asserts ownership over.
C.5.6.2
CONTROL
AI content provenance detection
Ensure an independent, automated mechanism exists for identifying AI-generated content that has not been declared as such — reducing reliance on author self-declaration and discovering cases where AI involvement was not disclosed voluntarily.
C.5.6.3
RISK
Inadequate Privacy Protection
Risk that misclassified or insufficiently protected personal and sensitive data is processed by AI systems without legal or ethical justification, resulting in privacy violations, regulatory penalties, and harm to data subjects.
R.5.7
2 Controls
CONTROL
Data classification at ingestion
Ensure that personal or sensitive data is not processed without appropriate protections as a result of being misclassified or unrecognised at entry — recognising that in a customer-facing AI system, personal data will routinely enter through channels that were not designed to handle it, such as free-text user prompts.
C.5.7.1
CONTROL
Lawful basis for processing
Ensure that no personal or sensitive data is processed by the AI system without a prior, explicit determination that a valid legal or ethical basis exists for doing so — and that this determination is made deliberately rather than assumed.
C.5.7.2
RISK
Unclear Data Retention and Deletion
Risk that the absence of clear data retention and deletion policies for AI system data leads to prolonged storage of personal or sensitive information, resulting in privacy breaches, regulatory non-compliance, and inability to honour data subject rights.
R.5.8
3 Controls
CONTROL
Data retention policy for AI system data assets
Define a retention policy for each data asset associated with the AI system, specifying the maximum period for which personal, sensitive, or confidential data may be retained, the basis for that period, and the deletion or anonymisation action required at the end of it.
C.5.8.1
CONTROL
Data subject erasure requests
Ensure that data subject rights under applicable regulation can be exercised in the context of the AI system, and that where full technical erasure is not achievable — for example where data has been incorporated into model weights — this limitation is documented, legally assessed, and disclosed appropriately.
C.5.8.2
CONTROL
Deletion and anonymisation execution
Ensure that retention policy commitments are operationally executed rather than existing only on paper — and that deletion is complete across the full data landscape of the AI system, not just the primary storage location.
C.5.8.3
Monitoring and Stability
Ensuring the robustness and operational stability of the model or service and its infrastructure.
RISK
Hallucination / Fabrication / Confabulation
Risk that AI models generate ungrounded or factually incorrect outputs presented as truth, leading to misinformation, reputational damage, erosion of public trust in AI systems, and harm to affected individuals or groups.
R.6.1
3 Controls
CONTROL
Hallucinations
Ensure that the degree to which an AI system hallucinates is fit for purpose.
C.6.1.1
CONTROL
Hallucinations Monitoring
Ensure that monitoring and necessary logging is in place to allow analyzing the AI system hallucinations.
C.6.1.2
CONTROL
Hallucinations Detection
Ensure that suitable hallucination detection methods are deployed along with the AI system.
C.6.1.3
RISK
Overconfidence
Risk that AI models present uncertain, contested, or false information with unwarranted confidence, impairing users' ability to exercise independent judgement and leading to flawed decisions based on unreliable outputs.
R.6.2
2 Controls
CONTROL
Output uncertainty disclosure
Ensure that the system outputs distinguish between information that is verifiable and attributable and information that is uncertain, inferred, contested, or unverifiable — and must communicate that distinction to the user in the output itself.
C.6.2.1
CONTROL
Uncertainty disclosure calibration
The uncertainty disclosed by a AI system in its outputs must accurately reflect the actual level of uncertainty present in those outputs, as determined by periodic independent assessment.
C.6.2.2
RISK
Training Data or Inputs Not Fit for Purpose
Risk that training data does not adequately represent the geographic, cultural, or operational context of the AI system's deployment, leading to inaccurate outputs, poor decisions, and failure to meet the system's intended objectives.
R.6.3
2 Controls
CONTROL
Data Representativeness
Ensure that the data selected for training is sufficiently representative of the geographical and cultural context under which the AI system will be used.
C.6.3.1
CONTROL
Data is aligned with the intended goal
Ensure that the data selected for training and validation/testing is aligned with the indented goal of the AI system.
C.6.3.2
RISK
Lack of Continuous Monitoring
Risk that the absence of systematic, ongoing monitoring of AI system performance, usage patterns, and compliance allows undetected degradation, misuse, or deviation from intended purposes, ethical guidelines, and regulatory requirements.
R.6.4
4 Controls
CONTROL
Operational Monitoring
Ensure that the relevant operational metrics (e.g., request latency, throughput, error rates, and resource utilization) are monitored.
C.6.4.1
CONTROL
Security and Privacy Monitoring
Ensure that the relevant metrics capturing security and privacy aspects are monitored.
C.6.4.2
CONTROL
Data Quality and Model Performance Monitoring
Ensure that functional metrics capturing data quality and model performance are monitored.
C.6.4.3
CONTROL
Intended Use Monitoring
Ensure that the AI system actual use conforms to the intended use, ethical guidelines and regulatory requirements.
C.6.4.4
RISK
Insufficient Data Quality
Risk that low-quality, noisy, or incomplete training data degrades model performance, resulting in unreliable outputs, increased remediation costs, and diminished fitness for purpose.
R.6.5
10 Controls
CONTROL
Data Currentness
Ensure that the data selected for training, validation and testing has attributes that are of the right age in a specific context of use.
C.6.5.1
CONTROL
Data Representativeness
Ensure that the data selected for training, validation and testing is sufficiently representative of the real-world conditions under which the AI system is (or will) be used.
C.6.5.2
CONTROL
Data Noise
Ensure that the data selected for training, validation and testing is not noisy.
C.6.5.3
CONTROL
Data Accuracy
Ensure that the data selected for training, validation and testing has attributes that correctly represent the true value of the intended attribute, concept or event in a specific context of use.
C.6.5.4
CONTROL
Data Uniqueness
Ensure that objects (of the real world) occur only once as a record in a dataset.
C.6.5.5
CONTROL
Data Contamination
Ensure that the data used to train the model is not leaked into the data used to evaluate the model (or vice versa).
C.6.5.6
CONTROL
Data Completeness
Ensure that all the required records/data values/attributes and metadata in the dataset are present.
C.6.5.7
CONTROL
Data Time Completeness
Ensure that all the required records/data values/attributes and metadata in the dataset collected over the relevant period.
C.6.5.8
CONTROL
Data Trustworthiness
Ensure that the data source provides data that is factual and has attributes that are correct and of high quality.
C.6.5.9
CONTROL
Data Consistency
Ensure that the data has attributes that are free from contradiction and are coherent with other data in a specific context of use.
C.6.5.10
RISK
Model Staleness
Risk that training data becomes outdated as real-world conditions evolve, causing progressive degradation in model accuracy, the development of ingrained biases, and unreliable outputs that undermine business decisions.
R.6.6
2 Controls
CONTROL
Data Currentness Monitoring
Ensure that the data selected for training, validation and testing has attributes that are of the right age in a specific context of use.
C.6.6.1
CONTROL
Concept Drift Monitoring
Ensure that the model performance does not decay due to concept drift.
C.6.6.2
RISK
Insufficient Model Accuracy / Soundness
Risk that model outputs fail to meet required accuracy and performance thresholds, rendering the system unfit for its intended purpose and leading to flawed decisions, financial loss, and loss of stakeholder confidence.
R.6.7
4 Controls
CONTROL
Model outputs are accurate
Ensure that the model outputs meet the required performance thresholds.
C.6.7.1
CONTROL
Model outputs are consistent
Ensure that the model outputs are consistent.
C.6.7.2
CONTROL
Model robustness
Ensure that the AI system performance robust to input noise or semantically equivalent variations of the input data.
C.6.7.3
CONTROL
Model evaluation repeatability
Ensure that the AI system performance evaluation is repeatable.
C.6.7.4
RISK
Model Degradation from Unexpected Use
Risk that unanticipated usage patterns exploit the broad capabilities of generative models, causing outcome instability, unexpected failure modes, and unreliable outputs that harm users or the organisation.
R.6.8
1 Control
CONTROL
Anomaly Detection
Ensure monitoring mechanisms is implemented to detect and anomalies, and a process exists to periodically review them.
C.6.8.1
RISK
Inadequate Operational Resilience
Risk that the complexity of Generative AI services outpaces the organisation's business continuity and disaster recovery capabilities, leading to prolonged service disruptions and inability to maintain critical operations.
R.6.9
2 Controls
CONTROL
Unbounded Consumption
Ensure appropriate mitigations and monitoring is implemented to prevent users from conducting excessive and uncontrolled inferences.
C.6.9.1
CONTROL
Provider Overreliance
Over Reliance of the AI system on an individual model provider, without a backup plan in case of disruptions.
C.6.9.2
RISK
Unmet Architectural Requirements
Risk that technology, cost, or resource constraints prevent the organisation from meeting the architectural requirements needed to govern AI models across deployment environments, resulting in security gaps, compliance failures, and ungovernable systems.
R.6.10
6 Controls
CONTROL
Secure Data in Transit
The ability to inspect and/or ensure data security in transit (e.g., support for SSL/ TLS encryption to secure data transmission).
C.6.10.1
CONTROL
Authorized Access
The ability to inspect and/or implement user authentication and authorisation protocols.
C.6.10.2
CONTROL
Model Validation
The ability to perform automated model validation (e.g., necessary APIs are exposed).
C.6.10.3
CONTROL
Backup and Recovery
The ability to inspect and/or implement backup and recovery plan to protect the model and the data.
C.6.10.4
CONTROL
System Update
The ability to control, maintain and update the AI system, including bug fixes, security patches, and feature enhancements.
C.6.10.5
CONTROL
System Monitoring
The ability to monitor system operation and logs for automated processing.
C.6.10.6
Cyber and Data Security
Protecting data, AI models and systems, and other enterprise information technology (IT) assets from unauthorised access, data loss or leakage, and misuse by malicious actors.
RISK
Unintentional, Inappropriate or Illegal Use
Risk that consumers or employees inadvertently use Generative AI for inappropriate or illegal activities, exposing the organisation to legal liability, regulatory sanctions, and reputational damage.
R.7.1
1 Control
CONTROL
UnIntended Use Violations
Ensure that the unintended use of an AI system is documented and suitable guardrails or monitoring is put in place to ensure the actual use is not violating the unintended use policy.
C.7.1.1
RISK
Data Poisoning
Risk that malicious actors deliberately introduce corrupted or adversarial data into training pipelines or operational inputs, compromising model integrity and producing harmful, inaccurate, or exploitable outputs.
R.7.2
5 Controls
CONTROL
Data Validation
Ensure that data is thoroughly validated and verified before it is used (e.g., data passes data quality requirements before including in the training dataset or into the RAG knowledge base).
C.7.2.1
CONTROL
Data Ingestion
Ensure that the AI system either does not automatically ingest new data while in use, or that appropriate controls are in place to access data quality and validate the data.
C.7.2.2
CONTROL
Data Access
Store data in a secure manner, and maintain access controls to limit who/what can update data stores.
C.7.2.3
CONTROL
Model Resilience to Tainted Inputs
Insure that the model has sufficient robust when handling potentially tainted inputs (e.g., through adversarial training or via secure handling of possible prompt injections).
C.7.2.4
CONTROL
Anomaly Detection
Ensure monitoring mechanisms is implemented to detect and anomalies before the data is ingested into the system.
C.7.2.5
RISK
Adversarial Model Manipulation
Risk that a malicious party with access to foundational model components deliberately alters model behaviour, leading to unpredictable, harmful, or exploitable outputs that undermine system integrity and user safety.
R.7.3
2 Controls
CONTROL
Strict model access controls
Ensure least-privilege access to foundational model weights, APIs, and fine-tuning pipelines. Enforce MFA, role-based permissions, and audit logs on all model-touching operations.
C.7.3.1
CONTROL
Model integrity verification
Cryptographically sign and hash model weights at each stage (training, fine-tuning, deployment). Verify signatures at load time; reject any artefact whose hash has changed unexpectedly.
C.7.3.2
RISK
Prompt Injection
Risk that adversarial prompts bypass AI system guardrails or filters, enabling malicious actors to generate prohibited content, extract sensitive information, or cause the system to act outside its intended boundaries.
R.7.4
2 Controls
CONTROL
Prompt Injection Vulnerability
Perform security testing to assess the degree of system vulnerability to prompt injections, and understand the system's limitations.
C.7.4.1
CONTROL
Prompt Injection Detection
Ensure that the degree that sufficient mitigation measures are implemented to detect prompt injection attacks.
C.7.4.2
RISK
Re-Identification
Risk that de-identified data released during normal AI operations can be re-linked to individuals through inference or cross-referencing, resulting in privacy breaches, regulatory violations, and harm to affected data subjects.
R.7.5
1 Control
CONTROL
Re-identification
Possibility of de-identified records/ data being able to be re-identified mostly with malicious intent. This risk is related to “model inference attacks” but is distinct in that it refers to data released in the normal course of operations, whereas model inference attacks imply the use of deliberately designed inputs.
C.7.5.1
RISK
Data Leakage
Risk that AI model outputs or development processes inadvertently expose sensitive, confidential, or personal data to unauthorised parties, whether through normal usage, prompt injection attacks, or training data memorisation, causing privacy violations and regulatory liability.
R.7.6
3 Controls
CONTROL
Categorize PII Data
Define and maintain a list of PII and sensitive information that the AI system should not reveal.
C.7.6.1
CONTROL
PII Protection
Ensure that the degree to which the AI system is susceptible to reveal PII and sensitive information is fit for purpose. This includes: (i) unintentional data exposure during normal usage, (ii) targeted attacks by an malicious users (e.g., via prompt injection), and (iii) data leakage via training data.
C.7.6.2
CONTROL
System Prompt Leakage
Ensure that the system prompt used by the AI system is not leaked to the malicious users.
C.7.6.3
RISK
Model Inference Attacks
Risk that adversaries exploit the model's natural language interface to extract information about individuals in the training data through crafted inputs, compromising data subject privacy and exposing the organisation to regulatory penalties and legal claims.
R.7.7
2 Controls
CONTROL
Prompt Injection Vulnerability
Perform security testing to assess the degree of system vulnerability to prompt injections, and understand the system's limitations.
C.7.7.1
CONTROL
Adversarial De-anonymization
Ensure that the adversarial de-anonymization attack accuracy is below the allowed thresholds.
C.7.7.2