Consultation: MAS Guidelines on Artificial Intelligence Risk Management
The proposed guidelines form part of Singapore’s broader regulatory framework for technology risk management and reflect MAS’ increasing focus on AI governance. Following industry feedback and recommendations, MAS will issue a set of Guidelines setting out its expectations on how financial institutions (FIs) should use AI.
This article highlights the key priorities emerging from MAS’ proposals, outlines practical readiness considerations, and explains how FIs can prepare for implementation.
AI Tools and Risk Management in Financial Services
The proliferation of AI tools has driven increased adoption across banking and the broader financial sector. AI tools are now used widely in a variety of applications and internal processes. Common use cases include fraud detection, anti-money laundering (AML) transaction monitoring, customer onboarding (KYC), portfolio optimisation, and regulatory reporting automation.
MAS recognises that while AI can enhance efficiency, decision-making, and customer experience, it carries risks if not developed or deployed responsibly.
AI Risks
MAS explicitly highlights risks such as:
- Model errors & unpredictability: AI’s probabilistic behaviour can lead to inaccurate outputs and unexpected decisions, resulting in financial losses or wrong risk assessments.
- Operational disruption: AI-driven automation can fail or break down, disrupting operations.
- Bias & conduct risk: AI outcomes can be biased, leading to unfair treatment of customer segments or steering clients toward unsuitable products.
- Financial crime risk: AI used for fraud detection and AML/CFT purposes may miss suspicious patterns or generate inconsistent detection outcomes.
- Reputational harm: Customer-facing AI (e.g., chatbots) may produce inaccurate, harmful, or inappropriate responses, leading to complaints and reputational damage.
- Data privacy & confidentiality leakage: AI tools may expose sensitive or customer data, especially when third-party models are used without proper controls/consents.
As AI becomes increasingly embedded across business functions, MAS expects FIs to implement structured, enterprise-wide AI risk frameworks with systematic internal control procedures.
Scope and Applicability
MAS proposes that the AI Risk Management Guidelines (AIRG) apply to all FIs using such AI tools, implemented in a proportionate manner based on:
- The size and nature of the institution
- The extent and materiality of AI usage
- The risk profile of AI use cases.
This proportionate approach aligns with MAS’ broader supervisory philosophy, ensuring that AI regulatory compliance obligations are risk-based rather than one-size-fits-all.
Importantly, MAS defines AI broadly to include:
- Internally developed AI models
- Third-party AI tools and services
- Generative AI and AI agents
- AI used for both decision-making and decision support.
Even firms using AI “assistive tools” (e.g. copilots, analytics engines) will be expected to maintain baseline AI governance policies. This signals that no AI deployment is considered “too small” for oversight.
Key Compliance Priorities expected of FIs
The MAS Consultation Paper outlines several core regulatory expectations that financial institutions should integrate into their AI governance and risk management frameworks. These priorities reflect global regulatory trends in AI supervision, model risk management, and digital operational resilience.
Scope and Proportionality
As discussed above, the AIRG applies to all FIs that use, develop, procure, or deploy AI, including third-party AI tools. AI is broadly defined as systems that generate outputs such as predictions, recommendations, decisions, or content through learning/inference. This expansive definition ensures that both traditional quantitative models and newer generative AI systems fall within regulatory scope.
Implementation should be proportionate to the FI’s size/complexity and the materiality of AI use (i.e., potential impact on business operations and customers).
Even where AI use is limited, MAS expects baseline controls (e.g., rules on allowed/disallowed use, ownership, internal checks, and review).
Board and Senior Management Oversight
The Board and Senior Management remain accountable for AI governance and must establish a framework to ensure the FI can:
- Identify AI use cases/systems/models (incl. vendor tools)
- Assess materiality and adopt controls across the lifecycle
- Ensure adequate capability/capacity to implement and oversee AI safely.
MAS reinforces that AI governance is not solely a technological function but a core enterprise risk management responsibility. Existing governance and risk frameworks should be updated to include:
- AI risk appetite and thresholds
- Roles/responsibilities across business/tech/risk
- Escalation and incident reporting
- Periodic review (recognising AI risks evolve quickly).
If AI risk exposure is material, MAS expects a dedicated cross-functional committee to oversee these risks. Establishing a formal AI governance committee strengthens demonstrable compliance and supports effective documentation during supervisory inspections or thematic reviews.
AI Control Mechanisms
Robust AI governance begins with strong foundational control mechanisms to ensure effective AI risk management across the enterprise. MAS expects core visibility and control mechanisms including:
- AI identification: criteria/process to consistently identify where AI is used.
- AI inventory: a complete and up-to-date inventory of AI use cases/systems/models, including third-party AI.
- Materiality assessment: structured approach covering at least:
- Impact (on customers/operations/compliance)
- Complexity (model opacity, data issues)
- Reliance (mission-critical decisions, customer outcomes).
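As a simple illustration, the three materiality dimensions above could be captured in a lightweight scoring sketch. The field names, rating scale, and thresholds below are hypothetical assumptions for illustration only, not prescribed by MAS:

```python
from dataclasses import dataclass

# Hypothetical scale: each dimension rated 1 (low) to 3 (high).
@dataclass
class AIUseCase:
    name: str
    third_party: bool  # inventory must also capture vendor AI
    impact: int        # effect on customers/operations/compliance
    complexity: int    # model opacity, data issues
    reliance: int      # mission-critical decisions, customer outcomes

def materiality(uc: AIUseCase) -> str:
    """Classify a use case; thresholds are illustrative only."""
    score = uc.impact + uc.complexity + uc.reliance
    if score >= 7:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

chatbot = AIUseCase("customer-facing chatbot", third_party=True,
                    impact=3, complexity=2, reliance=2)
print(materiality(chatbot))  # high (score of 7)
```

In practice the assessment criteria and weightings would be set by the FI's governance committee and documented so that classifications are consistent and defensible.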
AI Lifecycle Controls (End-to-End)
MAS emphasises that AI risk management must extend across the full AI system lifecycle. Controls should cover the full lifecycle and be calibrated to risk materiality.
MAS highlights key areas to manage, including:
- Data management and quality
- Fairness and bias controls
- Transparency and explainability
- Human oversight and accountability
- Third-party AI/vendor governance
- Evaluation/testing (pre-deployment)
- Cyber security and technology safeguards
- Reproducibility/audit trails
- Monitoring, review and change management.
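The calibration of controls to materiality could be expressed as a tiered mapping, with higher-risk use cases inheriting all lower-tier safeguards plus additional ones. The tier names and control sets below are assumptions drawn from the list above, not an MAS-prescribed taxonomy:

```python
# Illustrative tiered control sets; higher tiers build on lower ones.
BASELINE = {"data_quality", "human_oversight", "pre_deployment_testing"}
ENHANCED = BASELINE | {"bias_testing", "explainability_review",
                       "vendor_due_diligence", "audit_trail"}
FULL = ENHANCED | {"independent_validation", "continuous_monitoring",
                   "change_management_gates"}

CONTROLS_BY_TIER = {"low": BASELINE, "medium": ENHANCED, "high": FULL}

def required_controls(tier: str) -> set:
    """Return the lifecycle controls expected for a materiality tier."""
    return CONTROLS_BY_TIER[tier]

print(sorted(required_controls("high")))
```

A tiered structure like this makes the proportionality principle auditable: each use case's control set follows directly from its documented materiality rating.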
Stronger safeguards are expected where AI impacts customer outcomes, compliance, financial risk, or critical operations.
AI Risk Management Readiness Checklist
To support practical implementation of the MAS AI risk management guidelines, firms should conduct a structured gap analysis against regulatory expectations. FIs may consider the following questions when assessing readiness:
- Has the Board approved an AI governance framework and risk appetite?
- Is there a clear process to identify and inventory all AI use cases (including third-party AI)?
- Are AI risk materiality assessments consistently applied and documented?
- Are lifecycle controls calibrated to AI risk levels?
- Is AI use auditable, explainable, and defensible to regulators?
- Are roles, responsibilities, and escalation pathways clearly defined?
- Is the firm prepared for supervisory queries and internal audit reviews on AI?
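A gap analysis against the questions above can be tracked in a simple structured form. The checklist keys below paraphrase the readiness questions and are not an official MAS template:

```python
# Illustrative gap-analysis tracker; answers are hypothetical.
readiness = {
    "board_approved_framework": True,
    "ai_inventory_process": True,
    "materiality_assessments_documented": False,
    "lifecycle_controls_calibrated": True,
    "auditable_explainable_defensible": False,
    "roles_and_escalation_defined": True,
    "supervisory_query_preparedness": False,
}

# Surface the open items for remediation planning.
gaps = [item for item, done in readiness.items() if not done]
print(f"{len(gaps)} gap(s) to remediate: {gaps}")
```

Recording the assessment in a structured form makes it straightforward to evidence progress between reviews.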
How Waystone Can Help
As regulatory expectations around AI continue to evolve, financial institutions require specialist expertise to navigate compliance obligations effectively. By embedding robust AI governance and compliance practices into daily operations, firms demonstrate regulatory foresight, strengthen operational resilience, and build stakeholder trust.
Waystone provides a comprehensive suite of Compliance Solutions to support organisations operating in, or expanding into, Singapore’s regulatory environment. Our team collaborates with firms to design, implement, and maintain AI governance frameworks aligned with MAS expectations, enabling responsible AI adoption without compromising innovation.
From AI risk assessments and governance policy design to outsourcing reviews, audit readiness, and ongoing compliance support, Waystone helps firms manage AI-related regulatory risk with confidence.
To learn more about our Compliance and Governance Solutions in Singapore, please reach out to your usual Waystone representative or our APAC Compliance Solutions team via the link below.
