Consultation: MAS Guidelines on Artificial Intelligence Risk Management
Pending industry feedback, MAS will issue a set of Guidelines on Artificial Intelligence Risk Management (AIRG) setting out its expectations on how financial institutions (FIs) should manage the risks of using AI.
This article highlights the key priorities emerging from MAS’ proposals, outlines practical readiness considerations, and explains how FIs can prepare for implementation.
AI Tools and Risk Management
The ready availability of AI tools has driven increased adoption across banking and the broader financial sector, and AI is now embedded in a wide variety of customer-facing applications and internal processes.
MAS recognises that while AI can enhance efficiency, decision-making, and customer experience, it carries risks if not developed or deployed responsibly.
MAS explicitly highlights risks such as:
- Model errors & unpredictability: AI’s probabilistic behaviour can lead to inaccurate outputs and unexpected decisions, resulting in financial losses or wrong risk assessments.
- Operational disruption: AI-driven automation can fail or break down, disrupting operations.
- Bias & conduct risk: AI outcomes can be biased, leading to unfair treatment of customer segments or steering clients toward unsuitable products.
- Financial crime risk: AI used for fraud detection or AML/CFT purposes may miss suspicious patterns or generate inconsistent detection outcomes.
- Reputational harm: Customer-facing AI (e.g., chatbots) may produce inaccurate, harmful, or inappropriate responses, leading to complaints and reputational damage.
- Data privacy & confidentiality leakage: AI tools may expose sensitive or customer data, especially when third-party models are used without proper controls/consents.
As AI becomes increasingly embedded across business functions, MAS expects FIs to implement structured, enterprise-wide AI risk frameworks with systematic internal control procedures.
Scope and Applicability
MAS proposes that the AIRG apply to all FIs using AI tools, implemented in a proportionate manner based on:
- The size and nature of the institution
- The extent and materiality of AI usage
- The risk profile of AI use cases.
Importantly, MAS defines AI broadly to include:
- Internally developed AI models
- Third-party AI tools and services
- Generative AI and AI agents
- AI used for both decision-making and decision support.
Even firms using AI “assistive tools” (e.g. copilots, analytics engines) will be expected to maintain baseline AI governance policies.
Key Compliance Priorities Expected of FIs
Scope and Proportionality
As discussed above, the AIRG applies to all FIs that use, develop, procure, or deploy AI, including third-party AI tools. AI is broadly defined as systems that generate outputs such as predictions, recommendations, decisions, or content through learning/inference.
Implementation should be proportionate to the FI’s size/complexity and the materiality of AI use (i.e., potential impact on business operations and customers).
Even where AI use is limited, MAS expects baseline controls (e.g., rules on allowed/disallowed use, ownership, internal checks, and review).
Board and Senior Management Oversight
The Board and Senior Management remain accountable for AI governance and must establish a framework to ensure the FI can:
- Identify AI use cases/systems/models (incl. vendor tools)
- Assess materiality and adopt controls across the lifecycle
- Ensure adequate capability/capacity to implement and oversee AI safely.
Existing governance and risk frameworks should be updated to include:
- AI risk appetite and thresholds
- Roles/responsibilities across business/tech/risk
- Escalation and incident reporting
- Periodic review (recognising AI risks evolve quickly).
If AI risk exposure is material, MAS expects a dedicated cross-functional committee to oversee these risks.
AI Control Mechanisms
MAS expects core visibility and control mechanisms including:
- AI identification: criteria/process to consistently identify where AI is used.
- AI inventory: a complete and up-to-date inventory of AI use cases/systems/models, including third-party AI.
- Materiality assessment: structured approach covering at least:
  - Impact (on customers/operations/compliance)
  - Complexity (model opacity, data issues)
  - Reliance (mission-critical decisions, customer outcomes).
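As a purely illustrative sketch (not part of the MAS proposals), the inventory and materiality assessment described above could be captured in a simple structured record. The field names, scoring scale, and tier thresholds below are hypothetical assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record for one AI inventory entry. The three risk
# dimensions mirror the materiality factors above (impact, complexity,
# reliance), each scored 1 (low) to 3 (high) for illustration.
@dataclass
class AIInventoryEntry:
    use_case: str
    owner: str                # accountable business owner
    third_party: bool         # vendor-supplied model or tool?
    impact: int               # effect on customers/operations/compliance
    complexity: int           # model opacity, data issues
    reliance: int             # mission-critical decisions, customer outcomes
    last_reviewed: date = field(default_factory=date.today)

    def materiality(self) -> str:
        """Aggregate the three dimensions into a simple tier.
        Thresholds are illustrative, not prescribed by MAS."""
        score = self.impact + self.complexity + self.reliance
        if score >= 7:
            return "high"
        if score >= 5:
            return "medium"
        return "low"

# Example: a vendor chatbot scoring high on impact and reliance.
chatbot = AIInventoryEntry(
    use_case="customer service chatbot",
    owner="Retail Banking",
    third_party=True,
    impact=3, complexity=2, reliance=3,
)
print(chatbot.materiality())  # high: 3 + 2 + 3 = 8 >= 7
```

In practice an FI would keep such records in a governance register and review them periodically, but the key point is that each entry carries an owner, a third-party flag, and a documented materiality rating.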
AI Lifecycle Controls (End-to-End)
Controls should cover the full lifecycle and be calibrated to risk materiality.
MAS highlights key areas to manage, including:
- Data management and quality
- Fairness and bias controls
- Transparency and explainability
- Human oversight and accountability
- Third-party AI/vendor governance
- Evaluation/testing (pre-deployment)
- Cyber security and technology safeguards
- Reproducibility/audit trails
- Monitoring, review and change management.
Stronger safeguards are expected where AI impacts customer outcomes, compliance, financial risk, or critical operations.
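One way to operationalise "controls calibrated to risk materiality" is a simple mapping from materiality tier to a minimum control set, where higher tiers inherit the baseline and add stronger safeguards. The tiers and control names below are illustrative assumptions loosely following the areas listed above, not MAS requirements:

```python
# Illustrative calibration of lifecycle controls to materiality tier.
# Control names loosely follow the lifecycle areas listed above; the
# tiering itself is a hypothetical scheme, not prescribed by MAS.
BASELINE = ["data_quality", "human_oversight", "change_management"]

EXTRA_CONTROLS = {
    "low": [],
    "medium": ["pre_deployment_testing", "vendor_governance"],
    "high": ["pre_deployment_testing", "vendor_governance",
             "fairness_testing", "explainability_review",
             "continuous_monitoring", "audit_trail"],
}

def required_controls(tier: str) -> list[str]:
    """Return the minimum control set for a given materiality tier.
    Every tier gets the baseline; higher tiers add safeguards."""
    return BASELINE + EXTRA_CONTROLS[tier]

print(required_controls("medium"))
```

The design point is the one MAS makes: no AI use case escapes the baseline, and the control burden scales with impact on customers, compliance, financial risk, and critical operations.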
AI Risk Management Readiness Checklist
FIs may consider the following questions when assessing readiness:
- Has the Board approved an AI governance framework and risk appetite?
- Is there a clear process to identify and inventory all AI use cases (including third-party AI)?
- Are AI risk materiality assessments consistently applied and documented?
- Are lifecycle controls calibrated to AI risk levels?
- Is AI use auditable, explainable, and defensible to regulators?
- Are roles, responsibilities, and escalation pathways clearly defined?
- Is the firm prepared for supervisory queries and internal audit reviews on AI?
How Waystone Can Help
By embedding robust AI governance and compliance practices into daily operations, firms demonstrate regulatory foresight and operational resilience, and build stakeholder trust.
Waystone provides a comprehensive suite of Compliance Solutions to support organisations operating in, or expanding into, Singapore’s regulatory environment. Our team collaborates with firms to design, implement, and maintain AI governance frameworks aligned with MAS expectations, enabling responsible AI adoption without compromising innovation.
From AI risk assessments and governance policy design to outsourcing reviews, audit readiness, and ongoing compliance support, Waystone helps firms manage AI-related regulatory risk with confidence.
To learn more about our Compliance and Governance Solutions, please reach out to your usual Waystone representative or our APAC Compliance Solutions team via the link below.
