AI and Insurance
Artificial intelligence (AI) is no longer science fiction: businesses around the world are adopting it to reduce costs and improve efficiency. AI is commonly understood as the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
We believe the key words in this definition are "requiring human intelligence". AI carries enormous transformational potential for industries and society. Within the insurance sector, primary insurers are adopting AI to improve customer service, to increase efficiency and to fight fraud more effectively, while linking it to corporate governance and financial crime prevention.
Key AI considerations for insurance stakeholders
Increasing profits and reducing operational costs are, of course, key drivers for insurance stakeholders, but there are several considerations to take into account:
- regulatory objectives
- legal considerations
- the use of data
- human versus AI interaction
- financial crime prevention
- cyber security
- insurance life cycle.
This article will focus on regulatory objectives.
Regulatory objectives and AI
All technology has knock-on effects; consider, for example, the impact of the mobile phone, online gaming or the robot waiter. As with any technological development there are many challenges, and the use of AI is no different: these challenges need to be addressed by companies, governments and regulators.
Take something as simple as a robot waiter. You order at the table on an app and service is immediate; however, there are risks. What about liability? Does the restaurant's insurance cover a robot rather than a human? Has the restaurant factored in age verification for alcohol sales or, more seriously, food allergies? The potential for risk is endless. Ensuring that the right regulatory framework is in place is therefore critical.
For insurance, we would hope for a proportionate, risk-based regulatory framework that encourages the ethical and responsible development of AI in the industry.
This will help to ensure that it is always clear to consumers (retail customers), the primary focus for most regulators, when they are interacting with an AI system, while at the same time ensuring that high-risk AI systems that may significantly affect the fundamental rights of individuals are subject to additional requirements.
Financial services legislation already goes some way towards ensuring a robust regulatory framework in the insurance sector when it comes to AI use.
A high-level key takeaway is that we believe AI should carry out only what a human built and programmed at the outset. Human error is always possible, however, and so, linking back to MAS' individual accountability regime, someone must be accountable.
In 2022, MAS led an industry consortium to publish an assessment of methodologies for the responsible use of AI by financial institutions, covering the key areas.
The output was five white papers detailing assessment methodologies for the Fairness, Ethics, Accountability and Transparency (FEAT) principles.
Together they form a framework to help financial institutions (FIs) use AI responsibly.
MAS white papers on the responsible use of AI
Essential elements to note:

| White paper | What it covers |
| --- | --- |
| Fairness, Ethics, Accountability and Transparency (FEAT) principles | a comprehensive FEAT checklist for FIs to adopt during their Artificial Intelligence and Data Analytics (AIDA) software development life cycles |
| Fairness Assessment Methodology | an enhanced methodology enabling FIs to define the fairness objectives of their AIDA systems, identify the personal attributes of individuals used and detect any unintentional bias |
| Ethics and Accountability Assessment Methodology | a new methodology providing a framework for FIs to carry out quantifiable measurement of ethical practices, in addition to the qualitative practices currently adopted |
| Transparency Assessment Methodology | a new methodology helping FIs determine whether, and how much, internal and external transparency is needed to explain and interpret the predictions of machine learning models |
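To make the transparency idea concrete, the sketch below is our own illustration (not part of the MAS methodologies or any FI's actual model): a toy linear underwriting score whose prediction can be decomposed into per-feature contributions, so that an individual outcome can be explained internally or to a customer. The feature names and weights are invented for this example.

```python
# Illustrative only: a toy linear scoring model with invented features and
# weights. Real FI models are far more complex, but the same principle
# (attributing a prediction to its inputs) underpins transparency work.
WEIGHTS = {"age": -0.02, "claims_last_5y": -0.8, "years_insured": 0.1}
BIAS = 1.0

def score(applicant: dict) -> float:
    """Linear score: bias plus the weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score, largest magnitude first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return dict(sorted(contribs.items(), key=lambda kv: -abs(kv[1])))

applicant = {"age": 40, "claims_last_5y": 2, "years_insured": 10}
total = score(applicant)        # the model's output for this applicant
drivers = explain(applicant)    # which features drove that output, and by how much
```

Here `drivers` would show that the claims history dominates the score, which is exactly the kind of explanation the Transparency Assessment Methodology asks FIs to consider providing.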
Open-source software toolkit
To assist with the wide adoption of the FEAT methodologies and principles, an open-source software toolkit has been produced. It automates the fairness metrics assessment, provides a visualisation interface for the fairness assessment, and offers plug-ins to integrate with FIs' IT systems.
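As a hedged illustration of what "automating a fairness metrics assessment" can mean in practice, the sketch below computes a simple demographic-parity gap over hypothetical approval decisions. It is our own minimal example, not the toolkit's actual API; the group labels and data are invented.

```python
# Illustrative only: a minimal demographic-parity check of the kind a
# fairness-metrics toolkit might automate. Groups and decisions are invented.
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)   # group A approves 2/3, group B 1/3
gap = parity_gap(decisions)         # a large gap would flag the model for review
```

A real assessment would go much further (defining which attributes are protected, what gap is acceptable, and why), but even this toy check shows how a fairness objective can be turned into a measurable, automatable quantity.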
In general, then, existing financial services legislation should ensure a robust regulatory framework in the insurance sector when it comes to AI use.
The existing framework, for example, contains provisions addressing the governance mechanisms put in place by insurers and their directly related parties, while principles such as transparency, fairness and ethics are also addressed by rules on conduct of business and disclosure. In particular, product oversight and governance rules regulate the design of new insurance products and ensure that every insurance product meets the needs of its specific target market, regardless of the techniques used to build it. Likewise, rules on advice apply whenever a personal recommendation is provided to a customer, regardless of whether that recommendation comes from a human or an AI system.
We believe it is important to flag that monitoring the use of AI applications should continue to fall within the competence of the relevant sectoral supervisory or regulatory authorities, as they remain best placed to understand the market in question and the specific context of the AI application and applicable regulatory framework. This is particularly important in the financial services sector, given the comprehensive body of existing rules.
The importance of adopting an overall data strategy
In addition to having the right regulatory framework in place, promoting and supporting the development of AI requires actions to facilitate access to and use of data, which is essential for the further development of AI systems. An overall data strategy can take positive steps in this regard, for example by ensuring greater access to public-sector datasets.
The risks are many, but such a strategy also offers insurance stakeholders a valuable tool for addressing the wider implications of their use of AI and for ensuring fairness and good consumer outcomes.
Insurance and AI Summary
We believe there is a place for AI in the insurance industry; however, before taking the step into AI, it is important to give it due consideration and to clearly document your rationale and evidence in a durable medium.
We would recommend that, before adopting AI into your business model, you undertake a detailed assessment of the risks and of the controls that will be applied to protect your business.
To learn more, get in touch with our APAC Compliance Solutions team today.