How can insurance companies use AI?
AI is not new to the insurance industry, but recent technological advances have expanded its capacity to assist with a broad range of tasks, including:
- pricing and underwriting;
- claims processing and benefit management;
- product design and sales;
- customer service and policy administration; and
- fraud detection.
Using AI for these purposes can cut down on claims processing time, reduce losses and decrease administrative burden.
What key risks should insurance companies look out for?
Litigation risk
Key litigation risks include:
- Bias and discrimination. AI relies on datasets for training and analysis. These datasets may contain biases, which can skew the recommendations or decisions an AI system makes. For example, in Huskey v. State Farm1, the court denied an insurance company’s motion to dismiss a claim alleging that the company’s use of algorithmic decision-making and automated claims processing resulted in longer processing times and less coverage for Black homeowners than for white homeowners.
- Misrepresentation. Generative AI is known to make mistakes. Client-facing AI chatbots can create litigation risk by making misrepresentations that clients rely on, to their detriment. This was the issue in Moffatt v. Air Canada2, in which the British Columbia Civil Resolution Tribunal ordered Air Canada to honour its chatbot’s misrepresentation of the airline’s bereavement policy because a customer had relied on it when booking flights. The tribunal rejected the airline’s argument that the chatbot was a separate legal entity responsible for its own actions.
- Breach of contract and lack of good faith/fair dealing. When relying on AI systems, companies also face claims that clients were treated unfairly, amounting to breach of contract and breach of the duty of good faith and fair dealing, particularly where clients argue that their agreements preclude a fully automated review. For example, the plaintiffs in an ongoing U.S. case, The Estate of Gene B Lokken et al. v. UnitedHealth Group3, allege that the defendant insurance company improperly denied medical insurance claims for medically necessary care based on recommendations made by AI models.
- Privacy and data breaches. AI use can increase cybersecurity risk and give rise to claims related to privacy breaches. For example, in Michelle Gills v. Patagonia Inc.4, the plaintiff filed a proposed class action alleging that the defendants used an AI product to record, transcribe and analyze customer calls without consent, in violation of the California Invasion of Privacy Act, suggesting that using AI in customer service can itself create litigation risk.
Regulatory risk
Currently, there is no federal legislation regulating AI in Canada. AI is instead governed by a patchwork of existing legal and regulatory frameworks, including privacy law, IP law, human rights legislation, contract law and the common law. However, laws such as the EU Artificial Intelligence Act and state laws in the U.S. may apply to some insurance companies and can provide helpful guidance on mitigating risk ahead of Canadian regulation.
Provinces are starting to enact sector-specific laws that affect AI, including Ontario’s Working for Workers Four Act, 2024 and privacy law amendments in Québec and Alberta.
Regulators are also putting out guidance, including the following for financial institutions:
- OSFI’s Guideline E-23 (Model Risk Management) sets expectations for federally regulated financial institutions’ (FRFIs’) model risk management, including for models that use AI/machine learning5. The guideline takes effect in May 2027.
- OSFI and the Financial Consumer Agency of Canada’s joint report on the risks of AI in federally regulated institutions6.
- The Québec AMF’s draft Guideline for the Use of Artificial Intelligence (which we summarized in this article).
Reputational risk
Materialization of any of the above risks can damage an organization’s reputation by eroding trust in the company. Additionally, some clients view AI use itself negatively, particularly when it replaces human customer service or informs decisions to approve or deny claims.
Risk of fraudulent claims
Insurance companies should also be on the lookout for AI-generated or augmented materials that may be used to support a fraudulent claim. For example, AI can be used to create falsified documents and deepfake images or videos of damages or injuries that did not actually occur. This risk will continue to grow as deepfakes become more sophisticated and prevalent.
What steps can insurance companies take to mitigate these risks?
- Develop principles and practices for AI governance and oversight. Companies should implement policies and procedures governing AI use that establish clear lines of accountability within the organization, ensure human oversight, prioritize the safety and security of data, minimize bias and discrimination, and provide for monitoring and reporting. Companies should also implement careful claims review processes to safeguard against deepfakes.
- Follow international and industry guidance. Keep track of guidance from regulators, especially within the financial services industry. The Competition Bureau, the Office of the Privacy Commissioner of Canada and the Ontario Human Rights Commission have also provided guidance that may be helpful. Companies can also look to AI laws in the EU and U.S., as well as international guidance documents.
- Stay up to date. While it is difficult to keep pace in this rapidly moving space, it is important to monitor regulatory and technological developments regularly in order to realize the benefits and manage the risks of AI as it continues to evolve.