What ADR Professionals Should Know About the Regulation of AI in Insurance Underwriting

By Margarita Echevarria

March 10, 2025

As artificial intelligence continues to draw our attention, imagination and concern, this article focuses on the laws and regulations adopted to begin governing the use of this technology in the insurance industry. These initiatives reveal regulators’ principal concerns about artificial intelligence in insurance and offer alternative dispute resolution practitioners a framework for understanding the issues likely to arise in insurance disputes in which the use of AI is a material element.

The View from the Top

The impact that AI can have on insurance has been broadly considered by both national and international supervisors in the financial services industry.[1] Given the global importance of the sector and the quickly evolving use of AI within it, regulators are naturally interested in its impact on solvency risks, insurance products, data security, and consumers. In the United States, despite federal oversight of insurance through the Federal Insurance Office,[2] the industry is regulated directly at the state level. Accordingly, the National Association of Insurance Commissioners, established in 1871, is the body created by state regulators to set standards and regulatory best practices for the industry. Following its publication of “AI Principles” in 2020,[3] the association finalized its guidance in 2023 with its model bulletin, “Use of Artificial Intelligence Systems by Insurers,” and soon thereafter 19 states adopted the bulletin or, like New York, issued their own specific guidance.[4]

The model adopted by the states generally applies to all insurers – from title insurers to health insurers – and across all stages of the insurance lifecycle, from product development to claims management.[5] Consistent with its role of serving the public interest, the association centered its model guidance on protecting consumers against inaccurate processes, unfair discrimination, data vulnerability, and other potentially uncontrolled risks. In establishing its risk-control framework, the model starts by setting out basic definitions, the most significant of which are “artificial intelligence” and “predictive models.” The association defines artificial intelligence as a “branch of computer science that uses data processing systems” to perform functions “normally associated with human intelligence such as reasoning, learning and self-improvement,” including “machine learning . . . that focuses on the ability of computers to learn from provided data without being programmed.” The model’s definition of predictive models, in turn, covers models based on the “mining of historic data using . . . algorithms and/or machine learning to identify patterns or predict outcomes that can be used to support or make decisions.” Interestingly, the model neither refers to, nor distinguishes its scope from, the predictive models the industry has used for decades – a concern raised in an industry response to a federal survey on the use of AI by insurers.[6]
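
For readers who want a concrete picture of what falls within that definition, the following is a minimal sketch in Python. Everything in it – the synthetic data, the variable names (home_age, prior_claims), and the referral threshold – is invented for illustration and does not reflect any actual insurer’s system; it simply instantiates the definition above: an algorithm that learns patterns from historic data and produces a prediction used to support a decision.

    # A minimal, hypothetical sketch of a "predictive model" in the model
    # bulletin's sense: an algorithm that mines historic data to predict an
    # outcome used to support an underwriting decision. All data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=0)
    n = 5_000

    # Invented "historic" rating variables: age of home and prior claim count.
    home_age = rng.integers(1, 80, size=n)
    prior_claims = rng.poisson(0.3, size=n)

    # Synthetic outcome: probability of a claim rises with both variables.
    logits = -3.0 + 0.02 * home_age + 0.8 * prior_claims
    had_claim = rng.random(n) < 1 / (1 + np.exp(-logits))

    X = np.column_stack([home_age, prior_claims])
    X_train, X_test, y_train, y_test = train_test_split(X, had_claim, random_state=0)

    # "Learning from provided data without being programmed" with explicit
    # rules: the model infers the pattern linking the variables to outcomes.
    model = LogisticRegression().fit(X_train, y_train)

    # The prediction then "supports or makes" a decision, e.g., refer-to-underwriter.
    applicant = np.array([[45, 2]])  # 45-year-old home, two prior claims
    p_claim = model.predict_proba(applicant)[0, 1]
    print(f"Predicted claim probability: {p_claim:.2f}")
    print("Refer to underwriter" if p_claim > 0.5 else "Auto-approve")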

Regulatory Focus on AI in Insurance

The definition of AI within the AI system is an important starting point because it is the capability of the system to “train” itself on large datasets that raises concerns; this self-learning capability is what warrants oversight. Most of the states regulating AI address these concerns by imposing guardrails to minimize potential inaccuracies, unfair discrimination, data vulnerability, lack of transparency, and the risks of reliance on third-party vendors. New York’s Circular Letter No. 7 expresses similar concerns, focusing directly on underwriting and pricing and on the potential for perpetuating historic or systemic biases through the use of external consumer data and information sources.[7] The circular letter builds on earlier pronouncements concerning insurers’ use of external data sources (“geographical data, educational attainment, homeownership data, licensures, civil judgments and court records that have the potential to reflect disguised and illegal race-based underwriting that violate” existing statutory protections) that are not supported by valid actuarial standards.[8] Valid actuarial standards distinguish between individuals in underwriting and rating based on factors related to the expected costs associated with the transfer of risk. Insurers have long relied on these standards of practice because they demonstrate a clear relationship between the variables used and the insured risk. A related concern is that external data may be collected by vendors that are not regulated by the New York State Department of Financial Services.
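
To illustrate the proxy-variable concern in concrete terms, the following sketch uses entirely synthetic data and an invented “neighborhood score”; it is not drawn from any actual rating plan. It shows how a facially neutral external variable with no actuarial relationship to loss can nonetheless track protected-class membership and, if loaded into rates, produce exactly the disguised disparity the circular letters warn against.

    # A hypothetical illustration (synthetic data, invented names) of how an
    # external variable with no actuarial link to loss can act as a proxy for
    # protected-class membership and produce disguised disparate pricing.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 100_000

    # Protected-class membership (never used directly by the insurer).
    protected = rng.random(n) < 0.3

    # True expected loss is identical across groups: no actuarial difference.
    expected_loss = rng.gamma(shape=2.0, scale=500.0, size=n)

    # An external "neighborhood score" that correlates with protected class
    # but not with loss -- the kind of variable the circular letters flag.
    neighborhood_score = rng.normal(loc=np.where(protected, 40, 60), scale=10)

    # A naive rate that loads on the proxy variable.
    rate = expected_loss * (1 + (60 - neighborhood_score) / 100)

    print(f"Mean loss, protected group:     {expected_loss[protected].mean():8.0f}")
    print(f"Mean loss, non-protected group: {expected_loss[~protected].mean():8.0f}")
    print(f"Mean rate, protected group:     {rate[protected].mean():8.0f}")
    print(f"Mean rate, non-protected group: {rate[~protected].mean():8.0f}")
    # Losses are statistically identical, yet rates diverge by roughly 20%:
    # the proxy variable, not risk, drives the difference.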

The guardrails articulated by the National Association of Insurance Commissioners’ model allow adopting states to tailor consumer protections to the AI systems in use. In summary, the guardrails prescribe the adoption of: (1) governance and risk-management controls, including oversight by senior management, an independent or enterprise-integrated risk management program,[9] and documented policies and procedures; (2) oversight of third-party vendors, covering compliance with existing insurance laws, policies and procedures for the acquisition, auditing, and remediation of data, and cooperation with regulatory investigations; and (3) preparation for regulatory exams, entailing maintenance of records on data sources, data testing, bias analysis, and model drift, including notice and disclosure of adverse underwriting decisions.

The Potential for Insurance Disputes Triggered by AI

It is still too early to identify specific policy changes resulting from the integration of AI technology in the insurance industry, and the body of insurance litigation remains limited, so litigators must at times extend their focus beyond traditional insurance law when pursuing insurer liability. With this in mind, existing cases offer a preview of how the evolving use of AI may shape future litigation. The first class actions involved nH Predict, an AI predictive model used by the defendant carriers in coverage determinations for medically necessary care. In both Estate of Lokken v. UnitedHealth Group and Barrows v. Humana, plaintiffs rely on established insurance law protections to assert that the carriers’ claims personnel over-relied on this “faulty technology” and disregarded human judgment to the detriment of Medicare Advantage policyholders.[10] The Huskey v. State Farm Fire & Cas. Co. class action, meanwhile, highlights concerns about algorithms that can disparately impact policyholders based on their protected-class status.[11] In Huskey, homeowners’ insurance policyholders allege that profiling algorithms used for fraud screening and claims automation delayed or denied their claims on the basis of race, asserting disparate impact under three sections of the Fair Housing Act. The plaintiffs survived State Farm’s motion to dismiss under § 3604(b) by showing (1) a statistical disparity, (2) a specific policy – the insurer’s “decision to use algorithmic decision-making tools to automate claims processing” – and (3) a causal connection between the policy and the statistical disparity. These early cases, most filed before states adopted guidance on insurers’ use of AI, forecast the very issues – bias, data inaccuracy, oversight of third-party vendors – that are now reflected in the regulatory guardrails being imposed on the industry.
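
For a concrete sense of the first Huskey element, a statistical disparity, the following is a hypothetical sketch with invented counts: a standard two-proportion z-test comparing the rates at which two groups’ claims are routed to extra fraud scrutiny. It is illustrative only; actual litigation statistics are far more elaborate.

    # A hypothetical sketch of the kind of "statistical disparity" showing
    # described in Huskey: comparing the rate at which two groups' claims are
    # routed to extra fraud scrutiny. All counts below are invented.
    from math import sqrt
    from statistics import NormalDist

    # Invented counts: claims flagged for manual fraud review, by group.
    flagged_a, total_a = 480, 2_000   # group A
    flagged_b, total_b = 310, 2_000   # group B

    p_a, p_b = flagged_a / total_a, flagged_b / total_b
    p_pool = (flagged_a + flagged_b) / (total_a + total_b)

    # Two-proportion z-test for a difference in flag rates.
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))

    print(f"Flag rate, group A: {p_a:.1%}; group B: {p_b:.1%}")
    print(f"z = {z:.2f}, two-sided p = {p_value:.2g}")
    # A statistically significant gap is only element (1); plaintiffs must
    # still tie it to a specific policy and show causation.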

Identifying Specific Legal Risks

What can dispute resolution professionals anticipate in a world where arbitration clauses and the protection of trade secrets are industry norms? Anything can happen, but several key factors point to a potential for complex disputes: (1) reliance on third-party vendors for the large datasets needed to train AI systems; (2) the likelihood of dependency on third-party vendors for the development of AI systems, especially by smaller insurers; (3) the inherent need to share sensitive information across platforms in these processes; and (4) the fact that insurers remain ultimately liable under the control regime articulated by the National Association of Insurance Commissioners for AI.

Contractual obligations and due diligence are therefore needed for privacy protections and data security, including consideration of technical capabilities, system reliability, and system explainability. These concerns will also warrant representations, warranties, and indemnifications addressing the respective parties’ ongoing obligations to monitor and assess the AI system for regulatory compliance, including oversight of bias and incident reporting. Such terms may serve as fertile ground for disputes. And, as the Huskey case demonstrates, liability may not be confined to insurance law: claims may also arise under state privacy, data protection, bias, and other enacted AI-protection laws.[12]

Insurers must remain aware that AI creates a new realm of potential claims in both business-to-business and business-to-consumer transactions, as the cases highlighted here make clear. At this early stage, the most prominent exposures appear to be data security and bias. The cybersecurity of databases holding personal financial information, made richer still by external consumer data, by itself raises enormous risks. In fact, as I was finalizing this article, New York supplemented its previously mandated cybersecurity regulation[13] with further guidance on cybersecurity in connection with the use of AI.[14] The guidance pointedly reflects concern for the “vast amounts of non-public information” that will be at risk and will give bad actors a greater incentive to attack.

In addition, depending on the external consumer data and information sources or the AI system used, the disparate outcomes that regulators want the industry to avoid may nevertheless result from model drift,[15] the use of “problematic” proxy variables, defective bias-analysis techniques, or any number of other inadvertent glitches.[16] The association’s model, while guiding the development and deployment of AI technology, also imposes on insurers the duty to disclose the basis for their recommendations to all stakeholders, including consumers.[17] This transparency requirement acknowledges that the technology may outpace human understanding of its mechanics – the so-called “black box.”[18] Consequently, insurers may be challenged to provide clear and adequate explanations to insureds regarding automated decisions.
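
Because model drift figures in both the litigation concerns and the record-keeping guardrails, a brief illustration may help. The following sketch computes the population stability index, one common drift statistic; the data, threshold, and rule of thumb are assumptions for illustration, not a regulatory requirement.

    # A hypothetical sketch of one common drift check: the population
    # stability index (PSI) comparing a training-time distribution to the
    # data the deployed model now scores. Data and thresholds are invented.
    import numpy as np

    rng = np.random.default_rng(seed=2)

    # Feature values at training time vs. in production (distribution shifted).
    train_scores = rng.normal(loc=600, scale=50, size=10_000)
    live_scores = rng.normal(loc=630, scale=60, size=10_000)

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Population stability index across quantile bins of the expected data."""
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf  # cover the full real line
        e = np.histogram(expected, bins=edges)[0] / len(expected)
        a = np.histogram(actual, bins=edges)[0] / len(actual)
        e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
        return float(np.sum((a - e) * np.log(a / e)))

    value = psi(train_scores, live_scores)
    print(f"PSI = {value:.3f}")
    # A common rule of thumb: PSI > 0.25 signals material drift warranting
    # the retesting and documentation the regulatory guardrails contemplate.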

These are early days in the use of AI by insurers in an increasingly regulated environment; currently, only one-third of the states have adopted the National Association of Insurance Commissioners’ model. Staying abreast of this technology as it evolves is crucial to our role as alternative dispute resolution professionals, because it will undoubtedly become a focal point of disputes in an industry that is central to both national and global economies.


Margarita Echevarria is an arbitrator and mediator serving on the American Arbitration Association’s commercial and insurance panels, the ARIAS-U.S. certified arbitrator roster, and the arbitration and mediation panels of NAM, FINRA, and the N.Y./N.J. federal and state courts. She is a former in-house counsel and chief compliance officer for major insurers and a former adjunct professor of insurance law. This article appears in a forthcoming issue of NY Dispute Resolution Lawyer, the publication of NYSBA’s Dispute Resolution Section. For more information, visit NYSBA.ORG/DRS.

Endnotes:

[1] Treasury Department Request for Information, 89 Fed. Reg. 50048 (June 12, 2024); International Association of Insurance Supervisors Newsletter, Issue 135 (Sept. 2024); EU-US Insurance Dialogue Project Report, “Big Data Issue Paper,” Oct. 31, 2018.

[2] The Federal Insurance Office is housed within the U.S. Department of the Treasury pursuant to the Dodd-Frank Act of 2010, Pub. L. 111–203, 124 Stat. 1376.

[3] NAIC Principles on Artificial Intelligence (AI) adopted by the Executive Committee, Aug. 14, 2020. https://content.naic.org/sites/default/files/call_materials/Attachment_A_AI_Principles.pdf.

[4] NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers, adopted Dec. 4, 2023, has been adopted by Alaska, Arkansas, Connecticut, Illinois, Iowa, Kentucky, Maryland, Michigan, Nebraska, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, the District of Columbia, and West Virginia; insurance-specific regulation or guidance has been adopted by New York, California, Colorado and Texas (see the NAIC’s “Implementation of NAIC Model Bulletin on AI” chart, as of December 1, 2024). Colorado has not adopted the model, but see SB21-169 (July 2021), https://doi.colorado.gov/for-consumers/sb21-169; Colo. Rev. Stat. § 10-3-1104.9, https://casetext.com/statute/colorado-revised-statutes/title-10-insurance/regulation-of-insurance-companies/article-3-regulation-of-insurance-companies/part-11-unfair-competition-deceptive-practices/section-10-3-11049-insurers-use-of-external-consumer-data-and-information-sources-algorithms-and-predictive-models-unfair-discrimination-prohibited-rules-stakeholder-process-required-investigations-definitions-repeal; the Colorado Privacy Act (July 2023), Colo. Rev. Stat. § 6-1-1302, https://www.coloradosos.gov/CCR/GenerateRulePdf.do?ruleVersionId=10872&fileName=4%20CCR%20904-3; and Colorado’s Consumer Protections for Artificial Intelligence (May 2024), SB24-205, https://leg.colorado.gov/bills/sb24-205.

[5] A notable distinction is N.Y. Circular Letter No. 7 (July 2024), https://www.dfs.ny.gov/industry-guidance/circular-letters/cl2024-07, which focuses on pricing and underwriting.

[6] See Comments of the American Property Casualty Insurance Association on the Treasury Department’s RFI (89 Fed. Reg. 50048, June 12, 2024), Aug. 12, 2024, at 5 (Comment ID: TREAS-DO-2024-0011-0041).

[7] DFS Superintendent Harris Adopts Guidance To Combat Discrimination in Artificial Intelligence, Press Release, July 11, 2024, https://www.dfs.ny.gov/reports_and_publications/press_releases/pr20240711241. See also Colo. SB21-169, effective June 2024, regulating external consumer data and information sources (ECDIS), https://doi.colorado.gov/for-consumers/sb21-169-protecting-consumers-from-unfair-discrimination-in-insurance-practices.

[8] N.Y. Circular Letter No. 1, Use of External Consumer Data and Information Sources in Underwriting for Life Insurance, Jan. 18, 2019, https://www.dfs.ny.gov/industry_guidance/circular_letters/cl2019_01. The DFS specifically cites the statutory protections of Insurance Law (Chapter 28) Article 26, §§ 2607–2608, and Article 42, § 4224, both prohibiting discrimination. https://casetext.com/statute/consolidated-laws-of-new-york/chapter-insurance/article-26-unfair-claim-settlement-practices-other-misconduct-discrimination; https://casetext.com/statute/consolidated-laws-of-new-york/chapter-insurance/article-42-life-insurance-companies-and-accident-and-health-insurance-companies-and-legal-services-insurance-companies.

[9] Reference is made to the Artificial Intelligence Risk Management Framework adopted by the National Institute of Standards and Technology (NIST): NIST AI 100-1, Artificial Intelligence Risk Management Framework (AI RMF 1.0), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[10] Claims were made under the Unfair Claims Settlement Practices Act and Unfair and Deceptive Insurance Practices Acts, with additional claims for breach of contract and insurer bad faith. Estate of Lokken v. UnitedHealth Group, 23cv03514, USDC-Minnesota, filed 11/14/23, https://casetext.com/case/the-estate-of-lokken-v-unitedhealth-grp; Barrows v. Humana, 23cv00654, USDC-W.D. Kentucky, amended complaint filed 4/22/24, https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/12/Barrows-et-al-v.-Humana-Inc.-Docket-No.-3-23-cv-00654-W.D.-Ky.-Dec-12-2023-Court-Docket.pdf.

[11] Huskey v. State Farm Fire & Cas. Co., 22 C 7014, USDC-N.D. Illinois, op. of Sept. 11, 2023, at 8–9, https://casetext.com/case/huskey-v-state-farm-fire-cas-co.

[12] Cal. Consumer Privacy Act of 2018 (updated Jan. 2023), Cal. Civ. Code § 1798.192 (2023), https://cppa.ca.gov/regulations/pdf/cppa_act.pdf; Va. Consumer Data Protection Act (Jan. 2023), Va. Code § 59.1-578, https://law.lis.virginia.gov/vacode/title59.1/chapter53/; N.J. Omnibus Privacy Law (Jan. 2025), SB 332, https://www.njleg.state.nj.us/bill-search/2022/S332/bill-text?f=S0500&n=332_R6; Colorado Consumer Protections for Artificial Intelligence (May 2024), SB24-205; Rhode Island Data Transparency and Privacy Protection Act (June 2024), 2024-H 7787A, 2024-S 2500, https://webserver.rilegislature.gov/BillText/BillText24/HouseText24/H7787A.pdf.

[13] 23 N.Y.C.R.R. Part 500, Mar. 1, 2017.

[14] Industry Letter, Cybersecurity Risks Arising From Artificial Intelligence and Strategies To Combat Related Risks, Oct. 16, 2024, https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks.

[15] “Model drift” refers to the decay of a model’s performance over time arising from changes in the definitions, distributions, and/or statistical properties between the data used to train the model and the data on which it is deployed. § 2, NAIC Model Bulletin, https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf.

[16] N.Y. Circular Letter No. 7 (July 2024), § IV E; The American Academy of Actuaries, Discrimination: Considerations for Machine Learning, AI Models and Underlying Data (Feb. 2024), § C, https://www.actuary.org/sites/default/files/2023-08/risk-brief-discrimination.pdf.

[17] See endnote 3, specifically “Transparent” and “Safe, Secure, & Robust Systems” sections; and endnote 14.

[18] NIST, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, “AI Systems as Magic,” Special Publication 1270 (March 2022), https://doi.org/10.6028/NIST.SP.1270; Citigroup Report on AI, AI in Finance: Bot, Bank & Beyond (June 2024), p. 64, https://www.citigroup.com/global/insights/ai-in-finance.
