
NAIC Endorses a Model Bulletin Regarding the Utilization of AI within the Insurance Sector


The insurance industry is evolving rapidly, with Artificial Intelligence (AI) playing a significant role in its digital transformation. Insurance companies, and the firms and organizations that support the industry, need clear guidance to ensure that technological advances comply with regulatory standards and ethical principles and safeguard consumer interests. To address that need, the National Association of Insurance Commissioners (NAIC) adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (the “Bulletin”) at its 2023 Fall National Meeting.

The Bulletin was developed by the NAIC Innovation, Cybersecurity, and Technology Committee, led by Chair Kathleen A. Birrane, Maryland Insurance Commissioner, with co-vice chairs Michael Conway, Commissioner of the Colorado Division of Insurance, and Doug Ommen, Commissioner of the Iowa Insurance Division. The NAIC AI Model Bulletin assists insurers in navigating this technological transition while emphasizing the importance of ethical AI innovation and deployment, regulatory compliance, risk management and consumer trust.

The Bulletin should not be viewed as strict regulation, but rather as a guide setting forth regulators’ principles for how insurers are expected to operate in the complex landscape of AI utilization. It presents a consumer-centric approach, advocating for transparency, accountability and fairness in all aspects of insurer-policyholder interactions that involve sophisticated analytical and computational technologies.

The NAIC defines AI systems as “machine-based systems that can generate outputs such as predictions, recommendations, content (such as text, images, videos or sounds), or other output influencing decisions made in real or virtual environments.” Under this definition, AI encompasses predictive modeling, which enables computer systems to process historical data, recognize patterns within that data, forecast future events and inform decisions. Predictive modeling is a key component of insurers’ operations and decision-making processes, and the Bulletin acknowledges its significance and relevance at its intersection with AI.
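To make that connection concrete, here is a minimal, purely illustrative sketch of predictive modeling in the sense the NAIC definition describes: a model is fit to synthetic “historical” policy data and produces a claim-probability prediction of the kind that can influence a decision. The features, data and scikit-learn workflow are our own assumptions, not anything prescribed by the Bulletin.

```python
# Illustrative only: a predictive model fit to synthetic "historical" policy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical historical features: driver age and annual mileage (in thousands of miles).
age = rng.uniform(18, 80, n)
mileage = rng.uniform(2, 30, n)

# Hypothetical historical outcome: claim likelihood rises with mileage and falls with age.
p_claim = 1 / (1 + np.exp(-(-2.0 + 0.08 * mileage - 0.02 * age)))
claim = rng.binomial(1, p_claim)

X = np.column_stack([age, mileage])
X_train, X_test, y_train, y_test = train_test_split(X, claim, random_state=0)

# The model learns the pattern from the data and outputs a prediction:
# the kind of output "influencing decisions" that the NAIC definition refers to.
model = LogisticRegression().fit(X_train, y_train)
prob = model.predict_proba([[30, 25]])[0, 1]
print(f"Predicted claim probability for a 30-year-old driving 25k miles/year: {prob:.3f}")
```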

The NAIC Bulletin suggests that AI could impact many aspects of the insurance sector, including marketing, customer service, underwriting, claims processing and fraud detection. But while promoting innovation through AI, the Bulletin also warns that AI systems can present unique risks, such as inaccuracies, unfair discrimination, data vulnerability and lack of transparency for consumers.

The overarching message of the Bulletin is that, going forward, regulators will expect insurers to take appropriate steps to control and mitigate those risks. A key point for insurers is the NAIC’s caution that an insurer’s AI practices with potential impact on consumers could be “subject to the Department’s examination to determine that the reliance on AI Systems are compliant with all applicable existing legal standards governing the conduct of the Insurer.”

The Bulletin is divided into four sections, each covering core aspects of AI implementation in the insurance industry and underscoring the importance of careful governance, risk management strategies and protocols to ensure fair and accurate outcomes for consumers. These sections address: 1) the laws and regulations the Bulletin relies on; 2) definitions related to AI and fairness; 3) regulatory guidance and expectations, including general guidelines, governance, and third-party AI systems and data; and 4) regulatory oversight and examination considerations. The Bulletin also references the "Principles on Artificial Intelligence" that the NAIC adopted in 2020 as an additional source of insurer guidance around fairness, accountability, compliance, transparency and security.

The Bulletin emphasizes that AI-supported decisions by insurers must adhere to relevant insurance laws and regulations covering aspects such as unfair trade practices, claims settlements, governance reporting and rate fairness in property/casualty insurance and workers' compensation. Insurers are expected to “…ensure that the use of AI Systems does not result in…” unfair trade practices or unfair claims settlement practices, and that “rates, rating rules, and rating plans developed using AI techniques and predictive models that rely on data and machine learning do not result in excessive, inadequate, or unfairly discriminatory insurance rates with respect to all forms of casualty insurance.”

AI systems should prioritize minimizing the risk of adverse consumer outcomes. This involves establishing governance structures, robust risk management and internal audit functions. Responsibility for AI program development, implementation, monitoring, and oversight, along with setting an AI strategy, should be “vested in senior management accountable to the board” or a relevant committee.

It is crucial to identify and address the use of AI systems across the insurance lifecycle, from product development to claims management to fraud detection. Additionally, processes must ensure that consumers are informed about AI usage and provided with appropriate access to information throughout the insurance process. Regarding predictive models specifically, insurers must outline the techniques employed “to detect and address errors, performance issues, outliers, or unfair discrimination in the insurance practices” stemming from the predictive model’s application.
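As a purely illustrative example of what such techniques might look like in practice, the short sketch below runs a few routine health checks on a batch of model outputs: performance drift against a baseline, outlier scores and the overall adverse-decision rate. The thresholds, column names and pandas layout are hypothetical choices on our part; the Bulletin does not prescribe any particular method.

```python
# Illustrative monitoring checks on a scored batch of policies (hypothetical columns/thresholds).
import pandas as pd

def monitoring_report(scored: pd.DataFrame,
                      baseline_accuracy: float,
                      drift_tolerance: float = 0.05,
                      outlier_z: float = 3.0) -> dict:
    """Summarize simple health checks on a batch of scored records.

    Expects columns: 'predicted' (model score in [0, 1]), 'actual' (observed outcome, 0/1)
    and 'decision' (0/1 business decision derived from the score).
    """
    accuracy = (scored["predicted"].round() == scored["actual"]).mean()
    z = (scored["predicted"] - scored["predicted"].mean()) / scored["predicted"].std()
    return {
        "accuracy": float(accuracy),
        "performance_drift_flag": bool(accuracy < baseline_accuracy - drift_tolerance),
        "outlier_count": int((z.abs() > outlier_z).sum()),
        "adverse_decision_rate": float(scored["decision"].mean()),
    }

# Example with a tiny hypothetical batch of scored policies.
batch = pd.DataFrame({"predicted": [0.10, 0.20, 0.90, 0.05],
                      "actual":    [0,    0,    1,    0],
                      "decision":  [0,    0,    1,    0]})
print(monitoring_report(batch, baseline_accuracy=0.95))
```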

In the brief time since the NAIC adopted the Bulletin, thirteen jurisdictions have already implemented it: Alaska, Connecticut, the District of Columbia, Illinois, Kentucky, Maryland, Nebraska, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont and Washington, and more are expected to follow. The adoption of the NAIC AI Model Bulletin by these jurisdictions underscores a collective commitment to fostering responsible AI integration in the insurance sector. By embracing ethical principles, regulatory guidance and industry collaboration, they are paving the way for a future in which AI transforms insurance operations while prioritizing consumer protection and ethical considerations.

Three states – California, Colorado and New York – have taken different approaches to overseeing insurers’ implementation of AI in their business operations.

In June 2022, the California Department of Insurance issued Bulletin 2022-5, imposing constraints on the insurance industry's use of AI and alternative data sets. That bulletin highlighted recent allegations of racial discrimination in various insurance practices and emphasized insurers’ responsibility to treat all individuals equally. It also cited examples of ongoing investigations into potential unfair discrimination, such as subjecting claims from specific urban ZIP codes to additional scrutiny, employing facial recognition in claims assessment and collecting irrelevant personal information during underwriting.

In July 2021, Colorado enacted Senate Bill 21-169, which mandates that life insurance providers assess their "external consumer data, information sources, algorithms, and predictive models" to prevent biased treatment of consumers based on protected characteristics.

In September 2023, the Colorado Division of Insurance (CDI) also published a draft regulation on quantitative testing of algorithms and predictive models used in life insurance underwriting for unfair discrimination. The draft regulation requires insurers to assess their data and models that rely on external consumer information, using the inferred race of their life insurance applicants, and to take appropriate measures to address any unfairly discriminatory outcomes. Building on its life insurance regulations, the CDI has recently initiated discussions on personal auto and health insurance. The first personal auto stakeholder meeting occurred in April 2023, and the first health stakeholder meeting took place this past February.

In January 2024, the New York State Department of Financial Services released a proposed Insurance Circular Letter (the “Letter”) regarding “the use of artificial intelligence systems and external consumer data and information sources in insurance underwriting and pricing.” The Letter is intended to address only processes that are part of underwriting and pricing. It is broader in scope than the CDI regulation, however, in that it encompasses all types of insurance rather than just life insurance. Insurers are responsible for demonstrating compliance with existing laws, ensuring that their use of external data and AI systems is not unfairly discriminatory, aligns with actuarial standards, is based on reasonable expectations, and does not serve as a proxy for any protected class.

Unlike the CDI draft regulation on quantitative testing, the New York Letter is broadly applicable to any group of “similarly situated insureds,” rather than solely the insureds of a protected class. Another difference between the CDI draft regulation and the Letter is that the Letter does not prescribe a specific approach to quantitative testing, but instead provides flexible guidelines and several examples of recommended statistical techniques for testing for disproportionate adverse effects.
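To illustrate one form such quantitative testing could take (neither the Letter nor the CDI draft prescribes this particular calculation), the sketch below compares adverse-decision rates between two groups of applicants using a simple rate ratio and a chi-square test of independence. The group labels and counts are hypothetical.

```python
# Illustrative test for a disproportionate adverse effect between two groups (hypothetical data).
import numpy as np
from scipy.stats import chi2_contingency

def disproportionate_effect(adverse_a: int, total_a: int,
                            adverse_b: int, total_b: int) -> dict:
    """Compare adverse-decision rates for group A against a reference group B."""
    rate_a = adverse_a / total_a
    rate_b = adverse_b / total_b
    # 2x2 contingency table of adverse vs. non-adverse decisions by group.
    table = np.array([[adverse_a, total_a - adverse_a],
                      [adverse_b, total_b - adverse_b]])
    chi2, p_value, dof, expected = chi2_contingency(table)
    return {
        "adverse_rate_a": rate_a,
        "adverse_rate_b": rate_b,
        "adverse_rate_ratio": rate_a / rate_b,
        "p_value": p_value,
    }

# Hypothetical counts: 120 of 800 applicants in group A received adverse decisions,
# versus 90 of 900 applicants in the reference group.
print(disproportionate_effect(120, 800, 90, 900))
```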

As AI becomes increasingly integrated into all facets of the insurance sector, effective collaboration among regulators, insurers and other stakeholders will be vital to ensuring its responsible implementation. The adoption of the NAIC Bulletin represents a critical change in how regulators view AI integration within insurance. The divergent approaches some states have already taken point to the challenges insurers could face under non-uniform regulatory expectations and regulation. But regardless of which way the “winds of change” blow, the Bulletin will continue to provide a robust set of standards and guiding principles, helping insurers navigate the regulatory landscape while (hopefully) also encouraging innovation and efficiencies that benefit both insurers and the insured.

The Bulletin is a call to action for insurers to demonstrate their commitment to using AI responsibly. Insurers can foster trust with consumers and regulators by creating a documented program outlining their responsible use of AI, developing AI models that are as interpretable as possible, ensuring the fairness and accuracy of the data used to train those models, and regularly evaluating the models for bias and implementing measures to mitigate it.
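As one example of a mitigation measure an insurer might evaluate (the Bulletin does not mandate any particular technique), the sketch below applies “reweighing,” a pre-processing method from the fairness literature that reweights training records so that group membership and the outcome are statistically independent before a model is refit. The column names and data are hypothetical.

```python
# Illustrative "reweighing" of training records (hypothetical columns and data).
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Per-record weight: probability of the (group, outcome) pair under independence
    divided by its observed probability in the training data."""
    p_group = df[group_col].value_counts(normalize=True)
    p_outcome = df[outcome_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, outcome_col]).size() / len(df)

    def weight(row):
        expected = p_group[row[group_col]] * p_outcome[row[outcome_col]]
        observed = p_joint[(row[group_col], row[outcome_col])]
        return expected / observed

    return df.apply(weight, axis=1)

# Hypothetical training data; the resulting weights could be passed to a model-fitting
# routine (for example via a sample_weight argument) before re-testing for bias.
train = pd.DataFrame({"group":   ["A", "A", "A", "B", "B", "B"],
                      "adverse": [1,   1,   0,   0,   0,   1]})
print(reweighing_weights(train, "group", "adverse"))
```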

Pinnacle is an Illinois-based property and casualty insurance consulting firm, and we have produced numerous articles and presentations on this topic. Our research on bias and unfair discrimination has revealed the importance of partnerships in managing this complicated and rapidly evolving environment. A trusted partner who understands all aspects of the various regulations and guidance can help insurance companies effectively and efficiently approach this new journey of building trust, interpreting insurance regulations on the ethical use of AI, and developing a strategy for evaluating and testing their data and models.

 

References:

1. https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf
2. https://insurancenewsnet.com/innarticle/7-states-adopt-ai-rules-insurer-compliance-called-significant-lift
3. https://www.insurancejournal.com/news/east/2024/04/10/768942.htm
4. https://www.commerce.alaska.gov/web/Portals/11/Pub/B24-01.pdf
5. https://portal.ct.gov/cid/-/media/cid/1_bulletins/bulletin-mc-25.pdf
6. https://disb.dc.gov/sites/default/files/dc/sites/disb/page_content/attachments/DISB%20AI%20Bulletin%205.20.24.pdf
7. https://idoi.illinois.gov/content/dam/soi/en/web/insurance/companies/companybulletins/CB2024-08.pdf
8. https://insurance.ky.gov/ppc/Documents/20240416-0942.pdf
9. https://insurance.maryland.gov/Insurer/Documents/bulletins/24-11-The-Use-of-Artificial-Intelligence-Systems-in-Insurance.pdf
10. https://doi.nebraska.gov/sites/default/files/doc/IGD%20-%20-%20H1.pdf
11. https://epubs.nsla.nv.gov/statepubs/epubs/41785-2024-001.pdf
12. https://www.nh.gov/insurance/media/bulletins/2024/documents/bulletin-ins-24-011-ab.pdf
13. https://www.pacodeandbulletin.gov/Display/pabull?file=/secure/pabulletin/data/vol54/54-14/484.html
14. https://dbr.ri.gov/sites/g/files/xkgbur696/files/2024-03/INS_Bulletin%20-%20Artificial%20Intelligence.pdf
15. https://dfr.vermont.gov/sites/finreg/files/regbul/dfr-insurance-bulletin-229-ai.pdf
16. https://www.insurance.wa.gov/sites/default/files/documents/2024-02%20naic-ai-technical-assistance-advisory.pdf
