In July, the Innovation, Cybersecurity and Technology Committee of the National Association of Insurance Commissioners (NAIC) released an exposure draft of its model bulletin titled “Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers” (draft Model Bulletin). The draft Model Bulletin, which is intended to be distributed by state insurance departments to all insurers licensed in their respective jurisdictions, provides regulatory guidance and expectations for the use of artificial intelligence (AI) systems by insurers.
Regulatory Guidance and Expectations
The draft Model Bulletin encourages insurers to develop, implement and maintain a written program for the use of AI systems (AIS program) that is designed to mitigate the risk that the use of AI systems in making or supporting decisions affecting insurers’ customers will result in decisions that are arbitrary or capricious, are unfairly discriminatory or otherwise violate unfair trade practice laws.
According to the draft Model Bulletin, the AIS program should: (1) address governance, risk management controls and internal audit functions, (2) be adopted by the board of directors or an appropriate board committee, (3) be tailored to and proportionate with the insurer’s use and reliance on AI and AI systems, (4) address the use of all AI systems that make decisions impacting customers and (5) address the use of AI systems across the insurance product life cycle.
Governance. The AIS program should include a governance framework for the oversight of the insurer’s AI systems, and the framework should address:
- Standards for development of AI systems.
- Policies, processes and procedures for each stage of an AI system life cycle.
- Requirements to document compliance with the AIS program.
- Roles and responsibilities for key personnel charged with carrying out the AIS program.
- Monitoring, auditing and reporting protocols and functions.
- Processes and procedures for designing, developing, verifying, deploying, using and monitoring predictive models.
Risk Management and Internal Controls. The AIS program should document the insurer’s risk identification, mitigation and management framework and internal controls for AI systems. Risk management and internal controls should address:
- The oversight and approval process for developing, adopting or acquiring AI systems.
- Data practices and accountability procedures.
- Management and oversight of algorithms and predictive models.
- Validating, testing and auditing of data, algorithms and predictive models.
- Protecting non-public information.
- Data and record retention.
- For predictive models, a description of the intended goals and objectives and how the model is developed and validated to ensure that the AI systems that rely on it correctly and efficiently predict or implement those goals and objectives.
Third-Party AI Systems. The AIS program should address the insurer’s standards for acquiring, using and relying on third-party AI systems. The insurer must conduct due diligence on third parties to ensure that their AI systems that make or support decisions impacting the insurer’s customers are designed to meet the legal standards imposed on the insurer. The insurer also must conduct audits and other activities to confirm third parties’ compliance with contractual and regulatory requirements.
Under the draft Model Bulletin, contracts with third parties must include terms that:
- Require third-party data and model vendors and AI system developers to have and maintain an AIS program commensurate with the standards expected of the insurer.
- Entitle the insurer to audit the third-party vendor for compliance.
- Entitle the insurer to receive audit reports by qualified auditing entities confirming the third party’s compliance.
- Require the third party to cooperate with regulators, regulatory inquiries and investigations related to the insurer’s use of the third party’s product or services.
Regulatory Oversight
The draft Model Bulletin also provides a list of information and documentation relating to AI systems that an insurer may be asked to provide in connection with an investigation or market conduct action, including material relating to:
- The insurer’s AIS program, such as (1) the scope of the AIS program, (2) how the AIS program is tailored to and proportionate with the insurer’s use and reliance on AI systems and (3) policies, procedures, guidance, training materials and other information relating to the adoption, implementation, maintenance, monitoring and oversight of the AIS program.
- The insurer’s pre-acquisition/pre-use diligence, monitoring, oversight and auditing of AI systems, whether developed by the insurer or deployed by a third party.
- The insurer’s implementation of and compliance with the AIS program, including documents relating to the insurer’s monitoring and audit activities with respect to compliance.
- Data, models and AI systems developed by third parties that are relied on or used by or on behalf of the insurer, including (1) due diligence conducted on third parties and their data, models or AI systems, (2) contracts with third-party AI system, model or data vendors and (3) audits and confirmation processes performed with respect to third-party compliance with contractual and regulatory obligations.
The draft Model Bulletin acknowledges that insurers may demonstrate their compliance with applicable law through means other than those it describes. Its stated goal is to ensure that insurers are aware of the relevant insurance department’s expectations about how AI systems should be governed and managed and the types of information and documentation about AI systems that an insurer may be asked to produce.
Takeaways
As the draft Model Bulletin notes, AI techniques are being deployed across all stages of the insurance life cycle, and they are transforming the insurance industry. While the NAIC encourages the development and use of AI systems that contribute to safe and stable insurance markets, it cautions that AI systems come with their own set of risks, such as the potential for inaccuracy, bias resulting in unfair discrimination and data vulnerability.
It remains to be seen whether and to what extent state insurance departments will adopt and distribute the Model Bulletin in its final form. However, the establishment of minimum expectations with respect to insurers’ development, acquisition and use of AI systems to make or support decisions affecting customers may help mitigate the risks posed by AI systems in the insurance industry.
This memorandum is provided by Grand Park Law Group, A.P.C. LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.