A forward-looking analysis showing how insurers and insurtechs can align with emerging mandates for governance, transparency, and accountability in artificial intelligence oversight.
Introduction
Artificial intelligence is no longer just an emerging technology in insurance. It powers underwriting, pricing, claims triage, fraud detection, personalized marketing, and more. With the growing appetite for efficiency and competitive advantage, insurers have turned to machine learning, predictive analytics, and increasingly, generative AI. But these systems also bring opaque decision making, algorithmic bias, and risk of consumer harm, especially when unfair discrimination or flawed models drive outcomes.
Regulators are responding. The NAIC first issued aspirational Principles for Artificial Intelligence in 2020, emphasizing fairness, accountability, transparency, and privacy. The 2023 NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers turned that guidance into a formal regulatory expectation. It required development of written AI oversight programs and documentation across the entire model lifecycle.
Now the 2025 NAIC AI Bulletin signals the next phase: shifting from framework to enforcement. It builds on the 2023 Bulletin and on the NAIC Big Data and Artificial Intelligence Working Group’s model guidance, which emphasizes three pillars: governance, transparency, and accountability. This guidance is already influencing state regulators, with nearly half the states having adopted the bulletin directly or in modified form.
This article explores the 2023 bulletin's provisions, its foundational three-pillar framework, and state-by-state adoption trends. It then looks ahead to the 2025 updates, projecting deeper controls over generative AI, third party accountability, and alignment with the NIST AI Risk Management Framework. Finally, it outlines practical implications for carriers, reinsurers, and insurtechs. The message to compliance leaders is clear: oversight of AI is now operational, not optional.
Key Takeaways from the 2023 NAIC AI Model Bulletin
The 2023 bulletin marked a sharp turn from an aspirational to an enforceable regulatory posture. It introduced expectations around governance, documentation, risk mitigation, and audit readiness.
1. Governance through an Artificial Intelligence System Program
Insurers must develop and maintain a written AI System Program (AIS Program). This program governs AI use across all areas of the insurance lifecycle, including underwriting, rating, claims, fraud detection, and marketing. Accountability must reside with senior leadership. Many insurers have formed cross-functional committees spanning business, actuarial, data science, legal, compliance, and technology. The AIS Program must oversee the full lifecycle of any AI or predictive model in use, from design and development to retirement.
2. Risk Mitigation, Fairness, and Transparency
The bulletin emphasizes reducing adverse consumer outcomes such as inaccurate, arbitrary, or unfairly discriminatory results. Insurers are expected to monitor for model drift, validate performance, test for bias, and ensure appropriate human oversight. These steps apply even when using third party models.
“Decisions made by Insurers using AI Systems must comply with legal and regulatory standards, including unfair trade practice laws... regardless of the tools and methods used.”¹
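The monitoring steps above can be made concrete. The sketch below is illustrative only: the metrics (a population stability index for drift, an approval-rate ratio for disparate outcomes) and the thresholds noted in the comments are common industry choices, not requirements prescribed by the bulletin.

```python
# Illustrative sketch of two checks an insurer might run against a deployed
# model: score drift (PSI) and a simple fairness ratio. Metric choices and
# thresholds here are assumptions, not NAIC-mandated standards.
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10, lo=0.0, hi=1.0):
    """PSI between baseline and current score distributions.

    Values above roughly 0.2 are commonly treated as a drift signal
    warranting revalidation.
    """
    eps = 1e-6
    width = (hi - lo) / bins

    def bucket(scores):
        counts = Counter(min(int((s - lo) / width), bins - 1) for s in scores)
        total = len(scores)
        return [counts.get(i, 0) / total + eps for i in range(bins)]

    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def approval_rate_ratio(decisions, groups, protected, reference):
    """Ratio of approval rates between two groups (1 = approve, 0 = decline).

    Ratios far below 1.0 suggest the model's outcomes deserve human review
    and bias testing before continued use.
    """
    def rate(g):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(picks) / len(picks)

    return rate(protected) / rate(reference)
```

In practice these checks would run on a schedule against production scoring logs, with results feeding the validation reports the bulletin expects insurers to retain.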
3. Third Party Accountability
Insurers remain responsible for AI systems and data acquired from vendors. The bulletin requires due diligence, validation, and when feasible, contractual rights such as audit access, transparency, and cooperation with regulators.
4. Documentation and Examination Readiness
Insurers must be prepared to provide documentation to regulators at any time. This includes:
- Written AIS Programs and adoption records
- Inventories of models and AI systems
- Data lineage, model objectives, and validation reports
- Governance structures and risk control policies
- Third party agreements and oversight protocols
“An Insurer can expect to be asked about its development, deployment, and use of AI Systems... and outcomes resulting from them.”²
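One way to keep that documentation examination-ready is to maintain a structured model inventory. The record below is a hypothetical sketch; the field names are illustrative, not prescribed by the NAIC, but they map to the documentation categories listed above.

```python
# Hypothetical model-inventory record covering the documentation categories
# the bulletin describes (lineage, objectives, validation, governance,
# third party oversight). Field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class ModelInventoryRecord:
    model_id: str
    business_use: str               # e.g. underwriting, claims triage
    owner: str                      # accountable leader or committee
    data_sources: List[str]         # lineage of training and input data
    last_validation: str            # ISO date of latest validation report
    third_party: bool               # vendor-supplied model?
    risk_controls: List[str] = field(default_factory=list)

record = ModelInventoryRecord(
    model_id="uw-score-v3",
    business_use="underwriting",
    owner="Model Risk Committee",
    data_sources=["policy_history", "credit_bureau_feed"],
    last_validation="2025-03-31",
    third_party=True,
    risk_controls=["quarterly bias testing", "human review of declines"],
)
```

An inventory built from records like this can be exported (for example via `asdict`) directly into the materials a regulator requests during examination.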
The Three Pillars: Governance, Transparency, Accountability
In addition to the bulletin, the NAIC Big Data and AI Working Group proposed guidance that reinforces three foundational obligations for any AI used in insurance:
- Governance: Defined oversight, controls, and senior-level accountability.
- Transparency: Documentation of decision making, model logic, and consumer communications.
- Accountability: Measurable performance standards, testing protocols, and regulatory cooperation.
These pillars now serve as the baseline for insurer compliance programs and provide structure for future regulatory enforcement.
What the 2025 Bulletin Is Expected to Add
The 2025 bulletin is anticipated to refine and expand the framework set forth in 2023, responding to developments in technology, litigation, and regulatory priorities.
1. Generative AI Oversight
While the 2023 bulletin defined generative AI, it imposed no distinct compliance requirements. That is expected to change. New guidance may require:
- Model hallucination safeguards
- Documentation of training data sources
- Evaluation of generated content’s accuracy and fairness
- Role-specific rules for generative AI in claims summaries, chatbots, and marketing
These changes align with recent Federal Trade Commission guidance warning against deceptive or unexplainable AI-generated outputs.³
2. Integration with the NIST AI Risk Management Framework
The 2023 bulletin allows insurers to align with NIST’s AI Risk Management Framework, but the 2025 update may require it. This would offer consistency across federal and state frameworks, especially regarding:
- Risk classification
- Performance thresholds
- Explainability standards
- Bias mitigation practices
3. Expanded Consumer Rights
New expectations may include:
- Notice to consumers when AI is involved in a decision
- Right to explanation of the logic or data used
- Opt-out or appeal rights for high-impact decisions such as underwriting or pricing
This direction reflects developments in Colorado and proposed legislation in California and New York, signaling stronger individual protections.
4. Mandatory Vendor Controls
The update may require enforceable language in third party contracts, such as:
- Audit rights and reporting
- Regulatory cooperation mandates
- Indemnification for compliance violations
- Disclosure of data sources and retraining schedules
State Adoption Landscape
As of August 2025, at least 24 states and the District of Columbia have adopted the 2023 NAIC AI Bulletin in full or in substantially similar form.⁴ Adopting jurisdictions include:
- Alaska (February 2024)
- Arkansas (July 2024)
- Connecticut (February 2024)
- Delaware (February 2025)
- District of Columbia (May 2024)
- Illinois (March 2024)
- Iowa (November 2024)
- Kentucky (April 2024)
- Maryland (April 2024)
- Massachusetts (December 2024)
- Michigan (August 2024)
- Nebraska (June 2024)
- Nevada (February 2024)
- New Hampshire (February 2024)
- North Carolina (December 2024)
- Oklahoma (November 2024)
- Pennsylvania (April 2024)
- Rhode Island (March 2024)
- Vermont (March 2024)
- Virginia (July 2024)
- Washington (April 2024)
- West Virginia (August 2024)
- Wisconsin (March 2025)
Colorado and New York are pursuing their own frameworks. Colorado has implemented fairness testing requirements for insurance algorithms, while New York regulators have proposed a circular letter focused on underwriting and rating fairness.
Practical Implications for Carriers and Insurtechs
Start
- Create cross-functional AI governance bodies
- Build AIS Programs based on governance, transparency, and accountability
- Audit all internal and third party AI systems for risk and performance
Stop
- Deploying unvalidated models or relying solely on vendor representations
- Running shadow AI initiatives without oversight
- Using opaque models in regulated decision making
Improve
- Enhance documentation of data, training, validation, and monitoring
- Align internal audit to include AI oversight
- Require third party contracts to include compliance obligations and auditability
Conclusion
The 2025 NAIC Bulletin represents a shift from theoretical frameworks to operational enforcement. The combined weight of the three pillars of governance, transparency, and accountability forms a durable foundation for regulatory compliance.
Compliance is no longer optional. AI governance now demands the same maturity as financial risk, cybersecurity, and solvency monitoring. Insurers and insurtechs must treat AI oversight as enterprise risk management.
Now is the time to act. Prepare internal systems, document decision logic, and modernize governance. The winners in the AI-powered future of insurance will be those who not only innovate quickly, but govern wisely.
Footnotes
1. NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, Section 3 (2023)
2. Ibid., Section 4
3. FTC Business Blog, "Keep your AI claims in check" (2023)
4. NAIC Working Group AI Bulletin Adoption Map (August 2025)