The insurance industry is entering a period of sustained regulatory scrutiny regarding the use of artificial intelligence in claims operations. What began as exploratory inquiries has evolved into formal information requests, detailed questionnaires, and targeted examinations. State Departments of Insurance are no longer asking whether artificial intelligence is being used. They are asking how it is being used, who is accountable, how it is governed, and whether it complies with longstanding duties of good faith and fair dealing.
For carriers, independent adjusting firms, third-party administrators, and technology vendors, the regulatory message is clear. Innovation is permitted. Abdication of responsibility is not.
The Regulatory Context for AI in Claims
Insurance regulation has always centered on consumer protection, solvency, and fair market conduct. The introduction of artificial intelligence into claims handling does not alter those principles. Instead, it intensifies the focus on them.
Unfair Claims Settlement Practices statutes prohibit misrepresentation of coverage, unreasonable investigation, untimely communications, and arbitrary claim denials. These duties apply regardless of whether a human adjuster or a technological tool is involved in reviewing the file. Regulators view artificial intelligence as a tool deployed within a regulated process, not as a separate decision maker beyond oversight.
The NAIC Big Data and Artificial Intelligence Working Group has reinforced this position through guidance emphasizing transparency, accountability, and fairness. State regulators are drawing from that framework while also applying their own unfair discrimination statutes and consumer protection standards. As a result, insurers are increasingly required to explain how their systems operate, what they do not do, and how they are controlled.
What Regulators Are Asking
Departments of Insurance are requesting detailed descriptions of artificial intelligence systems used in claims. They are asking whether systems influence coverage determinations, whether any aspect of claim handling is automated, how bias is evaluated, and how changes are deployed. They are also seeking clarity on vendor relationships and contractual accountability.
Regulators want to know whether the system is assistive or determinative. They want to understand whether any adverse action can occur without human review. They want evidence of audit logs, change management protocols, and governance oversight. In short, they want assurance that the insurer remains in control.
Assistive Systems and Human Judgment
In many instances, insurers are deploying artificial intelligence as a workflow support tool rather than an autonomous decision engine. A properly designed system operates within guardrails that preserve human accountability.
Claim-related information is aggregated and analyzed to generate structured summaries and workflow insights for claim professionals, helping adjusters review large volumes of information efficiently. The system may highlight indicators that could warrant additional human review, such as potential severity, possible subrogation opportunities, coverage-related inconsistencies within documentation, or missing or incomplete file elements, supporting timely and thorough investigation. It may also suggest general workflow considerations consistent with standard claims handling practices. Its outputs are non-binding and intended solely to assist trained users in prioritizing and reviewing claim files, and they remain subject at all times to user judgment and company policies.
The system does not make coverage determinations, determine liability, authorize, deny, or delay payment of claims, issue claim denials, make fraud determinations, or communicate decisions to insureds. It is not used to deny or delay claims or benefits without human review. All results are fully explainable and auditable through detailed logs and reasoning provided to the user.
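As an illustration only, the assistive boundary described above can be sketched in code: the AI produces a non-binding advisory object with no ability to change claim state, while any claim decision must pass through a separate function that requires an identified human adjuster and a written rationale. All names here (`AdvisoryOutput`, `apply_claim_decision`) are hypothetical and do not describe any particular vendor's system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an assistive AI output carries only non-binding
# summaries and indicators. It exposes no method that alters a claim.
@dataclass(frozen=True)
class AdvisoryOutput:
    claim_id: str
    summary: str
    indicators: tuple[str, ...]  # e.g. "possible subrogation"
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    binding: bool = False        # always False by design

def apply_claim_decision(claim_id, decision, adjuster_id, rationale):
    """Only an identified human adjuster with a written rationale may
    record a claim decision. Advisory output is not accepted here."""
    if not adjuster_id or not rationale:
        raise PermissionError("claim decisions require an identified "
                              "adjuster and a written rationale")
    return {"claim_id": claim_id, "decision": decision,
            "adjuster_id": adjuster_id, "rationale": rationale}

advisory = AdvisoryOutput("CLM-1001", "Summary of file contents.",
                          ("possible subrogation", "missing police report"))
record = apply_claim_decision("CLM-1001", "approve",
                              adjuster_id="adj-42",
                              rationale="Coverage confirmed; damages documented.")
```

The design point is structural: because the advisory object has no pathway to claim state, the "determinative" risk is eliminated by architecture rather than by policy alone.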
This distinction between assistance and decision making is central to regulatory comfort. When systems are limited to summarization, prioritization, and insight generation, and when every material claim determination requires affirmative human action, regulators are more likely to view the technology as consistent with existing legal frameworks.
Human Oversight and Accountability
Regulators expect more than assurances. They expect structural controls.
AI-generated outputs are provided as informational support to licensed claim professionals, and outputs are reviewed before any action is taken. The system does not take automated action on a claim file. Users are trained on appropriate use, and all claim decisions, including coverage evaluation, payment, investigation, and escalation, remain the responsibility of the authorized user in accordance with internal claim handling guidelines. Controls include user access restrictions, audit logging, and internal governance processes that confine the AI system to workflow assistance rather than automated decision making. No adverse action is automated, and human oversight is required at every step.
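One structural control regulators look for is tamper-evident audit logging. A minimal sketch, assuming a hash-chained append-only log (the class and field names are hypothetical): each entry embeds a hash of the previous one, so after-the-fact edits are detectable during an examination.

```python
import hashlib
import json

# Hypothetical sketch: an append-only audit log in which each entry is
# chained to its predecessor by hash, making retroactive alteration of
# any entry detectable by a later verification pass.
class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, user_id, action, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"user_id": user_id, "action": action,
                 "detail": detail, "prev_hash": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True)
                              .encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("adj-42", "viewed_summary", "CLM-1001")
log.record("adj-42", "recorded_decision", "CLM-1001")
```

A log of this kind lets an insurer answer the examiner's core question, who saw what and when, with evidence rather than assertion.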
For Departments of Insurance, this language must be supported by operational reality. Regulators may request documentation of training programs, evidence of access controls, and samples of audit logs. They may ask how the insurer verifies that users are not over-relying on system outputs. They may examine whether internal governance committees review performance metrics and consumer complaints.
The core principle is straightforward. The insurer retains full responsibility for every claim outcome. The presence of a vendor or an algorithm does not dilute that duty.
Vendor Accountability and Insurer Responsibility
One of the most significant areas of regulatory focus concerns vendor oversight. Departments of Insurance increasingly recognize that many artificial intelligence tools are developed and maintained by third-party technology companies. Yet regulators consistently maintain that responsibility rests with the licensed insurer.
This creates a dual expectation. Vendors must design systems that enable compliance, and insurers must exercise oversight over those vendors. Contracts should address data use, confidentiality, audit rights, and performance standards. Insurers should understand the functional boundaries of the system and should not rely on marketing representations in place of documented controls.
Regulators may ask whether the insurer has conducted due diligence on the vendor, whether bias testing has been performed, and whether the insurer can obtain documentation of model changes. If the insurer cannot answer those questions, it signals governance gaps that can trigger deeper inquiry.
Change Management and System Evolution
Artificial intelligence systems evolve. Models are updated, prompts are refined, and logic is adjusted. Regulators want to understand how those changes are controlled.
Changes to the AI system are deployed through a controlled change management process consistent with SOC 2 Type II compliance controls. Updates are developed and tested in development and staging environments, undergo peer review, and are subject to approval prior to production release. Significant changes, including model, prompt, or logic updates, are documented, version controlled, and monitored after deployment to ensure continued performance and appropriate use.
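The controlled promotion path described above can be illustrated with a short sketch. This is a simplified model under assumed conventions, not any particular change-management system: a change record moves development to staging to production in order only, and a production release additionally requires peer review and a named approver.

```python
# Hypothetical sketch of a change-management gate: a model, prompt, or
# logic update may only advance one stage at a time, and production
# release requires peer review plus an explicit named approver.
ALLOWED_PROMOTIONS = {"development": "staging", "staging": "production"}

def promote(change, target):
    current = change["stage"]
    if ALLOWED_PROMOTIONS.get(current) != target:
        raise ValueError(f"cannot promote from {current} to {target}")
    if target == "production" and not (change["peer_reviewed"]
                                       and change["approved_by"]):
        raise ValueError("production release requires peer review "
                         "and a named approver")
    change["stage"] = target
    change["history"].append(target)  # version-controlled trail
    return change

change = {"id": "CHG-2024-017", "kind": "prompt_update",
          "version": "1.4.0", "stage": "development",
          "peer_reviewed": True, "approved_by": "compliance.lead",
          "history": ["development"]}
promote(change, "staging")
promote(change, "production")
```

The retained `history` list is the artifact an examiner would sample: it shows that no change skipped a stage and that every production release had an accountable approver.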
For Departments of Insurance, this description provides a starting point. Regulators may request change logs, evidence of testing, and documentation of monitoring protocols. They may ask whether prior versions can be restored if an issue arises. They may inquire whether material changes are reviewed by compliance or legal teams.
The expectation is not that systems remain static. Rather, it is that evolution occurs within a disciplined framework that preserves consumer protections.
Bias, Fairness, and Unfair Discrimination
Beyond operational controls, regulators are increasingly attentive to the risk of unfair discrimination. State statutes prohibit practices that result in disparate treatment or impact without actuarial justification. Although claims handling differs from underwriting, the risk of inconsistent outcomes remains a concern.
Artificial intelligence systems trained on historical data may inadvertently reflect past inconsistencies. Regulators are therefore interested in whether insurers monitor outcomes for patterns that could signal bias. This does not require perfection. It requires vigilance.
Documentation of fairness reviews, complaint analysis, and governance oversight demonstrates that the insurer is not passively relying on technology but actively supervising it.
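To make the monitoring idea concrete, here is a deliberately simple sketch that compares denial rates across segments and flags any segment whose rate exceeds the overall rate by more than a review threshold. The function name, segments, and figures are invented for illustration; a real fairness review would apply proper statistical testing and actuarial analysis rather than a raw rate comparison.

```python
# Hypothetical sketch: flag segments whose claim-denial rate exceeds
# the overall denial rate by more than `threshold`, as a trigger for
# human fairness review (not as a conclusion of bias).
def disparity_flags(outcomes, threshold=0.10):
    """outcomes maps segment -> (denied_count, total_count)."""
    total_denied = sum(denied for denied, _ in outcomes.values())
    total = sum(count for _, count in outcomes.values())
    overall_rate = total_denied / total
    return sorted(segment
                  for segment, (denied, count) in outcomes.items()
                  if denied / count - overall_rate > threshold)

flags = disparity_flags({"region_a": (30, 200),    # 15% denial rate
                         "region_b": (100, 300),   # ~33% denial rate
                         "region_c": (20, 200)})   # 10% denial rate
```

A flag here is the start of vigilance, not a finding: it routes the pattern to governance and compliance teams for investigation, which is exactly the active supervision regulators want documented.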
Bad Faith Exposure and Litigation Risk
Improper use of artificial intelligence can create exposure not only to regulatory action but also to bad faith litigation. If a claimant alleges that a denial was generated or influenced by an opaque system without reasonable investigation, the insurer may face allegations of arbitrary conduct.
Courts have long held that insurers must conduct reasonable investigations and must base decisions on facts and policy language. If artificial intelligence tools are used in a manner that shortcuts investigation or creates the appearance of automated denial, plaintiffs will seize upon that narrative.
Clear documentation of human review, reasoned decision making, and independent evaluation remains the strongest defense. Artificial intelligence should enhance documentation, not replace judgment.
Preparing for a Department of Insurance Inquiry
Insurers should not wait for a questionnaire to arrive before assessing readiness. A proactive review should address the scope of artificial intelligence use, the existence of written policies, the clarity of vendor contracts, and the sufficiency of governance oversight.
Leadership should be prepared to articulate what the system does, what it does not do, how it is monitored, and who is accountable. Claims personnel should understand their role in maintaining independent judgment. Compliance teams should have access to documentation demonstrating auditability and change management.
Artificial intelligence vendors serving carriers, independent adjusters, and TPAs should design systems with regulatory review in mind from the outset. Explainability, logging, and role-based controls should not be optional features. They are foundational elements of a system operating within a regulated industry.
Responsible Adoption and Regulatory Readiness
Artificial intelligence offers significant opportunities to improve efficiency, consistency, and documentation in claims operations. Properly deployed, it can help adjusters manage volume, identify gaps, and maintain thorough investigative records. Improperly deployed, it can expose insurers to regulatory scrutiny and litigation risk.
State Departments of Insurance are not signaling hostility toward innovation. They are signaling that innovation must operate within established legal boundaries. Transparency, accountability, and human oversight are not aspirational concepts. They are regulatory expectations.
For carriers, independent adjusting firms, TPAs, and technology vendors, the path forward is disciplined integration. Systems must be assistive rather than determinative. Governance must be active rather than symbolic. Documentation must be comprehensive rather than reactive.
In an environment of heightened scrutiny, those who treat artificial intelligence as an extension of regulated claims handling, rather than a replacement for it, will be best positioned to demonstrate compliance. Responsible adoption is not simply a technical objective. It is a legal and ethical imperative that will define the next chapter of claims operations.