AI Claim Denial Algorithms by Health Insurers

Origin: 2020 · United States · Updated Mar 6, 2026

Overview

The use of artificial intelligence algorithms by major health insurers to automatically deny medical claims represents one of the most consequential confirmed conspiracy theories in contemporary American life. What was long dismissed as paranoid speculation by frustrated patients — the belief that insurance companies were systematically denying valid claims using automated systems designed to minimize payouts rather than evaluate medical necessity — has been confirmed through investigative journalism, whistleblower testimony, internal documents revealed through litigation, and congressional investigations.

The two most prominent cases involve Cigna’s PXDX system, which enabled medical directors to deny claims at rates of thousands per day without reviewing individual patient records, and UnitedHealthcare’s nH Predict algorithm, which used predictive modeling to cut off post-acute care coverage for elderly patients. In both cases, the systems prioritized cost reduction over medical accuracy, with denial overturn rates on appeal reaching 90% in some documented instances — indicating that, at least among denials that patients appealed, the algorithms were wrong the vast majority of the time.

The revelation of these systems transformed the public discourse about health insurance in the United States, contributing to a wave of legislation targeting AI in insurance decisions and fueling a broader reckoning with the health insurance industry that culminated in the December 2024 assassination of UnitedHealthcare CEO Brian Thompson. While the murder was widely condemned, the extraordinary public sympathy for the shooter’s apparent motivations — with polls showing that a significant minority of Americans expressed understanding of the motivations behind it — revealed the depth of public anger toward an industry increasingly seen as systematically sacrificing patient welfare for corporate profit through technological means.

Origins & History

Concerns about automated claim denial in health insurance long predate the current AI controversy. Since the 1990s, patients, physicians, and advocacy groups have documented patterns suggesting that insurers were denying claims as a default strategy, knowing that many patients would not appeal. The emergence of “utilization management” as an industry practice — in which insurers review and approve medical treatments before they are delivered — created an infrastructure that could be used to restrict access to care under the guise of ensuring appropriate treatment.

The specific AI-driven denial tools emerged from the broader integration of machine learning and predictive analytics into the health insurance industry during the 2010s. NaviHealth, founded in 2012, developed the nH Predict algorithm as a tool for predicting patient recovery trajectories after acute hospitalization. The company was acquired by UnitedHealth Group’s Optum division in 2020, and its predictive tools were integrated into UnitedHealthcare’s coverage decision process for Medicare Advantage plans; the Medicare Advantage program as a whole covers roughly 30 million Americans.

Cigna’s PXDX system followed a different path. Rather than using predictive modeling, PXDX operated as a rule-based system that flagged claims for denial based on predetermined procedure-to-diagnosis combinations. The system was designed to identify claims where the procedure billed did not, in Cigna’s assessment, match the diagnosis listed — for example, flagging a specific blood test as unnecessary for a particular diagnosis. While such matching systems have legitimate quality control applications, the scale and speed at which Cigna’s medical directors processed PXDX flags raised immediate questions about whether genuine clinical review was occurring.
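
The flagging logic can be pictured as a lookup against a table of approved procedure-to-diagnosis pairings. The sketch below is illustrative only: the codes, rule table, and field names are hypothetical and are not drawn from Cigna's system. It shows the general shape of such a screen, in which nothing about the individual patient is examined beyond two billing codes.

    # Illustrative sketch only, not Cigna's actual PXDX code. The rule table,
    # procedure and diagnosis codes, and field names below are hypothetical.

    # Pairs of (procedure code, diagnosis code) the insurer treats as payable.
    # Any claim whose pair is not in this table gets flagged for denial.
    APPROVED_PAIRS = {
        ("82306", "E55.9"),   # hypothetical: vitamin D test with vitamin D deficiency
        ("93000", "I48.91"),  # hypothetical: ECG with atrial fibrillation
    }

    def flag_claims(claims):
        """Flag claims whose procedure/diagnosis pair is not pre-approved.

        Note what is absent: no chart, no clinical notes, no patient history.
        The screen sees only two codes, which is what makes bulk sign-off on
        thousands of flagged claims per day possible.
        """
        return [
            c["claim_id"]
            for c in claims
            if (c["procedure_code"], c["diagnosis_code"]) not in APPROVED_PAIRS
        ]

    claims = [
        {"claim_id": "C-001", "procedure_code": "82306", "diagnosis_code": "E55.9"},
        {"claim_id": "C-002", "procedure_code": "82306", "diagnosis_code": "Z00.00"},
    ]
    print(flag_claims(claims))  # ['C-002'] is queued for one-click bulk denial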

The first major public exposure came through a ProPublica investigation published in March 2023, which revealed the internal workings of Cigna’s PXDX system. The investigation documented that Cigna medical directors were denying claims at a pace that made individual review physically impossible: one medical director denied over 60,000 claims in a single month, and company records cited by ProPublica showed Cigna doctors spending an average of roughly 1.2 seconds per case. Internal documents showed that the denial rate for PXDX-flagged claims was over 90%, regardless of individual circumstances.
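
Simple arithmetic shows why that pace rules out genuine clinical review. The assumptions below (a 40-hour week, roughly 21 working days in the month) are illustrative only:

    # Rough ceiling on review time per claim, assuming (hypothetically) that the
    # medical director spent every working second of the month on these denials.
    claims_denied = 60_000            # claims denied by one medical director in a month
    working_seconds = 21 * 8 * 3600   # ~21 working days of 8 hours each

    seconds_per_claim = working_seconds / claims_denied
    print(f"{seconds_per_claim:.1f} seconds per claim at most")   # about 10.1 seconds

Even that generous ceiling of roughly ten seconds per claim leaves no time to open a patient's chart, and the 1.2-second company-wide average reported by ProPublica sits well below it.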

STAT News published its own investigation of UnitedHealthcare’s nH Predict system in November 2023, documenting cases of elderly patients being denied continued nursing home or rehabilitation coverage based on the algorithm’s predictions, even when their treating physicians stated they were not ready for discharge. The investigation revealed that the algorithm predicted recovery timelines that were often dramatically shorter than actual clinical outcomes, and that UnitedHealthcare was using these predictions as justification for coverage denials despite knowing the algorithm’s predictions were frequently wrong.
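
The mechanism STAT described can be illustrated with a minimal sketch: a predicted length of stay is converted into a coverage end date, regardless of how the patient is actually progressing. This is not naviHealth's code, and all names and numbers are hypothetical.

    # Illustrative sketch, not naviHealth/UnitedHealthcare code. All numbers are
    # hypothetical; the point is that a model's predicted length of stay becomes
    # a coverage cutoff even when the clinical course runs longer.
    from datetime import date, timedelta

    def coverage_end(admit_date: date, predicted_days: int) -> date:
        """Turn a predicted length of stay into a coverage termination date."""
        return admit_date + timedelta(days=predicted_days)

    admit = date(2023, 11, 1)
    model_predicted_days = 17        # hypothetical algorithmic prediction
    physician_estimate_days = 40     # hypothetical treating-physician estimate

    cutoff = coverage_end(admit, model_predicted_days)
    gap = physician_estimate_days - model_predicted_days
    print(f"Coverage ends {cutoff}, {gap} days before the treating physician "
          "expects the patient to be ready for discharge.")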

Key Claims

  • Automated mass denial: Major health insurers use AI algorithms to deny medical claims at scale, with individual claims receiving no meaningful clinical review before denial
  • Profit over patients: The algorithms are designed and deployed specifically to reduce claim payouts and increase corporate profits, not to ensure appropriate medical care
  • Knowledge of inaccuracy: Insurers are aware that their AI systems produce inaccurate results — with denial overturn rates on appeal reaching 90% — but continue using them because most patients do not appeal
  • Deliberate appeal friction: The systems are designed to exploit the fact that appealing a denial requires significant time, effort, and expertise that most patients cannot muster, particularly when they are dealing with serious illness (a worked example of this dynamic follows the list)
  • Medicare Advantage exploitation: AI denial systems are disproportionately used in Medicare Advantage plans, where the patients — predominantly elderly and often cognitively impaired — are least able to navigate complex appeals processes
  • Regulatory evasion: The systems are designed to technically comply with regulatory requirements for coverage decisions while effectively circumventing the spirit of those requirements
  • Industry-wide practice: The use of automated denial tools is not limited to Cigna and UnitedHealthcare but is an industry-wide practice adopted by most major health insurers
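
The appeal-friction claim is easiest to see with a worked example. The rates below are entirely hypothetical and are not drawn from any insurer's data; they show only how a very high overturn rate among appealed denials can coexist with a denial strategy that remains profitable when few patients appeal.

    # Hypothetical worked example of the appeal-friction dynamic; none of these
    # rates are taken from insurer records.
    denials = 100_000        # claims denied by an automated screen
    appeal_rate = 0.005      # share of denied patients who actually appeal
    overturn_rate = 0.90     # share of appealed denials that are reversed

    overturned = denials * appeal_rate * overturn_rate
    upheld_or_unappealed = denials - overturned

    print(f"{overturned:.0f} denials reversed on appeal")                       # 450
    print(f"{upheld_or_unappealed / denials:.1%} of denied claims stay denied")  # about 99.5%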

Evidence

This theory is classified as confirmed based on extensive documentary evidence from multiple independent sources.

ProPublica investigation (March 2023): Internal Cigna documents obtained by ProPublica revealed that the company’s PXDX system allowed medical directors to issue bulk claim denials using a point-and-click interface. The investigation documented that one medical director denied 60,000 claims in a single month. Cigna medical directors were required to personally review each claim under federal and state regulations, but the pace of denials made individual review physically impossible. Former Cigna employees confirmed to ProPublica that the system was designed to deny claims quickly and at scale.

STAT News investigation (November 2023): STAT’s reporting on UnitedHealthcare’s nH Predict system documented specific cases of elderly patients who were denied continued post-acute care coverage based on algorithmic predictions that proved wrong. The investigation revealed that UnitedHealthcare employees were trained to rely on the algorithm’s predictions even when patients’ treating physicians disagreed, and that the company continued using the system despite internal data showing a 90% overturn rate on appeal.

Congressional investigations: Multiple congressional hearings in 2023 and 2024 examined the use of AI in health insurance claim decisions. Testimony from former insurance industry employees, patient advocates, and medical professionals confirmed the systemic use of automated tools to deny claims. The Senate Committee on Finance released a report documenting how Medicare Advantage insurers, including UnitedHealthcare, used prior authorization and AI-driven denial systems to restrict access to medically necessary care.

Litigation: Multiple class-action lawsuits filed against UnitedHealthcare and Cigna have produced additional evidence through discovery. In Estate of Gene B. Lokken v. UnitedHealth Group and related cases, plaintiffs’ attorneys obtained internal documents showing that UnitedHealthcare knew its nH Predict algorithm produced inaccurate predictions but continued using it to deny coverage. Cigna faced similar lawsuits alleging that its PXDX system violated patients’ rights to individualized claim review.

State regulatory actions: Insurance regulators in several states investigated and confirmed the use of automated denial systems. California’s Department of Insurance found evidence that insurers were using AI tools to deny claims without the required clinical review, leading to the passage of state legislation banning the practice.

Debunking / Verification

This theory is confirmed. The use of AI algorithms to deny health insurance claims at scale, without meaningful individual clinical review, has been documented through investigative journalism, congressional investigations, litigation discovery, and regulatory findings.

Key confirmed elements include:

The existence and operation of specific AI denial systems — Cigna’s PXDX and UnitedHealthcare’s nH Predict — have been confirmed through internal documents, employee testimony, and the companies’ own admissions. Neither company has denied the existence of these systems, though both have disputed characterizations of how they are used.

The speed at which claims were reviewed and denied — making individual clinical review physically impossible — has been documented through internal records showing the number of claims processed per medical director per time period.

The high denial overturn rate on appeal — approximately 90% in documented cases — has been confirmed through internal data and litigation records, demonstrating that the algorithms frequently produced incorrect results.

The deliberate design of these systems to reduce payouts has been documented through internal communications and employee testimony revealing that the systems were evaluated based on their impact on claim costs rather than on clinical accuracy.

Both companies have defended their practices by arguing that their AI tools assist rather than replace clinical judgment, and that the tools are used to identify claims that warrant closer review rather than to issue automatic denials. However, the documented speed of claim processing and the pattern of bulk denials undermine these defenses.

Cultural Impact

The confirmation of AI-driven claim denial has had profound effects on American political discourse, healthcare policy, and public attitudes toward the health insurance industry.

The most dramatic cultural impact came with the December 4, 2024, assassination of UnitedHealthcare CEO Brian Thompson outside the New York Hilton Midtown. The shooting, for which 26-year-old Luigi Mangione was arrested and charged, was linked by investigators to a written manifesto denouncing the health insurance industry’s systematic denial of care for profit. While the murder was widely condemned, the public response was remarkable: viral social media posts expressed sympathy for Mangione’s grievances if not his actions, and fundraising campaigns for his legal defense raised substantial sums. Polls showed that while most Americans opposed the violence, a significant minority expressed understanding of the motivations behind it. The incident crystallized years of accumulated public anger toward health insurers and made AI-driven claim denial a centerpiece of national political debate.

The revelations have driven significant legislative action. California’s 2024 law prohibiting AI-only claim denials served as a template for similar legislation in other states. At the federal level, proposed legislation would require insurers to provide meaningful clinical review for all claim denials and would ban the use of AI as the sole basis for coverage decisions. The Centers for Medicare and Medicaid Services (CMS) issued new guidance restricting the use of predictive algorithms in Medicare Advantage coverage decisions.

The controversy has also influenced the broader debate about AI in decision-making. The health insurance AI denial scandal is frequently cited in discussions about algorithmic bias, automated decision-making in government services, and the need for AI transparency and accountability regulations. It has become a case study in how AI can be deployed to automate harmful practices at a scale that would be impossible through human decision-making alone.

Within the health insurance industry, the revelations have prompted some companies to publicly modify their AI practices, though critics argue that many changes are cosmetic. The industry trade group America’s Health Insurance Plans (AHIP) has published guidelines for responsible AI use, while simultaneously lobbying against legislation that would impose binding restrictions.

Timeline

  • 2012 — NaviHealth founded, begins developing predictive analytics for post-acute care
  • 2010s — Major health insurers increasingly adopt machine learning and predictive tools for utilization management
  • 2020 — UnitedHealth Group acquires NaviHealth and integrates nH Predict into Medicare Advantage coverage decisions
  • March 2023 — ProPublica publishes investigation revealing Cigna’s PXDX bulk denial system
  • 2023 — Multiple class-action lawsuits filed against UnitedHealthcare and Cigna over AI-driven claim denials
  • November 2023 — STAT News publishes investigation of UnitedHealthcare’s nH Predict algorithm
  • 2023-2024 — Congressional hearings examine AI in health insurance claim decisions
  • 2024 — California enacts legislation prohibiting AI-only claim denials
  • 2024 — CMS issues guidance restricting AI use in Medicare Advantage coverage decisions
  • December 4, 2024 — UnitedHealthcare CEO Brian Thompson is assassinated; public response highlights deep anger over claim denial practices
  • 2025 — Federal legislation addressing AI in insurance decisions introduced in Congress
  • 2025-2026 — Multiple states pass laws restricting or banning AI-driven claim denial without physician review

Sources & Further Reading

  • Rucker, Patrick, Maya Miller, and David Armstrong. “How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them.” ProPublica, March 25, 2023.
  • Ross, Casey, and Bob Herman. “UnitedHealth Pushed Employees to Follow an Algorithm to Cut Off Medicare Patients’ Rehab Care.” STAT News, November 14, 2023.
  • U.S. Senate Committee on Finance. “Denied: How Medicare Advantage Plans Use Prior Authorization to Deny Access to Care.” Report, 2024.
  • Fein, Jay. “Denial Management: The Dark Side of AI in Healthcare.” Health Affairs Blog, 2024.
  • Boodman, Eric. “Inside the Algorithm That’s Cutting Off Elderly Patients’ Care.” STAT News, December 2023.
  • Government Accountability Office. “Medicare Advantage: CMS Should Take Additional Steps to Improve Prior Authorization.” GAO Report, 2024.
  • Fang, Lee. “Health Insurers Are Vacuuming Up Your Data.” The Intercept, February 2024.

Frequently Asked Questions

How does Cigna's PXDX system work to deny claims?
Cigna's PXDX system (short for procedure-to-diagnosis) is an automated algorithm that flags claims for denial based on combinations of medical procedures and diagnoses that the system has determined should not typically go together. According to a ProPublica investigation, Cigna medical directors used the system to deny claims in bulk: one medical director reportedly denied over 60,000 claims in a single month, and Cigna doctors spent an average of roughly 1.2 seconds per case. The system bypassed individual medical review by applying blanket rules to patient-diagnosis-procedure combinations, meaning claims were denied without a physician reviewing the patient's actual medical records or circumstances.
What is UnitedHealthcare's nH Predict algorithm and what happened to it?
The nH Predict algorithm was developed by NaviHealth, a subsidiary of UnitedHealth Group, to predict how much post-acute care (such as nursing home stays or rehabilitation) a patient would need after hospitalization. Investigations revealed that UnitedHealthcare used the algorithm's predictions to cut off coverage for elderly patients, even when the patients' own doctors said they needed continued care. Reporting and litigation in 2023 indicated that roughly 90% of the algorithm-driven denials that patients appealed were overturned, meaning the algorithm's coverage cutoffs were wrong in the vast majority of challenged cases. Following public outcry and multiple lawsuits, UnitedHealthcare faced congressional scrutiny, and the use of AI in coverage decisions became a major legislative issue.
Is it legal for health insurers to use AI to deny claims?
The legality of AI-driven claim denials is actively contested. Federal law (ERISA and the Affordable Care Act) requires that coverage decisions be made based on individual medical necessity, which arguably requires human clinical judgment. Several states have passed or proposed laws specifically banning the use of AI to deny health insurance claims without physician review. In 2024, California enacted legislation prohibiting health insurers from using AI algorithms as the sole basis for coverage denials. Multiple class-action lawsuits against UnitedHealthcare and Cigna allege that automated denial systems violate existing insurance regulations and patients' rights to individualized review. As of 2026, federal legislation addressing AI in insurance decisions remains under consideration.