Artificial Intelligence and Health Insurance Claim Denials

Jason Velligan
Associate Editor
Loyola University Chicago School of Law, JD 2024

Artificial intelligence headlines are grabbing attention across industries. Artificial intelligence helps doctors diagnose and treat patients and helps pharmaceutical manufacturers develop new medications. At the same time, politicians, subject matter experts, and numerous publications have voiced concerns over its use. In late 2023, class actions were filed against health insurers UnitedHealthcare and Humana, accusing them of using an artificial intelligence tool to inappropriately deny a high volume of claims.

Artificial Intelligence

Artificial intelligence (AI) is a system that uses “machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” Humana and UnitedHealthcare are accused of using naviHealth’s nH Predict AI to review and deny claims that the class plaintiffs argue would otherwise have been approved if examined by a qualified human being. NaviHealth describes nH Predict as a care-support tool that takes into account the patient’s own cognition, mobility, and ability to perform daily activities, and is used to generate an outcome report that is shared with providers and caregivers to help guide the individual’s path to recovery.
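To make the description above concrete, here is a purely hypothetical sketch of a care-planning predictor scored on the kinds of inputs the article names (cognition, mobility, daily activities). This is not naviHealth’s actual model; the weights, scale, and function names are invented for illustration.

```python
# Purely hypothetical sketch of a care-support predictor. The inputs
# (cognition, mobility, daily activities) mirror those described in the
# article; the scoring scale and weights are invented for illustration
# and do NOT reflect naviHealth's nH Predict.

from dataclasses import dataclass


@dataclass
class PatientAssessment:
    cognition: int         # 0 (severe impairment) to 10 (fully intact)
    mobility: int          # 0 (bedbound) to 10 (fully mobile)
    daily_activities: int  # 0 (fully dependent) to 10 (independent)


def predicted_rehab_days(p: PatientAssessment) -> int:
    """Toy estimate: healthier scores predict a shorter post-acute stay."""
    baseline = 40  # hypothetical maximum stay, in days
    recovery_score = p.cognition + p.mobility + p.daily_activities  # 0-30
    return max(5, baseline - recovery_score)  # floor at a 5-day stay


print(predicted_rehab_days(PatientAssessment(8, 6, 7)))  # 40 - 21 = 19
```

The output of a tool like this is a care-planning estimate, not a coverage determination — which is precisely the distinction at issue in the lawsuits discussed below.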

In 2020, OptumHealth, a subsidiary of UnitedHealth Group (UnitedHealth), purchased naviHealth. OptumHealth is a health and wellness business focused on people’s physical, emotional, and health-related financial needs. A tool like nH Predict was undoubtedly attractive for OptumHealth’s business: it could help guide decision-making and improve patient outcomes. NaviHealth maintains that nH Predict is not used to deny care or make coverage determinations. Yet two insurers are accused of doing just that.

UnitedHealth and Humana Class Actions

In late 2023, class actions were filed against two Medicare Advantage (MA) insurers, UnitedHealth and Humana. It is worth noting that the same attorney represents the plaintiffs in both cases. UnitedHealth and Humana allegedly used the nH Predict AI to review and deny claims with little or no human input. One allegation is that employees of one of the insurers faced retaliation for overriding the AI’s decisions. It is further alleged that UnitedHealth and Humana denied claims because they knew that only a small percentage of people would appeal the denials. Both complaints allege that the insurers knew that only 0.2% of policyholders would appeal, and that 90% of those appeals would succeed. This raises the question: is appeal probability fed into these algorithms and factored into the decision to deny certain services? Insurers are in the business of making money, and if the reward is greater than the risk, why not deny claims with a low probability of appeal? If that is what happened, state and federal statutes need to be enacted to curtail this behavior.
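The economics alleged in the complaints can be illustrated with a back-of-the-envelope calculation. The appeal figures (0.2% appeal rate, 90% appeal success rate) come from the complaints; the expected-value model and the dollar amount are hypothetical, for illustration only.

```python
# Back-of-the-envelope sketch of the incentive alleged in the complaints.
# Appeal figures (0.2% appeal rate, 90% success rate) are as alleged;
# the claim value and the model itself are hypothetical.

def expected_payout_after_denial(claim_value: float,
                                 appeal_rate: float = 0.002,
                                 appeal_success_rate: float = 0.90) -> float:
    """Expected amount an insurer pays on a denied claim, assuming it
    pays only when the policyholder appeals and the appeal succeeds."""
    return claim_value * appeal_rate * appeal_success_rate


claim = 10_000.0  # hypothetical claim value in dollars
print(expected_payout_after_denial(claim))  # 0.2% x 90% of $10,000 = $18
```

On these alleged figures, a denied $10,000 claim costs the insurer an expected $18 — which is why the complaints frame low appeal rates as a financial incentive to deny.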

Regulations Affecting AI

MA claims are now subject to a June 2023 rule, on which the Centers for Medicare & Medicaid Services (CMS) has issued guidance. The rule states that MA organizations must make medical necessity determinations based on the specific individual’s circumstances rather than relying on an algorithm or software that does not account for them. Humana admitted it used nH Predict but maintains that humans made the determinations on claims.

NaviHealth’s nH Predict does not appear to have been designed to review insurance claims, yet UnitedHealth and Humana are accused of using it for exactly that purpose. CMS rules do not prohibit the use of AI in reviewing claims, but humans must review the claims and make the final determinations. The problem arises when insurers take that agency away from humans and rely solely on AI.

AI use is expected to grow rapidly for the foreseeable future. Laws and regulations affecting its use and implementation will have to keep pace. If they do not, then AI misuse will detrimentally affect people on a large scale.