HHS, nondiscrimination, and artificial intelligence

May 02, 2024

On April 26, 2024, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) issued a final rule addressing discrimination in health care.  The 558-page final rule is scheduled to be published on May 6, 2024, but HHS made an unpublished copy available in advance. The final rule imposes requirements on covered entities that use artificial intelligence (AI) in health care, which we summarize below.

The new AI requirements govern the use of AI in “patient care decision support tools” for clinical care.  HHS defines the term to mean “any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities.”  (§ 92.4)

The new AI rule, § 92.210, contains only three subsections, which can be summarized as:

(a) Covered entities cannot discriminate on the basis of race, color, national origin, sex, age, or disability through the use of the patient care decision support tools.

(b) The covered entity has “an ongoing duty to make reasonable efforts to identify uses of patient care decision support tools in its health programs or activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability.”  (A hypothetical screening sketch follows below.)

(c) For each patient care decision support tool, the covered entity must “make reasonable efforts to mitigate the risk of discrimination resulting from the tool’s use in its health programs or activities.”
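To make the duty in (b) concrete, here is a minimal, hypothetical sketch of one way a covered entity might begin screening a tool's documented input variables for explicit measures of the protected bases.  Nothing in the rule prescribes this approach; the variable names, keyword list, and function are assumptions invented for illustration.  Note that simple keyword matching would only catch explicit measures, which is consistent with HHS's statement (discussed below) that covered entities need not identify every indirect measure.

```python
# Hypothetical illustration only: screen a tool's documented input
# variable names for explicit measures of the protected bases named
# in Sec. 92.210(b). The keyword list and names are assumptions for
# this example, not anything prescribed by the rule.
PROTECTED_BASE_KEYWORDS = {
    "race", "color", "national_origin", "ethnicity",
    "sex", "gender", "age", "disability",
}

def flag_protected_inputs(input_variables: list[str]) -> list[str]:
    """Return documented input variables that appear to measure a protected basis."""
    flagged = []
    for name in input_variables:
        normalized = name.lower()
        # Substring matching is crude (e.g., "age" also matches "coverage");
        # a real review would involve human judgment, not just string checks.
        if any(keyword in normalized for keyword in PROTECTED_BASE_KEYWORDS):
            flagged.append(name)
    return flagged

# Example: a hypothetical risk-scoring tool's documented inputs.
tool_inputs = ["patient_age", "serum_creatinine", "race_adjustment_factor", "bmi"]
print(flag_protected_inputs(tool_inputs))  # ['patient_age', 'race_adjustment_factor']
```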

HHS listed a current use case for the tools: “tools used for prior authorization and medical necessity analysis, which directly impacts clinical decision-making and affects the care received by patients as directed by their providers.”

Additionally, HHS identified use cases where the new rule (§ 92.210) would NOT apply:

  • Tools used for administrative and billing-related activities
  • Automated medical coding
  • Fraud, waste, and abuse detection
  • Patient scheduling
  • Facilities management
  • Inventory and materials management
  • Supply chain management
  • Financial market investment management
  • Employment and staffing-related activities

The new rule “does not prohibit covered entities from using patient care decision support tools that identify, evaluate, and address health disparities so long as their use does not constitute prohibited discrimination on the basis of race, color, national origin, sex, age, or disability.”  HHS recognized that there were many indirect measures of those protected bases, but “covered entities are not required to identify all patient care decision support tools with input variables or factors that indirectly measure these protected bases.”

The regulation does not require covered entities to obtain datasets or other attribute information from developers when purchasing or using patient care decision support tools.  However, if the covered entity knows or should know that the tool could result in discrimination, it should consult publicly available sources, such as peer-reviewed medical journals or professional associations, or request this information from the developer.

HHS declined to require transparency in the use of patient care decision support tools, but did state that “it would be a best practice for covered entities to disclose information to patients about the patient care decision support tools used in their health programs and activities.”

HHS indicated that it would evaluate a covered entity’s “reasonable efforts” to identify uses of the AI tools that employ factors that measure those protected bases on a case-by-case basis, using four factors:

  1. The covered entity’s size and resources;
  2. Whether the covered entity has altered the tool or is using it as the developer intended and as approved by regulators;
  3. Whether the covered entity received information from the developer that the AI tool had a potential for discrimination, or itself identified that the AI tool used race, color, or another protected basis;
  4. Whether the covered entity has methodologies or processes to evaluate AI tools, including seeking information from the developer, reviewing medical journals, etc.

HHS stated that the covered entity’s mitigation efforts described in (c) above may vary based on the input variable or factor at issue, as well as the purpose of the tool in question.  That said, a covered entity's liability under this rule is not contingent on or related to a developer's potential liability.  HHS declined to impose strict liability on covered entities.

To give covered entities time to comply, HHS is delaying the effective date of § 92.210 until 300 days after the publication date, or approximately March 2, 2025.
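For readers who want to verify that date arithmetic, a quick sketch (assuming the scheduled May 6, 2024 publication date noted above) confirms the effective date:

```python
from datetime import date, timedelta

# Scheduled publication date of the final rule (per HHS).
publication = date(2024, 5, 6)

# Section 92.210 takes effect 300 days after publication.
effective = publication + timedelta(days=300)

print(effective)  # 2025-03-02
```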