OMB guidance may help companies determine AI safety and rights risks

April 11, 2024

On March 28, 2024, the White House Office of Management and Budget (OMB) issued guidance to federal agencies on their use of artificial intelligence (AI). The guidance recommends that agencies take a risk-based approach to AI.

In particular, the OMB memorandum “establishes new agency requirements and guidance for AI governance, innovation, and risk management, including through specific minimum risk management practices for uses of AI that impact the rights and safety of the public.” Although companies are not federal agencies, many are also wrestling with a risk-based approach to AI and with developing risk management practices to govern their own use of AI. The OMB guidance may offer some insight into what OMB considers “high-risk” uses of AI with respect to safety and rights. While federal agencies often have different concerns than companies, companies are increasingly concerned about safety, discrimination, bias, and other risks, including any “decision that produces a legal or similarly significant effect concerning a consumer” under state comprehensive privacy laws, such as the Texas privacy law that takes effect on July 1, 2024, and is described here.

The OMB guidance lists (on pages 31-33) several real-world AI uses that OMB presumes to be safety-impacting or rights-impacting, and those examples span several industries. As companies implement AI governance and risk management frameworks, they may wish to treat the listed items as “high risk” in their own AI analyses, or be prepared to document why those items are not “high risk” for the company.

Medicine and Healthcare:

Presumed safety-impacting:

  • “Carrying out the medically relevant functions of medical devices; providing medical diagnoses, determining medical treatments; providing medical or insurance health-risk assessments; providing drug-addiction risk assessments or determining access to medication; conducting risk assessments for suicide or other violence; detecting or preventing mental-health issues; flagging patients for interventions; allocating care in the context of public insurance; or controlling health-insurance costs and underwriting”
  • “Controlling the safety-critical functions within . . . emergency services”
  • “Controlling the physical movements of robots or robotic appendages within a . . . medical or law enforcement setting”

Presumed rights-impacting:

  • “Carrying out the medically relevant functions of medical devices; providing medical diagnoses; determining medical treatments; providing medical or insurance health-risk assessments; providing drug-addiction risk assessments or determining access to medication; conducting risk assessments for suicide or other violence; detecting or preventing mental-health issues; flagging patients for interventions; allocating care in the context of public insurance; or controlling health-insurance costs and underwriting”

Financial Services (including Insurance):

Presumed safety-impacting:

  • “Allocating care in the context of public insurance; or controlling health-insurance costs and underwriting”

Presumed rights-impacting:

  • “Allocating loans; determining financial-system access; credit scoring; determining who is subject to a financial audit; making insurance determinations and risk assessments; determining interest rates; or determining financial penalties (e.g., garnishing wages or withholding tax refunds)”
  • “Allocating care in the context of public insurance; or controlling health-insurance costs and underwriting”

Energy:

Presumed safety-impacting:

  • “Controlling the safety-critical functions within dams, emergency services, electrical grids, the generation or movement of energy, . . . water and wastewater systems, or nuclear reactors, materials and waste”

Education:

Presumed rights-impacting:

  • “In education contexts, detecting student cheating or plagiarism; influencing admissions processes; monitoring students online or in virtual-reality; projecting student progress or outcomes; recommending disciplinary interventions; determining access to educational resources or programs; determining eligibility for student aid or Federal education; or facilitating surveillance (whether online or in-person)”

Labor and Employment:

Presumed rights-impacting:

  • “Determining the terms or conditions of employment, including pre-employment screening, reasonable accommodation, pay or promotion, performance management, hiring or termination, or recommending disciplinary action; performing time-on-task tracking; or conducting workplace surveillance or automated personnel management”