Publication

Jun 13, 2023 | Published Article

Weighing The Risks Of AI For Employee Benefits Admin

Law360

Artificial intelligence will be able to streamline benefits administration, reduce costs, increase accuracy and enhance compliance.

Just ask your favorite generative AI engine. For example, when ChatGPT was asked "Can AI be used in employee benefits administration?" it said:

Overall, AI can greatly enhance the efficiency and effectiveness of employee benefits administration by automating tasks, providing personalized recommendations, improving communication, and generating valuable insights. However, it's important to strike a balance between AI-driven automation and maintaining a human touch to ensure that employees still have access to personalized support and assistance when needed.

To be clear, the authors of this article agree wholeheartedly, and we look forward to the upcoming revolution.

As with the implementation of any new technology, however, employers must be cautious with the use of AI as a primary tool in benefits administration.

What Can AI Do?
Large organizations often have an entire department overseeing their employee benefit plans. Smaller employers often rely upon third-party administrators to handle those tasks. Both stand to benefit greatly from AI, which could transform the benefits administration function and enhance their overall work product.

Streamlined Benefits Administration
From employee enrollment and eligibility verification to claims processing and plan design, AI can automate repetitive tasks, reduce administrative burdens and improve accuracy.

Enhanced Employee Self-Service
Chatbots and virtual assistants could provide real-time support to participants, answering benefit-related queries and guiding them through various decision trees and oft-encountered issues. This self-service approach could improve accessibility for employees and allow them to manage their benefits more independently while simultaneously reducing administrative burdens.

Personalization and Decision Support
AI-driven algorithms can analyze vast amounts of data, such as employee demographics, health records and utilization patterns, to personalize benefits offerings to individual employees.

By leveraging machine learning techniques, plan sponsors could tailor their plans to meet the specific needs and preferences of their diverse workforce. It's possible that AI-powered decision support tools could help employees make informed choices by analyzing individual data and providing personalized recommendations, thereby simplifying the often distressing enrollment process.

Enhanced Compliance and Risk Management
Maintaining compliance with ever-changing benefits regulations is a complex task.

AI can assist employers by monitoring legislative changes and automating compliance updates in benefits administration systems. AI could also be used to facilitate communication among HR professionals, employees and benefit providers, and to update required annual notices, restate summary plan descriptions or generate a summary of material modifications.

Predictive Analytics and Cost Optimization
AI can leverage predictive analytics to anticipate future benefits needs and trends. By analyzing historical data and market trends, AI algorithms can forecast factors like health care costs, utilization rates and employee preferences, enabling employers to proactively adjust benefits programs, negotiate better rates with vendors, and make informed decisions on plan modifications.
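
For readers curious what such a forecast looks like in its simplest form, the sketch below fits a basic linear trend to hypothetical per-employee claim costs. It is a toy illustration under assumed figures, not a depiction of how any particular vendor's analytics engine actually works.

```python
# Toy illustration of a linear cost-trend forecast; the per-employee claim
# costs below are hypothetical, and real predictive-analytics tools are far
# more sophisticated than a straight-line fit.
years = [2019, 2020, 2021, 2022, 2023]
costs = [6800, 7100, 7600, 8050, 8600]  # hypothetical average cost per employee

n = len(years)
mean_x = sum(years) / n
mean_y = sum(costs) / n

# Ordinary least-squares slope and intercept for a simple linear trend.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(years, costs))
    / sum((x - mean_x) ** 2 for x in years)
)
intercept = mean_y - slope * mean_x

for year in (2024, 2025):
    print(f"{year}: projected cost ~ ${intercept + slope * year:,.0f}")
```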

Many of these advances have already been rolled out in various forms, and we expect more to follow. Even more intriguing are the advances that we cannot anticipate.

Drawbacks, Limitations and Legal Risks
While these enhancements may seem like a dream come true, there are potential drawbacks, limitations and legal risks associated with the use of AI in employee benefits. Consider these simple hypotheticals:

  • An employer uses an AI chatbot to assist with open enrollment. Employees are directed to the open enrollment website where they share certain personal and financial information with the AI chatbot. The chatbot is designed to help employees make informed decisions as to what plan options are optimal for them, using the information provided to offer advice on what plan options they should elect. After the open enrollment data is analyzed, it turns out that the chatbot steered a group of participants in a protected class — e.g., race, age, gender, etc. — disproportionately toward the high deductible health plan instead of a preferred provider organization plan, potentially to their detriment.
  • An employer uses an algorithm-based program to assist retirement plan participants with their retirement investment decisions. The algorithm bases its advice on information provided by the participant. A participant provides incorrect information regarding the participant's age, resulting in an obviously inappropriate investment allocation. The employer has the employee's age on record but does not correct the error.
  • An employer approves the use of AI to translate its summary plan description into another language. The translation incorrectly describes a crucial benefit for a participant.
  • An employer's human resources personnel are asked by a participant to explain the tax treatment of early distributions from a retirement plan. The HR personnel use a generative AI program to summarize a response. However, unbeknownst to the HR personnel, the generative AI program "hallucinated" certain facts and provided an incorrect answer.[1]

As you can see, it's easy to imagine numerous scenarios in which AI will inadvertently produce incorrect or incomplete answers.
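
To make the first hypothetical concrete, consider how a plan sponsor might screen enrollment outcomes for the kind of disproportionate steering described above. The sketch below applies the familiar "four-fifths" rule of thumb from employment-discrimination analysis to hypothetical enrollment records; the data, group labels and threshold are assumptions for illustration, not a prescribed compliance methodology.

```python
# Illustrative sketch only: the enrollment records, group labels and the
# four-fifths threshold below are hypothetical assumptions, not a
# prescribed compliance methodology.
from collections import Counter

# (group, plan) pairs as they might be exported from an enrollment system.
enrollments = [
    ("group_a", "HDHP"), ("group_a", "HDHP"), ("group_a", "HDHP"), ("group_a", "PPO"),
    ("group_b", "PPO"), ("group_b", "PPO"), ("group_b", "HDHP"), ("group_b", "PPO"),
]

totals = Counter(group for group, _ in enrollments)
ppo_elections = Counter(group for group, plan in enrollments if plan == "PPO")

# Rate at which each group ended up in the (assumed preferable) PPO plan.
rates = {group: ppo_elections[group] / totals[group] for group in totals}
benchmark = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: PPO rate {rate:.0%}, ratio vs. highest group {ratio:.2f} -> {flag}")
```

Any real audit would, of course, involve counsel, appropriate statistical testing and careful handling of the underlying demographic data.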

While AI holds the potential to revolutionize how plan sponsors and service providers manage and deliver benefits, the imperfect nature of AI and the dynamic legal landscape regulating its usage require measured consideration.

Employee Retirement Income Security Act
Many benefit plans are subject to regulation under the Employee Retirement Income Security Act,[2] which "sets minimum standards for most voluntarily established retirement and health plans in private industry to provide protection for individuals in these plans."[3] Decision makers overseeing benefit plans regulated by ERISA must consider the fiduciary duties imposed upon them.[4]

Under ERISA, fiduciary duties include the duty (1) to act prudently, (2) to diversify plan assets, (3) to comply with the plan's terms, (4) of loyalty, (5) to pay only reasonable plan expenses, and (6) not to engage in prohibited transactions.[5]

In ERISA fiduciary cases, courts have recognized that these fiduciary obligations are the highest known to the law.[6]

The duty to act prudently is perhaps the most commonly litigated duty.

Often referred to as the "prudent person rule" or the "prudent expert rule,"[7] it requires, at a basic level, that fiduciaries discharge their duties with the care, skill, prudence and diligence that a prudent person acting in a like capacity would use in similar circumstances.[8]

Functionally, this rule requires a plan fiduciary to follow a well-reasoned, thoroughly documented decision-making process and to monitor the results of its decisions on an ongoing basis.

How courts and the enforcing agencies will interpret the use of AI in benefit matters is largely unknown.

However, ERISA fiduciaries must certainly be mindful that the use of AI does not excuse them from their fiduciary duties. When AI "enters the chat," it will be important to assess whether a reasonable person in similar circumstances would decide to implement AI for a given task.

In addition, the prudent person rule applies both to the decision to use AI and to the decision whether to implement any thoughts, ideas or processes generated by AI. For example, how a committee vets an AI chatbot or an algorithm may become an important analysis. Similarly, the ability to rely upon a disclaimer warning employees about the potential risks associated with using an AI model remains untested.

Perhaps the use of AI will eventually lead the prudent person rule to evolve into the "prudent machine rule." Until then, ERISA fiduciaries should assume that they will be ultimately responsible for AI's responses and decisions and, to that end, may be forced to become experts for purposes of validating that the AI tool yields the intended result and does not inappropriately "hallucinate" or "assume" false information.[9]

If a fiduciary who is not an AI expert wishes to use AI to execute any fiduciary function — including benefits administration — they will need to become educated on how to evaluate the appropriateness of the AI tool. To do so, they may need to engage experts, as happens regularly with respect to investment advice, vendor evaluation and a number of other common fiduciary functions.

We expect this to become the norm for the evaluation of any AI-powered tools.

Plan sponsors should review their insurance policies to confirm whether the use of AI is covered before adopting any AI-powered tools. While we have yet to see express carveout language addressing fiduciaries' or plan administrators' use of AI, such carveouts may be on the horizon.

In addition to the basic ERISA fiduciary standards that will continue to apply, employers must consider numerous other ERISA-centric issues such as whether the data gathered on participants belongs to the plan as a plan asset, or whether information in employee files can or should be shared with an AI to help generate a better response.

State Law Landscape
Many states have begun to consider or propose legislation that would regulate how AI is used in employment-related decisions and situations.

Some proposed state laws regulating AI usage purport to extend to decisions to deploy AI in employee benefits enrollment or administration.

For example, a bill pending in Massachusetts — H.B. 1874 — purports to regulate the use of AI in "employment-related decisions," which includes, among other things, any decision made by an employer that affects wages, benefits and other terms or conditions of employment.[10]

It is likely that as the use of AI becomes more commonplace, more states will introduce legislation attempting to regulate it, and in some instances, such legislation could creep into ERISA preemption territory.

ERISA preemption rules were first adopted to help ensure uniformity and consistency in the regulation of employee benefit plans across different states. The basic principle is that ERISA supersedes state laws that relate to employee benefit plans.

While ERISA preemption has historically been far-reaching, giving states only a narrow avenue to regulate employer-sponsored plans, the U.S. Supreme Court's 2020 ruling in Rutledge v. PCMA signaled a cutback of this principle,[11] finding that state laws that "merely" affect or regulate benefit costs — i.e., have an "indirect economic influence" — are not preempted even though they may alter the incentives and decisions facing employer-sponsored plans.[12]

It is hard to tell whether courts will react the same way to AI-powered tools that affect employee benefits. Although cost is a factor, AI-powered tools that affect the outcome of a participant's claim or that influence a participant's decisions within a benefit plan would appear to be much more directly connected to benefit plan administration — and, therefore, laws affecting such tools are more likely to be preempted.

That said, there will almost certainly be a fight in the courts to decide who is the ultimate authority when it comes to regulating AI — and ERISA may end up being at the center of that battle.

Conclusion
AI has the potential to revolutionize benefits, offering streamlined processes, personalized experiences, enhanced compliance and improved employee engagement.

However, these opportunities come with significant risks related to ERISA fiduciary duties and preemption considerations, and the evolving state law landscape makes careful attention all the more critical.

Employers and benefits professionals must navigate these complexities with care, ensuring that AI is harnessed responsibly to deliver equitable, compliant and employee-centric benefits programs.


The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] A recent New York Times article is one of many mainstream media pieces exploring the issues associated with AI, including "hallucinations." See "When A.I. Chatbots Hallucinate," Karen Weise and Cade Metz, New York Times (May 1, 2023). Hallucinations are generally understood to be situations in which an AI model fabricates information. The New York Times article cited an internal Microsoft document noting that the new A.I. systems are "built to be persuasive, not truthful … This means that outputs can look very realistic but include statements that aren't true."
[2] 29 U.S.C. § 1001 et seq.
[3] https://www.dol.gov/general/topic/retirement/erisa.
[4] While outside the scope of this article, it is important to recognize that whether and how a benefit plan is regulated by ERISA depends, in part, upon the type of benefit offering and whether insurance laws apply.
[5] 29 U.S.C. § 1104(a)(1).
[6] Howard v. Shay, 100 F.3d 1484, 20 EBC 2097 (9th Cir. 1996) (ERISA's duties are "highest known to the law").
[7] 29 U.S.C. § 1104(a)(1)(B).
[8] Ibid.
[9] A New York personal injury firm recently made headlines after two of its attorneys submitted a brief in federal court containing nonexistent cases and opinions produced by ChatGPT. See "Atty Citing 'Bogus' Cases From ChatGPT is 'Unprecedented'," Matt Perez, May 30, 2023, https://www.law360.com/articles/1682364.
[10] Massachusetts H.B. 1874.
[11] Rutledge v. PCMA, 141 S. Ct. 474 (2020).
[12] Ibid.
