June 15, 2023 | Client Alert

All Eyes on AI: New EEOC Guidance and State Updates regarding Artificial Intelligence in Hiring

The Equal Employment Opportunity Commission (EEOC) recently issued new guidance on the use of artificial intelligence (AI) in employment selection procedures. The guidance, titled "Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964," provides employers with information on how to assess the potential adverse impact of AI-based selection procedures on protected groups. The EEOC has been closely examining AI issues in employment through its Artificial Intelligence and Algorithmic Fairness Initiative, launched in 2021. Furthermore, a joint statement recently issued by the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the EEOC, and the Federal Trade Commission reaffirms those agencies’ commitment to applying their enforcement authorities to automated systems that employ AI.

The guidance explains that selection procedures should be related to the skills needed to perform the particular job successfully. Algorithmic decision-making tools constitute a selection procedure when they are used to make or inform decisions about whether to hire applicants, and they can take many forms, including:

  • Software that prioritizes or scores job applicants based on keywords in their resumes.
  • Algorithms that use data to make decisions about whom to hire, promote, or fire.
  • Virtual assistants that interview job applicants.

Although the guidance is not binding, it provides insight into the EEOC’s views on preventing disparate impact in hiring procedures. Some of the EEOC’s recommended best practices and reminders include the following:

  • To check whether a particular selection procedure has an adverse impact, employers should analyze whether use of the procedure causes a selection rate for individuals in one group that is “substantially” less than the selection rate for individuals in another group. A statistical analysis of the procedure’s results can indicate whether it produces a disparate impact (see the illustrative calculation after this list).
  • Even if an algorithmic decision-making tool was designed or administered by an outside entity, such as a software vendor, the employer is usually still responsible for any adverse impact that occurs.
  • If an employer discovers that a selection tool would have a disparate impact, it should take steps to reduce the impact or select a different tool.
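
The statistical check the guidance describes is largely arithmetic. The guidance discusses the “four-fifths rule” as a general rule of thumb: if one group’s selection rate is less than 80% of the highest group’s selection rate, the difference may indicate adverse impact. The Python sketch below illustrates that calculation only; the group labels and applicant counts are hypothetical and do not come from the guidance or any real selection procedure.

```python
# Illustrative sketch of the four-fifths rule of thumb.
# All group names and counts below are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 is a common rule-of-thumb indicator (not a legal
    determination) that the procedure may have an adverse impact.
    """
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes of an algorithmic screening tool.
rates = {
    "Group A": selection_rate(selected=48, applicants=80),  # 0.60
    "Group B": selection_rate(selected=12, applicants=40),  # 0.30
}

for group, ratio in impact_ratios(rates).items():
    flag = "possible adverse impact" if ratio < 0.8 else "within rule of thumb"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

The guidance itself cautions that the four-fifths rule is only a rule of thumb: smaller differences may still be substantial in some circumstances, so employers should treat a calculation like this as a preliminary screening step rather than a conclusion.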

Introduced and Enacted State and Local Laws
In addition to oversight from the EEOC, several states and localities have passed or introduced laws aimed at regulating the use of AI in automated employment decision tools.

  • California: introduced AB 331, which would require a deployer and a developer of an automated decision tool to perform an impact assessment for any automated decision tool the deployer uses, including a statement of the tool’s purpose and its intended benefits, uses, and deployment contexts. It would also require the deployer or developer to provide the impact assessment to the California Civil Rights Department.
  • District of Columbia: introduced B 114, the Stop Discrimination by Algorithms Act of 2023, which would prohibit employers that use algorithmic decision-making from making algorithmic eligibility determinations in a discriminatory manner and would require corresponding notices to individuals whose personal information is used.
  • Illinois: Illinois became the first state to enact restrictions on the use of AI in hiring through the Artificial Intelligence Video Interview Act, which took effect in 2020 and requires employers that use AI-enabled assessments to analyze video interviews of applicants to provide notice to, and obtain consent from, the applicant. The Act also imposes distribution limitations and destruction requirements. Additionally, Illinois has introduced HB 3773, which would prohibit employers from using predictive data analytics in employment decisions.
  • Massachusetts: introduced H.1873, An Act Preventing A Dystopian Work Environment, which would require employers to notify applicants and employees if the employer will implement an automated decision system and would prohibit employers from relying solely on output from an automated decision system to make employment decisions, such as hiring, promotion, termination, or disciplinary actions. Employers would also need to complete an algorithmic impact assessment that, among other things, evaluates potential discrimination against protected classes.
  • New Jersey: New Jersey has introduced several bills, including A 4909, which would regulate the use of automated tools in hiring decisions to minimize discrimination in employment by: (1) prohibiting tools that have not been subject to a bias audit; and (2) requiring disclosure to applicants that an automated employment decision tool was used in connection with the candidate’s application.
  • New York: New York City’s Local Law 144 regulates the use of automated employment decision tools and requires employers in NYC using these tools in hiring and promotion to: (1) have an independent auditor conduct a bias audit of the tool based on race, ethnicity, and sex; (2) provide notice to applicants and employees subject to the tool; and (3) publicly post a summary of the bias audit and the distribution date of the tool. Enforcement will begin on July 5, 2023. New York State has also introduced SB 5641, which would establish criteria for the use of automated decision tools in screening candidates.
  • Vermont: introduced H.114, which would restrict the use of automated decision systems for employment-related decisions unless the employer has conducted an impact assessment.

What does this all mean for employers?
With the rapid advancement of AI technology, a corresponding rise in regulation is inevitable. The attention AI is receiving from both administrative agencies and state legislatures highlights the importance of addressing potential bias and ensuring fairness in the use of AI systems. Ultimately, it is the employer that is responsible and liable for disparate impact, regardless of whether a human, an AI system, or a service provider makes the employment decision. Employers using AI technology to evaluate job applicants and make hiring decisions should proactively stay informed about pending and current legislation to determine their potential obligations. Additionally, when using a vendor’s AI system in hiring, employers should confirm with the vendor whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a protected characteristic.
