AI in hand

Workplace challenges in using artificial intelligence

Author: Gordon M. Berger, Partner

Will artificial intelligence (“AI”) programs like OpenAI’s ChatGPT and Google’s Bard replace humans and start to take our jobs? Will the apocalyptic outcome seen in some movies play out and result in the demise of the human race?

While no one really knows for sure what the future of AI (or the human race) looks like, PEOs should be focusing on how AI tools can assist their clients, with the understanding that this is an emerging technology that remains in flux. Yes, increasingly, employers are using AI in the workplace to locate, recruit, evaluate and communicate with job applicants. Employers are also using AI to assist employees with benefits and benefits enrollment, to conduct training, to write job descriptions, to avert spam attacks and to translate documents and forms into foreign languages.

Clearly, there are pitfalls and risks to using AI in the workplace. AI can violate the privacy of workers or operate in a discriminatory manner toward certain protected classes of employees.
When we think of AI in the workplace, we are not talking about algorithms that simply predict outcomes; rather, we are talking about generative AI, which uses algorithms to generate new results based on the data it has been fed (or trained on).

The challenge and concern is that AI programs are created by humans and are therefore inherently flawed and biased. As a result, the use of AI in the workplace can create employee claims under such laws as Title VII of the Civil Rights Act (“Title VII”), the Age Discrimination in Employment Act, the Americans with Disabilities Act (“ADA”) and their state law counterparts.

AI has been on the Equal Employment Opportunity Commission’s (“EEOC’s”) radar for some time. In 2021, the EEOC formed an initiative to address AI. As part of the initiative, the EEOC pledged to:

  • Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions;
  • Identify promising practices;
  • Hold listening sessions with key stakeholders about algorithmic tools and their employment ramifications; and
  • Gather information about the adoption, design, and impact of hiring and other employment-related technologies.

Then, on May 18, 2023, the EEOC issued technical guidance on the use of AI to assess job applicants and employees under Title VII. In short, AI tools can violate Title VII under a disparate impact analysis, which looks at whether persons in protected classes (e.g., based on race, sex or national origin) are hired at disproportionately lower rates than those outside of those classes.
Further, EEOC Chairwoman Charlotte Burrows is on record as saying that more than 80% of employers are using AI in some form in their work and employment decision-making. Given the apparent volume of employers using AI, the EEOC will certainly focus on AI-related discrimination in employment.

Note that the EEOC evaluates disparate impact discrimination by using the “four-fifths rule” enumerated in 29 C.F.R. § 1607.4(D). According to the four-fifths rule, “a selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) of the rate for the group with the highest rate will generally be regarded by the Federal enforcement agencies as evidence of adverse impact, while a greater than four-fifths rate will generally not be regarded by Federal enforcement agencies as evidence of adverse impact.” The EEOC guidance uses the following example to illustrate the rule: if an algorithm used for a personality test selects Black applicants at a rate of 30% and White applicants at a rate of 60%, the selection rate for Black applicants is 50% of the rate for White applicants (30/60 = 50%). Because 50% is lower than four-fifths (80%), the result suggests disparate impact discrimination.
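To make the four-fifths arithmetic concrete, here is a minimal sketch in Python of the comparison described above, using the selection rates from the EEOC’s example. The function name and figures are illustrative assumptions for this article only; this is not an agency tool, and it is no substitute for a proper adverse impact analysis.

    def four_fifths_check(selected, applicants):
        """Flag any group whose selection rate is below 4/5 of the highest rate.

        `selected` and `applicants` map group name -> head counts. Illustrative
        sketch of the 29 C.F.R. 1607.4(D) arithmetic only; not legal advice.
        """
        rates = {group: selected[group] / applicants[group] for group in applicants}
        highest = max(rates.values())
        for group, rate in rates.items():
            ratio = rate / highest
            verdict = "possible adverse impact" if ratio < 0.8 else "no adverse impact inferred"
            print(f"{group}: selection rate {rate:.0%} ({ratio:.0%} of highest) -> {verdict}")

    # EEOC example: White applicants selected at 60%, Black applicants at 30%.
    four_fifths_check(selected={"White": 60, "Black": 30},
                      applicants={"White": 100, "Black": 100})

Run as written, the sketch reports that the 30% rate is only 50% of the 60% rate, below the four-fifths (80%) threshold, matching the result in the EEOC’s example.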

One example of possible AI bias is the EEOC’s lawsuit against a company that was using AI for job candidate screening (EEOC v. iTutorGroup, Inc., et al., Civil Action No. 1:22-cv-02565 in the U.S. District Court for the Eastern District of New York). That company paid $365,000 to settle the lawsuit, in which the EEOC alleged age discrimination because the screening disqualified more than 200 female applicants over the age of 55 and male applicants over the age of 60. One rejected applicant resubmitted a job application with a more recent birth date but otherwise identical information; she was offered an interview when she presented as being younger.

Other agencies have addressed AI in the workplace. On the same day that the EEOC issued its technical guidance on AI, the Department of Justice posted its own guidance on AI-related disability discrimination and how the use of AI could violate the ADA.
On the state level, Illinois led the way in 2019 with one of the first AI workplace laws, the Artificial Intelligence Video Interview Act, which regulates employers that use AI to analyze video interviews of applicants for positions based in Illinois. Employers must make certain disclosures and obtain consent from applicants if they use AI-enabled video interviews. And if employers rely solely on AI to make certain interview decisions, they must collect applicant demographic data, including race and ethnicity, and submit it annually to the state so it can assess whether the use of AI produced racial bias.

Then came Maryland in 2020, which passed a law restricting employers’ use of facial recognition services during preemployment interviews unless the employer obtains consent from the applicant.

Takeaways

  • AI should not be solely relied on with respect to employment decisions. If AI is making hiring or termination decisions, management or HR should still review those decisions and ensure that they are not made for an unlawful purpose (i.e., not based on a protected classification, including race, religion, age, gender, disability, etc.).
  • AI is known to make up information. You may have heard about a law firm that used AI for legal research and the AI provided case law citations that did not exist (i.e., fictitious cases), which resulted in the law firm being sanctioned by the court.
  • Employers are not excused from complying with the law if AI gets it wrong. Per the EEOC, employers are liable under Title VII for “algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor.”
  • Check state and local law for any AI-specific requirements. For instance, New York City has a law that prohibits employers from making certain employment decisions using AI unless notice has been given to employees or candidates who live in the City. And California Gov. Gavin Newsom recently issued an executive order on AI that includes a number of provisions intended to review potential threats to, and vulnerabilities of, California’s critical energy infrastructure arising from the use of GenAI.

To learn more about this legal topic and how your PEO business can operate in total compliance, contact Gordon Berger directly or visit the FisherBroyles website here.