Artificial intelligence-automated decision-making tool liability and audits: What HR needs to know
Overview of AI use in hiring
HR professionals’ use of artificial intelligence (AI)-automated decision-making tools in hiring is rapidly expanding. AI can be particularly useful for tasks like reviewing résumés and identifying candidates with relevant experience and skills. AI can even conduct and analyze video interviews.
However, because AI systems can be biased, employers using AI for recruiting purposes must do so with caution. Bias in an AI system can arise for a number of reasons, including the design of the model or the data used to train it. Regardless of the reason, employers must proceed thoughtfully and with due diligence.
Legal responsibility under antidiscrimination laws
Employers are responsible under antidiscrimination laws for AI bias in hiring decisions, whether the AI tools were developed in-house or by a third-party vendor. This is because the employer ultimately makes the hiring decision, even with the assistance of an AI tool. Notably, discriminatory intent isn’t needed for legal liability to arise: employers may be held responsible for practices that have a disparate impact on legally protected groups when the outcomes are based on factors that aren’t job-related or consistent with business necessity.
State and local regulation trends
An increasing number of states and municipalities are proposing and enacting laws specifically regulating the use of AI-automated decision-making systems, with some even requiring audits of these systems. Accordingly, it’s crucial that employers make sure the AI vendors they use are regularly auditing their AI tools. Employers should also conduct their own audits of AI tools they develop and/or use.
Key steps for auditing AI hiring tools
As part of their audits, employers should do the following:
- Examine their data. Review the company data that AI tools are trained on and use, and consider what types of bias it might contain.
- Assess how AI is making decisions. Check to ensure the AI tool isn’t using any protected characteristics in reaching outcomes. One means of doing this is counterfactual reasoning, which involves altering data inputs to see how the system’s decisions change, highlighting dependencies on biased factors.
- Review the hiring process. Identify every stage of the hiring process where AI is making or influencing a decision. Examine whether the tool’s results differ for protected groups at each of these stages.
- Scrutinize AI criteria. Validate that the factors AI is using in its analyses are job-related and predictive of success. This might entail evaluating what key criteria are vital for candidates to have.
- Perform statistical analysis. Statistical analysis can be used to compare outcomes across demographic groups. One common measure of disparate impact is the four-fifths (80%) rule. Under this rule, a selection rate for a protected group that’s less than 80% of the rate for the most favored group suggests the possibility of adverse impact. This isn’t a definitive test but rather an indicator that a closer review of a particular decision-making process is warranted.
- Obtain human input. Compare AI tool decisions against nonbiased human evaluations.
- Document the audit process. Keep written records of all steps taken to assess AI systems for bias, the results, and any corrective steps taken.
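To make the selection-rate comparison and counterfactual testing described above concrete, here is a minimal sketch in Python. The group names, outcome data, and the `counterfactual_flip` helper are hypothetical illustrations; a real audit would use the employer’s actual applicant data and professionally validated methods.

```python
# Illustrative sketch only -- the data, group names, and toy model
# below are hypothetical, not drawn from any real hiring system.

def selection_rates(outcomes):
    """Selection rate (selected / total applicants) per group."""
    return {group: sum(results) / len(results)
            for group, results in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag each group whose selection rate is below 80% of the
    most favored group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < 0.8 for group, rate in rates.items()}

def counterfactual_flip(model, candidate, attribute, alternatives):
    """Return True if changing only `attribute` changes the model's
    decision -- a sign the tool may depend on that attribute."""
    baseline = model(candidate)
    return any(model(dict(candidate, **{attribute: alt})) != baseline
               for alt in alternatives)

# Hypothetical screening outcomes: 1 = advanced, 0 = rejected.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% selected
}
print(four_fifths_check(outcomes))  # {'group_a': False, 'group_b': True}
```

Here, group_b’s 30% selection rate is only 37.5% of group_a’s 80% rate, so it falls below the four-fifths threshold and is flagged for closer review; `counterfactual_flip` can then be run on individual candidates to probe whether a particular attribute is driving the tool’s decisions.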
Actions employers may need to take after audits
Depending on the outcome of audits, employers may need to take some or all of the following steps:
- Add additional human oversight. AI should always support, not replace, human decision-making. However, if employers notice AI bias, they need to add more human review and oversight.
- Get AI vendor involved. Employers should seek their vendor’s expertise and support in eliminating bias. If a vendor isn’t transparent or can’t resolve the issue, the employer may want to consider making a switch.
- Retrain and reconfigure. If the AI bias is caused by bad training data, employers may want to consider retraining the tool with new, clean data.
- Obtain an independent audit. Many AI vendors, as well as employers, have their tools audited by a third party. While such audits add cost, they can be worth it to avoid discrimination claims against the company.
- Involve legal counsel. With the increase in AI bias-related lawsuits and legislation, employers that become aware of a potential problem will want to obtain an attorney’s expert guidance. An attorney can conduct a privileged assessment and plan for mitigation.
Final takeaway for HR leaders
AI hiring tools can be tremendously helpful to HR departments, saving time, freeing them up for other tasks, and enhancing the efficiency and quality of the hiring process. However, employers must be mindful of AI’s limitations and the potential for bias. By conducting AI audits, employers can protect their company’s reputation and mitigate legal liability for discrimination.