AI in the workplace: Balancing efficiency with employee privacy
In the first article of this series, we explored how artificial intelligence (AI) is being leveraged in HR, from automating tasks to streamlining performance reviews. Now, we’ll take a closer look at a critical concern: balancing efficiency with employee privacy.
AI can be tempting to employers. It can often accomplish tasks much more quickly than humans, allowing employers to automate time-consuming and repetitive work. HR professionals are particularly drawn to AI because their responsibilities involve managing and analyzing large amounts of data, and AI systems rely on large volumes of data for their algorithms to learn and function effectively.
For HR, much of that data includes employees’ personal information. When using AI, employers must be careful not to prioritize efficiency and productivity at the expense of employee privacy. If they do, they risk harming employee morale, as well as potential legal liability. However, there are a number of ways employers can reduce such risks.
How to protect employee privacy while maintaining efficiency
Employer policies and training
It’s crucial that employers address AI use in workplace policies. These policies might specify which AI tools may be used and for what purposes, who may use AI and access AI-generated information, what data may be entered into AI technology, under what circumstances AI-generated information may be disclosed, and what laws must be followed. Employees who will be working with employee data should be trained on these guidelines.
Avoid using open AI systems
Open AI systems (those that don’t limit how the prompts entered into the system are used by the AI tool), such as ChatGPT, are free and available to the public. However, any information entered into these systems might be shared with unintended users and retained in the AI’s network indefinitely for further system training. Sharing employee data with an open AI system could violate state and federal privacy laws, especially if the information includes sensitive identifying, health, or genetic information.
Vet AI vendors
Closed AI systems are usually proprietary and may limit access by outside users. As with any software or technological platform, employers should vet all AI programs (open and closed) that employees use. Employers need to know under what circumstances data entered into a system could be shared with users outside the workplace to ensure compliance with applicable privacy laws. When selecting AI vendors, employers should ask about their data security and retention practices to confirm they’re adequate to protect employee data, and should request external validation studies of the system as well.
Monitor AI systems
AI technologies are new and evolving. Many AI systems function as a “black box,” meaning they operate and arrive at conclusions or decisions without providing any explanations as to how they were reached. Therefore, systems must be closely monitored, and AI materials produced should be reviewed carefully. Employers need to know how AI systems use and share information so they can comply with various privacy laws.
Implement and maintain strong company data security protocols
As with all company technology, robust data security measures are a best practice for reducing the risk of breaches. Malicious AI applications, or cyberattacks exploiting vulnerabilities in AI systems, can compromise an organization’s sensitive data.
Use AI for employee monitoring cautiously
AI tools can make employee monitoring faster and more far-reaching, but many states have laws addressing employee surveillance, while others are considering how to regulate such practices. These laws include restrictions on video recordings, GPS tracking, and computer monitoring, so employers should confirm that any AI-powered monitoring complies with them.
Be mindful of state laws pertaining to employee information
There are a variety of state laws that may protect employee information, including employee consent and notification requirements, as well as restrictions on how employers may retain, dispose of, and use such information. In some states, consumer data privacy laws apply to employment data. Others have laws centered specifically on employee data, such as Social Security numbers and other personally identifying information, credit history, and even personnel files and disciplinary records.
Be sure to comply with biometric privacy laws
Some biometric AI systems monitor physiological data, such as heart rate, body temperature, and facial expressions. A growing number of states have laws governing facial recognition technology and the use of biometric information, such as Illinois’ Biometric Information Privacy Act.
Consider data being collected during open enrollment
During open enrollment, employees must make important health and financial decisions. Many companies have begun using AI-powered decision-support technology to answer employee questions and assist them in this process. However, to do so, such platforms must collect sensitive employee information, so employers should understand how these platforms collect, store, and share that information before deploying them.
Ensure AI complements HR and legal expertise
AI can be a powerful and efficient tool, but employers should remember that it should complement, not replace, the HR and legal professionals responsible for safeguarding employee personal information. To ensure proper safeguards, employers should foster collaboration among their legal, HR, and IT teams.