AI in the workplace: Key risks employers must address
In the second article of this series, we discussed steps employers can take to protect employee privacy while using artificial intelligence (AI) in the workplace. In this third installment, we explore the risks HR professionals face when implementing AI in the workplace and the steps they can take to mitigate them.
AI in the workplace is a much-debated and growing issue. Due to AI’s rapid proliferation, employers often find themselves ill-prepared to manage and contain the risks of employees using AI in their job duties. It’s important that employers be aware of the risks and have strong policies and safeguards in place.
Generative AI is a type of AI that creates new text, images, video, audio, or other content based on the large amounts of data it was trained on. Large language models are a type of generative AI that emulate the structure and characteristics of their training data to produce clear, coherent, human-like text in response to a question or prompt submitted by a user. Employees often use these tools to write e-mails, letters, and reports, as well as to analyze data. Some surveys indicate that as many as 85% of American workers use AI to help complete tasks at work. For example, ChatGPT, perhaps the most well-known generative AI tool, is free and can be accessed simply by providing an e-mail address.
Risks associated with employee use of AI in the workplace
Unauthorized disclosure or public release of company confidential information, copyrighted materials, and trade secrets
When employees enter information into such tools, that information may be retained by the AI tool indefinitely and accessed by unintended third parties. Under trade secret law, information rises to the level of a trade secret only if it is kept secret and subject to reasonable efforts to maintain that secrecy. Because AI programs are third-party programs that use their own algorithms and platforms, information readily shared with a third-party software program with unknown security parameters may no longer qualify as a trade secret. What may seem like an innocent use of a helpful productivity tool can therefore result in significant legal claims.
Lack of ownership of employee-created materials
While the governing terms of a generative AI tool may purport to grant users copyright ownership of the output or content created, the company that owns the AI tool often doesn't have sufficient rights to grant that ownership because some or all of the output is owned by others.
Plagiarism and/or copyright infringement
Just as third parties may obtain a company's confidential and/or legally protected materials, employees may also inadvertently obtain and use prohibited information. Whether this results in legal claims against the company or harm to its reputation for using another company's property, employers will want to prevent it from happening.
Incorrect information/“hallucinations”
AI hallucinations are incorrect or misleading results that AI models create, often caused by poor data quality. Generative AI models obtain data from publicly available information on the Internet, from third parties they partner with, and from information that users provide. If that data is incomplete, incorrect, or otherwise flawed, the output suffers.
Bring your own AI/“shadow IT”
When employees bring their own AI/use applications that haven’t been vetted and approved by the company, the employer’s IT systems are put at risk. Like other forms of shadow IT, this can leave the organization vulnerable to phishing attacks, malware, and potential data breaches that compromise sensitive company information.
Using AI to record meetings
Special problems arise when AI is used to record and transcribe meetings. Such AI tools can be attractive: they allow participants to focus on the discussion rather than note-taking, and they preserve information so those who can't be present don't miss out. However, employers must make sure they comply with applicable consent requirements. Some states require the consent of all parties to a call before it can lawfully be recorded, while others require consent from only one party (usually the person doing the recording). Either way, it's important to verify the laws specific to your jurisdiction.

Another consideration is what type of information is being captured and how a recording is preserved once created. Who is able to access it? In deciding how (and how long) recordings should be maintained, note that the subject matter discussed in a meeting may create legal requirements for retention and disposal. If meetings include third parties, employers will want to address how they'll approach such situations.
Effective tips for managing risks of employee AI use in the workplace
Decide what your stance as an employer is on employee AI use. Depending on the employer's needs, this may be an open-use, limited-use, or prohibited-use policy. Train employees on the policy and monitor compliance with rules on what AI tools may be used, what they may be used for, and who may use them.
- Educate employees on what's considered confidential information and trade secrets. Prohibit employees from supplying such information to public AI tools and require them to avoid using the property of others.
- Implement quality control measures to reduce the chance of inaccuracies.
- Implement deidentification standards to protect confidential and personal information, which may involve obscuring or completely removing sensitive data.
- Ensure employees recognize the limitations of AI tools and don't rely on them excessively. Employees must remain vigilant in reviewing AI-generated material for incomplete or erroneous information.
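To illustrate the deidentification tip above, here is a minimal sketch of a pattern-based redaction pass that could run before text leaves the company (for example, before it is pasted into a public AI tool). All names and patterns here are illustrative assumptions; real deidentification programs typically layer on named-entity detection, broader pattern coverage, and human review.

```python
import re

# Illustrative patterns only; a production system would cover far more
# categories (names, addresses, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace sensitive tokens with labeled placeholders so the
    surrounding text stays usable in a prompt without exposing the data."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(deidentify("Contact Jane at jane.doe@example.com or 555-867-5309."))
# → Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

The design choice worth noting is that placeholders are labeled rather than blanked out, so an employee (or a later review) can still tell what kind of information was removed.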
Employers will want to communicate to employees the benefits and risks inherent in using AI tools. In addition, employers will want to address any employee concerns about being made redundant by AI. As the U.S. Department of Labor (DOL) has stated in its Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers:
“AI can positively augment work by replacing and automating repetitive tasks or assisting with routine decisions, which may reduce the burden on workers and allow them to better perform other responsibilities. Consequently, the introduction of AI-augmented work will create demand for workers to gain new skills and training to learn how to use AI in their day-to-day work. AI will also continue creating new jobs, including those focused on the development, deployment, and human oversight of AI. But AI-augmented work also poses risks if workers no longer have autonomy and direction over their work or their job quality declines.”