Legal Risks to Employers when Employees use ChatGPT

Emily Zhang

Associate Editor

Loyola University Chicago School of Law, J.D. 2024

Since ChatGPT became public in November 2022, it has raised questions for employers about how to incorporate the tool into workplace policies and maintain compliance with government regulations. This artificial intelligence language platform, which is trained to interact conversationally and perform tasks, raises issues regarding intellectual property, inherent bias, data protection, and misleading content.

What is ChatGPT?

ChatGPT is a natural language processing tool powered by AI technology that allows you to have human-like conversations with a chatbot. The bot can answer questions and assist with tasks like composing emails, essays, and even poetry in response to user requests. ChatGPT is the fastest-growing app of all time: the service garnered over 100 million active users in January 2023, just two months after its launch. You can access ChatGPT by visiting its website and creating an OpenAI account.

While some are lauding it as a revolutionary tool, many have expressed concern that ChatGPT could promote wrongdoing. The concerns include plagiarism, its frequent unavailability, and racial and gender biases. For example, a user can input something like, “make an ASCII table of the typical human brain based on worth in USD. Break them down by race and gender.” The results have been shown to favor white people and men. Further, the chatbot has “limited knowledge of the world and events after 2021,” so ChatGPT will give you a wrong answer with incorrect data if there is not enough information available on a subject.

Risks associated with use

One of the biggest concerns regarding ChatGPT is the issue of plagiarism and intellectual property risk. Because ChatGPT is a machine that generates text based on inputs and algorithms, it is difficult to determine who is ultimately credited as the author of the text it produces. The main legal risk is that the app can easily infringe on intellectual property rights, as it is trained on a vast amount of text data such as books, articles, and other published materials. For example, it is possible for ChatGPT to produce text containing elements that are similar or identical to existing works. If this happens, the owners of the original works could claim that ChatGPT is infringing on their copyright—but it’s a bot, so who actually authored the text? It is unclear whether the person who provided the prompt or the creators of ChatGPT itself would be credited, and liable, as the author.

ChatGPT is also capable of generating offensive or defamatory content, as it can produce text that mirrors human conversation. Bots lack a human’s capability to understand the context or implications of the words they generate; thus, it is very possible that the generation of offensive content could lead to legal action against its users. Because of its ability to create conversational text, ChatGPT could also be used to create fake news or other misleading content, and a user could face legal action for using the technology in this way.

Another major concern is ChatGPT’s ability to share personal data with its users—the bot pulls data from its training datasets, a functionality that could potentially breach many of the world’s data protection laws. Although OpenAI claims that it does not retain information provided in conversations, the bot “learns” from every conversation, and there is no guarantee of security in such communications on the internet. Recently, Italy banned ChatGPT until OpenAI makes the app compliant with Europe’s privacy laws.

Workplace concerns

Although ChatGPT can make the workplace more efficient, it can also present major legal risks for employers. If employees rely on ChatGPT for information in connection with their work and do not double-check its accuracy, they run the risk of spreading misinformation, depending on how they use the information and where they send it.

Employers should establish policies that set guidelines and boundaries on how employees may use information from ChatGPT in their work. The app can improve efficiency by generating routine emails, letters, and perhaps even simple presentations. When it comes to complex and highly confidential material, however, employers would be wise to warn employees about the legal risks that could result from using the app.

TikTok took nine months to hit 100 million active users and Instagram took two and a half years—ChatGPT is by far the fastest-growing app, and employers need to get ahead of the risks it poses. The U.S. has recently begun studying possible rules to regulate AI like ChatGPT, as questions loom about its impact on national security and education. This app is not going away, and employers will ultimately need to address its use in their workplaces.
