Loyola University Chicago School of Law, JD 2024
From “fake news” to misinformation to bots, it has become overwhelmingly challenging to authenticate information on the internet. This has not stopped the evolution of technology, as innovators compete to stay on the cutting edge of the latest software. OpenAI, an artificial intelligence research and deployment company, launched ChatGPT in November 2022. The artificial intelligence chatbot is trained to generate realistic and convincing text. The software was trained on human literature and internet language, enabling it to produce a body of text within the parameters of a given prompt. With more than 1 million users, it has gained traction across the masses. However, the natural language processor has sparked controversy over cybersecurity threats and ethical concerns surrounding its use.
ChatGPT’s impact on cybersecurity risk and end-user harm
ChatGPT is an example of the continuous evolution of technology; however, it brings cybersecurity concerns with it. ChatGPT can create sophisticated responses based on the questions or information a user inputs. Yet, while revolutionary, it has been shown to cause harm. Because the software is trained on a large dataset of text, it can surface sensitive information about individuals and organizations. As it stands, anyone can create an account, making it an easily accessible potential tool for attackers. The potential implications of ChatGPT falling into the wrong hands have caused a frenzy among experts. There are already examples of attackers posting on underground hacking forums about their efforts to recreate malware strains and dark web marketplace scripts using ChatGPT. The software’s ability to quickly generate large amounts of text enables attackers to automate their efforts while evading detection. This places everyone, especially members of vulnerable populations, at higher risk, as the ability to craft more sophisticated phishing scams is heightened.
ChatGPT end users face a great deal of potential harm. For example, the accuracy of ChatGPT is, at best, questionable, as it regurgitates information drawn from a sweep of the internet that frequently does not contain reliable sources. Students, attorneys, and medical professionals have been known to use the tool; however, though its output may sound convincing, it is often inaccurate. The plausible-sounding information it provides, whether used to craft customer responses or advise defendants in the courtroom, can have costly, even tragic, results. It is only a matter of time before regulators place more restrictions on AI in light of the vulnerabilities identified despite the guardrails currently in place. Small adjustments can still bypass these restrictions despite the company’s efforts. Many organizations have already banned the use of ChatGPT due to the overwhelming number of concerns and potentially harmful effects.
Ethical concerns and consequences
While pitfalls come with the territory of technological innovation, OpenAI itself acknowledges the limitations of the ChatGPT software despite its launch. OpenAI’s own CEO, Sam Altman, has warned users to be careful about relying on the validity of the resource, as it is a work in progress. Google, the most relied-on search engine, has its own conversational technology, the Language Model for Dialogue Applications (LaMDA), but has stood firm on careful deployment, weighing the risk of greater consequences if something were to go wrong. While it is likely that Google will come forward with a comparable option, it is probably monitoring the regulatory response as rules continue to roll out.
Innovators in the technology industry have an ethical obligation to protect their users and the general public. In an effort to minimize potential damage, I believe OpenAI would have benefited from an internal launch of ChatGPT to work out some of these recognizable challenges before bringing it to market. By using a selected group of test users, the company could have better gauged the tool’s limitations and established stronger guardrails to protect consumers. Until the software addresses many of the challenges it faces, I recommend holding off on creating an account while you weigh the risks.