Loyola University Chicago School of Law, JD 2023
In November of last year, OpenAI launched ChatGPT, an AI chatbot that engages users in dialogue to answer questions, respond to prompts, and interact with them. Google quickly responded to the advancement with its own chatbot, Bard, which Google claims will draw "on information from the web to provide fresh, high-quality responses." AI has quickly embedded itself into many everyday activities. Additionally, in light of recent mass layoffs, experts predict that AI could displace tens of millions of jobs in the United States in the coming years. And with the new AI-powered chatbots, AI has moved beyond simply replacing people to assisting them with tasks. As with all emerging technology, the public may worry about how to regulate something so intrusive and yet so powerful and helpful to society. Given the vast amount of knowledge AI can provide in seconds, Congress must catch up to the emerging technology and create regulations for AI that respect intellectual property and copyright laws and address how AI deepens racial and gender disparities in the United States.
Issues with AI
Over the years, AI has quickly become more prevalent as new platforms and systems emerge. As a result, attention is shifting to how governments around the world can regulate AI and ensure compliance with ethical standards and presiding regulatory bodies. Harvard Business Review compares the recent rise of AI to the past decade's concerns about technology and access to personal data. Those concerns led to the passage of laws and measures worldwide regulating data access and consumer data. Now, people are waiting to see what will come of regulating AI.
As with past advancements in technology and access to personal data, lawmakers remain hesitant to stifle innovation. They have therefore not fully stepped in to regulate up-and-coming companies and technologies, such as Google, Amazon, Facebook, and Twitter. As a result, Congress's inaction has left AI widely unregulated across all sectors, with no uniform way to oversee these systems and ensure compliance with other regulatory laws. Experts worry that Congress will fail to step in in time because of its hesitation to interfere with innovation. In fact, the last time Congress stepped in to regulate technology was in 1998, through the Children's Online Privacy Protection Act.
Surprisingly, even the creators of ChatGPT think it needs to be regulated. In light of ChatGPT's monumental launch, Mira Murati, the chief technology officer at OpenAI, worries that AI may be misused if left unregulated. While promoting the new chatbot, she has called for AI companies to have the freedom to bring their products to the general public, but she still urges government regulation of and involvement in the new technology. Experts agree that the fast rollout of AI products, coupled with Congress's limited knowledge and hesitation to step in, could result in unprecedented risks.
The risks of unregulated AI are already visible today in intrusive uses of these systems, such as deepfakes and facial recognition technology that could worsen racial disparities in policing. For example, in January of this year, a Georgia man was misidentified through facial recognition as a fugitive and arrested for a theft in Louisiana, a state he had never set foot in. The AI system used by investigators racially profiled him, leading to a false arrest and yet another example of racial disparity in the country's police forces. This time, however, the racial profiling was conducted by an AI, not a person.
Concerns about the abuse of AI have already surfaced in litigation. In June of 2022, Microsoft released Copilot, an AI tool that can generate its own computer code. As innovative as this was, a group of programmers quickly filed a class-action suit against Microsoft in November of 2022, claiming that the data used to train the AI did not belong to Microsoft. Copilot relied on code posted to the internet to generate new code, and the suit claims this is a form of piracy because Microsoft did not credit the existing work.
Earlier this year, a group of artists in California also sued AI companies (Stability AI Ltd., Midjourney Inc., and DeviantArt) for copyright infringement, alleging that the companies built AIs that used the artists' visual styles without their permission. Unsurprisingly, this lawsuit was brought by the same team of lawyers who sued Microsoft over the Copilot coding program late last year. These lawsuits against large AI companies were filed by artists, writers, programmers, and other creators concerned about AI systems taking their work without consent, credit, or compensation.
Congress must act to regulate AI
AI can be useful and even fun, but it can also have daunting effects, such as when it is used to discriminate against minorities and women. Especially for AIs that perform essential tasks, such as screening for disease or evaluating job candidates, regulation is needed to ensure compliance with anti-discrimination laws and to ensure that AI works for everyone, not just some. Some companies have tested each new AI algorithm introduced into their business across stakeholders to ensure the AI aligns with company values, raises no regulatory concerns, and upholds commitments to diversity and privacy. Other companies have appointed leaders responsible for overseeing AI compliance.
However, the United States should take a hands-on approach and regulate AI before the consequences become overwhelming and more damaging. The individual companies that create or promote AI should not be the ones responsible for its ethical and regulatory compliance. Congress has acknowledged that AI is new and uncharted territory that lawmakers do not widely understand. Despite this, representatives have begun to discuss how AI can be regulated to prevent detrimental effects. These conversations should continue and expand into concrete legislation and regulations for AI.