ELVIS Act and FCC Ruling: Steps Toward Comprehensive AI Regulation in the United States


Rachel Kosmos

Associate Editor

Loyola University Chicago School of Law, JD 2025

 

ELVIS Act: safeguarding musicians from AI voice replication 

On March 21st of this year, Tennessee Governor Bill Lee signed into law the Ensuring Likeness Voice and Image Security Act, aptly known as the ELVIS Act. ELVIS is a “first-in-the-nation bill” with bipartisan support that “aims to protect musicians from artificial intelligence by adding penalties for copying a performer’s ‘voice’ without permission.” A staunch supporter of the new law is country music star Luke Bryan, who underscored the necessity of such measures by revealing instances in which he could not distinguish his own voice from AI-generated vocals. At least eighteen states, now including Tennessee, have enacted laws regulating the use of AI. 

 

Contrasting AI regulation landscapes in the U.S. and Europe

At the federal level, there is currently no comprehensive law regulating AI. There are, however, state laws and regulations that address specific aspects of AI, such as privacy, security, and anti-discrimination. In terms of AI regulation, the U.S. remains far behind much of the world. Europe, by contrast, is making major strides. Approved in 2024, Europe’s AI Act is the “first comprehensive regulation on AI by a major regulator anywhere.” The AI Act classifies AI systems according to the level of risk they pose. AI systems that pose an ‘unacceptable risk’ to people, such as real-time facial recognition, will be banned, while AI systems that pose a ‘high risk’ to people, such as those affecting safety or fundamental rights, will be continuously assessed. As for the U.S., policy experts say, we are “…only at the beginning of what is likely to be a long and difficult path toward the creation of AI rules.”

 

Voluntary safeguards and FCC action

In 2023, the Biden administration issued a “landmark” Executive Order that aims to regulate AI systems in the U.S. The Executive Order, however, has been heavily criticized for its limited reach. While the Executive Order is able to regulate how the federal government utilizes AI, it cannot place limits on the private sector. Notably, however, after holding meetings with Biden, seven of the leading AI companies, including Amazon and Google, agreed to comply with voluntary AI safeguards. As reported, “the agreements include testing products for security risks and using watermarks to make sure consumers can spot AI-generated material.” These voluntary safeguards represent an “early, tentative” solution to the growing need for comprehensive AI regulation. While AI proves beneficial in numerous scenarios, it can also pose a significant threat. 

In recognition of these threats, the Federal Communications Commission (FCC) has made groundbreaking regulatory changes aimed at producing tangible results. In February of this year, the FCC ruled that voice cloning technology, which is commonly used in robocall scams that target consumers, is illegal. The ruling expands the Telephone Consumer Protection Act’s (TCPA) prohibition on calls to residential phones that use “any automatic telephone dialing system or an artificial or prerecorded voice” to now cover AI-generated voices. Its passage comes in response to a steady influx of AI-generated calls in which scammers impersonate the voices of “celebrities, political candidates, and close family members.” This January, for instance, the New Hampshire Attorney General’s office began investigating claims of telephone calls that used an AI-generated version of Biden’s voice to discourage voters from going to the polls during the state’s primary election. According to FCC Chairwoman Jessica Rosenworcel, “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.” Thus far, 26 State Attorneys General, more than half of the nation’s AGs, have written to the FCC in support of the ruling. 


U.S. urgency to meet European standards

The United States urgently needs to adopt comprehensive AI regulation similar to that of the EU. Notably, the term “Brussels Effect” was coined by a Columbia University law professor to describe the phenomenon of EU rules often evolving into global standards. Recognizing the immense influence wielded by AI, one can only hope this holds true here. AI has the power not only to produce deepfake explicit photos of celebrities but also to compromise our political elections and national security. Biden described deepfakes as using “AI-generated audio and video to smear reputations, spread fake news, and commit fraud.” Thus, it is imperative for U.S. regulatory measures to mitigate these risks and safeguard our society from the inevitable consequences of unchecked AI.