Identity Crisis: Deepfakes and the Battle for Image and Likeness

The impact of Artificial Intelligence on Intellectual Property law has been an ongoing debate, especially with respect to inventorship and authorship. But with increasing numbers of deepfakes circulating on the internet, AI may also affect another important area of IP law: the right of publicity. A better understanding of deepfakes and their dangers will clarify how far the right of publicity can go in protecting individuals whose names and likenesses are appropriated.


What are Deepfakes?

A deepfake is an artificial image or video generated by a type of AI that uses a special kind of “deep” machine learning. Often, a deepfake realistically depicts someone saying or doing something that never actually happened. As AI becomes more advanced, deepfakes are becoming increasingly realistic, to the point where it can be impossible to tell whether a deepfake is real.

For example, a video recently circulated of Mark Zuckerberg trashing Elon Musk’s new AI platform, Grok. It was entirely fake. The video illustrates both how realistic deepfakes have become and how they can be used in negative and harmful ways.

Dangers of Deepfakes

Anyone with the technology to generate a deepfake can use it to solicit information or spread misinformation, leading to a range of harms for individuals and the public. Deepfakes can impersonate bank officials or corporate executives to solicit personal information or issue fraudulent transfer instructions. They can cause reputational harm if the public believes an individual made comments or engaged in activities that were completely fabricated. They could also facilitate defamation and other criminal and civil wrongs.

Additionally, public trust may erode as it becomes harder to discern what is real and what is fake. After New Hampshire voters received a fake audio call purporting to be from President Joe Biden, political leaders grew concerned that the prevalence of deepfakes could affect the upcoming election. Although the Biden call targeted a national race, the larger concern involves local elections, where such misinformation is harder to combat because fewer news outlets exist to verify whether a deepfake is authentic. This could lead more voters to believe a deepfake is genuine.

Considering all of these potential harms, can the right of publicity provide some protection?


Deepfakes and Right of Publicity

The right of publicity is an Intellectual Property right available to all people. It protects against the misappropriation of a person’s name, image, and likeness for commercial benefit, and it allows individuals to license their own identity for commercial purposes if they so choose. As such, it prevents unauthorized commercial use of a person’s identity. While there is no federal right of publicity, most states recognize the right.

Deepfakes could violate this right. Non-consensual deepfakes plainly misappropriate an individual’s image and likeness; they are often created without the individual’s permission or even knowledge. But most deepfakes today, and thus their accompanying harms, have no commercial use. If a person’s image and likeness are misappropriated but not for commercial benefit, the right of publicity provides no protection. Newly proposed legislation, however, could federally regulate AI-generated deepfakes.

NO FAKES Act

The proposed NO FAKES Act would create a right, lasting up to 70 years after an individual’s death, to authorize the use of that individual’s name, image, and likeness in any digital replica. Unlike the current right of publicity, this right would not require proof that the digital replica was used for commercial benefit. Instead, under the NO FAKES Act, the unauthorized use of a digital replica, or the publication of one with knowledge that it was unauthorized, is sufficient for a cause of action. The Act also specifies what does not suffice as a defense, such as a disclaimer stating that the replica was not authorized. In short, the NO FAKES Act would create a federal right where one currently does not exist.

Because the Act is broad in nature, it raises additional concerns. For example, it prescribes a subjective standard for what constitutes a “digital replica.” Some critics argue that this subjectivity could lead individuals to claim that anything that looks and sounds like them is a digital replica, even if it is not an identical deepfake.

Regardless of these concerns, the NO FAKES Act is a significant step toward a broader federal right protecting name, image, and likeness. As deep machine learning advances, the need to proactively protect individuals grows alongside evolving technologies and social norms.

Elizabeth Schrieber
Associate Blogger
Loyola University Chicago School of Law, J.D. 2025