Synthetic Media, Real Harm: Regulating AI-Generated Deepfakes

Carolyn Nsimpasi

Associate Editor

Loyola University Chicago School of Law, JD 2026

The rapid advancement of artificial intelligence (AI) has enabled the creation of highly realistic synthetic media, commonly known as deepfakes. These AI-generated images, videos, and audio recordings can convincingly replicate real people, making it increasingly difficult to distinguish truth from fabrication. While deepfake technology has legitimate uses in entertainment, education, and accessibility, its growing misuse presents significant social, political, and ethical risks. As a result, the regulation of AI-generated deepfakes has become an urgent necessity.

Understanding what is at stake

There are several dire concerns involving deepfakes. The first is their capacity to spread misinformation and undermine public trust. Deepfake videos can falsely depict people, including public figures, making statements or engaging in actions that never occurred. In political contexts, such content can influence elections, incite unrest, or damage diplomatic relations. For example, on January 20, 2026, Brian Shortsleeve, a Republican candidate for Massachusetts governor, posted a fake radio advertisement on Instagram that included an AI-generated voice of his opponent, incumbent Governor Maura Healey. The deepfake misrepresented Governor Healey's words in an attempt to persuade her supporters to vote her out of office.

Second, such content can cause unjustified reputational harm. In January 2024, a school principal from Baltimore, Maryland was placed on administrative leave after an offensive voice recording went viral on X. The recording featured the principal making racist and antisemitic remarks about his students and staff in a private conversation, and he subsequently received countless death threats. Following the backlash, a thorough investigation began, which led to a shocking revelation: the recording had been fabricated using AI. Ultimately, a deepfake created by a disgruntled employee nearly caused irreparable reputational harm and the loss of employment for an innocent person.

Deepfakes also threaten personal rights and bodily autonomy through non-consensual deepfake pornography. This form of abuse has disproportionately targeted women, leading to emotional trauma and professional setbacks. Such violations erode fundamental principles of consent and identity, reducing human beings to manipulable objects. In response to the proliferation of non-consensual deepfakes, legislation has been emerging across the country to prohibit their creation and provide recourse for victims.

On January 12, 2026, the U.S. Senate unanimously passed the DEFIANCE Act, a bill that allows victims of digital abuse and exploitation to sue the creators of non-consensual deepfakes. This is imperative considering the ease with which one can make a non-consensual deepfake. For example, X has an integrated chatbot known as Grok, which can swiftly generate images from user prompts. Grok's reach extends to the creation of explicit images of women or girls within seconds of a user's request, and its more disturbing outputs have included child sexual abuse material and artificial images of Muslim women with their hijabs removed. The effortless availability of deepfake tools such as Grok lowers the barrier for malicious actors: what once required advanced technical expertise can now be accomplished with easily accessible software and no training.

Laws emerging to regulate deepfakes

Regulating deepfakes does not mean stifling technological progress or limiting freedom of expression. Instead, effective regulation should focus on transparency, consent, and harm prevention. Measures such as mandatory labeling of synthetic media, clear consent requirements for the use of a person's likeness, and penalties for malicious misuse can strike a balance between innovation and protection. For example, California's Artificial Intelligence Transparency Act (SB 942) requires providers of publicly available generative AI systems with over 1,000,000 monthly visitors or users to offer an AI detection tool that allows users to determine whether an image, video, audio recording, or similar content was created or altered by AI.
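To illustrate what mandatory labeling could look like in practice, the sketch below embeds and checks a hypothetical "ai_generated" disclosure in an image's metadata. This is a toy example under stated assumptions, not the mechanism SB 942 prescribes: the field names are invented for illustration, and real provenance systems rely on standards such as C2PA Content Credentials.

```python
# Illustrative sketch only: the "ai_generated" and "provider" fields are
# hypothetical, not a format required by SB 942 or any industry standard.
from PIL import Image, PngImagePlugin

def label_as_synthetic(src_path: str, dst_path: str, provider: str) -> None:
    """Embed a hypothetical latent disclosure in a PNG's text metadata."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")  # the disclosure flag itself
    meta.add_text("provider", provider)    # who generated the content
    img.save(dst_path, pnginfo=meta)

def detect_disclosure(path: str) -> bool:
    """Return True if the hypothetical disclosure flag is present."""
    return Image.open(path).info.get("ai_generated") == "true"
```

A notable limitation, and one reason a statute like SB 942 pairs labeling requirements with detection tools, is that embedded metadata of this kind can be stripped when a file is re-saved or re-uploaded, so disclosures alone cannot guarantee that synthetic media stays labeled.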

AI-generated deepfakes represent a powerful but double-edged technological development. Left unregulated, their misuse can undermine trust in political institutions and irreparably harm people's lives. If citizens continue to lose confidence in the authenticity of digital media, the very foundation of informed democratic decision-making is threatened. By establishing clear legal and ethical frameworks, governments and institutions can harness the benefits of deepfake technology while minimizing its most dangerous consequences. Without regulation, individuals remain vulnerable to exploitation, harassment, and defamation on an unprecedented scale.