Regulating Deepfakes: Strengthening the Fight Against Deceptive Media

Pete Haas
Associate Editor
Loyola University Chicago School of Law, JD 2025

In response to the growing threat of deepfake technology, two significant pieces of legislation have emerged: California’s 2024 Deepfake Deception Defense Act and the DEEPFAKES Accountability Act. A deepfake is a digitally altered video or image that uses artificial intelligence to make it appear as though someone is saying or doing something they never actually did. These laws aim to curb the spread of deceptive synthetic media and ensure transparency and accountability in digital content. Both Acts represent progress, but stronger deepfake regulation requires real-time monitoring, a centralized registry, and a dedicated regulatory body.

The 2024 Deepfake Deception Defense Act

California’s 2024 Deepfake Deception Defense Act focuses on protecting the integrity of elections by targeting materially deceptive content. Large online platforms are required to block the posting of such content related to elections during specified periods before and after an election. Additionally, platforms must label certain content as inauthentic, fake, or false within those same periods. To support this, the Act mandates that platforms develop procedures for users to report content that has not been appropriately blocked or labeled. In cases of noncompliance, the Act authorizes candidates, elected officials, election officials, and legal authorities to seek injunctive relief against platforms. However, the Act exempts broadcasting stations and regularly published online newspapers, magazines, or periodicals that meet specific requirements, as well as content that is clearly satire or parody.

The DEEPFAKES Accountability Act

The DEEPFAKES Accountability Act aims to combat disinformation by establishing transparency and accountability measures for deepfake content. For instance, the Act requires that deepfake content incorporate technologies that clearly identify it as altered or generated by AI. The U.S. Government Accountability Office lists digital watermarks, metadata, and blockchain among the authentication technologies that can help detect deepfakes, and Meta has incorporated AI-labeling tools that tag manipulated media when it is detected. Additionally, deepfakes must carry verbal and written statements identifying them as altered. The Act establishes penalties for failing to comply with these disclosure requirements, with particular emphasis on deepfakes depicting sexual content, criminal conduct, or foreign interference in elections. It also creates new criminal offenses for producing deepfakes that do not comply with the disclosure requirements and for altering deepfakes to remove or obscure disclosures. Furthermore, the Act allows individuals to bring civil actions for damages caused by noncompliant deepfakes. Finally, it directs the Department of Justice to publish a report on deepfakes and their impact on elections and to establish a task force to combat national security threats posed by deepfakes.
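
To make the disclosure requirement concrete, the following is a minimal Python sketch of how a metadata-based disclosure record might be created and verified for a piece of media. The record format, field names, and helper functions are illustrative assumptions, not the schema of the Act or of any actual provenance standard such as C2PA; a real system would also cryptographically sign the record.

    import hashlib
    import json

    def make_disclosure_record(media_bytes, generator="example-model"):
        """Build an illustrative metadata record marking content as AI-generated.

        The field names here are hypothetical; real provenance standards
        define their own schemas and signing requirements.
        """
        return {
            "ai_generated": True,  # the explicit disclosure flag
            "generator": generator,  # tool that produced the media
            "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds record to content
        }

    def verify_disclosure(media_bytes, record):
        """Check that a disclosure record matches the media it claims to describe."""
        return record.get("sha256") == hashlib.sha256(media_bytes).hexdigest()

    # Usage: attach the record as sidecar metadata, then re-check it on upload.
    media = b"...synthetic video bytes..."
    record = make_disclosure_record(media)
    print(json.dumps(record, indent=2))
    print("record matches content:", verify_disclosure(media, record))

Because the record is bound to a hash of the exact bytes, re-encoding the media breaks the link; this is one reason production systems pair metadata with embedded watermarks rather than relying on either alone.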

Shortcomings of current legislation

While these laws are significant steps toward regulating deepfake technology, they have shortcomings that must be addressed to provide more comprehensive protection for victims and subjects of deepfakes. One major shortcoming is the lack of real-time monitoring and enforcement mechanisms. Both Acts focus on post-creation detection and labeling, but neither provides a robust system for identifying and flagging deepfakes as they are uploaded or shared on platforms. This delay allows harmful content to spread before it is detected and removed.

Another issue is insufficient support for victims. While the Acts provide legal recourse and penalties for noncompliance, they offer no specific mechanisms for victims to prevent further propagation of their likeness or to address the harms deepfakes cause. Victims often face significant challenges in having their likeness removed from platforms and in seeking compensation for damages.

Proposals for enhanced deepfake regulation

There are several opportunities to bolster these laws:

  1. Mandatory real-time monitoring: Require platforms to implement real-time monitoring systems that detect and flag deepfakes as they are uploaded or shared. This would involve integrating AI-powered detection algorithms capable of analyzing content in real time, and platforms would need to update these systems regularly to keep pace with the evolving sophistication of deepfake technology. Real-time monitoring would help stop harmful content before it reaches a wide audience, minimizing potential damage (a minimal sketch of such an upload hook follows this list).
  2. Centralized deepfake registry: While many solutions focus on using AI to detect AI-generated deepfakes, there is no clear indication of what, where, or how information is compiled and used once detection occurs. This gap suggests the need for a centralized list, comparable to the Entity List maintained by the Bureau of Industry and Security (BIS), to ensure accountability and effective management of detected deepfakes. Such a list would tie individuals and organizations to malicious deepfakes, providing a clear record of those responsible for creating and distributing harmful synthetic media (see the registry sketch following this list).
  3. Dedicated deepfake task force or authority: Establishing a dedicated regulatory body to oversee the identification and removal of deepfakes is crucial to managing this growing threat effectively. This Deepfake Detection and Takedown Authority (DDTA) would have the legal power to order the removal of verified deepfakes from platforms and would provide much-needed support for victims. The DDTA would maintain a centralized database of verified deepfakes, which platforms could use to cross-check content and ensure compliance (illustrated in the registry sketch following this list). This centralized approach is akin to the Payment Card Industry Data Security Standard (PCI DSS), which has significantly reduced payment card fraud by providing a unified framework for security standards. Similarly, the DDTA would review and authenticate reports of deepfake content, ensuring accuracy and accountability in the detection and removal process. Centralizing these efforts would enhance the effectiveness of deepfake regulation and build public confidence in digital media.
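
As a concrete illustration of the real-time monitoring proposed in item 1, here is a minimal Python sketch of a pre-publication upload hook. The classifier stub, thresholds, and status labels are hypothetical placeholders; a real platform would call a continuously retrained detection model and route borderline scores to human review.

    # Hypothetical deepfake classifier; a real system would invoke a regularly
    # retrained model here rather than returning a stub value.
    def deepfake_score(media_bytes) -> float:
        return 0.97  # placeholder score for illustration

    BLOCK_THRESHOLD = 0.95  # assumed policy thresholds, not statutory numbers
    LABEL_THRESHOLD = 0.80

    def on_upload(media_bytes) -> str:
        """Illustrative moderation hook run before content is published."""
        score = deepfake_score(media_bytes)
        if score >= BLOCK_THRESHOLD:
            return "blocked"  # withheld pending human review
        if score >= LABEL_THRESHOLD:
            return "published-with-label"  # shown with an inauthenticity label
        return "published"

    print(on_upload(b"...uploaded media..."))  # -> "blocked" with the stub score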
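
Likewise, for the registry and cross-checking described in items 2 and 3, the sketch below shows one way a DDTA-style registry entry could tie a responsible party to a verified deepfake and let platforms cross-check uploads by content hash. The data model and function names are assumptions for illustration, not a proposed statutory design.

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class RegistryEntry:
        """One record in a hypothetical DDTA-style registry of verified deepfakes."""
        sha256: str  # fingerprint of the verified deepfake
        responsible_party: str  # individual or organization tied to the content
        takedown_ordered: bool

    # In practice this would be an authenticated central service, not a dict.
    REGISTRY: dict[str, RegistryEntry] = {}

    def register_deepfake(media_bytes, responsible_party):
        """DDTA-side: record a verified deepfake and who is responsible for it."""
        digest = hashlib.sha256(media_bytes).hexdigest()
        REGISTRY[digest] = RegistryEntry(digest, responsible_party, takedown_ordered=True)

    def platform_crosscheck(media_bytes):
        """Platform-side: has this exact content been verified as a deepfake?"""
        return REGISTRY.get(hashlib.sha256(media_bytes).hexdigest())

    register_deepfake(b"...verified deepfake...", "Example Org")
    print(platform_crosscheck(b"...verified deepfake..."))  # matching entry
    print(platform_crosscheck(b"...benign upload..."))  # None

Exact-hash matching catches only byte-identical copies; a deployed registry would likely also store perceptual hashes so that re-encoded or lightly edited copies still match.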

The necessity of enhanced deepfake regulation

Implementing these proposals is crucial for several reasons. Firstly, real-time monitoring and mandatory reporting mechanisms would provide an immediate response to the spread of harmful deepfakes, mitigating their impact before they can cause significant damage. This proactive approach is essential in the fast-paced digital environment where content can go viral within minutes.

Secondly, a dedicated regulatory body like the DDTA would ensure consistent and effective enforcement of deepfake regulations. By having the authority to order the removal of harmful content and maintaining a centralized database of verified deepfakes, the DDTA would streamline the process of identifying and addressing deepfakes, providing victims with quicker and more efficient recourse.

Lastly, these enhanced regulations would provide much-needed support for victims of deepfakes. By offering legal assistance and mental health resources, they would give victims the support needed to navigate the challenges deepfakes pose. And by preventing further propagation of a victim’s likeness, the regulations would help protect individuals’ reputations and well-being.

Incorporating these enhancements would create a more robust framework for addressing the challenges posed by deepfakes and for ensuring the integrity of digital content. This approach would not only help identify deepfakes but also give victims mechanisms to prevent further propagation of their likeness and to address the harms deepfakes cause.