Regulating the Worst Kind of AI-Generated Content

Ariez Bueno

Associate Editor

Loyola University Chicago School of Law, JD 2025

On September 5, 2023, a bipartisan coalition of all fifty state attorneys general, along with four attorneys general from U.S. territories, signed a letter to Congress. The letter urged Congress to establish an expert commission to study how artificial intelligence (AI) contributes to the exploitation of children. The attorneys general further stressed the urgency of expanding existing restrictions on child sexual abuse material (CSAM) to cover AI-generated content.

Regulating AI-generated CSAM would enable not only prosecution but, most importantly, the protection of vulnerable children. Despite rapid advances in AI, the law has yet to catch up. The need for federal regulation is evident, as the safety of our youth is on the line.

Current concerns

AI’s ability to generate disturbingly realistic images of child exploitation has prompted what can be described as a “predatory arms race” on the dark web. These images pose a growing problem for the law enforcement agencies tasked with rescuing victims, as investigators must now take extra steps to determine whether an image depicts a real child. The flood of AI-generated CSAM can delay timely interventions and rescues of real children.

Immediate U.S. federal action is required. Organizations that advocate for child safety, like Thorn, have noted that the U.S. is one of the largest producers and consumers of CSAM. The pressing question remains: what will it take to implement successful federal regulation specifically aimed at protecting children? One prosecutorial concern involves AI-generated CSAM that depicts a child who does not exist. Although this concern might seem new, it revisits legal dilemmas that have been raised before.

Past ideas resurfacing

In 2002, in Ashcroft v. Free Speech Coalition, the Supreme Court addressed concerns about computer-generated content while reviewing the Child Pornography Prevention Act of 1996 (CPPA). The CPPA was a federal law aimed, among other things, at restricting computer-generated child pornography by banning real and virtual images of CSAM alike. It would have essentially allowed for the prosecution of malicious actors even when the content did not involve real children. However, the Court struck down the challenged provisions as overbroad after a thorough First Amendment analysis. Interestingly, Justice Thomas seemed to foreshadow current issues with AI in his concurrence by noting that “technology may evolve to the point where it becomes impossible to enforce actual child pornography laws because the Government cannot prove that certain pornographic images are of real children.” Justice Thomas’s observation rings true two decades later, further emphasizing the urgency of regulating AI-generated CSAM.

State action

Some states have already begun prosecuting AI-generated CSAM without the guidance of federal legislation. In New York, a man was sentenced to six months in prison and ten years of probation for distributing sexually explicit deepfake images of more than a dozen underage women. He posted these images on pornographic websites and incited users to threaten and harass the depicted women. The lack of applicable criminal statutes prompted Nassau County District Attorney Anne T. Donnelly to propose a “Digital Manipulation Protection Act” in response to the case. The Act aims to close the loopholes that currently allow predators and other malicious actors to evade prosecution for deepfakes.

The clock is ticking

While the U.S. lacks federal regulation on AI, other countries, like China, have already implemented strict controls. Chinese authorities require AI providers, such as OpenAI (the developer of ChatGPT), to monitor for and stop the production of illegal content. AI providers must continually improve their algorithms and report illegal content to the relevant state agencies. Earlier this year, China also implemented regulations governing deep synthesis technology (“deepfakes”) that require consent from the individuals whose information is being manipulated. The regulations also make deepfakes easier to spot, as AI providers must watermark AI-generated material.

On March 23, 2023, OpenAI updated its usage policies to ban the generation of CSAM and to actively “report CSAM to the National Center for Missing and Exploited Children.” However, the penalties for violating these policies are relatively mild, amounting only to suspension or termination of an OpenAI account. That consequence seems a small price to pay for potentially irreparable harm.

This year appears to be pivotal, as other countries, states, and AI providers strengthen their regulations on AI-generated CSAM. The U.S. federal government lags behind for now, but how long will this lack of regulation continue?