Curbing Censorship: The Constitutional Challenges of Addressing Social Media Moderation

Kate Rice

Associate Editor

Loyola University Chicago School of Law, JD 2026

At a time when so many people rely on online spaces for news, connection, and the exchange of ideas, the balance between free speech and content moderation is more important than ever. In recent months, concerns have risen over potential government censorship and the proliferation of misinformation, especially on social media. The lack of transparency in the tech industry makes this issue uniquely tricky, as each platform’s algorithms are largely proprietary. Yet many users feel that their voices are being silenced based on the nature of the content they post. The possibilities for remedying these concerns are limited, as the First Amendment protects private companies from government censorship, including requirements that they host specific content. Still, there are several potential paths forward that could have far-reaching implications for the future of social media content moderation.

Allegations of censorship

TikTok users in the United States have reported noticeable signs of censorship on the app, particularly following its brief ban in January. Citing national security concerns over the Chinese-owned platform, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA) to restrict TikTok’s operation in the U.S., raising First Amendment questions about the government’s ability to control and limit digital platforms. Whether bans such as these constitute government censorship is still up for debate. Some TikTok users and other content creators argue that PAFACA unlawfully attempts to control and limit their content. Others, particularly government officials, are more concerned about foreign adversaries interfering in U.S. public discourse.

After President Trump restored TikTok’s availability via executive order, many users flagged that comments and tags containing certain words or phrases, such as “Free Palestine” or “Free Luigi,” had been hidden. Shadowbanning, also known as “algorithmic suppression,” is one of the primary ways platforms silence specific creators and content, excluding their posts from other users’ feeds. However, because the algorithms behind social media feeds are not public, it is nearly impossible to prove that such suppression is occurring.

TikTok is not the only platform facing this problem; Instagram, Facebook, and X are among the others that have faced user claims of censorship. These accusations come at a time when hate speech is on the rise, complicating the situation even further. Most social media platforms have community guidelines that users must agree to when creating an account, such as restrictions on content promoting violence or hate. Because it is unclear exactly how content is assessed, there is ample room for these guidelines to be abused, depending on how broadly they are interpreted and who is making those calls.

Enforcement

The Federal Trade Commission (FTC) is the agency primarily responsible for regulating online platforms. On February 20, 2025, the FTC used its investigative authority to release a Request for Information (RFI) entitled “Request for Public Comment Regarding Technology Platform Censorship.” The RFI calls for technology platform users and employees to provide input on platforms’ alleged practices of demonetizing and shadowbanning users because of their speech or affiliations. It argues that these practices can violate platforms’ terms of service, may be anticompetitive, and can effectively result in censorship or violate laws against deceptive business practices. Alongside the release of the RFI, FTC Chairman Andrew N. Ferguson said, “Tech firms should not be bullying their users . . . This inquiry will help the FTC better understand how these firms may have violated the law by silencing and intimidating Americans for speaking their minds.” The FTC is expected to use this input to shape future rulemaking and enforcement actions on the matter.

The FTC does have some power to act in these situations. If the agency identifies fraud, deception, or unfair business practices, it can enforce federal consumer protection laws to remedy the violations. If a platform is not transparent about its content moderation policies, that lack of transparency may constitute deception, which the FTC could use to justify enforcement action.

However, tech firms are likely to aggressively defend their expressive rights against FTC intervention. The First Amendment provides that Congress “shall make no law … abridging the freedom of speech, or of the press.” If the FTC does act on this issue, companies will undoubtedly lean on this language to argue that a government agency cannot directly regulate content or, in effect, require that a platform host certain content, regardless of materiality.

Further, in 2024, the Supreme Court clarified in Moody v. NetChoice that social media platforms possess the same editorial rights under the First Amendment that newspapers, magazines, and other forms of media enjoy. Justice Elena Kagan wrote, “The editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit’s view, protected expressive activity . . . [platforms] include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before.”

It is also important to note that on February 18, 2025, President Trump signed an executive order requiring that all significant regulatory actions be submitted to the Office of Information and Regulatory Affairs (OIRA) for review, strengthening presidential oversight of federal agencies, including independent agencies like the FTC. The implementation of this order will likely affect how the FTC approaches this issue and what it can realistically achieve.

An uncertain path forward

While users demand transparency and fairness in social media content moderation, platforms must weigh those demands against the need to combat misinformation, hate speech, and other harmful or offensive content. Similarly, lawmakers, courts, and agencies must balance the competing priorities of preventing undue censorship and ensuring that platforms operate fairly and transparently. The Supreme Court’s recognition of social media platforms’ editorial rights complicates these efforts further, underscoring the constitutional challenges of government intervention in this area.

Current moderation and censorship practices increasingly verge on outright suppression, which is particularly concerning as online spaces continue to shape public discourse. When certain content is restricted, so is the viewpoint behind it; this becomes especially problematic when particular voices are repeatedly silenced because of their perspective or stance. The result has serious implications for content creators and consumers alike, producing a less pluralistic digital landscape in which users must be skeptical of the information they are fed.

Moreover, with increased executive oversight through OIRA, the FTC will likely face additional scrutiny in its enforcement efforts. With so many unknowns, aggressive public pushback will be crucial to holding the many decision-makers and stakeholders accountable as they navigate this increasingly complex and consequential issue.