Big Tech & Its Algorithms: Is It Time to Hold Them Accountable?

Kristen Salas Mationg

Associate Editor

Loyola University Chicago School of Law, JD 2024

It’s no secret that companies like Google, Alphabet, Meta, and Twitter use and sell our data. However, in recent years, the content that companies display to us while we use their platforms, from the ads we see to the websites we find on search engines, has become a major point of contention. While Section 230 of the 1996 Communications Decency Act shields Big Tech and other online platforms from liability for user-generated content, the Supreme Court recently announced that it will hear Gonzalez v. Google. The outcome of this case could have a huge impact on tech policy and could fundamentally change the type of content that we see online.

Section 230 of the 1996 Communications Decency Act and what it means

Big Tech companies and other online platforms utilize algorithms to push content to consumers. However, partially because of Section 230 of the 1996 Communications Decency Act, these companies are held virtually unaccountable for whatever the algorithms show consumers.

Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, Section 230 provides two critical protections to these companies: it shields companies from civil actions arising from illegal content posted by a website’s users, and it allows companies to curate, edit, and delete user content as they see fit. These two protections have shaped the Internet as we know it today because tech companies can push certain content to consumers without being held liable. As long as reasonable measures are taken to address illegal content on their platforms, companies are not accountable for that material.

Gonzalez v. Google and how it could affect Section 230

Gonzalez v. Google is a landmark case that arose from Nohemi Gonzalez’s tragic killing in an ISIS attack in Paris in 2015. Gonzalez’s estate brought suit against Google for allowing ISIS to post violent, radicalizing videos aimed at recruiting potential supporters on YouTube, one of Google’s subsidiaries. Specifically, Gonzalez’s estate argues that Google actively promoted this content through its algorithms, which push content to the people Google thinks would be interested in it. While the Ninth Circuit Court of Appeals upheld a lower court’s decision in favor of Google, the Supreme Court granted certiorari this month.

Ultimately, Gonzalez’s family believes that Google should be held responsible for failing to remove, and potentially even promoting, extremist views on its platform. While Section 230 generally shields internet platforms from liability for content posted by others, it may be time to limit that scope of protection. During this term, the Court will consider whether Section 230 protects internet platforms when their algorithms target users and recommend someone else’s content.

Potential implications of the Gonzalez v. Google decision for Big Tech

The Supreme Court’s pending decision in Gonzalez v. Google could have huge implications for Big Tech companies and other online platforms. If the Court decides that Section 230 does not apply to algorithm-generated recommendations, companies would be exposed to major legal risk for the algorithms that they rely on to push content to consumers. As of now, lower courts have ruled that Section 230 does apply to algorithm-generated recommendations. There is even a chance that the Supreme Court defers the decision to the Federal Communications Commission, the agency responsible for administering most of the 1996 Communications Decency Act.

The Court’s decision could force Big Tech to exercise far greater caution about any content that it shows users or risk being held liable. Alternatively, the Court’s ruling could limit Big Tech’s ability to control what consumers see, leading to the potential spread of misinformation. Whether or not the Supreme Court decides that Section 230 applies to algorithm-generated recommendations, companies will need to understand how the content they push fits into, and how to comply with, whatever framework the Court announces.

Modernizing Section 230 and effect on regulatory agencies

Section 230 has created the Internet as we know it today. However, while the Supreme Court’s decision may affect the applicability of Section 230, both Democrats and Republicans have also called for changes that would allow Big Tech to be held accountable for the type of content that it promotes. For instance, the bipartisan PACT Act was introduced in 2021 by Senators Brian Schatz and John Thune. While this Act has not passed through Congress, it seeks to protect consumers by allowing federal regulators, like the Department of Justice and the Federal Trade Commission, to pursue civil actions against online platforms under Section 230, allowing state attorneys general to enforce federal civil laws against online platforms, and requiring research into an FTC-administered whistleblower program for employees or contractors of online platforms.

Although one of the major purposes of Section 230 was to protect freedom of expression and innovation on the Internet, online providers have done more than just display content to consumers – they now rely on algorithms to actively promote it. It may be time for the protections that Section 230 provides to align with the Internet as it is used today. Whether any changes to the applicability of Section 230 come from the Supreme Court or from Congress, it appears that regulatory agencies will be empowered to pursue civil actions against online platforms that do not comply with the law.