Sarah Ryan
Associate Editor
Loyola University Chicago School of Law, JD 2022
Twitter made the news once again yesterday after removing a tweet by Dr. Scott Atlas, one of President Trump’s top White House coronavirus advisors. The tweet, which questioned the effectiveness of wearing masks in combating the virus, reportedly violated the platform’s policy on misleading information relating to COVID-19.
This comes just days after Twitter was criticized for “limiting sharing” of a New York Post article because the article exposed private information (read: personal email addresses) and contained material obtained through hacking.
Allegations that big tech companies are guilty of “censoring” information on their social media platforms are far from new. A Pew Research Center survey conducted in 2018 found that 72% of the public believed social media platforms actively censored political views, and this year’s version of the same survey produced roughly the same results.
Even the President has waged war against Twitter. His criticisms of Twitter for “silencing conservative viewpoints” escalated to threats of “shutdowns,” or at least heavy regulation, after the site added a fact-check warning to tweets claiming, without evidence, that “mail-in ballots are fraudulent.” Not long after, Trump signed an executive order attempting to punish social media companies.
Section 230 of the Communications Decency Act
Trump’s executive order was aimed at Section 230 of the Communications Decency Act, which has been deemed “one of the most valuable tools for protecting freedom of expression and innovation on the Internet.” Section 230 sets out rules governing interactive computer services, a category that today includes social media companies like Twitter and Facebook, and essentially any online service that publishes third-party content.
One authority Section 230 grants social media sites is the ability to regulate the content that appears on their platforms. The statute provides that,
“No… interactive computer service… shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
This provision gives social media companies expansive authority to regulate speech on their platforms, granting them legal immunity for “good faith” efforts to remove content they perceive to be objectionable.
But what about freedom of speech?
Among other individualistic values, Americans cherish the First Amendment’s guarantee of free speech. Through decades of case law, the Supreme Court of the United States has worked to define what speech the First Amendment protects and what it does not.
Social media sites generally fall outside the First Amendment’s reach because they are owned by private companies rather than the government. In a way, Section 230 was intended to preserve freedom of expression on the internet by shielding companies from liability, stating specifically that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision protects sites that publish third-party content, like Twitter, from being held liable for most of the content their users produce. Through this qualification, online intermediaries avoid being regulated as publishers and distance themselves from the content that users generate and post on their platforms.
On the other hand, the “good faith” provision of Section 230 protects social media sites against claims that the First Amendment entitles users to post whatever they want without having it taken down. In the past, sites like Twitter and Facebook have refrained from heavy regulation of speech on their platforms, often relying on users to report questionable content and then flagging or removing anything that violates their rules and policies.
Because these platforms have become forums for virtual conversations on topics ranging from politics to science, some public discourse is effectively subject to private regulations that often diverge from legal understandings of free speech. For example, both Facebook and Twitter have their own policies banning what they classify as hate speech, which the First Amendment does not actually prohibit.
With the upcoming election and ongoing global pandemic heightening the spread of misinformation, some social media sites have adjusted their community norms and policies in response. Twitter flagged a tweet from the President himself because it violated the site’s updated rules against “spreading misleading and potentially harmful information relating to COVID-19.”
Looking forward
The future of Section 230 is still up in the air. Politicians on both sides of the spectrum have called for its revision, but for very different reasons. Democrats, including presidential nominee Joe Biden, want tech companies to be held more accountable for extremist content, hate speech, and misinformation. Republicans, on the other hand, want online platforms to be held accountable for “unlawfully censoring speech” and “knowingly facilitating egregious criminal activity online.” It is unclear what changes the rest of 2020 will bring.