Click, Share, Mislead: Why Social Media Needs Regulation

Carolyn Nsimpasi

Associate Editor

Loyola University Chicago School of Law, JD 2026

Social media platforms have transformed the way people communicate, access news, and share information. Social networks such as Facebook, Instagram, TikTok, and X allow information to spread instantly to millions of users around the world. While this connectivity has countless benefits, it has also made it easier for misinformation and false narratives to circulate widely, often faster than accurate information. The spread of misleading content can influence public opinion, undermine trust in government institutions, and affect important events such as elections and public health responses. As a result, social media platforms should be fairly and reasonably regulated to reduce misinformation while still protecting freedom of expression.

Why social media regulation is needed

One of the main arguments for regulating social media platforms is the potential harm caused by the rapid spread of misinformation. False or misleading information can influence public understanding of critical issues such as health, science, and politics. For example, during the global spread of COVID-19, misleading claims about treatments, vaccines, and the severity of the virus circulated widely across various platforms. The misinformation contributed to public confusion, undermined trust in health authorities, and hindered efforts to effectively control the pandemic. Without stronger oversight, these platforms can unintentionally amplify harmful content through algorithms designed to maximize engagement. Regulation could require companies to take greater responsibility for identifying and limiting the spread of demonstrably false information. For instance, on January 23, 2021, X introduced Birdwatch, now known as Community Notes, to moderate misleading content posted throughout its platform. When X users spot a misleading or untruthful post, they can write a Note to provide the missing context or correct the information altogether. X evaluates the submission, and other users on the platform rate the Note as helpful, somewhat helpful, or unhelpful. Community Notes is one mechanism used to curb the spread of misinformation; however, given how vast X is, it is not enough.

Despite its potential benefits, Community Notes has significant limitations. Because X relies on users to identify and correct misleading posts, many false claims may remain visible for long periods before a Note is added, if one is added at all. Moreover, the effectiveness of Community Notes depends on active participation from a large number of users who are willing to review and rate Notes. If participation is low or users disagree about what counts as misleading information, inaccurate content can continue spreading widely. Misinformation also tends to spread faster than corrections, meaning that by the time a Note is added, thousands or even millions of users may have already seen or shared the original post. These challenges highlight why relying solely on voluntary or platform-based tools is insufficient. Instead, stricter enforcement, clearer regulations, and stronger accountability measures may be necessary to ensure that social media companies take more effective action against the spread of harmful misinformation.

Furthermore, there are other benefits to increasing the regulation of content on social media, such as mitigating online harassment; protecting users, especially children and teenagers, from exploitation and mental health risks; and promoting healthier online habits. With billions of social media users worldwide, it is imperative that clear and enforceable standards are put in place to ensure platforms act responsibly and prioritize user well-being. As social media use continues to grow, allowing little to no regulation is no longer tenable. Ultimately, strategic measures can contribute to a safer digital environment that benefits both individuals and society as a whole.

Who should lead the charge?

The ultimate question arises: who should be responsible for enforcing rules and regulations surrounding misinformation? Some may believe that governments should create clear legal standards and penalties for companies that fail to address misinformation. Others may argue that social media companies themselves should take the lead by improving content monitoring, increasing transparency about their algorithms, and partnering with independent fact-checking organizations. Ultimately, the debate over regulating social media and misinformation reflects a broader challenge of the digital age: how to manage powerful communication technologies responsibly. As social media continues to shape public discourse and opinions, governments, social media companies, and users must work together to develop solutions that protect both the integrity of information and the fundamental right to free speech.