How Much Should Social Media Companies Monitor Content, Really?

By Madeline Kopp

The tension between limiting objectionable content and the First Amendment's guarantee of free speech is not new, but it has become a continuous battle in the digital age. Much of this tension is amplified by the still-growing use of social media, by more people, on more platforms, in more ways.[1] Numerous problems relating to the censorship of speech on social media platforms have emerged, such as how users should navigate rules that can result in the swift removal of comments or accounts that supposedly violate vague Community Guidelines.[2] While most people probably think first of these individual user rights, the social media platforms themselves also find themselves in legal battles over how they monitor the content posted on their platforms.

Many people and organizations support greater regulation of social media platforms' content moderation policies, especially because of moderation's role in combatting hate speech, racism, and misinformation.[3] However, because such regulation comes in the form of government legislation, it raises the question of when protective content moderation becomes forced participation in a state-controlled narrative. That question is currently being addressed in a lawsuit brought by Elon Musk and X Corp., formerly known as Twitter, against California.[4]

The law that X now attacks was passed only in 2022, yet it is already a prominent player in content moderation and the subject of litigation.[5] The California law, known as Assembly Bill 587, requires social media companies to share information about their current moderation policies, including their terms of service governing what users are allowed to do and say on their platforms.[6] The Bill also requires these companies to submit a semi-annual report detailing how they currently define "hate speech," "racism," "disinformation/misinformation," "harassment," and "foreign political interference."[7] Additionally, because social media companies monitor this content and remove material that violates their policies, the Bill requires them to disclose details about the content they have flagged.[8]

X claims that Bill 587's actual purpose is to "pressure social media companies into eliminating content the state found objectionable."[9] Ultimately, X contends that the Bill violates social media companies' free speech rights under both the state and federal Constitutions.[10] While legislators maintain that this law and others like it merely encourage transparency among social media companies, X Corp. alleges that it is a means of censoring what the companies allow on their websites.[11] As content moderation policies have expanded over the last few years, this lawsuit, regardless of its outcome, is likely to play a role in defining the proper line between censorship and free speech.

[1] Between April 2022 and April 2023, social media gained 150 million new users. See Annabelle Nyst, 134 Social Media Statistics You Need to Know for 2023, Search Engine J. (July 14, 2023).

[2] See Kyle Langvardt, Regulating Online Content Moderation, 106 Geo. L.J. 1353, 1355–56 (2018).

[3] See Big Social Media Companies Can't Be Trusted to Regulate Themselves. It's Time for Real Transparency, ADL (May 23, 2023).

[4] See Jonathan Stempel, Elon Musk's X Corp sues California to undo content moderation law, Reuters (Sept. 8, 2023, 7:18 PM).

[5] See Minds, Inc. v. Bonta, No. 2:23-cv-02705, 2023 U.S. Dist. LEXIS 146729, at *1 (C.D. Cal. Aug. 18, 2023).

[6] Id. at *2–3.

[7] Id.

[8] Id.

[9] Stempel, supra note 4.

[10] Eric He, Elon Musk's X sues California over content moderation law, POLITICO (Sept. 8, 2023, 7:11 PM).

[11] Id.