Social Media Censorship – Is Big Tech Taking Away Your Rights?

Written collaboratively by Christopher McHattie and anonymous (for fear of retribution)

The recent unfortunate events in Washington DC, culminating in protests that turned violent, have elicited unprecedented reactions from all corners of our society and around the world. Among the most extraordinary responses is the decision by social media outlets, hosting services, and technology platform providers – Twitter, Facebook, Apple, Amazon, YouTube, Google, and many other Silicon Valley tech companies – to deny those involved access to social media.

Some cheer, and others decry, these actions. Regardless of one’s political persuasion, the decision to ban access should concern all citizens as an exercise of market power by lightly regulated companies that increasingly dominate the media landscape and exercise outsized control over the current “marketplace of ideas.” Of particular importance is their status as private sector actors who facilitate access to the most widely used forums for public discourse and debate, yet (as more citizens are realizing every day) are unbound by the constitutional constraints of the First Amendment and, as a result of legislation, largely above and outside of the law, as discussed below.

Regulation of these entities resides primarily in Section 230 of the Communications Decency Act, which provides internet service providers legal immunity from liability for content posted on their platforms (in much the same way that a telephone service provider has immunity as a “common carrier” for what is said on a phone “line”). Section 230 provides: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That immunity is buttressed by additional “immunities” under the Electronic Communications Privacy Act (“ECPA”) and the Stored Communications Act (“SCA”)[1], which social media platforms have used quite effectively to erect a “Rubik’s Cube” of regulations and immunities that puts them out of reach of all but the most well-heeled antagonists, including law enforcement authorities with their stretched budgets.

First, let’s get politics out of the way. This article is not about riots, Trump, Democrats or Republicans – political persuasions should not interfere with a dispassionate examination of the issues. In fact, the events of the past weeks and months offer a unique opportunity to observe first-hand the implications of our current social media regulatory framework.  Whether you agree or disagree that social media posts by the President and others precipitated the events that followed is irrelevant to these broader questions: Are the bans by social media companies legal, and if so, should they be?

The short answer to whether the bans are legal is undeniably yes; it is completely legal under existing law for private companies, such as Twitter, to ban individual speakers, such as the President, and deny platform support to whole user communities, like Parler (setting aside potential breach of contract, tortious interference and illegal exercise of monopoly power issues). 

But what about the First Amendment of the Constitution and freedom of speech? How can these actions be “legal” in the face of them? The First Amendment provides:

“Congress shall make no law … prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

Thus, the First Amendment actually protects the people from government action, not from actions by private parties. The “marketplace of ideas,” as Justice Douglas of the Supreme Court coined it[2], is a “free market” in which the private participants are free to shut speech down if they so choose. For the same reason that the NFL is free to prohibit its employees from protesting during a football game, Twitter is free to prohibit its users from posting offensive content. Following the rules of your employer or service provider is the price of employment or membership, no matter how arbitrarily those rules are enforced. The First Amendment prohibits laws hindering freedom of speech, not the power of private companies to control content.

So, it is “legal” under existing law. However, that is not dispositive of whether it should be.  The effect of current regulation is to deny citizens access to what are essential means of sharing ideas based upon editorial decisions made by media outlets that enjoy unprecedented immunity for the speech they facilitate or fail to facilitate.[3] This raises the question:  Are we, as a society that purports to celebrate free speech as a fundamental right, willing to accept the “commercialization” of access to speech without clear rules that regulate under what circumstances the providers of such access can be held accountable for denying it? 

Consider that these private sector actors are first and foremost commercial businesses, the stewards of some of the most highly valued assets in the world, not the least of which are their brands. It is undeniable that these entities have their own reputations to protect and will follow the desires of their consumers (advertisers), shareholders, and the other stakeholders they serve. When the majority (or at least the most vocal) of these stakeholders clamor for action, these commercial enterprises must decide how to respond based on sound business considerations.

So, what’s wrong with that? Perhaps the single greatest problem with this dynamic is that it places a great deal of power in a few sets of hands, at the expense of the populace at large, when it comes to deciding who will be allowed to contribute to the marketplace of ideas using the most prevalent and widely used means available. It has often been said that the Constitution is a “minoritarian” document, designed to protect against the “tyranny of the majority” as it inherently manifests in a political system that derives its power from the electorate (in other words, when the majority decides by its “vote” to take the property of the “minority,” the Constitution is intended to stand in the way). Are we comfortable that the majority can, by exercising its disproportionate market power as consumers of information, dictate who can access social media and what they are allowed to say?

It must be noted that, under our current jurisprudence, the extreme examples of fomenting rebellion, conspiring to overthrow the government, or, for that matter, inciting violence would all likely be subject to criminal and civil action in the appropriate state and federal forums, trial by jury and, upon judgment, the imposition of appropriate penalties and remedies. Why, then, should we allow social media outlets and technology platforms – already immune from any liability for facilitating such speech – to resort to the sort of extra-legal self-help now being taken? Have we, as a society, ceded to social media platforms a de facto role in law enforcement? Are these social media outlets effectively acting as “judge and jury” without ever having heard any evidence or allowed the accused an opportunity to defend themselves? Are social media outlets above our fundamental tenet of “innocent until proven guilty”?

In law school, I wrote an article that effectively anticipated where we are today. I argued that without a “right to hear,” the First Amendment could be rendered hollow and meaningless by private action and a complacent government structure. So, is our government complacent?

In order to encourage the dissemination of ideas, we the people, through our government, have done many things with an eye toward a more robust marketplace of ideas, and we hold that market “sacrosanct.” Why else do we allow flags to be burned and empower our government to place only the most extreme limits on speech, such as, to use the old example, prohibiting yelling “Fire” in a crowded theater?

And to that end, we enacted Section 230 of the U.S. Communications Decency Act (and the SCA and ECPA). The Communications Decency Act was a bipartisan measure written in 1996, before the major social media sites were what they are today (at the time, ISPs hosted chat rooms and bulletin boards and provided email; the internet was not what it is today). It is now thought by leaders on both sides to be outdated and too protective of the tech giant social media platforms.

A co-author of the Act explained it as a way: 

“to give up-and-coming tech companies a sword and a shield, and to foster free speech and innovation online. It was also, to a great extent based on arguments by Internet Service Providers offering website hosting, chat and e-mail services, that requiring them to exercise editorial control over their users was burdensome and even to some extent nearly impossible.[4] Essentially, [Section] 230 says that users, not the website that hosts their content, are the ones responsible for what they post, whether on Facebook or in the comments section of a news article. That’s what I call the shield. But it also gives companies a sword so that they can take down offensive content – lies and slime, the stuff that may be protected by the First Amendment but that most people do not want to experience online. And so, they are free to take down white supremacist content or flag tweets that glorify violence without fear of being sued for bias or even of having their site shut down. Section 230 gives the executive branch no leeway to do either.” 

In short, Section 230 allows social media providers to moderate, or not moderate, their platforms as they see fit without being liable for any content that they choose to leave up or take down, again barring certain very limited exceptions. While some political leaders, like President Trump, see Section 230 as legislation that needs to be removed immediately, such an approach is likely as bad as or worse than the problems it has created.

Completely repealing Section 230 would actually force tech companies to moderate the content on their sites more closely, because they would now face potential litigation if posts on their sites led to illegal activity or otherwise harmed someone. Instead, modifying Section 230 to establish a better balance between the business exigencies of social media platforms, the rights of users, and the public at large would seem to be the better course.

Where to begin with this reform? Let’s talk about what’s bad about Section 230. For example, the El Paso mass shooting in 2019 was a premeditated attack that the shooter actually posted about in advance on the website 8chan. 8chan boasts that it is a “free speech for all” site that chooses not to moderate posts except for blatantly illegal activity such as child pornography (think “Silk Road” and drugs, or early Napster for music sharing). The shooter’s posts were commented on by many, and some actually encouraged him to carry out his plan. Thanks to Section 230, 8chan could not be held liable for: (a) not taking the posts down; (b) allowing the shooter to be encouraged further; or (c) not reporting the shooter’s “manifesto,” which had also been posted. If not for Section 230, the site could have faced legal action for its “role” in the shooting.

The problems created by Section 230’s blanket immunity extend beyond the absence of consistent rules addressing inflammatory, libelous, and other objectionable speech. In our law practice, we’ve experienced the unfortunate unintended consequences of Section 230 first-hand when it comes to addressing defamation and violations of copyright law by social media users. We have on numerous occasions served subpoenas and DMCA Take Down Notices on behalf of clients in order to address copyright violations. Suffice it to say, given the immunities created by Section 230, these social media sites are not cooperative. In fact, while trying to prove the infringing conduct of a defendant who was using “private” accounts to disseminate copyright-protected materials, we served Twitter and Facebook with subpoenas demanding records of all such infringing posts. Twitter’s response: “Twitter is legally prohibited by the Stored Communications Act from producing materials from private accounts, whether or not you serve us with an otherwise proper subpoena.” So, you can’t hold them liable, and they don’t have to produce evidence of the other party’s conduct. They are, in fact, currently above the law.

While the foregoing are good examples of why to remove or amend Section 230 and the Stored Communications Act, we have to be cognizant, as noted above, that actually repealing Section 230 would likely have the opposite effect of the one desired and would dampen speech, not increase it. Allowing a “free for all” of litigation against social media companies for content on their websites would force these sites to moderate their content more (not less), essentially compelling them to infringe on the very “freedom of speech” championed by Section 230’s adversaries in order to mitigate litigation. Removing these tech companies’ protection from litigation would also make it much more difficult for new companies to enter the market: it would raise their costs (they’d need moderators and other monitoring infrastructure) and expose them to litigation expenses and other potential liability, making them much less attractive investments, while the already established “giants” (like Twitter and Facebook) would enjoy a significant first mover advantage. This would conceivably embolden the already entrenched players, as they would become even more essential speech outlets, having been relieved of competitive pressure to moderate their conduct.

Outraged reaction to the latest removal of social media posts and user access has further hardened opposition to Section 230, but at its core this opposition may be on to something more than a mere emotional reaction, something deserving of serious consideration. Section 230’s blanket immunity has in fact exempted these tech companies from suit over their selective enforcement of their own rules regarding truly damaging posts (for a blatant example, search #KillTrump on Twitter), and over their unilateral removal of posts or accounts based on their subjective (and at times clearly faulty) opinions and alleged bias.

Clearly, the apparently monopolistic model created by social media and the evident mischief allowed under Section 230 and the other Acts present a complicated and problematic conundrum with a seeming “damned if you do, damned if you don’t” feeling to it. And while getting it “exactly right” may be beyond our abilities, it is not beyond our abilities to try. Repealing Section 230 outright is the wrong move, but amendment is required.

In the end, and in our humble opinion, as with many standards under the law, if social media companies want to be treated as common carriers, their determinations of what must be removed cannot be subjective. It cannot be that #KillTrump is okay, but “To all of those who have asked, I will not be going to the Inauguration on January 20th,” is not – particularly from an important public figure like the then-sitting President. Yelling “fire” in a crowded theater is not okay; questioning whether the theater has appropriate fire exits is.

In sum, social media outlets are not above the law and cannot be allowed to be above the law. While they hold a special place in our discourse (just like traditional publishers) and as such require special rules to protect that place, they need to be required to respond to subpoenas in the same way as AT&T and the New York Times, they need to be held accountable for willfulness and negligence on their part (if they are actually aware of a gunman’s manifesto, it is willful and negligent not to report it), and they need to be protected against frivolous and harassing claims in the absence of such willfulness and negligence.


[1] Under the ECPA and SCA limitations, social media platforms “shall not knowingly divulge to any person or entity the contents of a communication while in electronic storage by that service.” 18 U.S.C. § 2702(a)(1). This language has allowed social media providers to successfully resist lawful discovery by invoking the statute to “quash” subpoenas for their customer information. See In re Facebook, Inc., which held that an otherwise proper subpoena was invalid because it sought customer communications. While some narrow exceptions apply, these platforms regularly argue for a broader interpretation of the statute and thus less disclosure.

[2] Support for competing ideas and robust debate can be found in the philosophy of John Milton, in his 1644 work Areopagitica, and of John Stuart Mill, in his 1859 book On Liberty. However, the more precise metaphor of a marketplace of ideas comes from the jurisprudence of the Supreme Court of the United States. The first reference to the “free trade in ideas” within “the competition of the market” appears in Justice Oliver Wendell Holmes Jr.’s dissent in Abrams v. United States. The actual phrase “marketplace of ideas” first appears in a concurring opinion by Justice William O. Douglas in the 1953 Supreme Court decision United States v. Rumely: “Like the publishers of newspapers, magazines, or books, this publisher bids for the minds of men in the marketplace of ideas.” The Supreme Court’s 1969 decision in Brandenburg v. Ohio enshrined the marketplace of ideas as the dominant public policy in American free speech law (that is, the baseline against which narrow exceptions to freedom of speech must be justified by specific countervailing public policies).

[3] This circumstance stands in stark contrast to the regulation of the editorial behavior of traditional media companies under our well-developed jurisprudence of libel law and the like. At the same time, common carriers, such as phone companies and internet service providers, enjoy similar immunity precisely because they are unable to monitor, and have no interest in monitoring, the speech occurring over their lines.

[4] “Social media” as we know it today was nearly non-existent – perhaps most closely resembling internet bulletin boards, where asynchronous, character-based messages were posted, read, and responded to.

This blog is for informational purposes only. It does not constitute legal or financial advice and may not be relied upon as such. If you face a legal issue, you should consult a qualified attorney for independent legal advice with regard to your particular set of facts. This blog may constitute attorney advertising. This blog is not intended to communicate with anyone in a state or other jurisdiction where such a blog may fail to comply with all laws and ethical rules of that state or jurisdiction.
