Section 230: Should We Get Rid of It?

Title V of the Telecommunications Act of 1996, known as the Communications Decency Act, contains the famous Section 230(c)(1), which consists of the 26 words that created the internet:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

You can see the full text of the Telecommunications Act of 1996 here (Section 230(c)(1) is on page 101). The reason this is in the news lately is that cases before the Supreme Court of the United States (SCOTUS) may decide whether Section 230(c)(1) ought to be upheld or struck down (namely, Gonzalez v. Google, LLC and Taamneh v. Twitter, Inc.).

The following videos discuss the SCOTUS cases concerning Section 230(c)(1).

The short version is that Section 230(c)(1) defines websites as platforms rather than publishers. This may seem like a mere semantic difference, but it is important. A platform is not responsible for the content it hosts, whereas a publisher can be held liable for that content. In other words, if I want to, say, post defamatory content on my Twitter feed, Twitter is not legally responsible for that content, i.e., Twitter cannot be sued for libel if I am the one posting libelous content on Twitter.

The cases in front of SCOTUS allege that the algorithms these companies use to promote content mean that the companies are not neutral platforms, but are in fact publishers who promote certain content. The companies say this is not true, since the content being promoted is based on people’s use of the platform: if you engage with content of a certain type, the algorithm will promote that content to you.
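To make the companies’ claim concrete, here is a minimal Python sketch of engagement-driven promotion. It is my own toy construction, not any actual platform’s algorithm; the class, method names, and topic labels are all hypothetical. The recommender starts as a blank slate and ranks candidate content purely by the topics the user has engaged with before.

```python
from collections import Counter

class EngagementRecommender:
    """Toy engagement-driven recommender: it starts with no preferences
    and ranks content purely by the topics a user has engaged with."""

    def __init__(self):
        # Blank slate: no topic is favored until the user acts.
        self.topic_engagement = Counter()

    def record_engagement(self, topic):
        # Each click/like/watch nudges future rankings toward this topic.
        self.topic_engagement[topic] += 1

    def recommend(self, candidates, k=3):
        # Rank (title, topic) pairs by the user's past engagement with
        # each topic; topics the user has never touched score zero.
        ranked = sorted(candidates,
                        key=lambda item: self.topic_engagement[item[1]],
                        reverse=True)
        return [title for title, _topic in ranked[:k]]

# A user who mostly watches cooking videos...
rec = EngagementRecommender()
for _ in range(5):
    rec.record_engagement("cooking")
rec.record_engagement("politics")

feed = [("Knife skills 101", "cooking"),
        ("Election recap", "politics"),
        ("Sourdough basics", "cooking"),
        ("Celebrity gossip", "entertainment")]

# ...gets cooking videos promoted to the top of the feed.
print(rec.recommend(feed))
# ['Knife skills 101', 'Sourdough basics', 'Election recap']
```

The point of the sketch is that the ranking is entirely a function of the user’s own behavior, which is precisely the companies’ defense.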

Whether websites are platforms or publishers is an interesting question and one I’ve touched on in the past. But I’m not going to make a case one way or the other here. What I’m interested in is whether it would be a good or bad thing if Section 230(c)(1) were struck down. The detriments seem fairly obvious. As the above videos point out, striking it down would likely result in one of three things in online spaces: anarchy, authoritarianism, or simply giving up.

In the first case, websites and social media will not regulate anything lest they be accused of promoting some content over other content (thereby endorsing the promoted content). There will no longer be moderation of content, nor algorithms suggesting content to us. A website like YouTube, for example, would be more like a database of videos. If you want to find something, you will have to sift through the billions of videos available, which will include all of the garbage (e.g., pornography) that YouTube does not take down for fear of being accused of promoting some content over another (which would be construed as endorsing some views over others).

In the second case, websites will lean into their position as publishers. As such, only approved content will be allowed; everything else will be blocked or swiftly removed. Only content that conforms to some transparent mission statement set out by the company hosting the site will be permitted. For instance, if YouTube decides to adopt a Woke stance, then any video that does not promote it (or, at least, any video critical of it) will not be permitted, lest YouTube be accused of endorsing some other, non-Woke position.

In the third case, websites may just give up and shut down. The other two approaches may seem either too odious or too onerous, and therefore not worth the time and effort.

Of course, these three approaches to a post-Section 230(c)(1) internet are what we predict right now, in the current Section 230(c)(1) regime. It may be that some other alternative crops up. Humans are good at finding loopholes, workarounds, and novel solutions. Surely the authors of the Telecommunications Act of 1996 could not have predicted the impact that Section 230(c)(1) would have on the way the internet developed over the nearly three decades since. Just like machine learning algorithms finding novel or exploitative solutions to satisfy their objective functions, humans have a tendency to behave in ways we cannot anticipate beforehand.
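As a toy illustration of that machine-learning point (again, my own invented example with made-up weights, not anything drawn from the Act or the cases): an optimizer handed a proxy objective will happily exploit it in ways its designer never intended.

```python
# Toy specification-gaming example: the designer's real goal is
# "surface good content", but the proxy objective rewards predicted
# engagement, which (by assumption here) correlates far more with
# outrage than with quality.

videos = [
    {"title": "Calm gardening tutorial",   "quality": 0.9, "outrage": 0.1},
    {"title": "Balanced news explainer",   "quality": 0.8, "outrage": 0.3},
    {"title": "YOU WON'T BELIEVE THIS!!!", "quality": 0.2, "outrage": 0.9},
]

def predicted_engagement(video):
    # Hypothetical proxy objective: the weights are invented for
    # illustration and deliberately favor outrage over quality.
    return 0.2 * video["quality"] + 0.8 * video["outrage"]

# The "novel solution" the optimizer finds: promote the outrage bait.
best = max(videos, key=predicted_engagement)
print(best["title"])  # -> YOU WON'T BELIEVE THIS!!!
```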

There is, of course, a status quo bias when discussing Section 230(c)(1). People want the internet to remain the way it is because it is what they are used to. Yet everyone seems to agree that the internet is terrible. It is a cesspool of trolls, misinformation, outrage, mob cancelling, extremism, conspiracy theories, and so on. It leads to depression, anxiety, and loneliness. So, my question is: why should people want to preserve the status quo?

Don’t get me wrong, my own status quo bias is screaming at me that striking down Section 230(c)(1) would be disastrous. The way the internet currently functions is familiar and comfortable to me. I have content I regularly consume and I don’t want to see that change or go away. Yet there is a part of me that would be excited for the new post-Section 230(c)(1) internet. A part of me that says that maybe 10, 20, or 30 years from now we would all be thankful that it was struck down, that our new internet, with its novel approaches to content curation and consumption, is better than what we have with Section 230(c)(1). That the world is a better place for it, that there is less misinformation, extremism, siloing, incentivized outrage, pile-ons, and so on.

A common theme for me on this blog is that we humans have not evolved for the world we’ve created for ourselves. The vast, globalized realm of the internet, where people are exposed to dozens of remote outrages on a daily basis, where anonymity brings out our worst impulses, and where we confine ourselves to echo chambers that appeal to our legion of biases, is just one aspect of the world we’ve erected that our primitive ape brains routinely fail to navigate rationally. Likely the post-Section 230(c)(1) internet would have its own incompatibilities with the mental architecture with which evolution has furnished humankind, yet we’ll never know whether it would be better or worse if Section 230(c)(1) remains in effect.

In the end, I am torn on the issue. I understand the potential downsides should we jettison Section 230(c)(1). My status quo bias for the current regime is powerful. There is a sense in which it works, and even more of a sense in which it just makes sense: in what way can we say that every post or video on a website is endorsed by the website hosting it? Is that not unlike saying that the phone company endorses everything a person says over its network? It’s not a perfect one-to-one comparison, since phone companies don’t have algorithms that promote certain phone conversations over others. Yet the recommendation algorithm (at least ideally) begins as a sort of tabula rasa and only learns to promote things based on a person’s past engagement with content (not to mention the practical utility of such algorithms for curating our internet experience).

I suppose, upon reflection, where I ultimately come down is this: should Section 230(c)(1) be struck down, I will be excited to see how the internet evolves from there, for good or ill. It might even be the kick in the ass society needs to get out of the rut of the internet’s well-known toxicity. We might, of course, find ourselves in an even more malignant environment post-Section 230(c)(1), but my pessimism does not make me a conservative on the issue. Indeed, in my view, the way the internet is right now is terrible for people and for society as a whole, and it will not get better. We can be sure that maintaining the status quo will continue the rapid deterioration of social cohesion and epistemic unity. Thus, it is perhaps better to take the leap into the void on the off chance we’ll land somewhere less terrible. Do we really have that much to lose?