Why you should care about Section 230

Section 230 opponents want to punish social media sites for offending their Dear Leader and are willing to burn down online culture as we know it to do so.

Section 230

There is a lot of quality writing on Section 230 already.

Here are some curated resources to get you up to speed:

tl;dr

Section 230 protects websites that allow users to post content.

Section 230 covers not just social media sites but also forums, Discord servers, listservs, and many other kinds of online communities.

Section 230 protects those websites from civil liability, in other words from lawsuits, for hosting content that was created and posted by users.

User-made content includes tweets, comments on a news article, blog posts, videos on YouTube, recordings on SoundCloud, art on DeviantArt, and newsgroup posts.

Section 230 was adopted as part of the Communications Decency Act in 1996, when most user-made content lived in chatrooms or newsgroup posts.

Many of the first Section 230 cases involved AOL and other dial-up providers.

Originally, tech-illiterate courts held that online services could be sued for newsgroup posts published by their users, as if the company's CEO had written the posts himself (most famously in Stratton Oakmont v. Prodigy in 1995).

Congress realized this would be an obstacle to the adoption of the web and created Section 230.

The courts have since gradually applied Section 230 to new online platforms as they have evolved.

Section 230 has allowed social media and online content culture to thrive.

Here is a 2:18 explainer video:

How does Section 230 work?

Say an activist publishes a video on YouTube of a politician saying something controversial at a public event. The local media pick it up and then it goes national. The politician holds a press conference, denies the controversial statement, and threatens to sue YouTube for hosting the video.

Under Section 230, the politician is barred from suing YouTube for hosting the video and dragging Google through expensive legal proceedings over it. Section 230 protects YouTube from responsibility for the content the activist posted. It is the activist's video, after all.

You might not ever have thought about it this way because this idea is so baked into our online content culture: of course YouTube isn't responsible for the user's content. That assumption is inherent in Section 230, and it is what enables the modern web to work.

If YouTube could be sued for videos you post as if Sundar Pichai had posted them himself, Google might have to verify your identity before you could open an account, preventing anonymous whistleblower accounts. Google might also require human review of every new video before it went live, which may not even be humanly possible given the sheer volume of videos posted to YouTube.

For what it is worth, the politician would also have a hard time suing the activist for posting the video, thanks to New York Times Co. v. Sullivan and the anti-SLAPP statutes in several states. These intersecting liability shields, Section 230 and anti-SLAPP statutes, promote minority voices, help expose government corruption, and foster healthy dissent online.

Section 230 is not a shield for breaking every law, though. Under the DMCA, for example, YouTube must remove videos that are reported as infringing copyright. Section 230 is not a defense to posting true threats on YouTube, using your website to sell illegal drugs, or posting discriminatory housing ads on Craigslist. Section 230 is not the carte blanche legal pass for big tech that its opponents portray it as.

Section 230 also allows websites to moderate that content however they want. It allowed, for example, Verizon to ruin Tumblr by purging adult content (RIP Tumblr) and the curators at hell site Gab to promote the dangerous right-wing apocalyptic cult QAnon.

Section 230 allows you to start a community website and set any rules you want for what your members can post on your site. If members think your rule enforcement is arbitrary or overbearing, they will leave, and your subscription or ad revenue will dry up. If a member posts something defamatory, Section 230 shields you, the website owner, from the resulting lawsuit; if a member infringes a third party's copyright, the DMCA shields you provided you promptly remove the material when alerted. It is a good system that has worked well for 24 years. It allows communities to form spontaneously and self-govern online, for everyone from trans youth to Battlestar Galactica fans.

Section 230 has an awkward side, too, that can't be ignored. If someone posts non-consensual porn of a former sexual partner to PornHub and it gets 300,000 views there, PornHub's parent company is not on the hook provided it takes the video down promptly when notified. Though, unlike the activist posting video of a politician, the victim of non-consensual porn has some recourse directly against the poster.

My dumb opinion on all this

Section 230 catches a lot of flak, mostly from the right but also some from the left. It has also gotten caught up, unfortunately, in the current fuss in DC to 'do something' about big tech and foreign influence on elections, both of which have become moral panics rather than sane, rational policy discussions.

Many on the left, including Democratic presidential nominee Joe Biden, have called for revisions to Section 230 to combat hate speech and disinformation. Laudable in theory, but challenging to carry out. In practice, such regulations tend to sweep in valid dissent and could be open to manipulation by future administrations.

The prime reason to not let the government meddle with online speech regulations is on full display right now:

Repealing Section 230 is all the rage on the right, spurred on most recently by Twitter's clumsy handling of a New York Post report.

Many on the right (but not all) are focused on repealing Section 230 for two reasons:

  1. President Trump and his allies want to be able to frivolously sue Twitter for letting people say mean things about them and drag Twitter into expensive, time-consuming lawsuits, even ones they know they won't win. He already does this to newspapers; it's his thing.
  2. They don't like Twitter's and Facebook's content enforcement policies. They believe the removal of misinformation, hate speech, and conspiracy theories is politically motivated censorship directed against them. They want to make social media websites so hard to operate, or so unbearably awful to use, that no one uses them. They want to burn it all down.

This surge on the right has prompted an executive order and an FCC rulemaking proceeding to "clarify" Section 230. By "clarification" they mean that the Trump administration will "re-interpret" Section 230, bypassing Congress, which wrote it, and the courts, which have been interpreting it well enough since 1997. This will be halted by the courts based on constitutional limits on administrative law, but that won't stop them from trying and throwing the web into chaos in the interim.

The hypocrisy

The right wants to tell Twitter how it can moderate its own site, on its own servers, which is its private property and business, all because Twitter labeled a tabloid article about Biden's son as misleading. The right had really hoped that article would be an 'October surprise' for Trump.

They want the government to compel Twitter to endorse their political viewpoints, which would itself violate Twitter's First Amendment right to free speech.

If Twitter doesn't play ball with the Trump administration, the right plans to revoke Section 230 and unleash a torrent of frivolous class actions when anti-vax and nutball conspiracy stuff gets removed.

That is it. That is their play.

It is worse than the obnoxious lefty kid shouting down a conservative speaker at their university. The President wants to burn down the metaphorical auditorium (and all the cool stuff that goes on in there) because someone dared to question conspiracy theories.

It is possible that the right is correct, and Facebook and Twitter are censoring conservative speech. It may also be that the right indulges in more conspiracy theories, misinformation, and hate speech than the left, and that such content naturally gets removed more often. Go figure.

Either way, when you threaten the entire system by repealing Section 230, you lose all credibility with me and I stop caring about your personal claims. Sorry.

To be fair, Twitter and Facebook do seem to fuck up their enforcement a lot, which doesn't always help their case. I would prefer that Facebook and Twitter get better at removing hate speech and disinformation because the market demands it, not because the government regulates them. But this doesn't just affect the big social networks; it affects millions of small communities online.

What would happen?

Repealing 230, depending on how it is done, could go a few ways:

Some tech companies would want to allow user-generated content but, faced with liability for everything posted by users, would simply ban all comments and other user-generated content. Their websites would become entirely static. This may be a viable option for your local newspaper, but not for social media sites like Facebook. It would reduce online social interaction and further isolate people in their ideological bubbles.

Some tech companies would want to allow user-generated content and moderate it. They would then have to find ways to mitigate the massive liability for everything posted by users, as if they had said it themselves. This would require even more aggressive moderation than happens now, which might be impossible at scale for sites like YouTube. Videos and retweets would require human approval before posting. Anonymous accounts would be prohibited, requiring users to verify their identity with government documents to sign up. The cost of all this additional legal compliance would be passed on to users. If a company cannot scale moderation or cover the costs, it will go out of business.

Some tech companies would want to allow comments but not the liability that comes with curation, so they would turn off all moderation. This would allow anyone to post anything, including spam, hate speech, conspiracy theories, and harassment: a complete free-for-all on the platform. Curation tools like lists and blocking on Twitter would have to be removed. This would make the site miserable to use.

Section 230 opponents realize all of these potential outcomes. They just want to punish social media sites in any way they can for insulting their Dear Leader and are willing to burn down the web as we know it to do so. They don't really care if websites go out of business or face frivolous class-action lawsuits (funny, because they usually aren't friends of the trial lawyer bar).

It is the same strain of nihilism that brought us Trump in the first place:

Sometimes opponents of Section 230 inject "publisher" vs. "platform" nonsense, suggesting that if Twitter adds a "misleading" label to one of Trump's conspiracy-theory tweets, Twitter suddenly loses all Section 230 protection. It doesn't. Twitter could be held liable for statements it makes in the label itself, but it can add the label. Courts examine these cases individually, but the presumption is that Section 230 still applies.

Source

Trump's own legal team has invoked Section 230 protections, for Trump personally, when he has added his own editorial content to an allegedly defamatory tweet. This is based on a very liberal reading of Section 230 from California state and federal courts, which hold that the Section 230 liability shield applies not just to websites but to individual users as well. If Section 230 were repealed, it would protect neither, and Trump would have to face trial over his defamatory retweets:

Source

Note: The paragraph below refers to the effort to modify Section 230 through the FCC rulemaking process. As of December 23, 2020, the Trump administration is now vetoing routine defense spending legislation for not containing a repeal of Section 230.

Ironically, the right seems intent on repealing Section 230 through an unelected, unaccountable bureaucratic agency, the FCC, and, in one proposal, on expanding federal power to let the FTC decide whether platforms deserve Section 230 protection. This is the exact same kind of thing they railed against the Obama administration for.

Remember when conservatives supported free enterprise and property rights, opposed executive branch overreach, and made fun of SJWs demanding safe spaces? Ah, 2015.

It is also ironic that the team that wants to shield gun makers and dealers from legal liability for mass shootings suddenly wants to impose legal liability on social media companies for...hurting their feelings. On a site no one is forced to use and that, indeed, a vast majority of Americans do not even use.

Leave it the fuck alone

Messing with Section 230 is dumb. Proposals from the left and the right to 'tweak' Section 230 are bad ideas. The current move on the right to repeal Section 230 is particularly dangerous. The 2018 changes made to Section 230 to fight sex trafficking (FOSTA-SESTA), another moral panic, have had the opposite effect.

I opposed reclassifying internet access under Title II because it opened the door to Congress regulating the web, which was just as bad an idea. I oppose any change to Section 230 for the same reason. Leave it the fuck alone.

New regulations on social media are likely to impose compliance costs that small web communities could not afford, which would only further entrench existing major players like Facebook. That would reinforce existing big-tech monopoly power, not fight it. It could harm anonymous speech by dissidents and politically oppressed groups. And it would do nothing about foreign interference in our political discourse.

Trying to 'rein in' big tech by opening it up to frivolous lawsuits is particularly dumb. It will also not fix the problem of cancel culture or hate speech online. It will fuck up the unique balance of the law that has enabled wide-ranging online civic participation.

I worry about big tech too. I don't think Facebook should be allowed to own Instagram and WhatsApp, but the proper remedy for that is to rekindle basic antitrust enforcement in the US, not to let Trump sue Twitter for labeling his tweets as misleading.