For years, Section 230 was thought of as an internet bill. Originally passed in 1996, it was a way to guard against the purposeful spread of unsavory content on the internet. The law could never have contemplated the rise of social media platforms at a time when the most exciting thing you could do with your cell phone was play a game of Snake. Now, Section 230 is at the heart of a political and economic tornado that has swept up freedom of speech, accountability, data security, political bias, and even sex trafficking. Yet the truth about Section 230 is that it has more to do with maintaining political influence and exploiting personal data than it does with your personal rights.

Section 230 has come storming back into the news cycle, with President Trump vowing to veto the defense bill if it does not include language that would roll Section 230 back or repeal it altogether.

Section 230 is designed to protect internet companies from being liable for any falsities, filth, or illegal content posted on their platforms. It also protects them from being sued for moderating that content. In other words, private tech companies like Twitter, Facebook, and YouTube have the legal right to censor posts they deem to be in violation of their standards.

This is legally viable for two reasons:

First, these companies are deemed to be distributors of content, not publishers. This small distinction holds a lot of power. By law, a publisher can be held liable for anything it publishes. This is why libel, slander, and defamation suits exist. If individuals or companies feel they have been misrepresented by a publisher, and that misrepresentation adversely affects them or their business, they can sue. But a distributor of content cannot be held to the same standards. Thus, the argument is that Twitter et al. are not responsible for the content their users post and therefore cannot be held legally liable for it.

Second, the tech giants are also private companies. As private companies, they have the right to enforce their own standards on their platforms. Therefore, they are well within their rights to remove — or moderate — content that is in violation of those standards. Under Section 230, distributors cannot be sued for infringing on the freedom of speech of censored content creators. 

Section 230 was essentially designed to counter what experts call a perverse incentive. Prior to Section 230, it was less risky for a website to let anything get posted, because once you moderate content, you are making editorial decisions, which makes you a publisher and opens you up to defamation litigation. Section 230 was an attempt to — if you’ll permit me — un-disincentivize sites from moderating filthy, false, or illegal content like hate speech.

Several years ago, Section 230 got some carveouts. And by all estimations, they were needed. In 2018, a pair of bills called FOSTA-SESTA (the House bill was FOSTA, the Fight Online Sex Trafficking Act, and the Senate bill, SESTA, the Stop Enabling Sex Traffickers Act) were signed into law by President Trump. The bills made it so publishers and distributors would be responsible if third parties were found to be posting ads for prostitution — including consensual sex work — on their platforms. 

The bills were simultaneously hailed as a victory for sex trafficking victims and lambasted for gutting the internet of open forums like Craigslist. In the wake of the bills, Craigslist removed its entire “Personals” section from its platform. But removing the section was not indicative of wrongdoing having occurred. Rather, Craigslist weighed the cost of having to review and moderate every single post against the loss it would incur if something slipped through and resulted in a lawsuit. Craigslist erred on the side of caution and removed the section in its entirety.

Facebook supported FOSTA-SESTA and lobbied to get it passed. Some believe this was a way to kill the competition from smaller sites like Craigslist.

Interestingly, both President Trump and Joe Biden are in favor of removing Section 230, though, unsurprisingly, for entirely different reasons.

President Trump is waving the flag of free speech. See, under Section 230, the private tech companies — which are denoted as distributors, NOT publishers — are legally permitted to remove users or content that violates their standards and policies. In the months leading up to the general election, the Trump camp accused Twitter of censoring his tweets and, in so doing, of exposing the company’s political bias. According to Twitter, the flagging and removal of tweets made by the president and his administration was an attempt to quell the spread of misinformation. This became especially important given the claims of massive voter fraud emanating from the White House, which have yet to be substantiated in any court.

President Trump and his team want to remove the Section 230 protections so that they can pursue legal action against Twitter and others for what they see as undue censorship — censorship they claim is biased against the administration.

Last year, Republican Senator Josh Hawley of Missouri introduced legislation that sought to roll back Section 230’s protections. One of its provisions was to remove protections for companies that exhibit a “political bias” when moderating content on their platforms. Senator Hawley’s proposal was to have the Federal Trade Commission certify the moderation approach employed by big tech companies every two years to ensure a neutral approach. 

Joe Biden, on the other hand, sees rolling back Section 230 as a way to increase the accountability we place on digital distributors for disseminating false, filthy, or illegal content. The Democrats believe the big tech trio — Facebook, Google, and Twitter — should be liable for the content their users post and share. The goal, it seems, is to curb the spread of misinformation to a massive audience of content consumers. The problem? Who’s to say what is information and what is misinformation? 

The simple answer is that the companies themselves would be responsible for identifying what is true and what is false; what is appropriate, and what is not fit for consumption. But social media companies have billions of users posting content 24 hours a day. According to information circulated in January of this year, there are 6,000 tweets a second. That’s over 500 million tweets every day. According to information from 2019, 95 million photos and videos are posted to Instagram every day. Facebook reports 2.6 billion active users. 
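The article’s arithmetic holds up: at 6,000 tweets per second, a day’s worth of tweets does indeed exceed 500 million. A quick back-of-the-envelope check:

```python
# Sanity check on the scale claim: 6,000 tweets per second,
# extrapolated over a full 24-hour day.
tweets_per_second = 6_000
seconds_per_day = 60 * 60 * 24  # 86,400 seconds in a day

tweets_per_day = tweets_per_second * seconds_per_day
print(f"{tweets_per_day:,}")  # 518,400,000 -- over 500 million per day
```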

With all of this user-generated content inundating the platforms, it would take an outsized effort to review, flag, judge, and moderate it all. Only the largest companies would have the resources to develop an infrastructure of that magnitude. Even then, it’s likely that things would slip through the cracks. In those instances, only the largest companies would have the capability to defend themselves in the event of a lawsuit or the capital to erase such a lawsuit with a settlement. Remember the Craigslist “Personals” section?  

These two opposing forces suggest the presence of two essential problems with the big tech world in which we live. First, people are always going to post filth, falsities, and content that toes the line of appropriateness, and with the morass of user-generated content, it is impossible to police it all. Second, how do you ensure that the moderation of inappropriate content doesn’t become censorship? Doesn’t our free speech hang in the balance when the avenues of discussion and collaboration are barricaded by the private standards of private companies?

It’s a convoluted and complex issue. But it’s not about free speech or protecting our children from being preyed upon. It’s about money and influence. 

This is part I of a multi-part series on data, media, and politics. Part II will publish tomorrow.