Analysis The US Justice Department has emitted its proposals for changes in Section 230 of the Communications Decency Act – the magic shield that, with a few caveats, protects websites from being held legally responsible for their users’ comments, posts, and other content.
The law is a cornerstone of the internet as we know it today: while it allows the likes of Facebook and Twitter to not be treated as publishers of information, and thus largely avoid any repercussions as a result of stuff shared through their platforms, it mostly lets millions of netizens just get on with freely communicating with each other. Within limits, sites can remove certain unwholesome content without being reclassified as publishers and losing their cloak of protection.
In an announcement on Wednesday, Attorney General William Barr said the rules give internet giants too much leeway, in that sites can avoid lawsuits as non-publishers and heavily moderate content. “For too long Section 230 has provided a shield for online platforms to operate with impunity,” he said. “We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”
In a summary of the proposed changes, first floated in June, the DoJ again flagged the issue of censorship of views: “The current interpretations of Section 230 have enabled online platforms to hide behind the immunity to censor lawful speech in bad faith and is inconsistent with their own terms of service.”
President Trump signed an executive order seeking to curb the law's protections, which led to a widely mocked request that the FCC review Section 230 (a review that has, inevitably, become a farce), and then pushed the Department of Justice to pressure Congress to amend the law to stop his false information from being labelled as such. It is worth noting that only Congress can change the law.
“We’re here today to defend free speech from one of the greatest dangers it has faced in American history,” Trump opined back in May when talking about what he felt were necessary changes.
Fortunately, despite hyperbole from politicians and the Attorney General, the actual DoJ proposals [PDF] do not add the kind of language that could inject political judgement into what online platforms decide to do with user content, unlike several legislative proposals put forward by lawmakers.
There are several significant proposed changes, however.
On this issue of apparent censorship, the DoJ’s proposal would add four criteria to Section 230‘s legal shield for platforms that moderate netizens’ content: the websites would have to publicly state the criteria they use; moderation would have to follow those criteria; anyone whose content is affected by moderation would have to be informed and told why it was being moderated, and they would have to be given a chance to respond (unless that impacted a legal investigation); and, perhaps most significantly, “a provider must not base its decisions on pretextual or deceptive grounds or treat content inconsistently with similarly situated material that it intentionally declines to restrict.”
That last one is the much-desired requirement for politically and objectively neutral moderation that does not currently exist within the Communications Decency Act. While it would likely require significant case law to define in real terms – how do you decide whether one tweet is “similarly situated” to another, for example? – it is not an unreasonable approach, and is far from what some Republicans have been calling for based on the questionable claim that conservative voices are being censored.
Or, in other words, the Department of Justice is maintaining some level of political independence and professionalism in proposing changes to legislation – though it should be noted that such restraint is quite rare in the first place.
The most problematic issue may come in the effort to narrow what is currently a broad standard governing what kind of content platforms can delete while still retaining their legal shield.
The current definition is material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Under the proposal, that “otherwise objectionable” would be removed and replaced with four additional categories: promoting terrorism, promoting violent extremism, promoting self-harm, and anything unlawful.
Content moderation would have to be made “in good faith and based on an objectively reasonable belief that the materials fall within the enumerated categories.”
The question then becomes: what would happen if President Trump again posted false information about the risk of fraud with mail-in voting, or again threatened “serious force” against protesters? Would Twitter be in a position to add a content warning to it as before?
It’s hard to see which category could cover the mail-in voting tweet. The “serious force” tweet could be placed under the new “unlawful” category, or the “harassing” category, though today’s President would undoubtedly argue that anything he says cannot be “unlawful” through the sheer act of being President.
Requiring an online platform to identify the grounds on which content has been moderated opens the door to legal review and precedent. Which, to be fair, may be no bad thing given the current aggressive state of online interactions. And then there is also the question: would adding a warning to content, without changing the content itself, even be considered “moderation” under law?
There are two other main changes. The first is the inclusion of a “Bad Samaritan” clause – a painful distortion of language and legal terminology that, again, will not actually appear in the law itself – which may serve a useful function by withdrawing legal protections from a platform that is “continuing to host known criminal content on its services, despite repeated pleas from victims to take action.”
And then there is a carve-out for federal civil enforcement actions to cover online crime including child sexual abuse, terrorism, and cyberstalking. There is nothing to stop the federal government from investigating these crimes today, but the additional language within Section 230 would remove any legal ambiguity.
So, in summary: the DoJ’s proposals are not bad, and certainly don’t attempt to write political hyperbole or Trumpian nonsense into the law. But it is highly debatable whether limiting content moderation to a specific set of criteria is a good idea, and if Congress does take up the DoJ’s proposals, hopefully wiser heads will prevail and add an additional broader category to both provide flexibility and future-proof Section 230. ®