On May 28, 2020, US President Donald Trump signed an executive order, “Preventing Online Censorship”, in response to a contextual fact check on a Twitter post. The order moves against Section 230 of the Communications Decency Act, itself Title V of the Telecommunications Act of 1996. The first part of that code limits a platform’s liability for libel, while the second explicitly permits the moderation of content, consistent with the free-speech protections of the First Amendment to the United States Constitution.
While the order does not directly quash Section 230’s provisions, it is not just sound and fury signifying nothing: it takes several concrete steps against the act. Several arms of government are instructed to review, report, and propose legislative and regulatory actions. These are aimed at the second part of Section 230 and could make the moderation of content legally problematic for platforms.
What the executive order orders
Under Section 230, platforms are not to “be treated as the publisher or speaker”, preventing them from being held liable for libellous or other statements. In the context of the First Amendment to the United States Constitution, the second part of Section 230 protects platforms from liability caused by any actions taken to restrict users.
The executive order posits that moves by platforms to restrict content are not always conducted in the “good faith” described in Section 230 and therefore amount to “editorial conduct”. The order charges all executive departments with reviewing the application of Section 230(c). The Secretary of Commerce, in consultation with the Attorney General and the National Telecommunications and Information Administration (NTIA), is to file a petition for rulemaking with the Federal Communications Commission (FCC) to propose regulations clarifying the interplay between the two parts of the code, creating potential liability. The order also seeks to defund platforms, requesting that each executive department and agency review its advertising spend on online platforms that restrict free speech.
References to “viewpoint-based speech restrictions” relate to President Trump’s recurring complaint that right-wing voices are targeted. In May 2019, the White House launched a “tech bias reporting tool” to gauge this. The 16,000 responses received will now be reviewed by the Department of Justice and the Federal Trade Commission (FTC), with the FTC considering action relating to “unfair or deceptive acts or practices in or affecting commerce”. This targets the “deplatforming” of users who have been sanctioned (e.g. limits to audience reach, demonetisation) or removed for abuses. Such users may rely on platforms to generate revenue through donations, direct advertising, redirects to websites carrying advertising, or sales of merchandise. In a high-profile example, Alex Jones of Infowars was deplatformed by YouTube, Facebook, Spotify, Twitter, and Apple in 2018.
The FTC was also charged with considering “whether complaints allege violations of law” in relation to the order’s stated policies on enabling free speech—and to “consider developing a report” on said complaints.
The executive order directs the Attorney General to establish a working group with state attorneys general on the enforcement of state statutes prohibiting “deceptive acts or practices”. The group would also develop model legislation for state legislatures, and would consider the complaints gathered through the tech bias reporting tool along with additional information on user targeting, content suppression based on political alignment, reviewer bias, and demonetisation. Finally, the executive order directs the Attorney General to develop a proposal for federal legislation to promote the policies laid out in the order.
Section 230 enabled the internet economy
Although only a short entry in the Telecommunications Act, Section 230 of the Communications Decency Act is widely viewed as the key piece of enabling legislation behind the development of the current internet ecosystem. It arose after a 1995 court case found Prodigy Services liable for a statement posted by an anonymous user of its bulletin board, which was found to defame Stratton Oakmont, the company later made famous in The Wolf of Wall Street. In contrast to earlier cases, liability attached because Prodigy used content guidelines, moderators, and screening software, which was adjudged to amount to editorial control. Section 230 provided platforms with explicit protection from being held to account as publishers or speakers. Combined with the second part of the code, platforms were also protected from being sued over their actions to moderate content.
(1) TREATMENT OF PUBLISHER OR SPEAKER: No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) CIVIL LIABILITY: No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
Without such protections, social media platforms would have been forced to either accept a publisher’s liability and exert strict controls to the extent of approving each post or leave all content unmoderated and retain potentially abusive members unless they explicitly break the law.
Regulatory changes are inevitable
Although the order does not in itself change the legislated position, it does, as summarized above, seek to attack Section 230 and the platforms themselves through several branches of government. These include all 15 executive departments, from Agriculture, through Commerce, Homeland Security, Justice, State, Treasury, and Transportation, to Veterans Affairs. The FCC and FTC are counted as independent agencies but will be subject to petitions from the Secretary of Commerce (a presidential appointee, cabinet member, and head of the Department of Commerce), the Attorney General (a presidential appointee, cabinet member, and head of the Department of Justice), and the NTIA, an agency under the auspices of the Department of Commerce that advises the President on telecommunications and information policy.
The FCC is directed by five commissioners, appointed by the President to five-year terms, and is often clearly split along partisan lines. With a maximum of three commissioners permitted from one party, it currently holds three Republican commissioners. Three commissioners have released statements responding to the order. Commissioner Carr (Republican) welcomed the chance to review Section 230 and indicated alignment with the President’s position on the primacy of free speech. Commissioner Starks (Democrat) was broadly dismissive of the order, while Commissioner Rosenworcel (Democrat) stated that “an Executive Order that would turn the Federal Communications Commission into the President’s speech police is not the answer”.
The original legislation is clear, but the outcome of this order is uncertain. The order itself is limited to directing reviews, instigating petitions to regulatory bodies to propose rulemakings, and developing proposals for state and federal legislation. Given that Section 230 has been critical to enabling the dominant position of American companies in developing the internet ecosystem, each of these elements will face significant resistance in passing through the regulatory and legislative processes.
The necessarily complex changes will take time to design and implement; with the presidential election set for November, any change is unlikely to take effect in 2020. Whatever the outcome of the election, however, Section 230 is under attack from both sides. Democratic nominee Joe Biden stated in January that he believed the broad liability protections should not remain in place. Biden takes a different tack from Trump, taking most issue with the first part of the code, whose broad liability protections have allowed misinformation to proliferate. In short, Biden supports greater moderation of false or objectionable content, while Trump seeks to mandate that platforms publish content they find objectionable.
Given the scale of the problem revealed by Facebook’s Community Standards Enforcement Report, both removing moderation safeguards and enhancing liability for content could pose problems. In the first quarter of 2020 (1Q20), Facebook removed 1.7 billion fake accounts, still leaving an estimated 5% of total accounts, or around 120 million fake accounts, active. This was before a rise in fake accounts that has been connected to a new Philippine anti-terrorism law, under which critical posts on social media could lead to detention, indicating a new form of cyber “swatting”. In addition, 9.6 million pieces of content were actioned by Facebook for hate speech, generating 1.3 million appeals, of which only 63,600 resulted in content being restored. Hate speech is just one of ten categories of enforceable content, each generating millions of cases. A further 4.7 million pieces of actionable content were flagged under another category, “Organised Hate”. That content could well share much common ground with the politically protected speech that Trump’s order seeks to facilitate.
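The scale of these figures can be sanity-checked with some quick arithmetic. The sketch below uses only the numbers cited above; the implied total-account base is an inference from the 5% estimate, not a figure published in the report:

```python
# Figures cited from Facebook's 1Q20 Community Standards Enforcement Report.
fake_active = 120_000_000   # estimated fake accounts still active
fake_share = 0.05           # Facebook's ~5% estimate of total accounts

# Inferred total account base (an inference, not a reported figure).
implied_total = fake_active / fake_share
print(f"Implied total accounts: {implied_total / 1e9:.1f} billion")  # 2.4 billion

# Hate-speech enforcement pipeline: actioned -> appealed -> restored.
hate_actioned = 9_600_000
appeals = 1_300_000
restored = 63_600
print(f"Share of actioned content appealed: {appeals / hate_actioned:.1%}")  # 13.5%
print(f"Share of appeals restored: {restored / appeals:.1%}")                # 4.9%
```

Even granting the figures at face value, a restoration rate below 5% of appeals across millions of cases illustrates why shifting liability in either direction would be felt at enormous scale.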
Constructive dialogue is necessary
The simplicity of Section 230 cut through the significant complexities of regulating user speech on platforms by passing decisions on content moderation to the platforms themselves. Highlighting the inherent dichotomies this created, Facebook employees have openly voiced dissent over policies that allow false claims in political adverts and over the decision not to act on posts from President Trump that Twitter had deemed false or inflammatory. In contrast, Mark Zuckerberg has expressed support for the Trump position, appearing on Fox Business shortly after Twitter’s actions to state: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online”.
Despite taking that position, prior to meeting with the EU in February 2020, Zuckerberg published a white paper on “online content regulation” and penned a widely published op-ed that appeared to invite further regulation on moderation. The op-ed and white paper looked to steer the conversation, posing questions—and setting parameters for the answers—on managing accountability, the balance between free speech and harmful content, defining harmful content, and what regulatory performance targets may look like.
Understandably, Facebook and other platforms are looking for the certainty of a framework in which to operate without fear of liability. Preferably, such a regulatory framework would be clear; would not penalize the platforms unduly; and, critically for companies operating near-globally, would avoid the complexities of contradictory national and supranational regulations by working at a global scale. As it stands, national regulators in the UK, Germany, and other countries are developing their own regulatory frameworks for social media. Legal cases, such as the recent dismissal of an appeal by Australian media companies against a ruling holding them liable for Facebook comments on stories they post, have further complicated the laissez-faire status quo. In the US, Section 230 provided such certainty, although without a framework setting the parameters for platforms’ self-regulation. As the scale and impact of platforms has grown, that is no longer tenable, and further regulation is inevitable. Engaging with these regulatory efforts poses its own contradictions and must be finely judged, but it will prove the best option for all stakeholders.