This week brought increased political attacks against Section 230 of the Communications Decency Act of 1996 (CDA 230) by the Trump Justice Department and new federal legislation seeking to “reform” it, as well as the creation of a new association to assist professionals dealing with the many issues involved in content moderation under CDA 230.
CDA 230
Section 230 of the Communications Decency Act of 1996 (CDA 230) stated a desire to both promote the continued development of interactive media and preserve “the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation,” by providing platforms immunity from liability both for third-party content posted on their platforms and for their content moderation efforts.
To many, CDA 230 is, in essence, the Magna Carta of the free internet we enjoy today.
After being left alone for over two decades, in 2018 Congress passed FOSTA-SESTA, which created a carve-out to CDA 230 for platforms knowingly facilitating sex trafficking, in an attempt to target the notorious Backpage.com website (which was shut down prior to the law’s enactment).
CDA 230 Under Political Attack
Since then, social media platforms have come under increased political scrutiny due to their role in promoting disinformation in the 2016 presidential election, their amplification of hate speech, and perceived political bias. This has led to a political assault on social media platforms and CDA 230, beginning with the Trump administration’s Executive Order targeting Twitter and other platforms after Twitter began to flag some of President Trump’s tweets spreading disinformation about mail-in ballots.
Justice Department Report
The attacks continued this week with the U.S. Department of Justice (DOJ) releasing its report, “Section 230 — Nurturing Innovation or Fostering Unaccountability?” calling for “potential reforms” of CDA 230. The report concludes that the time is “ripe to realign the scope of Section 230 with the realities of the modern internet” since:
The combination of significant technological changes since 1996 and the expansive interpretation that courts have given Section 230, however, has left online platforms both immune for a wide array of illicit activity on their services and free to moderate content with little transparency or accountability.
DOJ wants the platforms to take down both more content and less. On the one hand, it wants to create CDA 230 carve-outs for “particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking,” as well as for cases where a platform had “actual knowledge or notice” that third-party content violated federal criminal law or a court judgment.
At the same time, DOJ weighs in on a political grievance of conservatives: that social media platforms censor conservative voices. It is an ironic position for an administration that came to power thanks, in part, to the success of a social media disinformation campaign, and one that is not supported by the data. Nonetheless, DOJ wants to limit immunity for content moderation decisions to those made in good faith, which it defines as conforming to published site policies, being objectively reasonable, and coming only after timely notice providing the basis for any action.
S. 3983 – The Limiting Section 230 Immunity to Good Samaritans Act
The Justice Department report merely states proposals that have no effect unless enacted by Congress. This week, however, Senator Josh Hawley (R-MO) introduced S. 3983 – “The Limiting Section 230 Immunity to Good Samaritans Act.” Hawley exclaims:
“For too long, Big Tech companies like Twitter, Google and Facebook have used their power to silence political speech from conservatives without any recourse for users. Section 230 has been stretched and rewritten by courts to give these companies outlandish power over speech without accountability. Congress should act to ensure bad actors are not given a free pass to censor and silence their opponents.”
Like the Justice Department recommendations, Hawley’s bill establishes a “good faith” standard for content moderation: platforms must publish terms and conditions spelling out their policies, and they would enjoy immunity only if they act in good faith and do not engage in “selective enforcement.” It does not include the notice requirement recommended by DOJ, but it significantly does include a private right of action awarding a prevailing plaintiff the greater of $5,000 or actual damages, plus attorneys’ fees and costs.
Faced with the significant costs of defending what is likely to be a flurry of frivolous and politically motivated litigation, you can expect platforms to restrict content or offer less moderation (including for non-controversial measures such as anti-spam filtering). In addition, restricting the manner in which private platforms moderate content on their services may be an improper restriction on their First Amendment rights.
The Internet Association, which represents leading global internet companies including Facebook, Google, Reddit, and Twitter, opposes the bill, commenting that:
Opening up those moderation decisions to second-guessing via a never-ending slew of frivolous lawsuits would not make the internet better or safer. The First Amendment exists to protect individuals and entities from exactly this type of governmental intrusion into private activity, something courts have repeatedly affirmed.
The blog Techdirt offered a less measured assessment of the bill:
It is not a serious attempt at reform. It’s an unconstitutional pile of crap that seems to serve no other purpose than to allow whiny aggrieved grifters to shake down every platform for their moderation and design choices.
The Hawley bill is more political theater and does not appear to have a realistic chance of passage during this unique election-year/COVID environment.
Is a Social Media Account a Violation of US Sanctions Laws?
On May 29, 2020, Senator Ted Cruz (R-TX) wrote to Attorney General Barr and Treasury Secretary Mnuchin alleging that Twitter was in violation of the U.S. international sanctions regime under the International Emergency Economic Powers Act (IEEPA), which prohibits “the making of any contribution or provision of… goods[] or services” to designated persons, by providing accounts to Iran’s Supreme Leader Ali Khamenei (@khamenei_ir) and its foreign minister (@JZarif).
IEEPA, however, has long exempted the provision of “postal, telegraphic, telephonic, or other personal communication” involving no transfer of value, as well as the dissemination of information or informational materials in any format or media. Nonetheless, this offers another hammer in conservatives’ ideological war against Twitter.
Content Moderators Form Association
The introduction of the FOSTA-SESTA legislation led Santa Clara Law Professor Eric Goldman, an ardent CDA 230 defender, to organize through the school’s High Tech Law Institute a forum entitled “Content Moderation & Removal at Scale,” which brought together many leading voices in industry and academia on the challenges of content moderation. The exchanges between the panels and attendees were very informative and led Goldman to hold follow-up forums in Washington, D.C. and Brussels.
From this effort has come a new project designed to provide a forum for professionals in this area. As Professor Goldman announced on his blog:
Today, we’re pleased to announce the Trust & Safety Professional Association (TSPA) and the Trust & Safety Foundation Project (TSF).* TSPA is a new, nonprofit, membership-based organization that will support the global community of professionals who develop and enforce principles and policies that define acceptable behavior online. TSF will focus on improving society’s understanding of trust and safety, including the operational practices used in content moderation, through educational programs and multidisciplinary research.