June 12, 2017

Communications Decency Act Shields Facebook from Anti-Terrorism Act Liability

Digital Technology & E-Commerce Blog

The "Good Samaritan" (which is the statutory language used by Congress) provision of the Communications Decency Act of 1996 (CDA, codified at 47 U.S.C. § 230(c)(1)) shields social media platforms from liability related to certain types of third party content, including user-generated content. Specifically, any "provider or user" of an "interactive computer service" cannot be treated as the "publisher or speaker" of any information provided by "another information content provider," i.e. a user's Tweets or video content uploaded to YouTube. Accordingly, in most circumstances, social media providers can edit, screen or delete third-party content at their discretion and without fear of liability.

The expansive "Good Samaritan" immunity has extended to protection of "interactive computer service" providers from claims of, inter alia, defamation (considered the prototypical cause of action in this area of law), discriminatory housing advertisements, negligence, violation of anti-sex-trafficking laws and public nuisance.

More recently, social media sites have invoked Section 230 immunity to defend against claims that they provide "material support" to terrorists in violation of the Anti-Terrorism Act (ATA, 18 U.S.C. § 2331 et seq.). The plaintiffs in these ATA suits often allege that social media platforms are "instrumental" to the rise and lethality of terrorist groups, providing "material support" insofar as the platforms have allegedly become havens for hate speech, propaganda, fundraising and organizational coordination by terrorists, including the Islamic State of Iraq and Syria (ISIS).

The use of social media to weaponize and "crowdsource" terrorism has brought the platforms under intense scrutiny and criticism, particularly from European governments.

Social media companies have responded. For instance, Twitter suspended nearly 400,000 accounts in the second half of 2016 for "violations related to promotion of terrorism." In the aggregate, social media sites have spent at least hundreds of millions of dollars on efforts to stop terrorist communications on their platforms, including through spam-fighting bots, video "fingerprinting" technologies, reporting options for fellow users and human reviewers.

Although these efforts are intensifying, pressure on social media sites to thwart the facilitation of terrorism is likely to increase in the coming months, given intrinsic philosophical, public relations, logistical and technological challenges. As noted above, social media sites have in numerous instances succeeded in defending against legal claims that they provided "material support" to terrorists, principally by invoking the CDA immunity provision. The following case is the most recent example.

Facts and Procedural History

In Cohen v. Facebook, Inc., --- F. Supp. 3d ----, 2017 WL 2192621 (E.D.N.Y. 2017), a consolidated action filed in the Eastern District of New York, the first set of plaintiffs, roughly 20,000 Israeli citizens, alleged that Facebook assists terrorists affiliated with the Palestinian group Hamas in perpetrating future terrorist acts against them. These citizen plaintiffs further alleged that they "have been and continue to be targeted by" terrorist organizations and are "presently threatened with imminent violent attacks."

The second set of plaintiffs comprises the estates and family members of victims of past Hamas terrorist attacks in Israel. These plaintiffs alleged that Facebook played a vital role in spreading terrorist content by refusing to deactivate accounts that incite violence. More fundamentally, they alleged that Facebook's algorithms connect users with fellow users, groups and other content in ways that played a "vital role" in spreading hateful speech and violent imagery, by allowing terrorists to "more effectively disseminate [incitements to violence] … to those most susceptible to that message, and who most desire to act on that incitement."

Plaintiffs brought ATA-based claims, including, inter alia, claims under the Justice Against Sponsors of Terrorism Act (JASTA) for aiding and abetting acts of international terrorism, providing material support to terrorist groups, and conspiracy in furtherance of acts of international terrorism. Facebook moved to dismiss.

Legal Analysis

Citizen Plaintiffs

The claims of the citizen plaintiffs were dismissed without prejudice for lack of subject matter jurisdiction, as the future harm of imminent violent attacks alleged by these plaintiffs was too speculative to satisfy the "irreducible constitutional minimum" of standing under recent Supreme Court precedent, including Susan B. Anthony List v. Driehaus, --- U.S. ----, 134 S. Ct. 2334, 2341 (2014), and Clapper v. Amnesty Int'l USA, 568 U.S. 398 (2013). Under those cases, alleged future harm is sufficiently non-speculative only if the threatened injury is "certainly impending" or there is a "substantial risk" that it will occur; a mere "objectively reasonable possibility" that plaintiffs will sustain harm does not suffice. Courts have generally found that the threat of future terrorist attacks does not meet that standard. See, e.g., Tomsha v. Gen. Servs. Admin., No. 15-CV-7326 (AJN), 2016 WL 3538380, at *2-3 (S.D.N.Y. June 21, 2016). In this action, the citizen plaintiffs' theory of future harm "relied on multiple conjectural leaps, most significantly its central assumption that [plaintiffs] will be among the victims of an as-yet unknown terrorist attack by independent actors" not before the court.

Estate Plaintiffs

Facebook challenged the estate plaintiffs' complaint on both procedural and substantive grounds. First, Facebook moved to dismiss for lack of personal jurisdiction. This argument failed for a number of reasons, primarily because Facebook, as a Delaware corporation with a principal place of business in California, had the required minimum contacts with the United States as a whole.

Turning to the merits, the estate plaintiffs' primary theory was that Facebook provided "material support" to terrorists by providing account access, "coupled with Facebook's refusal to use available resources … to identify and shut down Hamas [ ] accounts." Facebook moved to dismiss under the "Good Samaritan" provision. The estate plaintiffs did not contest that (1) Facebook is a "provider or user of an interactive computer service," and (2) their claims are based on information provided by "another information content provider," i.e., the Hamas-supporting users.

The estate plaintiffs contested only the third element of the "Good Samaritan" test: whether their claims would treat Facebook as the "publisher or speaker" of the Hamas users' odious content.

The court agreed with Facebook and rejected plaintiffs' arguments against the applicability of the CDA's expansive immunity, even after affording plaintiffs "the most generous reading of their allegations." Rather, the allegations against Facebook, which were not content-neutral, placed it "squarely within the coverage of Section 230(c)(1)'s immunity." The court held that plaintiffs' distinction between "policing accounts" and "policing content" was a distinction without a difference in the context of Section 230(c)(1) immunity, because "Facebook's choices as to who may use its platform are inherently bound up in its decisions as to what may be said on its platform," a "necessarily antecedent editorial decision" epitomizing the role of a "publisher or speaker." See F.T.C. v. LeadClick Media, LLC, 838 F.3d 158 (2d Cir. 2016).

The court also rejected the estate plaintiffs' contention that Facebook merely provided content-neutral accounts to its users, which would exclude Facebook from characterization as a "publisher or speaker." The causation element of their claims (and thus any resulting liability) relied on the content Hamas posted on Facebook to incite and encourage attacks against these plaintiffs, not on Hamas's mere use of Facebook accounts. Since the gravamen of the estate plaintiffs' complaint was not harm inflicted by Hamas's ability to obtain Facebook accounts, but rather its use of Facebook's platform for, among other things, "recruiting, planning [and] inciting terrorist attacks," the claims inherently sought to hold Facebook liable as the publisher or speaker of content provided by Hamas. Accordingly, Facebook's motion to dismiss was granted without prejudice.
