NY Appeals Court: Lol, No Of Course You Can’t Sue Social Media For The Buffalo Mass Shooting

from the a-rare-correct-230-ruling dept

When politicians immediately blamed social media for the horrific 2022 Buffalo mass shooting—despite zero evidence linking the platforms to the attack—it was obvious deflection from actual policy failures. The scapegoating worked: survivors and victims’ families sued the social media companies, and last year a confused state court wrongly ruled that Section 230 didn’t protect them.

Thankfully, an appeals court recently reversed that decision in a ruling full of good quotes about how Section 230 actually works, while simultaneously demonstrating why it’s good that it works this way.

The plaintiffs conceded they couldn’t sue over the shooter’s speech itself, so they tried the increasingly popular workaround: claiming platforms lose Section 230 protection the moment they use algorithms to recommend content. This “product design” theory is seductive to courts because it sounds like it’s about the platform rather than the speech—but it’s actually a transparent attempt to gut Section 230 by making basic content organization legally toxic.

The NY appeals court saw right through this litigation sleight of hand.

Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs’ tort causes of action against the social media defendants are barred by section 230.

Even assuming, arguendo, that the social media defendants’ platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms.

The plaintiffs leaned on the disastrous Third Circuit ruling in Anderson v. TikTok—which essentially held that any algorithmic curation transforms third-party content into first-party content. The NY court demolishes this reasoning by pointing out its absurd implications:

We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (1995 WL 323710, 1995 NY Misc LEXIS 229 [Sup Ct, Nassau County 1995]), where “an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a ‘publisher’ ” (Shiamili, 17 NY3d at 287-288; see Free Speech Coalition, Inc. v Paxton, — US —, —, 145 S Ct 2291, 2305 n 4 [2025]).

Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that section 230 provides immunity for some tort claims but not others based on the same underlying factual allegations. There is no strict products liability exception to section 230.

Furthermore, the court points out (just as we did after the Anderson ruling) that Anderson messes up its interpretation of the Supreme Court’s decision in Moody. That case was about Florida’s social media content moderation law, and the Supreme Court noted that content moderation decisions are editorial discretion protected by the First Amendment. The Third Circuit in Anderson incorrectly interpreted that to mean such editorial discretion could not be protected under Section 230, because Moody made it “first party speech” instead of third-party speech.

But the NY appeals court points out how that’s complete nonsense, because having your editorial discretion protected by the First Amendment is entirely consistent with saying you can’t hold a platform liable for the underlying third-party content that the editorial discretion covers:

In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech (“expressive activity” as described by the Third Circuit) is protected by the First Amendment under Moody. While TikTok did not seek protection under the First Amendment, our social media defendants do raise the First Amendment as a defense in addition to section 230.

In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment (see 603 US at 744). Writing for the majority, Justice Kagan explained that “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own” (id. at 731). While the Moody Court did not consider social media platforms “with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards” (id. at 736 n 5 [emphasis added]), our plaintiffs do not allege that the algorithms of the social media defendants are based “solely” on the shooter’s online actions. To the contrary, the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.

Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.

The majority opinion cites the Center for Democracy and Technology’s amicus brief, which points out the obvious: at internet scale, every platform has to do some moderation and some algorithmic ranking, and that cannot and should not somehow strip away protections. And the majority uses some colorful language to explain (as we have said before) that Section 230 and the First Amendment work perfectly well together:

As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies “to accomplish a traditional publishing function, made necessary by the scale at which providers operate.” Every method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants’ algorithms, is constitutionally protected speech.

Thus, the interplay between section 230 and the First Amendment gives rise to a “Heads I Win, Tails You Lose” proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.

There is a dissenting opinion that bizarrely relies heavily on a dissenting Second Circuit opinion in the very silly Force v. Facebook case (in which the family of a victim of a Hamas attack sued Facebook, claiming that because some Hamas members used Facebook, the company could be blamed for the victims of Hamas attacks, an argument that was mostly laughed out of court). The majority points out what a silly world it would be if that were actually how things worked:

To the extent that Chief Judge Katzmann concluded that Facebook’s content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit’s first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.

And the majority calls out the basic truth: all of these cases are bullshit cases trying to hold social media companies liable for the speech of their users—exactly the thing Section 230 was put in place to prevent:

In the broader context, the dissenters accept plaintiffs’ assertion that these actions are about the shooter’s “addiction” to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.

Instead, plaintiffs’ theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter’s radicalization. Given that plaintiffs’ allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are “inextricably intertwined” with the social media defendants’ role as publishers of third-party content….

If plaintiffs’ causes of action were based merely on the shooter’s addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were “not foreseeable in the normal course of events” and therefore broke the causal chain (Tennant v Lascelle, 161 AD3d 1565, 1566 [4th Dept 2018]; see Turturro v City of New York, 28 NY3d 469, 484 [2016]). It was the shooter’s addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent.

From there, the majority opinion reminds everyone why Section 230 is so important to free speech:

At stake in these appeals is the scope of protection afforded by section 230, which Congress enacted to combat “the threat that tort-based lawsuits pose to freedom of speech [on the] Internet” (Shiamili, 17 NY3d at 286-287 [internal quotation marks omitted]). As a distinguished law professor has noted, section 230’s immunity “particularly benefits those voices from underserved, underrepresented, and resource-poor communities,” allowing marginalized groups to speak up without fear of legal repercussion (Enrique Armijo, Section 230 as Civil Rights Statute, 92 U Cin L Rev 301, 303 [2023]). Without section 230, the diversity of information and viewpoints accessible through the Internet would be significantly limited.

And the court points out that ruling the other way would “result in the end of the internet as we know it.”

We believe that the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.

It also calls out that a key part of Section 230’s immunity is getting these kinds of frivolous cases tossed out early, because if you have to fully litigate every such accusation, you lose most of the law’s benefits.

Although the motion court stated that the social media defendants’ section 230 arguments “may ultimately prove true,” dismissal at the pleading stage is essential to protect free expression under Section 230 (see Nemet Chevrolet, Ltd., 591 F3d at 255 [the statute “protects websites not only from ‘ultimate liability,’ but also from ‘having to fight costly and protracted legal battles’ “]). Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230.

Law professor Eric Goldman, whose own research and writings seem to be infused throughout the majority’s opinion, also wrote a blog post about this ruling, celebrating the majority for getting this one right at a time when so many courts are getting it wrong. But (importantly) he notes that the 3-2 split, along with the usual nonsense justifications in the dissent, means that (1) this ruling is almost certainly going to be appealed, possibly all the way to the Supreme Court, and (2) it’s unlikely to persuade many other judges who seem totally committed to the techlash view that says “we can ignore Section 230 if we decide the internet is just, like, really bad.”

I do think it’s likely he’s right (as always), but I still think it’s worth highlighting not just the thoughtful ruling, but also how these judges actually understood the full implications of ruling the other way: that it would end the internet as we know it and do massive collateral damage to the greatest free speech platform ever.

Filed Under: 1st amendment, blame, buffalo shooting, free speech, intermediary liability, new york, product liability, section 230

Companies: 4chan, amazon, discord, google, meta, reddit, youtube

