Content Moderation Is Not Censorship – Renée DiResta



Tori Tinsley’s recent essay, “When Social Media Obscures Truth,” laments the state of public discourse, warns against government overreach, and celebrates John Stuart Mill’s faith in the individual reader. But in drawing a direct line from Mill’s nineteenth-century critique of mass media to today’s content moderation debates, the piece blurs critical distinctions—chief among them, the difference between moderation and censorship—and misrepresents how modern information systems actually work.

At the heart of Tinsley’s argument is the claim that “government and private entities” have become modern-day “super-regulators” of speech, threatening the intellectual autonomy of the individual. She invokes Mill to argue that truth should be determined by individuals, not institutions. But her framing then conflates very different actions within today’s social media infrastructure—removing content, choosing not to promote it, and selecting what to surface. It misunderstands who the First Amendment protects in the content moderation debates and equates moderation with telling the public what is true. And it misrepresents my own work, turning a defense of editorial freedom and a call for increased user agency into a strawman for top-down control.

Let’s begin with the basics. Moderation is not censorship. As law professor Kate Klonick details in her foundational article “The New Governors,” content moderation refers to the suite of policies and enforcement practices that online platforms use to shape the user experience of their services. Some are straightforward: removing illegal material like child sexual abuse content or explicit incitement to violence. Others involve value judgments: rules for addressing “lawful but awful” content such as spam (which, yes, is technically “speech”), harassment, hate speech, and viral hoaxes and misinformation. Platforms set their rules based on a mix of their own business incentives, community norms, and moral priorities. Rumble, for example—a video platform that brands itself as a free speech alternative to YouTube—uniquely prohibits content that promotes or supports Antifa. It is rare to find an online community that does not have some speech guidelines against bullying and harassment; most people do not actually find free-for-alls pleasant places to spend time.

Moderation is not a binary choice between removing content and leaving it up. Enforcement typically falls into three buckets: remove, reduce, or inform. “Remove” is most akin to censorship (for those who apply the term to private companies enforcing their own rules); the content or account in question is deleted. “Reduce” throttles distribution; the content stays up, but may be shown to fewer users. “Inform” refers to labels or pop-ups placed atop a post to let users know the content is disputed in some way. This adds more speech, and more context, to the conversation.

The First Amendment protects the platforms’ right to set these rules. It protects Rumble’s Antifa rule. It protects YouTube’s choice not to promote videos claiming that vaccines cause autism. It protected Old Twitter’s right to label President Trump’s tweets alleging election fraud, offering a link that users could click to visit a third-party site with facts about mail-in ballots. Platforms have editorial and associational discretion—the government cannot force them to host or amplify speech that they don’t want to carry. They choose when and how to use that discretion, and the platforms in today’s growing market, from Bluesky to Truth Social, make distinctly different choices.

Separate from moderation rules are curation decisions—what platforms choose to amplify, recommend, or highlight on their front pages or within algorithmically ranked feeds. Platforms are not neutral conduits. Their choices—whether determined by recommender systems or editorial teams—shape what people see. Here, too, the First Amendment applies. Platforms cannot be compelled to promote particular content any more than newspapers can be told what to print on their front pages.

That said, both moderation and curation represent significant concentrations of private power. And they are opaque. I study these systems; it is often extraordinarily difficult to determine why particular content decisions were made, or how recommendation algorithms are shaping what we see. Platforms exert real control over what information rises to the surface. When Mill wrote about the risks of public opinion overwhelming individual reasoning, he could not have imagined the automated and attention-optimized information systems we contend with today. But his concern remains relevant—perhaps even more so now.


This context helps clarify the difference between editorial judgment and suppression. Tinsley, however, collapses that distinction in her treatment of my work. She cites a single fragment of a sentence from my book Invisible Rulers: The People Who Turn Lies Into Reality—that platforms have no obligation “to promote false content on all surfaces, or recommend it to potential new followers, or run ads against it”—and claims this means that I “want social media companies to limit the reach of false speech on their sites.” But those are not the same thing.

That sentence appears in a section laying out the principle of “freedom of speech, not freedom of reach”: the idea that platforms can enable expression by hosting and allowing access to controversial content, without being required to amplify it or accept money to promote it. It is a defense of editorial discretion with a nod to ethics: a platform does not have to accept ad dollars to promote claims that juice cures pediatric cancer, or weight a recommender system to boost sensationalism. It may choose to—and again, different platforms make different choices, as they appeal to different segments of the market—but liberty means it doesn’t have to. Any given content producer is not entitled to an algorithmic boost. This principle, which social media ethicist Aza Raskin and I first laid out in 2018, has become X’s moderation policy.

Tinsley reinterprets this argument as a prescriptive call for suppression and frames it as incompatible with Mill’s view that even falsehoods can illuminate truth. In reality, my position keeps ideas on the table while insisting that platforms are not compelled to place every idea at the top of the stack in modern communication architecture. It is precisely because platforms enjoy First Amendment protections—and because, as I emphasize in the section she selectively quotes, governments have no business writing content policies—that they are free to exercise discretion. Flattening my distinction is not analysis. It’s misdirection.

Tinsley’s confusion about how infrastructure shapes discernment continues in her next claim: “Some platforms, such as Facebook and Instagram, took action to combat fake news by installing misinformation features, perhaps to DiResta’s partial satisfaction. X, formerly known as Twitter, has a ‘Community Notes’ feature on its platform, and now other companies, like TikTok, have adopted similar features.” I’m not sure what “[my] partial satisfaction” is meant to imply—it’s a vague dig masquerading as insight—but I am squarely in favor of user-controlled tools that promote discernment. Community Notes is exactly that. Although the platform built and rolled out the feature, it is users who collectively flag misleading content, contribute context, and see that context surfaced transparently to others. It’s individually- and community-led deliberative infrastructure—precisely the kind of human-centered judgment Mill called for.

Community Notes complements other labeling systems that fall under the “inform” category. These are not coercive tools. They are interventions designed to support users in forming their own judgments, rather than leaving them entirely at the mercy of virality and opaque algorithmic decisions.

Indeed, just a few lines after the fragment Tinsley quotes, Invisible Rulers includes a section titled “Put More Control in the Hands of Users.” That’s a throughline of my work. If we want to solve the “indolent man” problem Mill identified—and that Tinsley rightly raises—we need to equip individuals to think for themselves within the structure of modern media. That means, for example, developing middleware: tools that give users more control over what they see. It means pushing platforms to offer transparency, choice, and appeals. And yes, it also means investing in an informed public. Invisible Rulers additionally explores what we might learn from historical efforts like the Institute for Propaganda Analysis, which emerged at a time when the mass-media disruption of the day was influencer-propagandists on the radio.

Tinsley and I agree: we don’t want the government—or platforms, for that matter—to declare what is true. But we must also recognize that the infrastructure of amplification has changed. Any serious defense of free speech today must contend not only with the law but with the architecture that determines which speech is seen.



