Jonathan Rauch is a senior fellow at the Brookings Institution and the author of “The Constitution of Knowledge: A Defense of Truth” and “Kindly Inquisitors: The New Attacks on Free Thought.” Renée DiResta was the technical research manager at the Stanford Internet Observatory and contributed to the Election Integrity Partnership report and the Virality Project. Her new book is “Invisible Rulers: The People Who Turn Lies Into Reality.”
Timestamps:
00:00 Intro
03:14 Content moderation and free speech
12:33 The Election Integrity Partnership
18:43 What activity does the First Amendment not protect?
21:44 Backfire effect of moderation
26:01 The Virality Project
30:54 Misinformation over the past decade
37:33 Did Trump’s Jan 6th speech meet the standard for incitement?
44:12 Double standards of content moderation
01:00:05 Jawboning
01:11:10 Outro
Abridged Transcript
Editor’s note: This abridged transcript highlights key discussions from the podcast. It has been lightly edited for length and clarity. Please refer to the full, unedited transcript for direct quotes.
Private companies’ role in content moderation
NICO PERRINO, HOST: Many people argue that content moderation by platforms like Facebook and Twitter constitutes censorship. Jonathan, is that a fair assessment?
JONATHAN RAUCH, GUEST: No, I don’t think so. The First Amendment protects private companies’ rights to edit and curate content. That’s not censorship in the legal sense. If a private company like Facebook decides not to allow certain types of speech on its platform, that’s within its rights. It has to make these decisions to maintain a healthy environment for users and advertisers. What we need to do is focus on making these decisions transparent and consistent, rather than attacking the companies for exercising their rights.
RENÉE DIRESTA, GUEST: Exactly. If we label every act of moderation as censorship, we’re missing the point. Companies should be free to create the kind of communities they want, and users can choose whether or not to engage with them. It’s about respecting both the platforms’ right to moderate and the users’ right to engage with different spaces.
The challenge of defining misinformation
PERRINO: Renée, there’s been a lot of talk about misinformation, especially during the pandemic. How do you define it, and do you think it’s possible for platforms to effectively combat it?
DIRESTA: Misinformation is a tricky term. It’s not always about fact versus fiction; sometimes it’s about context or intent. For example, during the pandemic, there were evolving narratives around vaccines, masks, and treatments. What was labeled as misinformation one week was considered a valid opinion the next. The problem with misinformation is that it’s not just about being factually wrong — it’s about how certain narratives spread and influence behavior. Platforms can combat misinformation by providing context, like flagging certain posts with fact-checks or links to credible sources, but they can’t completely eliminate it.
RAUCH: I think the key is understanding that misinformation isn’t always deliberate. Sometimes people genuinely believe in the content they’re sharing. What’s dangerous is when misinformation spreads widely, unchecked, and influences public opinion or behavior in harmful ways. Platforms should focus less on trying to remove every piece of false information and more on highlighting credible sources and adding context. That way, users can make informed decisions without feeling like their voices are being suppressed.
Government pressure and jawboning
PERRINO: Let’s talk about government pressure on social media companies. Should governments be involved in moderating content on social media?
RAUCH: This is where things get tricky. There’s a fine line between government jawboning — where officials pressure companies to remove content — and outright censorship. Governments do have a role in flagging illegal content, but they should never compel platforms to take down lawful speech. I think a good solution would be to make any government requests for content removal completely transparent. There should be public records of these interactions so that the public knows when and how the government is involved.
DIRESTA: I agree with that. There have been instances, like during COVID, where governments pressured platforms to remove content, and sometimes platforms complied. But transparency is key here. Platforms should be able to receive input from governments, but there needs to be a system in place that ensures the public can see what’s happening. That’s how we rebuild trust.
Content moderation’s impact on public trust
PERRINO: Do you think the moderation efforts during the COVID pandemic eroded trust in platforms and institutions like the Centers for Disease Control and Prevention?
DIRESTA: It’s possible. Some people believe that the content moderation around COVID actually amplified distrust in public health authorities. When platforms remove or flag content that’s considered misinformation, it can backfire, making the people posting that content seem like martyrs. It’s a tough balancing act because platforms want to promote accurate information, but silencing certain voices can lead to deeper mistrust.
RAUCH: That’s true. The question we should be asking is: how do we handle the wicked problem of content moderation without losing trust? One answer could be to focus on adding context rather than just removing content. When platforms provide users with more information or fact-checks, it allows for a more open debate without creating the perception that certain voices are being unfairly silenced.
Show notes:
The Virality Project (2022)
Moody v. NetChoice and NetChoice v. Paxton (2024)
“This Place Rules” (2022)
Murthy v. Missouri (2024)
“Why Scholars Should Stop Studying ‘Misinformation,’” by Jacob N. Shapiro and Sean Norton (2024)
Ep. 225: Debating social media content moderation