Ep. 228: Does artificial intelligence have free speech rights?

In this live recording of “So to Speak” at the First Amendment Lawyers Association meeting, Samir Jain, Andy Phillips, and Benjamin Wittes discuss the legal questions surrounding free speech and AI.
Samir Jain is the vice president of policy at the Center for Democracy and Technology. Andy Phillips is the managing partner and co-founder of the law firm Meier Watkins Phillips and Pusch.

Benjamin Wittes is a senior fellow in governance studies at the Brookings Institution and co-founder and editor-in-chief of Lawfare.

Read the transcript.

Timestamps: 

00:00 Intro

01:54 The nature of AI models

07:43 Liability for AI-generated content

15:44 Copyright and AI training datasets

18:45 Deepfakes and misinformation

26:05 Mandatory disclosure and AI watermarking

29:43 AI as a revolutionary technology

36:55 Early regulation of AI 

38:39 Audience Q&A

01:09:29 Outro

Edited transcript:

Editor's note: This abridged transcript highlights key discussions from the podcast. It has been lightly edited for length and clarity. Please reference the full unedited transcript for direct quotes.

Section 230 and artificial intelligence

NICO PERRINO, HOST: I’d like to hear whether you three think that Section 230 of the Communications Decency Act, which provides a liability shield for service providers that host third-party content, protects artificial intelligence companies.

SAMIR JAIN, GUEST: Under Section 230, service providers are protected with respect to hosting content provided by another information content provider. An information content provider is one who has played a role, in whole or in part, in creating or developing the content at issue. So, I think one key question will be: does the generative AI system, for example, play a role, at least in part, in creating or developing the content at issue? That may end up being a somewhat fact-specific question.

For example, take a GenAI system that produces a hallucination. We know that sometimes the output of a GenAI system is entirely made up, grounded in no source, a result of the stochastic-parrot nature of GenAI systems, which focus on predicting the next word rather than assigning meaning. It’s hard to argue that a GenAI system creating a hallucination played no role in the creation of that content; that content, therefore, is not strictly provided by another information content provider.

On the other hand, if a GenAI system merely regurgitates training data, say, an image that was used to train it, and all it does is reproduce that data with no alteration, then there may be more of an argument that it really had no role in the creation or development of that content. It was simply distributing the content, and therefore Section 230 should apply. I don’t think there are any court cases yet addressing this.

ANDY PHILLIPS, GUEST: Under Section 230, the big distinction is really whether you are just hosting content or playing a role in creating it. ChatGPT, for example, seems to me more like the latter: you have unleashed a tool that is creating content, rather than just hosting comments on a blog or whatever. You are actually creating words and content.

BENJAMIN WITTES, GUEST: Yeah, I think there’s an antecedent question to Samir’s analysis, which is whether the content that large language models are going around the world sucking up, without the consent of its original providers, counts as user-submitted content at all. If I leave a comment on an AOL chat site, that is user-submitted content for which the host is immunized under Section 230. But I don’t think that if they go out and take everything I’ve ever written and then regurgitate it, that means I submitted that content to them.

Deepfakes and misinformation

PERRINO: A 2024 Forbes survey found that over 75% of consumers are concerned about misinformation from artificial intelligence. Americans are more concerned about misinformation than about sexism, racism, terrorism, or climate change. Does First Amendment doctrine, which protects lies and misinformation, apply where you can make Kamala Harris appear to say or do something she never actually said or did? Or does our existing framework of forgery, false light, right of publicity, and defamation, for example, accomplish what you would hope to accomplish in regulating deepfakes and misinformation?

JAIN: California passed a law in the election context purporting to outlaw deepfakes that harm the reputation or electoral prospects of political candidates. And the court enjoined it and said, look, this reaches far beyond defamation and the other categories we’ve traditionally held outside First Amendment protection, and therefore you can’t enforce this law. So, I think this is a real issue, because there’s no question that disinformation and misinformation are a real problem. Deepfakes in the form of nonconsensual intimate images will probably be the most common form of deepfake imagery created online, and they cause real, unquestionable harm.

WITTES: Let me say here I am less concerned about the political sphere than I am about high schools. So, if you want to know what’s going on in our political system five years from now, look at what’s going on in a high school today.

In the political space, it’s really, really hard. And you’re also dealing with, frankly, victims who, in the case of Kamala Harris, have literally billions of dollars at their disposal to respond and create their own image. So, I think it’s a less worrying problem than the 15-year-old who is actually at risk of suicide in response to it. My caution in this area is that we all jump to the worst disinformation and misinformation use cases in the political arena, the ones that affect everything. I think those are mostly, but not entirely, mythological.

But the solution to that problem is mockery; it’s people pointing out in public that it’s not real. It’s a sort of hygienic response on the part of the public to being lied to. What I’m much, much more worried about, and the circumstance that I think really does give rise to regulation, is the mass production of content that terrorizes individuals.

Artificial intelligence and copyright

PERRINO: So, you had the New York Times sue OpenAI and Microsoft over the alleged use of its copyrighted works. The lawsuit alleges that millions of articles from the New York Times were used to train chatbots that now compete with the Times. Then, in August of this year, OpenAI was sued for allegedly using YouTube videos without the creators’ consent. Do these models need to license these datasets? Is it any different from, for example, me going to the library, reading, and producing content based on what I research there?

WITTES: I think this question, first and foremost, sounds in intellectual property, not in First Amendment law. I would say it is a remarkable proposition, to me anyway, just as an intuitive matter, that you can go out and collect all material, not just material in the public domain, but all material that is public-facing, without the consent of the producer, and use it for commercial purposes. My bias is that of a content producer, and I’m very careful about the rights of my writers. I don’t really understand why that proposition should have legs.

That said, the amount of intellectual property law that I don’t know is something close to the entirety of the field.

JAIN: One analogy I might point to, as something I think courts will look to, is search engines crawling websites to determine search results. In a way, we’ve addressed that issue at a technical level through what’s called robots.txt, a file a website can publish to give crawlers instructions and tell a search engine, “No, actually, I don’t give you permission to crawl my site.” So, you can imagine a similar kind of technical solution emerging and evolving that allows sites to grant or withhold permission for their content to be used as training data for AI models.
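As a concrete illustration of the analogy, here is a minimal sketch of how a robots.txt policy expresses crawl permissions and how a well-behaved crawler would check it, using Python’s standard-library parser. The policy text, the example.com URLs, and the user-agent names are placeholders chosen for illustration, not anything discussed on the show.

```python
from urllib import robotparser

# A hypothetical robots.txt for a site that refuses one AI crawler
# while allowing everyone else ("GPTBot" is used here simply as an
# example of an AI crawler's user-agent string):
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())  # parse the policy directly

# A well-behaved crawler checks the policy before fetching each page:
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeSearchBot", "https://example.com/article"))  # True
```

Note that compliance is voluntary: robots.txt signals permission but does not technically enforce it, which is why it is a convention that would need to evolve rather than a complete solution.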

Artificial intelligence’s progression compared to past emerging technologies

PERRINO: We’ve had technological revolutions throughout humanity’s existence, going back to the printing press, the radio, the television, the internet. And there’s always the thought that this new technology is different and that we need to change our approach to the First Amendment as a result. How do you all see it?

PHILLIPS: It feels like [this time is different], but [AI is] so new. People probably felt that way about the internet. But the law finds a way, to paraphrase Jurassic Park. Sometimes it feels like putting square pegs in round holes, but the law is conservative that way, and over time it tends to pigeonhole new developments into old standards. The internet’s an easy example in the field in which I work. It confounded traditional understandings of even simple things like publication, which matters when you’re trying to figure out where to file a defamation lawsuit and whether you have personal jurisdiction over the defendant in a particular state or court.

A relevant question is: where was this thing published? If someone wrote a letter and put it in the mail, easy enough. If someone printed and distributed a magazine, easy enough. But if you put something on the internet, it’s published everywhere at once. The courts had to tackle this in that context, and it’s still being fleshed out in some states decades later. I think the law will find a way, at least in my field, to tackle these issues without a real change in the underlying law.

I’m worried from the client’s perspective. We haven’t seen the avalanche of AI cases yet; I have not personally litigated one. As I mentioned, there’s one going on in Georgia right now, in state court against OpenAI, involving a ChatGPT output, that I think is going to be really interesting if it gets to summary judgment and starts to answer some of these questions. But I worry from a client perspective because my day job is helping clients who have been defamed or portrayed in a false light, and AI has enormous capability to do that and to cause real harm.

WITTES: Imagine two worlds. In one, this is basically a human-assist technology: chatbots and these entities are not autonomously producing much public-facing content. What they produce is content for you, which you then turn into stuff that you produce, and for which you are liable if it’s really bad. I think that is currently the predominant use case.

There’s this other use case in which the chatbot itself is the speaker. You can see this now in Google’s AI responses to searches. Imagine that develops into something like a newspaper: “What’s going on in the world today, Google?” And the Google chatbot gives you a personalized newspaper. That’s radically different, because there you have the production of consumed content, presumably consumed as truth, that is generated by a non-human. If that becomes a predominant use case and ends up injuring a lot of people, we really need to think about what the law regulating it looks like. I think that’s profoundly different from any question we’ve ever confronted before.

So, I will return to the question with which you started: it really depends on whether the machine is helping people speak or whether we’re creating machines that are themselves speaking in public domains.

So to Speak: The Free Speech Podcast
So to Speak: The Free Speech Podcast takes an uncensored look at the world of free expression through the law, philosophy, and stories that define your right to free speech. Hosted by FIRE's Nico Perrino.
New episodes post every other Thursday.