Meet the Lawyers Suing ChatGPT Over the Use of Copyrighted Works

Steven Lieberman and Jennifer Maisel of Rothwell Figg are representing The New York Times and other prominent newspapers in high-stakes AI litigations that will have significant repercussions on AI and the media industry.

In two of the most closely watched litigations, The New York Times and a coalition of newspapers owned by MediaNews Group and Tribune Publishing are suing OpenAI and Microsoft, the creators of ChatGPT and other artificial intelligence tools, for allegedly infringing their copyrights by unlawfully copying articles and other protected content to train large language models and power AI platforms including ChatGPT and Copilot. The cases carry significant repercussions, not just for the parties but for the media industry at large, and legal experts say they will have a major impact on the emerging field of AI law, determining, among other things, whether scraping copyrighted material to train AI-powered tools, and using that content in retrieval-augmented generation results, constitutes fair use under copyright law.

On the plaintiffs’ side, the newspaper coalition litigation is being led by attorneys at Rothwell, Figg, Ernst & Manbeck, and the firm is co-leading the litigation on behalf of The New York Times. Based in Washington, D.C., the Rothwell Figg team brings a rare combination of deep intellectual property law experience, trial success and cutting-edge technological expertise. The team includes veteran litigator Steven Lieberman, who built his career on representing drug makers and media companies in patent litigation and IP matters. Over the past six years, he has added to his practice by advising a wide swath of the media industry in the United States on issues related to AI and machine learning.

Also on the Rothwell Figg team is Jennifer Maisel, who came to law from an information science background where she gained hands-on experience developing language models for natural language processing applications. She joined Rothwell Figg as an associate in 2012, made partner in 2020, and now co-chairs the firm’s AI practice team.

Both Lieberman and Maisel have been honored in the Lawdragon 100 Leading AI & Legal Tech Advisors for two years running, since the guide's inception.

The plaintiffs’ lawyers won a significant procedural victory in New York Times Co. v. Microsoft Corp. et al. and Daily News LP v. Microsoft Corp. et al. in March, when a federal judge in the Southern District of New York denied defendants’ motions to dismiss claims of direct and contributory copyright infringement, violation of the Digital Millennium Copyright Act, and federal and state trademark dilution. With the cases now consolidated (In re: OpenAI Inc.) and proceeding towards trial, Lieberman and Maisel spoke to Lawdragon about the litigation and the new frontier of AI law.

Lawdragon: What led both of you into this field?

Jennifer Maisel: I was a computer scientist before becoming an attorney. I worked on speech recognition applications, turning speech into text and text into speech, and built language models prior to ever thinking about law. I always loved AI and machine learning. In college I studied information science, although I hated programming and wanted to get out of actual computer science development.

I pivoted over to law school and one of my professors said, "We need more people in law who understand technology." I thought it was a really interesting career path and I landed in intellectual property law at Rothwell Figg. I've always focused on artificial intelligence, first as a patent attorney. Then my first foray into copyright law involved open source issues with software and data sets and things like that. And it was just kind of a perfect confluence of events that led to our involvement with The New York Times litigation and then The Daily News litigation.

Steven Lieberman: I came from a very different direction. I started my career as a First Amendment lawyer at Cahill Gordon & Reindel in New York where I worked with a lot of media companies and other content creators. Then in 1990 or so, I began working on intellectual property cases, particularly high-tech patent cases. And I learned that it was critically important to be able to present complex technology to judges and juries in a way that makes sense to them. Almost every litigation involves a story. Judges and juries want to know who’s right and who’s wrong. Who invented what and why is the invention important? Who stole someone else’s intellectual property? For the last 30 years I’ve been telling my clients’ stories.

It's an existential danger for them when somebody can just take their content and regurgitate it without any compensation.

About six or seven years ago, Jen and I began advising our media clients about artificial intelligence issues we saw arising. We were telling clients: “artificial intelligence is beginning to make repeated appearances in our work on technology issues. Here are some things that you need to think about.” So when ChatGPT was launched, when artificial intelligence became a focus of conversation for people, our clients knew that we knew what we were talking about. And we had a set of 20 or so media clients that we had been advising on IP and technology issues for years who said, "Okay, you've been telling us about artificial intelligence for years and now it's happening, help us deal with it."

LD: What was the trajectory you had with The New York Times?

SL: The Times came to us when it learned, after ChatGPT was launched, that its content was being hoovered up by OpenAI and was being used to make a commercial product. OpenAI didn't compensate The Times for the enormous amount of time, effort, and money that it had put into creating this content. And ChatGPT often attributed statements to The Times that were simply not true. We worked with the Times on coming up with the strategy for the first big news lawsuit against OpenAI and Microsoft.

Shortly after that, we began working with other newspapers, including The New York Daily News and the Chicago Tribune, that are part of MediaNews Group. It's an existential danger for them when somebody can just take their content and regurgitate it without any compensation.

LD: Where are the cases now?

SL: Both The New York Times complaint and The Daily News complaint have survived broad-ranging motions to dismiss filed by OpenAI and Microsoft. All of the copyright claims are intact, and almost all of our other claims survived as well. We're deep into discovery. There are scores of lawyers on the other side. OpenAI alone has five law firms and 50-plus lawyers who have entered an appearance. They're fighting this tooth and nail, but the cases are going very well and we're quite happy with the progress we're making.

LD: What are some of the unique challenges that AI litigation brings?

JM: There are a lot of really unique challenges in these cases. One right out of the gate is AI systems tend to be black boxes. From an evidentiary standpoint, how do we probe those black boxes in a scientifically sound way to prove the allegations in our lawsuits? As part of the litigation, we've had to negotiate first-of-their-kind protocols. We have a training data inspection protocol and step-by-step instructions and terms by which plaintiffs get discovery into petabytes worth of training data. We have a model inspection protocol, dictating the rules by which we get access to a large language model and are able to probe it. Right now, we're dealing with discovery into the output log data from these models and figuring out sampling methodologies. We're literally building the plane as we're flying it because the technology is so new.

LD: Are you basing the claims in traditional IP copyright law? Or are we on new ground here in terms of what's allowed and what isn't?

JM: It feels like very new territory, even in copyright law. We have really unique facts here, where it's hard to draw direct analogies from precedent. We're creating new precedent and we're creating new law. That’s part of why discovery is so important. In an area like fair use and copyright infringement, where the facts really drive the analysis, we have to unearth those facts in discovery and put guardrails and parameters around the questions: What is this technology, what is it doing, and does it comply with the law?

SL: These facts are slightly different than the facts in prior cases but in a way, this is a lot like Napster and its theft of music – theft that was ultimately held to be illegal. Microsoft and OpenAI argue that what they are doing is no different than building and selling VCRs. But that’s not the right analogy. Instead, it’s like they are selling VCRs that they pre-loaded with thousands of copyrighted movies for which they never paid. And the judge in our case has denied defendants’ motions to dismiss in part for that reason.

There are scores of lawyers on the other side. OpenAI alone has five law firms and 50+ lawyers who have entered an appearance.

LD: In addition to litigation, what other kinds of work do you handle in your AI practice?

JM: It's kind of full spectrum. We advise clients on best practices for implementing, deploying and using AI technology as part of business. It's a constantly evolving area based on new guidance and new tools coming out. We represent innovators of all kinds, whether it's securing patent protection for AI-based innovation or helping people license technology. We deal a lot with the IP implications. We also have a privacy practice that addresses all aspects of data protection, including third-party privacy rights and strategies for managing compliance obligations. We help clients navigate the whole data universe.

SL: A lot of what we do is help clients figure out how to use artificial intelligence products responsibly and safely. For example, making sure that they’re not using artificial intelligence products in a way that endangers their own trade secrets, infringes somebody else's copyrights, or puts the business at risk. Virtually every day, we get calls from some company saying, "We need to talk to you about how various artificial intelligence products are impinging on our intellectual property."

LD: Can you tell me a bit more about your intellectual property backgrounds and how that helps your work when it comes to AI?

SL: We do a lot of patent litigation here involving very high-tech stuff. In my first patent case, we represented the company that invented AZT, the first drug that was used to treat HIV infections. Back then, hundreds of thousands of people were dying of AIDS and there were no therapies at all. The case involved having to explain to a jury the very high-level chemistry and biotech issues underlying the invention and how it worked in the body. We worked really hard on that, and we won that case after a four-week jury trial. Patent cases involve two things. One, you have to learn the technology, and then second, you've got to be able to explain the technology to the judge and the jury.

JM: We also do trademark and trade secrets litigation and we've been involved in very interesting cases concerning emerging technologies. We've had trade secret litigation where you really have to identify the trade secret at issue and weave the narrative of how the trade secret came to be, how it was misappropriated, and why that's a problem. In The New York Times and The Daily News cases, we have trademark claims relating to the many times that generative AI output confabulates facts and information together and uses our client's trademarks in a way that's incredibly damaging. It’s important to be able to piece different forms of intellectual property together and tell the full story about the type of harm being caused by OpenAI and Microsoft.

LD: What major AI trends are you paying close attention to right now?

SL: We're seeing real growth of an emerging market for content for AI purposes. The companies that have these large language model-based products are coming to realize that they have to pay for content. So one big area is going to be the development of a payment system like you have for music. Now you have Spotify instead of Napster, right? Spotify pays. Napster didn’t. So how exactly is that market going to develop?

Another question is going to be the responsible use of artificial intelligence and putting up guardrails. Artificial intelligence can be very useful, but it can also be very dangerous. It can be made in a responsible way or it can be made in an irresponsible way. It’s a question of corporate responsibility versus corporate greed and the desire to get things to the market quickly.

LD: Responsible use seems like a really tricky area.

SL: This is an area where we're advising a lot of clients. There is so much potential liability because the uses of AI interact with so many different federal and state regulations. A few years ago, Jen and I were doing artificial intelligence training for a worldwide convocation of the in-house lawyers from a client. And in the introduction, the general counsel looked around the room and said, "I know many of you are nervous about artificial intelligence because it's inevitably going to eliminate some of your jobs. Our legal department's going to be half the size because of AI." And the first thing that I said was, "Don't worry. Your general counsel is wrong. Artificial intelligence is going to create so many new risks along with the efficiencies that she's going to need to double the size of her department to deal with all the issues." And that's turned out to be true.

AI systems tend to be black boxes. From an evidentiary standpoint, how do we probe those black boxes in a scientifically sound way to prove the allegations in our lawsuits?

LD: How do you see the intersection of law and AI heading over the next few years?

SL: It's going to be a very exciting roller coaster. One major privacy issue is going to be if artificial intelligence agents can be used, for example, by the government or by law enforcement to track certain parts of people's medical information and then if that information can be used for criminal prosecution. And then think about artificial intelligence programs that analyze the acceleration and braking by drivers and what happens with differential pricing on a person-by-person basis for insurance. Who can use that information? Is it permissible to be used in a particular way and what are the societal consequences? If you think through the issues in advance and you come up with a responsible way to deploy AI and sometimes bright-line rules about how not to deploy AI, it can avert all kinds of problems down the road. The key is really working with clients to think about the problems well in advance. You can save a client 10 years of litigation by having a one-hour seminar with the marketing people in advance and telling them what they simply can't do.

LD: You both have such different backgrounds, but clearly the partnership works. What do you appreciate about working together?

JM: Steve has been a mentor for me since I started at the firm. He's taught me about the ins and outs of litigation and client management. At the same time, I can give Steve an explanation of the technology and he's able to learn it and use those explanations in really compelling ways.

SL: I don't think there's a lawyer in the United States who understands the technology of artificial intelligence and the artificial intelligence products better than Jen. Her grasp of the technology and the legal issues is extraordinary. And we're supported by a group of, I would say, 15 lawyers at the firm who are deeply knowledgeable about both the legal and technological issues concerning artificial intelligence. So when clients come to us with these issues, we don't have to start from scratch. There's a very deep bench here with that knowledge.

JM: The thing I'll say about Steve is he has a vision for where a case is going. He always thinks three or more steps ahead, and he always has the client at the forefront of his mind. It's always about the client and the client relationship. I’d also never want to be on the other side of a negotiation table with Steve. He knows exactly where we're going. He knows the pain points, and he knows how to press them.