In-depth reporting and analytical commentary on artificial intelligence regulation. No legal advice.

‘That is theft, it’s pure theft’: an interview with Susman Godfrey AI lawyer Justin Nelson, co-lead counsel in $1.5B Anthropic matter

In August, Anthropic and book authors’ counsel announced that they struck a $1.5 billion (or higher) settlement of the Bartz v. Anthropic copyright class action (August 26, 2025 ai fray article) – a case that has set huge precedents for the industry. Judge William H. Alsup of the United States District Court for the Northern District of California then granted preliminary approval of the settlement late last month (September 25, 2025 ai fray article), setting out his reasoning in a memorandum issued on Friday (October 17, 2025 Northern District of California opinion).

Co-lead counsel to the class of authors is Justin Nelson of Susman Godfrey, who has a significant track record in IP litigation. Mr. Nelson has been a partner at Susman Godfrey since 2005 and has since helped Dominion Voting Systems strike a $787.5 million settlement with Fox, helped Green Mountain win a $64.5 million judgment against Ardagh (included on National Law Journal’s Top 100 Verdicts of the Year list), and won a $38 million judgment for Fractus (a case that later settled on appeal).

Currently, he also acts for the Authors Guild and at least 28 different authors in a class action against OpenAI in the Southern District of New York (February 5, 2024 class action complaint).

Mr. Nelson sat down with ai fray to have a thorough discussion about his career to date – in both copyright and patent litigation – as well as diving into the significance of the Anthropic settlement and where the future of AI litigation is headed.

Interview

ai fray: Your latest settlement with Anthropic is the first major copyright case against an AI company to settle, and it settled big. When you first took this case on, did you expect such an outcome? 

Justin Nelson: Yes, it is an amazing result. There were a couple of reasons I got into AI cases in the first place, including the rise of misinformation, the importance of protecting the works of copyright holders and creators, and what it means to have a society based on truth and shared facts. I saw AI was the next frontier. We are just at the beginning of figuring out how AI is going to shape society. This set of cases is uniquely important, and it may be the most important in civil litigation over the next few years. In many ways, law and litigation are very blunt instruments. But they are important ones. Being able to have accountability and to strike a blow on behalf of the creators is crucial.

I saw this AI docket as being something incredibly important. It required a lot of thought and strategy to make sure we continue to get to the right result to protect what is really the works of humanity and to continue to incentivize creation.

How do we innovate so it is consistent with our societal values and our continuous need to produce and create, and not just turn it over to machines? I, for one, am all for humans.

ai fray: Everyone is talking about AI, and the Anthropic settlement is not just the most significant AI-related settlement to date, but also the largest copyright settlement in history. Can you expand further on the significance of this case?

Justin Nelson: The deal is not just the largest copyright settlement, but the largest copyright recovery in known history (including trial, settlement, class action, etc.). There was a lot of risk involved, and there still is to a certain extent. The judge has preliminarily approved it, but we don’t expect the final approval hearing to happen until Q2 2026.

However, the settlement itself sends a strong message that creators and copyright holders deserve compensation. It’s true in this case and true across the board, which is why it has been met with such praise from so many authors and creative community groups. That would have been true even if the settlement had been significantly less than what it was.

It’s always toughest to be the first – to be the trailblazer. But we think over $3,000 per book is really a home run for the class. It’s real money for both authors and publishers across the board.

ai fray: How do you know if you are a member of the class, and how do you file a claim?

Justin Nelson: Go to this website: www.anthropiccopyrightsettlement.com. Not everyone is a part of it; your copyright must have been registered before the download occurred. Just to be clear: even if you make a claim in the settlement, you are not giving permission to Anthropic to continue to use your work or to have used your work in a commercial model in the past. And on the flip side, if your works are not included in the case, you keep all your individual rights and claims.

In addition to the monetary settlement, we also got certification from Anthropic that it did not use any of these books in any of the commercial models it released. It also agreed to destroy these datasets, subject to certain legal preservation obligations.

So file a claim if you are the legal or beneficial owner (usually the publisher and author) of work that is part of the class. Direct notice will go out in a month, but you can check today on the website if your book is a part of the class or not.

ai fray: Could international authors also be members of the class?

Justin Nelson: Yes, you do not need to be a citizen or a resident of the U.S. to be a class member. However, you need to own a U.S.-registered copyright. And it has to have been registered before the illegal downloads occurred.

ai fray: Apart from the acclaim, the publicity and the economic magnitude of those wins, is it also gratifying that the settlement righted a wrong? Why?

Justin Nelson: It is wrong legally, morally, and ethically to use these pirated sites. It should be so self-evident that piracy is wrong, and that it is destructive to use these pirated websites. Years and years of effort go into each one of these works. More generally, our society is really built on creativity. Whether you are an author, a publisher, a filmmaker, or a movie studio, the Constitution of the United States protects IP. The founders understood the importance of IP in creation and innovation to our society as a whole.

The judgment on piracy really sent a strong message, and I hope it is a wake-up call to other AI companies that have used pirated datasets. Many of them have been undisclosed; they have not been transparent about the datasets that they have used and how they have built these models. Not only is piracy wrong, it is doubly wrong that many of these companies have tried to cover up their use of pirated databases.

We can agree or disagree on training as an argument for fair use, although I think training AI is not fair use, and I hope that cases will recognize as much. But where there should not be any debate is taking material from pirate sites without compensation. In the Anthropic case, one of the websites they went to is literally called “Pirate Library”. That is theft, it’s pure theft. And it doesn’t excuse the theft to say: “I’ve put the book that I stole to another purpose.”

The example we used in the Anthropic case is that just because you write a book review, it doesn’t give you a license to steal the book. If you stole a book from a bookstore, the fact that you wrote a book review doesn’t excuse your theft, even if you assume that AI training is fair use (which I again disagree with). 

As my partner Rohit Nath argued at summary judgment, if Anthropic’s argument were to prevail and fair use excused the initial piracy, all one would have to do to excuse piracy is post a review on Amazon or write a blog post about the music they’re listening to. To state the argument is to refute it.

ai fray: But the other side argued, and some major defendants in other cases still argue, that it falls under the fair use exception. What do you say to that?

Justin Nelson: Many of these companies are very coy about how they obtained their training material and don’t publicly disclose it. I dare these companies to say whether they visited these pirated sites. They know it’s wrong. OpenAI wrote a paper in 2020 that called one of its datasets “books 1” and another “books 2”, but we now know it wasn’t just a collection of books. It was a collection that came from a pirated site called Library Genesis. And that is wrong. We are not treading new ground in preventing piracy. This has been going on since before the Internet. And it’s black-letter law that piracy is illegal.

Some pirate websites said they were dying, even going out of business. But as these pirate sites admit openly, “then came AI”. AI has reinvigorated these pirate sites. Even just using it once allows piracy to flourish, let alone the signal it sends if everyone decides to use these pirated databases. I think this case sends the message that this behavior is wrong. We cannot ignore how this data was acquired: strip-mining humanity’s creations without compensation.

The broader fair use question is still in the early days of litigation, with multiple suits still in preliminary and discovery stages. The AI companies are entitled to make their argument, but so far, the balance of decisions and the expert judgment of the U.S. Copyright Office is against them. I’m hopeful that, at the end of the day, courts will see the huge adverse effect that these AI models are causing to creators and copyright holders. There is nothing fair or responsible about taking copyrighted works without permission and using those works as the data for your company to make billions and billions of dollars. As OpenAI has said, “data is the new oil”. They just don’t want to pay for it. We can encourage innovation and still protect IP. Taking copyrighted works without permission, and then keeping all the value for yourself as you demolish creativity, is not the answer.

ai fray: You are first-chair trial counsel, but what can you tell us about your team on the Anthropic copyright case?

Justin Nelson: We had a great team at Susman Godfrey, including partners Rohit Nath, Jordan Connors, Alejandra C. Salinas, and Michael Adamson, as well as associates Craig Smyser and Samir Doshi, along with others.

Craig, Rohit, and I wrote an article about the acquisition theories – and how acquisition is different from training. Rachel Geman at Lieff Cabraser Heimann & Bernstein was my co-lead counsel, and we had tremendous help and support from publishers’ coordination counsel at Oppenheim & Zebrak and Edelson PC. It really was a group effort.

ai fray: And finally, you have brought AI copyright cases, and you are also litigating patents in the AI hardware context: are you also interested in or involved with AI-related legal issues that do not involve patents or copyrights?

Justin Nelson: With the launch of OpenAI’s Sora 2, what we’ll start to see is a lot of name, image, and likeness litigation, and the expropriation of faces, videos, and voices. This goes to the heart of some of the misinformation and disinformation that exists and how easy it might be for malicious actors to impersonate people.

The AI companies that are putting this out very well know the misuses that these models can have, but it’s really important to be thinking through the consequences of what these models can do. Just because you can do something doesn’t mean you should, or that it is the legal, moral, or ethical thing to do. Companies should wonder whether it is the best use of their time to create very good likenesses in video and audio of people, which can be misused to spread disinformation, harass individuals, and erode trust. We as a society should strive for truth and shared facts.

Given the importance of AI to our economy, there will be a lot of legal issues that arise in the coming years that also resemble traditional commercial litigation or civil litigation. I think we will see more issues arise with respect to defamation, privacy, and other torts, too.

ai fray: Thank you for your time today. It was great chatting with you.

Justin Nelson: Thank you as well. And if you are an author or a publisher, please go to www.anthropiccopyrightsettlement.com to see if your work is included in the class.