Mass Law Blog

Kadrey v. Meta: Will Market Dilution Reshape AI Copyright Law?

Jul 8, 2025

The recent blockbuster decisions in Bartz v. Anthropic and Kadrey v. Meta have raised a number of important and controversial issues. On the facts, both cases held that using copyright-protected works to train large language models was fair use. 

However, AI industry executives should not be celebrating. Bartz held that Anthropic is liable for creating a library of millions of works downloaded illegally from “shadow libraries,” and it could be facing hundreds of millions of dollars in class-action damages. And, as I discuss here, Kadrey argued for a new theory of copyright fair use that, if adopted by other courts, could have a significant negative impact on generative AI innovation.

Both cases were decided on summary judgment by judges in the Northern District of California. Bartz was decided by Judge William Alsup; Kadrey was decided by Judge Vince Chhabria. However, the two judges took dramatically different views of copyright fair use. 

Judge Chhabria set the stage for his position as follows: 

Companies are presently racing to develop generative artificial intelligence models—software products that are capable of generating text, images, videos, or sound based on materials they’ve previously been “trained” on. Because the performance of a generative AI model depends on the amount and quality of data it absorbs as part of its training, companies have been unable to resist the temptation to feed copyright-protected materials into their models—without getting permission from the copyright holders or paying them for the right to use their works for this purpose. This case presents the question whether such conduct is illegal.


Although the devil is in the details, in most cases the answer will likely be yes.

Did a federal judge really just say that in most cases using copyrighted works to train AI models without permission is illegal? Indeed he did.

Let’s unpack. 

Market Dilution – A New Fair Use Doctrine?

Judge Chhabria’s rationale is that generative-AI systems “have the potential to flood the market with endless amounts of images, songs, articles, books, and more,” produced “using a tiny fraction of the time and creativity” human authors must invest. From that premise he derived a new variant of factor-four fair use analysis – “market dilution,” the idea that training an LLM on copyrighted books can harm authors even when the model never regurgitates their prose. It does so, he says, by empowering third parties to saturate the market with close-enough substitutes.

Copyright law evaluates fair use by weighing the four factors identified in the copyright statute. Factors one and four are often the most important. Factor one asks whether the use is “transformative.” Judge Chhabria had no difficulty concluding (as did Judge Alsup in Bartz) that the purpose of Meta’s copying – to train its LLMs – was “highly transformative.”

Factor four looks at “the effect of the use upon the potential market for or value of the copyrighted work,” and Judge Chhabria’s analysis focused on this factor. 

Judge Chhabria reasoned that because an LLM can “generate literally millions of secondary works, with a minuscule fraction of the time and creativity used to create the original works it was trained on,” no earlier technology poses a comparable threat; therefore “the concept of market dilution becomes highly relevant.” Judge Chhabria stressed that the harm he fears is not piracy but indirect substitution: readers who pick an AI-generated thriller or gardening guide instead of a mid-list human title, thereby depressing sales and, with them, the incentive to create.

Judge Chhabria recognized that the impact on works other than text (text was all that was at issue in the case before him) could be even greater: “this effect also seems likely to be more pronounced with respect to certain types of works. For instance, an AI model that can generate high-quality images at will might be expected to greatly affect the market for such images, diminishing the incentive for humans to create them.” Although Judge Chhabria didn’t mention it, the music market is already feeling the effects of AI-generated songs.

A Solitary Theory – So Far

Judge Chhabria acknowledged that “no previous case has involved a use that is both as transformative and as capable of diluting the market for the original works as LLM training is.” Courts have often considered lost sales from non-literal substitutes, but always tethered to copying and similarity. By contrast, “dilution” here is the main event: infringement-adjacent competition, scaled up by algorithms, becomes dispositive even where every output may be dissimilar and lawful. That outlook has no counterpart in the copyright statute or prior case law. 

Why the Plaintiffs Still Lost

However, a novel legal theory does not excuse the absence of proof. The thirteen authors in Kadrey “never so much as mentioned [dilution] in their complaint,” offered no expert analysis of Llama-driven sales erosion, and relied chiefly on press reports of AI novels “flooding Amazon.” Meta, meanwhile, produced data showing that its model’s launch left the plaintiffs’ sales untouched. Speculation, Judge Chhabria concluded, “is insufficient to raise a genuine issue of fact and defeat summary judgment.” The court elevated market dilution to center stage and then ruled against the plaintiffs for failing to prove it.

The Evidentiary Gauntlet Ahead

Judge Chhabria’s opinion outlines what future litigants will have to supply to prove dilution. They must demonstrate that the defendant’s specific model can and will produce full-length works in the same genre; that those works reach the market at scale; that readers choose them instead of the plaintiff’s title; that the competitive edge flows from exposure to the plaintiff’s expression rather than public-domain material; and that the effect is measurable through sales data, price trends, or other empirical evidence. Each link is contestable, and the chain grows longer as AI models add safety rails or licensing pools. The judge’s warning that “market dilution will often cause plaintiffs to decisively win the fourth factor—and thus win the fair use question overall” may prove true, but the proof he demands is nothing short of monumental.

Policy Doubts

Judge Chhabria’s dilution theory invites several critiques. First, it risks administrative chaos: judges will referee dueling experts over how similar, how numerous and over what time period AI outputs must be considered before they count as substitutes. Second, it blurs the line between legitimate innovation and liability; many technologies have lowered creative barriers without triggering copyright damages simply for “making art easier.” Third, it revives the circularity the Supreme Court warned against in Google v. Oracle: the rightsholder defines a market (“licensing my book for AI training”) and then claims harm because no fee was paid, a logic the judge himself rejects elsewhere in the opinion. Such broad-brush dangers may be better handled, if at all, by statutory solutions – collective licensing or compulsory schemes – than by case-by-case fair-use adjudication.

A Split Already Emerging

Two days before Kadrey, Judge William Alsup faced similar facts in Bartz v. Anthropic and dismissed the dilution concern, likening LLM training to teaching “schoolchildren to write well” – an analogy that, in his view, posed “no competitive or creative displacement that concerns the Copyright Act.” Judge Chhabria rebutted that comparison as “inapt,” pointing to an LLM’s capacity to let one user mass-produce commercial text. This internal split in the Northern District is an early signal that the Ninth Circuit, the Supreme Court, and perhaps even Congress will need to clarify the law.

Practical Takeaways

For authors contemplating suit based on a dilution theory, Kadrey offers both hope and the challenge of proof. To meet that challenge, plaintiffs must plead dilution explicitly. Retain economists early. Collect Amazon ranking histories, royalty statements, and genre-level sales curves. Show, with numbers, how AI thrillers or gardening guides cannibalize their human counterparts. AI defendants, in turn, should preserve training-data logs, document output filters, and press for causation: proof that their model, not the zeitgeist, dented the plaintiffs’ revenue. Until one side clears the evidentiary bar, most LLM cases will continue to rise or fall on traditional substitution and lost-license theories.

But whether “market dilution” becomes a real threat to AI companies or stands alone as a curiosity depends on whether other courts embrace it. With over 40 copyright cases against generative AI developers now winding through the courts, we shouldn’t have to wait long to see if Judge Chhabria’s dilution theory was the first step toward a new copyright doctrine or a one-off detour. 

The Bottom Line 

Despite Judge Chhabria’s warning that most unauthorized genAI training will be illegal, Kadrey v. Meta is not the death knell for AI training; it is a judicial thought experiment that became dicta for want of evidence. “Market dilution” may yet find a court that will apply it and a plaintiff who can prove it. Until then, it remains intriguing, provocative, and very much alone. Should an appellate court embrace the theory, the balance of power between AI developers and authors could tilt markedly. Should it reject or cabin the theory, Kadrey will stand as a cautionary tale about stretching fair use rhetoric beyond the record before the court.