Ongoing IP

Concord Music Group v. Anthropic

U.S. District Court, Northern District of California · United States · 2023-10-18 · 3:23-cv-01092-WHO

Major music publishers including Concord, Universal, and ABKCO sued Anthropic for training Claude on copyrighted song lyrics. The case is notable for its claim that Claude outputs verbatim lyrics on demand, and for naming Anthropic co-founder Dario Amodei as a personal defendant — testing individual liability for AI training decisions.

Arguments For / Positive Implications

  • Tests whether AI companies can be held liable for outputting copyrighted song lyrics
  • Raises the novel question of personal liability for executives who direct AI training on copyrighted material
  • Forces transparency about what copyrighted content is in Claude's training data
  • Could establish compensation frameworks for the music industry in the AI era

Arguments Against / Concerns

  • Personal liability claims against executives could discourage AI entrepreneurship and risk-taking
  • The music industry's track record of aggressive litigation (Napster, LimeWire) suggests punitive rather than productive outcomes
  • May not account for the significant differences between AI training and traditional piracy
  • Lyrics output may result from user prompting strategies rather than the model's default behavior

Our Takes

Lawra (The Moderate)
This case is the music industry's opening salvo against AI companies, and it's well-aimed. If Claude can output complete song lyrics on demand, that's a reproduction — and the music publishers have a strong argument that it's unauthorized. The personal liability angle against Dario Amodei is aggressive but raises a legitimate question: when executives knowingly direct training on copyrighted material, should they share liability? This case will test how far copyright law's existing framework can stretch to cover AI.
Lawrena (The Skeptic)
The music industry learned from Napster: go after the companies early and go hard. Anthropic trained Claude on copyrighted lyrics without licenses, and now Claude serves them up to anyone who asks. That's a reproduction and distribution machine, and the people who built it should answer for it. The personal liability claim against Amodei is exactly right — if you knowingly build a system that infringes copyright at scale, you shouldn't hide behind the corporate veil.
Lawrelai (The Enthusiast)
The music industry is using the same playbook it used against Napster, LimeWire, and every innovation that threatened its business model. Yes, Claude can sometimes output lyrics — so can any human who has heard a song. The real question is whether AI training is transformative use, and the weight of legal authority says it is. The personal liability claim is a pressure tactic designed to scare other AI founders. The industry should be negotiating licensing deals, not trying to put founders in legal jeopardy.
Carlos Miranda Levy (The Curator)
This case encapsulates the fundamental tension we need to resolve: AI companies need access to humanity's creative output to build transformative tools, and creators need fair compensation for their work. Both sides have legitimate claims. The output question is straightforward — if Claude reproduces verbatim lyrics, that should be filtered and licensed. The training question is more nuanced and philosophical: all knowledge, even in our genes, derives from previous knowledge and learning from sources. The personal liability angle is concerning because it could discourage the kind of bold innovation society needs. The path forward is collective licensing frameworks, not courtroom warfare.

Why This Case Matters

Concord Music Group v. Anthropic is one of the first major lawsuits targeting an AI company’s treatment of music copyrights. Filed by some of the world’s largest music publishers, it tests two critical questions: whether AI systems that output copyrighted lyrics are infringing, and whether the individuals who direct AI training on copyrighted material can be held personally liable.

What Happened

In October 2023, a coalition of music publishers — including Concord Music Group, Universal Music Publishing Group, and ABKCO Music — filed suit against Anthropic PBC in the Northern District of California. The complaint alleged that Anthropic trained its Claude AI model on copyrighted song lyrics and that Claude could reproduce these lyrics verbatim in response to user prompts.

The publishers demonstrated that Claude would output complete or near-complete lyrics to well-known songs when prompted with requests like “write me the lyrics to [song title].” The complaint included screenshots of Claude reproducing lyrics to songs by artists including The Beatles, The Rolling Stones, and Beyoncé.

The Personal Liability Claim

What distinguishes this case from other AI copyright suits is the inclusion of Anthropic co-founder and CEO Dario Amodei as a named defendant. The publishers allege that Amodei personally directed and oversaw the training of Claude on datasets containing copyrighted lyrics, making him individually liable for the resulting infringement.

This is a novel legal theory in the AI context. While corporate officers can be held personally liable for directing infringing activity under existing copyright law, applying this principle to AI training decisions is unprecedented. The claim raises the question: at what level of executive knowledge and involvement does personal liability attach for AI training on copyrighted material?

Output vs. Training

Like GEMA v. OpenAI in Germany, Concord v. Anthropic focuses heavily on output-side infringement — the claim that Claude reproduces copyrighted lyrics when prompted. However, the complaint also alleges that the training process itself constitutes unauthorized copying. This dual theory gives the publishers two paths to liability: even if training is found to be fair use, output-side reproduction of lyrics could independently constitute infringement.

Current Status

The case remains in active litigation as of early 2026. Anthropic has filed motions arguing that AI training is fair use and that the lyrics output issue has been substantially addressed through improved content filtering. The personal liability claims against Dario Amodei remain pending. The music industry is watching closely — the outcome could establish the framework for AI-music licensing for years to come.

The Broader Impact

Concord v. Anthropic sits at the intersection of several major trends in AI copyright law. It combines the training-side questions raised by Bartz and Kadrey with the output-side questions raised by GEMA v. OpenAI, while adding the novel dimension of personal executive liability. If the publishers prevail on the personal liability theory, it would send a powerful message to AI company founders: the decision to train on copyrighted material carries personal legal risk, not just corporate risk.

Sources

  • Concord Music Group, Inc. v. Anthropic PBC, No. 3:23-cv-01092-WHO (N.D. Cal.) (2023-10-18)
  • Music Publishers Sue Anthropic Over AI-Generated Song Lyrics — Billboard (2023-10-18)
  • Anthropic Faces Copyright Suit From Universal, Concord Music — The Verge (2023-10-19)

