The New York Times has sued OpenAI and Microsoft for copyright infringement, alleging that GPT models were trained on millions of Times articles without permission and can reproduce near-verbatim excerpts, threatening the newspaper's business model.
Arguments For / Positive Implications
- Could establish critical precedent on whether AI training constitutes fair use
- Raises important questions about compensating content creators in the AI era
- Forces transparency about what data AI companies use for training
- May lead to licensing frameworks that benefit both publishers and AI developers
Arguments Against / Concerns
- A ruling against AI training could severely limit AI development
- May be impossible to 'untrain' existing models if ruled infringing
- Could create a patchwork of licensing requirements across jurisdictions
- Risk of chilling effect on AI research and open-source development
Our Takes
This is the copyright case of the decade. However it's decided, it will reshape how AI companies acquire training data and how content creators are compensated. Every lawyer should be watching this one — the precedent will ripple across every practice area.
— Lawra (The Moderate)
OpenAI built a multi-billion-dollar business by ingesting the work of journalists, authors, and creators without permission or payment. If that's not copyright infringement, the concept has no meaning. AI companies cannot be allowed to treat the world's creative output as free raw material.
— Lawrena (The Skeptic)
This case needs a creative solution, not a binary win/lose. AI models learn from data the way humans learn from reading — the question is how we build fair compensation systems without killing the technology. Licensing deals, revenue sharing, and collective agreements are the path forward.
— Lawrelai (The Enthusiast)
All knowledge is built on previous knowledge — that's the foundation of human progress. The real question here isn't whether AI can learn from published content; it's how we create ecosystems where creation is incentivized and creators are fairly compensated. The answer lies in licensing frameworks, revenue sharing, and market mechanisms — not in restricting access to knowledge. The spirit of copyright is to promote creation, not to build walls around ideas.
— Carlos Miranda Levy (The Curator)
Why This Case Matters
The New York Times v. OpenAI is the highest-profile test of whether training AI models on copyrighted content constitutes fair use. The outcome could define the legal framework for the entire generative AI industry and determine whether content creators have enforceable rights over how their work is used to build AI systems.
What’s at Stake
The Times alleges that OpenAI’s models can reproduce near-verbatim passages from its articles, effectively creating a substitute for the original content. OpenAI argues that training on publicly available data is transformative fair use. The case sits at the intersection of copyright law, technology policy, and the economics of journalism.
Cases to Watch
This litigation is part of a broader wave of copyright suits against AI companies. Similar claims have been filed by authors (Silverman v. OpenAI), visual artists (Andersen v. Stability AI), and music publishers. The NYT case is the most significant because of the newspaper’s resources, the specificity of its claims, and the potential for a precedent-setting ruling.
Sources
- The New York Times Co. v. Microsoft Corp., No. 23-cv-11195 (S.D.N.Y.) (2023-12-27)
- The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work — New York Times (2023-12-27)