Attorney Steven Schwartz used ChatGPT to draft a legal brief and submitted a filing containing six entirely fabricated case citations. The court sanctioned Schwartz and his co-counsel for submitting fake precedent without verification.
Holding
The court imposed $5,000 in sanctions on Schwartz and co-counsel Peter LoDuca, finding they acted in bad faith by submitting non-existent judicial opinions generated by AI and failing to verify them despite multiple opportunities.
Arguments For / Positive Implications
- Established a clear precedent that AI-generated legal research must be verified before submission
- Put the legal profession on notice about the risks of AI hallucinations
- Prompted bar associations worldwide to issue AI guidance
- Relatively modest sanctions signal courts will educate, not destroy careers
Arguments Against / Concerns
- Sanctions were arguably too lenient to truly deter future AI misuse
- The ruling focused on attorney negligence rather than creating specific AI rules
- May create a chilling effect discouraging beneficial AI use in legal practice
- Did not address the underlying question of AI tool reliability standards
Our Takes
This case is the wake-up call every lawyer needed. It doesn't mean you shouldn't use AI — it means you must verify everything AI produces, just as you would a junior associate's first memo. The duty of competence hasn't changed; the tools have.
Lawra (The Moderate)

This is exactly why I've been warning about AI in legal practice. An attorney trusted a machine over basic professional responsibility. Six fake cases. In a federal court. If this doesn't convince you that AI is a liability in the courtroom, nothing will.
Lawrena (The Skeptic)

Look, one lawyer misused a tool and got caught. That's not an indictment of AI — it's an indictment of skipping verification. Pilots don't stop using autopilot because someone crashed; they train better. This case should push us toward better AI workflows, not away from AI entirely.
Lawrelai (The Enthusiast)

This is not an AI failure — it's a human process failure. The real lesson isn't 'don't use AI.' It's 'build verification into your workflow.' AI augments what we can do; it doesn't replace the need to think critically. The legal profession should respond by integrating AI literacy into training, not by retreating from innovation. The firms that build structured AI workflows now will outperform those that ban the tools out of fear.
Carlos Miranda Levy (The Curator)
Why This Case Matters
Mata v. Avianca is the case that launched a thousand CLE courses. When attorney Steven Schwartz asked ChatGPT to find supporting case law for a personal injury claim against Avianca Airlines, the AI confidently produced six judicial opinions — none of which existed. Schwartz submitted the fabricated citations in a federal court brief without verification.
What Happened
Judge P. Kevin Castel discovered the non-existent cases and ordered Schwartz and co-counsel Peter LoDuca to show cause. When confronted, Schwartz initially stood by the citations, then admitted he had used ChatGPT and “did not think it could fabricate cases.” The court found this explanation unpersuasive and imposed sanctions.
The Broader Impact
This case single-handedly accelerated the legal profession’s reckoning with generative AI. Within months of the ruling, dozens of courts adopted standing orders requiring attorneys to disclose AI use. Bar associations from Texas to the European Union issued ethics opinions. The case became shorthand for “verify your AI output.”
Sources
- Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y. June 22, 2023)
- Here's What Happens When Your Lawyer Uses ChatGPT, New York Times (2023-05-27)