Concluded · Ethics

Mata v. Avianca, Inc.

U.S. District Court, Southern District of New York · United States · 2023-06-22 · 22-cv-1461 (PKC)

Attorney Steven Schwartz used ChatGPT to research a legal brief and submitted a filing containing six entirely fabricated case citations. The court sanctioned Schwartz and his colleague for submitting fake precedent without verification.

Holding

The court imposed $5,000 in sanctions on Schwartz and co-counsel Peter LoDuca, finding they acted in bad faith by submitting non-existent judicial opinions generated by AI and failing to verify them despite multiple opportunities.

Arguments For / Positive Implications

  • Established a clear precedent that AI-generated legal research must be verified before submission
  • Put the legal profession on notice about the risks of AI hallucinations
  • Prompted bar associations worldwide to issue AI guidance
  • Relatively modest sanctions signal courts will educate, not destroy careers

Arguments Against / Concerns

  • Sanctions were arguably too lenient to truly deter future AI misuse
  • The ruling focused on attorney negligence rather than creating specific AI rules
  • May create a chilling effect discouraging beneficial AI use in legal practice
  • Did not address the underlying question of AI tool reliability standards

Our Takes

Lawra (The Moderate)
This case is the wake-up call every lawyer needed. It doesn't mean you shouldn't use AI — it means you must verify everything AI produces, just as you would a junior associate's first memo. The duty of competence hasn't changed; the tools have.
Lawrena (The Skeptic)
This is exactly why I've been warning about AI in legal practice. An attorney trusted a machine over basic professional responsibility. Six fake cases. In a federal court. If this doesn't convince you that AI is a liability in the courtroom, nothing will.
Lawrelai (The Enthusiast)
Look, one lawyer misused a tool and got caught. That's not an indictment of AI — it's an indictment of skipping verification. Pilots don't stop using autopilot because someone crashed; they train better. This case should push us toward better AI workflows, not away from AI entirely.
Carlos Miranda Levy (The Curator)
This is not an AI failure — it's a human process failure. The real lesson isn't 'don't use AI.' It's 'build verification into your workflow.' AI augments what we can do; it doesn't replace the need to think critically. The legal profession should respond by integrating AI literacy into training, not by retreating from innovation. The firms that build structured AI workflows now will outperform those that ban the tools out of fear.

Why This Case Matters

Mata v. Avianca is the case that launched a thousand CLE courses. When attorney Steven Schwartz asked ChatGPT to find supporting case law for a personal injury claim against Avianca Airlines, the AI confidently produced six judicial opinions — none of which existed. Schwartz submitted the fabricated citations in a federal court brief without verification.

What Happened

Judge P. Kevin Castel discovered the non-existent cases and ordered Schwartz and co-counsel Peter LoDuca to show cause. When confronted, Schwartz initially stood by the citations, then admitted he had used ChatGPT and “did not think it could fabricate cases.” The court found this explanation unpersuasive and imposed sanctions.

The Broader Impact

This case single-handedly accelerated the legal profession’s reckoning with generative AI. Within months of the ruling, dozens of courts adopted standing orders requiring attorneys to disclose AI use. Bar associations from Texas to the European Union issued ethics opinions. The case became shorthand for “verify your AI output.”

