The Case
Roberto Mata filed a personal injury lawsuit against Avianca Airlines in the Southern District of New York, alleging he was injured when a metal serving cart struck his knee during a flight. The case seemed routine enough. What made it extraordinary — and ultimately infamous — was not the merits of the claim, but a catastrophic failure in how legal research was conducted.
Steven A. Schwartz, an attorney with more than 30 years of experience at the firm Levidow, Levidow & Oberman, was tasked with preparing the opposition to Avianca's motion to dismiss. Seeking efficiency, Schwartz turned to ChatGPT, a tool he had only recently begun experimenting with, to conduct his legal research. The chatbot obligingly provided a series of case citations that appeared to support his client's position.
There was one devastating problem: none of the cases were real. ChatGPT had "hallucinated" — generating plausible-sounding but entirely fabricated judicial decisions, complete with made-up case names, docket numbers, and legal reasoning. Schwartz, unfamiliar with AI's tendency to confabulate, did not verify the citations through traditional legal databases. Instead, he asked ChatGPT itself whether the cases were real. The chatbot confidently confirmed they were.
Timeline of Events
March 1, 2023 — The Brief Is Filed
Attorney Peter LoDuca, Schwartz's colleague at the firm, signs and files the opposition brief prepared by Schwartz; because the case had been removed to federal court, where Schwartz was not admitted, LoDuca is the attorney of record. The brief cites six judicial decisions, including Varghese v. China Southern Airlines, Martinez v. Delta Air Lines, and Petersen v. Iran Air. None of them exist.
March 15, 2023 — Opposing Counsel Raises the Alarm
In its reply, Avianca's counsel notes that it has been unable to locate several of the cited cases in any legal database and suggests the citations appear to be fictitious, requesting clarification.
April–May 2023 — The Court Demands Answers
On April 11, Judge P. Kevin Castel orders the attorneys to submit copies of the cited decisions. Schwartz, now realizing the gravity of the situation, cannot locate the cases through any legal database. The "opinions" eventually filed with the court are ChatGPT-generated and also fabricated. On May 4, Judge Castel issues an order to show cause why the attorneys should not be sanctioned.
June 8, 2023 — Show-Cause Hearing
Judge Castel convenes a show-cause hearing. Schwartz testifies under oath, admitting he used ChatGPT and expressing regret. He describes himself as "embarrassed, humiliated, and deeply remorseful." The record shows he had even asked ChatGPT whether the cases it provided were real, and the chatbot assured him they were.
June 22, 2023 — Sanctions Imposed
Judge Castel issues his sanctions opinion, finding that the attorneys acted in bad faith. Schwartz, LoDuca, and the firm are jointly ordered to pay a $5,000 penalty. They are further ordered to notify their client and each judge falsely identified as the author of a fabricated opinion, enclosing a copy of the sanctions ruling. The opinion is published, making international headlines.
Why This Matters
Mata v. Avianca was not the first time an attorney made an error, nor even the most consequential sanctions case in history. But it became a watershed moment because it crystallized a new category of professional risk. For the first time, the legal profession confronted the reality that AI tools — accessible, persuasive, and fundamentally unreliable in their current form — could undermine the very foundation of legal practice: the duty of candor to the tribunal. Within months of the ruling, courts across the United States and around the world began issuing standing orders requiring disclosure of AI use in legal filings.
Context Analysis
Understanding the systemic context that made this case possible — and inevitable.
Legal Framework
- ABA Model Rule 1.1 — Duty of competence requires lawyers to understand tools they use
- ABA Model Rule 3.3 — Duty of candor prohibits presenting false information to the court
- Federal Rule of Civil Procedure 11 — Requires reasonable inquiry into legal contentions
- 28 U.S.C. § 1927 — Attorneys are liable for unreasonably and vexatiously multiplying proceedings
Technology Context
- ChatGPT (GPT-3.5) launched November 2022, reaching 100M users in two months
- Large language models generate probabilistic text, not verified facts
- "Hallucination" — the model produces confident, plausible-sounding but fabricated outputs
- No built-in mechanism to verify claims against authoritative legal databases
Professional Standards
- Attorneys are personally responsible for every citation in their filings
- Delegation of research does not absolve the signing attorney of verification duties
- Technology competence is now recognized as part of professional competence
- Good faith is not a defense for failing to verify readily checkable facts
Systemic Factors
- Pressure on solo practitioners and small firms to adopt cost-saving tools
- Gap between the speed of AI adoption and the development of professional guidance
- Lack of training programs for lawyers on responsible AI use at the time
- Public perception of AI as infallible, fueled by media hype around ChatGPT's launch
Stakeholders & Roles
In the role simulation, participants assume the following roles. Each role has distinct objectives, constraints, and exclusive information.
Steven Schwartz
Profile
A veteran attorney with 30+ years of experience but limited technology expertise. He used ChatGPT for legal research for the first time, trusting its outputs without independent verification.
Objectives
- Minimize personal sanctions and preserve his law license
- Demonstrate genuine remorse and lack of malicious intent
- Protect his client's underlying case from being dismissed
Exclusive Information
Schwartz knows he explicitly asked ChatGPT to confirm the cases were real, and it did. He has screenshots of the conversation. He also knows a junior associate warned him about AI reliability weeks earlier, which he dismissed.
Judge P. Kevin Castel
Profile
A veteran federal judge in the Southern District of New York. Must balance proportionate punishment with the need to send a clear message to the legal profession about AI use.
Objectives
- Determine appropriate sanctions that uphold the integrity of the court
- Set meaningful precedent for AI use in legal filings without overreaching
- Ensure the underlying case can proceed fairly on its merits
Constraints
The judge is aware that other attorneys have privately reported similar issues to his clerks. He knows this ruling will be scrutinized nationally. He is also aware of the need not to chill legitimate innovation.
Peter LoDuca
Profile
Senior partner at Levidow, Levidow & Oberman. Signed and filed the brief without independently verifying the research. As the attorney of record, he bears formal responsibility for the filing.
Objectives
- Protect the firm's reputation and limit institutional liability
- Distance himself from Schwartz's AI use while acknowledging supervisory failure
- Maintain the client relationship and salvage the case
Exclusive Information
LoDuca knows the firm has no AI use policy and that other attorneys at the firm have also been using ChatGPT. He is also aware that the firm's malpractice insurer has been asking questions.
Roberto Mata
Profile
The plaintiff in the underlying personal injury case. A passenger injured on an Avianca flight whose legitimate legal claim is now overshadowed and jeopardized by his attorneys' conduct.
Objectives
- Ensure his personal injury claim is not dismissed due to his lawyers' mistakes
- Decide whether to seek new representation or continue with the current firm
- Understand his rights and potential recourse, including a malpractice claim
Exclusive Information
Mata was never informed that AI was being used in his case. He has been contacted by another law firm offering to take over his case and pursue a malpractice action against Levidow.
Opposing Counsel
Profile
Avianca's defense attorney from a large corporate firm. Discovered the fabricated citations during routine research verification and brought the issue to the court's attention.
Objectives
- Protect Avianca's interests and potentially seek dismissal with prejudice
- Decide how aggressively to pursue sanctions without appearing vindictive
- Use the situation strategically while maintaining professional courtesy
Constraints
Opposing counsel's own firm has recently adopted AI tools for document review. They must balance aggressive advocacy with the awareness that overzealous sanctions-seeking could backfire on the profession as a whole.
Legal Ethics Advisor
Profile
A representative from the New York State Bar Association's Committee on Professional Ethics. Present as an amicus-style observer evaluating the professional conduct implications.
Objectives
- Assess whether existing ethics rules adequately address AI use in legal practice
- Recommend whether new guidelines or formal opinions are needed
- Balance innovation with protection of the public interest
Exclusive Information
The ethics advisor has data showing that 23% of surveyed New York attorneys have used generative AI for work tasks, and only 11% of firms have adopted formal AI use policies. A formal ethics opinion is being drafted but has not yet been published.
Learning Activities
Six task types based on the Smoother methodology, designed to build progressively deeper understanding of the case and its implications.
- Read the full case narrative and court filings. Identify the five key facts that led to the sanctions order.
- List all actors involved (individuals, institutions, and technologies) and their roles in the sequence of events.
- Research what ChatGPT (GPT-3.5) was capable of in early 2023. Note what was publicly known about hallucination at that time.
- Identify the specific ABA Model Rules and Federal Rules at issue. Summarize each in one sentence.
- Note which aspects of this case feel familiar from your own professional experience and which are entirely new.
- Summarize the case in your own words (max 200 words), as if explaining it to a colleague who has never heard of it.
- Create a stakeholder map showing the relationships and power dynamics between all parties.
- Explain the case from Schwartz's perspective: What was he trying to achieve? What did he believe about the technology?
- Now explain it from Judge Castel's perspective: What obligations did the court have? What message needed to be sent?
- Identify the moment when the situation became irreversible. Could it have been caught earlier? By whom?
- Evaluate each decision Schwartz made in sequence. At which point did negligence become sanctionable conduct?
- Identify the systemic failures: What should the firm, the legal profession, and the technology industry each have done differently?
- Assess the reliability of different information sources in this case: ChatGPT, Westlaw/LexisNexis, the attorney's own judgment, peer review.
- Question the assumption that a "reasonable attorney" would have caught this error. Is that standard realistic in 2023?
- Were the sanctions proportionate? Compare with sanctions in other cases involving fabricated or misrepresented authorities.
- Analyze the tension between encouraging AI innovation in legal practice and protecting the integrity of the judicial process.
- Draft an AI use policy for a mid-sized law firm that addresses the specific failures exposed in this case.
- Design a verification workflow: When an attorney uses AI for legal research, what steps must be followed before a citation is included in a filing?
- Role-play the sanctions hearing: As your assigned role, prepare a 3-minute opening statement.
- Create a one-page "AI Research Checklist" that could be posted in a law firm's research department.
- Propose a continuing legal education (CLE) module on AI literacy for practicing attorneys. Outline the curriculum.
- Self-assess: Before and after studying this case, rate your understanding of AI risks in legal practice on a 1-10 scale. What changed?
- Exchange your AI use policy draft with another participant. Provide written feedback: What does it cover well? What gaps remain?
- Evaluate the proposed verification workflows from other groups. Which would be most practical in a real firm setting? Why?
- Review Judge Castel's actual sanctions opinion. Did it achieve its stated goals? What would you have done differently?
- Assess whether the sanctions had the intended deterrent effect. Research subsequent cases of AI misuse in court filings.
- What assumptions about AI did you hold before studying this case? Which of those assumptions have changed?
- How does this case connect to your own professional practice? Identify one specific way your behavior will change.
- Reflect on your emotional response to Schwartz's situation. Did you feel sympathy, judgment, or something else? How did that affect your analysis?
- Consider the "there but for the grace of God" factor: How close have you come to relying on unverified information in your own work?
- Write a brief reflection (150 words) on your three most important takeaways and one action item for the next 30 days.
Role Simulation
An immersive role-play exercise that places participants in the courtroom on the day of the show-cause hearing.
Simulation Scenario
It is June 22, 2023. Judge P. Kevin Castel has convened a show-cause hearing in the courtroom of the Daniel Patrick Moynihan United States Courthouse in Lower Manhattan. Steven Schwartz, Peter LoDuca, opposing counsel, and Roberto Mata are all present. The Legal Ethics Advisor has been invited by the court as an amicus observer. The judge has read the submissions. The courtroom is packed with journalists. Everyone knows that what happens in this room today will reverberate throughout the legal profession.
Rules
Total Duration
90 minutes
Communication
All statements directed through the judge; no sidebar conversations unless the judge permits
Decision Mechanism
The judge issues a ruling at the end; all parties may make final statements
Phases
Preparation (20 minutes)
Each participant studies their role card and exclusive information. Prepare your position, anticipate questions, and identify your key arguments. Schwartz and LoDuca may confer briefly. The judge reviews the case file and prepares questions.
Hearing Simulation (45 minutes)
The judge opens proceedings and addresses each party. Schwartz and LoDuca explain the situation. Opposing counsel presents their position. Roberto Mata may address the court. The Ethics Advisor offers perspective on professional standards. The judge questions all parties.
Deliberation & Ruling (25 minutes)
The judge retires briefly to deliberate (5 minutes), then returns to issue the ruling and explain the rationale. All parties may make brief final statements. The Ethics Advisor delivers closing observations about implications for the profession.
Optional Variations
- What if Schwartz had disclosed voluntarily? Replay the scenario assuming Schwartz discovered the error himself and immediately notified the court. How does the hearing change? Would sanctions still be appropriate?
- What if the client sues the firm? After the sanctions hearing, run a second simulation where Roberto Mata has retained new counsel and is pursuing a legal malpractice claim against Levidow, Levidow & Oberman.
- What if this happened in your jurisdiction? Adapt the scenario to your local rules of professional conduct and court procedures. How would the outcome differ?
Debriefing
After the simulation, use these questions to guide group discussion and individual reflection.
Reflection from Role
- What did it feel like to be in your assigned role? What pressures and constraints shaped your decisions?
- What arguments did you find most and least persuasive from the other parties?
- Was there a moment during the simulation where you felt genuinely conflicted? Describe it.
- If you could go back and change one thing your character did, what would it be?
Information Asymmetry Reveal
- Each role had exclusive information. Share yours now with the group. How would knowing this earlier have changed the dynamic?
- Were there moments where you suspected another party was withholding information? Were you right?
- How did information asymmetry affect the fairness and outcome of the hearing?
Out-of-Role Reflection
- Step out of your role. Do you agree with the ruling that was issued during the simulation? Why or why not?
- Compare the simulation's outcome with the actual court ruling. What was similar? What was different?
- What ethical principles were in tension during this case? Is there a clear "right answer"?
- Would you have sanctioned Schwartz more harshly, less harshly, or the same as Judge Castel actually did?
Real-World Connection
- Does your firm or organization currently have an AI use policy? If yes, would it have prevented this situation? If no, what should it include?
- Have you personally used generative AI for professional tasks? Did this case study change how you think about that?
- What is the single most important systemic change that could prevent future Mata v. Avianca situations?
- Name one concrete action you will take within the next month based on what you learned today.
References & Sources
Court Documents
- Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y. 2023) — Docket and case history
- Order of April 11, 2023 — Judge Castel's order requiring the attorneys to submit copies of the cited decisions
- Order to Show Cause, dated May 4, 2023 — Requiring the attorneys to explain why they should not be sanctioned
- Affidavit of Steven A. Schwartz, dated May 25, 2023 — Attorney's sworn account of ChatGPT use
- Sanctions Opinion and Order, dated June 22, 2023 — Judge Castel's published ruling imposing sanctions
Professional Standards
- ABA Model Rules of Professional Conduct, Rule 1.1 (Competence) — Comment 8 on technology competence
- ABA Model Rules of Professional Conduct, Rule 3.3 (Candor Toward the Tribunal)
- New York Rules of Professional Conduct, Rules 1.1 and 3.3
- Federal Rules of Civil Procedure, Rule 11 — Representations to the Court
Analysis & Commentary
- Thomson Reuters, "ChatGPT and Caselaw: Mata v. Avianca and the Perils of AI-Generated Legal Research" (2023)
- American Bar Association, "Lawyer Who Used ChatGPT Gets Sanctioned — And What It Means for the Profession" (2023)
- Reuters Legal News, "New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief" (June 2023)
- Legal Ethics Forum, "The Mata v. Avianca Problem: Hallucination, Verification, and the Duty of Competence" (2023)
- Artificial Lawyer, "Post-Mata: How Courts Are Responding to AI in Legal Filings" (2023-2024)
Ready to Experience This Case?
This case study is designed for guided facilitation as part of the Lawra Learning Program. Request a personalized program that includes role simulation with expert moderation.