Don't Submit AI Output Without Reading Every Word
Lawyers are personally responsible for every word they file with a court. AI-generated text can contain fabricated citations, hallucinated case law, and confident-sounding nonsense.
Risk
Professional sanctions, malpractice liability, case dismissal, judicial referral to bar disciplinary authorities
Real-World Example
In Mata v. Avianca, Inc. (S.D.N.Y. 2023), attorneys Steven Schwartz and Peter LoDuca submitted a brief containing six entirely fabricated case citations generated by ChatGPT. When opposing counsel and Judge P. Kevin Castel could not locate the cases, the attorneys doubled down, asking ChatGPT itself to confirm that the cases existed. Judge Castel found they had acted in bad faith by failing to verify the AI's output and imposed a $5,000 sanction, jointly and severally, on the attorneys and their firm. The case became a global cautionary tale.
Mitigation
Read every word of AI-generated output. Verify every citation against primary sources — Westlaw, LexisNexis, or official court databases. Treat AI output as an unverified first draft from an unreliable intern, not as finished work product.
Look, I get it — the whole point of AI is to save time. But 'saving time' by filing fake cases is like 'saving money' by not paying your malpractice insurance. You wouldn't file a brief your paralegal wrote without reading it. You definitely shouldn't file one your robot wrote without reading it. The AI doesn't have a law license to lose. You do.
-- Lawra