Key Metric
12 errors caught pre-filing
The Context
A 40-attorney litigation firm in Chicago with a strong appellate practice, handling complex commercial disputes and class actions in federal and state courts.
Practice Area: Complex commercial litigation and appellate practice
Jurisdiction: United States (7th Circuit, Northern District of Illinois, Illinois state courts)
Team Size: 40 attorneys (12 in appellate group), 8 paralegals
The Challenge
Problem: Following the Mata v. Avianca sanctions in 2023, the firm's managing partner mandated a review of citation verification practices. An internal audit revealed that 8% of citations in recently filed briefs contained errors — wrong pinpoint cites, outdated case statuses, or inaccurate quotations.
Previous Approach: Junior associates manually checked citations using Westlaw and Shepard's, typically under time pressure before filing deadlines. Citation checking was often the last step and frequently rushed.
Stakes: Beyond the reputational risk of filing briefs with bad citations, the firm faced potential sanctions, client liability, and insurance implications. The Mata case made this an existential concern for litigation firms.
The Approach
Tools Used: CoCounsel (Thomson Reuters AI) for citation verification, integrated with the firm's existing Westlaw subscription. Vincent AI for parallel cross-checking of quotation accuracy.
Implementation Strategy: Implemented a mandatory "AI citation audit" as a final step before any brief filing. Every brief now goes through three layers: (1) attorney drafting with standard research tools, (2) AI-powered citation verification that checks case validity, pinpoint accuracy, and quotation fidelity, (3) senior attorney review of AI-flagged issues.
Investment: $24,000 annually in additional AI tool licensing. The mandatory audit adds approximately 2-3 hours to each brief's timeline, but this is offset by reduced time spent on manual citation checking.
The Results
Quantified Outcomes
- In the first six months, the AI audit flagged issues in 34 out of 89 briefs reviewed (38%)
- Of those, 12 contained citation errors serious enough to potentially trigger court inquiries
- 3 briefs contained citations to cases that had been overruled or distinguished — the most dangerous type of error
- Citation accuracy rate improved from 92% to 99.6% after implementation
Qualitative Outcomes
- Associates reported feeling more confident about the accuracy of their filings
- Two judges informally commended the firm on the quality of its citations during oral argument
- The process surfaced broader research quality issues that led to improved training for junior associates
The Lessons
What Worked
- Making the AI audit mandatory rather than optional ensured consistent adoption
- Framing the tool as a safety net — not a replacement for research skills — reduced attorney resistance
- Sharing anonymized examples of caught errors in firm-wide meetings demonstrated concrete value
What Didn't
- The AI occasionally flagged false positives, particularly with state court citations and unpublished opinions
- Initial resistance from senior partners who viewed it as questioning their work required careful change management
Advice
After Mata v. Avianca, every litigation firm should have an AI citation verification step. The cost is minimal compared to the risk. But don't just install the tool — build a workflow around it and make it non-negotiable.
Our Takes
Post-Mata v. Avianca, citation verification isn't optional — it's a professional obligation. This firm's three-layer approach (draft, AI audit, senior review) is a model of how AI should integrate into quality assurance workflows. The key insight is making the audit mandatory, not optional. When AI-assisted review is discretionary, it is the first corner cut under deadline pressure. Making it non-negotiable is what separates serious adoption from performative technology.
Lawra (The Moderate)
Let's be precise: the AI flagged 38% of briefs as having issues, but how many of those flags were false positives? The firm mentions this as a problem with state court citations and unpublished opinions. In a high-pressure litigation environment, false positives create alert fatigue — attorneys start dismissing AI warnings as noise. And the '99.6% accuracy rate' — is that the AI's accuracy, or the combined human-AI accuracy? Attribution matters when we're evaluating the tool's actual contribution.
Lawrena (The Skeptic)
Twelve citation errors caught before filing — any one of which could have led to sanctions, embarrassment, or worse. And three briefs citing overruled cases! In a post-Mata world, this isn't a nice-to-have, it's malpractice prevention. The investment of $24,000/year is a rounding error compared to a single sanctions motion. Every litigation firm should implement this yesterday.
Lawrelai (The Enthusiast)
The most revealing detail here isn't the technology — it's the sociology. Senior partners initially resisted because they saw AI review as questioning their competence. That's the innovator's dilemma in microcosm: the people most invested in the current system are often the last to embrace what makes it better. The firm succeeded because they reframed the tool as a safety net rather than a critique. Change management is always about narrative as much as functionality.
Carlos Miranda Levy (The Curator)