The Case
Margaret Chen had been managing partner of Chen, Okafor & Associates for eleven years. The 14-attorney firm, specializing in commercial real estate and corporate transactions, was profitable but increasingly pressured by clients demanding faster turnaround and lower fees. When a client casually mentioned that their in-house team was using AI to review leases 'in minutes instead of hours,' Margaret felt the ground shift beneath her.
At the National Legal Technology Conference in Chicago, Margaret attended a demo by LegalMind AI. The presentation was extraordinary: the system appeared to review a 60-page commercial lease, flag 23 risk clauses, generate a redline comparison against the firm's standard positions, and produce a client-ready summary — all in under two minutes. The sales representative quoted an annual license of $48,000 for the firm's size. 'That's less than a first-year associate's annual salary,' Margaret noted. 'And it doesn't take vacation.'
Margaret returned to the office on Monday morning and called an emergency partners' meeting. 'I've seen the future,' she announced. 'If we don't move on this within 30 days, we'll be the last firm in this market still doing contract review by hand.' She placed the vendor's proposal on the table: $48,000 annual license, 3-year commitment, implementation support included. She wanted a vote by Friday.
Key Timeline
Week 1 — The Conference and the Pitch
Margaret attends the LegalMind AI demo at the tech conference. The vendor uses a pre-selected commercial lease optimized for demonstration. Margaret does not request a trial or reference clients. She returns convinced the tool will transform the firm's practice.
Week 2 — The Partners' Debate
Margaret presents the proposal to the partnership. David Okafor, the co-founding partner, raises concerns about data security, accuracy validation, and the 3-year commitment. Junior partner Sarah Kim asks about integration with the firm's existing document management system. The vote is postponed pending further investigation.
Week 3 — Due Diligence Begins
Sarah Kim conducts an informal evaluation: she submits three of the firm's actual contracts to the vendor's trial portal. The results are mixed — the tool correctly identifies standard risk clauses but misses a jurisdiction-specific zoning restriction that is critical in their practice, and incorrectly flags a standard indemnification clause as 'high risk.'
Week 4 — The Decision Point
The vendor's 'early adopter discount' expires Friday. Margaret pushes for approval. David insists on a formal evaluation framework. The firm's IT consultant raises concerns about data residency and the vendor's terms of service, which permit 'anonymized data' to be used for model improvement. The partnership must decide.
Why This Matters
Every law firm will face a version of this decision. The question is not whether to adopt AI — it is how to evaluate AI claims critically, resist the pressure of vendor timelines and competitive anxiety, and build a disciplined process for technology adoption that protects the firm's clients, reputation, and bottom line. Understanding the gap between what AI can do in a staged demo and what it delivers in daily practice is the most important lesson a legal professional can learn about AI today.
Context Analysis
The landscape of factors shaping this decision — technical, professional, financial, and organizational.
Technical Reality
- AI contract review tools perform best on standardized document types they were trained on — performance degrades significantly on non-standard or jurisdiction-specific language
- Demo environments are optimized: pre-selected documents, curated outputs, and controlled conditions that do not reflect production use
- Integration with existing systems (DMS, billing, matter management) is often more complex and costly than vendors represent
- Accuracy metrics quoted by vendors typically reflect best-case scenarios and may not account for false negatives (missed risks)
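The last point is worth making concrete. A single headline "accuracy" figure conflates two very different error types: spurious flags (false positives, which waste review time) and missed risks (false negatives, which create liability). A minimal sketch, using invented numbers rather than any vendor's data, shows how the two come apart:

```python
# Hypothetical illustration: why one "accuracy" number can hide the
# errors that matter most in legal review. All figures are invented.

def review_metrics(true_positives, false_positives, false_negatives):
    """Precision: of the clauses flagged, how many were real risks.
    Recall: of the real risks present, how many the tool caught."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Suppose a lease contains 23 genuine risk clauses and the tool flags
# 25 clauses: 20 correct, 5 spurious, and 3 genuine risks missed.
precision, recall = review_metrics(true_positives=20,
                                   false_positives=5,
                                   false_negatives=3)
print(f"precision: {precision:.0%}")  # → 80%: one flag in five is noise
print(f"recall:    {recall:.0%}")     # → 87%: 3 of 23 real risks missed
```

An impressive-sounding recall still means roughly three missed risk clauses per lease in this scenario — exactly the kind of failure Sarah's informal trial surfaced. A vendor quoting a single accuracy figure should be asked which of these quantities it measures.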
Professional Obligations
- ABA Model Rule 1.1, Comment 8: Duty of competence includes understanding the benefits and risks of technology
- ABA Formal Opinion 477R: Lawyers must make reasonable efforts to prevent inadvertent or unauthorized disclosure of client information
- State bar opinions increasingly require firms to evaluate AI tools before deployment, not merely after problems arise
- Supervisory obligations (Rule 5.1) require partners to ensure that AI-assisted work product meets professional standards
Financial Considerations
- $48,000 annual license represents approximately 3% of the firm's annual revenue
- Hidden costs: training time, workflow redesign, quality assurance processes, potential re-work when the tool produces errors
- 3-year commitment totals $144,000 — comparable to a senior associate's annual salary
- ROI depends entirely on actual accuracy and adoption rates, neither of which can be determined from a demo
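These hidden costs can be made tangible with a back-of-the-envelope model. In the sketch below, only the $48,000 license fee comes from the vendor's proposal; every other figure is an assumption invented for illustration, and a real analysis would substitute the firm's own numbers:

```python
# Rough first-year total-cost-of-ownership model for the proposal.
# Only license_fee comes from the case; all other values are assumptions.

license_fee = 48_000          # vendor's quoted annual license
training_hours = 14 * 10      # assume 10 hours of training per attorney
attorney_hourly_cost = 150    # assumed internal cost per attorney-hour
workflow_redesign = 8_000     # assumed one-time process/QA setup cost
qa_review_hours = 250         # assumed annual time spot-checking AI output

first_year_cost = (
    license_fee
    + training_hours * attorney_hourly_cost
    + workflow_redesign
    + qa_review_hours * attorney_hourly_cost
)
print(f"Estimated first-year cost: ${first_year_cost:,}")

# Break-even: billable review hours the tool must genuinely save.
billing_rate = 300            # assumed average billing rate
break_even_hours = first_year_cost / billing_rate
print(f"Review hours needed to break even: {break_even_hours:,.0f}")
```

Under these assumptions the true first-year cost is more than double the license fee, which is precisely why a demo alone cannot establish ROI.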
Organizational Readiness
- The firm has no AI use policy, no technology evaluation framework, and no designated technology decision-maker
- Attorney comfort with technology varies dramatically — from Margaret's enthusiasm to David's caution to associates who use AI daily in their personal lives
- Change management is as critical as technology selection — tools that attorneys resist using deliver zero ROI regardless of capability
Stakeholders & Roles
Each participant assumes one role and advocates for their position throughout the case study discussion.
Margaret Chen — Managing Partner
Profile
Experienced litigator turned managing partner. Visionary leader who sees AI as existential — the firm either adopts or dies. Attended the demo personally and was genuinely impressed. Tends to make fast decisions based on strong instincts.
Objectives
- Secure partnership approval for the LegalMind AI purchase within the vendor's discount window
- Position the firm as a technology leader in the local commercial real estate market
- Demonstrate to clients that the firm is investing in efficiency and innovation
Constraints
Margaret has staked her credibility on this recommendation. Walking it back would undermine her leadership position. She is also aware that the firm's largest client mentioned AI adoption approvingly — but she has not verified whether that client would actually value AI-reviewed contracts from the firm.
David Okafor — Co-Founding Partner
Profile
Methodical, risk-averse transactional attorney who built the firm's reputation on meticulous attention to detail. Supportive of technology in principle but deeply skeptical of rushed decisions. Has seen previous technology investments underperform.
Objectives
- Ensure any AI adoption is preceded by a rigorous, documented evaluation process
- Protect client confidentiality and the firm's professional obligations above competitive pressure
- Prevent the firm from locking into a 3-year contract for a tool that may not deliver on its promises
Constraints
David knows that his cautious approach has sometimes caused the firm to miss genuine opportunities. He is also aware that two associates have already been using free AI tools for personal research — a shadow IT risk that an official tool adoption might actually mitigate.
Sarah Kim — Junior Partner and Informal Tech Lead
Profile
The firm's most technically literate attorney. Conducted the informal trial evaluation and discovered the tool's limitations. Believes in AI's potential but insists on evidence-based adoption. She is the only partner who has actually tested the tool with real documents.
Objectives
- Establish a formal technology evaluation framework that the firm can use for this and future AI tools
- Present the trial results — both positive and negative — objectively to the partnership
- Advocate for a structured pilot program rather than an immediate full commitment
Constraints
Sarah is the most junior partner and is conscious that pushing back too hard against Margaret's proposal could affect her standing. She also knows that her trial was limited to three contracts and may not be statistically significant — but the errors she found were substantively serious.
James Whitfield — External IT Consultant
Profile
The firm's part-time technology consultant. Has reviewed the vendor's terms of service and data handling practices. Brings a technical perspective that the attorneys lack but sometimes struggles to translate technical risks into legal language the partners understand.
Objectives
- Ensure the firm understands the data security and privacy implications of the vendor's terms of service
- Recommend minimum technical safeguards before any cloud-based AI tool is deployed with client documents
- Establish his role as the firm's technology advisor for future AI-related decisions
Constraints
James discovered that the vendor's terms of service include a clause permitting the use of 'anonymized and aggregated data' for model training. He is not certain whether this creates a confidentiality risk under legal ethics rules, and he needs a lawyer's input to assess the professional implications.
Learning Activities
Six task categories based on the Smoother methodology, progressing from factual exploration to reflective metacognition.
- Map the sequence of events from Margaret's conference attendance to the Friday deadline. Identify every decision point where a different choice could have changed the trajectory.
- List all the claims the vendor made during the demo. For each claim, note whether it is verifiable, how you would verify it, and what information is missing.
- Research three real-world AI contract review tools currently on the market. Compare their published capabilities, pricing, and data handling policies.
- Identify the specific professional obligations (ABA Model Rules, state bar opinions) that apply to a firm evaluating AI tools for client work.
- Rewrite the case from each stakeholder's perspective in 150 words. How does the same set of facts look different depending on where you sit?
- Explain why Margaret's enthusiasm and David's skepticism are both rational responses to the same information. What prior experiences might shape each perspective?
- Interpret Sarah's trial results: what do they actually tell us about the tool's reliability? What are the limitations of a three-contract sample?
- Analyze the vendor's sales tactics: early adopter discount, 3-year commitment, demo with pre-selected documents. What does each tactic achieve?
- Evaluate the claim that '87% accuracy' in clause identification is sufficient for legal practice. What accuracy rate would you require, and why?
- Assess whether the vendor's data handling terms create a genuine confidentiality risk or whether 'anonymized and aggregated' use is acceptably safe. What additional information would you need?
- Challenge the assumption that not adopting AI will cause the firm to 'fall behind.' Is there evidence that clients are selecting firms based on AI adoption? What does the competitive landscape actually look like?
- Analyze the hidden costs of adoption that Margaret's proposal does not account for. Build a realistic total cost of ownership estimate for the first year.
- Draft a formal AI tool evaluation framework for Chen, Okafor & Associates — a reusable template the firm can apply to any future technology decision.
- Design a 30-day pilot program for LegalMind AI that would generate sufficient data to make an informed adoption decision. Specify metrics, document selection, and success criteria.
- Write a counter-proposal to the vendor that protects the firm's interests: modified data handling terms, a shorter initial commitment, and performance guarantees.
- Create a client communication explaining that the firm is adopting AI-assisted contract review. Address likely client concerns about accuracy, confidentiality, and billing.
- Peer-review another group's evaluation framework. Is it comprehensive? Is it practical? Would it actually be used by busy attorneys?
- Compare your pilot program design with others in the group. Which approach would generate the most reliable data in the least time?
- Assess the partnership dynamics at play. How do power, seniority, and interpersonal relationships affect technology decisions in a small firm?
- Evaluate your own bias: are you naturally more sympathetic to Margaret's position or David's? How does that affect your analysis?
- Before this case study, how would you have evaluated an AI tool for your practice? What steps would you have skipped? What will you do differently now?
- Reflect on a time you made a technology decision based on a demo, recommendation, or marketing claim. How did reality compare to expectations?
- What is the most important thing you learned from this exercise — and why does it matter for your professional practice?
- Consider the broader question: how should the legal profession develop institutional competence in technology evaluation? Whose responsibility is it?
Putting It Into Practice
Identify one AI tool that is relevant to your current practice. Using the evaluation framework you developed in this case study, conduct a preliminary assessment. Document your findings, including what you could and could not determine from publicly available information, and identify what a pilot program would need to test.
References & Sources
Professional Standards
- ABA Model Rules of Professional Conduct, Rule 1.1, Comment 8 — Duty of Technological Competence
- ABA Formal Opinion 477R — Securing Communication of Protected Client Information (2017)
- ABA Resolution 112 — Encouraging development of AI governance frameworks in legal practice (2019)
Further Reading
- Thomson Reuters, "State of the Legal Market Report" — annual analysis of technology adoption trends in law firms
- ILTA (International Legal Technology Association), "Legal Technology Buyer's Guide" — vendor evaluation frameworks for legal-specific tools
- Stanford Center on Legal Informatics (CodeX), "AI and Legal Practice" research series
Ready to Build Your AI Evaluation Skills?
This case study is part of Module 1 of the Lawra Learning Program. Request a facilitated session that includes guided discussion, role assignments, and expert debriefing tailored to your firm or organization.