Simulation Scenario
It is Thursday, February 13 — one day after the CyberLaw Report published its story on the Hartwell, Sinclair & Pratt data breach. The firm's managing partner has convened an emergency governance policy drafting session. The state bar's ethics committee has requested a written description of the firm's AI governance measures by the end of next week. Crawford Pharmaceutical's board has given the firm until Friday to present a credible governance framework or face termination of the legal relationship. The room contains five people with very different perspectives on what went wrong, who is responsible, and what the governance framework should look like. They have 90 minutes to produce a working draft that satisfies clients, regulators, and the partnership.
Stakeholders & Roles
Margaret Sinclair — Managing Partner
Profile
Chairs the meeting and must synthesize competing views into a workable framework. Her priority is speed and credibility — the firm needs a presentable governance document before the Friday client deadline and the state bar deadline next week.
Objectives
- Produce a governance framework that is comprehensive enough to satisfy regulators and clients but concise enough to be implementable within 30 days
- Navigate the internal politics — balancing accountability for Ashworth with the need to retain her revenue and expertise
- Establish Osei's authority as CISO without alienating the partners, who are unaccustomed to technology oversight of their practices
Constraints
Sinclair knows that the insurance exclusion for unauthorized third-party applications means the firm may face uninsured liability. She must ensure the governance framework addresses this gap going forward.
Exclusive Information
Sinclair has received a private call from the senior partner at Crawford Pharmaceutical's alternative outside counsel firm, offering to 'take the matter off her hands gracefully.' She has not told anyone about this call. She also knows that two additional partners have been using unapproved AI tools and that the governance framework must address their situations before those become public as well.
Daniel Osei — Chief Information Security Officer
Profile
Three days into the job, working with incomplete knowledge of the firm's systems. Brings deep cybersecurity expertise from financial services but limited understanding of law firm culture, partnership dynamics, and legal professional obligations.
Objectives
- Establish a governance framework with teeth — mandatory tool vetting, data classification, monitoring, and enforcement mechanisms that meet enterprise security standards
- Secure budget and authority commitments from the partnership for the infrastructure upgrades needed to support the governance framework
- Create an incident response protocol so the firm is prepared if another breach occurs
Constraints
Osei's infrastructure assessment reveals that implementing enterprise-grade governance controls will cost $800,000-$1.2 million in the first year. The firm's current IT budget is $340,000 annually. He must propose a framework that is aspirationally robust but pragmatically phased.
Exclusive Information
Osei has discovered during his infrastructure review that the firm's document management system has a logging gap — there is no complete record of which documents Ashworth uploaded to LegalMind Analytics. The 4,200-page figure reported publicly is the vendor's estimate, but the actual exposure could be larger. He has not shared this finding with anyone yet because he wants to verify it before creating additional panic.
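Osei's verification task amounts to a reconciliation problem: comparing the firm's incomplete document-management audit log against the vendor's reported upload list to bound the true exposure. A minimal sketch of that reconciliation, assuming hypothetical CSV exports with a `doc_id` column (file names and formats are illustrative, not drawn from the scenario):

```python
# Hypothetical reconciliation sketch: bound the breach exposure by comparing
# the firm's DMS audit log against the vendor's reported upload list.
# File names, column names, and CSV format are assumptions for illustration.
import csv

def load_ids(path, column="doc_id"):
    """Read a CSV export and return the set of document IDs in the given column."""
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

def exposure_bounds(dms_log_path, vendor_report_path):
    """Return (confirmed, unaccounted) document-ID sets.

    confirmed   -- IDs appearing in both the DMS log and the vendor report
    unaccounted -- IDs the vendor reports but the DMS log never recorded,
                   i.e. direct evidence that the logging gap hides uploads
                   beyond what the firm can verify internally
    """
    dms = load_ids(dms_log_path)
    vendor = load_ids(vendor_report_path)
    return dms & vendor, vendor - dms
```

A non-empty `unaccounted` set would confirm Osei's fear that the 4,200-page vendor estimate understates the exposure, since it shows the vendor received documents the firm never logged.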
Victoria Ashworth — Senior Litigation Partner
Profile
The partner whose use of LegalMind Analytics caused the breach. She is present because Sinclair believes that excluding her would be both unfair and counterproductive — the governance framework needs input from practitioners who understand why attorneys adopt unauthorized tools.
Objectives
- Ensure the governance framework addresses the root cause — the gap between attorneys' technology needs and the firm's approved tool offerings — rather than just punishing unauthorized adoption
- Protect her team of three associates who used LegalMind Analytics at her direction from individual disciplinary action
- Demonstrate accountability and constructive engagement to rebuild credibility with the partnership and clients
Constraints
Ashworth's continued presence at the firm is politically contentious. Some partners want her suspended or expelled. She must contribute meaningfully to the governance discussion without appearing to be deflecting blame or minimizing the harm her actions caused.
Exclusive Information
Ashworth has conducted her own review and discovered that LegalMind Analytics' terms of service — which she never read — included a clause granting the vendor a royalty-free license to use uploaded documents for AI model training purposes. This means the exposed documents may have been incorporated into the vendor's training data, creating a broader and potentially irremediable exposure beyond the data breach itself. She has not disclosed this to anyone.
Kevin Park — Associate Representative, Firm Technology Committee
Profile
A fifth-year associate elected to the firm's technology committee by the associate class. Represents the perspective of junior lawyers who use AI tools daily and are most directly affected by governance policies they had no role in creating.
Objectives
- Ensure the governance framework does not eliminate AI tool access that associates rely on for competitive efficiency and workload management
- Advocate for a streamlined tool approval process — the current informal system takes months and discourages innovation
- Push for training and support resources rather than punitive enforcement, recognizing that associates often adopt tools because partners assign unrealistic deadlines
Constraints
Park knows that associates are the heaviest users of AI tools in the firm and that many are using unapproved tools for tasks ranging from research drafting to time entry. A restrictive governance framework will face immediate noncompliance from the associate class.
Exclusive Information
Park has been quietly running an anonymous survey of associates about AI tool use. The preliminary results show that 67% of associates have used at least one unapproved AI tool for firm work in the past six months. The most common reason cited is 'the approved tools are inadequate for my needs.' 23% report that a partner specifically asked them to use an AI tool that was not approved. He has not shared these results with firm leadership.
Eleanor Vance — Ethics Committee Chair
Profile
Senior counsel who chairs the firm's professional responsibility committee. A former state bar disciplinary counsel, she understands the regulatory exposure better than anyone in the room and has strong views on the ethical obligations that the governance framework must address.
Objectives
- Ensure the governance framework satisfies the state bar's expected requirements and positions the firm favorably in any disciplinary proceedings
- Establish clear ethical guidelines for AI use that go beyond technical security to address duties of competence, confidentiality, and supervision
- Create a reporting and escalation mechanism for AI-related ethical concerns that protects attorneys who raise issues in good faith
Constraints
Vance has been contacted by the state bar's ethics committee chair, who indicated informally that the bar is considering using this case as the basis for a new formal ethics opinion on AI governance in law firms. The governance framework the firm produces may influence statewide standards.
Exclusive Information
Vance has reviewed the disciplinary history and discovered that the firm received an informal ethics inquiry 18 months ago from a different partner about whether using a cloud-based AI summarization tool for client documents required client consent. The inquiry was routed to the firm's general counsel, who provided an informal 'green light' without conducting a formal analysis. This prior inquiry — never documented in a formal opinion — suggests the firm had notice of the AI governance gap well before the breach.
Rules
Duration
90 minutes total (three phases of 30 minutes each)
Communication
Open roundtable discussion chaired by Sinclair. Any participant may raise issues, propose language, or challenge others' positions. Sinclair manages time and resolves procedural disputes.
Decision Method
The session must produce a governance framework outline with specific provisions on: tool approval, data classification, training, monitoring, enforcement, and incident response. Sinclair has final authority on unresolved disputes, but consensus is strongly preferred.
Phases
Principles & Priorities (30 minutes)
Each participant presents their view of what the governance framework must achieve and their non-negotiable requirements. Osei presents the technical reality. Ashworth presents the practitioner's perspective. Park presents the associate viewpoint. Vance presents the ethical obligations. Sinclair synthesizes the priorities and identifies areas of agreement and conflict. By the end of this phase, the group should have agreed on the framework's guiding principles.
Framework Drafting (30 minutes)
The group works through the six core sections of the governance framework: (1) Tool Approval Process, (2) Data Classification and Handling, (3) Training and Certification, (4) Monitoring and Compliance, (5) Enforcement and Consequences, (6) Incident Response. For each section, participants propose specific provisions, debate alternatives, and work toward consensus. Exclusive information may be revealed as it becomes relevant. This is where the hardest negotiations occur.
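Sections (1) and (2) interlock: a tool-approval registry is only enforceable if each vetted tool is tied to the highest data classification it may receive, with unlisted tools denied by default. A minimal sketch of that default-deny check, with tool names and classification levels invented for illustration (the scenario does not specify either):

```python
# Hypothetical sketch linking the Tool Approval Process (section 1) to
# Data Classification and Handling (section 2) as a pre-upload check.
# Tool names and levels are illustrative assumptions, not firm policy.
from enum import IntEnum

class DataClass(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CLIENT_CONFIDENTIAL = 2
    PRIVILEGED = 3

# Registry produced by the Tool Approval Process: the highest classification
# each vetted tool may receive. Unlisted tools are unapproved by default.
APPROVED_TOOLS = {
    "citation_checker": DataClass.INTERNAL,
    "secure_summarizer": DataClass.PRIVILEGED,
}

def may_upload(tool: str, doc_class: DataClass) -> bool:
    """Default-deny: allow only if the tool is vetted at or above this level."""
    ceiling = APPROVED_TOOLS.get(tool)
    return ceiling is not None and doc_class <= ceiling
```

The design choice worth debating in the session is the default-deny posture itself: it closes the gap Ashworth exploited, but as Park would argue, it also guarantees friction unless the approval process that populates the registry is fast.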
Implementation & Accountability (30 minutes)
The group finalizes the framework and addresses implementation: Who owns each section? What is the timeline? What resources are required? How will success be measured? Each participant makes a commitment statement — what they will do to support implementation and what they need from others. The session concludes with each participant rating their confidence in the framework's effectiveness and identifying the single biggest risk to successful implementation.
Simulation Variations
- What if the state bar accelerates? During Phase 2, Vance receives a call and announces that the state bar has moved up its deadline — they want the governance framework description by Monday, not the end of next week. How does this compressed timeline affect the quality and scope of what the group can produce?
- What if Park reveals the survey results? At any point during the simulation, Park may choose to share the anonymous associate survey showing 67% unauthorized AI tool use. How do these numbers change the group's approach to enforcement and the perceived urgency of the governance challenge?
- What if a second breach is discovered? During Phase 3, Osei receives an alert and announces that his infrastructure review has uncovered evidence of a second unauthorized AI tool — used by a different partner — that may have exposed a smaller set of client documents. How does a second incident, discovered during the governance drafting session itself, affect the framework and the group dynamics?
Debriefing
Policy Substance
- Review the governance framework your group produced. Does it address all six core sections? Which section was the strongest? Which needs the most additional work?
- Compare your framework with the governance policies of real law firms (if available). What gaps exist? What did your group include that others might miss?
- Is the framework you produced realistic? Could a 180-lawyer firm actually implement it within 30 days? What would need to change?
- Does the framework balance security with usability? Would attorneys at this firm actually follow it, or would it drive more shadow AI use?
Stakeholder Dynamics
- Which stakeholder had the most influence over the final framework? Was that influence proportional to their expertise, their authority, or their emotional leverage?
- Share your exclusive information with the group. How would knowing these facts earlier have changed the discussion and the outcome?
- Were there moments where personal interests conflicted with institutional interests? How were those conflicts resolved — or avoided?
Governance Design Principles
- What is the right balance between prescriptive rules (specific dos and don'ts) and principles-based guidance (standards of care with attorney judgment)?
- How should a governance framework handle the tension between partner autonomy and institutional compliance? Is the traditional partnership model compatible with enterprise-grade governance?
- Should the governance framework be developed internally or should the firm engage external experts? What are the tradeoffs?
- How often should the governance framework be reviewed and updated? What triggers should prompt an immediate review outside the regular cycle?
Personal Application
- Does your own organization have an AI governance policy? After this simulation, what would you add or change?
- If you were asked to lead AI governance at your organization, what is the first thing you would do? What is the biggest obstacle you would face?
- Reflect on Ashworth's situation: a talented attorney who adopted a tool to do better work, without malicious intent, and caused a catastrophe. How does your governance framework prevent this without stifling innovation?
- Name one specific action you will take within the next 30 days to improve AI governance in your professional environment.
References & Sources
Professional Standards
- ABA Model Rules of Professional Conduct, Rules 1.6(c), 5.1, and 5.3 — Confidentiality and supervisory duties
- ABA Formal Opinions 477R (2017) and 483 (2018) — Technology security and post-breach obligations
- State bar ethics opinions on AI governance requirements for law firms
Governance Frameworks & Resources
- NIST AI Risk Management Framework (AI RMF 1.0) — Comprehensive AI governance guidance adaptable to legal practice
- ISO/IEC 42001:2023 — AI Management System standard for organizational AI governance
- ACC (Association of Corporate Counsel) — Model Policies for Outside Counsel AI Use
Ready to Run This Simulation?
This role simulation is designed for guided facilitation as part of the Lawra Learning Program. Request a personalized program that includes expert moderation, governance framework templates, and structured debriefing.