Simulation Scenario
It is Wednesday, 10:00 AM. Diana Rivera has called an emergency review meeting in the main conference room. The prospective client meeting was postponed with a vague excuse. Sarah Park has printed the AI-generated memo alongside her Westlaw verification showing three fabricated citations and a conflated statutory analysis. Marcus Chen has been asked to bring his laptop with the AI conversation history. Dr. James Whitfield has prepared a one-page summary of the AI governance recommendations he submitted two months ago that were never implemented. Everyone in the room knows this could have been a disaster. The question now is what happens next.
Stakeholders and Roles
Diana Rivera — Managing Partner
Profile
Championed the AI initiative, now facing the consequences of inadequate implementation. Must balance accountability with morale and the firm's continued AI adoption.
Objectives
- Determine appropriate accountability without scapegoating
- Establish immediate safeguards to prevent recurrence
- Preserve the firm's AI adoption momentum while restoring confidence
Constraints
Must report to the malpractice insurer next week. Knows that overreacting could cause staff to abandon AI tools entirely, while underreacting could lead to a real incident.
Exclusive Information
Diana has received an anonymous email from another paralegal stating that AI-generated memos have been sent to clients in at least two other matters without any attorney verification. She has not yet confirmed whether this is true.
Marcus Chen — Senior Paralegal
Profile
The paralegal who generated the memo. Experienced, overworked, and deeply embarrassed. Knows the AI tool was used exactly as most staff had been using it.
Objectives
- Demonstrate that the workflow failure is systemic, not individual
- Protect his professional standing and continued employment
- Advocate for realistic workload expectations alongside new AI protocols
Constraints
Knows that admitting the systemic nature of the problem may implicate colleagues, but staying silent makes him the sole scapegoat.
Exclusive Information
Marcus has chat logs showing that the AI tool's own documentation recommends "always verifying legal citations against authoritative databases" — a warning that was never incorporated into the firm's training materials. He also knows that two other paralegals have been working the same way.
Sarah Park — Associate Attorney
Profile
The attorney who assigned the memo and caught the errors. Relieved but aware of her own role in the delegation failure. Under pressure from partners to maximize AI-driven efficiency.
Objectives
- Establish that the delegation was reasonable but the verification gap was not
- Push for clear assignment protocols that specify jurisdiction, depth, and verification expectations
- Address the culture of time pressure that incentivizes cutting corners
Constraints
The partner she reports to has explicitly told associates to 'let the AI do the first draft and just clean it up.' Raising this could damage her standing.
Exclusive Information
Sarah discovered during her verification that one of the fabricated citations used a real case name but attributed completely different facts and holdings to it. The AI had taken a genuine citation and attached invented content to it — a more insidious form of hallucination than outright fabrication, because the citation itself checks out.
Dr. James Whitfield — Quality Assurance Lead
Profile
The firm's AI governance specialist who has been advocating for stricter protocols since being hired. Has a comprehensive governance proposal that was submitted but never reviewed.
Objectives
- Use this incident to implement the governance framework he has been proposing
- Establish a formal AI incident review and reporting process
- Secure budget and authority for ongoing AI competency assessments
Constraints
Must present reforms as constructive rather than punitive to maintain staff buy-in. Knows that overly burdensome protocols will be ignored just like the original guidelines.
Exclusive Information
James has benchmarked the firm's AI practices against 15 other firms of similar size. Rivera & Goldstein ranks in the bottom quartile for AI governance maturity. He also discovered that the enterprise AI vendor recently updated its terms of service to include a liability limitation clause that the firm has not reviewed.
Rules
Duration
60-90 minutes (divided into 3 phases)
Communication
Open discussion format; Diana chairs the meeting and manages speaking order. Participants may address each other directly but must stay in character.
Decision Method
The meeting must produce three written outcomes: (1) an accountability determination, (2) immediate process changes effective today, and (3) a 30-day action plan. Diana has final decision authority but must secure consensus from at least two other participants.
Phases
Incident Review (20 minutes)
Diana opens the meeting and asks each participant to present their account of what happened. Marcus walks through his process, including the exact prompt he used. Sarah explains how she caught the errors and what specifically was wrong. James presents his governance assessment. Each person has 4-5 minutes. No cross-examination yet — this phase is about establishing the facts.
Root Cause Analysis and Accountability (25 minutes)
Open discussion about what went wrong and who bears responsibility. Participants may reveal exclusive information strategically. Diana must navigate between individual accountability and systemic reform. Key tensions: Was this Marcus's fault for not verifying? Sarah's fault for a vague assignment? Diana's fault for championing adoption without governance? The firm's fault for ignoring James's proposals?
Resolution and Reform (20 minutes)
The group must agree on three deliverables: an accountability determination (consequences, if any, for individuals), immediate process changes (what changes today), and a 30-day action plan (what the firm will implement within a month). Each participant advocates for their priorities. Diana must forge consensus and make final decisions.
Scenario Variations
- What if the memo had reached the client? Replay the scenario assuming Sarah did not catch the errors and the memo was forwarded to the prospective client, who then retained the firm based on the inflated assessment. How does the accountability calculus change?
- What if the anonymous tip is true? During Phase 2, Diana reveals the anonymous email about unverified AI memos being sent to clients in other matters. The problem is systemic. How does the group respond when the incident is no longer isolated?
- What if Marcus pushes back? During the accountability discussion, Marcus reveals that the partner pressuring associates to maximize AI efficiency once told him directly: 'Just let the AI handle it, that is what we are paying for.' How does this change the responsibility analysis?
Debriefing
On Prompt Design
- What specific elements of Marcus's original prompt contributed to the AI producing fabricated output?
- How would you have structured the prompt differently? Write your improved version.
- Is it realistic to expect every paralegal to be an expert prompt engineer? If not, what institutional solutions help?
- How do you distinguish between an AI output that is wrong because of a bad prompt versus one that is wrong despite a good prompt?
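The questions above ask participants to write an improved prompt. One defensive structure is to constrain jurisdiction, state depth expectations, and instruct the model to flag uncertainty rather than guess. The sketch below is illustrative only — the template wording and field names are assumptions, not the firm's actual assignment protocol:

```python
# Illustrative template for a defensively structured legal research prompt.
# The constraints address the failure modes in the scenario: unspecified
# jurisdiction, no depth expectation, and no uncertainty flagging.
RESEARCH_PROMPT_TEMPLATE = """\
Task: Draft a research memo on: {question}
Jurisdiction: {jurisdiction} only. Do not cite authority from other jurisdictions.
Depth: {depth}
For every case you cite, give the full citation and a one-sentence statement
of its holding. If you are not certain a case exists, write "UNVERIFIED"
next to it instead of guessing.
Quote statutory text verbatim with section numbers; do not paraphrase or
conflate statutes.
End the memo with a list of every citation used, for manual verification
against Westlaw or Lexis.
"""

def build_prompt(question: str, jurisdiction: str, depth: str) -> str:
    """Fill the template; all three fields are mandatory by design."""
    return RESEARCH_PROMPT_TEMPLATE.format(
        question=question, jurisdiction=jurisdiction, depth=depth
    )

prompt = build_prompt(
    question="Enforceability of non-compete clauses for remote employees",
    jurisdiction="New York",
    depth="Survey of controlling appellate authority, 2-3 pages",
)
print(prompt)
```

Making the fields mandatory mirrors Sarah's objective of assignment protocols that always specify jurisdiction, depth, and verification expectations, so that no prompt can be issued without them.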
On Supervision and Verification
- Should AI-generated work product be subject to different verification standards than human-generated work product? Why or why not?
- How do you build verification into workflows without making AI tools slower than doing the research manually?
- What role should the assigning attorney play in specifying how AI should be used for delegated tasks?
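One institutional answer to building verification into workflows without slowing them down is to automate the mechanical step: extract every citation from a draft so reviewer effort goes into checking citations, not hunting for them. A minimal sketch, assuming US reporter-style citations; the regex is deliberately simplistic and a production tool would need a far richer citation grammar:

```python
import re

# Simplistic pattern for reporter citations such as "550 U.S. 544" or
# "123 F.3d 456". Real Bluebook citations need a much richer grammar.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)?|S\.\s?Ct\.)\s+\d{1,4}\b"
)

def extract_citations(memo_text: str) -> list[str]:
    """Return reporter-style citations found in the memo, in order of first
    appearance and deduplicated, so a reviewer verifies each exactly once."""
    seen, out = set(), []
    for match in CITATION_RE.findall(memo_text):
        if match not in seen:
            seen.add(match)
            out.append(match)
    return out

memo = (
    "Under Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007), "
    "and Smith v. Jones, 123 F.3d 456 (2d Cir. 1997), the claim fails."
)
for cite in extract_citations(memo):
    print("VERIFY:", cite)
```

Note that a tool like this only finds citations; it cannot catch the scenario's harder failure mode, where a real citation carries fabricated holdings. The human check against Westlaw or Lexis remains the control that matters.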
On Organizational Culture
- How does a culture of time pressure and efficiency metrics contribute to AI-related risks?
- What is the right balance between encouraging AI experimentation and enforcing quality controls?
- Should firms measure AI output quality as rigorously as they measure AI adoption rates?
- How do you create psychological safety for reporting AI-related errors and near-misses?
On Your Own Practice
- Have you ever used an AI tool for research without verifying every citation? What made you trust the output?
- Does your organization have clear protocols for AI-assisted legal research? If not, what would you propose?
- What is the single most important prompt engineering habit that would have prevented this incident?
- Name one concrete change you will implement in your own workflow within the next week.
References and Sources
Professional Standards
- ABA Model Rule 5.3 — Responsibilities Regarding Nonlawyer Assistants (supervision of AI-assisted work)
- ABA Formal Opinion 512 (2024) — Generative AI and the duties of competence, confidentiality, and supervision
- State Bar of California Practical Guidance on AI for Lawyers (2024) — Verification and supervision requirements
Prompt Engineering and AI Governance
- Legal Prompt Engineering: Principles for Reliable AI-Assisted Research — Stanford CodeX (2024)
- AALL Guidelines for the Use of AI in Legal Research (2024)
- International Legal Technology Association — AI Governance Framework for Law Firms (2024)