Simulation Scenario
It is 9:00 AM in the State Bar Association's hearing room. The Ethics Committee has convened a special session prompted by the Okafor v. Nextera case, but the implications extend far beyond a single dispute. Judge Patricia Holden has referred the evidentiary questions to the committee before ruling. Priya Sharma is present to testify about both the fabricated evidence she discovered and the client request she refused. Robert Kessler chairs the committee and must guide it toward actionable standards. A Court Technology Advisor has been invited to provide technical context on AI detection capabilities and limitations. Outside the hearing room, journalists and legal bloggers are waiting. Whatever this committee decides will be cited in courtrooms across the country.
Stakeholders & Roles
Judge Patricia Holden — Presiding Judge
Profile
Federal district judge who referred the evidentiary questions to the ethics committee. She needs practical, workable standards she can apply in her courtroom — not aspirational principles that collapse under real-world conditions.
Goals
- Obtain clear guidance on authentication standards for AI-generated documents
- Understand the reliability of forensic AI detection before making evidentiary rulings
- Ensure any standards adopted are practical for judges who are not technology experts
Constraints
Must maintain judicial impartiality and cannot advocate for either party in the underlying case. Privately concerned that overly strict authentication standards would paralyze discovery in complex cases.
Exclusive Information
Judge Holden has received a sealed motion from Nextera's counsel arguing that AI-generated documents should be treated the same as any computer-generated document under existing authentication rules — no new standards needed. She has also learned that two other cases in her district involve suspected AI-generated evidence.
Priya Sharma — Plaintiff's Attorney
Profile
Employment litigator who discovered the fabricated investigation report and declined her client's request to use AI for testimony enhancement. She is both a witness to the specific facts and an advocate for clear ethical boundaries.
Goals
- Establish that AI-fabricated evidence warrants severe sanctions, including potential criminal referral
- Draw a clear, enforceable line between legitimate AI assistance and impermissible evidence fabrication
- Obtain guidance that protects attorneys who refuse client requests for AI-enhanced testimony
Constraints
Must maintain client confidentiality about Okafor's specific testimony enhancement request. Can discuss the general ethical question without revealing privileged communications.
Exclusive Information
Priya has learned from a colleague at another firm that Nextera's defense counsel has used AI to generate 'internal documents' in at least one other case. If true, this is not an isolated incident but a pattern of practice. She cannot prove this yet.
Marcus Webb — Defense Attorney (Nextera's Counsel)
Profile
Senior partner at a major defense firm representing Nextera Systems. His firm submitted the challenged investigation report. He maintains the report is authentic and that the forensic linguistics analysis is unreliable junk science.
Goals
- Defend the admissibility of the investigation report under existing evidence rules
- Challenge the reliability of forensic AI detection methodologies
- Prevent the establishment of new authentication burdens that would disadvantage defendants in employment cases
Constraints
Must zealously advocate for his client's position while maintaining his own ethical obligations. If the report is fabricated and he knew, he faces personal disciplinary exposure.
Exclusive Information
Webb privately knows the investigation report was drafted by a junior associate using AI and then presented as an authentic company document. He did not instruct the associate to do this but became aware of it after the document was produced. He has not yet decided what to do with this knowledge.
Robert Kessler — Ethics Board Chair
Profile
Chair of the state bar's ethics committee. A respected legal ethics professor who must guide a divided committee toward standards that are both principled and practical.
Goals
- Develop clear, enforceable standards for AI use in evidence and testimony
- Achieve committee consensus despite deeply divided members
- Produce guidance that addresses both the immediate case and the broader systemic challenges
Constraints
The committee is split between members who want strict prohibitions and those who favor permissive standards. Kessler must find a middle ground. His draft opinion is due within 30 days.
Exclusive Information
Kessler has received a confidential survey showing that 31% of attorneys in the jurisdiction have used generative AI to assist with document preparation in litigation, and only 8% have disclosed this to opposing counsel or the court. The gap between practice and disclosure is enormous.
Dr. Amara Osei — Court Technology Advisor
Profile
A computer science professor specializing in AI forensics, appointed by the court to provide technical context on AI detection capabilities, limitations, and the current state of the art.
Goals
- Provide honest, balanced testimony about what AI detection can and cannot do
- Help the committee understand the technical limitations of any authentication standard they adopt
- Prevent the committee from adopting standards based on either overconfidence or excessive skepticism about AI detection
Constraints
Must maintain scientific objectivity and resist pressure from either side to overstate or understate detection capabilities. Funded by a research grant from a technology company that makes AI tools — a potential conflict she must disclose.
Exclusive Information
Dr. Osei's latest research, not yet published, shows that current AI detection tools have a 15-20% false positive rate and a 25-30% false negative rate on corporate-style documents. These numbers are significantly worse than what detection tool vendors publicly claim. She also knows that detection accuracy degrades rapidly when documents are edited after AI generation.
Rules
Duration
60–90 minutes total, divided into three phases
Communication
Formal hearing format. Kessler chairs and manages speaking order. Witnesses (Sharma, Webb, Osei) respond to questions from the committee. Judge Holden may pose questions. All statements are on the record.
Decision Method
The committee must produce a draft guidance document with three components: (1) standards for AI-generated evidence authentication, (2) boundaries for AI use in testimony preparation, and (3) disclosure requirements for AI-assisted litigation work product. Kessler seeks consensus but may issue a majority opinion with dissents.
Phases
Testimony and Fact-Finding (25 minutes)
Kessler opens the hearing and calls each witness in turn. Priya Sharma testifies about discovering the fabricated report and the forensic linguistics analysis. Marcus Webb responds with his position on the report's authenticity and challenges to AI detection reliability. Dr. Osei presents the current state of AI detection science, including its limitations. Judge Holden asks clarifying questions. Each witness has 5–6 minutes followed by brief questions.
Standards Debate (25 minutes)
Open deliberation on the three components of the guidance document. Kessler frames each issue and invites positions from all participants. Key tensions will emerge: How high should the authentication burden be? Is AI-assisted testimony preparation ever permissible? Should disclosure of AI use in litigation be mandatory? Exclusive information may be revealed strategically to shift the debate.
Drafting and Decision (20 minutes)
The committee works toward final language for the guidance document. Each participant proposes specific provisions for the three components. Kessler synthesizes the proposals and identifies areas of agreement and disagreement. Where consensus cannot be reached, dissenting positions are recorded. The session concludes with each participant making a one-minute closing statement on the most important principle the guidance must reflect.
Simulation Variations
- What if Webb confesses? During Phase 2, Marcus Webb reveals that he knows the investigation report was AI-generated by a junior associate. He claims he only learned this after it was produced in discovery. How does this change the hearing dynamics and the committee's deliberations?
- What if detection is unreliable? Dr. Osei reveals that her unpublished research shows AI detection has a 25-30% false negative rate. If one in four AI-generated documents cannot be detected, are authentication standards even feasible? How does the committee respond to this technical limitation?
- What if the client testifies? David Okafor appears at the hearing and publicly states that he asked his attorney to use AI to improve his testimony and she refused. He argues that using AI to organize memories is no different from an attorney coaching a witness during preparation. How does the committee address lay expectations versus professional standards?
Debrief
On Evidence Integrity
- Where exactly is the line between 'AI-assisted document preparation' and 'AI-fabricated evidence'? Did the committee find a workable distinction?
- Are current evidence authentication rules adequate for AI-generated documents, or do we need entirely new standards?
- How should courts handle the gap between AI detection science and the legal standards for authentication?
- Should the burden of proving a document is authentic be higher when AI generation is suspected? Why or why not?
On Testimony and Preparation
- Is there a principled distinction between using AI to organize a witness's genuine memories and using AI to fill memory gaps?
- How does AI-enhanced testimony preparation compare to traditional witness coaching? Is it different in kind or only in degree?
- Should attorneys be required to disclose AI use in witness preparation? What would that requirement look like in practice?
On Professional Responsibility
- What should happen to an attorney who knowingly submits AI-fabricated evidence? Is this different from traditional evidence fabrication?
- How should bar associations enforce AI-related ethical standards when detection is imperfect?
- Should there be a safe harbor for attorneys who use AI in good faith with reasonable safeguards?
- How do you create accountability for AI misuse without chilling legitimate AI adoption?
For Your Own Practice
- Have you ever wondered whether a document produced in discovery was authentic? Would AI generation concerns change your approach?
- How would you handle a client who insists on using AI to improve their testimony? What would you say?
- Does your jurisdiction have clear guidance on AI use in litigation? If not, what guidance would you want?
- Name one specific practice you will adopt based on this simulation to address AI-related evidence risks.
References & Sources
Evidence and Authentication
- Federal Rules of Evidence 901, 902, 1003 — Authentication, self-authentication, and duplicate admissibility standards
- Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) — Reliability standard for forensic AI detection expert testimony
- Emerging judicial orders on AI disclosure in litigation filings (2023-2025)
Professional Ethics and AI
- ABA Model Rules 3.3 (Candor), 3.4 (Fairness), 8.4 (Misconduct) — Framework for evidence integrity obligations
- ABA Formal Opinion 512 (2024) — Attorney obligations when using generative AI
- State bar ethics opinions on AI in evidence preparation and disclosure (2024-2025)
Ready to Hold This Hearing?
This simulation is designed for guided facilitation as part of the Lawra Learning Program. Request a session with role assignments, evidence packets, and expert moderation for your team or institution.