4 Role-Play Simulation

The Ethics Committee Hearing — AI Evidence and Testimony Boundaries

The State Bar Ethics Committee has convened an emergency hearing. Two cases have collided: AI-fabricated evidence submitted by defense counsel and a plaintiff's request to use AI to enhance witness testimony. The committee must establish standards that will define how AI intersects with evidence and testimony for years to come.

Duration

60-90 minutes

Participants

4-6 participants


Simulation Scenario

It is 9:00 AM in the State Bar Association's hearing room. The Ethics Committee has convened a special session prompted by the Okafor v. Nextera case, but the implications extend far beyond a single dispute. Judge Patricia Holden has referred the evidentiary questions to the committee before ruling. Priya Sharma is present to testify about both the fabricated evidence she discovered and the client request she refused. Robert Kessler chairs the committee and must guide it toward actionable standards. A Court Technology Advisor has been invited to provide technical context on AI detection capabilities and limitations. Outside the hearing room, journalists and legal bloggers are waiting. Whatever this committee decides will be cited in courtrooms across the country.

Stakeholders and Roles


1

Judge Patricia Holden — Presiding Judge

Profile

Federal district judge who referred the evidentiary questions to the ethics committee. She needs practical, workable standards she can apply in her courtroom — not aspirational principles that collapse under real-world conditions.

Objectives

  • Obtain clear guidance on authentication standards for AI-generated documents
  • Understand the reliability of forensic AI detection before making evidentiary rulings
  • Ensure any standards adopted are practical for judges who are not technology experts

Constraints

Must maintain judicial impartiality and cannot advocate for either party in the underlying case. Privately concerned that overly strict authentication standards would paralyze discovery in complex cases.

Confidential Information

Judge Holden has received a sealed motion from Nextera's counsel arguing that AI-generated documents should be treated the same as any computer-generated document under existing authentication rules — no new standards needed. She has also learned that two other cases in her district involve suspected AI-generated evidence.

2

Priya Sharma — Plaintiff's Attorney

Profile

Employment litigator who discovered the fabricated investigation report and declined her client's request to use AI for testimony enhancement. She is both a witness to the specific facts and an advocate for clear ethical boundaries.

Objectives

  • Establish that AI-fabricated evidence warrants severe sanctions, including potential criminal referral
  • Draw a clear, enforceable line between legitimate AI assistance and impermissible evidence fabrication
  • Obtain guidance that protects attorneys who refuse client requests for AI-enhanced testimony

Constraints

Must maintain client confidentiality regarding Okafor's specific testimony enhancement request. Can discuss the general ethical question without revealing privileged communications.

Confidential Information

Priya has learned from a colleague at another firm that Nextera's defense counsel has used AI to generate 'internal documents' in at least one other case. If true, this is not an isolated incident but a pattern of practice. She cannot prove this yet.

3

Marcus Webb — Defense Attorney (Nextera's Counsel)

Profile

Senior partner at a major defense firm representing Nextera Systems. His firm submitted the challenged investigation report. He maintains the report is authentic and that the forensic linguistics analysis is unreliable junk science.

Objectives

  • Defend the admissibility of the investigation report under existing evidence rules
  • Challenge the reliability of forensic AI detection methodologies
  • Prevent the establishment of new authentication burdens that would disadvantage defendants in employment cases

Constraints

Must zealously advocate for his client's position while maintaining his own ethical obligations. If the report is fabricated and he knew, he faces personal disciplinary exposure.

Confidential Information

Webb privately knows the investigation report was drafted by a junior associate using AI and then presented as an authentic company document. He did not instruct the associate to do this but became aware of it after the document was produced. He has not yet decided what to do with this knowledge.

4

Robert Kessler — Ethics Board Chair

Profile

Chair of the state bar's ethics committee. A respected legal ethics professor who must guide a divided committee toward standards that are both principled and practical.

Objectives

  • Develop clear, enforceable standards for AI use in evidence and testimony
  • Achieve committee consensus despite deeply divided members
  • Produce guidance that addresses both the immediate case and the broader systemic challenges

Constraints

The committee is split between members who want strict prohibitions and those who favor permissive standards. Kessler must find a middle ground. His draft opinion is due within 30 days.

Confidential Information

Kessler has received a confidential survey showing that 31% of attorneys in the jurisdiction have used generative AI to assist with document preparation in litigation, and only 8% have disclosed this to opposing counsel or the court. The gap between practice and disclosure is enormous.

5

Dr. Amara Osei — Court Technology Advisor

Profile

A computer science professor specializing in AI forensics, appointed by the court to provide technical context on AI detection capabilities, limitations, and the current state of the art.

Objectives

  • Provide honest, balanced testimony about what AI detection can and cannot do
  • Help the committee understand the technical limitations of any authentication standard they adopt
  • Prevent the committee from adopting standards based on either overconfidence or excessive skepticism about AI detection

Constraints

Must maintain scientific objectivity and resist pressure from either side to overstate or understate detection capabilities. Funded by a research grant from a technology company that makes AI tools — a potential conflict she must disclose.

Confidential Information

Dr. Osei's latest research, not yet published, shows that current AI detection tools have a 15-20% false positive rate and a 25-30% false negative rate on corporate-style documents. These numbers are significantly worse than what detection tool vendors publicly claim. She also knows that detection accuracy degrades rapidly when documents are edited after AI generation.
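Error rates of this size have a counterintuitive consequence for authentication. A minimal Bayes'-rule sketch makes it concrete: using the midpoints of the ranges above (false positive rate 17.5%, false negative rate 27.5%) and a purely hypothetical base rate of AI-generated documents in discovery, it computes how often a detector "flag" actually indicates fabrication. The 5% base rate is an illustrative assumption, not a figure from the scenario.

```python
# Illustrative calculation only: how reliable is a positive detection flag,
# given the error rates in Dr. Osei's (fictional) research?

def flag_reliability(base_rate, false_positive_rate, false_negative_rate):
    """Probability that a flagged document is actually AI-generated (Bayes' rule)."""
    sensitivity = 1.0 - false_negative_rate  # true positive rate
    p_flag = sensitivity * base_rate + false_positive_rate * (1.0 - base_rate)
    return sensitivity * base_rate / p_flag

# Midpoints of the testimony's ranges; 5% base rate is a hypothetical assumption.
ppv = flag_reliability(base_rate=0.05,
                       false_positive_rate=0.175,
                       false_negative_rate=0.275)
print(f"P(AI-generated | flagged) = {ppv:.0%}")
```

Under these assumed numbers, fewer than one flagged document in five would actually be AI-generated, which is why a detector flag alone cannot serve as proof of fabrication in the committee's standards.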

Rules

Duration

60-90 minutes total, divided into three phases

Communication

Formal hearing format. Kessler chairs and manages speaking order. Witnesses (Sharma, Webb, Osei) respond to questions from the committee. Judge Holden may pose questions. All statements are on the record.

Decision Method

The committee must produce a draft guidance document with three components: (1) standards for authenticating AI-generated evidence, (2) boundaries for AI use in testimony preparation, and (3) disclosure requirements for AI-assisted litigation work product. Kessler seeks consensus but may issue a majority opinion with dissents.

Phases

Phase 1

Testimony and Fact-Finding (25 minutes)

Kessler opens the hearing and calls each witness in turn. Priya Sharma testifies about discovering the fabricated report and the forensic linguistics analysis. Marcus Webb responds with his position on the report's authenticity and challenges to AI detection reliability. Dr. Osei presents the current state of AI detection science, including its limitations. Judge Holden asks clarifying questions. Each witness has 5-6 minutes followed by brief questions.

Phase 2

Standards Debate (25 minutes)

Open deliberation on the three components of the guidance document. Kessler frames each issue and invites positions from all participants. Key tensions will emerge: How high should the authentication burden be? Is AI-assisted testimony preparation ever permissible? Should disclosure of AI use in litigation be mandatory? Confidential information may be revealed strategically to shift the debate.

Phase 3

Drafting and Decision (20 minutes)

The committee works toward final language for the guidance document. Each participant proposes specific provisions for the three components. Kessler synthesizes the proposals and identifies areas of agreement and disagreement. Where consensus cannot be reached, dissenting positions are recorded. The session concludes with each participant making a one-minute closing statement on the most important principle the guidance must reflect.

Simulation Variations

  • What if Webb confesses? During Phase 2, Marcus Webb reveals that he knows the investigation report was AI-generated by a junior associate. He claims he only learned this after it was produced in discovery. How does this change the hearing dynamics and the committee's deliberations?
  • What if detection is unreliable? Dr. Osei reveals that her unpublished research shows AI detection has a 25-30% false negative rate. If one in four AI-generated documents cannot be detected, are authentication standards even feasible? How does the committee respond to this technical limitation?
  • What if the client testifies? David Okafor appears at the hearing and publicly states that he asked his attorney to use AI to improve his testimony and she refused. He argues that using AI to organize memories is no different from an attorney coaching a witness during preparation. How does the committee address lay expectations versus professional standards?

Debriefing


On Evidence Integrity

  • Where exactly is the line between 'AI-assisted document preparation' and 'AI-fabricated evidence'? Did the committee find a workable distinction?
  • Are current evidence authentication rules adequate for AI-generated documents, or do we need entirely new standards?
  • How should courts handle the gap between AI detection science and the legal standards for authentication?
  • Should the burden of proving a document is authentic be higher when AI generation is suspected? Why or why not?

On Testimony and Preparation

  • Is there a principled distinction between using AI to organize a witness's genuine memories and using AI to fill memory gaps?
  • How does AI-enhanced testimony preparation compare to traditional witness coaching? Is it different in kind or only in degree?
  • Should attorneys be required to disclose AI use in witness preparation? What would that requirement look like in practice?

On Professional Responsibility

  • What should happen to an attorney who knowingly submits AI-fabricated evidence? Is this different from traditional evidence fabrication?
  • How should bar associations enforce AI-related ethical standards when detection is imperfect?
  • Should there be a safe harbor for attorneys who use AI in good faith with reasonable safeguards?
  • How do you create accountability for AI misuse without chilling legitimate AI adoption?

On Your Own Practice

  • Have you ever wondered whether a document produced in discovery was authentic? Would AI generation concerns change your approach?
  • How would you handle a client who insists on using AI to improve their testimony? What would you say?
  • Does your jurisdiction have clear guidance on AI use in litigation? If not, what guidance would you want?
  • Name one specific practice you will adopt based on this simulation to address AI-related evidence risks.

References and Sources

Evidence and Authentication

  • Federal Rules of Evidence 901, 902, 1003 — Authentication, self-authentication, and duplicate admissibility standards
  • Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) — Reliability standard for forensic AI detection expert testimony
  • Emerging judicial orders on AI disclosure in litigation filings (2023-2025)

Professional Ethics and AI

  • ABA Model Rules 3.3 (Candor), 3.4 (Fairness), 8.4 (Misconduct) — Framework for evidence integrity obligations
  • ABA Formal Opinion 512 (2024) — Attorney obligations when using generative AI
  • State bar ethics opinions on AI in evidence preparation and disclosure (2024-2025)

Ready to Hold This Hearing?

This simulation is designed for guided facilitation as part of the Lawra Learning Program. Request a session with role assignments, evidence packets, and expert moderation for your team or institution.
