4 Case Study

The Evidence Problem: AI-Generated Evidence and the Limits of Advocacy

Attorney Priya Sharma discovered that opposing counsel's key exhibit was AI-generated. Before she could act, her own client asked her to use AI to 'enhance' his account of the events. She now faces two ethical crises simultaneously — and the answers to each one complicate the other.

Duration

90–120 minutes

Participants

4–6 participants


The Case

The case was straightforward, or so it seemed. David Okafor, a senior software engineer, sued his former employer Nextera Systems for wrongful termination, alleging he was fired in retaliation for reporting discriminatory hiring algorithms to the company's ethics board. Priya Sharma, a partner at a mid-sized plaintiff's employment firm, took the case on contingency. The facts were strong: Okafor had documented his complaints, and the timeline between his report and termination was damning.

Then the discovery phase produced a surprise. Nextera's counsel, a large defense firm, submitted a detailed 'internal investigation report' purportedly prepared by the company's Chief Compliance Officer three weeks before Okafor's termination. The report concluded that Okafor's performance had been declining for months and recommended termination for cause — completely independent of any retaliation motive. If authentic, it demolished the retaliatory timeline that was the backbone of Priya's case.

Something about the report felt wrong. The writing was unnaturally smooth — no hedging language, no organizational jargon, no formatting inconsistencies typical of internal corporate documents. Priya hired a forensic linguistics expert who concluded with 94% confidence that the report was generated by a large language model. The metadata showed the document was created at 2:17 AM, three days after Okafor filed his lawsuit — not three weeks before his termination as claimed. Someone had used AI to fabricate evidence and backdate it.

Key Milestones

1

8 months ago — Okafor Reports Discriminatory Algorithm

David Okafor submits a formal complaint to Nextera's internal ethics board, documenting that the company's AI-driven hiring tool systematically disadvantages candidates over 40. He provides statistical analysis and specific examples.

2

6 months ago — Okafor Is Terminated

Nextera terminates Okafor citing 'restructuring.' No prior performance warnings exist in his personnel file. The termination occurs 8 weeks after his ethics complaint — within the retaliatory timing window recognized by courts.

3

4 months ago — Lawsuit Filed

Priya Sharma files a wrongful termination and retaliation complaint on Okafor's behalf. The complaint details the timeline and alleges the termination was pretextual.

4

2 months ago — The Report Surfaces

During discovery, Nextera produces the 'internal investigation report' dated three weeks before termination. Forensic analysis reveals it was AI-generated and created three days after the lawsuit was filed. The metadata and linguistic analysis are compelling but not yet dispositive.

Why This Matters

This case sits at the frontier of legal ethics. AI-generated evidence is not a hypothetical — forensic experts are already being retained to identify it, and courts are beginning to grapple with authentication standards for AI-era documents. But the case raises an even harder question: when your own client wants to use the same technology to 'strengthen' their testimony, where exactly is the line? Priya must simultaneously attack the integrity of opposing evidence while defending the integrity of her own. The two positions must be ethically consistent — and that consistency is harder to achieve than it appears.

Context Analysis

The legal, technological, evidentiary, and ethical dimensions that frame this case.

Evidentiary Framework

  • Federal Rule of Evidence 901 — Authentication requirement: the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is
  • Federal Rule of Evidence 1003 — Duplicates are generally admissible, but authenticity questions can require the original
  • Forensic linguistics as expert testimony — Daubert standards for reliability of AI-detection methodologies
  • Emerging case law on AI-generated document authentication and metadata analysis

Professional Conduct

  • ABA Model Rule 3.3 — Duty of candor: a lawyer shall not knowingly offer evidence the lawyer knows to be false
  • ABA Model Rule 3.4 — Fairness to opposing party: shall not falsify evidence or obstruct access to evidence
  • ABA Model Rule 8.4(c) — Misconduct: conduct involving dishonesty, fraud, deceit, or misrepresentation
  • Federal Rule of Civil Procedure 37(e) — Sanctions for failure to preserve electronically stored information, potentially applicable to AI-generated documents

Technological Context

  • Current large language models can produce documents that closely mimic corporate writing styles
  • AI-detection tools have significant false positive and false negative rates — no tool is definitive
  • Document metadata can be manipulated, but forensic analysis can often detect inconsistencies
  • The arms race between AI generation and AI detection is accelerating, with no stable equilibrium in sight
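The second point above, that no detection tool is definitive, follows directly from Bayes' rule: a detector's headline confidence is not the probability that a flagged document is actually AI-generated, because the base rate of fabricated documents in the pool matters. A minimal sketch, using illustrative numbers that are assumptions for this exercise, not statistics from any real detection tool:

```python
# Sketch: why a detector's reported accuracy is not the probability
# that a flagged document is AI-generated. All rates are hypothetical.

def posterior_ai(prior, sensitivity, false_positive_rate):
    """P(AI-generated | detector flags it), via Bayes' rule."""
    p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flag

# Suppose only 5% of disputed corporate documents are AI-generated,
# and the detector catches 94% of fakes but also misfires on 10% of
# genuine documents:
print(round(posterior_ai(prior=0.05, sensitivity=0.94,
                         false_positive_rate=0.10), 2))  # → 0.33
```

Under these assumed rates, a flag means only about a one-in-three chance the document is fake, far below the detector's 94% headline figure. This is one reason the case narrative treats the forensic analysis as "compelling but not yet dispositive": a detection score is one factor to be weighed alongside metadata and other evidence.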

Ethical Tensions

  • The asymmetry between attacking AI-generated evidence (opposing) and using AI-enhanced testimony (own client) must be ethically reconciled
  • Zealous advocacy requires pursuing every legitimate advantage — but where does legitimate end and deceptive begin?
  • The duty to the client and the duty to the tribunal can point in opposite directions when AI is involved
  • Emerging questions about whether attorneys have an affirmative duty to investigate AI provenance of evidence

Stakeholders & Roles

Each participant assumes one role with distinct objectives, constraints, and private information. The roles are designed to create genuine ethical tension.

1

Priya Sharma — Plaintiff's Attorney

Profile

An experienced employment litigator representing David Okafor. She has strong evidence that opposing counsel's key exhibit is AI-fabricated. Simultaneously, her own client is pressuring her to use AI to strengthen his testimony.

Goals

  • Challenge the authenticity of the AI-generated investigation report through proper legal channels
  • Maintain ethical boundaries with her own client regarding AI-enhanced testimony
  • Win the case on its merits while preserving her professional integrity

Constraints

Priya's client has explicitly asked her to use AI to 'make his account more detailed and compelling.' She has declined once, but Okafor is frustrated and has threatened to find another attorney. The case is on contingency — losing the client means losing the investment.

2

David Okafor — Plaintiff

Profile

A senior software engineer who was fired after reporting a discriminatory AI hiring tool. His retaliation claim is strong on the timeline but his personal account of events is vague on key details because the termination meeting happened quickly and he was in shock.

Goals

  • Win the case and hold Nextera accountable for retaliatory termination
  • Recover lost wages and professional reputation damage
  • Get his detailed account of events heard by the court, even if his memory is incomplete

Constraints

Okafor genuinely believes AI could help him 'reconstruct' what happened in the termination meeting. He sees no ethical difference between using AI to organize his thoughts and using it to fill memory gaps. He is also a technologist who understands AI better than most clients — and he knows the report against him is fake.

3

Judge Patricia Holden — Presiding Judge

Profile

A federal district judge with 15 years on the bench. She has seen the forensic linguistics report challenging the investigation document and must decide how to handle AI-generated evidence questions that have no established precedent in her circuit.

Goals

  • Establish fair and workable standards for AI-generated evidence authentication
  • Protect the integrity of the proceedings without prejudicing either side
  • Create a reasoned ruling that can serve as persuasive authority for other courts

Constraints

The judge knows that whatever standard she sets will be scrutinized. Too strict and she excludes potentially legitimate AI-assisted documents. Too lenient and she opens the door to fabrication. She has received an amicus brief from a legal technology organization urging liberal admission standards.

4

Robert Kessler — Ethics Board Member

Profile

A member of the state bar's ethics committee, present as an observer and advisor. He is drafting a formal ethics opinion on AI-generated evidence and AI-enhanced testimony that will apply to all attorneys in the jurisdiction.

Goals

  • Develop practical, enforceable standards for AI use in evidence and testimony
  • Distinguish between legitimate AI assistance and impermissible fabrication or enhancement
  • Balance innovation in legal practice with protection of the justice system's integrity

Constraints

Kessler's committee is split. Some members want to ban AI from any evidentiary context. Others argue that AI-assisted document preparation is no different from hiring a ghostwriter. He needs a framework that both camps can accept.

Learning Activities

Six progressive task types, ranging from factual comprehension to professional self-reflection on the ethics of AI in evidence and testimony.

  • Read the full case narrative. Identify every point where an ethical rule is implicated and name the specific rule.
  • Research forensic linguistics methodology: How does an expert determine whether a document was AI-generated? What are the reliability limitations?
  • Map the ethical positions of each stakeholder. Where do their interests align and where do they conflict?
  • Compare this case to Mata v. Avianca. What is similar (AI-generated legal content) and what is fundamentally different (evidence fabrication versus citation fabrication)?
  • Explain the distinction between 'AI-generated evidence' and 'AI-fabricated evidence.' Is every AI-generated document fraudulent? Where is the line?
  • Articulate Okafor's perspective: Why does he see AI testimony enhancement as legitimate? What is wrong with his reasoning — or is it actually sound?
  • Construct the strongest possible argument that the investigation report is authentic. Then construct the strongest argument that it is fabricated. Which is more persuasive and why?
  • Explain why Priya's two problems (opposing AI evidence and client AI testimony requests) are ethically connected. Can she take inconsistent positions?
  • Evaluate the forensic linguistics evidence: Is 94% confidence sufficient to challenge a document's authenticity? What standard should courts apply?
  • Analyze the ethical asymmetry: Is there a principled distinction between attacking AI-generated evidence and refusing to use AI-enhanced testimony for your own client?
  • Assess whether current evidence authentication rules are adequate for AI-generated documents, or whether new rules are needed
  • Question whether attorneys have an affirmative duty to investigate the AI provenance of evidence produced in discovery — or only when something 'feels wrong'
  • Draft a motion to challenge the investigation report's authenticity, including the legal standard, factual basis, and requested relief
  • Write the script for Priya's conversation with Okafor explaining why she cannot use AI to enhance his testimony, while keeping the client relationship intact
  • Propose an evidence authentication framework for AI-era documents that a court could adopt as a standing order
  • Role-play the ethics hearing as your assigned character. Prepare a 3-minute opening statement on the proper boundaries of AI in evidence and testimony
  • Compare the authentication motions drafted by different teams. Which would be most effective before a skeptical judge?
  • Assess each proposed evidence framework: Is it workable? Does it balance reliability with efficiency? Would it survive appeal?
  • Evaluate the client conversation scripts: Would Okafor actually be persuaded? Is the ethical explanation clear enough for a non-lawyer?
  • Review each team's proposed ethics opinion. Does it provide clear guidance that practicing attorneys can actually follow?
  • Before studying this case, did you think AI-generated evidence was a real concern or a hypothetical? Has your view changed?
  • How do you personally draw the line between 'AI-assisted preparation' and 'AI-fabricated evidence'? Is the line as clear as you thought?
  • Reflect on whether you would have noticed the investigation report was AI-generated. What skills are needed to detect AI-generated documents?
  • Identify one ethical principle from this case that you will apply differently in your own practice going forward.

Exercise: Ethics in Practice

Write two parallel analyses: first, the strongest ethical argument for why Priya should be allowed to use AI to help Okafor organize his genuine memories into a coherent account. Then, the strongest ethical argument for why this crosses the line into testimony fabrication. Identify the precise point where legitimate assistance becomes impermissible enhancement. Present both analyses to the group and defend whichever position you find more persuasive.

References & Sources

Evidence and Authentication

  • Federal Rules of Evidence 901(b)(4) — Distinctive characteristics and other authentication methods
  • Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) — Standard for admissibility of expert testimony, applicable to forensic AI detection
  • Emerging case law on deepfake and AI-generated evidence authentication (2023-2025)

Professional Ethics

  • ABA Model Rules 3.3 (Candor), 3.4 (Fairness), and 8.4 (Misconduct) — The core ethical framework for evidence integrity
  • ABA Formal Opinion 512 (2024) — Generative AI and attorney obligations
  • State bar ethics opinions on AI-assisted document preparation and evidence (2024-2025)

Ready to Tackle This Dilemma?

This case study is designed for guided facilitation as part of the Lawra Learning Program. Request a session that includes the ethics hearing simulation, evidence analysis exercises, and expert debriefing.
