
What Not to Do

The Danger Zone. These are the mistakes that have already cost lawyers their reputations, their money, and their licenses. Each one is backed by real cases, real consequences, and real lessons. Learn from the mistakes of others so you do not have to learn from your own.

Lawra
Learning from others' mistakes is far cheaper than making your own. And in law, AI mistakes come with a price.

Why this matters: AI tools are powerful but imperfect. The legal profession has already seen sanctions, suspensions, and malpractice claims arising from improper AI use. The common thread in every case is the same — the lawyer treated AI output as a finished product instead of a starting point.

1

Don't Submit AI Output Without Reading Every Word

Lawyers are personally responsible for every word they file with a court. AI-generated text can contain fabricated citations, hallucinated case law, and confident-sounding nonsense.

Risk

Professional sanctions, malpractice liability, case dismissal, judicial referral to bar disciplinary authorities

Real-World Example

In Mata v. Avianca, Inc. (S.D.N.Y. 2023), attorneys Steven Schwartz and Peter LoDuca submitted a brief containing six entirely fabricated case citations generated by ChatGPT. When opposing counsel and Judge P. Kevin Castel could not locate the cases, the attorneys doubled down, asking ChatGPT to confirm the cases existed. Judge Castel imposed a $5,000 penalty on the attorneys and their firm, jointly and severally, finding they had acted in bad faith by failing to verify the AI's output. The case became a global cautionary tale.

Mitigation

Read every word of AI-generated output. Verify every citation against primary sources — Westlaw, LexisNexis, or official court databases. Treat AI output as an unverified first draft from an unreliable intern, not as finished work product.
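The mechanical first pass of that verification can be scripted: surface every citation-like string in a draft so none slips through unread. Below is a minimal Python sketch; the regex covers only a few common U.S. reporter formats and is illustrative, not a substitute for reading the filing or pulling each hit up in Westlaw or LexisNexis.

```python
import re

# Crude pattern for a few common U.S. reporter formats, e.g.
# "410 U.S. 113", "550 F.3d 1075", "925 F. Supp. 2d 56".
# Illustrative only: real citation grammar is far richer, and a regex can
# only FIND citation-like strings; a human must still verify each one.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                        # volume
    r"(?:U\.S\.|S\. Ct\.|F\.[23]d|F\. Supp\.(?: [23]d)?)"  # reporter
    r"\s+\d{1,4}\b"                                        # first page
)

def extract_citations(draft: str) -> list[str]:
    """Return every citation-like string, as a manual verification checklist."""
    return CITATION_RE.findall(draft)
```

Running this over a brief produces a checklist; every entry must then be confirmed against a primary-source database before filing.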

Look, I get it — the whole point of AI is to save time. But 'saving time' by filing fake cases is like 'saving money' by not paying your malpractice insurance. You wouldn't file a brief your paralegal wrote without reading it. You definitely shouldn't file one your robot wrote without reading it. The AI doesn't have a law license to lose. You do.

-- Lawra
2

Don't Paste Confidential Client Information into Public AI Tools

Consumer AI tools like ChatGPT, Gemini, and Claude (free tiers) may use your inputs for model training. Pasting client data into these tools is a confidentiality breach.

Risk

Breach of attorney-client privilege, violation of duty of confidentiality, malpractice liability, regulatory penalties, client trust destruction

Real-World Example

In April 2023, Samsung Electronics suffered three separate data leaks in just 20 days when engineers pasted proprietary source code, internal meeting notes, and hardware test data into ChatGPT. Samsung subsequently banned employee use of generative AI tools. While this was a corporate incident, it illustrates exactly the risk lawyers face: once information enters a public AI system, you lose all control over it. In the legal context, bar ethics guidance, including from the New York City Bar Association, has specifically warned attorneys about this risk.

Mitigation

The safest option: install a local AI on your own computer — your data never leaves your machine (see our Local AI guide). Alternatively, use enterprise-grade AI tools with contractual data protection guarantees (e.g., Azure OpenAI, ChatGPT Enterprise, Claude for Business). Never paste identifiable client information into free-tier consumer AI tools. Anonymize and redact before using AI for any client-related task. Establish firm-wide protocols for AI data handling.
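Part of that anonymization step can itself be scripted as a redaction pass that runs before any text leaves your machine. The Python sketch below is illustrative: the regex patterns, placeholder labels, and client-name list are assumptions for the example, and a real workflow still requires human review, since pattern matching misses context-dependent identifiers.

```python
import re

# Illustrative redaction pass to run BEFORE any text reaches an external AI
# tool. The patterns below (emails, SSNs, phone numbers) are assumptions for
# this sketch; regexes will always miss context-dependent details, so a
# human must review the output before it is sent anywhere.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str, client_names: list[str]) -> str:
    """Replace common identifiers and known party names with placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    for name in client_names:  # known client/party names, supplied manually
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text
```

The design point is the order of operations: redaction happens locally, before any network call, so nothing identifiable ever enters the AI tool in the first place.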

Here's the thing about attorney-client privilege: it's not just a rule, it's the foundation of the entire legal system. Your client told you their secrets because they trust you. Pasting those secrets into ChatGPT is like leaving their file open on a park bench and hoping no one reads it. Except worse — because at least a park bench doesn't learn from what's left on it.

-- Lawra
3

Don't Assume AI Understands Your Jurisdiction

AI models are trained on massive datasets that blend legal systems from multiple countries and jurisdictions. They routinely conflate common law with civil law, federal with state rules, and one country's statutes with another's.

Risk

Filing based on wrong jurisdiction's law, providing incorrect legal advice, malpractice liability, loss of credibility with courts

Real-World Example

In a 2023 incident documented by legal technology researchers, a U.S. attorney using ChatGPT for contract law research received advice that seamlessly blended principles from the Uniform Commercial Code (U.S.), the Sale of Goods Act 1979 (England and Wales), and the UN Convention on Contracts for the International Sale of Goods (CISG) — without any indication that three different legal regimes were being mixed. The attorney nearly incorporated foreign legal standards into a purely domestic contract dispute. Separately, in Zachariah C. Crabill's 2023 disciplinary case in Colorado, the attorney filed AI-generated motions that cited non-existent Colorado cases constructed from patterns the AI had absorbed across multiple U.S. jurisdictions.

Mitigation

Always specify the exact jurisdiction in your prompts — state, federal circuit, country, and applicable body of law. Cross-reference every AI output against jurisdiction-specific sources. Never trust an AI's implicit assumption about which law applies.

AI models are like that well-traveled friend who's been everywhere but remembers everywhere as the same place. 'Oh, you're in Texas? Let me tell you about this great precedent from... *checks notes*... the High Court of England and Wales.' Jurisdictional precision isn't a nice-to-have — it's what separates legal advice from legal fiction.

-- Lawra
5

Don't Hide Your AI Use from Courts Requiring Disclosure

A growing number of courts now require attorneys to disclose when AI tools were used in preparing filings. Failing to disclose when required is a fast track to sanctions and loss of credibility.

Risk

Sanctions for non-compliance with court orders, loss of judicial trust, disciplinary proceedings, case-altering consequences

Real-World Example

In mid-2023, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas issued one of the first standing orders requiring attorneys to certify that any AI-generated text in their filings had been verified by a human. In late 2023, the Fifth Circuit proposed a circuit-wide AI-certification rule (though, after public comment, it ultimately declined to adopt one in 2024). Judges in numerous other U.S. district courts, along with courts in the UK, Canada, and Australia, have since issued their own disclosure or verification requirements.

Mitigation

Check every jurisdiction's current standing orders and local rules regarding AI disclosure before filing. Err on the side of transparency — even when not explicitly required, voluntary disclosure builds trust. Develop a standard AI disclosure certification for your firm.

The courts are not asking you to confess. They're asking you to be a professional. Disclosure rules exist because judges got tired of playing 'guess which citations are real.' If you're embarrassed to tell a judge you used AI, that's probably a sign you should have used it more carefully. Transparency is not a weakness — it's a signal that you're taking responsibility for your work product.

-- Lawra
6

Don't Treat AI Output as Legal Research

AI generates plausible-sounding legal text, not verified legal analysis. It produces text that looks like research but has not actually been researched. Treating AI output as a substitute for proper legal research is a professional hazard.

Risk

Reliance on fabricated authorities, incomplete legal analysis, missing controlling authority, malpractice exposure, sanctions

Real-World Example

A 2024 Stanford RegLab study by Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E. Ho tested the legal research capabilities of major AI models (GPT-3.5, PaLM 2, and Llama 2) and found that they hallucinated legal citations between 69% and 88% of the time when asked direct legal research questions. Even when citations were real, the models frequently misstated holdings, applied incorrect standards, or omitted controlling contrary authority. The study, publicized as 'Hallucinating Law,' demonstrated that no current general-purpose AI model is reliable as a standalone legal research tool.

Mitigation

Use AI to generate research hypotheses, identify potential search terms, and create preliminary outlines. Then conduct actual legal research using authoritative databases — Westlaw, LexisNexis, Fastcase, Google Scholar, or official court and legislative databases. AI is the brainstorming partner; Westlaw is the source of truth.

You wouldn't cite Wikipedia in a brief. So why would you cite something that's basically Wikipedia with a law degree costume? AI doesn't research. It predicts text. Those are fundamentally different activities. Use it to jumpstart your research, absolutely — but the moment you start treating 'ChatGPT said so' as 'the law says so,' you've stopped being a lawyer and started being a very expensive autocomplete.

-- Lawra
7

Don't Use AI for Tasks Requiring Emotional Intelligence

Client counseling, sensitive negotiations, witness preparation, and delivering difficult news require human empathy, emotional attunement, and interpersonal judgment that AI fundamentally cannot provide.

Risk

Damaged client relationships, inadequate representation, ethical violations, failure to meet fiduciary obligations, harm to vulnerable individuals

Real-World Example

In 2023, the National Eating Disorders Association (NEDA) replaced its human helpline counselors with an AI chatbot named 'Tessa.' Within days, users reported that Tessa was providing advice that could worsen eating disorders — including recommending calorie counting and weight loss to people seeking help for anorexia. NEDA shut Tessa down within a week. While not a legal case, it perfectly illustrates what happens when AI is deployed in contexts requiring emotional sensitivity and nuanced human understanding. In the legal context, multiple bar associations have warned that AI cannot replace the judgment required in client counseling, particularly in sensitive practice areas like family law, criminal defense, and immigration.

Mitigation

Reserve tasks that require empathy, emotional intelligence, reading between the lines, and human connection for human professionals. Use AI for background research and drafting that supports these interactions, but never as a substitute for the human interaction itself.

Your client doesn't need a statistically probable response to their custody battle. They need someone who looks them in the eye and says, 'I understand what you're going through, and here's how we fight for you.' AI can draft the motion, but it can't hold the space. If you're using ChatGPT to write the email telling a client their appeal was denied, you've fundamentally misunderstood what your client needs from you at that moment.

-- Lawra
8

Don't Ignore Your Bar Association's AI Guidelines

Bar associations across the United States and internationally are issuing AI-specific guidance at an accelerating pace. These are not suggestions — they are interpretations of existing ethical obligations that carry real consequences.

Risk

Disciplinary action, suspension, disbarment, malpractice findings, loss of professional standing

Real-World Example

In November 2023, the California State Bar issued practical guidance on generative AI, and in January 2024 the Florida Bar Board of Governors approved a detailed ethics opinion on AI use; the New York State Bar Association and numerous other jurisdictions have followed. The ABA issued Formal Opinion 512 in July 2024, providing national-level guidance. Lawyers who ignored early guidance suffered consequences: in the Crabill disciplinary matter (Colorado, 2023), the tribunal specifically noted that existing ethical rules — even without AI-specific guidance — created obligations that the attorney failed to meet. The message is clear: 'I didn't know there were AI rules' is not a defense when existing competence and supervision rules already apply.

Mitigation

Identify and read your bar association's AI guidance immediately. Subscribe to your bar's ethics updates. If your jurisdiction has not yet issued specific AI guidance, apply existing ethical rules — competence (Rule 1.1), confidentiality (Rule 1.6), supervision (Rules 5.1 and 5.3), and candor (Rule 3.3) — to your AI use. Document your compliance.

I know, I know — 'read the bar association guidelines' is the legal profession's version of 'eat your vegetables.' But here's the thing: these guidelines are your shield. When something goes wrong — and eventually, for someone, it will — the first question the disciplinary board will ask is, 'Did you follow the applicable guidance?' If your answer is 'I didn't know there was any,' you've already lost. And honestly, most of these guidelines are actually well-written. The bars are trying to help. Let them.

-- Lawra
9

Don't Let AI Write Your Entire Brief

AI-generated briefs lack the strategic vision, persuasive voice, and nuanced argumentation that distinguish effective legal advocacy. A brief written entirely by AI is a generic document that serves no client's particular interests.

Risk

Weak advocacy, generic arguments that fail to persuade, missed strategic opportunities, loss of the attorney's professional voice and credibility

Real-World Example

In Ex parte Allen Michael Lee (Tex. App.—Waco 2023), a habeas corpus filing was flagged as likely AI-generated based on its generic language, unusual formatting patterns, and arguments that did not engage with the specific facts of the case. The court noted the filing's lack of case-specific analysis and its reliance on generalized legal propositions that could have applied to virtually any case. While the filing was not rejected solely because of suspected AI authorship, the court's scrutiny illustrates how AI-generated briefs can undermine credibility. A 2024 analysis of AI-generated legal writing by experts at Harvard Law School similarly found that while AI can produce grammatically correct and structurally sound legal text, it consistently fails to develop the strategic, persuasive argumentation that wins cases.

Mitigation

Use AI for research, outlining, and generating first-draft sections. Then rewrite extensively — add your strategic vision, your knowledge of the judge, your understanding of the case's unique facts, your persuasive voice, and your professional judgment about what arguments to emphasize, minimize, or omit entirely.

Here's a secret: judges can tell. They've been reading briefs for years, and an AI-generated brief reads like a Wikipedia article in legal costume — technically correct, comprehensively mediocre, and utterly devoid of the strategic spark that makes a brief actually persuasive. Your client didn't hire Wikipedia. They hired you. Use AI for the scaffolding, then build something that only you could have built.

-- Lawra
10

Don't Assume Today's Limitations Are Permanent

AI capabilities are advancing at an exponential pace. What AI cannot do today, it may do competently tomorrow. Lawyers who dismiss AI based on its current limitations risk being blindsided by rapid improvement.

Risk

Professional obsolescence, competitive disadvantage, failure to adapt practice to evolving capabilities, inability to serve clients effectively as the profession transforms

Real-World Example

In 2022, legal professionals widely dismissed AI as incapable of passing the bar exam. In 2023, GPT-4 passed the Uniform Bar Examination, with OpenAI reporting a score around the 90th percentile. In 2024, specialized legal AI models began matching or outperforming junior associates on specific tasks like contract review and legal research, according to evaluations on benchmarks such as LegalBench. Thomson Reuters reported that CoCounsel (its GPT-4-powered legal AI tool, originally built by Casetext) achieved accuracy comparable to experienced attorneys on document review tasks. The pace of change has consistently outstripped expert predictions, with capabilities arriving years ahead of schedule.

Mitigation

Stay informed about AI developments through trusted sources. Reassess your AI capabilities and workflows quarterly. Build adaptability into your practice — treat AI competence as an evolving skill, not a one-time learning event. Invest in continuous education and experimentation.

In 2020, the 'smart' take was that AI couldn't handle legal nuance. By 2023, it was passing the bar exam. If you build your career strategy on the assumption that AI can't do X, you'd better have a plan for the day it can. I'm not saying AI will replace lawyers — I genuinely don't think it will. But it will absolutely replace lawyers who refuse to understand it. The lawyers who thrive will be the ones who stopped asking 'Can AI do this?' and started asking 'How can I do this better with AI?'

-- Lawra

Now Learn What to Do Right

Knowing what to avoid is essential — but it is only half the picture. Learn the best practices that will help you use AI effectively, ethically, and with confidence.

Ready for structured learning? Explore the Learning Program →
