Practical, step-by-step AI exercises designed for legal professionals. Each Quick Win includes a tested prompt, clear instructions, tips for better results, and cautions to keep you safe. Most take less than 10 minutes.
Start here. Copy a prompt, paste it into your favorite AI tool, and see the results for yourself.
Beginner · Corporate · 2 min
Summarize a 50-Page Contract in 2 Minutes
Turn a lengthy commercial contract into a structured executive summary with key terms, obligations, and risk flags -- ready to brief a client or a senior partner.
Prompt
You are a senior corporate attorney reviewing a commercial contract. I will provide the full text of a contract. Please produce a structured executive summary that includes:
1. **Parties**: Identify all parties, their roles, and their jurisdiction of incorporation or residence.
2. **Purpose & Scope**: Summarize the core purpose of the agreement in 2-3 sentences.
3. **Key Commercial Terms**: List the material commercial terms including pricing, payment terms, deliverables, and performance metrics.
4. **Term & Termination**: State the effective date, duration, renewal provisions, and all termination triggers (for cause and for convenience).
5. **Key Obligations**: For each party, list the 3-5 most significant obligations.
6. **Representations & Warranties**: Summarize the material reps and warranties from each party.
7. **Indemnification & Liability**: Describe the indemnification structure and any liability caps or limitations.
8. **Confidentiality**: Note the scope and duration of confidentiality obligations.
9. **Governing Law & Dispute Resolution**: State the governing law, jurisdiction, and dispute resolution mechanism (litigation, arbitration, mediation).
10. **Notable or Unusual Clauses**: Flag any provisions that are non-standard, one-sided, or potentially problematic, and briefly explain why.
Format the summary with clear headings. Use bullet points for readability. Keep the total summary under 800 words. After the summary, add a section titled "Risk Flags" listing any terms that a reviewing attorney should examine more closely, with a one-line explanation for each.
Here is the contract:
[PASTE THE FULL CONTRACT TEXT HERE]
Tips
If the contract exceeds the AI's context window, split it into sections and summarize each section separately, then ask the AI to synthesize the section summaries into one executive summary.
For best results, include the entire contract -- schedules, exhibits, and amendments. Omitted annexes often contain the most commercially significant terms.
After receiving the summary, follow up with targeted questions like 'What are the most one-sided provisions in this contract?' or 'How does the indemnification clause compare to market standard?'
Use this summary as a starting point for your own analysis, not as the finished work product. The AI may miss nuance in heavily negotiated provisions.
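For the technically inclined, the first tip above (splitting a contract that exceeds the context window) can be automated. This is a minimal sketch, not a recommendation of particular sizes: the chunk size and overlap values are illustrative, and `chunk_text` is a hypothetical helper name.

```python
# Minimal sketch: split a long contract into overlapping chunks so each
# can be summarized separately, then synthesized by the AI.
# chunk_size and overlap are illustrative values, not recommendations.

def chunk_text(text: str, chunk_size: int = 12000, overlap: int = 500) -> list[str]:
    """Split text into chunks of roughly chunk_size characters.

    Each chunk overlaps the previous one by `overlap` characters so that
    a clause straddling a boundary appears whole in at least one chunk.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks

contract = "..." * 10000  # stand-in for the full contract text
pieces = chunk_text(contract)
# Summarize each piece with the prompt above, then ask the AI to
# synthesize the per-section summaries into one executive summary.
```

The overlap matters: without it, a provision split across a chunk boundary can be missed by both section summaries.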
Cautions
Never rely on the AI summary without reading the original contract yourself. AI can misinterpret defined terms, miss cross-references, or overlook provisions that modify other sections.
Do not paste confidential client contracts into consumer AI tools (free ChatGPT, public Claude) without understanding the provider's data retention and training policies. Use enterprise or API versions with appropriate data processing agreements.
AI may not recognize jurisdiction-specific implications of certain clauses. A non-compete clause means different things in California versus Texas versus England.
The 'Risk Flags' section is a helpful starting point, but it reflects pattern matching, not legal judgment. Material risks can hide in boilerplate that the AI considers standard.
What This Quick Win Does
Contract review is one of the most time-consuming tasks in corporate legal practice. A 50-page commercial agreement can take an experienced attorney 1-2 hours to review and summarize manually. With AI, you can generate a structured first-pass summary in approximately 2 minutes, freeing you to focus your expertise on the provisions that actually matter.
This Quick Win gives you a prompt that produces a comprehensive executive summary — the kind you would prepare for a client meeting, a deal team briefing, or your own initial review notes.
How to Use It
Step 1: Prepare the Contract
Open the contract in a format where you can select and copy the full text. PDF documents may need to be converted to text first. If working with a scanned PDF, use OCR software to extract the text before proceeding.
Make sure you include:
The main agreement body
All schedules and exhibits
Any amendments or side letters
The signature pages (these confirm the parties and effective date)
Step 2: Open Your AI Tool
Navigate to ChatGPT or Claude. For confidential documents, ensure you are using an enterprise account or an API deployment with appropriate data handling agreements in place.
Step 3: Paste the Prompt and the Contract
Copy the prompt above, paste it into the AI chat, and replace [PASTE THE FULL CONTRACT TEXT HERE] with the full text of the contract. Submit the message.
Step 4: Review the Output
The AI will return a structured summary. Read it alongside the original contract, paying particular attention to:
Defined terms: Verify that the AI correctly identified how key terms are defined and used throughout the agreement.
Cross-references: Check whether the AI caught provisions that modify or qualify other sections (e.g., “Subject to Section 8.3…”).
Risk Flags: Use the flagged items as a checklist for your deeper review, but add your own flags based on your knowledge of the client, the deal, and the jurisdiction.
Step 5: Iterate
If the summary misses something or you want more detail on a specific area, follow up:
“Expand on the indemnification provisions. What triggers each party’s indemnification obligation, and are there any carve-outs?”
“Compare the termination provisions to market-standard terms for a SaaS agreement.”
“Are there any provisions that would survive termination beyond the confidentiality clause?”
Why This Works
Large language models excel at extracting structured information from unstructured text. A well-formed contract is actually easier for AI to process than many other document types because it follows a relatively predictable structure with clear section headings and defined terms. The prompt instructs the AI to organize its output in the same framework an experienced attorney would use, which makes the result immediately useful.
What This Does Not Replace
This Quick Win accelerates the first pass. It does not replace:
Legal judgment about which provisions are commercially acceptable for your specific client
Negotiation strategy about which terms to push back on
Jurisdictional analysis about how local law affects enforceability
Due diligence that requires cross-referencing the contract against other deal documents, financials, or regulatory requirements
Use the AI summary to get oriented quickly. Then apply your expertise where it matters most.
Beginner · Litigation · 5 min
Generate a Demand Letter First Draft
Create a professional, structured demand letter draft in minutes -- complete with factual recitation, legal basis, demand terms, and deadline -- ready for your review and customization.
Prompt
You are an experienced litigation attorney drafting a demand letter. Using the facts I provide below, draft a formal demand letter that includes the following sections:
1. **Header**: Date, recipient's full name and address, and a "Re:" line identifying the matter.
2. **Introduction**: Identify the sender (my client), state the attorney-client representation, and provide a one-sentence summary of the claim.
3. **Statement of Facts**: Present the relevant facts in chronological order. Be precise with dates, amounts, and parties. Maintain a firm but professional tone -- assertive, not aggressive.
4. **Legal Basis**: Identify the legal theories supporting the claim (breach of contract, negligence, statutory violation, etc.). Reference the applicable legal standards without citing specific case law (I will add jurisdiction-specific citations during review).
5. **Damages**: Itemize the damages claimed, including categories (compensatory, consequential, incidental) and specific amounts where known. If amounts are not yet determined, state that damages are being calculated and will be substantiated.
6. **Demand**: State clearly what is demanded (payment amount, specific performance, cessation of conduct, etc.) and the deadline for response (typically 10-30 days).
7. **Consequences of Non-Compliance**: State professionally what actions will follow if the demand is not met (filing of lawsuit, referral to regulatory agency, etc.). Avoid threats that could be considered improper.
8. **Closing**: Professional sign-off with space for attorney signature, bar number, and firm contact information.
Tone: Professional, firm, and measured. This letter should convey seriousness and preparation without being hostile or inflammatory.
Length: 2-3 pages.
Here are the facts of the matter:
Client name: [YOUR CLIENT'S NAME]
Opposing party: [OPPOSING PARTY'S NAME AND ADDRESS]
Nature of dispute: [BRIEF DESCRIPTION -- e.g., "Breach of a commercial lease agreement"]
Key facts: [CHRONOLOGICAL SUMMARY OF WHAT HAPPENED -- include dates, amounts, communications, and relevant documents]
Legal theories: [e.g., "Breach of contract under the lease agreement dated March 15, 2024; violation of state consumer protection statute"]
Damages sought: [e.g., "$45,000 in unpaid rent, $12,000 in property damage, attorney's fees"]
Demand: [WHAT YOU WANT -- e.g., "Full payment of $57,000 plus attorney's fees within 21 days"]
Jurisdiction: [STATE/COUNTRY]
Tips
Provide as much factual detail as possible in the bracketed fields. The more specific your input, the more useful the draft. Include exact dates, dollar amounts, contract section numbers, and the names of individuals involved.
After receiving the draft, add your jurisdiction-specific case citations and statutory references. The AI intentionally leaves these as placeholders so that you can insert verified, current authority.
Run the draft through a second AI pass asking: 'Review this demand letter for tone. Flag any language that could be perceived as threatening, unprofessional, or that could constitute an improper threat under the Model Rules of Professional Conduct.'
Customize the response deadline based on the urgency and norms in your jurisdiction. Thirty days is standard for most commercial disputes; shorter deadlines may be appropriate where a statute of limitations is imminent.
Consider whether your jurisdiction requires specific language for certain types of demand letters (e.g., FDCPA compliance for debt collection, pre-suit notice requirements for government entities).
Cautions
A demand letter is a legal document that can be introduced as evidence. Every factual statement must be accurate and supportable. Verify all facts against your file before sending.
Do not include case citations from the AI output without verifying them in a legal research database. AI frequently generates plausible-sounding but nonexistent case citations.
Be aware of ethical rules in your jurisdiction regarding threats of criminal prosecution to gain advantage in a civil matter. The AI may not know these boundaries.
Some jurisdictions have specific pre-suit notice requirements (e.g., tort claims against government entities, DTPA demands in Texas). The AI may not include jurisdiction-specific procedural prerequisites.
Never send a demand letter without a supervising attorney's review. This applies to AI-drafted letters just as it does to letters drafted by a junior associate.
What This Quick Win Does
Drafting a demand letter from scratch typically takes 30-90 minutes, depending on the complexity of the dispute. This Quick Win produces a structured, professional first draft in about 5 minutes — giving you a solid framework that you then refine with your legal judgment, verified citations, and client-specific strategy.
The resulting draft follows the standard structure that experienced litigators use: facts first, then legal basis, then a clear demand with a deadline. It saves you from staring at a blank page and lets you focus your time on the substantive legal analysis and strategic choices that matter.
How to Use It
Step 1: Gather Your Facts
Before opening the AI tool, assemble the key information from your file:
Client’s full legal name and contact information
Opposing party’s full legal name and address
A chronological summary of events with specific dates
The legal basis for the claim (which contract was breached, which statute was violated)
A clear accounting of damages
What you are demanding and your proposed deadline
The quality of your input directly determines the quality of the output. Vague facts produce vague letters.
Step 2: Fill In the Prompt Template
Copy the prompt above and replace each bracketed placeholder with the specific facts of your case. Be precise — include contract dates, section numbers, dollar amounts, and the names of individuals who made key representations or took key actions.
Step 3: Generate and Review the Draft
Paste the completed prompt into ChatGPT or Claude and submit. The AI will generate a 2-3 page demand letter draft.
Review the draft carefully for:
Factual accuracy: Does every stated fact match your file? AI can inadvertently alter dates, amounts, or the sequence of events.
Tone: Is the letter firm without being unprofessional? Adjust language that feels too aggressive or too passive for the situation.
Legal theories: Are the legal theories correctly stated? Add jurisdiction-specific statutory references and case citations from your own research.
Demand specificity: Is the demand clear, specific, and achievable? Vague demands invite vague responses.
Step 4: Add Jurisdiction-Specific Elements
The prompt intentionally avoids citing specific case law because AI-generated citations are unreliable. Now is the time to add:
Specific statutory citations (e.g., “pursuant to Cal. Civ. Code Section 1942.4”)
Verified case authority supporting your legal theories
Jurisdiction-specific procedural requirements (pre-suit notice periods, demand letter requirements under consumer protection statutes, etc.)
Any required regulatory language
Step 5: Client Review
Share the draft with your client for factual verification before sending. The client can confirm dates, amounts, and the accuracy of the factual narrative. This is good practice regardless of how the letter was drafted.
Adapting the Prompt for Different Dispute Types
This prompt works across multiple dispute categories. Adjust the “Legal theories” and “Damages” fields for:
Breach of contract: Specify the contract, the breached provisions, and the measure of damages
Personal injury: Include the date of incident, nature of injuries, medical treatment, and demand for insurance policy limits or a specific sum
Employment disputes: Reference the employment relationship, the adverse action, and applicable employment statutes
Landlord-tenant: Cite the lease provisions, the nature of the breach, and any statutory remedies
Intellectual property: Identify the IP right, the infringing conduct, and the demand (cease and desist, licensing, damages)
What This Does Not Replace
The AI generates a competent structural framework. It does not replace:
Strategic judgment about whether to send a demand letter at all, versus filing directly
Tone calibration based on the relationship between the parties and the likelihood of settlement
Jurisdictional expertise about required pre-suit procedures, limitations periods, or regulatory filings
Citation verification — every legal authority must come from your own research
Client counseling about the risks and benefits of the proposed course of action
Beginner · Any · 1 min
Extract All Dates and Deadlines from a Document
Instantly pull every date, deadline, milestone, and time-sensitive obligation from any legal document into a structured table -- never miss a critical date again.
Prompt
You are a meticulous legal assistant performing a deadline and date extraction review. I will provide a legal document. Please extract every date, deadline, time period, and time-sensitive obligation mentioned in the document and present them in a structured table with the following columns:
| # | Date / Time Period | Type | Description | Section Reference | Action Required | Priority |
|---|-------------------|------|-------------|-------------------|----------------|----------|
For the "Type" column, categorize each entry as one of:
- **Fixed Date**: A specific calendar date (e.g., "March 15, 2025")
- **Relative Deadline**: A deadline calculated from an event (e.g., "within 30 days of notice")
- **Recurring**: A repeating obligation (e.g., "quarterly reports due within 15 days of quarter end")
- **Milestone**: A project or performance milestone
- **Statute of Limitations**: A legal time bar
- **Condition**: A date-dependent condition (e.g., "if not completed by December 31...")
For the "Priority" column, rate each as:
- **Critical**: Missing this date could result in loss of rights, default, or legal consequences
- **Important**: Missing this date would breach an obligation but may be curable
- **Administrative**: Routine or informational dates
After the table, provide:
1. A **"Critical Dates Summary"** listing only the Critical-priority items in chronological order
2. A **"Calculated Deadlines"** section that computes actual calendar dates for any relative deadlines, using today's date or the document's effective date as the reference point
3. Any **"Ambiguous Dates"** where the deadline is unclear, conflicting, or dependent on an event that has not yet occurred
Here is the document:
[PASTE THE DOCUMENT TEXT HERE]
Tips
This prompt works with virtually any legal document: contracts, court orders, settlement agreements, loan documents, corporate bylaws, regulatory filings, or legislation.
For relative deadlines (e.g., '30 days after closing'), tell the AI the reference date so it can calculate the actual calendar deadline. Add a line like: 'The closing date was January 15, 2025. Please calculate all relative deadlines from this date.'
Copy the output table directly into a spreadsheet or calendar system. Most AI tools format tables in Markdown, which can be pasted into Excel, Google Sheets, or project management tools.
Run this extraction on every new document that enters your file. It takes one minute and can prevent a missed deadline that costs thousands of dollars -- or worse.
For multiple related documents (e.g., a loan agreement and its security documents), process them together or sequentially and ask the AI to consolidate all deadlines into a single master timeline.
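If you paste the output table into a spreadsheet (per the tip above) and the pipes do not import cleanly, a few lines of Python can convert the AI's Markdown table to CSV first. This is a sketch using only the standard library; `markdown_table_to_csv` is an illustrative helper name, and it assumes the table uses pipe-delimited rows with a `|---|` separator line (cells containing literal pipes would need extra handling).

```python
import csv
import io
import re

def markdown_table_to_csv(md: str) -> str:
    """Convert a pipe-delimited Markdown table to CSV text.

    Skips the Markdown alignment row (e.g. |---|:---:|) and strips
    the leading/trailing pipes from each data line.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    for line in md.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if all(re.fullmatch(r":?-{3,}:?", c) for c in cells):
            continue  # this is the header separator row, not data
        writer.writerow(cells)
    return out.getvalue()

table = """
| # | Date | Type | Priority |
|---|------|------|----------|
| 1 | March 15, 2025 | Fixed Date | Critical |
"""
print(markdown_table_to_csv(table))
```

The resulting CSV imports directly into Excel, Google Sheets, or most docketing tools.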
Cautions
Verify every extracted date against the original document. AI can misread dates, especially when documents use inconsistent formatting (e.g., mixing DD/MM/YYYY and MM/DD/YYYY formats).
Relative deadlines calculated by the AI must be verified against the actual calendar, accounting for weekends, holidays, and jurisdiction-specific counting rules (e.g., Federal Rules of Civil Procedure Rule 6 counting).
Some deadlines depend on events that may not have occurred yet. The AI will flag these as ambiguous, but you must track the triggering events separately.
Court filing deadlines have specific computation rules that vary by jurisdiction. Do not rely on AI calculations for filing deadlines without verifying against your jurisdiction's rules of procedure.
This extraction captures dates mentioned in the text. It cannot identify deadlines implied by law but not stated in the document (e.g., statutory notice periods, regulatory filing deadlines).
What This Quick Win Does
Missed deadlines are one of the leading causes of legal malpractice claims. Manually scanning a long document for every date, deadline, and time-sensitive obligation is tedious and error-prone — especially in complex agreements with relative deadlines, conditions, and cross-references.
This Quick Win extracts every temporal reference from a document in under one minute and organizes them into a prioritized, actionable table. It is one of the simplest and highest-value applications of AI in legal practice.
How to Use It
Step 1: Prepare the Document
Copy the full text of the document you want to analyze. This works with any document type:
Contracts and amendments
Court orders and scheduling orders
Settlement agreements
Loan documents and security instruments
Corporate governance documents
Regulatory filings and compliance documents
Legislation and regulations
If working with a scanned PDF, run OCR first to convert the image to searchable text.
Step 2: Add Context for Relative Deadlines
If the document contains relative deadlines (e.g., “within 30 days of the effective date”), add a line to the prompt specifying the reference dates:
“The effective date of this agreement is March 1, 2025. The closing date was April 15, 2025. Please calculate all relative deadlines using these reference dates.”
This allows the AI to compute actual calendar dates rather than leaving them as relative periods.
Step 3: Run the Prompt
Paste the prompt and the document into ChatGPT or Claude. The AI will return a structured table, a critical dates summary, calculated deadlines, and any ambiguous dates.
Step 4: Verify and Calendar
This is the most important step:
Cross-check every extracted date against the original document. Open the document alongside the AI output and verify each entry.
Apply jurisdiction-specific rules for computing time periods. The AI calculates calendar days, but your jurisdiction may exclude weekends and holidays, or use “business days” for certain deadlines.
Enter verified dates into your calendaring system with appropriate advance reminders. Most firms use 30-day, 14-day, and 7-day advance warnings for critical deadlines.
Flag ambiguous dates for further review. If a deadline depends on an event that has not yet occurred, create a tickler to revisit when the triggering event happens.
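The weekend-and-holiday point above is concrete enough to illustrate. The sketch below shows the general roll-forward pattern (the approach FRCP 6(a)(1)(C) takes for deadlines landing on a weekend or legal holiday); the holiday list is hypothetical, state counting rules differ, and this is no substitute for your jurisdiction's rules or your docketing system.

```python
from datetime import date, timedelta

# Hypothetical holiday list for illustration only; real calendaring
# must use the court's official holiday schedule.
HOLIDAYS = {date(2025, 7, 4), date(2025, 9, 1)}

def deadline(trigger: date, days: int, holidays=HOLIDAYS) -> date:
    """Count calendar days from the trigger, then roll a deadline that
    lands on a weekend or listed holiday forward to the next business
    day. Illustrates one counting convention; verify your own rules."""
    d = trigger + timedelta(days=days)
    while d.weekday() >= 5 or d in holidays:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

# A 30-day response period triggered on June 5, 2025 lands on
# Saturday, July 5, so it rolls forward to Monday, July 7.
print(deadline(date(2025, 6, 5), 30))  # → 2025-07-07
```

Note how a naive calendar-day count would report July 5: this is exactly the kind of discrepancy to catch when verifying AI-calculated deadlines.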
Step 5: Create a Master Timeline (Optional)
If you are managing multiple documents for a single matter (e.g., a transaction with a purchase agreement, financing documents, and regulatory approvals), run the extraction on each document and then ask the AI:
“Here are deadline extractions from three related documents. Please consolidate them into a single master timeline, sorted chronologically, and flag any conflicts or overlapping deadlines.”
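If you prefer to consolidate locally rather than via a follow-up prompt, merging verified extractions is a one-liner once dates are structured. The data below is hypothetical and the field layout is an assumption; the point is the chronological sort and the collision check.

```python
from collections import Counter
from datetime import date

# Hypothetical verified extractions: (date, source document, description)
purchase_agreement = [
    (date(2025, 5, 1), "Purchase Agreement", "Deliver disclosure schedules"),
]
loan_agreement = [
    (date(2025, 5, 1), "Loan Agreement", "Satisfy closing conditions"),
    (date(2025, 6, 15), "Loan Agreement", "First interest payment"),
]

# Consolidate into one master timeline, sorted chronologically.
timeline = sorted(purchase_agreement + loan_agreement)
for d, doc, desc in timeline:
    print(d.isoformat(), doc, desc, sep=" | ")

# Flag dates where deadlines from different documents collide.
counts = Counter(d for d, *_ in timeline)
collisions = [d for d, n in counts.items() if n > 1]
```

Colliding dates (here, May 1) are worth flagging early: overlapping closing-day obligations across deal documents are easy to miss when each file is calendared separately.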
Real-World Applications
Transaction management: Extract all closing conditions, delivery deadlines, and post-closing obligations from deal documents
Case management: Pull all deadlines from a scheduling order the moment it is entered
Compliance: Identify all reporting deadlines from a regulatory consent order
Estate administration: Identify all statutory deadlines from probate orders and trust instruments
What This Does Not Replace
This tool extracts dates that appear in the document text. It does not:
Identify legally implied deadlines not stated in the document (e.g., statutory filing periods, appeal windows, or notice requirements imposed by law rather than by contract)
Apply jurisdiction-specific computation rules for counting days (you must do this)
Account for holidays and court closures unless you provide a specific calendar
Substitute for a proper docketing system — this is a first-pass extraction tool, not a case management platform
Intermediate · Litigation · 3 min
Draft Discovery Request Questions
Generate targeted interrogatories, requests for production, and requests for admission tailored to your case -- a comprehensive first draft to refine with your litigation strategy.
Prompt
You are a senior litigation attorney preparing written discovery for a civil case. Based on the case information I provide, draft the following discovery requests:
**Part 1: Interrogatories (15-20 questions)**
Draft interrogatories that seek to establish:
- The opposing party's factual account of the disputed events
- The identity of all witnesses with knowledge of the relevant facts
- The existence and location of relevant documents and communications
- The opposing party's legal theories and factual basis for their claims or defenses
- Damages calculations and the basis for any claimed amounts
- Any expert witnesses the opposing party intends to call and the substance of their opinions
- Corporate structure, agency relationships, or authority (if applicable)
- Insurance coverage applicable to the claims
**Part 2: Requests for Production (15-20 requests)**
Draft document requests targeting:
- All communications between specified parties during the relevant time period
- Contracts, agreements, and amendments relevant to the dispute
- Internal documents reflecting decision-making related to the disputed conduct
- Financial records supporting or undermining the claimed damages
- Electronically stored information (ESI) including emails, text messages, and messaging app communications
- Photographs, videos, or recordings related to the events
- Expert reports, analyses, or studies
- Insurance policies and coverage correspondence
**Part 3: Requests for Admission (10-15 requests)**
Draft requests for admission that:
- Establish undisputed foundational facts to narrow the issues for trial
- Authenticate key documents
- Confirm the genuineness of signatures
- Establish the timing of key events
- Pin down legal contentions (e.g., "Admit that you owed a duty of care to Plaintiff")
**Formatting requirements:**
- Number each request sequentially within its category
- Begin each interrogatory with standard instructions and definitions (define "you," "document," "communication," "related to," and "concerning")
- Include a standard instruction that interrogatories are continuing in nature
- Make requests specific enough to be enforceable but broad enough to capture relevant information
- Avoid compound questions where possible (courts often sustain objections to compound interrogatories)
Here is the case information:
Case type: [e.g., "Breach of commercial lease agreement"]
Your client's role: [Plaintiff / Defendant]
Opposing party: [Name and role]
Key facts: [Summary of the dispute -- what happened, when, and what is contested]
Key issues: [The central factual and legal disputes -- e.g., "Whether Defendant breached Section 4.2 of the Lease by failing to maintain the HVAC system; amount of damages caused by the breach"]
Jurisdiction: [State/Federal and specific court if known]
Any discovery limitations: [e.g., "Court has limited interrogatories to 25" or "Proportionality concerns due to volume of ESI"]
Tips
Many jurisdictions limit the number of interrogatories (typically 25, including subparts, under FRCP Rule 33). Tell the AI your limit so it stays within bounds.
After generating the draft, review each request against your case theory. Remove requests that do not advance your specific claims or defenses, and add targeted requests based on facts unique to your case.
Use the requests for admission strategically. Well-crafted RFAs can eliminate the need to prove foundational facts at trial, saving significant time and expense.
Consider running a follow-up prompt: 'Now review these discovery requests from the opposing party's perspective. What objections would you raise to each request, and how should I revise to preempt those objections?'
For complex commercial litigation, add industry-specific document categories to the requests for production (e.g., 'all board meeting minutes discussing the transaction' or 'all regulatory filings related to the product').
Cautions
Discovery requests must comply with your jurisdiction's rules of procedure. FRCP Rules 26, 33, 34, and 36 govern federal discovery; state rules vary significantly. Verify that the format, number, and scope comply with applicable rules.
AI-generated requests may be overbroad. Courts increasingly enforce proportionality requirements (FRCP Rule 26(b)(1)). Review each request for proportionality to the needs of the case.
Do not include requests that seek privileged information, as this undermines your credibility with the court. Review for any requests that inadvertently target attorney-client or work product material.
The definitions and instructions section must conform to your jurisdiction's conventions. Some courts have model definitions; use those rather than AI-generated versions.
These requests are a starting point. Effective discovery is driven by your case theory and litigation strategy, not by a template. Cut what does not serve your case and add what does.
What This Quick Win Does
Drafting written discovery from scratch is one of the more formulaic yet time-consuming tasks in litigation. An experienced litigator knows the standard categories of information to request, but tailoring those categories to a specific case, formatting them properly, and ensuring they are defensible against objections takes substantial time.
This Quick Win generates a comprehensive first draft of interrogatories, document requests, and requests for admission tailored to your case facts. It gives you a working document to refine, not a finished product to file.
How to Use It
Step 1: Define Your Discovery Strategy
Before generating discovery requests, clarify what you need to prove or disprove. Ask yourself:
What are the elements of each claim or defense?
What facts does the opposing party control that you need?
What documents should exist based on the nature of the dispute?
What witnesses does the opposing party have access to?
What are the key contested issues?
Write these down in the “Key issues” field of the prompt. The more precisely you define what you are looking for, the more targeted the AI output will be.
Step 2: Fill In the Case Information
Replace each bracketed placeholder with the specifics of your case. Be detailed in the “Key facts” section — mention specific contract provisions, dates, communications, and individuals involved. This context allows the AI to generate requests that are specific to your dispute rather than generic templates.
Step 3: Generate and Triage
Submit the prompt and review the output. Approach it as you would a first draft from a junior associate:
Keep requests that directly support your case theory
Cut requests that are too broad, duplicative, or tangential
Revise requests that have the right idea but need tighter language
Add case-specific requests that the AI missed because they depend on facts you know from your investigation
Step 4: Check the Rules
Before finalizing, verify:
Number limits: Does your jurisdiction or the court’s scheduling order limit interrogatories? FRCP allows 25 (including discrete subparts); many state courts have different limits.
Format requirements: Some courts require specific formatting for discovery requests. Check local rules.
Proportionality: Under FRCP Rule 26(b)(1), discovery must be proportional to the needs of the case. Review each request against this standard.
Compound questions: Ensure interrogatories are not impermissibly compound. Each should ask one thing.
Step 5: Anticipate Objections
A powerful follow-up prompt:
“Review these discovery requests from the responding party’s perspective. For each request, identify the most likely objections (overbreadth, undue burden, privilege, proportionality, vague and ambiguous) and suggest revisions to make the request more defensible.”
This step often improves the final product significantly.
Adapting for Different Case Types
Modify the “Key facts” and “Key issues” fields for your specific practice area:
Employment discrimination: Focus on the decision-making process, comparator evidence, personnel files, and communications about the plaintiff
Trade secret misappropriation: Focus on access to confidential information, departing employee’s activities, competitive intelligence
Insurance coverage: Target the policy, claims file, coverage analysis, and communications with the insured
What This Does Not Replace
AI-generated discovery requests are a starting framework. They do not replace:
Case-specific strategy that targets the precise factual gaps in your case
Knowledge of local practice about what judges in your court expect and tolerate
Proportionality judgment that balances the value of information against the burden of production
Experience-based intuition about what documents and testimony will actually move the needle
BeginnerAny 2 min
Simplify Legal Language for a Client Email
Transform dense legal jargon into clear, plain-language explanations your clients can actually understand -- maintaining accuracy while building trust and transparency.
Prompt
You are an experienced attorney who excels at client communication. I need to explain a legal concept or situation to my client in clear, plain language. The client is not a lawyer and should not need a legal dictionary to understand this communication.
Please rewrite the following legal text (or draft an email based on the situation I describe) following these guidelines:
1. **Plain Language**: Replace legal jargon with everyday words. Where a legal term is essential and cannot be avoided, define it in parentheses the first time it appears.
2. **Short Sentences**: Keep sentences under 25 words where possible. Break complex ideas into multiple sentences.
3. **Active Voice**: Use active voice ("The court ruled..." not "It was ruled by the court...").
4. **Structure**: Use short paragraphs (2-3 sentences max). Use bullet points for lists of items, steps, or options.
5. **What It Means for Them**: After explaining each point, add a brief "what this means for you" sentence that connects the legal concept to the client's real-world situation.
6. **Action Items**: If the client needs to do anything, list the specific action items clearly at the end of the email, with deadlines if applicable.
7. **Tone**: Professional, warm, and reassuring. The client should feel informed and supported, not overwhelmed or alarmed.
8. **Accuracy**: Do not oversimplify to the point of inaccuracy. If a legal nuance matters, explain it simply rather than omitting it.
Format: Draft this as a complete email with a subject line, greeting, body, action items (if any), and professional sign-off.
Context about the client: [e.g., "Small business owner, no legal background, anxious about an ongoing contract dispute"]
Here is the legal text or situation to explain:
[PASTE THE LEGAL TEXT TO SIMPLIFY, OR DESCRIBE THE SITUATION YOU NEED TO EXPLAIN]
Tips
Tell the AI about your client's background and emotional state. A worried client needs more reassurance. A sophisticated business client can handle slightly more technical language. Context shapes tone.
If explaining a court order or legal document, paste the actual text and ask for a plain-language translation. Then weave that translation into the email.
For sensitive topics (adverse rulings, fee increases, bad news), add to the prompt: 'The client may be upset by this news. Frame the communication with empathy, acknowledge their likely concerns, and focus on what we can do next.'
Read the output aloud. If any sentence makes you pause or re-read, it needs to be simpler. Client emails should be understandable on the first reading.
Use this for any client-facing communication: retainer explanations, case status updates, settlement offers, billing summaries, or procedural next steps.
Cautions
Simplifying language must not change the legal meaning. Review the AI output to ensure it does not inadvertently promise outcomes, misstate legal standards, or omit important qualifications.
Do not include specific legal advice in a simplified email without ensuring it accurately reflects your analysis. The AI is simplifying language, not providing legal counsel.
Be careful with communications that could later be discoverable. Client emails are generally privileged, but ensure the content is consistent with your legal strategy.
Some legal terms have precise meanings that plain-language equivalents cannot fully capture (e.g., 'reasonable doubt,' 'fiduciary duty,' 'force majeure'). When precision matters, use the legal term and define it rather than substituting an imprecise everyday word.
Tone-check the output. AI sometimes produces language that sounds condescending when trying to simplify. Your client is not a child -- they are an intelligent adult who happens not to have a law degree.
What This Quick Win Does
One of the most common client complaints is that their lawyer speaks in a language they cannot understand. Phrases like “the court granted summary judgment on the cross-claims” or “your indemnification obligation survives termination” are second nature to attorneys but meaningless to most clients.
This Quick Win transforms legal jargon into clear, plain-language client communications. It helps you draft emails that clients actually read, understand, and appreciate — without sacrificing legal accuracy.
How to Use It
Step 1: Identify What Needs to Be Communicated
Gather the information you need to convey. This could be:
A court ruling or order that affects the client’s case
A contract provision the client needs to understand before signing
A status update on pending litigation or a transaction
An explanation of their legal options and the pros and cons of each
A description of next steps and what the client needs to do
A billing or fee-related communication
Step 2: Provide Client Context
The more the AI knows about your client, the better it can calibrate the language and tone. In the “Context about the client” field, include:
Their profession or background (helps calibrate vocabulary)
Their emotional state (anxious, frustrated, optimistic, confused)
Their level of familiarity with the legal process
The relationship history (new client versus long-standing relationship)
Step 3: Paste the Legal Content
You can either:
Paste the actual legal text (a court order paragraph, a contract clause, a statute section) and ask the AI to translate it into plain language within an email, or
Describe the situation (e.g., “The judge denied our motion to dismiss. We need to explain what this means and what happens next. The client was expecting the case to be dismissed.”)
Step 4: Review for Accuracy and Tone
Read the AI output with two questions in mind:
Is this accurate? Does the simplified version faithfully convey the legal reality? Watch for:
Oversimplifications that create false expectations
Omitted qualifications (e.g., “you will win” instead of “we believe we have a strong argument”)
Language that could be read as a guarantee of outcome
Is this appropriate? Does the tone match the situation and the client?
Not too casual for a serious matter
Not too alarming for a routine development
Not condescending in its simplifications
Warm and professional throughout
Step 5: Add Your Personal Touch
The best client communications have the attorney’s personality and relationship awareness woven in. After the AI gives you a strong structural draft:
Add a personal reference if appropriate (“As we discussed when we met last Tuesday…”)
Adjust the level of formality to match your relationship with the client
Ensure action items are realistic and include your availability for follow-up questions
Example: Before and After
Before (Legal Jargon):
“The court entered an order granting Defendant’s motion for partial summary judgment on Counts III and IV of the Complaint, holding that the economic loss doctrine bars Plaintiff’s negligence and negligent misrepresentation claims where the alleged damages arise solely from the parties’ contractual relationship. The breach of contract claim (Count I) and fraud claim (Count II) survive and will proceed to discovery.”
After (Plain Language):
Subject: Update on Your Case — Court Ruling on Our Claims
Dear [Client],
I want to update you on a ruling the court issued today. Here is what happened and what it means for your case.
What the court decided:
The judge dismissed two of our four claims against [Defendant]. Specifically, the court ruled that because your dispute with [Defendant] is based on a contract, you cannot also sue them for negligence related to the same issue. This is a legal rule called the “economic loss doctrine” — it essentially says that when there is a contract between two parties, the contract is the main remedy, not a separate negligence claim.
What this means for you:
Your two strongest claims — breach of contract and fraud — are still active and moving forward. The two dismissed claims were supporting claims, and their dismissal does not weaken the core of your case.
What happens next:
We now enter the discovery phase, where both sides exchange documents and information. I will be in touch next week with a timeline and what I will need from you.
Please do not hesitate to call me if you have questions.
What This Does Not Replace
This tool helps you communicate more clearly. It does not replace:
Legal judgment about what to communicate and when
Strategic considerations about what information to include or withhold in a given communication
Relationship management that requires your personal knowledge of the client
Ethical review to ensure the communication does not inadvertently create commitments, waive privileges, or misstate your legal position
BeginnerCorporate 3 min
Compare Two Contract Versions
Identify every difference between two contract drafts -- additions, deletions, and modified language -- with a clear analysis of how each change affects your client's rights and obligations.
Prompt
You are a senior contract attorney performing a detailed comparison of two versions of a contract. I will provide Version A (the earlier draft) and Version B (the revised draft). Please perform a comprehensive redline analysis with the following structure:
**1. Summary of Changes**
Provide a brief executive summary (5-10 bullet points) of the most significant changes between the two versions, listed in order of importance to the parties' rights and obligations.
**2. Detailed Change Log**
For each change between Version A and Version B, provide:
| # | Section | Change Type | Version A Language | Version B Language | Impact Assessment |
|---|---------|-------------|-------------------|-------------------|-------------------|
For "Change Type," classify each as:
- **Addition**: New language added in Version B that does not appear in Version A
- **Deletion**: Language removed from Version B that appeared in Version A
- **Modification**: Language changed between versions
- **Restructuring**: Same substance moved to a different section or reorganized
For "Impact Assessment," rate each change as:
- **Material**: Changes a party's rights, obligations, or risk allocation
- **Clarifying**: Adds precision without changing substantive meaning
- **Administrative**: Formatting, numbering, or non-substantive changes
- **Potentially Adverse**: Could disadvantage [specify which party]
**3. Risk Analysis**
After the change log, provide a section titled "Risk Analysis" that:
- Identifies the 3-5 most significant changes from the perspective of each party
- Flags any changes that shift risk allocation, limit remedies, expand obligations, or narrow protections
- Notes any changes that may create ambiguity or internal inconsistency within the revised draft
**4. Missing Elements**
Note any provisions present in Version A that were removed entirely in Version B, and assess whether the removal appears intentional (substantive deletion) or potentially accidental (drafting oversight).
Here are the two versions:
=== VERSION A (Earlier Draft) ===
[PASTE VERSION A HERE]
=== VERSION B (Revised Draft) ===
[PASTE VERSION B HERE]
Tips
For the best results, paste clean text without headers, footers, or page numbers. These formatting artifacts can confuse the comparison.
If the contracts are too long to paste both in a single message, split by section. For example: 'Compare Section 5 (Indemnification) from Version A with Section 5 from Version B.'
After reviewing the change log, follow up with targeted questions: 'Is the new limitation of liability in Section 8.2 consistent with market standard for this type of agreement?' or 'Does the revised termination clause give our client adequate protection?'
Use this in negotiation preparation. Before a call with opposing counsel, run the comparison to have a clear inventory of every change they made, ranked by significance.
This prompt works for any two versions of any document -- not just contracts. Use it to compare legislation (original vs. amended), policies (old vs. new), or court orders (proposed vs. entered).
Cautions
AI comparison is not a substitute for a proper redline tool (Microsoft Word Track Changes, contract management platforms). For documents with complex formatting, tables, or exhibits, use dedicated redline software for the initial comparison and AI for the analysis of what the changes mean.
The AI may miss subtle changes in punctuation, capitalization, or defined term usage that can have significant legal implications. A change from 'shall' to 'may,' or from 'Affiliate' to 'affiliate,' can alter legal meaning dramatically.
The impact assessment reflects general contract principles, not jurisdiction-specific law. A clause that the AI rates as 'Clarifying' may have material implications under the governing law of your specific contract.
If the two versions have different section numbering or structure, the AI may have difficulty mapping equivalent provisions. In that case, compare section by section manually and use the AI to analyze each section pair.
Do not paste confidential client contracts into consumer AI tools without appropriate data handling safeguards.
What This Quick Win Does
Contract negotiation involves multiple rounds of revisions. Identifying what the other side changed — and understanding the significance of each change — is critical to protecting your client’s interests. Traditional redline tools show you what changed but not why it matters.
This Quick Win goes beyond simple redlining. It produces a structured comparison that categorizes every change by type and significance, then provides a risk analysis highlighting the changes that require your closest attention. In 3 minutes, you get the analytical equivalent of what might take 30-60 minutes of manual side-by-side review.
How to Use It
Step 1: Prepare Both Versions
Extract clean text from both contract versions. Remove:
Headers and footers
Page numbers
Track changes markup (accept or reject all changes in each version first)
Watermarks or draft stamps
You want two clean text versions — one representing the earlier draft and one representing the current revision.
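If you extract the drafts programmatically (e.g., from PDF-to-text output), the cleanup above can be partly automated. The sketch below is illustrative only: the function name and the page-number patterns are assumptions you should adapt to your documents, and it heuristically treats any line repeated three or more times as a running header or footer.

```python
import re
from collections import Counter

def clean_extracted_text(raw: str) -> str:
    """Strip page numbers and repeated header/footer lines from extracted text."""
    lines = raw.splitlines()
    # Drop lines that are only a page number ("12", "- 12 -", "Page 12 of 40").
    page_num = re.compile(
        r"^\s*(-?\s*\d+\s*-?|page\s+\d+(\s+of\s+\d+)?)\s*$", re.IGNORECASE
    )
    kept = [ln for ln in lines if not page_num.match(ln)]
    # Lines repeated 3+ times are likely running headers/footers;
    # keep the first occurrence and drop the repeats.
    counts = Counter(ln.strip() for ln in kept if ln.strip())
    seen: set[str] = set()
    out = []
    for ln in kept:
        key = ln.strip()
        if key and counts[key] >= 3:
            if key in seen:
                continue
            seen.add(key)
        out.append(ln)
    return "\n".join(out)
```

Always eyeball the cleaned output before pasting: the heuristic can remove a legitimately repeated clause line.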
Step 2: Paste and Label Clearly
The prompt uses clear delimiters (=== VERSION A === and === VERSION B ===) to help the AI distinguish between the two versions. Maintain this labeling. If you reverse them, the AI will report additions as deletions and vice versa.
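Before pasting, you can also generate a ground-truth line diff locally and use it later to spot-check the AI's change log (this is one way to catch the subtle one-word edits, like "shall" to "may," that the Cautions section warns about). The helper name below is illustrative; it is a thin wrapper around Python's standard difflib.

```python
import difflib

def raw_diff(version_a: str, version_b: str) -> list[str]:
    """Line-level diff of the two drafts -- a ground truth for
    spot-checking the AI's change log."""
    return list(difflib.unified_diff(
        version_a.splitlines(),
        version_b.splitlines(),
        fromfile="Version A",
        tofile="Version B",
        lineterm="",
    ))
```

Any line in this raw diff that does not appear somewhere in the AI's change log deserves a second look.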
Step 3: Review the Summary First
Start with the executive summary of changes. This gives you an immediate sense of whether the revisions are primarily cosmetic or substantive. If the summary reveals material shifts in risk allocation, you know to review the detailed change log closely.
Step 4: Focus on Material and Potentially Adverse Changes
In the detailed change log, filter your attention:
Material changes: These alter rights, obligations, or risk allocation. Each one requires your legal analysis.
Potentially adverse changes: The AI has flagged these as disadvantageous to one party. Determine whether they are acceptable given the overall deal dynamics.
Clarifying changes: Review these quickly to confirm they truly are clarifying and not substantive changes disguised as “clean-up.”
Administrative changes: A quick scan is sufficient.
Step 5: Prepare Your Response
Use the AI output to organize your response to the other side’s draft. You might follow up with:
“Based on this comparison, draft a list of comments for a negotiation call. For each material change, provide a brief talking point explaining our position.”
This creates a negotiation preparation memo directly from the comparison.
Multi-Round Negotiation Workflow
For contracts going through many rounds:
Run this comparison after each round of revisions
Ask the AI to track which changes from your previous round were accepted, rejected, or counter-proposed
Build a running log of negotiation positions across rounds
“Here is the change log from Round 2 and the new Version C. Please identify which Round 2 changes were accepted in Version C, which were rejected, and what new changes appear.”
What This Does Not Replace
AI comparison is a powerful analysis accelerator, but it does not replace:
Professional redline tools for documents with complex formatting, tables, or embedded objects
Legal judgment about which changes are commercially acceptable for your client’s specific situation
Negotiation strategy about which points to concede and which to hold firm on
Jurisdiction-specific analysis of how changed terms interact with the governing law
A careful human read of the final execution version before signing — the last review must always be yours
IntermediateLitigation 5 min
Generate a Case Chronology from Documents
Build a detailed, organized case timeline from raw documents and notes -- the backbone of every well-prepared litigation matter, created in minutes instead of hours.
Prompt
You are an experienced litigation paralegal building a comprehensive case chronology. I will provide raw materials (documents, notes, deposition excerpts, communications, or a factual narrative). Please extract every event and organize them into a detailed chronological timeline with the following structure:
| # | Date | Time (if known) | Event Description | Persons Involved | Source Document | Category | Significance |
|---|------|-----------------|-------------------|------------------|----------------|----------|-------------|
**Instructions:**
1. **Date Format**: Use YYYY-MM-DD format for consistent sorting. If only a month or year is known, note it as "2024-06-XX" or "2024-XX-XX" and flag it as an approximate date.
2. **Event Description**: Write each event as a clear, factual statement. Use neutral language -- do not characterize events as favorable or unfavorable. Include specific details: dollar amounts, names, locations, and quoted language where significant.
3. **Persons Involved**: List all individuals and entities involved in or referenced by each event.
4. **Source Document**: Identify which document, deposition, or communication establishes each fact. Use consistent abbreviations (e.g., "Ex. A - Lease Agreement," "Dep. Smith 45:12-46:3," "Email - Jones to Smith 3/15/24").
5. **Category**: Classify each event as one of:
- **Contract/Agreement**: Formation, execution, amendment, or breach of agreements
- **Communication**: Letters, emails, calls, meetings
- **Performance**: Actions taken in performance or non-performance of obligations
- **Legal/Regulatory**: Court filings, regulatory actions, legal notices
- **Financial**: Payments, invoices, financial events
- **Personnel**: Hiring, termination, role changes
- **Key Decision**: Significant decisions by any party
- **Damage Event**: Events giving rise to or quantifying damages
6. **Significance**: Rate as:
- **Critical**: Directly establishes or disproves an element of a claim or defense
- **Important**: Provides context or corroboration for critical events
- **Background**: Helpful context but not directly dispositive
After the chronology table, provide:
**Gaps Analysis**: Identify periods where no events are documented but where events likely occurred based on the surrounding timeline. Note what types of documents or testimony might fill these gaps.
**Conflicting Facts**: Flag any events where different sources provide inconsistent accounts of dates, participants, or what occurred.
**Key Themes**: Identify 3-5 narrative themes that emerge from the chronology (e.g., "progressive deterioration of the business relationship," "pattern of missed deadlines," "escalating communications").
Here are the source materials:
[PASTE YOUR DOCUMENTS, NOTES, DEPOSITION EXCERPTS, OR FACTUAL NARRATIVE HERE]
Tips
Process documents in batches if you have many. Start with the core agreement and key communications, generate a baseline chronology, then add deposition testimony, financial records, and other materials in subsequent passes.
After the first chronology is generated, ask the AI to identify the 10 most important events for your specific legal theory. This helps you distinguish the critical facts from the background noise.
Use the Gaps Analysis to guide further investigation and discovery. Gaps in the timeline often correspond to documents that exist but have not yet been produced.
Export the chronology to a spreadsheet for ongoing case management. The table format translates directly to Excel or Google Sheets, where you can add columns for trial exhibit numbers, witness assignments, and deposition references.
For each subsequent document you receive during discovery, run it through the prompt and ask: 'Add the events from this document to the existing chronology' with the current chronology pasted in. Build the timeline incrementally throughout the case.
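The spreadsheet-export tip above can be automated. This is a minimal sketch, assuming the AI returned a standard pipe-delimited markdown table; the function name is illustrative, and the output CSV opens directly in Excel or imports into Google Sheets.

```python
import csv
import io

def chronology_table_to_csv(markdown_table: str) -> str:
    """Convert a pipe-delimited markdown chronology table to CSV."""
    rows = []
    for line in markdown_table.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # skip prose surrounding the table
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the markdown separator row (|---|---|...).
        if all(c and set(c) <= {"-", ":"} for c in cells):
            continue
        rows.append(cells)
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()
```

Once in a spreadsheet, you can add your own columns for trial exhibit numbers, witness assignments, and deposition references.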
Cautions
Verify every date and fact in the chronology against the original source documents. AI can misread dates, transpose numbers, or attribute statements to the wrong party.
The 'Significance' ratings reflect general litigation relevance, not your specific case theory. An event rated 'Background' by the AI may be critical under your theory of the case, and vice versa.
Do not use the AI-generated chronology as a trial exhibit or attach it to a filing without thorough verification. It is a working document to guide your preparation, not a finished work product.
Be aware of how document order and emphasis in your input may bias the AI's characterization. If you paste only your client's documents, the chronology will reflect only your client's perspective.
Confidentiality applies with full force. Case documents are among the most sensitive materials in a law practice. Use enterprise AI tools with appropriate data handling agreements.
What This Quick Win Does
A well-constructed chronology is the backbone of every litigation matter. It organizes the factual record, reveals patterns, identifies gaps, and provides the foundation for depositions, motions, and trial preparation. Building one manually from stacks of documents, emails, and notes can take hours or days.
This Quick Win generates a structured, categorized chronology from your raw case materials in about 5 minutes. It does not just list dates — it categorizes events, tracks sources, identifies gaps in the record, flags conflicting facts, and highlights emerging narrative themes. It is the analytical starting point that transforms raw data into case understanding.
How to Use It
Step 1: Gather Your Source Materials
Collect the documents and notes that tell the story of the case:
Contracts and agreements: The documents that define the parties’ rights and obligations
Correspondence: Emails, letters, text messages, and meeting notes between the parties
Internal documents: Memos, board minutes, internal emails that reflect decision-making
Deposition testimony: Excerpts from depositions taken so far
Court documents: Complaints, answers, orders, and other filings
Your own notes: Case intake notes, client interview summaries, investigation findings
Step 2: Organize Your Input
For the best results, organize your input by source rather than trying to sort it chronologically (that is the AI’s job):
--- Source: Lease Agreement dated March 1, 2023 (Ex. A) ---
[Paste text]
--- Source: Email from Jones to Smith, June 15, 2023 ---
[Paste text]
--- Source: Deposition of John Smith, pp. 34-67 ---
[Paste relevant excerpts]
Labeling sources helps the AI correctly populate the “Source Document” column.
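If your documents already live in text files, a small helper can produce these labeled blocks consistently. This is a sketch, not a required step; the function name is an assumption, and the dictionary keys are whatever citation labels you use in your own practice.

```python
def label_sources(sources: dict[str, str]) -> str:
    """Wrap each document in a '--- Source: ... ---' header so the AI
    can reliably populate the Source Document column. Keys are your
    citation labels (e.g., 'Ex. A - Lease Agreement')."""
    blocks = []
    for label, text in sources.items():
        blocks.append(f"--- Source: {label} ---\n{text.strip()}")
    return "\n\n".join(blocks)
```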
Step 3: Process in Batches if Needed
If your case materials exceed the AI’s context window:
Start with the core documents (the agreement, the key breach, the demand letter)
Generate the initial chronology
Add additional materials in subsequent rounds:
“Here is the existing case chronology. Please add the events from the following new documents and integrate them in the correct chronological position. Flag any events that conflict with or modify the existing entries.”
[Paste the existing chronology table and the new documents]
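If you need to split a large set of materials mechanically, a rough character budget is a workable stand-in for the model's context limit. The sketch below groups labeled source blocks into pasteable batches; the function name and the default budget are assumptions, so tune max_chars to the tool you actually use.

```python
def batch_materials(labeled_blocks: list[str], max_chars: int = 60_000) -> list[str]:
    """Group labeled source blocks into batches under a character budget.
    Each batch is one paste; feed them to the AI in successive rounds."""
    batches: list[str] = []
    current: list[str] = []
    size = 0
    for block in labeled_blocks:
        if current and size + len(block) > max_chars:
            batches.append("\n\n".join(current))
            current, size = [], 0
        current.append(block)
        size += len(block) + 2  # account for the joining blank line
    if current:
        batches.append("\n\n".join(current))
    return batches
```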
Step 4: Verify and Enrich
After receiving the chronology:
Check every date against the original source
Verify attributions — make sure statements and actions are attributed to the correct parties
Add context that requires your knowledge of the case (e.g., significance of events that the AI cannot assess without understanding your legal theory)
Fill identified gaps by requesting missing documents in discovery or scheduling additional witness interviews
Step 5: Make It a Living Document
The case chronology should evolve throughout the litigation:
Add new events as documents are produced in discovery
Update significance ratings as your case theory develops
Cross-reference chronology entries with exhibit lists and witness disclosures
Use it to prepare deposition outlines and cross-examination
Advanced Uses
Deposition Preparation
“Based on this chronology, identify the 10 events where [Witness Name] was directly involved. For each, draft 3 deposition questions that would establish the facts, probe for additional details, and test the witness’s version of events.”
Motion Support
“From this chronology, extract the facts that support a motion for summary judgment on the breach of contract claim. Organize them as an undisputed statement of material facts.”
Trial Timeline
“Create a simplified trial timeline from this chronology that includes only Critical-significance events. Format it as a visual narrative suitable for a demonstrative exhibit.”
What This Does Not Replace
A chronology is a tool for organizing facts, not for analyzing them. This Quick Win does not replace:
Your investigation to uncover facts not yet in the documentary record
Witness interviews that provide context, motivation, and disputed facts
Legal analysis of which facts establish which elements of each claim
Trial judgment about which facts to present and in what order for maximum persuasive impact
Source verification — every entry in the chronology must be traceable to a verified source document
IntermediateTech/Privacy 5 min
Draft a Privacy Policy Framework
Generate a comprehensive privacy policy first draft covering data collection, use, storage, and rights -- a structured starting point for any organization that handles personal data.
Prompt
You are a privacy and data protection attorney drafting a privacy policy for an organization. Based on the information I provide, draft a comprehensive privacy policy that includes the following sections:
**1. Introduction & Scope**
- Identity of the data controller (organization name, address, contact information)
- Scope of the policy (what services, websites, and applications it covers)
- Effective date and last updated date
- A plain-language summary of the policy's purpose
**2. Information We Collect**
Organize by collection method:
- **Information you provide directly**: Account registration, forms, purchases, communications, user-generated content
- **Information collected automatically**: Device information, IP addresses, browser type, operating system, usage data, cookies, analytics
- **Information from third parties**: Social login providers, payment processors, advertising partners, public databases
For each category, list the specific data types collected.
**3. How We Use Your Information**
List all purposes, organized by legal basis:
- **To perform our contract with you**: Service delivery, order processing, account management
- **Based on your consent**: Marketing communications, non-essential cookies, personalization
- **For our legitimate interests**: Security, fraud prevention, analytics, service improvement
- **To comply with legal obligations**: Tax reporting, regulatory compliance, law enforcement requests
**4. How We Share Your Information**
- Categories of recipients (service providers, business partners, advertising partners, legal authorities)
- For each category: what data is shared, why, and what safeguards are in place
- Whether data is sold (and if so, clear disclosure; if not, clear statement)
- International data transfers and the legal mechanisms used (Standard Contractual Clauses, adequacy decisions, etc.)
**5. Data Retention**
- Retention periods for each category of data, with the rationale
- Criteria used to determine retention periods
- What happens when data is no longer needed (deletion, anonymization)
**6. Your Rights**
Draft this section to be adaptable for multiple jurisdictions. Include rights under:
- **GDPR** (EU/EEA): Access, rectification, erasure, restriction, portability, objection, automated decision-making
- **CCPA/CPRA** (California): Know, delete, correct, opt-out of sale/sharing, non-discrimination
- **General rights**: How to exercise rights, expected response timeframes, identity verification process, right to lodge a complaint with a supervisory authority
**7. Cookies & Tracking Technologies**
- Types of cookies used (strictly necessary, functional, analytics, advertising)
- How users can manage cookie preferences
- Reference to a separate cookie policy if applicable
**8. Security**
- Overview of technical and organizational security measures (do not over-specify implementation details)
- Incident response and breach notification commitments
**9. Children's Privacy**
- Age threshold and compliance framework (COPPA, GDPR Article 8, or applicable local law)
- Parental consent mechanisms if applicable
**10. Changes to This Policy**
- How changes will be communicated
- Whether continued use constitutes acceptance (and any limitations on this)
**11. Contact Information**
- Data Protection Officer or privacy contact details
- How to submit privacy requests
- Supervisory authority contact information (for GDPR compliance)
**Formatting**: Use clear headings, numbered sections, and plain language. Avoid dense legal jargon. The policy should be understandable by a non-lawyer while remaining legally precise.
**Tone**: Professional, transparent, and user-friendly. This policy should build trust, not obscure practices.
Here is the organization information:
Organization name: [NAME]
Type of business: [e.g., "SaaS platform for project management"]
Website/app: [URLs]
Jurisdictions: [Where the organization operates and where its users are located -- e.g., "US-based company with users in the US, EU, UK, and Canada"]
Data collected: [Describe what data you collect and how -- e.g., "User accounts with name, email, company. Payment processing via Stripe. Google Analytics for website usage. No biometric or health data."]
Data sharing: [Who receives user data -- e.g., "AWS for hosting, Stripe for payments, Google Analytics, Mailchimp for marketing emails. No data sold to third parties."]
Special considerations: [Any industry-specific requirements -- e.g., "Must comply with HIPAA for health data" or "Processes children's data for educational platform"]
Tips
Be specific about your data practices in the prompt. Generic input produces generic output. If you know exactly which analytics tools, payment processors, and hosting providers are used, include them.
After generating the draft, review it against each applicable regulation systematically. Use a compliance checklist: Does it satisfy GDPR Articles 13 and 14? CCPA Section 1798.100? Your industry-specific requirements?
Run a follow-up prompt for jurisdiction-specific addenda: 'Draft a California-specific addendum to this privacy policy that fully addresses CCPA/CPRA requirements, including the specific disclosures required under Cal. Civ. Code 1798.100-1798.199.'
Use plain language throughout. Regulators increasingly penalize privacy policies that are difficult for users to understand. The GDPR specifically requires clear and plain language (Article 12).
Schedule regular reviews. Privacy policies must be updated when data practices change, when new regulations take effect, or at minimum annually. Set a calendar reminder.
Cautions
A privacy policy is a legally binding document. Inaccurate statements about your data practices can result in regulatory enforcement actions, fines, and litigation. Every statement must accurately reflect your actual practices.
AI cannot audit your actual data flows. The policy must be based on a thorough data mapping exercise that identifies what data you collect, where it goes, how long it is kept, and who has access. Do not rely on the AI to know your data practices.
Privacy law varies significantly by jurisdiction and is evolving rapidly. GDPR, CCPA/CPRA, PIPEDA, LGPD, UK GDPR, and dozens of US state laws each have specific requirements. This draft is a starting framework that must be reviewed by a qualified privacy attorney for each applicable jurisdiction.
Do not simply publish an AI-generated privacy policy without legal review. Regulators and plaintiffs' attorneys specifically look for copy-paste policies that do not match actual practices.
Industry-specific regulations (HIPAA, GLBA, FERPA, COPPA) impose additional requirements beyond general privacy law. If your organization operates in a regulated industry, the privacy policy must address those sector-specific requirements.
What This Quick Win Does
Every organization that collects personal data needs a privacy policy. Whether you are advising a startup launching its first product, a law firm updating its own website, or an enterprise undergoing a compliance overhaul, the privacy policy is a foundational document.
Drafting one from scratch requires mapping data flows, understanding applicable regulations across multiple jurisdictions, and translating technical practices into clear, legally precise language. This Quick Win generates a comprehensive first draft in about 5 minutes — a structured framework that you then refine based on the organization’s actual data practices, applicable law, and industry-specific requirements.
How to Use It
Step 1: Conduct a Data Mapping Exercise
Before generating the policy, you need to understand the organization’s actual data practices. Answer these questions:
What personal data is collected? (Names, emails, payment information, device data, location, behavioral data, biometric data, health information, etc.)
How is it collected? (Forms, cookies, APIs, third-party integrations, user-generated content)
Why is it collected? (Service delivery, marketing, analytics, personalization, legal compliance)
Who receives it? (Internal teams, cloud providers, analytics platforms, advertising networks, payment processors, government agencies)
Where is it stored? (Which countries, which cloud providers)
How long is it kept? (Retention periods for each data category)
This data mapping is the essential input. The AI generates the policy structure; your data mapping provides the substance.
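If it helps to keep the mapping organized, the six questions above can be captured as a simple structured inventory, one record per data category. This is a minimal sketch, not a regulatory template; the field names are illustrative and should be adapted to your own data-mapping format:

```python
# Illustrative data-inventory records -- one per category of personal data.
# Field names are hypothetical; adapt them to your own mapping template.
data_inventory = [
    {
        "category": "account data",
        "fields": ["name", "email", "company"],   # what is collected
        "collected_via": "signup form",           # how
        "purpose": "service delivery",            # why
        "recipients": ["AWS (hosting)", "internal support team"],  # who receives it
        "storage_location": "US (AWS us-east-1)", # where it is stored
        "retention": "life of account + 30 days", # how long it is kept
    },
    {
        "category": "payment data",
        "fields": ["card token", "billing address"],
        "collected_via": "Stripe checkout",
        "purpose": "billing",
        "recipients": ["Stripe (payment processor)"],
        "storage_location": "Stripe (not stored locally)",
        "retention": "per Stripe retention policy",
    },
]

# Completeness check: every record should answer all six mapping questions.
REQUIRED = {"category", "fields", "collected_via", "purpose",
            "recipients", "storage_location", "retention"}
for record in data_inventory:
    missing = REQUIRED - set(record)
    assert not missing, f"incomplete mapping for {record['category']}: {missing}"
print("data map complete:", len(data_inventory), "categories")
```

A spreadsheet works just as well; the point is that every category answers every question before the policy is drafted.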
Step 2: Identify Applicable Jurisdictions
Privacy regulation depends on where your organization is based, where your users are located, and what industry you operate in:
EU/EEA users? GDPR applies
California users? CCPA/CPRA applies
Canadian users? PIPEDA applies
Brazilian users? LGPD applies
UK users? UK GDPR applies
Health data? HIPAA may apply (US)
Financial data? GLBA may apply (US)
Children’s data? COPPA (US), GDPR Article 8 (EU), or equivalent local law applies
Employees? Employment-specific privacy rules apply in many jurisdictions
List all applicable jurisdictions in the prompt so the AI can include the relevant provisions.
Step 3: Generate the Draft
Fill in the organization information in the prompt and submit. The AI will produce a multi-section privacy policy covering all standard areas.
Step 4: Review and Customize
This is where legal expertise is essential. For each section:
Verify accuracy: Does every statement match the organization’s actual practices? Delete anything that is aspirational rather than actual.
Check completeness: Does the policy include all disclosures required by each applicable regulation? Use a jurisdiction-specific checklist.
Add specificity: Replace generic language with the organization’s actual vendors, retention periods, and contact information.
Test readability: The policy should be understandable by a non-lawyer. If a section requires a law degree to parse, simplify it.
Add jurisdiction-specific addenda: Some organizations create a base policy with jurisdiction-specific supplements (e.g., “Additional Information for California Residents”).
Step 5: Implement Supporting Mechanisms
A privacy policy is only effective if it is supported by operational processes:
Cookie consent mechanism: A compliant cookie banner or consent management platform
Data subject request process: A system for receiving, verifying, and responding to privacy rights requests within required timeframes
Consent records: Documentation of when and how consent was obtained
Breach notification process: Procedures for detecting, assessing, and reporting data breaches within regulatory timeframes
Regular review: A process for updating the policy when practices change
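Consent records in particular lend themselves to a structured log. The sketch below shows one hypothetical minimal entry format; the field names are illustrative, not a GDPR-mandated schema, and a real system would persist these records durably:

```python
from datetime import datetime, timezone

# Hypothetical minimal consent-log entry: who consented, to what, when,
# via which mechanism, and under which version of the policy they saw.
def record_consent(user_id, purpose, mechanism, policy_version):
    return {
        "user_id": user_id,
        "purpose": purpose,                # e.g., "marketing emails"
        "mechanism": mechanism,            # e.g., "unticked checkbox on signup"
        "policy_version": policy_version,  # ties consent to the policy text shown
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "withdrawn": None,                 # set to a timestamp on withdrawal
    }

entry = record_consent("u-1842", "marketing emails", "signup checkbox", "v3.1")
print(entry["purpose"], entry["policy_version"])
```

Recording the policy version alongside each consent lets you prove, later, exactly what the user agreed to.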
Jurisdiction-Specific Follow-Up Prompts
After generating the base policy, use these follow-up prompts for jurisdiction-specific compliance:
GDPR Compliance Check:
“Review this privacy policy against GDPR Articles 12-14. Identify any required disclosures that are missing or insufficient, and draft the additional language needed.”
CCPA/CPRA Compliance Check:
“Review this privacy policy against CCPA/CPRA requirements. Does it include the required ‘Your California Privacy Rights’ section? Does it address the right to opt out of sale/sharing? Draft any missing language.”
COPPA Compliance (if applicable):
“This service is directed to children under 13. Review the privacy policy for COPPA compliance. Add parental consent mechanisms, data minimization commitments, and specific children’s privacy protections.”
What This Does Not Replace
An AI-generated privacy policy is a structural framework. It does not replace:
A data mapping exercise that documents actual data practices — the policy must reflect reality, not aspirations
Legal review by a qualified privacy attorney who understands the applicable regulations, enforcement trends, and industry standards
Regulatory expertise in jurisdiction-specific requirements that may not be captured by a general-purpose prompt
Ongoing compliance — the policy is one piece of a privacy program that includes training, technical measures, vendor management, and incident response
User experience design for consent flows, preference centers, and data subject request portals that make the policy actionable
AdvancedLitigation 8 min
Build a Cross-Examination Outline from a Deposition
Transform a deposition transcript into a structured cross-examination outline that identifies contradictions, weaknesses, and impeachment opportunities -- organized by topic with pinpoint page-line citations.
Prompt
You are a senior trial attorney preparing for cross-examination. I will provide excerpts from a deposition transcript. Your task is to produce a structured cross-examination outline organized for maximum effectiveness at trial.
**For each topic area, provide:**
1. **Topic heading** (e.g., "Timeline Contradictions," "Prior Inconsistent Statements," "Bias and Motive")
2. **Key admissions obtained** — List the most important admissions the witness made during the deposition, with page:line citations formatted as (p. XX:LL).
3. **Contradictions and inconsistencies** — Identify any statements that contradict:
- Other testimony in the same deposition
- Known documentary evidence
- Common sense or physical plausibility
- The witness's own prior statements (if referenced in the transcript)
For each, quote both contradictory statements, each with its citation.
4. **Impeachment material** — For each potential impeachment point:
- Quote the deposition testimony to be used (with page:line)
- Identify what it contradicts
- Draft 3-5 leading cross-examination questions that lock the witness into the deposition testimony before revealing the contradiction
- Format questions as short, single-fact, leading questions (the "one new fact per question" method)
5. **Areas of evasion** — Flag testimony where the witness was evasive, claimed lack of memory, or gave non-responsive answers. For each, suggest follow-up questions designed to pin down the witness.
6. **Concessions to obtain** — Identify facts the witness is likely to concede on cross-examination based on their deposition testimony. Draft the leading questions to obtain each concession.
**Formatting requirements:**
- Organize topics in recommended examination order (usually: establish favorable facts first, then contradictions, then impeachment, then bias/motive)
- Every reference to testimony must include (p. XX:LL) citation
- Cross-examination questions must be leading (yes/no format)
- Each question should contain only one new fact
- Flag any areas where you would recommend a demonstrative exhibit or document to use alongside the questions
**Case context:**
Case type: [e.g., "Personal injury -- slip and fall"]
Witness role: [e.g., "Defendant's store manager"]
Key issues for cross: [e.g., "Knowledge of the hazard, failure to inspect, prior incidents"]
Your theory of the case: [Brief statement of what you intend to prove]
**Deposition excerpts:**
[Paste relevant deposition transcript excerpts here]
Tips
You do not need to paste the entire transcript. Select the 5-15 most relevant pages. AI works better with focused input than with hundreds of pages of testimony.
After generating the outline, run a follow-up: 'Now review this outline from the perspective of the opposing attorney. What redirect questions would you ask to rehabilitate this witness, and how should I anticipate those on cross?' This adversarial review strengthens your preparation.
Use the 'Areas of evasion' section to prepare for a witness who will be more evasive at trial than at deposition. Draft tighter questions that leave no room for narrative answers.
For expert witnesses, add to the prompt: 'Also identify: (a) the limits of the expert's methodology, (b) facts the expert did not consider, (c) alternative interpretations of the data the expert relied on, and (d) any concessions about the reliability or applicability of the expert's opinions.'
Consider generating a separate 'Chapter Method' outline where each topic is a self-contained chapter that can be reordered at trial depending on how the examination unfolds.
Cautions
Deposition transcripts may contain confidential information, attorney-client communications marked on the record, or sealed material. Before uploading any transcript to an AI tool, verify that your jurisdiction's ethics rules and any protective orders permit it. Several bar associations have warned against uploading client materials to AI tools without informed consent.
AI cannot assess witness demeanor, credibility signals, or courtroom dynamics. The outline is a structural tool -- your trial instincts must guide the actual examination.
Verify every page:line citation against the actual transcript. AI may hallucinate citations or attribute testimony to the wrong portion of the transcript. An incorrect citation used at trial damages your credibility with the jury and the judge.
Leading questions must comply with your jurisdiction's evidence rules. Some jurisdictions restrict the scope of cross-examination to matters raised on direct (the 'scope rule' under FRE 611(b)); others follow the 'wide-open' rule. Know your jurisdiction.
Do not rely on AI to identify all impeachment opportunities. Review the transcript yourself for tone, hesitation, and context that AI cannot capture from text alone.
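Part of the citation-verification caution above can be automated. The sketch below assumes the (p. XX:LL) format the prompt specifies and a hypothetical page index built from your transcript; adjust the regex to your court reporter's conventions. It only catches citations to pages or lines that do not exist -- it cannot catch testimony attributed to the wrong passage, so it supplements rather than replaces reading the transcript:

```python
import re

# Extract every (p. XX:LL) citation from the AI-generated outline and flag
# any that point outside the transcript. 'transcript_pages' is a hypothetical
# index: {page_number: last_line_number_on_that_page}.
CITATION = re.compile(r"\(p\.\s*(\d+):(\d+)\)")

def check_citations(outline_text, transcript_pages):
    """Return (page, line) citations that do not exist in the transcript."""
    bad = []
    for page, line in CITATION.findall(outline_text):
        page, line = int(page), int(line)
        if page not in transcript_pages or line > transcript_pages[page]:
            bad.append((page, line))
    return bad

outline = "The witness admitted notice (p. 42:17) but later denied it (p. 99:05)."
pages = {42: 25, 43: 25}               # transcript has pages 42-43, 25 lines each
print(check_citations(outline, pages))  # page 99 does not exist -> flagged
```

Every citation that survives this check still needs a human read against the actual testimony.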
What This Quick Win Does
Cross-examination preparation is one of the most intellectually demanding tasks in trial practice. A great cross-examination is not improvised — it is built from careful analysis of the witness’s prior testimony, identification of every admission, contradiction, and impeachment opportunity, and organization into a structure that tells a story to the jury.
This Quick Win transforms raw deposition testimony into a trial-ready cross-examination outline. It identifies the points the witness has already conceded, the contradictions you can exploit, and the questions you need to lock the witness into their prior testimony before springing the trap.
How to Use It
Step 1: Select and Prepare Transcript Excerpts
Do not paste the entire deposition. Select the portions most relevant to your cross-examination themes:
Testimony about the key disputed facts
Any moments where the witness contradicted themselves
Testimony that conflicts with documentary evidence
Areas where the witness was evasive or claimed lack of memory
Testimony about bias, motive, or interest in the outcome
Copy these excerpts with their page and line numbers intact. The AI needs the citations to reference them accurately.
Step 2: Define Your Cross-Examination Goals
In the “Key issues for cross” field, be specific about what you need to accomplish:
What facts do you need this witness to concede?
What credibility issues do you want to expose?
What narrative does this cross-examination serve in your overall trial story?
Step 3: Generate and Refine
Review the output as you would a senior associate’s draft:
Verify citations — Check every page:line reference against the actual transcript
Test the question sequences — Read the leading questions aloud. Do they flow logically? Does each build on the last?
Cut weak points — A focused cross on 3-4 strong themes beats a scattered cross on 10 marginal ones
Add your instincts — You know things about this witness that the AI does not. Add questions based on your courtroom experience and case knowledge
Step 4: Build the Examination Order
The AI suggests a topic order, but consider your trial strategy:
Primacy: Start with your strongest material (jurors remember beginnings)
Recency: End with your most dramatic point (jurors remember endings)
Building blocks: Get concessions before using them for contradictions
Flexibility: The “Chapter Method” allows you to reorder topics at trial based on what happens on direct examination
IntermediateCorporate 15 min
Build an M&A Diligence Issue List from a Data Room
Turn a synthesized data room review into a categorized, deal-impact-tiered issue list -- covering regulatory, IP, employment, financial, and change-of-control risks -- ready to hand to the deal team.
Prompt
You are a senior M&A attorney conducting due diligence on an acquisition target. I will provide a summary of documents reviewed from the target's data room (or excerpts from the documents themselves). Your task is to produce a structured diligence issue list.
For each issue identified, categorize it under one of these five tracks:
- **Regulatory & Compliance** (licenses, permits, government approvals, environmental, sanctions)
- **Intellectual Property** (ownership gaps, third-party licenses, open-source exposure, employee IP assignments)
- **Employment & Benefits** (key-person risk, change-of-control clauses, WARN Act exposure, benefits liabilities)
- **Financial & Tax** (undisclosed liabilities, earn-out disputes, tax liens, accounting irregularities)
- **Change of Control** (consent requirements, anti-assignment clauses, customer/supplier agreements triggered by the deal)
For each issue, provide:
1. **Issue title** (one clear sentence)
2. **Track** (category from the list above)
3. **Deal-Impact Tier**:
- 🔴 Deal-Breaker — would prevent or materially restructure the transaction
- 🟠 Material — requires resolution before or at closing; affects price or indemnification
- 🟡 Minor — should be disclosed and addressed but unlikely to affect deal terms
- ⚪ Informational — noted for post-closing integration planning
4. **Supporting Documents** (cite the document title or data room folder where this issue arises)
5. **Recommended Action** (1-2 sentences: what must be done -- request further documentation, obtain consent, negotiate rep & warranty, escrow holdback, etc.)
After the issue list, add a brief **Summary for Deal Counsel** (3-5 sentences) highlighting the top three risks and any items requiring immediate escalation.
Here is the data room summary / document excerpts:
Transaction: [DESCRIBE THE DEAL -- e.g., "Acquisition of Acme Corp by Buyer Inc, all-stock deal valued at $120M"]
Target industry: [e.g., "SaaS / fintech"]
Data room contents reviewed: [LIST DOCUMENT CATEGORIES REVIEWED -- e.g., "corporate records, IP assignments, customer contracts top-20, employment agreements for C-suite, last 3 years audited financials, material licenses"]
Key findings from review: [PASTE YOUR DILIGENCE NOTES OR DOCUMENT EXCERPTS HERE]
Tips
If using NotebookLM, upload the actual data room documents as sources and ask the AI to surface issues by track. NotebookLM's citation feature lets you verify exactly which document triggered each flag.
Run the prompt a second time with the instruction 'Focus only on change-of-control triggers in customer and supplier contracts' to get a deeper pass on consent requirements.
Ask a follow-up: 'Which of the Material issues would typically be addressed through a rep and warranty insurance policy versus a specific indemnity?' This helps prioritize the deal structure conversation.
Cross-reference the AI issue list against your firm's standard M&A diligence checklist to make sure no categories were missed in your data room review.
For the Deal-Breaker items, immediately draft a diligence inquiry letter requesting the missing or clarifying documents before the issue list goes to the client.
Cautions
AI cannot review documents it has not been given. This prompt produces output only as good as the summaries or excerpts you provide. Incomplete inputs yield an incomplete issue list -- not a clean bill of health.
Do not paste confidential target-company documents into consumer AI tools. Use enterprise-grade tools (ChatGPT Enterprise, Claude for Work, or a self-hosted API deployment) with appropriate DPAs and NDA coverage for the deal.
AI may misclassify the severity of an issue because it lacks jurisdiction-specific knowledge. An employment classification issue that is Minor in one state can be a Material liability in California. Apply your own legal judgment to every tier assignment.
The AI will not catch issues that require cross-referencing multiple documents (e.g., a license restriction buried in Exhibit C that conflicts with a rep in the purchase agreement). Human review of source documents remains essential.
This output is a working draft for attorney use -- it is not a deliverable to send directly to the client or the other side without attorney review, expansion, and verification.
What This Quick Win Does
A thorough M&A data room review can involve hundreds of documents across a dozen functional tracks. The bottleneck is not reading the documents — it is organizing findings into a coherent issue list that deal counsel, the client, and the deal team can act on. This Quick Win takes your diligence notes or document excerpts and produces a categorized, deal-impact-tiered issue list in one structured AI pass.
The output is not a final diligence memo. It is an organized working draft that gives you a head start on the most important synthesis task in any deal: separating the deal-breakers from the noise.
How to Use It
Step 1: Prepare Your Inputs
Before running the prompt, consolidate your diligence notes. You do not need to have finished all document review — you can run this prompt iteratively by track. At minimum, gather:
A one-line deal description (buyer, target, deal type, approximate value)
The target’s industry (affects which regulatory and IP issues are most likely)
A list of the data room folders and document categories you have reviewed
Your raw notes or key excerpts from those documents
If using a tool with file upload (Claude with file upload, or NotebookLM), you can attach the actual documents rather than pasting summaries.
Step 2: Open Your AI Tool
For M&A diligence, the confidentiality stakes are high. Use only enterprise or API-tier tools:
ChatGPT Enterprise or Claude for Work for text-based input
NotebookLM (Google Workspace version) for multi-document synthesis with citations
Any self-hosted model deployment covered by your firm’s data security policy
Step 3: Paste the Prompt and Your Diligence Notes
Copy the prompt above. Fill in the bracketed fields with your deal description and diligence findings. If your notes are lengthy, paste them directly — the prompt instructs the AI to organize them, not to invent issues from thin air.
Step 4: Review and Tier the Output
Read the issue list critically:
Verify every tier assignment. The AI applies general deal norms; you apply jurisdiction and industry knowledge. Downgrade or upgrade tiers as warranted.
Check source attribution. If the AI cites a document you did not provide, it has hallucinated a reference — delete it.
Add missing issues. The AI can only flag what it was given. Supplement with issues you know exist from prior deals in this sector.
Step 5: Iterate by Track
Run focused follow-up passes for high-risk tracks:
“Go deeper on the change-of-control triggers. List every contract provision that requires third-party consent and identify whether consent can be obtained pre-closing.”
“For the IP track issues, identify which ones would be covered by a rep and warranty insurance policy and which would require a specific indemnity.”
“Draft three due diligence inquiry questions I should send to target’s counsel for the top Material issues.”
Why This Works
M&A diligence is a classification problem: hundreds of facts must be sorted by category, severity, and required action. Large language models handle classification well when given clear taxonomies and sufficient context. The five-track structure and four-tier impact rating give the AI a precise output schema to follow, which produces a result that maps directly onto the deal team’s workflow — rather than a freeform list that requires further organization.
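Because the taxonomy is fixed, the AI's output can even be machine-checked before anyone reads it. The sketch below uses the five tracks and four tiers from the prompt; the record field names are illustrative, not part of the prompt:

```python
# The five tracks and four tiers from the prompt, as a checkable schema.
TRACKS = {"Regulatory & Compliance", "Intellectual Property",
          "Employment & Benefits", "Financial & Tax", "Change of Control"}
TIERS = {"Deal-Breaker", "Material", "Minor", "Informational"}

def validate_issue(issue):
    """Reject AI output rows that fall outside the taxonomy or cite no source."""
    assert issue["track"] in TRACKS, f"unknown track: {issue['track']}"
    assert issue["tier"] in TIERS, f"unknown tier: {issue['tier']}"
    assert issue["supporting_docs"], "no source document cited -- possible hallucination"
    return True

issue = {
    "title": "Top-3 customer contract requires consent to assignment",
    "track": "Change of Control",
    "tier": "Material",
    "supporting_docs": ["Data room 4.2 / Customer MSA - BigCo"],
    "action": "Obtain consent pre-signing or negotiate a closing condition",
}
print(validate_issue(issue))
```

An issue with an empty supporting-documents field is exactly the hallucinated reference the Cautions section tells you to delete.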
What This Does Not Replace
Document review itself. The AI processes what you give it. It cannot read documents sitting in the data room that you have not reviewed or uploaded.
Judgment on deal structure. Whether a Material issue becomes a price chip, an escrow holdback, a rep and warranty, or a walk-away is a strategic decision that requires attorney and client judgment.
Regulatory analysis. FDI screening, antitrust pre-merger thresholds, and sector-specific approvals require specialized counsel, not an AI issue list.
Privilege protection. Ensure your use of AI tools preserves work-product protection over the diligence analysis. Consult your firm’s AI use policy before uploading privileged documents.
IntermediateCorporate 8 min
Draft a Board Memo Summarizing a Contract Risk Matrix
Convert a contract risk matrix or portfolio review into a polished, board-ready two-page memo -- executive summary, top-5 risks ranked, recommended actions, and a mitigation timeline -- written for directors, not lawyers.
Prompt
You are a senior corporate counsel drafting a memorandum to the Board of Directors. I will provide a contract risk matrix or a summary of contract portfolio findings. Your task is to convert this into a clear, executive-level board memo that directors can read and act on in 10 minutes.
Structure the memo as follows:
1. **Header**
- To: Board of Directors, [COMPANY NAME]
- From: [ATTORNEY NAME], [TITLE]
- Date: [DATE]
- Re: Contract Risk Review -- [REPORTING PERIOD or PORTFOLIO DESCRIPTION]
- Confidential: Attorney-Client Privileged
2. **Executive Summary** (1 short paragraph, 4-6 sentences)
Summarize what was reviewed, the overall risk level of the portfolio, the single most important finding, and the board action required (if any).
3. **Top 5 Risks** (ranked table)
Present the five most significant risks as a ranked list. For each risk:
- **Risk name** (plain English, no legal jargon)
- **Which contracts or counterparties are affected**
- **Business impact** (financial exposure, operational disruption, reputational harm -- give specific dollar ranges or percentages where available)
- **Likelihood** (High / Medium / Low based on the information provided)
- **Current status** (e.g., "Unmitigated," "Partially mitigated -- pending renegotiation," "Mitigated -- amended Q1 2024")
4. **Recommended Actions** (numbered, owner assigned)
For each top risk, state the recommended action in one clear sentence. Assign an owner (Legal, Finance, Operations, CEO, Board approval required). Use plain language -- no Latin, no section references.
5. **Mitigation Timeline**
Present a simple three-column table: Action | Owner | Target Date. Separate into three horizons:
- Immediate (0-30 days)
- Near-term (31-90 days)
- Ongoing / Annual
6. **Attachments Referenced**
List the underlying risk matrix or contract review documents that support this memo.
Tone: Plain English. Write for sophisticated business people, not lawyers. No legalese. No passive voice. Use active verbs. Keep the memo to two pages when printed.
Here is the contract risk matrix / portfolio summary:
Company: [COMPANY NAME]
Reporting period: [e.g., "Q1 2025 contract portfolio review -- 47 contracts reviewed"]
Risk matrix or summary: [PASTE YOUR RISK MATRIX DATA OR FINDINGS HERE]
Key stakeholders: [e.g., "General Counsel presenting to Audit Committee"]
Any board action required: [e.g., "Board approval needed for contract restructuring budget of $200K"]
Tips
Paste your risk matrix as a table -- most AI tools handle markdown or CSV tables well. If your matrix is in Excel, copy the relevant rows and paste as tab-separated text.
After generating the memo, ask the AI: 'Rewrite the Executive Summary as if you are a CFO presenting this to investors. What would you emphasize differently?' This stress-tests whether the financial exposure is communicated clearly.
Use a follow-up prompt to convert the Recommended Actions section into a legal project plan: 'Turn the recommended actions into a project checklist with sub-tasks for a paralegal to execute.'
If the board meeting has a fixed agenda slot (e.g., 15 minutes), add that constraint to the prompt: 'This memo will be presented verbally in 15 minutes. Add a one-paragraph talking-points section at the top for the presenter.'
Check whether your jurisdiction or listing rules require specific board-level disclosure of material contract risks (e.g., SEC Item 1A for public companies). The AI will not know your reporting obligations.
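The first tip above, pasting Excel rows as tab-separated text, can be taken one step further by converting the rows to a markdown table before pasting. A minimal sketch; the column names are illustrative:

```python
# Convert rows copied from Excel (tab-separated) into a markdown table,
# which most AI chat tools parse cleanly. Columns here are illustrative.
tsv = """Risk\tContracts affected\tExposure\tLikelihood
Auto-renewal lock-in\t12 vendor MSAs\t$450K/yr\tHigh
Missing liability cap\tAcme supply agreement\tUncapped\tMedium"""

def tsv_to_markdown(text):
    rows = [line.split("\t") for line in text.strip().splitlines()]
    header, body = rows[0], rows[1:]
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(lines)

print(tsv_to_markdown(tsv))
```

Either format works; the markdown version just makes it easier to spot a column that shifted during the copy-paste.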
Cautions
All dollar figures, percentages, and contract terms in the memo must come from your actual risk matrix. AI can invent or silently alter numbers, and it may re-characterize a risk as higher or lower than your source data supports. Verify every figure and every risk description against the underlying contracts.
Board memos are governance documents. Mislabeling a risk tier (e.g., calling a High risk 'Medium') can create liability for the attorney and the company if that risk later materializes. Apply independent legal judgment to every tier before the memo goes to the board.
Do not input confidential contract data into consumer AI tools. Use enterprise tools with appropriate data handling agreements. Board communications carry heightened confidentiality obligations.
Mark every draft clearly as 'DRAFT -- Attorney-Client Privileged.' AI-generated drafts have been inadvertently shared as final documents. Implement a review gate before any board distribution.
AI does not know your company's risk tolerance, board composition, or prior board decisions. The Recommended Actions section must be calibrated by you to match the company's actual risk appetite and governance norms.
What This Quick Win Does
Contract risk matrices are attorney work product — dense, technical, and written for lawyers. Board members are business executives who need the same information translated into plain English, ranked by business impact, and paired with a clear action plan. Manually converting a risk matrix into a board-ready memo typically takes 2-4 hours. This Quick Win does it in 8 minutes.
The output is a two-page memo structured around what directors actually need: the one-sentence headline, the five risks that matter most, who owns each one, and when each must be addressed. It is written for people who did not go to law school and do not want to.
How to Use It
Step 1: Prepare Your Risk Matrix
Before running the prompt, make sure your underlying risk matrix is organized enough to hand to the AI. Gather:
The company name and the scope of the portfolio review (how many contracts, what period)
Your existing risk matrix, ranked findings, or key takeaways in text or table form
Any specific dollar exposures, contract values, or percentage impacts your review identified
Whether board action is actually required (approval, ratification, budget authorization)
If your risk matrix is still in draft, that is fine — the AI will help you structure it, but your job after the prompt is to verify every number and characterization against the source documents.
Step 2: Open Your AI Tool
Use ChatGPT or Claude. For documents containing non-public company information, use enterprise versions with data handling agreements in place. This memo will carry attorney-client privilege — treat the inputs accordingly.
Step 3: Paste the Prompt and Your Data
Fill in the bracketed fields with your company name, reporting period, and risk matrix data. Paste tabular data directly — most AI tools handle markdown tables and comma-separated values without difficulty.
Step 4: Review the Output for Board-Readiness
Read the draft as a director would, not as a lawyer. Ask yourself:
Is the Executive Summary the right headline? A board wants to know: how worried should we be, and what do we need to do?
Are the five risks correctly ranked by business impact (not by legal complexity)? A high-probability, low-dollar risk may rank below a low-probability, existential one.
Are the Recommended Actions specific and actionable? Vague recommendations (“monitor the situation”) frustrate boards. Each action should have an owner and a date.
Is the timeline realistic? The AI generates target dates based on your input. Adjust them to reflect your actual workflow and the board’s next meeting date.
Step 5: Iterate for Clarity
If any section reads as too legalistic, push the AI to simplify:
“Rewrite the Risk #2 description without any legal terms. Explain it as if you are briefing the CFO who has never seen this contract.”
“The Recommended Actions section is too vague. Make each action a specific, measurable step with a named responsible person.”
“Add a one-paragraph ‘What We Are Not Reporting’ section that explains what risks were reviewed and found acceptable, so the board understands the full scope.”
Why This Works
The transformation from legal risk matrix to board memo is a well-defined writing task: a fixed input structure (risk matrix) converted to a fixed output structure (executive memo with ranked risks and action timeline). AI excels at structured transformation tasks. The prompt supplies the output schema explicitly, which prevents the AI from defaulting to a legal memo format and keeps it in the plain-English register that boards expect.
What This Does Not Replace
Legal judgment about whether a risk is correctly tiered. The AI ranks based on what you tell it — it does not know what a High risk means for your specific company, industry, or board.
Governance knowledge about what requires board approval versus management authority under your company’s bylaws and delegated authority matrix.
Disclosure obligations for public companies: if any risk is material, SEC rules may require disclosure beyond the board memo. The AI does not know your reporting obligations.
Attorney review before distribution. Every board memo, however generated, must be reviewed and approved by counsel before it goes to directors. AI-generated drafts are starting points, not final deliverables.
AdvancedCorporate 20 min
Develop a SaaS MSA Negotiation Strategy
Turn a vendor-form SaaS Master Services Agreement into a structured negotiation strategy -- positions to push, acceptable fallbacks, and walk-away lines -- covering all nine high-stakes clauses.
Prompt
You are a senior technology transactions attorney representing an enterprise customer negotiating a SaaS Master Services Agreement presented by the vendor. I will provide the full text of the MSA (or key provisions). Your task is to produce a structured negotiation strategy document.
For each of the nine clauses listed below, provide:
1. **Current position** (brief summary of what the vendor draft says)
2. **Customer's opening ask** (what to request first -- the best reasonable position for the customer)
3. **Acceptable fallback** (what the customer can live with if the opening ask is rejected)
4. **Walk-away condition** (the specific language or outcome that would require escalation or deal refusal)
5. **Negotiation rationale** (2-3 sentences explaining why this clause matters and what leverage the customer has)
6. **Market standard** (brief statement of what is typical in comparable SaaS enterprise agreements)
Cover these nine clauses:
A. **Liability Cap** -- mutual vs. one-sided caps, cap multiples, super-cap for IP indemnity and data breach
B. **Indemnification** -- IP infringement, data breach/security incident, mutual vs. one-directional obligations
C. **Data Ownership & Processing** -- who owns customer data, permitted uses by vendor, sub-processor obligations, return/deletion on termination
D. **Security Obligations** -- minimum security standards, audit rights, incident notification timeline, vendor certifications (SOC 2, ISO 27001)
E. **SLA Remedies** -- uptime commitments, service credits as exclusive remedy vs. termination right, measurement methodology
F. **Audit Rights** -- financial audit, security audit, compliance audit; notice periods; vendor cooperation obligations
G. **Termination for Convenience** -- notice periods, data retrieval window, pro-rata refund of prepaid fees
H. **IP in Deliverables** -- ownership of customizations, professional services work product, feedback, derivative works
I. **Residual Rights** -- vendor's right to use general knowledge, skills, ideas retained in unaided memory
After the nine-clause analysis, add:
- **Priority Stack**: Rank the nine clauses from most to least critical for this customer's situation, with a one-sentence justification for each ranking.
- **Package Deal Opportunities**: Identify 2-3 clause pairs where a trade-off makes sense (e.g., "Accept vendor's liability cap multiple in exchange for a super-cap on data breach indemnity").
Here is the information about this negotiation:
Customer description: [e.g., "Mid-size financial services company, 500 employees, regulated by state banking regulators"]
Vendor description: [e.g., "Series B SaaS startup, contract management software, no enterprise customers yet"]
Annual contract value: [e.g., "$180,000/year, 3-year initial term"]
Customer's primary concerns: [e.g., "Data ownership, security standards, uptime, exit rights"]
Vendor MSA key provisions: [PASTE THE RELEVANT MSA SECTIONS HERE -- or describe the key terms if pasting the full document]
Tips
After generating the strategy document, run a second prompt: 'Draft three fallback redlines for the Liability Cap clause based on the fallback positions in the strategy. Format as tracked-changes language.' This converts strategy into actual contract language.
Ask the AI to simulate the vendor's likely counter to your opening position on each clause: 'Play the role of the vendor's counsel and respond to each opening ask with a typical vendor counter-position.' This helps you prepare for the negotiation.
If using Lawra Redline, upload the full MSA and ask it to automatically flag the nine clause areas, then paste those flagged sections into this prompt for deeper strategy.
Adjust the Priority Stack based on your client's industry. For a healthcare company, Data Ownership and Security Obligations typically rank #1 and #2. For a manufacturing company, SLA Remedies and Termination for Convenience often lead.
Use the Package Deal Opportunities section as the opening for your deal call with the vendor. Offering a trade-off signals good faith and often unlocks movement on the clauses you care most about.
Cautions
AI provides general market-standard positions. Whether a particular fallback is acceptable for your client depends on the client's risk tolerance, regulatory environment, and the strategic importance of this vendor relationship. Do not use AI strategy output as a substitute for client counseling.
The AI may not be current on rapidly evolving areas such as AI-specific data use rights, EU AI Act compliance obligations in vendor agreements, or CCPA/CPRA service provider requirements. Verify that the strategy reflects current law in your jurisdiction.
Walk-away conditions are business decisions, not just legal ones. The AI can identify legally significant thresholds -- the client must decide whether they are commercially acceptable. Never tell a client to walk away from a deal based solely on AI output.
Do not paste the full vendor MSA into consumer AI tools without confirming that the vendor's NDA or the negotiation context permits sharing draft documents with third-party AI services. Some NDAs explicitly prohibit this.
AI hallucinations in a negotiation strategy are particularly dangerous -- a fabricated 'market standard' position could cause you to concede something that is actually negotiable. Verify all market-standard characterizations against your firm's contract database, Practical Law, or Bloomberg Law Standard Documents.
What This Quick Win Does
Vendor-form SaaS MSAs are written to protect the vendor — every default position benefits them. Enterprise customers need a systematic strategy covering all the high-stakes clauses before walking into a negotiation call. Preparing this strategy manually, clause by clause, can take 3-5 hours. This Quick Win compresses that preparation into 20 minutes by generating a structured strategy document with opening positions, fallbacks, walk-aways, and package deal opportunities across all nine commercially critical clauses.
The output is a negotiation playbook, not a redlined contract. It tells you what to ask for, what you can accept, and where to hold the line — before the conversation starts.
How to Use It
Step 1: Gather Your Inputs
Before running the prompt, collect three things:
The vendor’s MSA or at minimum the key provisions for each of the nine clauses. The more text you provide, the more specific the strategy.
A clear description of your client: industry, regulatory environment, size, and strategic importance of this vendor relationship.
The client’s stated priorities and non-negotiables. If the client has already told you “data ownership is a deal-breaker for us,” capture that before the AI makes assumptions.
The annual contract value and term length matter too: a $1.8M/year, five-year commitment warrants more aggressive positions than an $18K/year trial agreement.
Step 2: Open Your AI Tool
Use Claude or ChatGPT for this prompt. Claude handles long contracts particularly well due to its large context window. If you have the full MSA, paste it after the nine-clause instructions — Claude can analyze a 30-50 page agreement in a single pass.
If using Lawra Redline, start there to auto-flag the relevant provisions, then bring those flagged excerpts into this prompt for deeper strategy analysis.
Step 3: Paste the Prompt and the MSA
Fill in the bracketed fields with your deal details. Then paste the relevant MSA provisions — or the full agreement if your tool supports it. If the MSA is too long for one context window, run the prompt with a summary of the full agreement, then paste one or two sections at a time for the detailed clause analysis.
Step 4: Review the Strategy Output
Read the strategy document as a negotiator, not as a drafter. Evaluate:
Are the opening asks realistic? The AI targets positions that are commercially reasonable but favorable. Verify they are achievable given the vendor’s size and leverage.
Are the fallbacks actually acceptable to the client? The AI derives fallbacks from market norms. Confirm with the client that their risk tolerance matches the suggested fallback on each clause.
Are the walk-away conditions calibrated correctly? These are the most important outputs. Make sure each walk-away condition reflects genuine deal risk, not theoretical concern.
Does the Priority Stack match the client’s actual priorities? Reorder it if the AI’s ranking does not match your understanding of what the client cares about most.
Step 5: Convert Strategy into Talking Points and Redlines
Once the strategy is confirmed, iterate to produce usable outputs:
“Using the opening asks from the strategy, draft the three most important redlines as contract language in tracked-changes format.”
“Write a negotiation call agenda organized around the Priority Stack, with 2-3 talking points for each clause in order.”
“Based on the Package Deal Opportunities, draft a short negotiation email proposing a first trade-off to the vendor’s counsel.”
Why This Works
SaaS MSA negotiation follows a predictable structure: the same nine clauses appear in nearly every enterprise agreement, with well-established market positions on each side. Because the legal landscape is relatively standardized, AI can produce reliable strategy frameworks when given the vendor’s actual draft language and enough context about the customer. The structured prompt (opening ask / fallback / walk-away / rationale / market standard) forces the output into the format experienced technology transactions attorneys already use — making it immediately actionable rather than requiring reformatting.
What This Does Not Replace
Client counseling on risk tolerance. The AI does not know whether your client would rather pay a higher annual fee to get better data ownership terms, or vice versa.
Regulatory analysis. If the client is regulated (financial services, healthcare, government contractor), the regulatory overlay may require positions more aggressive than AI market standards suggest.
Leverage assessment. Whether you can actually achieve your opening ask depends on the vendor’s pipeline, your client’s strategic value to them, and the competitive alternatives. AI cannot evaluate negotiation leverage.
Drafting verified redlines. The strategy is the starting point. The actual contract language must be drafted and verified by counsel before it goes to the other side.
Post-negotiation documentation. Keeping a clear record of what was conceded and why — for future disputes and renewal negotiations — is attorney work that cannot be delegated to AI.
AdvancedCorporate 25 min
Spot Cross-Border Regulatory Triggers in a Deal
Given a cross-border M&A or commercial deal description, surface the regulatory filings and approvals likely required -- FDI screening, antitrust pre-merger, CFIUS, EU FSR, export controls, sanctions, data transfer regimes -- organized by jurisdiction with confidence-level flagging.
Prompt
You are a senior international corporate attorney advising on a cross-border deal. I will describe the transaction. Your task is to identify the regulatory filings, approvals, and compliance obligations likely triggered by this deal -- organized by jurisdiction and category.
For each regulatory trigger identified, provide:
1. **Trigger name** (plain English -- e.g., "CFIUS national security review," "EU merger notification," "GDPR data transfer impact assessment")
2. **Jurisdiction** (country or bloc)
3. **Regulatory category**:
- FDI Screening (national security / strategic sector review)
- Antitrust / Competition (pre-merger notification or review)
- Sector-Specific Approval (financial services, telecom, media, defense, energy, healthcare)
- Export Controls & Technology Transfer
- Sanctions Screening
- Data Transfer & Privacy
- Other (specify)
4. **Likely applicability**:
- 🔴 High -- threshold or trigger almost certainly met based on the facts
- 🟠 Medium -- threshold may be met; requires further factual investigation
- 🟡 Low -- threshold unlikely met but worth confirming
5. **Key threshold or trigger condition** (1-2 sentences: what specific fact pattern triggers this obligation)
6. **Filing or approval type** (mandatory pre-closing / voluntary / post-closing notification / ongoing compliance)
7. **Approximate timeline** (if known: e.g., "CFIUS voluntary notice: 30-45 day review; can extend to 45-day investigation")
8. **Specialist needed** (yes/no + brief note on what type of specialist -- e.g., "Yes -- CFIUS-specialized D.C. counsel")
9. **Initial action required** (one sentence: what to do right now to begin addressing this trigger)
After the jurisdiction-by-jurisdiction checklist, produce:
- **Critical Path Summary**: List the 3-5 triggers most likely to affect deal timeline or require pre-closing satisfaction. State whether each is a condition to closing.
- **Information Gaps**: List the facts you would need to confirm or investigate to upgrade the confidence level on Medium and Low triggers.
- **Jurisdictions to Rule Out**: For any major regulatory regimes not triggered by this deal (e.g., "CFIUS not applicable because no U.S. nexus"), briefly state why so the deal team can close that item.
Important instructions:
- Flag every trigger as a preliminary screen, not a legal opinion. Note that each jurisdiction requires analysis by qualified local counsel.
- Do not fabricate specific filing thresholds or monetary values -- use ranges or note where thresholds vary and must be verified.
- Where a trigger depends on facts not provided, flag it as an Information Gap rather than marking it Low.
Here is the deal description:
Deal type: [e.g., "Acquisition of 100% of shares of TargetCo by BuyerCo"]
Buyer: [Name, country of incorporation, ultimate beneficial owner nationality, industry sector]
Target: [Name, country of incorporation, industry sector, any government contracts or regulated assets]
Target's operations: [Countries where target has employees, assets, revenues, customers, or data]
Transaction value: [Approximate deal value or revenue of target]
Deal structure: [Share purchase / asset purchase / JV / commercial agreement]
Sensitive sectors involved: [e.g., defense, semiconductors, critical infrastructure, financial services, media, telecom, healthcare data -- or "none identified"]
Data involved: [Types and jurisdictions of personal data held or processed by target]
Known regulatory concerns: [Any filings already identified by the deal team, or "none yet"]
Tips
Run this prompt at the term sheet stage, not at signing. Regulatory timelines (especially CFIUS, EU FSR, and national FDI reviews) can add 3-12 months to a deal. Early identification protects the deal schedule.
After generating the checklist, follow up with: 'For each High trigger, draft a one-paragraph briefing I can send to the relevant specialist counsel to initiate their preliminary assessment.' This saves time in the specialist engagement process.
Use the Information Gaps section to build a targeted information request for the target in diligence. Many Medium triggers can be resolved to High or Ruled Out with a single data point (e.g., target's U.S. revenue, government contract status, or exact data processing locations).
For deals with EU nexus, ask a follow-up: 'Assess whether the EU Foreign Subsidies Regulation notification thresholds are met based on the facts. What financial information do I need from the target to confirm?' The FSR is newer and frequently missed.
Cross-reference the AI output against your firm's regulatory clearance tracker or a current Practical Law cross-border M&A checklist. AI knowledge of specific monetary thresholds may be outdated -- always verify with local counsel or a current regulatory database.
Cautions
This prompt produces a preliminary screening tool, not a legal opinion. Every High or Medium trigger must be reviewed by qualified counsel in the relevant jurisdiction before any filing decision is made. Regulatory requirements change frequently and vary by sector.
AI knowledge of specific filing thresholds, monetary values, and recently enacted FDI screening regimes may be outdated. The EU Foreign Subsidies Regulation, updated CFIUS regulations, and new national FDI laws (UK NSI Act, Australia FIRB, India FDI policy) have all evolved rapidly since 2022. Verify all thresholds with current sources.
Sanctions screening requires real-time database checks against OFAC, EU, UN, and UK consolidated lists -- not AI analysis. Use a dedicated sanctions screening tool (e.g., World-Check, Dow Jones Risk & Compliance) for any sanctions-related due diligence.
Export control analysis (EAR, ITAR, EU Dual-Use Regulation) for technology companies requires specialized counsel. AI can flag that export controls may apply; it cannot perform the technical review of controlled technology classifications.
Do not share non-public deal information with consumer AI tools. Cross-border M&A terms, target identities, and deal structures are highly sensitive. Use enterprise AI tools with appropriate confidentiality protections.
What This Quick Win Does
Cross-border deals carry a hidden timeline risk: the regulatory approvals no one mapped at the term sheet stage. FDI screening, antitrust pre-merger filings, CFIUS reviews, EU Foreign Subsidies Regulation notifications, export control licenses, sanctions checks, and data transfer assessments can each add months to a deal — and missing a mandatory filing can expose the parties to fines, forced divestiture, or voided transactions. This Quick Win runs a structured regulatory trigger screen from a plain-language deal description, producing a jurisdiction-by-jurisdiction checklist with confidence-level flags in 25 minutes.
The output is a preliminary screen, not a legal opinion. Its purpose is to ensure the deal team asks the right questions of the right specialists before a timeline is set and a signing date is announced.
How to Use It
Step 1: Compile the Deal Description
The quality of the trigger screen depends entirely on the completeness of your deal description. Before running the prompt, gather:
Buyer identity: Country of incorporation, ultimate beneficial ownership nationality, industry sector, and any government or sovereign wealth fund ownership stake
Target identity: Country of incorporation, sector, government contracts or regulated assets, any prior national security review history
Geographic footprint: Every country where the target has employees, physical assets, revenues exceeding a meaningful threshold, or customer data
Deal structure: Share deal versus asset deal matters — some FDI regimes are triggered only by share acquisitions; others capture asset purchases above a threshold
Sensitive sector flags: The AI cannot know from the company name whether the target handles ITAR-controlled technology, critical infrastructure, or financial services licenses. You must provide this.
Transaction value: Many antitrust and FDI thresholds are monetary. Include both deal value and target revenue where known.
The more precisely you fill in the deal description, the fewer items will land in the Information Gaps section.
Step 2: Open Your AI Tool
Use Claude or ChatGPT at the enterprise tier. Given the sensitivity of cross-border M&A deal terms, do not use consumer versions. The full deal description you are about to paste is almost certainly subject to confidentiality obligations to both parties.
Step 3: Paste the Prompt and the Deal Description
Fill in all bracketed fields. Where you genuinely do not know a field (e.g., whether the target has U.S. revenues), write “Unknown — to be confirmed in diligence” rather than leaving it blank. This instructs the AI to flag those unknowns as Information Gaps rather than assuming away the risk.
Step 4: Review the Output by Jurisdiction
Work through the checklist jurisdiction by jurisdiction:
Elevate any Low-confidence triggers that involve sectors the AI may not have flagged correctly. If the target handles personal health data, upgrade any privacy-related trigger regardless of the AI’s confidence level.
Cross-check antitrust thresholds. AI may cite outdated monetary thresholds for EU, U.S. HSR, or national competition filings. Confirm current thresholds via the relevant authority’s website or your firm’s regulatory database before ruling any trigger out.
Treat every “Specialist needed: Yes” item immediately. Engage specialist counsel in parallel with the rest of diligence, not after it. Regulatory review timelines cannot be compressed.
Step 5: Build the Action Plan from the Critical Path Summary
The Critical Path Summary identifies which triggers control the deal timeline. Use it to:
“For each item in the Critical Path Summary, draft a one-paragraph scope of engagement for specialist counsel.”
“Convert the Information Gaps list into a targeted diligence information request to the target’s counsel.”
“Based on the Critical Path, propose a realistic regulatory clearance timeline and identify which items must be resolved before signing versus before closing.”
Why This Works
The cross-border regulatory landscape is broad but structurally predictable: a defined set of regimes, each triggered by specific facts (sector, nationality, transaction value, geographic nexus). AI can rapidly map a deal description against this trigger matrix and flag applicability — a task that would otherwise require a lawyer to mentally work through a dozen different regulatory frameworks simultaneously. The structured prompt (seven regulatory categories, three confidence levels, filing type, timeline) forces the AI to produce output that immediately surfaces the questions that need expert analysis rather than generating an undifferentiated wall of regulatory text.
What This Does Not Replace
Legal opinions from local counsel. A preliminary AI screen identifies what to investigate. Mandatory filing decisions, voluntary clearance strategy, and timing elections must come from qualified counsel in each jurisdiction.
Real-time sanctions screening. OFAC, EU, and UN list status changes daily. AI cannot perform sanctions screening — use a dedicated compliance database.
Export control classification reviews. Whether a target’s technology is controlled under EAR, ITAR, or EU Dual-Use regulations requires a technical and legal analysis that AI cannot reliably perform.
Threshold verification. AI knowledge of specific monetary thresholds (HSR, EU merger regulation, national FDI value triggers) may lag behind regulatory updates. Always confirm current thresholds with primary sources before filing or concluding no filing is required.
Strategic clearance advice. Whether to file voluntarily, how to structure the notification, what remedies to offer proactively — these are judgment calls that depend on agency relationships, political context, and deal-specific facts that exceed what AI can assess.
BeginnerLitigation 10 min
Draft Witness Interview Questions from a Complaint
Turn a complaint or fact summary into a structured witness interview outline -- organized by topic with open-ended foundation questions, narrative prompts, document authentication cues, and credibility-testing follow-ups -- so you can conduct a thorough, natural interview without a rigid script.
Prompt
You are an experienced litigation attorney preparing to interview a fact witness. Using the complaint and witness information I provide, produce a structured witness interview outline organized by topic -- not a Q&A script. The attorney will use each topic section as a flexible guide, asking questions in whatever order feels natural during the interview.
**For each topic section, provide:**
1. **Topic heading** (e.g., "Background and Relationship to the Parties," "What the Witness Observed," "Documents the Witness Can Authenticate")
2. **Objective** — One sentence stating what you need to establish or learn in this section.
3. **Opening / foundation questions** — 3-5 open-ended questions to establish the witness's knowledge, position, and relationship to the events. Use "Tell me about..." and "Walk me through..." phrasing to encourage narrative answers.
4. **Narrative-eliciting prompts** — 3-5 questions that invite the witness to tell the story in their own words without leading. Include at least one "What happened next?" and one "Is there anything else you remember about that?"
5. **Drill-down on disputed facts** — For each key disputed issue I identify below, provide 2-3 focused follow-up questions that press for specifics: dates, times, exact words used, who else was present, what the witness saw versus what they inferred.
6. **Document authentication** — If the witness may have authored, received, or has knowledge of relevant documents, draft 2-3 questions to establish foundation for authentication under FRE 901 (or jurisdiction equivalent).
7. **Credibility-testing follow-ups** — 3-4 neutral questions that probe the witness's basis of knowledge, memory limitations, and possible bias, without being accusatory: e.g., "How certain are you about that date?" and "Is there anything that might have affected your recollection?"
**Tone guidance:**
- If the witness is FRIENDLY (client, client's colleague, corroborating witness): rapport-building, conversational, let the witness talk.
- If the witness is ADVERSE or NEUTRAL: careful and precise; avoid giving information; pin down specifics before moving on.
**Inputs:**
Case caption: [CASE NAME AND COURT]
Witness name and role: [e.g., "Jane Smith, former office manager for defendant"]
Witness type: [FRIENDLY / ADVERSE / NEUTRAL]
Summary of complaint allegations: [PASTE KEY PARAGRAPHS OR A 3-5 SENTENCE SUMMARY]
Key disputed facts relevant to this witness: [LIST 3-5 SPECIFIC DISPUTED ISSUES -- e.g., "Whether defendant knew of the defect before the sale"]
Documents the witness may know about: [LIST DOCUMENTS BY NAME OR DESCRIPTION]
Interview setting: [e.g., "Informal phone call," "Formal deposition prep session," "In-person office meeting"]
Tips
Fill in all of the input fields before generating. The more precise your 'key disputed facts' list, the sharper the drill-down questions will be. Vague inputs produce generic questions.
After receiving the outline, run a follow-up: 'Now review this outline from the perspective of opposing counsel. What questions might they ask this witness that I should prepare the witness to handle?' This is especially useful for friendly witnesses in deposition prep.
For adverse witnesses, use the 'credibility-testing follow-ups' section to build the foundation for later impeachment at deposition or trial. A witness who overstates certainty in an informal interview can be effectively impeached later.
Save and annotate the outline after the interview: note which topics the witness was forthcoming on, where they hesitated, and what new facts emerged. This becomes the starting point for deposition prep.
Adjust the 'Interview setting' field to get context-appropriate language. An informal fact-gathering call suits a conversational tone; a formal deposition prep session should include more structured reminders about listening to questions carefully.
Cautions
Do not upload the actual complaint or client documents directly to a public AI tool without confirming your jurisdiction's ethics rules and your firm's data policy permit it. Summarize confidential facts in the input fields instead of pasting verbatim privileged documents.
Witness interview outlines are work product. Treat the AI output as a privileged draft and store it accordingly. Do not share it with the witness.
AI does not know this witness. It cannot assess demeanor, credibility, or the dynamics of the attorney-witness relationship. Use the outline as a framework, not a script -- your instincts during the interview must guide you.
Jurisdiction matters. Some states restrict pre-deposition contact with employees of an opposing party. Know your jurisdiction's rules on witness contact (see Model Rule 4.2) before conducting any interview.
Always have a supervising attorney review the interview outline and strategy before conducting the interview, particularly for adverse witnesses or witnesses who may be represented by counsel.
What This Quick Win Does
Preparing for a witness interview typically means reviewing the complaint, pulling key documents, and writing out a list of questions — a process that can take an hour or more for a case you are still learning. This Quick Win compresses that preparation into 10 minutes by generating a structured, topic-organized interview outline tailored to the specific witness, the disputed facts, and the relationship between the witness and the parties.
The output is not a rigid script. It is a topic-by-topic guide with objectives, suggested question types, and document authentication cues — designed so you can move through the interview naturally, following the witness’s answers rather than mechanically reading questions.
How to Use It
Step 1: Gather Your Inputs
Before opening the AI tool, identify the inputs the prompt requires:
The case caption and court
The witness’s name, role, and whether they are friendly, adverse, or neutral
A 3-5 sentence summary of the key complaint allegations (you can summarize rather than paste the actual complaint)
A list of 3-5 specific disputed facts this witness is likely to have knowledge about
The documents the witness may have authored, received, or be able to authenticate
The interview setting (informal call, formal deposition prep session, or in-person meeting)
The sharper your “key disputed facts” list, the more useful the drill-down questions will be. Spend two minutes writing that list before you open the AI.
Step 2: Open Your AI Tool
Open ChatGPT or Claude in a private or enterprise workspace. If your firm has configured a data-protected instance of either tool, use that. Do not paste verbatim privileged documents or client names into a consumer-grade AI tool without your firm’s authorization.
Step 3: Paste the Completed Prompt
Copy the prompt above and fill in each bracketed field with the specific facts of your case. Submit it and wait for the outline.
Step 4: Review and Annotate the Output
Review the generated outline section by section:
Check the objectives — Does each section’s stated objective match what you actually need from this interview?
Test the questions — Read the open-ended questions aloud. Do they sound natural? Would a real witness understand them?
Add case-specific detail — The AI works from the facts you gave it. Add any questions based on documents or prior statements the AI did not have.
Adjust tone — For a friendly witness in deposition prep, soften aggressive language. For an adverse witness, tighten open-ended questions into more focused probes.
Step 5: Iterate for Specific Gaps
If the outline misses an important topic, prompt the AI to fill the gap:
“Add a section on [specific topic] — the witness may have personal knowledge of [specific event]. Include questions about who was present, what was said, and what documents were created.”
Why This Works
Witness interviews follow a predictable structure regardless of the case type: establish the witness’s background and relationship to the events, get the narrative in the witness’s own words, drill down on disputed facts, authenticate documents, and probe the limits of the witness’s memory and basis of knowledge. AI is well-suited to generating this structure from a fact summary because the structure is consistent and the inputs are text-based.
What AI cannot do is adapt in real time to what the witness actually says. The outline gives you the map; the interview requires you to navigate.
What This Does Not Replace
Your legal judgment about which facts are actually disputed and which witnesses can speak to them
Case strategy — the interview outline serves a theory of the case that only you can formulate
Real-time adaptation — the best interviews follow the witness, not the outline; treat the AI output as a starting point, not a ceiling
Ethics compliance — your obligations under Model Rule 4.2 (no contact with represented parties) and your jurisdiction’s witness-contact rules apply regardless of how the outline was prepared
Supervising attorney review of the interview strategy, particularly for adverse witnesses
AdvancedLitigation 30 min
Build a Daubert Challenge Outline Against an Expert Report
Upload an opposing expert report and generate a structured Daubert (or jurisdiction-equivalent) challenge outline: methodology critique under Rule 702 and the four Daubert factors, data-source reliability problems, qualifications mismatches, ipse-dixit reasoning, and targeted deposition questions to build the exclusion record.
Prompt
You are a senior litigator preparing to challenge an opposing expert under Federal Rule of Evidence 702 and Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), as clarified by Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999), and the 2023 amendments to FRE 702.
I will provide the opposing expert's report. Produce a structured Daubert challenge outline and a companion set of deposition questions designed to develop the exclusion record.
**Part 1 -- Daubert Challenge Outline**
For each challenge basis, state: (a) the legal standard, (b) the specific deficiency in this report, and (c) the strongest argument for exclusion or limitation.
1. **Qualifications** -- Do the expert's training, education, and experience match the specific opinions offered? Identify any gap between the expert's claimed expertise and the subject matter of each opinion.
2. **Testability (Daubert Factor 1)** -- Can the methodology be -- and has it been -- tested? Is the expert's method falsifiable? Identify any opinion resting on a process that cannot be independently evaluated or replicated.
3. **Peer review and publication (Daubert Factor 2)** -- Has the theory or technique been subjected to peer review and publication? Identify opinions resting on unpublished, proprietary, or non-peer-reviewed methodology.
4. **Known or potential error rate (Daubert Factor 3)** -- What is the known or potential error rate of the method? Is there a margin of error stated in the report? Identify any opinion where the error rate is unknown, unstated, or so large as to render the opinion unreliable.
5. **General acceptance (Daubert Factor 4)** -- Is the methodology generally accepted in the relevant scientific or technical community? Identify opinions that depart from accepted practice without explanation.
6. **Fit to the facts (relevance / "fit" requirement)** -- Do the opinions actually assist the trier of fact on a disputed issue in this case? Identify any opinion that is too general, speculative, or disconnected from the specific facts to be helpful.
7. **Data reliability** -- What data, documents, and assumptions did the expert rely on? Identify: (a) data the expert did not consider that a reliable expert in the field would have reviewed; (b) data that was selectively cited; (c) assumptions that are unsupported or contrary to the evidence.
8. **Ipse dixit reasoning** -- Identify conclusions where the expert simply asserts a result without methodological explanation -- what courts have called "I said so" reasoning that FRE 702 and Joiner prohibit.
9. **Basis for each opinion vs. sufficient facts or data** -- Under the 2023 amendment to FRE 702, the proponent must show by a preponderance of the evidence that the expert's opinions reflect sufficient facts or data and reliable principles. Flag opinions that would fail this standard.
**Part 2 -- Motion Outline**
Draft a skeleton motion to exclude or limit the expert, organized as:
- Introduction (one paragraph summary of the challenge)
- Legal standard (FRE 702 / Daubert / 2023 amendment)
- Argument sections tracking Part 1 above (use subheadings A, B, C...)
- Relief requested (full exclusion vs. limitation to specific opinions)
**Part 3 -- Targeted Deposition Questions**
For each challenge basis identified above, provide 4-6 deposition questions designed to develop the exclusion record. Questions should:
- Be precise and single-subject (one concept per question)
- Commit the expert to the methodology before exposing its limits
- Draw out admissions about what the expert did NOT do, did NOT review, or cannot quantify
- Avoid arguing with the expert -- build the record, do not cross-examine yet
**Inputs:**
Case caption: [CASE NAME AND COURT]
Jurisdiction: [FEDERAL / STATE -- if state, identify whether Daubert or Frye state]
Expert name and stated qualifications: [COPY FROM REPORT COVER PAGE]
Opinions to challenge (list each numbered opinion or conclusion): [LIST OR PASTE FROM REPORT]
Key case facts the expert should have considered: [LIST FACTS YOU BELIEVE WERE IGNORED OR MISCHARACTERIZED]
Your retained expert's anticipated counter-opinions (if any): [BRIEF SUMMARY OR "NOT YET RETAINED"]
**Expert report:** [PASTE REPORT TEXT OR UPLOAD FILE]
Tips
Run this prompt twice: once for a full challenge outline, then again with the instruction 'Now identify the three weakest bases for exclusion in the outline you just generated, and explain why a court might reject each argument.' This adversarial review sharpens your motion before filing.
The 2023 amendments to FRE 702 shifted the burden: the proponent must now establish admissibility by a preponderance of the evidence, and the court -- not the jury -- determines whether the expert's opinions reflect sufficient facts. Make this the centerpiece of your argument in federal court; the amended rule took effect December 1, 2023 and applies to pending proceedings insofar as just and practicable.
Use Part 3 deposition questions before finalizing the motion. Experts sometimes concede methodological limitations at deposition that strengthen your exclusion argument or narrow the scope of testimony that will survive. File the motion after the deposition record is built.
If the expert relies on proprietary software or datasets, ask the AI to draft a focused discovery request for the underlying data, code, and validation studies. Daubert challenges are much stronger when the methodology cannot be independently replicated.
For non-scientific experts (e.g., damages experts, industry practice experts), the Kumho Tire framework applies Daubert flexibly to the specific reliability questions relevant to that type of expertise. Ask the AI to tailor the analysis: 'This is a damages expert, not a scientific expert -- adjust the Daubert factors analysis accordingly.'
Cautions
Do not upload the expert report to a consumer AI tool without confirming that (a) your firm's data policy permits it, (b) no protective order restricts disclosure of the report, and (c) your jurisdiction's ethics rules allow it. Use an enterprise or data-protected AI instance if available. ABA Formal Opinion 512 requires reasonable measures to prevent unauthorized disclosure of client information.
AI does not know your jurisdiction. Daubert applies in federal courts and in the majority of states, but approximately 15 states still follow Frye (general acceptance only) or a modified standard. Confirm your jurisdiction's expert admissibility standard before filing. The AI's analysis of FRE 702 is not directly transferable to Frye states.
Verify every case citation in the AI output against Westlaw, Lexis, or a similar legal research database before including it in a motion. AI-generated legal citations are frequently plausible-sounding but fabricated. In Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023), a court imposed sanctions on attorneys who submitted AI-generated citations without verification. The same risk applies here.
This outline is a starting point, not a final product. Your retained expert -- if you have one -- must review the methodological critique. An AI cannot evaluate the substance of scientific or technical methodology the way a qualified expert can. Use the AI to identify the framework; use your expert to fill the substance.
Daubert motions have strategic costs: filing a weak challenge signals overconfidence and can backfire at trial. Have a supervising attorney evaluate whether the strongest arguments justify the motion before filing.
What This Quick Win Does
A Daubert challenge is one of the highest-leverage motions in complex litigation. Excluding a key expert can collapse the opposing party’s damages case, eliminate causation, or end a products liability claim before trial. But building the challenge requires a methodical analysis of the expert report against five legal standards — qualifications, reliability, methodology, fit, and sufficiency of data — plus a disciplined deposition to build the exclusion record.
This Quick Win produces three deliverables in about 30 minutes: a structured legal analysis of every challengeable aspect of the report, a skeleton motion to exclude or limit, and a set of deposition questions designed to develop the factual record the court will need to rule. It turns the expert report into a roadmap for the challenge rather than a document you simply rebut on the merits.
How to Use It
Step 1: Prepare the Expert Report and Your Inputs
Gather the following before opening the AI tool:
The full expert report (or the most relevant sections if the report is very long)
A list of the expert’s stated opinions — numbered as they appear in the report
The key case facts you believe the expert ignored, mischaracterized, or selectively applied
A note on your jurisdiction (federal versus state; Daubert state versus Frye state)
Any preliminary views from your own retained expert, if one has been engaged
Identify whether your jurisdiction applies Daubert (federal courts and a majority of states), Frye (general acceptance only, roughly 15 states including California, New York, and Illinois), or a modified standard. The AI’s analysis defaults to FRE 702 and Daubert, so the output must be adjusted for Frye jurisdictions.
Step 2: Open an Appropriate AI Tool
Use ChatGPT or Claude with file-upload capability. If the expert report is under the tool’s paste limit, paste the text directly into the prompt field. If it is longer, use the file-upload feature.
Use a data-protected enterprise instance if your firm has one. Do not upload a confidential expert report to a consumer AI tool without confirming that your firm’s data policy and any applicable protective orders permit it.
Step 3: Paste the Completed Prompt and Submit
Fill in each bracketed input field with the specific facts of your case. Paste or upload the expert report. Submit the prompt.
The AI will generate all three parts: the Daubert analysis, the motion skeleton, and the deposition questions. Review them in sequence.
Step 4: Review and Strengthen the Output
Treat the output as you would a draft from a smart but non-expert associate — strong on structure, requiring your substantive judgment:
Verify the legal standards — Confirm that the FRE 702 / Daubert analysis is accurate for your court. Check the 2023 amendment language if the case is in federal court.
Test each challenge basis — Which arguments are strongest? Which depend on facts the AI did not have? A Daubert motion is strongest when limited to 2-4 genuinely meritorious challenges.
Add your expert’s input — Have your retained expert review the methodological critique. The AI identifies the framework; your expert provides the technical substance.
Verify all case citations — Check every citation against Westlaw or Lexis before including it in any filing. AI-generated citations are a sanctions risk.
Step 5: Sequence the Work — Depose Before You File
Use Part 3 (the deposition questions) at the expert’s deposition before finalizing the motion. Experts sometimes concede methodological limitations — or double down on ipse dixit reasoning — at deposition in ways that either strengthen your exclusion argument or reveal grounds you had not identified. The motion should be filed after the deposition record is complete.
Why This Works
Daubert challenges succeed or fail based on the quality of the analytical framework applied to the expert’s report. That framework — FRE 702, the four Daubert factors, the Joiner ipse-dixit doctrine, and the 2023 amendment’s burden-shifting rule — is consistent across cases. AI is highly effective at applying a known legal framework systematically to a new set of facts, catching methodological gaps that are easy to miss in a dense expert report.
The deposition questions work on the same principle: the structure of a Daubert deposition (commit the expert to the methodology, then expose its limits) is generalizable, and the AI applies it to the specific opinions at issue.
What This Does Not Replace
Your retained expert’s technical review of the opposing expert’s methodology — courts expect the challenge to be grounded in scientific or technical substance, not just legal argument
Independent legal research on current Daubert standards in your specific circuit or state — the law on admissibility is not uniform, and circuits have developed distinct approaches to several factors
Citation verification — every case and rule citation must be confirmed in a legal research database before appearing in any filing
Strategic judgment about whether to file at all, and which arguments to lead with, based on the judge’s known approach to expert gatekeeping
Supervising attorney review of the motion and deposition strategy before either is executed
BeginnerTech/Privacy 5 min
Generate a Data-Incident First-Response Checklist
Produce a phased 60-minute / 24-hour / 72-hour action checklist for a suspected data incident -- with role assignments, regulatory trigger analysis, and stop-and-reassess gates -- in under 5 minutes.
Prompt
You are a privacy and cybersecurity attorney advising the legal and security team during the first hours of a suspected personal-data incident. Based on the facts I provide, generate a phased first-response checklist covering the following time horizons:
**PHASE 1 — First 60 Minutes: Triage and Containment**
1. Immediate containment steps (isolate affected systems, revoke compromised credentials, preserve forensic state)
2. Evidence preservation and legal-hold scope: What evidence must be preserved and in what form? Who issues the legal-hold notice, and to whom?
3. Initial incident classification: Is this a confirmed breach, suspected breach, or near-miss? What facts are still unknown?
4. Internal escalation chain: Who must be notified immediately? Assign each action to a specific role (CISO, General Counsel, DPO, Comms/PR, IT/Ops, C-Suite).
5. **STOP-AND-REASSESS GATE #1**: List the minimum facts that must be confirmed before moving to Phase 2. If unknown, list next investigative steps.
**PHASE 2 — First 24 Hours: Assessment and Privilege**
6. Invoke attorney-client privilege: Confirm that the investigation is being directed by legal counsel. Document that communications are privileged.
7. Scope the affected data: What categories of personal data are involved (special category data under GDPR Art. 9, financial data, health data, children's data)? Estimated number of data subjects and jurisdictions.
8. Notification trigger evaluation — work through each framework:
- **GDPR Art. 33/34**: Is there a "breach of personal data" as defined in Art. 4(12)? Is it "unlikely to result in a risk" (no notify) or does it present a risk to rights and freedoms (notify supervisory authority within 72 hours of awareness) or a high risk (notify affected individuals "without undue delay")?
- **US state breach notification laws**: Identify which state laws apply based on affected residents (note that California CCPA/CPRA, New York SHIELD Act, and Texas HB 4181 have different triggers and timelines). Flag any laws with sub-72-hour timelines.
- **Contractual obligations**: Do any vendor agreements, insurance policies, or customer contracts require incident notification within a specified timeframe?
- **Sector-specific rules**: Do HIPAA (60-day rule), GLBA, PCI-DSS, or other sector rules apply?
9. Regulatory outreach decision: Should you engage the supervisory authority proactively before the mandatory deadline? Who makes that call?
10. Engage external counsel / forensics: Decision criteria and engagement checklist.
11. **STOP-AND-REASSESS GATE #2**: Are notification obligations triggered? If yes, list specific deadlines by jurisdiction and assigned owner. If no, list conditions that would change the analysis.
**PHASE 3 — First 72 Hours: Notification and Remediation**
12. Regulatory notification drafting: Key elements required under GDPR Art. 33(3) (nature of breach, DPO contact, categories/approximate number of records, likely consequences, measures taken). Note equivalent requirements for any applicable US or sector regulators.
13. Individual notification drafting (if required under GDPR Art. 34 or applicable state law): Plain-language explanation, recommended protective steps for affected individuals, contact details.
14. Board / senior management briefing: What do executives need to know? What decisions require board-level authorization?
15. Insurance carrier notification: Check cyber insurance policy for notification obligations and coverage conditions.
16. External communications holding statement: Draft a brief holding statement for media/customer inquiries that avoids admissions and commits only to facts that have already been confirmed.
17. Documentation log: What records must be maintained to demonstrate compliance with GDPR Art. 33(5) (documentation obligation regardless of whether notification is required)?
**OUTPUT FORMAT**
For each action item:
- **Action**: Clear, specific task
- **Owner**: Role responsible (e.g., CISO, GC/DPO, Comms, IT/Ops)
- **Deadline**: Specific time or trigger
- **Priority**: Critical / High / Medium
Use a structured table or numbered checklist. After the three phases, add a **Regulator Timelines Summary** table listing each applicable framework, the notification trigger, the deadline, the recipient, and the assigned owner.
Here are the incident facts:
Incident description: [DESCRIBE WHAT HAPPENED -- e.g., "Unauthorized access detected on customer database server. Potentially exposed: name, email, encrypted payment card numbers. Discovered by SIEM alert at 14:00 UTC."]
Data categories involved (if known): [e.g., "Names, emails, encrypted payment card data. No special category data identified yet."]
Estimated number of affected individuals: [e.g., "Unknown -- database contains ~80,000 records"]
Jurisdictions of affected individuals: [e.g., "Primarily EU (Germany, France), US (California, Texas), and UK"]
Organization type: [e.g., "B2C SaaS company, no HIPAA obligations, PCI-DSS in scope"]
Applicable regulations already identified: [e.g., "GDPR, UK GDPR, CCPA/CPRA, New York SHIELD Act, PCI-DSS"]
Known contractual notification obligations: [e.g., "Enterprise customer agreements require notification within 48 hours of confirmed breach"]
Current time since discovery: [e.g., "4 hours"]
Tips
Run this prompt as soon as an incident is plausibly confirmed -- do not wait for full forensic clarity. The 72-hour GDPR clock runs from when the organization becomes 'aware' of a breach meeting the Art. 4(12) definition, which is earlier than most legal teams expect.
Keep the AI output in a privileged document. Label it as 'Prepared at the Direction of Counsel / Attorney-Client Privileged' from the start. Do not share the checklist outside the legal-response team without a privilege review.
Run follow-up prompts for each regulatory framework: 'Draft the GDPR Art. 33 supervisory authority notification for this incident, using the facts confirmed so far. Flag each field where facts are still unknown and note what placeholder language to use.'
After stabilization, run a post-incident prompt: 'Based on this incident, what technical and organizational measures under GDPR Art. 32 should the organization implement to reduce the risk of recurrence? Categorize by immediate (30 days), medium-term (90 days), and strategic (12 months).'
Adapt the checklist to your organization's incident response plan (IRP). If your IRP assigns different role names or uses different escalation triggers, tell the AI: 'Revise the role assignments to match our IRP: [paste relevant IRP sections].'
Cautions
AI cannot confirm whether a GDPR 'personal data breach' as defined in Art. 4(12) has occurred -- that is a legal and factual determination requiring attorney judgment. Do not use the AI output to make the notification/no-notification decision without qualified legal review.
Notification timelines are jurisdictional and rapidly evolving. GDPR's 72-hour rule (Art. 33) is measured from 'awareness', not from containment. Several US states impose fixed statutory deadlines (e.g., Florida's 30-day rule; New Mexico's 45-day rule) that are shorter than the general 'without unreasonable delay' standard used elsewhere. The AI may not reflect the most recent legislative amendments. Always verify current law.
GDPR (enforced by EDPB-coordinated national DPAs) and Brazil's LGPD (enforced by ANPD) have meaningfully different breach-notification thresholds, timelines (GDPR: 72 hours to DPA; LGPD: 'reasonable timeframe' currently interpreted as 3 working days by ANPD), and documentation obligations. Do not treat them as interchangeable.
Do not paste actual incident data -- system names, confirmed vulnerabilities, affected record samples -- into a consumer AI tool without a data processing agreement. Use an enterprise or API deployment. Describe the incident in general terms if necessary.
The AI-generated checklist is a workflow scaffold, not legal advice. Incident response involves legal strategy decisions (when to self-report, how to manage regulator relationships, privilege architecture) that require an experienced privacy attorney.
What This Quick Win Does
A data incident is the worst time to figure out your response workflow. The GDPR’s 72-hour notification clock (Article 33) starts running from the moment your organization becomes “aware” of a qualifying breach — not when the forensic investigation concludes, and not when legal counsel finishes reviewing. In parallel, US state laws, sector-specific regulations (HIPAA, PCI-DSS), and contractual obligations may impose their own timelines, some of them shorter.
This Quick Win generates a phased first-response checklist — 60 minutes / 24 hours / 72 hours — with role assignments, regulatory trigger analysis, and explicit “stop-and-reassess” gates that force the team to confirm key facts before moving forward. It is designed to be used at the start of an incident, when the facts are still uncertain and every hour counts.
How to Use It
Step 1: Gather the Initial Facts
You do not need complete information to run this prompt — incomplete facts are the norm at the start of an incident. Fill in what you know at the time of the prompt:
What happened (or what is suspected to have happened)
What categories of data may be involved
How many individuals may be affected and in which jurisdictions
Which regulations and contracts are likely in scope
How long ago the incident was discovered
Use conservative estimates and flag unknowns explicitly. The AI is instructed to include investigative next steps for unresolved questions.
Step 2: Run the Prompt in a Privileged Environment
Open an enterprise or API-deployed instance of ChatGPT or Claude — not a consumer free-tier tool. From the moment you begin documenting the incident, mark the output as “Attorney-Client Privileged / Prepared at Direction of Counsel.” Paste the prompt, fill in the incident facts, and submit.
Step 3: Distribute the Checklist to the Response Team
Share the phased checklist with the core incident response team (CISO, GC/DPO, Comms, IT/Ops). Assign owners and deadlines. The stop-and-reassess gates are deliberate checkpoints: do not proceed to the next phase until the gate criteria are met.
Step 4: Track Regulatory Deadlines in Real Time
The Regulator Timelines Summary table at the end of the AI output gives you a consolidated view of all notification deadlines triggered by the incident. Paste this into your incident tracking system with actual timestamps as facts are confirmed.
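If your tracking system accepts scripted inputs, the deadline arithmetic in that table can be sanity-checked with a few lines of code. The sketch below is illustrative only: the GDPR 72-hour window is real, but the 48-hour contractual window is a hypothetical drawn from the sample inputs above, and every deadline must be confirmed against the current statute or contract.

```python
from datetime import datetime, timedelta, timezone

# Illustrative notification windows, measured from the moment of "awareness".
# The contractual entry is hypothetical -- substitute your actual obligations.
FRAMEWORKS = {
    "GDPR Art. 33 (supervisory authority)": timedelta(hours=72),
    "Enterprise contract (hypothetical 48h clause)": timedelta(hours=48),
}

def deadline_table(awareness: datetime) -> list[tuple[str, datetime]]:
    """Return (framework, deadline) pairs sorted most-urgent first."""
    rows = [(name, awareness + window) for name, window in FRAMEWORKS.items()]
    return sorted(rows, key=lambda row: row[1])

if __name__ == "__main__":
    # Example: awareness pegged to the SIEM alert timestamp, in UTC.
    aware = datetime(2024, 5, 1, 14, 0, tzinfo=timezone.utc)
    for name, due in deadline_table(aware):
        print(f"{due:%Y-%m-%d %H:%M UTC}  {name}")
```

Pegging every computation to a single, documented "awareness" timestamp avoids the most common tracking error: restarting the clock from containment or forensic confirmation, which the regulators do not permit.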
Step 5: Run Phase-Specific Follow-Up Prompts
As the incident evolves, use the AI for phase-specific drafting tasks:
“Draft the GDPR Article 33 notification to [supervisory authority] using the confirmed facts. Flag each required field under Art. 33(3) where facts are still uncertain.”
“Draft an individual notification letter under GDPR Art. 34 that complies with the plain-language requirement of Art. 12 and includes the information required under Art. 34(2).”
“Draft a board briefing memo summarizing the incident, current response status, regulatory exposure, and decisions requiring board authorization.”
Why This Works
Incident response is a process problem as much as a legal problem. The checklist format maps directly to how response teams actually work: parallel workstreams with different owners, sequential decision gates, and compressed timelines. By giving the AI the regulatory frameworks (GDPR Art. 33/34, US state laws, sector rules) and the organizational structure (CISO / GC / Comms / Ops), the prompt produces output that is immediately usable as a command-and-control document rather than a generic legal memo.
The stop-and-reassess gates are the most important feature. They force the team to answer: “What do we actually know right now?” — which is the question that determines notification obligations.
What This Does Not Replace
Legal judgment on whether a qualifying breach has occurred under each applicable regulatory definition (a determination that requires counsel, not AI)
A pre-incident response plan (IRP) — this prompt supplements an IRP; it does not substitute for one
Forensic investigation by qualified cybersecurity professionals
Regulatory relationship management — decisions about when and how to engage a supervisory authority proactively involve legal strategy that AI cannot provide
Insurance counsel coordination — cyber insurance coverage conditions can be jeopardized by unauthorized communications; involve coverage counsel before any external disclosures
BeginnerTech/Privacy 8 min
Summarize a Vendor DPA for Negotiation
Turn a vendor's Data Processing Agreement into a structured negotiation summary -- deviations from your standard playbook, sub-processor risks, audit rights, breach timelines, transfer mechanisms, and traffic-light redlines -- in about 8 minutes.
Prompt
You are a privacy attorney reviewing a vendor's Data Processing Agreement (DPA) on behalf of the data controller (your client). I will provide the full text of the vendor's DPA. Produce a structured negotiation summary covering the following areas:
**1. Parties and Processing Scope**
- Identify the controller, processor, and any sub-processors named in the agreement.
- Summarize the subject matter, nature, purpose, and duration of processing as described in the DPA (required under GDPR Art. 28(3)).
- Flag any purposes of processing that are vague, overly broad, or that could permit the vendor to use data for its own purposes.
**2. Sub-Processor Disclosure and Approval Rights**
- List all sub-processors disclosed in the DPA or linked schedule.
- Identify the approval mechanism: Does the controller have the right to object to new sub-processors (GDPR Art. 28(2) requirement)? What is the notice period?
- Flag: Does the DPA permit the vendor to add sub-processors without prior notice or with only post-hoc notification?
- Rate this section: **Green** (fully compliant with Art. 28(2)) / **Amber** (workable with modifications) / **Red** (non-compliant, requires significant negotiation).
**3. Security Measures (GDPR Art. 32)**
- Summarize the technical and organizational security measures (TOMs) described in the DPA or security exhibit.
- Flag any measures that are aspirational ("will implement") rather than operational ("has implemented").
- Note any missing standard measures (encryption at rest and in transit, access controls, penetration testing, vulnerability management, incident response).
- Rate: **Green / Amber / Red**.
**4. Breach Notification Timing**
- State the vendor's contractual breach notification commitment (time period and trigger).
- Compare to GDPR Art. 33(2): The processor must notify the controller "without undue delay" after becoming aware of a personal data breach. Market standard for contractual commitments is 24-48 hours.
- Flag: Does the DPA's notification obligation start from "awareness" or from "confirmation" of a breach? The latter is non-standard and reduces the controller's ability to meet its own 72-hour deadline.
- Proposed redline if non-compliant: Draft the revised clause.
- Rate: **Green / Amber / Red**.
**5. Audit Rights (GDPR Art. 28(3)(h))**
- Describe the audit rights granted to the controller: on-site audit, third-party certification (SOC 2 Type II, ISO 27001), or questionnaire only?
- Flag: Does the DPA require unreasonable advance notice, limit audits to business hours only, allow the vendor to object without providing an alternative, or restrict audits to once per calendar year without cause exception?
- Note whether certifications are offered as a substitute for audit and whether this is acceptable.
- Rate: **Green / Amber / Red**.
**6. International Data Transfers**
- Identify whether any transfers outside the EEA/UK/adequate countries are contemplated.
- State the transfer mechanism used: Standard Contractual Clauses (SCCs -- which module?), adequacy decision, binding corporate rules, or derogation.
- If SCCs are used: Are they the 2021 EU SCCs? Is the correct module applied (Module 2 for controller-to-processor transfers)? Are the Annexes (I, II, III) completed?
- Flag any transfers that rely on outdated mechanisms (pre-Schrems-II standard clauses, Privacy Shield remnants).
- Note whether a Transfer Impact Assessment (TIA) is referenced or required by the DPA.
- Rate: **Green / Amber / Red**.
**7. Retention, Return, and Deletion**
- State the vendor's obligations upon termination or expiry: return data, delete data, or both?
- What is the timeframe for deletion? Is a deletion certificate provided?
- Flag: Does the DPA permit the vendor to retain data beyond termination for its own purposes (analytics, model training, legal compliance)? If so, on what basis?
- Rate: **Green / Amber / Red**.
**8. Liability and Indemnification**
- Describe the liability allocation between controller and processor.
- Is there a cap on the processor's liability? How does it relate to the fees paid? (Note: GDPR Chapter VIII allows data subjects to sue both controllers and processors directly; contractual caps between parties do not limit regulatory fines.)
- Flag any indemnification obligations that run solely in the vendor's favor.
- Rate: **Green / Amber / Red**.
**9. Controller Instructions**
- Does the DPA confirm that the processor only processes personal data on documented instructions from the controller (GDPR Art. 28(3)(a))?
- Is there a mechanism for issuing and documenting instructions throughout the relationship (not just at signature)?
- Flag any language permitting processing beyond controller instructions without notification.
**10. Deviations from Standard Controller DPA Playbook**
Based on the above analysis, list the **top 5 deviations** from a market-standard controller-favorable DPA, ranked by severity. For each deviation, provide:
- The issue in plain language
- The relevant GDPR article or market standard
- A proposed redline (specific revised clause language)
**SUMMARY TABLE**
Produce a traffic-light summary table with columns: Section | Current DPA Position | Market Standard | Rating (Green/Amber/Red) | Proposed Action.
Here is the vendor DPA:
[PASTE THE FULL DPA TEXT HERE]
Our standard DPA playbook position (optional -- include if you have one):
[PASTE YOUR STANDARD POSITIONS ON KEY CLAUSES, OR LEAVE BLANK FOR MARKET-STANDARD ANALYSIS]
Tips
Include your organization's standard DPA playbook positions in the prompt if you have them. The AI will then compare against your specific requirements, not just market standards, making the output far more actionable.
After the review, run a follow-up prompt for each Red-rated clause: 'Draft three alternative versions of the sub-processor approval clause -- one aggressive (controller-favorable), one balanced (market-standard), and one fallback (minimum acceptable position).'
For large DPAs with security exhibits, annexes, and sub-processor lists, paste each document separately and ask the AI to integrate the analysis: 'I have provided the main DPA body. Now review the Security Exhibit [paste]. Update your analysis for sections 3 and 5.'
Ask the AI to draft your negotiation email once you have the redlines: 'Using the top 5 deviations above, draft a professional negotiation letter to the vendor's legal team requesting the redlines. Tone: collaborative but firm. Include a proposed call to resolve open items.'
For LGPD compliance, add to the prompt: 'Also evaluate whether this DPA satisfies Brazil LGPD Art. 39 (processor obligations) and ANPD Resolution CD/ANPD No. 14/2024 on data sharing. Flag any gaps specific to Brazilian law.'
Cautions
AI may hallucinate specific clause numbers or misread cross-references within the DPA. Verify every specific article or section citation in the AI output against the original document before sending any redlines to the vendor.
GDPR (EDPB) and LGPD (ANPD) have different approaches to processor obligations. LGPD Art. 39 is less prescriptive than GDPR Art. 28, and ANPD guidance on DPA requirements is still evolving. Do not assume a GDPR-compliant DPA automatically satisfies LGPD.
Do not paste the vendor's DPA into a consumer AI tool without verifying your organization's data handling policy and the vendor's terms of service. Use an enterprise deployment with a signed data processing agreement with the AI provider.
The traffic-light rating reflects pattern matching against market standards, not a legal opinion. A Green rating does not mean the clause is enforceable, commercially appropriate for your specific risk profile, or compliant with every applicable jurisdiction.
Supervising attorney review is required before sending any redlines to a vendor. The AI output is a first-pass analysis and drafting aid -- negotiation strategy and final positions require legal judgment.
What This Quick Win Does
Reviewing a vendor’s Data Processing Agreement is one of the most repetitive tasks in privacy practice. Whether you are in-house counsel evaluating a SaaS vendor or advising a client on a new data partnership, the review follows the same structure every time: check the eight mandatory Art. 28 provisions, evaluate sub-processor controls, assess breach-notification timing, verify the transfer mechanism, and identify where the vendor’s paper deviates from your standard positions.
This Quick Win generates a structured negotiation summary — including a traffic-light review and proposed redlines — in about 8 minutes. The output is designed to be used directly in a negotiation: as a briefing memo for the business team, as a basis for your redline markup, or as the agenda for a call with the vendor’s legal counsel.
How to Use It
Step 1: Obtain the Complete DPA Package
Before running the prompt, gather all documents that form the DPA:
The main DPA body
Any security exhibit or annex (often titled “Annex II: Technical and Organizational Measures”)
The sub-processor list (often a linked webpage or Annex III)
Any applicable Standard Contractual Clauses and their completed annexes (Annex I: description of transfers; Annex II: TOMs; Annex III: sub-processors)
Any order forms or statements of work that are incorporated by reference
Missing annexes often contain the most important terms. A DPA that looks complete on its face may have inadequate TOMs buried in an exhibit.
Step 2: Prepare Your Playbook Positions (Optional but Recommended)
If your organization has a standard DPA playbook — preferred positions on sub-processor approval windows, audit rights, breach timelines, liability caps, and deletion timeframes — paste these into the optional playbook section of the prompt. The AI will compare the vendor’s paper against your specific requirements rather than generic market standards.
Step 3: Run the Prompt
Paste the full DPA text (and any relevant annexes) into the prompt. For very long DPAs (30+ pages), paste the main body first and follow up with the security exhibit and SCC annexes in separate messages.
Step 4: Use the Traffic-Light Table
The summary table gives you an at-a-glance view of every key issue:
Green: Compliant with GDPR Art. 28 and market standard — accept or minor comment
Amber: Workable but requires modification — negotiate the proposed redline
Red: Non-compliant or significantly controller-unfavorable — escalate; do not accept without changes
Paste the table into your deal memo or use it to structure a kickoff call with the vendor’s team.
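If you run this review across many vendors, the rating-to-action mapping can also be kept as structured data for tracking. A minimal Python sketch (the section names and ratings are illustrative, not drawn from any real DPA):

```python
from collections import Counter

# Illustrative ratings from a hypothetical DPA review
ratings = {
    "Sub-processors": "Red",
    "Breach notification": "Amber",
    "International transfers": "Green",
    "Retention and deletion": "Amber",
}

# Map each rating to the action described above
actions = {
    "Green": "accept or minor comment",
    "Amber": "negotiate the proposed redline",
    "Red": "escalate; do not accept without changes",
}

def triage(ratings):
    """Return (section, rating, action) rows, worst-rated first."""
    severity = {"Red": 0, "Amber": 1, "Green": 2}
    rows = sorted(ratings.items(), key=lambda kv: severity[kv[1]])
    return [(section, rating, actions[rating]) for section, rating in rows]

for section, rating, action in triage(ratings):
    print(f"{section:25} {rating:6} {action}")

print(Counter(ratings.values()))  # quick tally for the deal memo
```

Sorting worst-first mirrors how a negotiation agenda is usually ordered: Red items are discussed before Amber comments.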
Step 5: Iterate on Red-Rated Clauses
For each Red-rated item, run a follow-up prompt to develop your negotiating position:
“The vendor’s breach notification clause requires notification within 72 hours of ‘confirmed’ breach rather than ‘awareness.’ Draft three alternative formulations — aggressive, market-standard, and fallback — with brief commentary on each.”
“The sub-processor approval clause gives us 5 business days to object but does not state what happens if we do object. Draft a revised clause that adds a dispute resolution mechanism and a fallback prohibition on the vendor proceeding if we object within the notice period.”
Why This Works
GDPR Article 28(3) specifies the eight minimum provisions that every DPA must include. This gives the review a deterministic structure: either a provision is present and compliant, or it is not. The prompt maps directly onto that structure, which means the AI is doing pattern recognition against a well-defined legal checklist — a task where large language models perform reliably when the checklist is explicit in the prompt.
The proposed redlines serve a practical purpose: they move the conversation from “this clause is inadequate” to “here is the specific language we need,” which is how vendor negotiations actually progress.
What This Does Not Replace
A full legal review of the DPA in the context of the broader commercial relationship, data flows, and applicable law beyond GDPR
LGPD-specific analysis for Brazilian data: LGPD Art. 39 processor obligations and ANPD guidance require separate evaluation and should not be assumed to overlap completely with GDPR Art. 28
Schrems II / Transfer Impact Assessment analysis for transfers to non-adequate countries — the AI can flag the issue, but the TIA requires a country-specific factual and legal assessment
Negotiation strategy decisions about which redlines are deal-breakers versus concessions — those require legal and business judgment
Final approval of any DPA markup before it is sent to a vendor — supervising attorney review is required
Intermediate · Tech/Privacy · 15 min
Run a GDPR & LGPD Applicability Analysis on a New Product
Given a product's data flows, user geography, and processing purposes, determine whether GDPR and LGPD apply, identify legal bases for each processing activity, map controller/processor roles, evaluate DPIA/RIPD triggers, and produce a jurisdiction matrix with a legal-basis decision tree.
Prompt
You are a privacy attorney conducting a GDPR and LGPD applicability analysis for a new product. Based on the product description I provide, work through the following analysis systematically:
**PART 1 — Territorial Scope and Applicability**
**GDPR Territorial Scope (Art. 3)**
1. Does the "establishment criterion" apply (Art. 3(1))? Is the organization or any entity in its group established in the EU/EEA and processing personal data in the context of that establishment's activities?
2. Does the "targeting criterion" apply (Art. 3(2)(a))? Is the organization offering goods or services to data subjects in the EU/EEA, even if not established there? Analyze: Is the website/app accessible from the EU? Are there EU-specific pricing, language, currency, or marketing indicators?
3. Does the "monitoring criterion" apply (Art. 3(2)(b))? Does the product monitor the behavior of individuals who are in the EU/EEA (behavioral tracking, profiling, location tracking)?
4. **Conclusion**: Does GDPR apply? To which processing activities specifically?
**LGPD Territorial Scope (Art. 3)**
5. Does the "territorial criterion" apply (LGPD Art. 3(I))? Is the processing operation carried out in Brazilian territory?
6. Does the "offering criterion" apply (LGPD Art. 3(II))? Is the product offered to individuals located in Brazil, or are the processing activities aimed at individuals in Brazil?
7. Does the "origin criterion" apply (LGPD Art. 3(III))? Was the personal data collected in Brazil?
8. **Conclusion**: Does LGPD apply? To which processing activities specifically?
**PART 2 — Controller / Processor Mapping**
9. For each entity in the product ecosystem (product company, analytics vendors, payment processors, AI/ML providers, advertising networks, cloud infrastructure), determine:
- Is the entity a **controller** (determines purposes and means of processing) under GDPR Art. 4(7) / LGPD Art. 5(VI)?
- Is the entity a **processor** (processes on behalf of the controller) under GDPR Art. 4(8) / LGPD Art. 5(VII)?
- Is the entity a **joint controller** under GDPR Art. 26?
10. What agreements are required between these entities (DPA under GDPR Art. 28, joint-controller arrangement under Art. 26, data sharing agreement under LGPD Art. 26)?
**PART 3 — Processing Activities and Legal Bases**
For each processing activity identified in the product description, produce a legal-basis analysis:
**Under GDPR (Art. 6 for general data; Art. 9 for special category data)**
For each activity:
- Describe the processing activity and the data involved
- Identify the most appropriate legal basis:
- Art. 6(1)(a): Consent -- Is it freely given, specific, informed, and unambiguous? Is it withdrawable without detriment?
- Art. 6(1)(b): Contract -- Is the processing strictly necessary for the performance of a contract with the data subject?
- Art. 6(1)(c): Legal obligation -- Does a specific EU or member-state law require this processing?
- Art. 6(1)(f): Legitimate interests -- What is the legitimate interest? Would a reasonable data subject expect this? Does the LIA balance in favor of processing?
- Flag any processing of special category data (Art. 9) requiring an additional Art. 9(2) basis (explicit consent, employment law, vital interests, public interest, health care, legal claims)
- Note if any processing involves solely automated decision-making with legal/significant effects (Art. 22)
**Under LGPD (Art. 7 for general data; Art. 11 for sensitive data)**
For the same activities, identify the LGPD legal basis:
- Art. 7(I): Consent
- Art. 7(II): Legal obligation
- Art. 7(V): Execution of a contract
- Art. 7(IX): Legitimate interest (note: LGPD Art. 10 limits legitimate interest to activities that data subjects would reasonably expect)
- For sensitive data (Art. 5(II) -- health, biometric, religious, political, racial/ethnic data): Art. 11 requirements (explicit consent or other specific basis)
**PART 4 — DPIA / RIPD Trigger Evaluation**
11. Under GDPR Art. 35, is a Data Protection Impact Assessment (DPIA) required? Evaluate against the EDPB's trigger criteria:
- Large-scale processing of sensitive data (Art. 9)?
- Systematic profiling with significant effects?
- Systematic monitoring of publicly accessible areas?
- Processing of data relating to vulnerable individuals?
- Innovative use of technology?
- Matching or combining datasets in ways data subjects would not expect?
- Transfers to countries without adequate protection?
12. Under LGPD Art. 38, is a Relatório de Impacto à Proteção de Dados Pessoais (RIPD) required? (ANPD may require this upon request; it is recommended as best practice when processing poses significant risk to data subjects.)
13. **Conclusion**: DPIA required / recommended / not triggered. RIPD required / recommended / not triggered. Key risks to document.
**PART 5 — International Data Transfers**
14. Identify all international transfers implied by the product's data flows (cloud region, third-party processors, analytics providers).
15. For each transfer destination:
- **GDPR**: Is there an adequacy decision (Art. 45)? If not, what transfer mechanism is used (SCCs Art. 46, BCRs, derogation Art. 49)?
- **LGPD**: Is the destination country considered to provide adequate protection by ANPD? If not, what mechanism (LGPD Art. 33: adequate country list, standard clauses, global corporate policies, cooperation agreements)?
16. Flag any transfers with no identified lawful mechanism.
**PART 6 — Data Subject Rights Workflows**
17. For each of the following rights, describe the workflow required and identify any operational gaps:
- GDPR: Access (Art. 15), Rectification (Art. 16), Erasure (Art. 17), Restriction (Art. 18), Portability (Art. 20), Objection (Art. 21) -- one-month response window (Art. 12(3)), extendable by two further months for complex requests
- LGPD: Art. 18 rights (confirmation, access, correction, anonymization/blocking/elimination, portability, deletion of consent-based processing, information on sharing, right to revoke consent) -- 15-day response window
18. Are there any rights that the product architecture makes technically difficult to fulfill (e.g., erasure in a distributed system, portability from a proprietary data format)?
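The two response windows above differ enough to matter operationally. A minimal Python sketch of a deadline calculator, assuming GDPR's "one month" (Art. 12(3)) is approximated as a calendar-month addition and ignoring extensions and local business-day rules:

```python
from datetime import date, timedelta

def add_one_month(d: date) -> date:
    """Approximate GDPR Art. 12(3)'s 'one month': same day next month,
    clamped to the last day of the month where needed (31 Jan -> 28/29 Feb)."""
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    for day in (d.day, 30, 29, 28):
        try:
            return date(year, month, day)
        except ValueError:
            continue  # day does not exist in target month; clamp downward

def dsr_deadlines(received: date) -> dict:
    """Illustrative first-pass deadlines for a data subject request."""
    return {
        "GDPR (Art. 12(3), one month)": add_one_month(received),
        "LGPD (Art. 19, 15 days)": received + timedelta(days=15),
    }

print(dsr_deadlines(date(2025, 1, 31)))
```

A request received on 31 January 2025 yields a 28 February GDPR deadline but a 15 February LGPD deadline, which is why the two workflows need separate tracking.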
**OUTPUT**
Produce:
A. **Jurisdiction Applicability Matrix**: table with columns: Regulation | Applies (Yes/No/Conditional) | Basis for Applicability | Scope of Application
B. **Legal Basis Matrix**: table with columns: Processing Activity | Data Categories | GDPR Legal Basis (Art. 6/9) | LGPD Legal Basis (Art. 7/11) | Risk Notes
C. **Obligations Checklist**: list of required actions (DPA agreements, DPIA, RIPD, privacy notice updates, consent mechanisms, DSR workflows, transfer mechanisms) with priority (Immediate / Before Launch / Ongoing)
D. **Decision Tree for Legal Basis**: a simple if/then flowchart (in text format) to guide the team in selecting the legal basis for future processing activities
Here is the product description:
Product name: [NAME]
Product type: [e.g., "B2B SaaS analytics platform for HR teams"]
Data flows: [Describe what personal data the product collects, from whom, how, where it is stored, and with whom it is shared -- e.g., "Collects employee names, job titles, performance metrics, and behavioral data via browser plugin. Stores in AWS us-east-1. Shares anonymized benchmarking data with third-party analytics partner in the US."]
User geography: [Where are users located -- e.g., "Customers are US and EU companies; end-users are employees globally including Brazil and Germany"]
Processing purposes: [e.g., "HR analytics, performance benchmarking, workforce planning, product improvement, fraud prevention"]
Organization details: [e.g., "US-headquartered, no EU establishment, no BR establishment, 50,000+ EU end-users, 10,000+ BR end-users"]
Special data categories: [Any health, biometric, racial/ethnic, political, religious, or children's data -- or "none identified"]
Known third-party processors: [e.g., "AWS (hosting), Snowflake (data warehouse), Mixpanel (analytics), Stripe (payments)"]
Tips
Be as specific as possible about data flows and user geography. Vague inputs ('we have some EU users') produce vague analyses. Specify the approximate number of EU and Brazilian users and whether they are consumers, employees, or B2B contacts.
Run the analysis twice if the product processes both consumer data and employee/B2B contact data. The legal-basis analysis differs significantly: employee data often relies on legal obligation or contract, while consumer data more commonly relies on consent or legitimate interest.
After the applicability analysis, run a gap analysis against your existing privacy notices: 'Compare the processing activities and legal bases identified in this analysis against the following privacy policy [paste]. Identify any processing activities that are not disclosed or that are disclosed on an incorrect legal basis.'
For LGPD specifically, ask a follow-up about ANPD enforcement priorities: 'What enforcement actions has ANPD taken in 2024-2025 most relevant to this product type? What compliance gaps have been the focus of ANPD investigations?' This contextualizes the risk landscape.
Use the decision tree output as a living tool: paste it into a Confluence page or internal wiki and update it as new processing activities are added. Review the legal-basis matrix when product features change -- a new AI feature, behavioral targeting toggle, or third-party data partnership may trigger a new basis analysis.
Cautions
AI analysis of territorial scope is interpretive, not definitive. Whether the 'targeting criterion' (GDPR Art. 3(2)(a) / LGPD Art. 3(II)) is met in a borderline case requires factual investigation and legal judgment -- particularly for products that are passively accessible from a jurisdiction but not actively marketed there. EDPB guidelines on this distinction are the authoritative reference.
GDPR (EDPB / national DPAs) and LGPD (ANPD) are enforced by different authorities with different enforcement cultures and guidance. LGPD's legitimate interest basis (Art. 10) is interpreted more narrowly by ANPD than GDPR Art. 6(1)(f) is by the EDPB. Do not assume legal bases are interchangeable across frameworks.
The AI cannot access your actual product architecture, code, data flows, or vendor contracts. The analysis is only as accurate as the product description you provide. Before finalizing any applicability or DPIA conclusion, validate it against a real data mapping exercise.
Do not rely on AI output to conclude that a DPIA is not required. The DPIA trigger analysis under GDPR Art. 35 depends on facts that may not be fully captured in a product description. Err on the side of conducting a DPIA for any innovative product with significant data processing.
This analysis produces a framework and a set of legal questions. The final applicability determination, legal basis selection, and DPIA/RIPD decisions require review by a qualified privacy attorney familiar with current regulatory guidance and enforcement trends in each applicable jurisdiction.
What This Quick Win Does
When a product team comes to legal with a new feature or product, privacy counsel typically needs to answer four questions quickly: Does GDPR apply? Does LGPD apply? What is the legal basis for each processing activity? And what do we need to do before launch?
Answering these questions rigorously — working through GDPR Article 3’s territorial scope criteria, LGPD Article 3’s parallel framework, legal basis selection under Articles 6 and 9 (GDPR) and Articles 7 and 11 (LGPD), DPIA and RIPD triggers, and international transfer mechanisms — typically takes several hours and produces a memo that the product team cannot easily act on.
This Quick Win compresses that first-pass analysis to about 15 minutes. The output is structured around three actionable artifacts: a jurisdiction applicability matrix, a legal-basis matrix per processing activity, and a prioritized obligations checklist with a legal-basis decision tree for the team to reuse as the product evolves.
How to Use It
Step 1: Conduct a Pre-Prompt Data Mapping
The quality of the AI analysis is entirely dependent on the accuracy of the product description. Before running the prompt, spend 5-10 minutes mapping the product’s data flows:
What data is collected? List every data element by category (identifiers, behavioral data, device data, inferred data, special category data).
From whom? Consumers, employees, B2B contacts, minors?
How? Forms, cookies, APIs, third-party SDKs, passthrough from other systems?
Where is it stored? Cloud region, country, specific provider.
Who receives it? List every third-party processor and the data they receive.
For what purpose? Be specific: “behavioral analytics for product improvement” is more useful than “analytics.”
If you have a data flow diagram or a completed data mapping worksheet, summarize it into the product description field. The more specific the input, the more accurate the applicability and legal-basis analysis.
Step 2: Identify All User Geographies
List the countries where users are located, not just where the organization is headquartered. GDPR's targeting criterion (Art. 3(2)(a)) and LGPD's offering and collection criteria (Art. 3(II)-(III)) turn on where the individuals are, not on organizational location. A US-headquartered company with EU and Brazilian users is subject to both regulations for that processing.
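The territorial-scope questions in Part 1 of the prompt reduce to a small boolean screen. A minimal sketch with illustrative flags (a borderline targeting case still requires factual investigation and legal judgment):

```python
def applicability_screen(facts: dict) -> dict:
    """First-pass jurisdiction screen; illustrative flags, not legal advice."""
    gdpr = (
        facts.get("eu_establishment")             # GDPR Art. 3(1)
        or facts.get("offers_to_eu_users")        # GDPR Art. 3(2)(a)
        or facts.get("monitors_eu_behaviour")     # GDPR Art. 3(2)(b)
    )
    lgpd = (
        facts.get("processing_in_brazil")         # LGPD Art. 3(I)
        or facts.get("offers_to_br_users")        # LGPD Art. 3(II)
        or facts.get("data_collected_in_brazil")  # LGPD Art. 3(III)
    )
    return {"GDPR": bool(gdpr), "LGPD": bool(lgpd)}

# US company, no EU or BR establishment, but EU and BR end-users:
print(applicability_screen({
    "offers_to_eu_users": True,
    "offers_to_br_users": True,
}))
```

Any single True flag is enough to trigger the regulation for the relevant processing, which is why headquarters location alone never answers the question.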
Step 3: Run the Prompt
Fill in all eight fields in the product description section and submit. The AI will work through the six analytical parts in sequence. For complex products with many processing activities, the output may be long; that is appropriate, so do not truncate the prompt.
Step 4: Review the Three Output Matrices
Jurisdiction Applicability Matrix: Verify each Yes/No/Conditional conclusion against the EDPB’s Guidelines 3/2018 on territorial scope. If a determination is borderline, note it as requiring legal judgment.
Legal Basis Matrix: For each processing activity, confirm that the legal basis identified is genuinely available — in particular, verify that any reliance on “legitimate interests” (GDPR Art. 6(1)(f) / LGPD Art. 10) is defensible based on a legitimate interests assessment (LIA), and that any reliance on “contract” (Art. 6(1)(b)) is limited to processing strictly necessary for the contract’s performance.
Obligations Checklist: Prioritize the “Before Launch” items. These are the compliance actions that create regulatory exposure if not completed prior to user-facing release. Items flagged “Immediate” typically involve existing processing that is out of compliance now.
Step 5: Use the Decision Tree for Ongoing Product Development
Save the legal-basis decision tree as a living reference document. When the product team adds a new feature — a behavioral recommendation engine, a third-party data integration, a new analytics capability — run it through the decision tree before development begins. This embeds privacy by design into the product lifecycle rather than retrofitting compliance after launch.
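Once the AI has produced the text flowchart, some teams encode it so product engineers can self-serve a first answer before contacting legal. A heavily simplified Python sketch, with illustrative questions and ordering (not a substitute for the basis analysis above or for counsel review):

```python
def suggest_gdpr_basis(activity: dict) -> str:
    """Very simplified first-pass triage; keys are illustrative flags a
    product team would fill in. Not legal advice."""
    if activity.get("special_category"):
        return "Art. 9 basis needed on top of Art. 6 - escalate to counsel"
    if activity.get("legally_required"):
        return "Art. 6(1)(c) legal obligation"
    if activity.get("strictly_necessary_for_contract"):
        return "Art. 6(1)(b) contract"
    if activity.get("user_would_reasonably_expect"):
        return "Art. 6(1)(f) legitimate interests - document an LIA"
    return "Art. 6(1)(a) consent - or do not process"

# Core feature delivery vs. a new behavioral-targeting toggle:
print(suggest_gdpr_basis({"strictly_necessary_for_contract": True}))
print(suggest_gdpr_basis({"user_would_reasonably_expect": False}))
```

The ordering matters: the tree checks the narrower, more defensible bases before falling through to consent, mirroring the analysis sequence in Part 3 of the prompt.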
Follow-Up Prompts for Specific Compliance Tasks
After completing the applicability analysis, use targeted follow-up prompts:
DPIA Scoping:
“Based on the processing activities identified, draft a DPIA scope document for [specific high-risk processing activity] that includes: description of processing, necessity and proportionality assessment, identification of risks, and proposed mitigation measures. Structure it to satisfy GDPR Art. 35(7) requirements.”
LGPD Rights Workflow:
“Draft a data subject rights workflow for Brazilian users that satisfies LGPD Art. 18 requirements. Include: request intake, identity verification, 15-day response timeline, response templates for each right type, and escalation path for complex requests.”
Privacy Notice Gap Analysis:
“Compare the processing activities in this analysis against the following privacy notice [paste]. Identify: (1) processing activities not disclosed, (2) legal bases that differ from those disclosed, (3) missing GDPR Art. 13/14 required information, (4) missing LGPD Art. 9 required information.”
Why This Works
GDPR and LGPD applicability analysis follows a structured decision tree that maps naturally to the AI’s ability to apply rules to facts. The territorial scope criteria (Art. 3 GDPR, Art. 3 LGPD), legal basis exhaustive lists (Art. 6/9 GDPR, Art. 7/11 LGPD), and DPIA trigger criteria (EDPB guidelines) are all enumerated in law and regulatory guidance. When you provide specific product facts, the AI can apply these criteria systematically — more thoroughly than a first-pass memo produced in time pressure, and in a format that the product team can actually use.
The key limitation is the same as any legal analysis: the AI applies rules to the facts you provide. If the facts are incomplete or inaccurate, the analysis will be too.
What This Does Not Replace
A full data mapping exercise using a structured data inventory — the AI analysis is only as accurate as the product description you provide
A qualified privacy attorney’s judgment on borderline territorial scope questions, legal basis selection for complex processing activities, and DPIA conclusions
ANPD- and EDPB-specific regulatory expertise — enforcement priorities, informal guidance, and case-by-case interpretations of both agencies evolve continuously and may not be reflected in AI training data
A completed DPIA or RIPD — the AI output identifies whether an assessment is triggered; conducting the actual impact assessment requires stakeholder interviews, technical architecture review, and legal sign-off
Ongoing compliance monitoring — privacy law in both the EU and Brazil is evolving rapidly; regulatory guidance, adequacy decisions, and enforcement precedents must be tracked continuously, not just at product launch
Advanced · Tech/Privacy · 30 min
Run an EU AI Act Risk Classification on a Deployed System
Given a description of a deployed AI system — purpose, training data, deployment context, users, and outputs — classify it under the EU AI Act risk tiers (prohibited, high-risk, limited-risk, minimal-risk), identify Annex III applicability, map provider vs. deployer obligations, evaluate GPAI/foundation-model overlays, and produce a structured classification memo with an obligations checklist by role.
Prompt
You are a senior technology lawyer conducting an EU AI Act risk classification for a deployed AI system. Based on the system description I provide, work through the following analysis systematically and produce a structured classification memo.
**PART 1 — Prohibited AI Practices (Art. 5)**
Evaluate whether the system falls within any of the prohibited practices under Art. 5. For each category below, state whether it applies, does not apply, or requires further factual investigation:
1. **Subliminal or manipulative techniques** (Art. 5(1)(a)-(b)): Does the system deploy subliminal techniques beyond a person's consciousness to distort behavior, or exploit vulnerabilities of specific groups?
2. **Social scoring** (Art. 5(1)(c)): Is the system used to evaluate or classify natural persons or groups based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment that is unjustified or disproportionate? (Unlike early drafts, the final Act is not limited to public authorities.)
3. **Real-time remote biometric identification in public spaces** (Art. 5(1)(h)): Does the system use real-time remote biometric identification of individuals in publicly accessible spaces for law enforcement purposes (permitted only under the narrow exceptions in Art. 5(1)(h)(i)-(iii))?
4. **Biometric categorisation inferring sensitive attributes** (Art. 5(1)(g)): Does the system categorise individuals based on biometric data to infer race, political opinion, trade union membership, religious beliefs, sexual orientation, or health status?
5. **Emotion recognition in workplace/education** (Art. 5(1)(f)): Is the system used to infer the emotions of natural persons in the workplace or educational institutions?
6. **Conclusion**: Is the system prohibited under Art. 5? If yes, identify the specific prohibition and its consequences. If no, proceed to Part 2.
**PART 2 — High-Risk Classification (Art. 6 + Annex III)**
Evaluate both limbs of the high-risk classification.
**Limb A — Safety component / product covered by Union harmonisation legislation (Art. 6(1)):**
7. Is the AI system a safety component of a product covered by the Union harmonisation legislation listed in Annex I (e.g., Machinery Regulation, Medical Devices Regulation, Radio Equipment Directive, Automotive safety systems)?
8. Is the product required to undergo a third-party conformity assessment under that legislation?
9. **Conclusion on Art. 6(1)**: High-risk / not high-risk under Limb A.
**Limb B — Annex III stand-alone high-risk systems (Art. 6(2)):**
For each Annex III category, evaluate whether the system falls within its scope:
10. **Annex III(1) — Biometrics**: Is the system a remote biometric identification system, a biometric categorisation system, or an emotion recognition system?
11. **Annex III(2) — Critical infrastructure**: Is the system used as a safety component in critical infrastructure (energy, water, transport, digital infrastructure)?
12. **Annex III(3) — Education and vocational training**: Does the system determine access to, or assign persons to, educational institutions or vocational training, or evaluate learning outcomes with material effect on educational trajectory?
13. **Annex III(4) — Employment, workers management, access to self-employment**: Is the system used for recruitment or selection (including targeted job advertisements and filtering applications), decisions affecting promotion or termination, task allocation based on behaviour or personal traits, or monitoring and evaluating worker performance?
14. **Annex III(5) — Essential private/public services**: Is the system used to evaluate eligibility for essential services (social security, health services) or to assess creditworthiness or credit scores?
15. **Annex III(6) — Law enforcement**: Is the system used for individual risk assessment in law enforcement, lie detection, deep-fake detection in investigations, crime analytics, criminal history assessment, or victim profiling?
16. **Annex III(7) — Migration, asylum, border control**: Does the system assist with migration risk assessment, examination of asylum applications, or border surveillance?
17. **Annex III(8) — Administration of justice and democratic processes**: Is the system used to assist judicial authorities in researching facts or law, or to influence elections?
18. **Art. 6(3) exception**: Even if the system falls within Annex III, does it qualify for the Art. 6(3) exception (no significant risk due to narrow purpose, human oversight sufficient to reverse output, or purely preparatory task)?
19. **Conclusion on Art. 6(2)/(3)**: High-risk / not high-risk under Limb B, with Annex III category reference.
**PART 3 — Limited-Risk Systems (Art. 50)**
20. **Transparency obligations for AI interacting with natural persons (Art. 50(1))**: Does the system interact with natural persons through text, voice, or other formats? If yes, must the provider design the system so that persons are informed they are interacting with an AI (except where obvious from the context)?
21. **Transparency for emotion recognition / biometric categorisation (Art. 50(3))**: Does the system detect emotions or infer sensitive attributes from biometric data? If yes, inform persons exposed.
22. **Synthetic content and deep-fake disclosure (Art. 50(2), 50(4))**: Does the system generate synthetic audio, image, video, or text content that appears authentic? If yes, is the provider's machine-readable marking (Art. 50(2)) in place, and does the deployer have a deep-fake disclosure duty (Art. 50(4))?
23. **Conclusion on Art. 50**: Transparency obligations triggered / not triggered.
**PART 4 — GPAI Model Overlay (Arts. 51-55)**
24. Does the AI system incorporate or deploy a General Purpose AI (GPAI) model (a model trained on broad data, general in purpose, and integrable into diverse downstream systems)?
25. If yes, is the GPAI model classified as a model with **systemic risk** under Art. 51 (cumulative training compute above 10^25 FLOPs, or designation by the European Commission)?
26. For GPAI models with systemic risk: Identify the additional obligations under Art. 55 (model evaluation and adversarial testing, assessment and mitigation of systemic risk, serious incident reporting to the AI Office, adequate cybersecurity protection).
27. For all GPAI models: Identify the Art. 53 obligations (technical documentation, information and documentation for downstream providers, a policy to comply with EU copyright law, a publicly available summary of training content).
28. **Conclusion on GPAI overlay**: Does the system trigger GPAI obligations, and at what tier?
**PART 5 — Provider vs. Deployer Obligations**
For each obligation below, identify whether it falls on the **provider** (entity that develops and places the system on the market or puts it into service — Art. 3(3)), the **deployer** (entity that uses the system under its own authority — Art. 3(4)), or both. Where both are responsible, specify the allocation:
**If high-risk:**
29. Risk management system (Art. 9)
30. Technical documentation (Art. 11 + Annex IV)
31. Record-keeping / logging (Art. 12)
32. Transparency and provision of information to deployers (Art. 13)
33. Human oversight measures built into the system (Art. 14)
34. Accuracy, robustness, cybersecurity (Art. 15)
35. Conformity assessment (Art. 43): Self-assessment (Annex VI) or third-party notified body (Annex VII)?
36. EU Declaration of Conformity (Art. 47)
37. CE marking (Art. 48)
38. Registration in the EU database (Art. 49 + Art. 71): Is the system in a category requiring registration before deployment?
39. Post-market monitoring (Art. 72) and serious incident reporting (Art. 73): provider monitoring plan; deployer reporting of serious incidents and malfunctions to the provider and the market surveillance authority
40. Fundamental rights impact assessment for certain deployers (Art. 27): Is the deployer a public authority or private entity deploying in high-risk areas listed in Art. 27(1)? If yes, conduct FRIA before deployment.
**If limited-risk (transparency obligations only):**
41. Transparency notice obligations by whom and at which point in the interaction (Art. 50)?
**PART 6 — Entry Into Force and Transitional Phasing**
42. Identify which provisions apply to this system based on the phased entry into force:
- **2 February 2025**: Prohibited practices (Art. 5) apply.
- **2 August 2025**: GPAI model obligations (Arts. 51-55) apply; governance provisions (Arts. 64-68 — AI Office, national authorities) apply.
- **2 August 2026**: High-risk systems under Annex III (Art. 6(2)) must comply with Arts. 9-15 and conformity assessment. Limited-risk transparency obligations (Art. 50) apply.
- **2 August 2027**: High-risk systems under Art. 6(1) (Annex I safety components) must comply.
- **General obligations and definitions**: Most remaining provisions apply from 2 August 2026.
43. Has the system already been placed on the market before the relevant transition date? If yes, note the transitional regime under Art. 111.
**OUTPUT**
Produce the following structured memo:
**A. Classification Summary**
A one-page top-level memo stating: (1) overall risk tier, (2) Annex III category (if applicable), (3) GPAI overlay (if applicable), (4) transparency obligations triggered.
**B. Reasoning Matrix**
A table with columns: Test | Provision | Result (Applies / Does Not Apply / Requires Investigation) | Key Reasoning
**C. Obligations Checklist by Role**
A table with columns: Obligation | Provision | Role (Provider / Deployer / Both) | Due Date / Trigger | Priority (Before Deployment / Ongoing / Before [date])
**D. Open Questions**
List factual gaps in the system description that could change the classification if resolved differently.
Here is the AI system description:
System name and version: [NAME AND VERSION]
System purpose: [Describe what the system does — e.g., "Resume screening tool that scores job applicants on predicted job performance based on CV text and historical hiring data"]
Deploying organization type: [e.g., "Private employer, 5,000 employees, EU-established"]
Users / operators: [Who operates the system — e.g., "HR department staff; AI output is reviewed by a recruiter before any hiring decision"]
End subjects: [Persons whose data is processed or who are affected — e.g., "Job applicants, including EU residents"]
Training data summary: [e.g., "Trained on historical hiring decisions from 2010-2022; includes structured CV data and interviewer scores"]
Outputs and their effect: [e.g., "Numeric score 0-100 plus ranked shortlist; recruiters report following the AI ranking in 85% of cases"]
Third-party AI components: [e.g., "Built on [VENDOR] foundation model; uses [VENDOR] NLP API for CV parsing"]
Current deployment context: [e.g., "In production since [DATE]; deployed in [COUNTRIES]"]
Known regulatory interactions: [e.g., "Subject to GDPR; employer currently conducts DPIAs for HR tools; no prior AI Act compliance review"]
Tips
Populate the 'outputs and their effect' field with as much specificity as you can about human reliance on the AI output. The Annex III(4) employment category and the Art. 6(3) exception both turn on whether the AI output has a material effect on decisions about persons — a system that produces a score a human always overrides is treated differently from one whose output is routinely followed. Quantify reliance where possible.
Run the GPAI overlay analysis separately if the system uses an API-accessed foundation model such as GPT-4, Claude, Gemini, Llama, or Mistral. The provider of that underlying model has Art. 53 or Art. 55 obligations; your organization, as a downstream party integrating the model into a higher-risk application, may itself assume provider obligations under Art. 25 (for example, by substantially modifying the system or changing its intended purpose) and should verify that its use stays within the terms of the model provider's GPAI compliance documentation.
Ask the AI to generate a gap analysis against your existing documentation: 'Based on the high-risk obligations checklist above, review the following technical documentation [paste]. Identify which Art. 11 + Annex IV requirements are met, partially met, or missing.' This accelerates conformity assessment preparation significantly.
The Art. 6(3) exception for systems that would otherwise fall under Annex III is narrow and must be documented in the technical file. Ask for a separate analysis: 'Draft a reasoned Art. 6(3) assessment for [specific Annex III category] that documents: (a) the specific limited purpose, (b) the degree of human oversight, and (c) why the risk to fundamental rights is not significant. Flag where the factual record is insufficient to support the exception.'
For public-authority deployers, the fundamental rights impact assessment (FRIA) under Art. 27 is a distinct requirement from a GDPR DPIA — though the two can be conducted as an integrated exercise. Ask the AI to produce a combined DPIA/FRIA template: 'Draft a combined GDPR DPIA and AI Act Art. 27 FRIA template for [system description]. Identify the overlapping questions and the AI Act-specific additions.'
Cautions
The EU AI Act has a phased entry into force. Prohibited practices (Art. 5) applied from 2 February 2025. GPAI obligations (Arts. 51-55) applied from 2 August 2025. High-risk obligations under Annex III do not apply until 2 August 2026. Systems already on the market before the relevant date benefit from transitional provisions under Art. 111. Do not treat this as a single-date compliance deadline — build a phased implementation roadmap.
AI classification analysis depends entirely on accurate factual inputs. A system description that understates human reliance on AI outputs, mischaracterizes the deployment context, or omits the Annex I product context may produce a classification that is legally incorrect. Validate the system description against technical documentation, product specifications, and operator interviews before relying on the AI output for any compliance filing.
Do not paste system technical documentation, training data descriptions, or internal product specifications into a consumer AI tool without confirming that your organization's data policy permits it and that no confidentiality obligations restrict disclosure. Use an enterprise or data-protected AI instance. ABA Formal Opinion 512 requires reasonable measures to prevent unauthorized disclosure of client information when using AI tools.
The AI Act is a Regulation, not a Directive — it is directly applicable across all EU Member States. However, national competent authorities differ by Member State and sector, and enforcement guidance from the AI Office and EDPB on interactions with GDPR is still developing. Do not treat an AI-generated classification as a substitute for legal advice from counsel familiar with the AI Office's current interpretive positions and relevant national authority guidance.
AI output on Annex III applicability and the Art. 6(3) exception will require verification by a supervising attorney and, for high-risk systems, review by a qualified conformity assessment body. This prompt produces a first-pass classification framework — it does not constitute a conformity assessment, a legal opinion, or a regulatory filing.
What This Quick Win Does
The EU AI Act creates a tiered risk architecture where the compliance burden — from transparency-notice-only to full conformity assessment with CE marking and EU database registration — depends entirely on which risk category a deployed system falls into. Getting the classification wrong in either direction has real consequences: under-classification exposes the organization to enforcement risk; over-classification wastes resources on conformity obligations that do not apply.
This Quick Win produces a structured first-pass classification memo in about 30 minutes. It works through all three steps of the classification analysis — prohibited practices under Art. 5, high-risk classification under Art. 6 with all eight Annex III categories, and the GPAI overlay under Arts. 51-55 — and then maps every applicable obligation to the specific role (provider, deployer, or both) that must fulfil it. The output is a classification memo with reasoning, an obligations checklist by role, and a prioritized list of open questions the legal team must resolve to finalize the classification.
How to Use It
Step 1: Assemble the System Description
The classification turns on specific facts about the system’s purpose, deployment context, and the degree to which human oversight is genuinely exercised before any decision affecting a person is made. Before running the prompt, gather:
Technical documentation or product specifications for the system
Contractual documents identifying which entity is the provider (developed the system) and which is the deployer (using it in production)
Any documentation of human-in-the-loop processes: how often do operators follow, override, or modify the AI’s output?
The training data description — biometric, sensitive-category, or historical-decision data is highly material to Annex III classification
Information on any foundation model or third-party AI API the system integrates (GPAI overlay analysis)
Be specific about the “outputs and their effect” field. Annex III(3) (education), Annex III(4) (employment), and the Art. 6(3) exception all turn on whether the AI output has a “significant effect” on a decision affecting a natural person. A system that ranks job applicants but is routinely overridden is analytically different from one whose output is followed in 85% of cases.
Step 2: Open an Appropriate AI Tool
Use ChatGPT or Claude. These tools are strong at applying the AI Act’s enumerated criteria to a specific system description, working through each Annex III category systematically, and producing structured tables — which is exactly the analytical pattern required for classification.
Use an enterprise or data-protected instance before pasting internal technical documentation. The system description will typically contain confidential product information.
Step 3: Run the Classification and Review Part by Part
The prompt is structured as a sequential legal analysis across six parts. Review the output part by part:
Part 1 (Prohibited): If any prohibited-practice flag is raised, stop and escalate. Prohibited AI under Art. 5 cannot be deployed regardless of conformity assessment.
Parts 2-3 (High-risk / limited-risk): The Annex III analysis is the most fact-sensitive section. Cross-check each “does not apply” conclusion against the actual system description to confirm the AI has applied the right scope of each category.
Part 4 (GPAI): If the system integrates a foundation model via API, the GPAI overlay analysis must identify what obligations flow to the upstream model provider (technical documentation, copyright compliance) versus what due diligence the downstream deployer must undertake (Art. 25).
Parts 5-6 (Obligations / Phasing): The obligations checklist should drive the compliance roadmap. The phased entry into force means some obligations apply now; others do not apply until 2026 or 2027.
Step 4: Identify and Resolve Open Questions
The AI will flag factual gaps in Part D (Open Questions). These are the points where the classification depends on facts not provided in the system description — for example, whether the deployer qualifies as a public authority under Art. 27, or whether a specific Annex I product-safety regulation applies. Assign each open question to a member of the team to investigate before finalizing the classification.
Step 5: Build the Compliance Roadmap
Convert the obligations checklist into a phased roadmap. The 2 August 2026 deadline for Annex III high-risk compliance is the central milestone for most deployed systems. Work backwards from that date to identify when quality management systems, technical documentation, conformity assessments, and EU database registrations must be initiated — many of these require months of preparation.
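The backward-planning step can be sketched programmatically. The lead times below are illustrative planning assumptions, not figures from the AI Act — substitute your organization's own estimates.

```python
from datetime import date, timedelta

# Hedged sketch: lead times are illustrative planning assumptions,
# not statutory periods. Adjust to your organization's experience.
DEADLINE = date(2026, 8, 2)  # Annex III high-risk compliance date

lead_times = {                                        # assumed prep time, days
    "Risk management system (Art. 9)": 270,
    "Technical documentation (Art. 11 + Annex IV)": 180,
    "Conformity assessment (Art. 43)": 120,
    "EU database registration (Art. 49)": 30,
}

# Work backwards from the deadline to a kickoff date per workstream
for task, days in sorted(lead_times.items(), key=lambda kv: -kv[1]):
    start_by = DEADLINE - timedelta(days=days)
    print(f"{start_by.isoformat()}  start: {task}")
```

The same pattern extends to the 2 August 2027 deadline for Art. 6(1) Annex I systems.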
Why This Works
The AI Act’s classification structure is a decision tree: a defined set of criteria applied to the facts of a specific system. The prohibited practices list (Art. 5), Annex III categories, and Art. 6(3) exception each contain enumerable tests — the same analytical pattern that AI handles well when the legal framework is specific and the facts are provided. The GPAI tier analysis under Art. 51 (the 10^25 FLOPs threshold and Commission designation mechanism) is similarly rule-based.
The obligations mapping (Part 5) adds a second layer: taking each applicable provision and allocating it to the correct role. Provider/deployer allocation is one of the most practically confusing aspects of the AI Act for organizations that both develop and deploy AI systems internally, and the prompt forces a systematic role-by-role analysis.
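The decision-tree character of the classification can be made concrete with a minimal sketch. Each boolean field below stands in for an entire legal analysis from the prompt (Parts 1-3); this is a reading aid for the structure, not a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class SystemFacts:
    prohibited_practice: bool       # Part 1 (Art. 5)
    annex_i_safety_component: bool  # Limb A (Art. 6(1))
    annex_iii_category: bool        # Limb B (Art. 6(2))
    art_6_3_exception: bool         # narrow Annex III carve-out (Art. 6(3))
    transparency_trigger: bool      # chatbot / deep fake etc. (Art. 50)

def classify(facts: SystemFacts) -> str:
    if facts.prohibited_practice:
        return "prohibited"          # stop and escalate
    if facts.annex_i_safety_component:
        return "high-risk"           # Art. 6(3) does not apply to Limb A
    if facts.annex_iii_category and not facts.art_6_3_exception:
        return "high-risk"
    if facts.transparency_trigger:
        return "limited-risk"
    return "minimal-risk"

print(classify(SystemFacts(False, False, True, False, True)))  # high-risk
```

Note that a limited-risk outcome still matters even for high-risk systems: Art. 50 transparency duties stack on top of the high-risk obligations rather than being an alternative tier.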
What This Does Not Replace
A conformity assessment under Art. 43, which for certain Annex III categories requires a notified body and cannot be replaced by an AI-generated classification memo
Technical documentation under Art. 11 and Annex IV — the AI can identify what the documentation must contain, but the actual documentation must be produced by the system’s technical team
A fundamental rights impact assessment (FRIA) under Art. 27 — the AI can produce a template and identify the required elements, but the deployer must actually conduct the assessment and notify the market surveillance authority of its results (Art. 27(3))
Legal advice from counsel current on AI Office guidance, national competent authority interpretations, and sector-specific delegated acts that will supplement the AI Act’s framework
Supervising attorney review of the classification memo before it is relied upon for any regulatory filing, procurement representation, or contractual warranty about AI Act compliance status
AdvancedTech/Privacy 25 min
Build an Algorithmic Accountability Obligations Map
Given a use case for an automated decision system — hiring, credit, insurance, criminal justice, or healthcare — produce a cross-jurisdictional obligations map covering GDPR Art. 22, CCPA/CPRA ADMT, NYC Local Law 144, EU AI Act, Colorado SB24-205, Brazil LGPD Art. 20, and applicable sector-specific rules, with a jurisdiction × obligation matrix, transparency-notice requirements, bias-audit cadence, and record-keeping duties.
Prompt
You are a senior technology and privacy attorney conducting a cross-jurisdictional algorithmic accountability analysis for an automated decision system (ADS). Based on the use case I describe, produce a comprehensive obligations map covering all applicable regulatory frameworks.
**PART 1 — Use Case Classification and Jurisdictional Triggers**
1. Identify the decision domain from the use case description:
- **Employment / hiring**: Recruitment screening, scoring, shortlisting, performance evaluation, promotion, termination
- **Credit / financial services**: Credit scoring, loan underwriting, pricing, fraud detection, account management
- **Insurance**: Underwriting, claims assessment, pricing, fraud detection
- **Criminal justice / law enforcement**: Risk assessment, recidivism prediction, predictive policing, sentencing support
- **Healthcare**: Diagnostic support, treatment recommendations, clinical triage, insurance coverage determinations
- **Other**: Identify domain and analogous regulatory treatment
2. Identify all applicable regulatory frameworks based on: (a) where the organization is established, (b) where the data subjects are located, and (c) the specific decision domain.
**PART 2 — Framework-by-Framework Obligations Analysis**
For each applicable framework, provide a structured analysis:
**A. GDPR Article 22 — Automated Individual Decision-Making**
3. Does the ADS make decisions "based solely on automated processing" that produce "legal or similarly significant effects" on data subjects (Art. 22(1))? Analyze: Is there meaningful human involvement? Is the effect legal (termination, denial of a right) or similarly significant (significant financial impact, denial of services, social exclusion)?
4. If Art. 22(1) applies, which exception under Art. 22(2) is invoked?
- Art. 22(2)(a): Necessary for a contract with the data subject?
- Art. 22(2)(b): Authorized by EU or Member State law with suitable safeguards?
- Art. 22(2)(c): Explicit consent?
5. If an exception applies, what safeguards must be implemented under Art. 22(3)?
- Right to obtain human intervention
- Right to express point of view
- Right to contest the decision
6. **Transparency obligation (Art. 13/14 + Recital 63)**: What "meaningful information about the logic involved" and "significance and envisaged consequences" must be disclosed in the privacy notice?
7. **DPIA trigger (Art. 35(3)(a))**: Is a DPIA mandatory for this systematic automated processing with significant effects?
8. **Special category data (Art. 9(2) + Art. 22(4))**: If the ADS processes or effectively infers special category data, what additional conditions apply?
**B. CCPA / CPRA — Automated Decision Technology (ADMT)**
9. Does the California Consumer Privacy Act (CPRA amendments, Cal. Civ. Code § 1798.100 et seq.) apply? (California residents + annual gross revenue >$25M, or 100,000+ consumers/households, or 50%+ revenue from selling data)
10. Does the ADS constitute "Automated Decision Technology" (ADMT) under the CPPA's pending ADMT regulations? ADMT is technology that, with minimal human review, processes personal information to make or execute a decision with a legal or similarly significant effect on a consumer.
11. **Opt-out right**: Must the business offer consumers the right to opt out of ADMT for decisions with legal or significant effects?
12. **Access right**: Must the business provide consumers access to information about the ADS's logic, training data sources, output types, and decision criteria?
13. **Annual risk assessment obligation**: Does use of ADMT for decisions with significant effects require an annual risk assessment submitted to the CPPA?
14. Note the current regulatory status: The CPPA has adopted ADMT regulations — identify which provisions are in effect and which remain subject to finalization as of your knowledge cutoff.
**C. NYC Local Law 144 — Automated Employment Decision Tools (AEDT)**
15. Does the use case involve employment decisions in New York City? If yes, does the ADS qualify as an "Automated Employment Decision Tool" (AEDT) — a system that uses machine learning, statistical modeling, data analytics, or AI that is used to substantially assist or replace discretionary decision-making for hiring or promotion?
16. **Bias audit requirement**: Must the employer obtain an independent bias audit of the AEDT before use and annually thereafter? Identify:
- Who may conduct the audit (independent auditor definition under Local Law 144)
- What the audit must calculate: selection rate and impact ratio for sex, race/ethnicity categories; analysis for intersectional categories
- The "4/5ths rule" (80% threshold) for identifying disparate impact
17. **Posting and notice requirements**: What must be posted on the employer's website (audit summary, date, score categories used)? What notice must be provided to job candidates or employees?
18. **Scope limitations**: Note that Local Law 144 applies only to employers with candidates or employees who are employed in NYC, that it covers hiring and promotion (not performance management or termination in its current form), and that the 2023 enforcement rules limit "substantially assist or replace" to specific reliance scenarios.
**D. EU AI Act — High-Risk Employment / Credit / Healthcare / Law Enforcement Systems**
19. If the ADS falls within the use case domains covered by Annex III of the EU AI Act (employment: Annex III(4), credit/essential services: Annex III(5), law enforcement: Annex III(6), healthcare as safety component or covered by MDR: Art. 6(1)), identify:
- Applicable Annex III category
- Whether an Art. 6(3) exception is available
- Provider vs. deployer obligation split (for cross-reference with File 19)
20. **Transparency to affected persons (Art. 50 + Art. 26(10))**: For high-risk ADS that make decisions significantly affecting persons, deployers must notify those persons that the system is in use and provide relevant information about the system's purpose, logic, and rights available to contest the decision.
21. **Fundamental Rights Impact Assessment (Art. 27)**: If the deployer is a public authority or deploys in specific high-risk areas under Art. 27(1), is an FRIA required before deployment?
**E. Colorado Artificial Intelligence Act (SB24-205 — effective February 1, 2026)**
22. Does the ADS qualify as a "High-Risk AI System" under SB24-205? Colorado defines this as an AI system that, when deployed, makes or is a substantial factor in making consequential decisions — including decisions about education, employment, financial services, essential government services, healthcare, housing, insurance, and legal services.
23. **Developer obligations**: Has the developer used reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination? What technical documentation and impact assessments are required?
24. **Deployer obligations**: Must the deployer (a) conduct an annual impact assessment? (b) implement risk management policies? (c) provide a notice to consumers that explains the purpose of the ADS, types of data used, and right to appeal?
25. **Right to appeal (§ 6-1-1703(3))**: What right to correct information or appeal consequential decisions must be made available to consumers?
26. **Enforcement**: Colorado Attorney General enforcement; no private right of action in SB24-205.
**F. Brazil LGPD Article 20 — Automated Decision Review Rights**
27. Does LGPD apply (see LGPD Art. 3 territorial scope)? If yes, does the ADS make automated decisions that affect the interests of data subjects, including decisions related to professional, consumer, credit, or personal aspects?
28. **Art. 20(1) review right**: Data subjects may request review of decisions made solely on automated processing. What workflow must the organization implement to fulfill this right within a reasonable time?
29. **Art. 20(2) criteria disclosure**: Upon request, the organization must provide information about the criteria and procedures used for the automated decision. What level of explainability is required?
30. **Sensitive data (Art. 11)**: If the ADS processes sensitive personal data (health, biometric, racial/ethnic, religious, or political data), which Art. 11 condition authorizes the processing, and what additional safeguards apply?
**G. Sector-Specific Rules**
Analyze any applicable sector-specific rules based on the use case domain:
31. **Credit / Financial Services (US)**: Does the Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691 et seq.) and Regulation B apply? Specifically:
- Adverse action notice requirement (Regulation B, 12 C.F.R. § 1002.9): specific reasons for credit denial, including AI-generated scores
- CFPB guidance on ECOA and algorithmic model explainability (CFPB Circular 2022-03: ADS must provide specific reasons, not proxy reasons, for adverse action)
- Fair Credit Reporting Act (FCRA) if consumer reports or third-party data sources are used
32. **Employment (US)**: Does Title VII of the Civil Rights Act (42 U.S.C. § 2000e) apply? Specifically:
- Disparate impact theory (Griggs v. Duke Power Co., 401 U.S. 424 (1971)): employer tools with disparate impact on protected classes must be job-related and consistent with business necessity
- EEOC guidance on AI hiring tools (EEOC Technical Assistance, May 2023)
- Illinois Artificial Intelligence Video Interview Act (AIVIA): notice, explanation, and consent requirements plus annual demographic reporting for video interview AI in Illinois
33. **Insurance (US)**: State insurance commissioner guidance on algorithmic underwriting fairness (identify applicable state)
34. **Healthcare (US)**: Does the ADS implicate FDA Software as a Medical Device (SaMD) guidance or Office for Civil Rights guidance on AI discrimination in healthcare (HHS OCR AI guidance, 2024)?
**OUTPUT**
Produce the following:
**A. Jurisdiction × Obligation Matrix**
A table with columns: Framework | Applies (Yes / No / Conditional) | Threshold / Trigger | Core Obligations | Compliance Deadline
**B. Transparency Notice Requirements**
For each applicable framework, specify exactly what must be disclosed: (1) to whom, (2) when (before collection, before decision, upon request), (3) in what format, (4) with what minimum content.
**C. Bias Audit and Impact Assessment Cadence**
A table showing: Framework | Audit / Assessment Type | Frequency | Who Conducts | What Must Be Documented | Where Must It Be Filed / Published
**D. Record-Keeping Duties**
List all record-keeping obligations: what records must be kept, for how long, and by whom.
**E. Governance Gaps and Recommended Next Steps**
Identify the three highest-priority compliance gaps given the use case, with recommended next steps.
Here is the automated decision system description:
Organization name and type: [e.g., "Regional bank, US-headquartered with operations in Germany and Brazil"]
Decision domain: [e.g., "Consumer loan underwriting — determining whether to approve a personal loan and at what interest rate"]
ADS description: [Describe what the system does — e.g., "ML model trained on 8 years of loan performance data; inputs: applicant credit score, income, employment history, bank transaction patterns; output: approve/deny recommendation + suggested rate; a loan officer reviews the output before final decision but approves the AI recommendation in 91% of cases"]
Data subjects: [e.g., "US consumers, German residents (EU), Brazilian residents"]
Affected geographies: [e.g., "Decisions made for applicants in California, New York, Colorado, Germany, Brazil"]
Training data: [e.g., "Historical loan data 2015-2023; includes demographic proxies; no explicit race/ethnicity data but zip code and income included"]
Third-party data sources: [e.g., "Experian credit bureau data, Plaid bank transaction API, internal proprietary scoring model"]
Human oversight level: [e.g., "Loan officers follow AI recommendation 91% of the time; formal override mechanism exists but rarely used"]
Existing compliance documentation: [e.g., "ECOA adverse action notices in place; no GDPR Art. 22 analysis completed; no bias audit conducted"]
Tips
The 'human oversight level' field is the most legally consequential input in this prompt. Whether a system is making decisions 'based solely on automated processing' (GDPR Art. 22(1)), 'with minimal human review' (CPPA ADMT definition), or 'substantially assists or replaces discretionary decision-making' (NYC LL144) all depend on how real the human review is. Quantify the override rate, document the time pressure on reviewers, and describe whether the human has access to the underlying data or only the AI output. These facts determine which frameworks apply and whether exceptions are available.
Run a follow-up prompt to generate all required transparency notices in one pass: 'Based on the obligations map above, draft a model transparency notice that satisfies all applicable disclosure requirements simultaneously: GDPR Art. 13/14 meaningful information requirement, CCPA ADMT access right language, NYC LL144 candidate notice, Colorado SB24-205 consumer notice, and LGPD Art. 20(2) criteria disclosure. Highlight where the requirements conflict or require separate notices.'
For the ECOA adverse action analysis, specify the AI output format. The CFPB's 2022 Circular clarified that 'complex models' cannot use generic adverse action reason codes (CFPB Circular 2022-03). Ask the AI to analyze whether the specific output format (numeric score, ranked reasons, probability distribution) satisfies the ECOA specific-reasons requirement and FCRA disclosure rules.
Ask the AI to draft a bias audit protocol based on the NYC LL144 requirements, then check whether the same protocol can simultaneously satisfy Colorado SB24-205's impact assessment requirements and the EU AI Act's Art. 9 risk management / Art. 72 post-market monitoring requirements. A single audit protocol can be designed to satisfy multiple frameworks with careful scoping.
For employment ADS specifically, request a jurisdictional pre-deployment checklist: 'List every US state with active ADS / AI hiring legislation or pending bills as of [current date], the specific requirements for this ADS type, and the compliance status gap.' Illinois AIVIA, Maryland's pending ADS legislation, and several other states have requirements that stack on top of EEOC guidance and NYC LL144.
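The selection-rate and impact-ratio arithmetic behind the 4/5ths (80%) rule referenced in the LL144 bias-audit tips above can be sketched as follows. The counts are hypothetical; a real audit must follow the DCWP enforcement rules, cover the required sex and race/ethnicity (including intersectional) categories, and be performed by an independent auditor.

```python
# Hedged sketch of the 4/5ths-rule arithmetic. Counts are hypothetical.
selected = {"group_a": 120, "group_b": 45}   # candidates advanced by the AEDT
total    = {"group_a": 300, "group_b": 150}  # candidates assessed

rates = {g: selected[g] / total[g] for g in total}       # selection rates
best = max(rates.values())                               # highest group rate
impact_ratios = {g: r / best for g, r in rates.items()}  # ratio vs. best

for g, ratio in impact_ratios.items():
    flag = "below 4/5ths threshold" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here group_b's selection rate (0.30) is 75% of group_a's (0.40), falling below the 80% threshold — the classic flag for further disparate-impact analysis, not by itself a legal conclusion.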
Cautions
The CPPA's ADMT regulations are a moving target. Draft regulations were published, opposed, revised, and re-circulated through 2024-2025. Confirm the current enforcement status of specific ADMT provisions before relying on AI output about CCPA/CPRA ADMT obligations — the AI's training data may not reflect the most recent regulatory status. Check the CPPA's website (cppa.ca.gov) directly.
NYC Local Law 144 applies only to employment decisions affecting candidates or employees who are located in New York City. It does not apply to all NY State employers, does not extend to performance management or termination in its current form, and its definition of 'substantially assist or replace discretionary decision-making' was narrowed by the 2023 enforcement rules. Do not over-apply its scope to non-NYC or non-hiring contexts.
The EU AI Act's high-risk obligations under Annex III do not apply until 2 August 2026 for new systems and may apply later for systems already in deployment under Art. 111 transitional rules. The Colorado AI Act (SB24-205) enters into force on 1 February 2026. Build phased timelines into the compliance roadmap rather than treating both as immediate obligations.
Cross-jurisdictional analysis is subject to compounding uncertainty: the AI is applying multiple frameworks simultaneously, each of which is itself subject to ongoing regulatory development. The output is a structured first-pass map — it is not a legal opinion. Verify every framework-specific conclusion against the primary source (statute, regulation, agency guidance) before relying on it for compliance decisions.
Do not input internal model documentation, training data descriptions, audit results, or privileged legal analysis into a consumer AI tool. Use an enterprise-grade, data-protected AI instance. This analysis will likely be sought in discovery or regulatory investigations. ABA Formal Opinion 512 requires lawyers to take reasonable precautions to prevent unauthorized access to client information when using AI tools.
What This Quick Win Does
Automated decision systems in high-stakes domains — hiring, credit, insurance, healthcare, criminal justice — now face obligations from multiple overlapping regulatory frameworks simultaneously. A credit-scoring ADS used by a bank operating in Germany, California, and Brazil must satisfy GDPR Art. 22’s automated decision-making rules, CCPA/CPRA ADMT regulations, LGPD Art. 20 review rights, ECOA’s adverse action notice requirements, and the EU AI Act’s high-risk obligations under Annex III(5) — all at once, each with different transparency requirements, bias audit expectations, record-keeping obligations, and compliance timelines.
This Quick Win produces a jurisdiction-by-jurisdiction obligations map in about 25 minutes. The output is structured around four practical artifacts: a framework-by-framework applicability matrix, precise transparency notice requirements per jurisdiction, a bias-audit and impact-assessment cadence table, and a consolidated record-keeping duties summary. The goal is to replace ad hoc framework-by-framework research with a single structured map that can anchor a cross-border compliance project.
How to Use It
Step 1: Define the Decision Domain Precisely
The applicable regulatory frameworks depend heavily on the specific decision domain and on the degree of human involvement. Before running the prompt, answer these questions:
What decision is being made? Be specific: “resume screening” is different from “offer/no-offer decision”; “credit score calculation” is different from “loan denial.”
Who is affected? Consumers, employees, applicants, patients, criminal defendants? Location matters for every framework.
What is the AI’s actual role? Does it generate a ranked list that a human reviews in 5 seconds, or does it produce a decision that is implemented automatically unless affirmatively reversed? The “human oversight level” input is the most legally consequential field in this prompt.
What data sources feed the model? Biometric data, third-party credit bureau data, consumer reports, behavioral data from cookies or apps — each triggers additional framework-specific rules.
Step 2: Run the Prompt and Review the Applicability Matrix First
The Jurisdiction × Obligation Matrix (Output A) establishes which frameworks apply and which do not. Review it first, before the detailed obligations analysis. Common misapplication risks to check:
GDPR Art. 22 applies only when the decision is based "solely" on automated processing and produces legal or similarly significant effects — confirm that the human review described in the system inputs is genuinely meaningful, not nominal.
NYC Local Law 144 applies only to hiring and promotion decisions affecting NYC-based candidates or employees — it does not extend to performance management or termination.
CCPA/CPRA ADMT: Confirm current regulatory status before relying on AI output about which specific ADMT provisions are enforceable. The CPPA regulations have been subject to ongoing revision.
Colorado SB24-205 enters into force 1 February 2026 — include it in the planning horizon but flag as forward-looking for systems not yet deployed.
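The timing checks above reduce to a simple date comparison. The sketch below is illustrative only: the framework names and the `applicability_flags` helper are hypothetical, and the effective dates are the ones stated in this Quick Win — verify them against the primary sources before use, since both frameworks are moving targets.

```python
from datetime import date

# Effective dates as stated in this Quick Win -- confirm against the
# primary sources (official journal / statute text) before relying on them.
EFFECTIVE_DATES = {
    "EU AI Act Annex III (new systems)": date(2026, 8, 2),
    "Colorado SB24-205": date(2026, 2, 1),
}

def applicability_flags(review_date: date) -> dict:
    """Label each framework as in force or forward-looking as of review_date."""
    return {
        framework: "in force" if review_date >= effective else "forward-looking"
        for framework, effective in EFFECTIVE_DATES.items()
    }

# As of mid-2025, both frameworks are still forward-looking for planning purposes.
flags = applicability_flags(date(2025, 6, 1))
```

A review run in March 2026 would flag Colorado SB24-205 as in force while still marking the EU AI Act's Annex III obligations as forward-looking — which is exactly the phased-timeline distinction the Cautions above call for.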
Step 3: Use the Transparency Notice Requirements to Draft Notices
The transparency notice requirements (Output B) drive the most immediate compliance action for most organizations. Use this output to:
Identify gaps in the organization’s current privacy notices against the GDPR Art. 13/14 “meaningful information about the logic involved” requirement
Draft or update adverse action notices for ECOA compliance using the CFPB Circular 2022-03 standards
Produce NYC LL144-compliant candidate notices disclosing that an automated employment decision tool (AEDT) is in use
Identify where frameworks require separate notices versus where a single consolidated notice can satisfy multiple frameworks
Follow up with the AI to draft the actual notice language after reviewing the requirements.
Step 4: Build the Bias Audit and Impact Assessment Roadmap
The audit cadence table (Output C) is the practical roadmap for the bias audit program. Use it to:
Identify which frameworks require independent auditors (NYC LL144) versus internal assessments (Colorado SB24-205, EU AI Act Art. 9)
Consolidate overlapping audit requirements where a single bias audit protocol can simultaneously satisfy multiple frameworks
Establish the annual cadence and assign responsibility between the legal, compliance, and data science teams
For employment ADS subject to NYC LL144, the bias audit must be completed before first use and annually thereafter, and a summary of the results must be posted on the employer’s website. Build this into the product launch timeline.
Step 5: Implement Record-Keeping and Close Governance Gaps
The record-keeping duties (Output D) often reveal gaps in existing documentation programs. Cross-check against existing data governance policies and assign ownership for each record type. The governance gap analysis (Output E) provides the prioritized starting point for the compliance project.
Why This Works
Cross-jurisdictional ADS compliance is essentially a matrix problem: a defined set of frameworks, each with a defined set of criteria, applied to a specific system’s facts. The AI is well suited to applying multiple regulatory frameworks simultaneously and structuring the output as a comparative matrix — a task that would otherwise require reviewing five or more regulatory frameworks in sequence and manually integrating the results.
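The matrix framing can be made concrete with a small sketch. The table below is a hypothetical jurisdiction × obligation grid for a credit-scoring ADS; the framework names come from this section, but every yes/no entry is an illustrative placeholder, not a legal conclusion, and must be verified against the primary sources. The `open_questions` helper is likewise an assumption for illustration.

```python
# Jurisdiction x Obligation applicability matrix for a hypothetical
# credit-scoring ADS. True/False entries are placeholders, not legal
# conclusions; None marks a cell whose status is unsettled (e.g., pending
# CPPA ADMT regulations) and needs attorney verification.
MATRIX = {
    "GDPR Art. 22":   {"transparency notice": True,  "impact assessment": True,  "record-keeping": True},
    "CCPA/CPRA ADMT": {"transparency notice": True,  "impact assessment": None,  "record-keeping": True},
    "LGPD Art. 20":   {"transparency notice": True,  "impact assessment": False, "record-keeping": True},
    "ECOA":           {"transparency notice": True,  "impact assessment": False, "record-keeping": True},
}

def open_questions(matrix: dict) -> list:
    """List (framework, obligation) cells still marked unsettled (None)."""
    return [(framework, obligation)
            for framework, row in matrix.items()
            for obligation, status in row.items()
            if status is None]
```

Structuring the AI's output this way makes the unsettled cells explicit: `open_questions(MATRIX)` surfaces exactly the framework-obligation pairs that the supervising attorney must resolve against the statute or regulation before the map drives any compliance decision.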
The sector-specific layer (ECOA, Title VII, EEOC guidance, CFPB circulars) benefits particularly from AI-assisted analysis because the interaction between general privacy frameworks and sector-specific rules is not always obvious — a credit ADS subject to ECOA may face adverse action notice obligations that are more specific and more stringent than GDPR Art. 22’s transparency requirements, and a hiring ADS subject to Title VII’s disparate impact theory faces a different analytical framework from NYC LL144’s impact ratio test.
What This Does Not Replace
A qualified bias audit conducted by an independent auditor under NYC Local Law 144 — the AI can draft the audit protocol and analyze the required metrics, but the audit itself must be conducted by an independent qualified auditor and the results must be posted publicly
Legal judgment on whether human oversight is genuinely “meaningful” under GDPR Art. 22(1), CCPA ADMT, or Colorado SB24-205 — this is a facts-and-circumstances determination that requires direct review of the decision workflow and may be contested by regulators
ECOA adverse action reason analysis for specific model architectures — the CFPB’s requirement that complex AI models provide specific reasons (not proxy reasons) for adverse action requires the output layer of the model to be specifically designed for explainability; the AI can identify the obligation but cannot engineer the solution
Current CPPA ADMT regulatory status — the CPPA ADMT regulations have been subject to ongoing revision and may not be fully reflected in AI training data; confirm directly with the CPPA before relying on AI output about CCPA ADMT compliance obligations
Supervising attorney review of the obligations map before it is used to drive compliance decisions, public disclosures, or regulatory filings in any of the applicable jurisdictions
How to Use Quick Wins
Choose a Quick Win that matches your practice area and experience level.
Read the full exercise, including the cautions.
Copy the provided prompt into your preferred AI tool.
Modify the prompt for your specific needs.
Always verify the AI output against primary sources before relying on it.
Want to Go Deeper?
Quick Wins are your starting point. To build a comprehensive understanding of AI in legal practice, explore our structured learning paths and prompt engineering guides.