
In May 2026, Mike — an open-source legal AI platform — launched on Hacker News with a deliberately spartan pitch: "Feature parity. Zero cost. Self-hostable." The codebase appeared on GitHub, the comments lit up, and within hours dozens of forks were created. Mike is real. It does what its README claims. But Mike is not the story. Mike is the signal that the story has changed.

A Structural Shift, Not a Single Product

For two years, the legal AI conversation in firm management committees has been binary: "Do we sign a contract with [vendor X]?" The framing assumed that production-grade legal AI required venture-backed enterprise software with specialized infrastructure. That framing was always partially incorrect, but until 2026 the open-source alternatives required substantial engineering investment to assemble — chunking pipelines, vector databases, citation parsers, prompt libraries, role-based access, audit trails. Few firms had the appetite.

Mike — alongside a quiet but rapidly maturing ecosystem of complementary open-source components — eliminates that engineering tax: the full stack now exists as assembled, working software. The other ingredients of a private legal AI deployment have matured in parallel:

  • Open-weight model families competitive with the commercial frontier — Llama 4, Mistral Large, Qwen, DeepSeek, Gemma — runnable inside a firm's own GPU cluster or via private endpoints
  • Private model hosting through providers offering no-training contractual commitments (Anthropic and OpenAI both now sell enterprise tiers with contractual training opt-out and dedicated capacity)
  • Document ingestion and citation tooling packaged as reusable open libraries
  • General-purpose private AI interfaces like Open WebUI, which have established the UX patterns Mike refines for the legal domain

Taken together, these developments turn what was a custom engineering project in 2024 into a configuration exercise in 2026.

What Mike Gets Right

Reviewing Mike's actual feature set — not the marketing, the working software — three design choices deserve attention from any firm evaluating private deployment:

Verbatim citations. Every answer is anchored to a specific page and quote in a specific document. This is not a polish feature; it is the only architecture that meets the malpractice-defensibility standard. Any tool that lets the model paraphrase or summarize without traceable citation is unsuitable for actual legal work.

Matter-scoped projects. The unit of organization is the matter, not the user account. Documents, conversations, workflows, and access live and die together with the engagement. This mirrors how firms actually think about information barriers.

Reusable workflows as first-class objects. The "save a prompt as a workflow that juniors can run in one click" pattern is what separates a chatbot from a productivity platform. Mike's implementation here is conceptually clean.

These are best practices, not Mike-exclusive features. Any private legal AI deployment in 2026 should be evaluated against them.
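The three patterns above can be made concrete with a minimal data-model sketch — hypothetical names and fields, not Mike's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Set


@dataclass(frozen=True)
class Citation:
    """Verbatim anchor: every answer points to a page and exact quote."""
    document_id: str
    page: int
    quote: str  # exact text from the source document, never a paraphrase


@dataclass
class Workflow:
    """A saved prompt that a junior can re-run in one click."""
    name: str
    prompt_template: str
    author: str  # the senior lawyer who curated it


@dataclass
class Matter:
    """The matter, not the user account, is the unit of organization."""
    matter_id: str
    documents: List[str] = field(default_factory=list)
    workflows: List[Workflow] = field(default_factory=list)
    authorized_users: Set[str] = field(default_factory=set)

    def can_access(self, user: str) -> bool:
        # Information barrier: access lives and dies with the engagement
        return user in self.authorized_users
```

The point of the sketch is the shape, not the names: citations are immutable and quote-bearing, workflows are attributed objects, and access control hangs off the matter rather than the user.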

What Mike Doesn't Solve

Reading the Mike codebase reveals a deliberate scope. Mike is a platform, not a turnkey product. Specifically:

  • There is no commercial support. When the assistant returns a wrong citation at 4 AM the day before a closing, you call your IT team — there is no vendor hotline.
  • There is no SLA. Uptime, response time, security patching, model-provider-deprecation handling — all of it is your firm's responsibility.
  • The AGPL-3.0 license requires careful handling. Any firm that modifies Mike and uses it as part of a service offered to third parties must publish those modifications. For most firms using it internally this is a non-issue, but it requires legal review.
  • DMS, billing, conflict-checking, and matter-management integration is your project. Mike provides the platform; your team writes the connectors.
  • Adoption is not automatic. Senior associates who want a 200-page diligence pack analyzed don't care that the underlying technology is open source. They care that they trust the output and that it integrates into their workflow. That work is human, not technical.

This is not a criticism of Mike. The project's authors are explicit: it is a working codebase, not a managed service. The criticism, if there is one, is of the framing that "open source replaces vendor software" — because for most firms, what they're actually buying when they pay Harvey or Legora is not the software.

What Firms Are Actually Buying

When a firm signs a six-figure annual contract with a commercial legal AI vendor, the value breakdown is typically something like:

  • 15% — the software platform itself
  • 20% — model API costs absorbed by the vendor
  • 25% — uptime, security, compliance, audit trails
  • 25% — training, change management, customer success
  • 15% — risk transfer (when something goes wrong, there is a counterparty)

Mike eliminates the first 15%. That is real, but it is also the easiest 15% to replace.

The other 85% — the operations layer, the human layer, the trust layer — is where firms struggle. And it is where the open-source revolution does not replace anything. It just shifts the cost from vendor to firm.

For firms with deep internal IT capability and a genuine appetite to operate a production AI platform, this shift is liberating. For everyone else, it creates an opportunity that can be filled by partners — independent integrators with the legal AI expertise to deploy, customize, train, and maintain a private platform on the firm's behalf.

Best Practices for Private Legal AI in 2026

Independent of which platform a firm chooses — Mike, a custom build, or a managed alternative — these are the operational best practices that determine whether a deployment succeeds:

  1. Citation-required outputs. No paraphrasing without a verbatim citation back to source. The floor for malpractice defensibility.
  2. Matter-level information barriers. Documents and conversations from one matter never leak into prompts for another, even within the same firm.
  3. Model provider review. Read the actual data-handling terms. "No training" means different things in different contracts, and the fine print on logging, retention, and incident response is where the differences hide.
  4. Open-weight model option for the most sensitive matters. Some matters belong on local GPU infrastructure with no external API calls at all. Plan the architecture so a subset of work can be routed to local inference.
  5. Workflow library as institutional asset. Treat the firm's prompt and workflow library the same way you treat the precedent bank — versioned, curated, attributed, reviewed. Senior partners' prompts are intellectual capital.
  6. Audit trails as a first-order requirement. Every prompt, every output, every cited document — logged with user, matter, timestamp. Discovery and bar inquiries do not pause for retrofitting.
  7. Output review protocols. Define which AI outputs require human review at which seniority level. An AI-drafted brief leaving the office without partner review is not acceptable.
  8. AI-use disclosure to clients. Increasingly required by bar associations and increasingly expected by sophisticated clients. Build the policy now.
  9. Training across seniority levels. Senior partners need different training than junior associates, who need different training than paralegals.
  10. Ongoing model curation. Models change. Capabilities improve, costs shift, providers deprecate. Someone needs to be the firm's model strategist on an ongoing basis.
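Item 4 — routing a subset of work to local inference — can be sketched as a thin dispatch layer. The function names and matter tags below are hypothetical placeholders; a real deployment would call the firm's actual inference stack:

```python
def call_local_model(prompt: str) -> str:
    # Placeholder for on-prem open-weight inference (e.g. a local Llama server)
    return f"[local] {prompt}"


def call_private_endpoint(prompt: str) -> str:
    # Placeholder for an enterprise API under a contractual no-training tier
    return f"[cloud] {prompt}"


# Matters flagged as too sensitive for any external API call (hypothetical tags)
LOCAL_ONLY_MATTERS = {"m-2026-017"}


def route_inference(matter_id: str, prompt: str) -> str:
    """Route the most sensitive matters to local GPU inference only;
    everything else may use a private cloud endpoint."""
    if matter_id in LOCAL_ONLY_MATTERS:
        return call_local_model(prompt)
    return call_private_endpoint(prompt)
```

The architectural decision is that routing happens per matter, not per user or per prompt — which is only possible if the platform is matter-scoped to begin with.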

These are not Mike requirements. They are private-AI requirements. Any firm operating any legal AI platform in 2026 — open source or commercial — needs to plan for all ten.
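The audit-trail requirement (item 6) is easiest to see as a record schema. A minimal sketch, assuming append-only JSON-lines logging — the field names are illustrative, not any platform's actual format:

```python
import datetime
import json


def audit_record(user: str, matter_id: str, prompt: str,
                 output: str, cited_docs: list) -> str:
    """One append-only audit line per interaction: every prompt and
    output, tied to user, matter, and timestamp, written before the
    answer is shown to the lawyer."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "matter": matter_id,
        "prompt": prompt,
        "output": output,
        "cited_docs": cited_docs,
    })
```

Logging at this granularity from day one is what makes a later discovery request or bar inquiry a query, not a retrofit.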

The Build vs. Buy vs. Partner Question

For most firms, the practical decision in 2026 is not "open source vs. SaaS" but "what mix of self-deployment and partnership makes sense?"

A useful triage:

  • Self-deploy fully if your firm has dedicated IT staff comfortable with Linux, Docker, Postgres, Python, vector databases, and on-call rotation; if you have in-house legal-tech engineering capacity; and if you can absorb several months of platform-development time before the first matter runs through it.
  • SaaS-only if your firm has under 50 lawyers, no IT staff, and the matters you handle don't carry data residency or training-risk concerns that would make a multi-tenant cloud unacceptable.
  • Partner-deployed for the majority — firms that want the privacy and control benefits of a private deployment but reasonably refuse to take on the operations burden. An independent integrator deploys the platform, configures the model providers, builds the firm's workflow library, trains the team, and maintains the system on retainer.

The third option is the new arrival. Until Mike and its peers existed, partner-deployment was constrained by the absence of credible open-source platforms to deploy. That constraint has lifted.

Lawra's Position

Lawra has positioned itself for this moment.

Our Sovereign Suite service line takes the partner-deployed approach: we deploy a complete private legal AI platform — Mike, where it is the right fit, or a custom architecture where it is not — inside your private cloud or on-premises infrastructure. We integrate with your DMS. We build your workflow library. We train your team. We operate the platform on retainer if you want.

We are deliberately platform-agnostic. For some firms, Mike is the right answer. For others, the right answer is a custom build on open-weight models running on the firm's own GPU cluster. For others still, a managed endpoint from Anthropic or OpenAI under enterprise contract is the right balance of risk, cost, and operational simplicity.

The question is not "what is the best platform?" — it is "what is the best platform for your firm's matters, infrastructure, and risk posture?" We have no vendor relationships that would bias the recommendation.

What Comes Next

The open-source legal AI era began in 2026, but the operational era — the period during which firms actually run these platforms in production — has barely started. The next 24 months will produce a wave of best-practice consolidation, regulatory guidance, malpractice doctrine, and bar-association rulemaking. Firms that deploy thoughtfully now will shape that emerging consensus. Firms that wait will inherit it.

Mike's launch did not change the answer. It changed the question. The question used to be "can a firm operate its own legal AI platform?" The answer is now obvious: yes. The new question is "should we, and how?" — and that one each firm has to answer for itself.

We are happy to help you answer it.