AI in Law: Notes from Sedona WG13 – And What Legal Teams Should Do Next
Published on Oct 29, 2025
Generative AI isn’t on the horizon for law; it’s already in the building. As corporate teams move from pilots to production and vendors like Innovative Driven keep pushing the edges, the impact will compound – resulting in shorter cycles, lower costs, and broader access. At the same time, we’re deploying these systems while we’re still learning what they’re capable of (and, importantly, where they fall short) and what they break. The tension of rapid adoption, paired with open questions, was front and center at the Sedona Conference Working Group 13 (WG13) midyear meeting. WG13 brought together judges, practitioners, technologists, and academics for two days of candid discussion about where AI fits in the legal ecosystem and where it doesn’t.
My 30,000-Foot Observations
The definition problem (and why it isn’t academic)
One theme dominated: even among experts, there’s no shared definition of “artificial intelligence.” Make it too broad and you accidentally sweep in spreadsheets or classic TAR; make it too narrow and you neglect real risk. Many may dismiss this as “semantics,” but definitions determine what gets regulated, insured, funded, approved, or admitted into evidence. Unsurprisingly, the group didn’t land on a single definition. And while that’s okay for now, it means organizations need to be explicit about what they mean by “AI” in policies, contracts, and workflows. For instance, we’ve been working tailored language into protective orders that limits what the receiving party can do with confidential information.
Old rules or new regime?
The regulatory debate split cleanly into two camps:
- Existing frameworks are enough. AI is already regulated under privacy, IP, data security, unfair practices, and professional responsibility frameworks. In this view, GenAI is speed and scale, not a new species. If an AI-drafted brief hallucinates, the signer still owns it, just like a cite-check gone wrong from a human associate.
- AI is a new category. The counterargument here is straightforward: some harms only become plausible at scale with these tools, so the law should make that clear and provide the necessary strictures and protections.
No surprise: no consensus. The takeaway for legal teams isn’t to wait for a grand answer; it’s to implement practical guardrails that work under either view.
The messy middle we actually live in
A few realities are already clear:
- Hallucinations won’t hit zero. You can push them down with better models, better prompts, and better data; you can’t eliminate them. Treat the model like a clever intern: fast, tireless, occasionally wrong with confidence. Design verification accordingly (a minimal sketch of one such check follows this list).
- Consumers are all-in, but enterprises are lagging. The most durable value today is efficiency and automation, not magic. That’s still meaningful: hours back, fewer manual hops, and tighter feedback loops.
- The latest look-down-the-nose moment: “It’s just a wrapper.” Many “new” legal AI products are built directly on top of a small set of foundation models. Consolidation is inevitable, and the differentiation lies in what happens before your data is sent to one of only a handful of LLMs.
- Access to justice expands. With structure and training, GenAI can help pro se litigants and small teams navigate complex tasks they’d otherwise struggle to afford.
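To make the “design verification accordingly” point concrete, here is a minimal sketch of one possible guardrail: a cite-check gate that flags any citation in a model-drafted passage that hasn’t been independently verified. Everything here is illustrative rather than taken from any real product; the `CASE_CITE` regex is a crude stand-in for a real citator lookup, and `unverified_citations` is a hypothetical helper.

```python
import re

# Crude, illustrative pattern for reporter citations like "123 F.3d 456".
# A real control would query a citator service, not a regex.
CASE_CITE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*\s+\d{1,4}\b")

def unverified_citations(draft: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that are absent from the
    independently verified set, i.e., hallucination candidates."""
    return [cite for cite in CASE_CITE.findall(draft) if cite not in verified]

draft = "As held in Smith v. Jones, 123 F.3d 456, and Doe v. Roe, 987 U.S. 654, the rule applies."
flags = unverified_citations(draft, verified={"123 F.3d 456"})
if flags:
    # Never file on faith: anything unverified goes back to a human.
    print("Route to human review before filing:", flags)
```

The point isn’t the regex; it’s the shape of the control: model output never flows downstream until a deterministic check or a human has cleared it.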
What to do now if you lead or advise a legal team
Move from “interesting” to “operational” with a few plain steps:
- Define success. For each workflow, define acceptable error rates, human-in-the-loop checks, and escalation paths. “Looks good” is not a control.
- Instrument, then iterate. Log prompts, outputs, corrections, and downstream impacts; one minimal shape for such a record is sketched after this list. You can’t govern what you don’t measure, and you can’t improve what you don’t trace.
- Baby steps are fine. ECA, memo scaffolding, privilege log fragments, issue-tagging suggestions: start where human verification is quick and the time savings are clear. (For concrete starting points, see “Getting Started with GenAI: Practical Use Cases in Discovery” from Innovative Driven.)
- Learn before you leap. AI is a force multiplier under the right circumstances. It multiplies a good process and exposes a bad process. Teach people how to use it and when to say “stop” or “let’s go back to the drawing board.”
- Contract like it matters. Demand clarity on data use, retention, model provenance, and update cadence. If a vendor can’t explain how it handles your data today and after an update, that’s not a good sign.
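As a sketch of what “instrument, then iterate” can look like in practice, here is one way to structure an audit record. This is a hypothetical design, not any vendor’s API; the record fields and the `log_ai_interaction` helper are assumptions for illustration.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIInteractionRecord:
    # Field names are illustrative, not a standard schema.
    workflow: str                  # e.g., "privilege-log-fragments"
    prompt: str                    # what was sent to the model
    output: str                    # what the model returned
    human_correction: str | None   # the reviewer's fix, if any
    accepted: bool                 # did the output survive review?
    record_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_ai_interaction(record: AIInteractionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append one interaction to a JSON Lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Usage: capture the raw output and what review actually did with it.
log_ai_interaction(AIInteractionRecord(
    workflow="memo-scaffolding",
    prompt="Outline the elements of a breach-of-contract claim.",
    output="1. Contract exists; 2. Plaintiff performed; 3. Defendant breached.",
    human_correction="Added the damages element.",
    accepted=True,
))
```

With a log like this, the “define success” thresholds become measurable: the per-workflow error rate is just the share of records where `accepted` is false, and drift after a model update shows up as a change in that rate.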
Where I land
We’ve all seen the tech cycle: hype early, disillusionment next, durable value later. I’m not sure we’ve hit the trough yet, but the durable value is already here for teams that avoid the temptation of “easy button” use cases and build real, thoughtful workflows. Sedona’s strength is in building consensus, and consensus will lag behind the technology. That’s as expected. “Good enough and getting better” beats “perfect and never shipped.”