
Over the past week, I’ve had a lot of conversations about Anthropic’s launch of their new legal plugin, which helps users customize Claude for legal tasks such as document review, redlining, and compliance.
What makes this announcement worth paying attention to isn’t just the product itself, but the reaction that followed.
The response wasn’t confined to legal tech. It rippled across the broader SaaS market, with public companies spanning infrastructure, DevOps, IT service management, and productivity all moving in the same direction almost immediately. When reactions spread that widely, it’s usually a sign the market isn’t responding to a single product launch, but to something more structural.
When a foundation model company ships a vertical, opinionated workflow, it raises a broader question: where does long-term value in software actually sit? If the intelligence itself increasingly lives at the model layer, what continues to differentiate the tools built on top of it?
These questions are worth unpacking properly, because the significance of this announcement isn’t just in the AI tool itself, but in what it signals about how legal technology and legal operating models are evolving.
What Did Anthropic Announce?
Last week, Anthropic announced a new legal-specific offering as part of its Claude Co-Work plugins — tools designed to customize Claude for tasks like contract review, redlining, compliance checks, and basic document triage.
At a functional level, this will feel familiar to anyone who has been following legal AI over the past few years. A lawyer can upload a document, reference a playbook or standard position, and receive flagged issues or suggested changes as an initial pass. The goal is straightforward: save time on repeatable, text-heavy work.
What’s different here is how the capability is delivered.
Instead of stopping at a general-purpose model or API, Anthropic has embedded legal-specific guidance directly into the product. In practice, this means the model is instructed on how to approach legal review: when to rely on a user’s playbook, how to proceed if one doesn’t exist, and how to evaluate a contract holistically rather than clause by clause in isolation.
Conceptually, this mirrors how legal work is taught in practice. A senior lawyer doesn’t just hand over a document and ask for comments. They provide structure, context, and judgment.
This approach represents an evolution in how foundation models are being delivered to end users. Instead of stopping at a general intelligence layer, the model is paired with domain-specific workflows that are immediately usable for a defined set of tasks.
That shift is what drew so much attention. It highlights how quickly AI tooling is moving from broad capability to applied, domain-specific use.
Practice of Law vs. Business of Law
To make sense of what this means for legal teams, I’ve found it helpful to distinguish between two related but very different problems: the practice of law and the business of law.
The practice of law is what most people instinctively think about. It’s legal judgment. Interpreting contracts. Assessing risk. Deciding whether a clause is acceptable, whether a position aligns with policy, or how a regulation should be applied in a specific context. This is the domain where tools like contract review and legal reasoning naturally fit.
The business of law, on the other hand, is about how legal work moves through an organization.
It starts long before a lawyer opens a document. Someone in the business has a question or a request. It comes in through email, Slack, Teams, or a shared inbox. Someone has to decide what the request actually is, whether it needs legal involvement, how urgent it is, and who should handle it. Approvals need to happen. Information needs to be captured. Work needs to be tracked. Decisions need to be auditable.
Legal teams rarely struggle because they can’t reason about a clause. They struggle because demand is unpredictable, intake is inconsistent, prioritization is manual, and context gets lost as work moves between people and systems. That’s where delays, risk, and burnout tend to show up.
This distinction matters because different categories of technology are optimized for different parts of that problem. Some tools are designed to augment legal judgment, while others are designed to orchestrate legal work inside a business.
Understanding which problem a tool is solving (and which one it isn’t) makes it much easier to evaluate announcements like this one without overreacting or underestimating their impact.
Where Anthropic’s Legal Plugin Is Powerful
Applied AI has made real progress in helping lawyers move faster on certain types of work — particularly work that is repetitive, text-heavy, and bounded by clear standards. Contract review is a good example. Uploading an agreement, comparing it against a known position, and getting a first pass on potential issues can save meaningful time.
For individual lawyers, this kind of capability can be a real productivity boost: it reduces the amount of manual scanning required, helps surface issues earlier, and creates a useful starting point for review. In that sense, tools like Claude’s legal plugin are doing exactly what they’re designed to do: augment legal judgment, not replace it.
They also work well as point solutions. You have a document. You want an answer. You want help thinking through it faster. That’s a valuable interaction, and it’s one that will continue to improve as models get better.
Recognizing this is important because it’s easy to swing too far in either direction — either dismissing these tools as “just another AI feature,” or assuming they solve every problem legal teams face. In reality, they’re very effective within a specific scope. The key is understanding where that scope begins and ends.
Where It Falls Short for In-House Legal Teams
Most in-house legal work doesn’t begin with a document neatly ready for review. It begins with an email that says “Can you take a look at this?” Or a Slack message that says “Is this okay to sign?” Or a request that’s half-formed, missing key details, and unclear about urgency or risk.
Before a lawyer can even use a tool like Claude’s legal plugin, someone still has to:
- Decide whether the request actually needs legal review
- Gather the right information
- Understand how it fits within internal policies
- Route it to the right person
- Track what’s happening and why
That layer — the layer where work is classified, prioritized, approved, and audited — is where most in-house legal teams feel the real operational strain. And it’s largely invisible to tools that focus purely on legal reasoning.
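To make the triage layer concrete, here is a minimal, purely illustrative sketch of the kind of rules an intake system applies before any lawyer (or AI reviewer) touches a document. The request fields, thresholds, and routing labels are invented for illustration; they are not Checkbox's or Anthropic's actual logic.

```python
# Toy intake triage: the invisible layer that runs before document review.
# All field names and thresholds below are illustrative assumptions.

def triage(request: dict) -> dict:
    """Classify an incoming legal request before any review happens."""
    # 1. Does this need legal involvement at all? Low-risk, templated
    #    work can often be routed to self-service.
    if request.get("type") == "standard_nda" and request.get("value_usd", 0) < 10_000:
        return {"route": "self_serve", "reviewer": None}

    # 2. Is key context missing? If so, go back to the requester
    #    instead of consuming a lawyer's time.
    missing = [f for f in ("counterparty", "deadline", "value_usd")
               if not request.get(f)]
    if missing:
        return {"route": "clarify", "missing": missing}

    # 3. Route by risk so high-cost resources handle high-risk work.
    reviewer = "senior_counsel" if request["value_usd"] >= 250_000 else "legal_ops"
    return {"route": "review", "reviewer": reviewer}


print(triage({"type": "msa", "counterparty": "Acme Corp",
              "deadline": "2025-07-01", "value_usd": 300_000}))
```

Even a toy version makes the point: the decisions about what a request is, whether it is complete, and who should handle it all happen upstream of the document-level analysis that reasoning tools accelerate.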
Legal Operating Models
AI-driven legal reasoning tools make individual tasks faster. They help lawyers process information more efficiently once the work is already in motion. That’s a meaningful improvement.
But legal teams don’t operate as a collection of isolated tasks. They operate as a service function inside a business.
That means the harder problems tend to be:
- Managing unpredictable demand
- Making consistent decisions at scale
- Ensuring low-risk work doesn’t consume high-cost resources
- Providing visibility into workload, turnaround time, and risk exposure
- Creating defensible, auditable processes that the business can rely on
This is where legal operating models either break down or scale effectively. And it’s why announcements like this should prompt legal teams to step back and ask not “Does this tool do X?” but “Where does this fit in how we actually run legal?”
How Legal Teams Should Think About AI Tools Like This
A useful way to think about applied AI tools like Anthropic Claude’s legal plugin is as force multipliers, not foundations.
They’re most effective when:
- A request is already clearly defined
- A document is ready for review
- Standards and playbooks are established
- A lawyer is already engaged in the work

They’re far less effective at:
- Deciding what work should be done in the first place
- Managing intake across the business
- Enforcing process and approvals
- Providing end-to-end visibility into legal demand
- Creating a system of record for legal decisions
For most in-house teams, the biggest gains don’t come from making lawyers slightly faster on every task. They come from reducing unnecessary work, routing requests correctly, and ensuring legal effort is spent where it actually adds value. That’s why, in practice, these tools tend to work best when they’re paired with systems that handle intake, workflow, and orchestration.
Key Takeaways
Anthropic moving into legal is a strong validation of the space: it reinforces the idea that legal work is complex, valuable, and worth investing in.
As AI continues to advance, legal teams that get the most out of it will be the ones that:
- Are clear on how work enters the function
- Have strong guardrails around when legal judgment is required
- Use AI selectively, where it meaningfully improves outcomes
- Design their operating model first, and layer tools on top of it
Seen through that lens, Anthropic’s announcement isn’t a disruption to be feared. It’s a call to be more deliberate about how legal work is structured, and more thoughtful about how new capabilities are integrated.
The future of legal isn’t about choosing between AI and workflows, or judgment and process. It’s about combining them in a way that lets legal teams scale without losing control, context, or credibility.
Frequently Asked Questions
Does Anthropic’s legal plugin replace legal tech platforms?
Not necessarily. Anthropic’s plugin is powerful for accelerating specific legal reasoning tasks like contract review. But most in-house legal challenges aren’t just about reviewing documents. They’re about managing intake, prioritization, approvals, visibility, and governance. Applied AI enhances certain tasks; it doesn’t replace the broader operating model legal teams rely on.
Should legal teams rethink their tech stack because of this launch?
Legal teams don’t need to overhaul their stack overnight. Instead, this is a moment to evaluate where AI fits. The key question isn’t “Do we need this tool?” but “Which layer of our operating model does this improve?” Strong intake, workflow, and orchestration foundations become even more important as AI tools improve.
What problem is Anthropic’s legal plugin actually solving?
It primarily solves for faster, more structured document-level analysis. It helps lawyers review contracts against playbooks, flag risks, and generate first-pass edits. It’s designed to augment legal judgment once the work is already clearly defined and in motion.
What problem does Anthropic's legal plugin not solve?
It doesn’t solve how legal work enters the organization, how requests are triaged, how approvals are routed, how workload is tracked, or how decisions are audited. Those operational challenges exist before a document is even reviewed and are central to how in-house legal teams scale.

Evan Wong is the CEO & Co-Founder of Checkbox, a 14x award-winning no code workflow automation platform, and is a listed Forbes 30 Under 30. Evan has worked with many legal teams globally on their digital transformation projects by leveraging the power of no code automation and his expertise in developing digital solutions to solve business process problems. Through this work, he has helped redefine how lawyers conduct intake and triage, generate documents, provide advice, and facilitate workflows, with a focus on applying innovation with ruthless practicality.


