Harvey for Legal Teams: Is Legal AI Making Lawyers Duller or Sharper?

Legal AI like Harvey can accelerate research and analysis, but overreliance risks dulling core legal judgment. When AI replaces reasoning, lawyers shift from thinking to validating. The smarter approach is applying AI to operational friction such as intake, triage, and workflow, so lawyers spend more time exercising expertise.

February 19, 2026


Over the last eight years, I’ve seen the shift from shared inboxes and spreadsheets to matter management, workflow automation, dashboards, and AI agents. I believe in legal tech.

And right now, almost every conversation includes the same topic: AI.

“Are you seeing teams use Harvey?”

“Should we be using AI for contract review?”

“Are we behind?”

They’re fair questions. The tools are impressive. You can do legal research in seconds, run clause analysis at scale, and instantly generate due diligence summaries that would’ve taken a junior lawyer half a day.

But I’ve been thinking a lot about this…

If AI is doing more of the thinking… what’s happening to the thinker?

When lawyers start relying on AI to do foundational thinking, their role shifts from thinking to validating. And over time, that changes how skills develop.

A 2025 study by Michael Gerlich, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” highlights that while AI tools can enhance learning outcomes by providing personalized instruction and immediate feedback, growing evidence shows that over-reliance on these tools can lead to cognitive offloading.

So, if you’re not exercising judgment, your critical thinking skills weaken. And if you’re not working through the analysis yourself, your instincts don’t sharpen.  

I’m not suggesting we ignore legal tech solutions like Harvey. But I do think we need to be deliberate. Because the question isn’t whether legal AI works. It clearly does. The question is whether we’re using it in a way that makes lawyers sharper… or quietly dulls the blade.

Where Does AI Start Replacing Legal Expertise?

There’s a difference between automating process and outsourcing judgment.

Legal AI tools are designed to analyze and reason. They can synthesize legal arguments, interpret case law, suggest litigation strategy, and assess regulatory exposure based on fact patterns.

But, if a tool is analyzing a set of facts and generating a legal position… that’s the core of what a lawyer is trained to do.

If it’s synthesizing case law and telling you which precedent is most relevant… that’s expertise.

If it’s drafting a legal argument in response to a motion or regulatory inquiry… that’s judgment.

And again, it’s impressive. But this is where I think we need to pause, because the more we rely on AI to perform substantive legal reasoning, the more we risk shifting the lawyer’s role from strategist to supervisor. And when that shift happens, a lawyer’s skills gradually erode.

Expertise in law isn’t something you preserve by reviewing someone else’s reasoning. It’s built and maintained by working through ambiguity yourself, interpreting the facts, and making informed legal decisions.

Risks of Replacing Legal Judgment with AI

Judgment Risk

Large Language Models (LLMs) are probabilistic systems. They predict language patterns but don’t actually “understand” law, and they often make mistakes, especially in legal analysis and research.

They can’t grasp a company’s specific risk tolerance, history with regulators, or strategic priorities like a seasoned lawyer or General Counsel (GC) can.

And since accuracy is a top priority in legal work, “mostly right” isn’t good enough. Outsourcing that layer of thinking to a legal AI tool doesn’t just risk a mistake. It risks missing the nuance that only comes from specialization.

Identity & Confidence Risk

If AI starts to feel “better than you” at research or analysis, what does that do to the profession?

Junior lawyers lose critical training reps, senior lawyers begin to question where their differentiation lies, and the role shifts from strategic advisor to validator of machine output.

That narrows the legal function. Over time, if lawyers are positioned as supervisors of AI instead of experts applying judgment, the perceived value of the legal function changes.

How to Apply AI Without Replacing Legal Judgment

Some AI tools help lawyers save time by accelerating research, analysis, and writing. Checkbox AI also helps legal teams save time — just in a very different way.

The legal AI market right now is heavily focused on augmenting strategic legal work. In other words, tools that step directly into areas that require legal expertise.

Checkbox takes a narrower approach.

It doesn’t try to replace research, generate legal analysis, or form legal positions. Instead, Checkbox operates as the service layer between the business and legal.

Checkbox’s AI Legal Front Door handles intake, triages requests, answers low-risk FAQs, routes matters into structured workflows, ensures lawyers receive complete information upfront, and eliminates the “where is this?” and “can you resend that?” emails.

Research, interpretation, and risk analysis are high-risk domains. We know GPT-based systems can get things wrong, and over-reliance on them in those areas can quietly weaken the very analytical muscles lawyers rely on.

But fielding repetitive policy questions, chasing missing attachments, and sorting through inbox chaos are low-risk and high-friction responsibilities.

Related Article: Learn more about why legal needs a purpose-built AI Agent and discover the best legal-specific alternatives to ChatGPT.

Removing the friction caused by manual, admin work doesn’t dull legal skills. The more time lawyers spend wrestling with ambiguity, forming positions, and calibrating risk, the sharper they stay.

AI should not replace that. It should protect it.

The right AI strategy isn’t about asking, “How can we automate what lawyers do?” It’s about asking, “How can we automate what lawyers shouldn’t be doing in the first place?”

That’s the difference between AI that substitutes expertise and AI that supports it.

Key Takeaways

I don’t think the future of legal is lawyers versus AI. I think it’s about how intentionally we apply AI. Used the wrong way, AI can absolutely make lawyers faster. But faster isn’t the same as better.

If AI consistently forms the argument, frames the issue, and synthesizes the reasoning, lawyers risk “cognitive offloading.” Over time, that narrows the profession and weakens the very skills that make legal teams strategic partners to the business.

Used the right way, though, AI does something very different. It can support lawyers by:

  • Structuring disorganized intake processes,  
  • Answering repetitive questions,
  • Routing work automatically,
  • Giving leadership visibility without manual reporting, and
  • Giving lawyers their time back.

Doing the real work yourself strengthens expertise. Overreliance on automation in judgment-heavy areas weakens it. But automation applied to friction-heavy areas amplifies it.

So, when legal leaders evaluate AI, I’d suggest asking one simple question:

Is this replacing legal judgment or nurturing it?

P.S. Obviously I used AI to tighten up this article, so I’m betraying the very ideas I’m expressing. I suppose if I were a writer by trade it would be an even bigger betrayal, but like I said, AI is amazing and I’d be dumb not to use it where it makes sense. For me, this is where it makes sense. Where does it make sense for you?

Frequently Asked Questions

Can using AI for legal research weaken critical thinking skills?

Potentially, yes. Studies on cognitive offloading suggest that when professionals rely too heavily on AI for analysis and reasoning, their own judgment and critical thinking skills can decline. Legal expertise is maintained through active reasoning, not passive review.

What are the risks of using AI for legal analysis?

Large Language Models (LLMs) are probabilistic and can hallucinate or miss nuance. They lack institutional memory, company-specific risk tolerance, and contextual judgment. In legal work, where accuracy and nuance are critical, “mostly right” is often not sufficient.

What is the difference between automating legal process and replacing legal judgment?

Automating process involves tasks like intake, routing, triage, FAQs, and workflow organization. Replacing judgment involves AI generating legal positions, interpreting case law, or forming arguments. The former reduces friction; the latter risks substituting expertise.

How can legal teams use AI without dulling legal skills?

Legal teams should apply AI to low-risk, high-friction administrative tasks while protecting judgment-heavy work like research, strategic analysis, and risk calibration. AI should remove operational noise so lawyers can spend more time applying their expertise.

David Moore
  
Director of Sales, North America at Checkbox

Dave has spent the last 6+ years hyper-focused on helping corporate legal departments leverage technology to create more actionable access to data and documents, and to automate processes that keep their teams laser-focused on the highest-value activity at the top of their skillset and licensure.

Book a Demo

See the New Era of Intake, Ticketing and Reporting in Action.
