
Where AI Contract Review Actually Fails

Vladimir Kuzin · Founder & CEO, Shepherdstack LLC
Disclosure: Founder of Shepherdstack LLC, the company behind Pact. All comparison articles use a standardized evaluation methodology applied equally to all tools, including Pact.

AI contract review tools are getting better fast. But they still fail in predictable, important ways. Knowing where those failures happen is more useful than knowing where AI succeeds — because the failures are where you lose money.

This article is published by Shepherdstack, the company behind Pact. We build an AI contract review tool. That gives us a detailed view of where these tools work — and a professional obligation to be honest about where they don't.

1. AI Cannot Assess Your Business Context

A non-compete clause in a software engineer's employment agreement means something completely different from the same clause in a franchise agreement. AI tools flag non-competes as risky — which is correct in isolation — but they cannot evaluate whether the restriction is reasonable for your specific industry, role, geography, or career plans.

An attorney asks: "What do you do? Where do you want to work next? Is this employer known for enforcing these?" An AI tool asks nothing. It flags the clause and moves on.

This matters because the advice that follows from each analysis is different. The AI might say "this non-compete is broad and could restrict your future employment." The attorney might say "this is unenforceable in California, ignore it" or "this is aggressive but standard for your industry — negotiate the duration down from 24 to 12 months."

2. AI Misses What Isn't in the Contract

Contract review isn't just about analyzing clauses that exist. It's about noticing what's missing. A commercial lease without a cap on CAM (Common Area Maintenance) charges. An employment agreement without an IP assignment carve-out for personal projects. A vendor agreement without a termination-for-convenience clause.

AI tools are trained on what contracts typically contain. They're much weaker at identifying important omissions, because the absence of a clause doesn't trigger pattern recognition the same way its presence does.

This is especially dangerous because missing protections are often the most expensive mistakes. The clause that wasn't there is the one you discover during a dispute.

3. Accuracy Claims Are Narrower Than They Sound

The most cited benchmark in AI contract review is the 2018 LawGeex study, which found AI achieved 94% accuracy on NDA issue-spotting compared to 85% for experienced attorneys. This is a real study with a real result, but it's important to understand what it does and doesn't prove.

  • The study tested only NDAs — one of the most standardized contract types
  • It was funded by LawGeex, an AI contract review company
  • Issue-spotting (identifying a clause exists) is different from risk assessment (evaluating what the clause means for you)
  • The 94% figure has not been independently replicated across other contract types

None of this means the study is wrong. It means that quoting "94% accuracy" as a general statement about AI contract review is misleading. Performance varies significantly by contract type, complexity, and what you're measuring.

4. AI Struggles With Unusual or Custom Clauses

AI models learn from patterns in training data. When a clause follows a common structure — standard indemnification, typical limitation of liability — AI performs well. When a clause is custom-drafted, uses unusual language, or combines multiple provisions in non-standard ways, accuracy drops.

Custom clauses are common in exactly the contracts that matter most: negotiated enterprise agreements, M&A documents, complex licensing deals, and any contract drafted from scratch rather than from a template.

If your contract is a template — a standard NDA, a boilerplate SaaS agreement — AI tools will handle it well. If someone drafted it specifically for your deal, the AI is working with less certainty than its confident output suggests.

5. AI Can Create a False Sense of Security

This may be the most dangerous failure mode. AI tools present their analysis with the same confidence regardless of their actual certainty. A clause that the model has seen thousands of times gets the same tone as one it's never encountered.

Users who receive an AI report saying "no major risks found" may skip attorney review entirely — even when the contract contains provisions that a lawyer would immediately flag. The tool didn't say the contract was safe. It said it didn't detect known risk patterns. Those are different statements.

6. AI Cannot Negotiate for You

Identifying a problem and solving it are different skills. AI can flag that your limitation of liability clause caps damages at the contract value (which may be too low). It cannot assess your leverage, propose counter-language that the other party will accept, or judge when to push back versus when to accept the risk.

Negotiation requires understanding the relationship, the relative power dynamics, and the cost of walking away. These are fundamentally human assessments.

When to Use AI and When to Use a Lawyer

AI contract review is a screening tool, not a replacement for legal judgment. It works best when:

  • The contract is relatively standard (NDAs, freelance agreements, standard leases)
  • The financial stakes are moderate (under $50,000)
  • You need a fast initial assessment before deciding whether to involve an attorney
  • You want to understand what a contract says before a legal consultation, so you can ask better questions

You should involve an attorney when:

  • The contract is custom-drafted or heavily negotiated
  • The financial stakes are significant (above $50,000 or involving ongoing obligations)
  • The deal involves intellectual property, equity, or non-compete restrictions
  • You're signing something that limits your future options in ways you don't fully understand

The Bottom Line

AI contract review tools are genuinely useful for a specific set of tasks. But the marketing around them — including, historically, some of our own — has sometimes overstated their capabilities. The honest position is: these tools are good at clause detection in standard contracts, and they're not good at the contextual judgment that makes legal advice valuable.

Knowing the difference protects you better than any tool can.

About Vladimir Kuzin

Founder & CEO, Shepherdstack LLC

Vladimir Kuzin is the founder of Shepherdstack LLC and creator of Pact, an AI-powered contract review tool. He builds software that helps individuals and small businesses understand the documents they sign.


Copyright © 2026 Shepherdstack LLC. All rights reserved.

This site provides general legal information, not legal advice. Consult a qualified attorney for your specific situation.
