Legal AI: Revenge of the Humans

Ask Google, one of the world’s most advanced artificial intelligence products, to find “restaurants near me that are not McDonald’s”.

Language is incredibly complex, and while great advances are constantly being made, there are many instances where AI completely misses a point that is straightforward for humans.

This doesn’t mean we shouldn’t use AI. I doubt you’ll stop using Google because it produced a list of McDonald’s outlets when you explicitly told it not to. But we should understand the limits of AI and know when an old-fashioned human approach might be cheaper, faster or higher quality.

I’ve set out below three limitations I’ve discovered from years of working with legal AI products on high-end transactional and knowledge management work. I’d be interested in hearing your experiences too! My details are at the end of this article.

Your contracts aren’t designed for AI

Some might say your contracts aren’t designed for humans either, but that’s another story.

Apart from some recent and commendable standardisation initiatives (e.g. oneNDA, Bonterms, Common Paper), lawyers don’t generally think about how easy their drafting will be for a computer to understand. 

In the perfect world that exists in tech demos, AI can indeed extract the debt size of a loan agreement by pinpointing the precise clause or defined term that talks about that. In the real, grubby world, the borrower actually wanted more money, and as a compromise negotiated a provision buried in a different part of the contract, allowing them to increase the debt size after one year subject to certain criteria. Or worse, this mechanism is in a side letter with only one of the banks in the syndicate. AI doesn’t cope well with this.

This type of thing occurs all the time, particularly in negotiated contracts. When faced with analysing a large batch of documents, you’ll always start with a plan for the data you want extracted, what you’ll need to optimise (i.e. ‘train’) the AI for, and a vision of how everything will be clean and organised at the end. Invariably, you’ll soon find many documents don’t fit within your happy framework, no matter how hard you try to jam them in. To quote Mike Tyson, “Everybody has a plan until they get punched in the mouth”.

Legal documents are messy. AI gets punched in the mouth pretty quickly. Humans need to be there to pick the AI up off the ground, figure out what isn’t working and tweak the approach.

AI can’t extract what’s in people’s heads

And hopefully, it stays that way.

A lawyer was telling me recently about a deal where he had to convert the main contract into a different type of contract because the commercial structure changed at the last minute (sound familiar?). There weren’t decent precedents for the new contract in the document management system (DMS), so the lawyer cobbled together a Frankenstein’s monster composed of the original contract and bits of mechanical drafting from past examples he scraped together. This was all done in a huge rush, without much negotiation given the big discrepancy in bargaining power, and the lawyers just did whatever was needed to get the deal across the line. By his own acknowledgement, the final document was ‘not great’.

We then discussed how, a few years from now, an unsuspecting associate may well stumble upon this document in the DMS (or perhaps have it suggested to them by AI) and think, “Fantastic, this document is exactly what I need, and it was authored by a lawyer I know is good.” Except of course it’s not, and it should be permanently purged, but compliance won’t let us do that.

So much contextual information is not written down anywhere. If it’s not written down or otherwise observable, it cannot be accessed by AI. For this reason, experienced lawyers are always cautious when, for example, an AI-powered search suggests past work examples they aren’t familiar with. Without asking the people who worked on that previous document, they can’t be sure about the parties’ circumstances, how the overall deal was structured, or the negotiation dynamics. Indeed, something as basic as the applicable industry sector often cannot be determined from the words on the page.

It is possible to record this ‘metadata’ that otherwise only exists in people’s heads, but this is only practical up to a point. You might be able to force people to record things like the industry sector, deal value, perhaps even which law firm acted for whom and high-level deal features. But the devil is often in the detail, and it simply isn’t feasible to make lawyers write commentary on every clause. Funnily enough, they will happily share their documents and tell you everything you need to know if you simply ask, and they might even give you some useful suggestions on how to approach your transaction.
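To make this concrete, here is a minimal sketch (in Python, purely my own illustration rather than any particular DMS schema) of roughly where the practical line sits: the structured fields you could plausibly mandate, with the genuinely useful context surviving only as a pointer back to a human.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: the structured metadata a firm could
# realistically require lawyers to record against a precedent document.
@dataclass
class DealMetadata:
    industry_sector: str               # e.g. "infrastructure"
    deal_value: float                  # in the deal currency
    acting_for: str                    # which party the firm acted for
    opposing_firm: str | None = None   # who acted for the other side
    key_features: list[str] = field(default_factory=list)  # high-level only
    deal_team: list[str] = field(default_factory=list)     # the people to ask

# The detail that matters (why a clause was conceded, the bargaining power
# on the day) isn't feasible to mandate clause by clause, so the record
# effectively stops here and points you back to the deal_team.
example = DealMetadata(
    industry_sector="infrastructure",
    deal_value=250_000_000.0,
    acting_for="borrower",
    key_features=["syndicated facility", "debt increase option after year one"],
    deal_team=["the lawyer who did the deal"],
)
```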

AI will do what you tell it to do

Well, other than call centre AI.

I spoke recently with a knowledge lawyer (professional support lawyer) who said he’ll often have lawyers requesting X, but after he asks some questions about what they’re actually trying to achieve, it turns out they don’t need X at all; they need Y.

Like all technology products, AI works on commands. It doesn’t question whether the commands it has been given are good. It doesn’t stop and ask whether you should be using AI at all.

I’ve made this mistake before. I once pressed go on an exercise to ‘cluster’ (i.e. group) the documents in a data room, essentially so we could understand broadly the types of contracts we were dealing with. It was only later that I found out there was an in-house lawyer who knew exactly the categories and sub-categories of contracts in the data room and precisely how they had been organised (it wasn’t obvious to an outsider). I could simply have asked them and received more useful and accurate results in much less time.
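For the technically curious, that clustering exercise looked roughly like the sketch below. This is a simplified illustration using TF-IDF vectors and k-means in scikit-learn, not the actual tool we used, and note the catch in the comments: you have to guess the number of clusters in advance, which is exactly the kind of thing the in-house lawyer already knew.

```python
# Simplified sketch of a data room clustering exercise, assuming the
# contracts have already been extracted to plain text; the real tool differed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "This Facility Agreement is made between the Borrower and the Lenders...",
    "This Supply Agreement sets out the terms on which the Supplier...",
    "This Lease is granted by the Landlord to the Tenant...",
    # ...hundreds more contracts from the data room
]

# Turn each contract into a TF-IDF vector, ignoring common English words.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Group the contracts into k clusters. The catch: you must guess k, the
# number of contract types, before you see the answer.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)
print(kmeans.labels_)  # cluster label assigned to each document
```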

So while AI has tremendous potential, we mustn’t forget that often the best and most efficient answer does not reside in AI, but rather in the human sitting right next to us, or at the end of a phone line. That human can navigate messy document structures, explain why things were negotiated in a certain way, and tell you when you are veering off track. You can even eat at McDonald’s with that human, or not, as the case may be.

Daniel Yim

Daniel Yim is an Australian lawyer and legal technology expert. His current work includes Knowledge Rocket (a platform to automatically connect knowledge, tools and resources to the work you are doing), Sideline (a simple and lightweight text comparison tool built into MS Word) and Bumpd (a not-for-profit organisation helping people who have lost their jobs as a result of pregnancy or parenting-related discrimination). He previously worked at Gilbert + Tobin and Axiom.
