LegalTech Trainwrecks #5 (Precedent retrieval)
Over the past two years, I’ve prototyped and discarded about a dozen potential legaltech products.
One of these ideas was an AI solution for retrieving relevant examples of past work (‘precedents’) more quickly and easily, to support contract drafting and review.
It’s a popular space with plenty of competitors, both new and established, and there is a strong demand for good solutions. Indeed, when you talk to lawyers, all of them, without exception, say they would love a better way of getting the precedents they need when they need them.
But once I brushed aside the surface-level enthusiasm and dug further into how lawyers actually find precedents today, and how a proposed technology product could support that, the path forward became far more complicated.
The thesis seemed to make sense
Lawyers working on contracts are constantly cross-checking against precedents.
Typically, in law firms, the more experienced team members will think for a minute and then decide which precedents should be used.
There are several problems with this approach:
- It fails to leverage the law firm’s collective intelligence, in particular, the vast bank of precedents within the firm’s document management system (‘DMS’).
- It places too much reliance on an individual’s aptitude and memory, and as a result, relevant precedents can be missed.
- There are situations where a senior lawyer forgets (or has no time) to suggest relevant precedents to their junior lawyer, so the junior lawyer has to start from nothing.
Currently, where lawyers feel they don’t have enough good precedents for whatever they are working on, there are two options:
- Ask their departmental colleagues or knowledge team (if one exists).
- Search their DMS.
The former is inefficient, and in any case, people want to be more self-sufficient and avoid bothering their colleagues.
The latter is both inefficient and highly dependent on the lawyer being able to translate what they want into good search terms. In addition, once search results have been returned, it’s an extremely painful process to open each document, skim read, decide whether it’s useful, and then check with the people who originally worked on that document (if they are even still at the firm) to confirm the document is indeed suitable for the relevant use case.
The solution? Artificial intelligence that understands the document you are working on and automatically delivers the most relevant precedents from your DMS without you having to search for them.
You might be aware of similar AI products, but they are instead pitched at the clause level rather than the document level. These typically work by extracting and categorising clauses from precedents within your DMS. I decided (rightly or wrongly) not to go down that path since my gut feeling was that it would be very difficult to categorise all of those clauses in a usable way (see an excellent analysis here), and also, in many cases, what is not in a precedent is just as important as what is.
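To make the document-level idea concrete, here is a minimal, purely illustrative sketch of that kind of retrieval: treat the draft you are working on as the query and rank the firm’s DMS precedents by similarity to it. Everything below is an assumption for illustration only (the function name, the placeholder documents, and the use of simple TF-IDF similarity rather than richer semantic embeddings); it is not a description of any actual product.

```python
# Hypothetical sketch: rank DMS precedents by similarity to the current draft.
# All names and sample documents are illustrative placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_precedents(draft_text, precedent_texts, top_n=5):
    """Return indices of the top_n precedents most similar to the draft."""
    corpus = [draft_text] + list(precedent_texts)
    vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Row 0 is the draft; score it against every precedent.
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
    ranked = sorted(range(len(precedent_texts)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_n]


# Illustrative usage with placeholder precedents.
precedents = [
    "Share purchase agreement with deferred consideration and warranty caps.",
    "Software licence agreement covering SaaS subscription terms.",
    "Loan facility agreement with floating charge security.",
]
draft = "Draft share sale agreement including an earn-out and warranty limitations."
print(rank_precedents(draft, precedents, top_n=2))
```

A real system would obviously need to connect to the DMS, respect access permissions and matter confidentiality, and use richer semantic representations, but the basic shape is a similarity search triggered by the document you already have open, rather than a manual keyword search.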
But the problem wasn’t as big as I thought
I went to potential users to test the above thesis. The feedback was positive. Everyone loved the idea of a faster, easier way to get good precedents, drawing from their firm’s collective knowledge.
But it was a trap.
In 90% of cases (or probably more), experienced lawyers do not have any difficulty knowing which are the best precedents to use. This is hardly surprising. These lawyers have highly specialised knowledge and expertise. It would be worrying if they didn’t know the best precedents to use for any given matter within their area.
Nor do lawyers have trouble retrieving those precedents. I pushed hard here (with plenty of leading questions), but the large majority of lawyers said it was quite easy to search and find the precedents they were looking for in their DMS, and besides, many just saved their favourite precedents onto their hard drives.
In the remaining 10% (or probably less) of cases, as mentioned above, the fallbacks are to either ask colleagues or to do an exploratory search in the DMS.
Let’s take each of these in turn
While nobody said they liked sending internal group emails asking for help, worried they would be perceived as annoying or spammy, the responses invariably proved those fears misplaced. It turns out most lawyers don’t mind helping their colleagues and are quite happy to respond to these sorts of requests, even at short notice or outside normal hours. Several lawyers emphasised the benefits of discussing problems and queries with colleagues not working on the same matter, and that we should be encouraging more of this sort of collaboration. I appreciate this might not be everyone’s experience, but then again, if a law firm has the sort of culture that means people don’t want to help their colleagues, there are probably bigger problems than an AI can solve.
If asking your team and the entire department hasn’t worked, then we find ourselves in the realm of needing to search the DMS. This is a last resort and rare. Yet even in this situation, and despite complaints about user interfaces or slow speeds, people usually found what they wanted fairly easily or confirmed that it didn’t exist. I really tried to have people tell me that search was difficult to do (or, in start-up language, ‘broken’), and it was for some, but not most.
There’s one more point to address. A number of people mentioned that senior lawyers are not always great at suggesting precedents to their junior lawyers. Many junior lawyers won’t feel comfortable raising this with their senior lawyer, let alone emailing the entire department for help, so they just silently go fishing for precedents in the DMS. I agree this is a problem, but I don’t think technology is the answer. Surely the solution here is to improve supervision and delegation skills, rather than trying to use artificial intelligence to avoid senior lawyers needing to give proper instructions in the first place.
Where did it go wrong?
Given all of this, why then did lawyers tell me they liked the idea of a faster, higher quality precedent retrieval system when the size of the problem was not actually that big?
Well, ask anyone if, hypothetically, they would like something that performs some important activity for them better, and of course, they will say yes. It sounds fantastic, in theory. And I really wanted this to work. But moving someone from their current process to a new solution requires more than ‘better’. The new solution has to have benefits that are so far ahead of the status quo that they justify the pain of switching. I just couldn’t see a big enough problem to solve here. Other people may have different insights and find good opportunities, but for me, it was time to move on.
I didn’t totally kill the idea, though. Instead, I pivoted to focus on the other big intellectual property assets of law firms: templates, guides, training material and other knowledge management resources. Could that be a better angle?
If you want to find out, please feel free to follow me on LinkedIn, and in the meantime, check out Part 1, Part 2, Part 3 and Part 4 of this series on legaltech trainwrecks.
Daniel Yim writes and teaches on legal technology and transformation. He is the founder of technology start-ups Sideline and Bumpd, and previously worked at Gilbert + Tobin and Axiom.