
While law firms struggle to keep a lid on AI security threats and otherwise control lawyers using AI out of turn, Linklaters has a forward-thinking solution that more firms should embrace. As RollOnFriday notes, Linklaters puts its prospective robot lawyers through a UK law exam of its own design, crafting 50 questions across 10 different practice areas geared toward a “competent mid-level lawyer specialised in that practice area” and asking the AI to hash it out.
It’s the Baby Bar for robots. So, like, a Nano Bar? Whatever it is, it’s working better than the California Bar.
It’s not like Linklaters is alone on this. Tech professionals across Biglaw perform tests like these all the time. The distinction is that Linklaters is doing this out in the open. And senior lawyers graded the responses for substance, bringing into the AI evaluation process important stakeholders who can get overlooked when tools are considered exclusively by IT teams and firm tech committees. Which isn’t a knock on the conscientious approach firm staff and tech-savvy attorneys bring to these decisions but… sometimes you need to take these issues outside the nerd circle.
The good news for firms is that AI is getting better:
Linklaters noted there was a “significant improvement” in the results when compared with tests it ran in October 2023. The firm said that the AI models are starting to perform at a level where they should be able to assist in legal research, such as providing a first draft or “as a cross-check on an existing answer,” and could also be useful “for tasks that involve summarising relatively well-known areas of law”.
OpenAI delivered the best-performing AI model on the Linklaters test. That said, the tool only scored 6.4 out of 10 and bet-the-company lawyering doesn’t grade on a curve. The report doesn’t provide Grok’s score, but based on its stated approach to legal reasoning, we’re guessing it made OpenAI look like William Blackstone crawled out of the grave for a spot of tea and a run at some contracts.
But putting AI through the wringer in such a public way signals to lawyers that the firm sees potential in AI while still making the clear case to everyone that AI isn’t ready for prime time.
This approach should push back against the “shadow AI economy” problem Hill Dickinson recently addressed. When firms downplay or outright shun AI, unruly lawyers are going to start experimenting with AI tools on their own. And that’s how you end up with confidential client data being uploaded to random servers in hostile countries. Transparency keeps everyone on the same page when it comes to when and where AI fits into a modern legal practice.
Linklaters makes AI sit law exams [RollOnFriday]
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.