Virtually every industry and profession will be touched by the rise of AI, and the legal profession is no exception.
At Holding Redlich, we’re incorporating AI into our practice using LexisNexis’ Lexis+ AI tool, which can generate legal briefs and correspondence and provide guidance on case law – all with human oversight.
The Victorian Supreme Court also recently issued guidelines on the responsible use of AI in the courtroom and in litigation. The main takeaways from this guidance are that lawyers need to be intimately involved in reviewing any material generated by AI, and that legal practitioners are better served by legal-focused AI tools – like Lexis+ AI – than by general-purpose generative AI tools such as ChatGPT or Google Gemini.
Getting into the nuts and bolts of AI
When ChatGPT exploded onto the world stage almost two years ago, it captured public attention because it was generative AI, otherwise known as GenAI, and seemed capable of producing surprising, and sometimes creative, results. GenAI is trained on massive data sets, often scraped from the public internet; however, there are ongoing legal cases over whether the companies building GenAI services have breached copyright laws in training their models.
Since then, most of us have become familiar with entering a prompt into a GenAI chatbot – a request for a cover letter, ideas for a kid’s party or a travel itinerary for a European summer – and getting a response that’s generally helpful and informative. Lexis+ AI works in largely the same way, except it is trained on millions of legal documents and precedents stored in LexisNexis’ database and can respond to a lawyer’s natural language query with a comprehensive reply.
The biggest problem with GenAI is that, just as its name suggests, it generates its answers – and it can sometimes make up plausible but false information or, as it is known in the industry, “hallucinate”. When GenAI does hallucinate, it is very difficult for the user to tell, because the response is presented in a way that seems both authoritative and believable. The hallucination problem is one of the reasons the Supreme Court of Victoria is cautious about the wholesale embrace of general-purpose generative AI in litigation.
There is another, less well-known type of AI that is also far less likely to hallucinate: “extractive AI”. With extractive AI, the software consults known sources and compiles a response, much like an information professional would if you asked them for a reference from a specific subject-matter resource. The information professional would use Boolean connectors and other traditional research methods before coming back to you with an answer from a verifiable source.
So, when extractive AI is prompted with a question, it consults databases of known information and returns a response drawn directly from them. Because the information it relies on is known and verifiable, there is far less chance of it simply making up a believable-sounding answer. Put another way, extractive AI probably won’t hallucinate, while there’s a good chance GenAI, at least in some circumstances, will.
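To make that distinction concrete, here is a deliberately simplified sketch of extractive retrieval, written in Python. The mini-corpus, the case labels and the extractive_search function are hypothetical illustrations rather than a description of how Lexis+ AI or any commercial tool actually works – the point is simply that every answer is a verbatim passage traceable to a named source.

# A minimal sketch of extractive retrieval: every answer is a verbatim
# passage from a known source, never newly generated text.

# Hypothetical mini-corpus standing in for a verified legal database.
DOCUMENTS = {
    "Case A (hypothetical)": "The court held that the notice period given was reasonable.",
    "Case B (hypothetical)": "An executor must act in the best interests of the beneficiaries.",
    "Case C (hypothetical)": "What counts as reasonable notice depends on the circumstances of the parties.",
}

def extractive_search(query: str) -> list[tuple[str, str]]:
    """Return (source, passage) pairs whose text contains every query term, i.e. a simple Boolean AND search."""
    terms = query.lower().split()
    return [
        (source, passage)
        for source, passage in DOCUMENTS.items()
        if all(term in passage.lower() for term in terms)
    ]

# Every result can be traced back to a citable source document.
for source, passage in extractive_search("reasonable notice"):
    print(f"{source}: {passage}")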
Integrating AI into legal practice
In the last year, a new technique called Retrieval Augmented Generation, or RAG, has emerged as a potential way of mitigating the hallucinations that can plague GenAI. RAG, in many ways, combines the best of extractive AI with the strengths of GenAI: before the generative model answers, the system retrieves relevant material from a reputable source and supplies it alongside the query, so the response is grounded in known facts rather than generated from scratch.
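Sketched in the same hypothetical Python terms as above, a RAG pipeline might look something like the following. The retrieve and build_rag_prompt functions are illustrative assumptions, not a description of how Lexis+ AI is built: retrieval over a trusted corpus happens first, and the generative model is then asked to answer using only the retrieved, citable passages.

# A minimal sketch of Retrieval Augmented Generation (RAG): retrieve passages
# from a trusted store first, then ask the generative model to answer using
# only those passages and to cite them.

def retrieve(query: str, documents: dict[str, str], top_k: int = 3) -> list[tuple[str, str]]:
    """Rank documents by how many query terms they contain and return the best matches."""
    terms = query.lower().split()
    scored = [
        (sum(term in passage.lower() for term in terms), source, passage)
        for source, passage in documents.items()
    ]
    scored.sort(reverse=True)
    return [(source, passage) for score, source, passage in scored[:top_k] if score > 0]

def build_rag_prompt(query: str, documents: dict[str, str]) -> str:
    """Augment the user's question with retrieved, citable context before it reaches the model."""
    context = "\n".join(f"[{source}] {passage}" for source, passage in retrieve(query, documents))
    return (
        "Answer the question using ONLY the sources below, and cite them.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )

# The augmented prompt is what gets sent to the generative model; the actual
# model call (to whichever vendor API is in use) is deliberately omitted here.
corpus = {"Case A (hypothetical)": "The court held that the notice period given was reasonable."}
print(build_rag_prompt("reasonable notice", corpus))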
This combination of retrieval and generation has the greatest potential for legal professionals looking to incorporate AI into their practice, because it can draw on verifiable legal resources while retaining the flair and fluency GenAI is known for.
As it stands, it is still early days for RAG, so the best advice for legal professionals is to rely on an AI tool tuned to the profession. The Lexis+ AI technology we are adopting into our practice at Holding Redlich is exactly that.
It is also important, as the Victorian Supreme Court notes, for legal professionals to always have oversight of the material generated by AI. Using a legal-focused AI tool doesn’t exempt lawyers from using their own judgement about whether what’s been generated is factual and can be relied on in court.
Another major issue with GenAI is that it is effectively a black box – researchers do not really understand what goes on inside the software, or how it arrives at its answers. Despite this, the Victorian Supreme Court’s guidelines state “parties and practitioners who are using AI tools in the course of litigation should ensure they have an understanding of the manner in which those tools work, as well as their limitations.”
Finally, where AI has been used to assist with a legal task in litigation, lawyers should disclose that fact to all relevant parties, including the judge.
AI will inevitably make lawyers’ lives easier and their work more productive, as one of our lawyers recently found when they slashed the time it took to research case law from four hours to just 30 minutes using Lexis+ AI. But, as lawyers, we must also be mindful of the limitations of these tools – and that using them is no substitute for good legal judgement and deep experience.
By Keren Smith, Chief Knowledge Officer, Holding Redlich
Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.