Artificial intelligence is going to make legal work faster and more efficient, but it needs help from lawyers and legal professionals to do so.
As a Research Director in Thomson Reuters Research and Development group, I work firsthand with technologies like machine learning, natural language processing and artificial intelligence (AI). Since we work on creating solutions for customers, I also meet many intelligent and experienced legal professionals. I’m often asked whether the technologies we work on – particularly AI – are going to send these professionals to the same part of history books as milkmen and stenographers.
The short answer: No.
Still, to allay any remaining concern, it may help to expand on that short answer.
Getting out what gets put in
An overlooked feature of machine learning technology is that we get out of it what we put in. With respect to AI capabilities developed for the legal market, that means we need legal knowledge – human legal knowledge, the kind gained from years of education, experience and working with legal clients – to build and shape the applications, to train the AI models, and to evaluate their performance and fine-tune their usefulness. As effective as AI can be, it must be harnessed and channeled. If it’s for the purpose of legal work, lawyers and legal professionals are the ones to help us do that.
In addition to real, live attorneys who help us understand the legal domain and craft AI models to operate in the legal space, Thomson Reuters has over 150 years of editorial enhancements to our legal content that we can use as features in our models. Over the years, we've learned to use these editorial enhancements – like Headnotes and the Key Number System – and other metadata found in our legal content to build more domain knowledge and context into the machine-learned models and natural language processing modules that power Westlaw and other Thomson Reuters products.
Finding the right remedy
On that note, it’s important to remember that AI is by no means the answer to everything. Sometimes I think of it as a shiny, new hammer. Like hammers, AI algorithms and tools are readily available, but they are useless unless we know how, where, and when to use them effectively. It’s tempting to reach for that hammer for every problem, but that doesn’t mean we should. Technology like AI is not immune to the Hype Cycle: people get excited about its early promise, over-apply it to everything, get discouraged when it doesn’t work well after being shoehorned into this use or that and, finally, land on intelligent, natural ways to use it. AI has, in fact, been through this cycle several times before, enduring two previous “AI winters” in the 1970s and 1980s, during which interest and funding for AI dried up due to a perceived lack of progress.
These technologies are not new. The Thomson Reuters Research & Development group has been working on these same things – machine learning, natural language processing, and AI – for over 20 years. The math hasn’t changed, and most of the algorithms are not new; the difference is that computers are faster and the data used to train models is bigger and more readily available. As a result, the pace of advances in AI has sped up, and it has become practical for a computer to perform many of these tasks while interacting with a person in real time. Many of these capabilities are composed of tasks we’ve been able to do for years, just not in parallel, at scale, or fast enough to put in real-world applications. You might have been able to ask a computer a question, but you may have had to wait hours (or days) for a reply. Now that we can run the many component tasks quickly and in parallel, stacking them in pipelines, one after the other, to mimic many types of human tasks and decision-making in real time, AI as people have imagined it in comic books and movies is becoming a reality.
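As a toy illustration of stacking component tasks into a pipeline, consider the minimal sketch below. Every function and the tiny vocabulary are hypothetical, invented for this example; real legal-research systems chain far richer stages (parsing, entity recognition, ranking, summarization) in the same feed-forward fashion.

```python
# Toy sketch: chaining small NLP component tasks into a pipeline.
# All names and the vocabulary are illustrative, not real product code.

def tokenize(text):
    # Split raw text into lowercase word tokens.
    return text.lower().split()

def tag_legal_terms(tokens, vocabulary):
    # Flag tokens that appear in a (tiny, hypothetical) legal vocabulary.
    return [(tok, tok in vocabulary) for tok in tokens]

def extract_flagged(tagged):
    # Keep only the flagged legal terms as a crude "summary".
    return [tok for tok, is_legal in tagged if is_legal]

def pipeline(text, vocabulary):
    # Each stage feeds the next, one after the other.
    return extract_flagged(tag_legal_terms(tokenize(text), vocabulary))

terms = {"negligence", "tort", "liability"}
print(pipeline("The court weighed negligence and liability claims", terms))
# → ['negligence', 'liability']
```

Each stage here is trivial on its own; the point is the shape of the composition. Running such stages quickly, and many of them in parallel, is what turned long-standing research components into real-time capabilities.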
When we work with AI, our goal is not to make lawyers obsolete. Far from it. Rather, our aim is to make their jobs faster, easier, and more intuitive. With legal research, for example, our objective is to make it faster and more effective: to automate as much of the boring stuff as we can so an attorney can spend time on the hard, rewarding parts of the job. This means making it easier to find information through more natural interactions, helping to sift through materials – to rank, summarize, and prioritize them for the user – and taking users to the end of their process by helping them make a decision or by answering their question. At each stage in this workflow, there’s an opportunity to introduce AI and make work easier and faster for knowledge workers, like attorneys.
It’s in the background
If there’s still any lingering concern that AI is going to supplant, rather than supplement, legal professionals, think about this: We have AI woven into many of our Thomson Reuters Legal products already. If you didn’t know that, it means we’re using AI the way we should – to complement the work our customers are doing.
It’s an exciting time in AI research. We’re starting to see more knowledge workers like attorneys (and tax and financial professionals, as well as government workers and academics) integrate AI into their workflows and homes. In the near future, researchers like me will continue to refine AI use cases in deep vertical domains like Legal, while at the same time paying attention to issues like transparency, ethics, responsibility and potential.