

Examining the impact of algorithms in the courtroom

Gregg Wirth  Content Manager, Legal Executive Institute


If new technology can actually increase access to justice, how can we ensure that such access actually results in equal justice for all?

Imagine a courtroom of the future in which a federal prosecutor has all the ramped-up capabilities, research depth and legal knowledge that artificial intelligence (AI) can muster. An impressive use of innovation and technology, right?

Now imagine that prosecutor turning that AI-honed legal prowess on an indigent defendant, represented by a single legal aid attorney who has none of those things.

Now imagine an inmate being refused parole on the basis of an algorithm that has determined the inmate is more likely to re-offend. What the defense team doesn't know is that the algorithm draws its conclusions from a data set riddled with bias.

As the legal world rushes towards technical efficiency where AI and machine learning can hold tremendous sway in the power balance of how criminal and civil cases are heard and decided, where do the ethics of the law and the concept of justice come in?

And if new legal tech can actually increase access to justice for citizens in countries like the US and the UK, how can we ensure that such access actually results in equal justice for all? How do we ensure that the technology itself does not become a proxy for fairness, adhered to without question or, by its very design, not open to question?

It’s these types of questions, and many others concerning the impact that legal technology, AI and algorithms will have on issues of access to justice, legal ethics, human rights, and fair legal representation that the Law Society of England and Wales is seeking to address with its new Public Policy Commission on Algorithms in the Justice System.

“There are really two broad buckets of interest in this area,” explains Sophia Adams-Bhatti, director of legal and regulatory policy at the UK Law Society. “One is around the practical implications of technology and the law, and how the legal practice will change. But, you also have to deal with the impact on the law itself as a result of developments in technology and AI. This is where the legal ethics questions become really very interesting.”

The Commission will focus its examination on the use of algorithms in the justice system in England and Wales, although it will "take appropriate account of international developments" as well, according to the Law Society website. One question at the heart of the matter will be what controls, if any, are needed to ensure that trust and basic human rights are protected in the justice system.

The three-person Commission will be chaired by Law Society President Christina Blacklaws and include Sofia Olhede, a professor of statistics at University College London, and Sylvie Delacroix, a professor of law and ethics at the University of Birmingham.

Looking for evidence

Commissioners will be holding three Evidence Sessions — the first was held this week — at which a wide range of multidisciplinary experts will present evidence on the question of how algorithms and their use within the justice system impact on human rights, and what measures are needed as a consequence. “This was deliberately designed to be a conversation that we as a Law Society could curate, but through which we bring together the voices of the various stakeholders,” says Adams-Bhatti. “We’ve designed it so that we bring together the technology, the academic research community, the ethicists, the lawyers, the voice of the citizens, political science and the voice of the law enforcement agencies, all to the table to bring their perspectives.” The Commission decided to focus on criminal justice at the outset because of the vast amounts of research being done, the numerous practical and human rights applications available, and the implications that have yet to be really worked through, she adds.

Adams-Bhatti said once the Law Society began looking at the impact of technology on the law and on how people are interacting with the law and the courts, it became clear to the group that the focus of this exercise should be on the power of machine learning. "Machine learning and AI-empowered processing is increasingly where the real potential benefits of technology and the law start to bite," she explains. "The ability to turn huge amounts of data into knowledge and understanding that we couldn't do any other way — certainly, manually we couldn't do this — it's an opportunity there for the taking."

For a government law enforcement agency, for example, the ability to harness the knowledge locked in what was previously a massive amount of unstructured data, and to mine it so the agency can deliver on its public duty more easily, at greater speed, and at nominal cost compared to hiring 100 or 1,000 workers to sift through that data, is a tremendous benefit, Adams-Bhatti says. "You can see the vast appeal of that."

A need for balance

However, Adams-Bhatti argues that that benefit has to be balanced against the need to ensure that tech-enhanced prosecutorial power doesn't come at the cost of the rule of law. It is vital that our justice system is transparent and understandable, allows individuals to defend themselves in court, and upholds the core tenets of equality in the eyes of the law, she explains.

"We have to say, 'Okay, if these tools are available, how do we ensure they don't corrupt the very purpose for which these agencies are trying to deploy them?'" Adams-Bhatti says, giving the example of advances in facial recognition technology and surveillance. She notes that while some make the point that facial recognition might be a great way of combating crime in certain areas, and in the right hands might be quite useful, it raises the question of why the state should be collecting data about where its citizens are traveling for no reason other than to collect data. "Surely, as a libertarian society, people should be free to walk around without being surveilled."

Indeed, it's these sorts of questions and ethical dilemmas that legal experts have not had the opportunity to explore in a structured way — which underscores the importance of the Commission's core mission. "It struck me that we had not only the opportunity to look at this from a human rights perspective, but to really focus in on the great opportunity, or at least the potential promise, of AI in society as a whole and whether or not we can help settle some of the principal questions that sit at the heart of this debate," Adams-Bhatti offers.

Those interested in making contributions, submitting written evidence, or applying to appear at one of the four planned public evidence sessions taking place over the course of the next year should contact the Commission via its dedicated email Commission@lawsociety.org.uk.

This article was originally published on Legal Executive Institute.


Learn more

Explore more insights on how artificial intelligence will affect the legal profession.
