Thomson Reuters

Should artificial intelligence be regulated?

Artificial intelligence (AI) is making significant waves across the globe, with experts predicting that it will increasingly reshape the way people live their daily lives. AI is also likely to shake up the legal industry, triggering a profound shift in the delivery of legal services. With such potential to drive seismic change in both ordinary life and professional services, however, debate has arisen over whether AI should be regulated.

Professor Sylvie Delacroix, of the University of Birmingham, spoke to Thomson Reuters’ Legal Insights about her views on AI and the call for regulation.

How significant is artificial intelligence and its role within society?

The most significant development today is the extent to which we are capable of gathering and exploiting data to develop new kinds of knowledge which radically transform the way we live, for better or for worse. I think the term 'AI' can be somewhat misleading: at the heart of recent headline-grabbing applications (from self-driving cars to automated diagnosis tools) are data-exploiting algorithms that are only remotely connected to the quest to develop machines that emulate – and surpass – human intelligence.

You have mentioned before that there is a need to regulate AI. Why do you think this is necessary?

Regulation will never be more than a small, yet important, part of the answer. Given the speed with which algorithmic tools and practices are transforming the meaning of things like privacy or consent, bottom-up approaches are essential: these can be industry-sponsored (like the Partnership on AI) or community-led (see, for instance, discussions surrounding the idea of 'grassroots' data trusts).

For regulation to be adequate and successful, we first need to have worked out as a society how much, and why, we care about particular forms of privacy, consent or professional practice. If, say, we were able to develop some automated, virtual 'GP bot' making frontline medical services both accessible and highly reliable, should we pause and consider what values, if any, cannot be perpetuated in a virtual, automated consultation? Yes, we should – only we are not used to doing that. All too often, technological innovations are deemed desirable on the basis of a simple cost/safety/quality/attention-grabbing optimisation function. If these innovations are subsequently found to have generated significant vulnerabilities or inequalities, regulation cannot reverse-engineer them. Today's regulatory efforts to address the power asymmetry between data controllers and data subjects, for instance, can only go so far.

What are the legal complexities which have arisen since the emergence of AI?

Much attention has been devoted to the 'legal responsibility challenge' underlying self-driving cars: in many ways I think that is one of the easier problems to solve, as there is a very pragmatic solution in compulsory no-fault insurance. In contrast, developing ways of preserving meaningful consent and/or privacy in a data-driven economy selling highly personalised products and services is a thorny challenge.

What can lawyers do now to address the evolving nature of AI and the anticipated regulatory needs?

First and foremost, lawyers need to get their heads out of the sand and educate themselves. It is shocking that most students graduating from law school today have little or no understanding of the data they leak on a daily basis – whether through social media or mundane online shopping – what uses it can be put to (given different types of algorithms), and what limited rights they have under current data protection regulation and will have under the General Data Protection Regulation (GDPR), which comes into force on 25 May 2018.

How do you envisage AI changing the legal profession in the next 10 years?

Decision-support systems have the potential to greatly enhance the legal profession, particularly its ability to live up to its societal and ethical responsibilities. For that potential to materialise, however, the legal profession needs to reflect upon the values it serves and proactively engage with those systems' designers.

It was announced in the Government’s Autumn Budget that the first ever national advisory board for AI will be established to set the standards for the use and ethics of AI and data. What are your thoughts on this move?

It is an important initiative, and it is crucial that all sectors (computer scientists, professionals of all kinds, academics, and generally all citizens) actively support and critically engage with the work of this advisory board.


Professor Delacroix focuses on the intersection between law and ethics, with a particular interest in machine ethics and agency. Her research seeks to bridge the gap between ongoing work into the non-cognitive roots of ethical agency – including habits – and the assumptions currently presiding over the design of both decision-support and 'autonomous' systems meant for professional or morally loaded contexts. She also researches the effect of personalised profiling and ambient computing on our ability to trigger change in our social practices. Professor Delacroix's work has notably been funded by the Wellcome Trust, the NHS and the Leverhulme Trust, from whom she received the Leverhulme Prize in 2010.
