
Artificial intelligence

Is it time to regulate robots?

Nayeem Syed  Assistant General Counsel at Thomson Reuters


As we move toward wider acceptance of the application of AI, including to the diverse fields of transportation, medical care and knowledge management systems, politicians have quickly turned to questions of design, ethics and liability.

In the 1940s, Isaac Asimov created The Three Laws of Robotics (and later, retroactively, a fourth preceding one), which together form an elegant and widely built-upon framework that seeks to address the fear that society will become too dependent on machines and that machines will become too powerful in decision making.

The key fear that technologists – and now policymakers – are addressing is that overreliance on robots will inevitably create unintended consequences and that a race toward ever greater innovation can drive harm, either intentional (hacking) or unintentional (bad design).

The legal theories to address responsibility will occupy lawyers, but clarifying liability fails to address or prevent the actual and irreversible physical or financial harm to the entirely innocent. That remains a data management and engineering challenge.

Europeans seek to get ahead of, and shape developments

The Committee on Legal Affairs of the European Parliament recently published a motion with recommendations to the European Commission on Civil Law Rules on Robotics. It concludes that recent developments mean robotics can no longer be dismissed as science fiction, and that the field has potentially significant implications for both consumers and businesses across Member States. It is therefore important to create a legislative instrument governing robotics and artificial intelligence that would encourage and facilitate medium-term developments in line with European values, and that could be used to track progress and enable adaptation over time. It would also mean that European industry would not simply be forced to accept standards determined by proactive third states.

Legal personality

The motion recommends creating a specific legal status for robots, and thereby assigning them specific rights and obligations, in particular where they make autonomous decisions and interact with third parties. This is controversial; extending legal status to a new category of entity will require much further analysis and debate. A central question will be whether legal personality should be based on consciousness; on the presence of one or more legal persons acting as a single entity; or simply on the fact that an algorithmically-driven process ceases to depend on a third party and is capable of (and indeed designed for) operating with others entirely on its own.

Common to most of the developed world’s judicial systems is the concept of a legal person. About a hundred and fifty years ago, to shield entrepreneurs from personal liability, the modern limited liability corporation emerged and was afforded the status of legal personhood. As a result, it could buy and own things in its own name. This drove large investment, creating global growth and individual prosperity. It could also be sued and prosecuted. However, as we have seen, this has rarely resulted in greater compliance with laws or the reduction of harm to the environment. Therefore, over the coming years, a much more compelling case would need to be made for creating a new category of legal persons.

Ethical by design

It is of course preferable that designers of systems define rules based on principles of beneficence and non-maleficence to ensure desirable outcomes; that is very much a framework we recognize and find reassuring. However, it may be an illusion. The challenge with artificial intelligence is that progress in this area necessarily means that such machines are intended to learn by doing and to apply decision rules to complex and unique fact patterns, often in time-limited situations.

For example, an algorithm may be specifically designed – and required – to assess and decide outcomes in order to minimize damage to persons and property. Consider an autonomous truck faced with a passenger car full of elderly people crossing into its lane; it must decide between alternative actions such as sacrificing its highly flammable cargo, best preserving either its own occupants or those of the other vehicle, or simply following the rules of the road. Mercedes recently had to “clarify” one of its executive’s statements on its programming design approach after he seemed to suggest the German firm, the world’s largest luxury car manufacturer, would prioritize its passengers. Should a designer of a program setting the underlying decision rules comply with a regulatory code or adopt certain protocols?
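To make the design question concrete, a damage-minimizing decision rule of the kind described above could, in its simplest form, look like the toy sketch below. Everything here is hypothetical – the action names, the risk scores, and above all the weights – but it shows where the ethical choice actually lives: in the numbers a designer (or a regulator) assigns before the vehicle ever faces the decision.

```python
# Illustrative sketch only: a toy weighted-cost decision rule,
# not a real autonomous-driving policy. All names and scores are invented.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harm_to_persons: float    # hypothetical risk score in [0, 1]
    property_damage: float    # hypothetical cost score in [0, 1]


def choose_action(actions, person_weight=1000.0, property_weight=1.0):
    """Pick the action minimizing a weighted harm score.

    The weights encode an ethical judgment (persons vastly outweigh
    property) that a designer or regulator would have to set explicitly.
    """
    def cost(a):
        return person_weight * a.harm_to_persons + property_weight * a.property_damage
    return min(actions, key=cost)


options = [
    Action("stay_in_lane", harm_to_persons=0.9, property_damage=0.2),
    Action("swerve_and_sacrifice_cargo", harm_to_persons=0.1, property_damage=0.9),
]
print(choose_action(options).name)  # → swerve_and_sacrifice_cargo
```

The interesting regulatory question is not the `min` call but the weights: set `person_weight` differently for the truck's own occupants versus third parties and you have encoded exactly the Mercedes controversy in two parameters.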

Moral Machine from MIT is a game in which you decide outcomes in various versions of the “trolley problem,” and it then displays analytics about your ethics. It is a simple demonstration of one of the very many real challenges businesses now face as they prepare for complex commercial projects where much is at stake, whether physical life or entire financial systems.

Registration and compulsory insurance

The motion also suggests a system of registration, administered by a specially created agency that would also work toward harmonized international standards with the International Organization for Standardization (ISO) to prevent a fragmented internal market. In addition, it suggests exploring a scheme of compulsory insurance, modeled on existing motor insurance schemes that ensure victims of uninsured drivers are not left without financial recourse.

Engineers and policymakers must collaborate

The motion’s recommendations cover logical ground but, in reality, are simply a starting point. Still, the initiative will help build a more stable and predictable business and scientific research environment where investment capital can be deployed more confidently, innovation hubs can be established and centers of excellence can thrive.

As we have seen with successful fintech communities, where a group of motivated actors concentrates, highly beneficial network effects often ensue. If regulators consciously create predictable operating conditions, then academic and business leaders can collaborate with policymakers and attract the necessary third parties and investment to experiment successfully and accelerate the future state of AI, which promises so much.
