
Driverless car crashes and data theft: court cases of the not so distant future

What happens when your smart home locks you out, or a smart thermostat causes a fire? New technologies bring challenges for the law over liability for flawed software, hacked devices and identity theft.

The rise of technologies such as driverless cars, the Internet of Things (IoT) and smart cities will result in a proliferation of legal cases to establish who is responsible for automated, intelligent devices, while hackers and fraudsters take advantage of such innovations to find new ways to pry money out of people and companies. Meanwhile, in a bid to keep pace, regulators are writing new laws that require interpretation, while the courts re-imagine existing laws for the connected age. Technology is progressing at what seems like an ever-increasing rate. So, is the law as it stands able to provide clarity in this brave – and complicated – new world?

Safer roads?

Driverless cars are hurtling into the present, promising safer roads without attentive humans behind the wheel. But there’s still work to do: on the same day that Google’s Waymo announced its driverless cars had been approved for public testing without a human behind the wheel, a Navya driverless shuttle in Las Vegas took no evasive action to prevent a lorry from reversing into it.

In the UK, driverless vehicles are already being tested in Milton Keynes, Greenwich and elsewhere, with varying levels of automation. While it’s likely to be many years until fully driverless cars take over, UK Transport Secretary Chris Grayling believes completely self-driving cars will be on British roads by 2021.

Could there be a boon for road safety around the world? According to the National Highway Traffic Safety Administration, 94 percent of crashes in the US are due to human error. Worldwide, says the World Health Organization, 1.25 million people die each year as a result of traffic accidents.

Despite this, one of the most common debates about driverless cars centres on what happens when they are involved in an accident: how do we decide who is at fault? It may not be as difficult as it sounds, says Joseph Raczynski, Legal Technologist and Applications Integrator at Thomson Reuters. “Driverless cars with hundreds of sensors will capture everything that occurred with massive volumes of data, audio and video, which will tell a pretty exact story of the incident,” he says. “This brings about the need for lawyers to be able to tap into this data, contract with experts who can extract the data, and understand the full picture”. That means lawyers will need to understand how code works in order to follow what technical experts are telling them.
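
To make that concrete, below is a minimal, hypothetical sketch (in Python) of an expert’s first pass over such data: merging heterogeneous sensor records into a single incident timeline. The record format, field names and events are invented for illustration and do not come from any real vehicle.

```python
# Hypothetical sensor records recovered from a vehicle after an incident.
# Field names and events are invented for illustration only.
records = [
    {"t": 12.40, "sensor": "lidar",  "event": "object detected ahead"},
    {"t": 13.10, "sensor": "brake",  "event": "no braking commanded"},
    {"t": 12.90, "sensor": "camera", "event": "object classified: pedestrian"},
    {"t": 13.55, "sensor": "imu",    "event": "impact detected"},
]

# Order the records by timestamp to reconstruct the sequence of events.
for rec in sorted(records, key=lambda r: r["t"]):
    print(f'{rec["t"]:6.2f}s  [{rec["sensor"]:>6}]  {rec["event"]}')
```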

Then there’s the so-called ‘trolley problem’, exploring how a car would ‘decide’ whom to hit. “What if the car’s algorithm has to make a decision between crashing to one side or another of a single lane road, ‘choosing’ to hit an older person rather than a child on the sidewalk?” says Raczynski. “Certainly these are cases that we will see argued with the mass adoption of these new transports”.

The degree of automation in a car will affect such cases. Automotive standards body SAE International defines six levels of driving automation, from level 0 (no automation beyond warnings and momentary assistance), through level 3 (drivers can take their eyes off the road but may be required to intervene), up to level 5 (no steering wheel necessary).
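
As a rough illustration, the levels form a simple ordered classification. The Python sketch below encodes them as an enum, with descriptions paraphrased from SAE J3016, plus a helper capturing the point that a human must stay ready to intervene at level 3 and below; the names and helper are this sketch’s own, not part of the standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # warnings and momentary assistance only
    DRIVER_ASSISTANCE = 1       # steering OR speed support
    PARTIAL_AUTOMATION = 2      # steering AND speed support, driver supervises
    CONDITIONAL_AUTOMATION = 3  # eyes off the road, but must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed within a limited domain
    FULL_AUTOMATION = 5         # no driver, no steering wheel necessary

def driver_must_stay_ready(level: SAELevel) -> bool:
    """At level 3 and below, a human must be ready to intervene."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_stay_ready(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(driver_must_stay_ready(SAELevel.FULL_AUTOMATION))         # False
```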

Mid-level systems that let drivers take their hands off the wheel at certain times could prove more contentious than fully automated level 5 cars. “The interesting liability arises if the vehicle has said: ‘Switch back to manual mode’, and the driver doesn’t pick that up quickly enough. Who’s responsible in that scenario?” asks Emma Wright, Commercial Technology Partner at Kemp Little and a contributor to Thomson Reuters Practical Law. “It’s going to be difficult to prove what failed and who was responsible”.

That’s further complicated by external influences, from software bugs and cyber-attacks, to obscured street signs – who is at fault, says Raczynski, if a self-driving car can’t ‘read’ a stop sign because of graffiti?

Hypotheticals turned real

On Sunday 18 March 2018, a driverless car caused a fatality in Arizona, USA. The self-driving Uber car didn’t stop when a pedestrian walked her bike across the road. The system detected the pedestrian about 50 feet away but failed to slow down or swerve, and was travelling at 38 mph on impact. Despite investigation, there is still no clear answer as to what went wrong – but video footage strongly suggests a failure by Uber’s automated driving technology. The governor of Arizona, a strong advocate of the technology, has suggested that Uber could be held criminally liable. The local police chief, however, expressed the opposite view in an interview – that Uber was not at fault. Local prosecutors have yet to decide whether criminal charges are warranted.

Later the same month, in California, the second fatal crash of a self-driving car occurred. The car, controlled by Tesla’s Autopilot system, crashed into a concrete highway divider and burst into flames. The computer logs in the car show that Autopilot was on; the system requires the driver to keep their hands on the wheel and monitor the road, and gives a warning on the dashboard if they fail to do so. As in the earlier crash, the data pulled from the wrecked car indicates there were five seconds and 150 metres of unobstructed view of the barrier before impact. The driver of this car also did not intervene, despite having been given multiple warnings to return their hands to the wheel. In both instances, the drivers seem to have been lured into thinking the driverless system was more capable than it really is. According to the National Highway Traffic Safety Administration, there were no defects in the system, which was operating properly.
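
The hands-on-wheel check described above is, in essence, a monitoring loop with escalating warnings. The Python sketch below illustrates that general pattern only; the thresholds, names and escalation policy are assumptions and do not reflect Tesla’s actual implementation.

```python
# A hypothetical sketch of a hands-on-wheel monitoring loop with
# escalating warnings. All thresholds and names are assumptions.
from dataclasses import dataclass

@dataclass
class MonitorState:
    seconds_hands_off: float = 0.0
    warnings_issued: int = 0

def tick(state: MonitorState, hands_on_wheel: bool, dt: float = 1.0) -> str:
    """Advance the monitor by dt seconds and return the action to take."""
    if hands_on_wheel:
        state.seconds_hands_off = 0.0
        return "ok"
    state.seconds_hands_off += dt
    if state.seconds_hands_off >= 15:       # assumed warning threshold
        state.warnings_issued += 1
        state.seconds_hands_off = 0.0
        if state.warnings_issued >= 3:      # assumed escalation limit
            return "disengage_and_slow"     # hand back control, slow the car
        return "dashboard_warning"
    return "ok"

# Simulate a driver who never returns their hands to the wheel.
state = MonitorState()
for second in range(60):
    action = tick(state, hands_on_wheel=False)
    if action != "ok":
        print(f"t={second + 1}s: {action}")
    if action == "disengage_and_slow":
        break
```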

These incidents may signal a difficult period for driverless car systems, yet many engineers and tech experts remain convinced that removing distracted humans from the driving process will improve road safety. Rather than mid-level systems that require the presence of a human driver, the argument may shift towards fully automated level 5 cars with no human driver at all. The two fatal crashes suggest, though, that the technology still has further to go.

When technology talks

Connected devices have already started to arrive in our homes – from thermostats to voice assistants – as well as our cities, with Google designing an entire ‘smart’ district in Toronto. But while half of Britons own some sort of connected home device, most of those are TVs or entertainment-related, with smart appliances and lighting less popular.

That does not mean that the Internet of Things (IoT) revolution has stopped. Instead, expect gradual evolution, with smart features popping up in devices as and when they’re replaced. The global IoT market is expected to reach $724bn (£550bn) by 2023, with more than 20bn connected devices globally by 2020, according to Gartner.

Consumers are already protected if a smart appliance goes wrong, says Wright. “What is the difference between a connected thermostat or a tumble dryer causing a fire?” she asks.

That said, the complex network of companies designing and supporting IoT devices could make liability difficult to ascertain, says Kate Chandler, Senior Counsel in Disputes and Investigations at Taylor Wessing, noting that software developers, manufacturers, service providers and even consumers themselves could be found at fault. “The consumer might contribute to the damage if they fail to follow the instructions for use and warnings properly, or do not maintain the product adequately by installing software updates”.

Manufacturers might turn to the ‘state of the art’ defence, arguing that their product was as good as it could be at release – though, if a bug could be patched and wasn’t, that may not hold water.

The largest source of lawsuits is likely to be security, says Raczynski, pointing to a string of hacks against security cameras and baby monitors. “All of this happened because most of these IoT devices lacked proper security protocols to protect the device and the home network it sits on,” he says. “This area is undoubtedly the most prime area for suits in the next few years”.

The legal liability issues with IoT become more complex as devices start communicating with one another. In a smart home, a connected thermostat might turn on a plug that powers a floor heater, or an alarm clock might trigger a coffee maker – all without human interaction. If one sends flawed instructions or spreads a virus, determining who is at fault will involve unpicking a network of systems, services and software, says Angus Finnegan, Head of Communications in Information Technology at Taylor Wessing. “Responsibility for such damage or interference will need to be looked at on a case-by-case basis to determine whether any of the suppliers involved were at fault – through negligence or otherwise”.
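
A toy example shows why that unpicking is hard: even a three-device chain produces cascading instructions whose provenance must be traced step by step. The Python sketch below keeps an audit log of which component issued each command; the device names and rule format are hypothetical.

```python
# A hypothetical smart-home automation chain with an audit log recording
# which component issued each instruction - the kind of trail a court
# might have to unpick. Device names and rules are invented.
audit_log = []

def send(source: str, target: str, command: str) -> None:
    """Deliver a command and record its provenance."""
    audit_log.append({"from": source, "to": target, "command": command})

# Rules: (triggering device, event) -> list of (target, command) actions.
rules = {
    ("thermostat", "temp_below_18C"): [("smart_plug", "power_on")],
    ("smart_plug", "power_on"): [("floor_heater", "heat")],
    ("alarm_clock", "alarm_fired"): [("coffee_maker", "brew")],
}

def handle_event(device: str, event: str) -> None:
    for target, command in rules.get((device, event), []):
        send(device, target, command)
        handle_event(target, command)  # commands can cascade further

handle_event("thermostat", "temp_below_18C")
for entry in audit_log:
    print(entry)
# {'from': 'thermostat', 'to': 'smart_plug', 'command': 'power_on'}
# {'from': 'smart_plug', 'to': 'floor_heater', 'command': 'heat'}
```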

Even if users don’t connect their own homes, they’ll still come face-to-face with IoT in cities, offices and even their residences, as landlords future-proof buildings and look to save on costs with smart tech, says Clare Harman Clark, Professional Support Lawyer at Taylor Wessing. “Immediate legal questions arise concerning rights to install technology, as enshrined in existing lease arrangements,” she says, adding that data protection and privacy must be considered in all smart-city implementations. “IoT allows for the collection of considerable data by landlords – by accident or design”.

Barry Jennings, Legal Director in Bird & Bird’s Tech & Comms sector group, adds that privacy concerns and the potential for tracking and surveillance increase with the addition of automation. “Technologies are using artificial intelligence and machine learning to automate decision-making and to reduce or remove human control,” he says. In other words, much of our lives will be overseen and affected by machines that don’t ask us for permission first.

Phish and chips

Fraud isn’t new, but technology is helping it to spread from localised incidents to worldwide phishing and targeted scams. The latest reports from the Office for National Statistics (ONS) show British adults were hit by 3.3m fraud attacks in the year to June 2017, with 57 percent of those computer-related – suggesting fraud is shifting online.
