Are driverless cars vulnerable to hackers?

You may remember some rather unsettling headlines from May this year, when a cybersecurity consultant caused a minor furore by announcing that he had managed to take control of the engines of the plane he was travelling on, by hacking into the onboard wifi system.


Security and aviation experts quickly lined up to deny that this was possible, but by then the story had garnered worldwide media attention, reflecting a widespread public concern: if computers can be taken over remotely, then so can any vehicle with a network-connected computer inside it. Mission Secure, an American security firm involved in cyber defence, agrees. It fears that, with the volume of technology in today’s modern cars, hackers will soon be targeting them.

One of the big arguments behind driverless technology is that it has the potential to make our roads safer. This is of major interest to everyone, from manufacturers to dealerships such as www.jenningsmotorgroup.co.uk, to the car-driving public. Everyone knows that the next big innovation in the motor industry is probably going to be driverless tech.

Google has its own test track in California, and the British government plans to introduce a live testing scheme in several UK cities soon. By 2020, or maybe even sooner, driverless vehicles could be on the roads. But if you’re not in control of your car, could someone else be? Is there any truth in the idea that a driverless car might be vulnerable to hacking?


So why would someone want to hack into a car’s technology system? There are several potential reasons. Firstly, it could be to steal cars remotely, with the thieves themselves far away. Hackers could even disable wireless-connected police vehicles in pursuit. Deliberately crashing a car could be used to settle a score, or to cause mass congestion – researchers at the Universities of California and Washington have in the past managed to infect driverless vehicles with computer viruses and cause them to crash. It could even form part of a terrorist attack. These ideas may seem outlandish, but they are certainly enough to trouble the experts. Google is said to have an entire team of crack programmers hunting for security flaws in its vehicles and working out how to fix them.

Ironically, the more complex we make our in-car technology, often with the worthy aim of making passengers safer, the more loopholes are left for hackers to exploit. One expert from the Institution of Engineering and Technology has said that 90% of applications contain defects – the average vehicle contains some 10 million lines of software code. With the introduction of driverless tech you could easily be looking at hundreds of millions of lines that need to be just right for everything to work perfectly – there are bound to be a few bugs in there somewhere.

So that covers some of the risks. What about the solutions?

The first, and most obvious, is to keep vital driving control functions such as braking and steering offline, unconnected to any network. But that doesn’t exactly tally with driverless technology, which will rely on radar and GPS systems to locate traffic problems ahead and avoid them. The second is to keep as much software in-car as possible and minimise what is online. And the last is to fit cars with black boxes, much as planes have, to establish the cause of any incident and correctly assign blame – system fault, driver error, or external factor.

For the time being though, the prevailing view is that, rare as such events are likely to be, if something is network-connected then dedicated hackers will somehow be able to get into it. Obviously the number of people who would want to do something like this is very low, and the number who have the resources and skills to do it, even if they wanted to, is lower still. So although there is a danger, it may not be something worth getting too worked up over. There are, after all, other issues with the concept of driverless cars – such as drivers’ reaction times slowing – that are more of a concern.
