
We’re not getting autonomous cars anytime soon

Autonomous cars have repeatedly been said to be just around the corner, but there are common, everyday problems that have to be dealt with first

Autonomous cars have been the favourite child of automobile companies and Silicon Valley tech giants for a while now. Even though this dream has been in the making for some time, the last few years have seen a lot of chatter around it, with the first batch of tests on public streets getting the go-ahead earlier this year in a few first world countries. Since then, everyone and their uncle seems to be claiming that a driverless future is right around the corner. That would be true only if the corner weren't actually a mile wide and riddled with potholes, jaywalkers and bird shit along the way.

It is not news that autonomous cars are not perfect. But what bothers us is the state they are in right now and how they are being portrayed. Some of the problems that can throw an autonomous car off track are extremely common and happen every day. For instance, numerous issues pop up when you completely remove the human element from the car. Take a busy intersection on a highway, where traffic is present from all directions at any given time. When the signal to turn right comes on, an autonomous car cannot make eye contact with an oncoming human driver in the opposite lane to judge whether it should make the turn before or after they pass. This lack of perception when it comes to intention has already led to accidents in autonomous car testing.

Autonomous cars can’t be pricks and that’s a problem

Sometimes, hand signals from traffic police or other managing authorities form a crucial part of navigating the roads. If the situation is especially chaotic, a human driver can look to the traffic police for guidance to make sure they don't add to the chaos and can make their way out of it peacefully. An autonomous car, on the other hand, even if it has the technical ability to understand hand signals, might not be able to accommodate the human variation in the same signal being made by different people. That is, if it can even make the judgement call to put the traffic rules aside and follow the human navigator in the first place.
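As a rough illustration of the variation problem, consider the sketch below. Everything in it is hypothetical – the threshold, the labels and the confidence numbers – but it shows the uncomfortable judgement call such a system is forced to make when the same gesture comes out slightly differently from different people:

```python
# Hypothetical sketch: the same "proceed" gesture performed by two
# different officers yields different classifier confidences, and the
# car still has to pick a cut-off for overriding the written rules.

GESTURE_THRESHOLD = 0.85  # an assumed cut-off, not any real system's value

def interpret_officer(gesture_confidences: dict[str, float]) -> str:
    label, confidence = max(gesture_confidences.items(), key=lambda kv: kv[1])
    if confidence < GESTURE_THRESHOLD:
        # Below the threshold the car falls back to the written rules -
        # exactly the wrong move when an officer is waving it through.
        return "obey_traffic_rules"
    return label

# Two people making the "proceed" signal slightly differently:
print(interpret_officer({"proceed": 0.91, "stop": 0.05}))  # -> "proceed"
print(interpret_officer({"proceed": 0.78, "stop": 0.12}))  # -> "obey_traffic_rules"
```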

[Image: a crowded street. Would an over-polite autonomous car ever make it through such a street?]

While Google has announced that its autonomous car can recognise a hand signal from a cyclist, a four-way traffic jam is a completely different ballgame. Additionally, as George Filip at the University of Nottingham, UK, puts it, giving cyclists and pedestrians the ability to stop autonomous cars could end up gridlocking cities. Whoever thought that overly polite autonomous vehicles would end up being a problem?

The trolley problem

It is fairly well established that putting autonomous vehicles on the road would reduce road accidents by some margin. But this raises a new issue – how will the vehicle deal with the accidents that do happen? This brings us, and the automobile industry, to the trolley problem.

[Image: the trolley problem represented pictorially. What choice would you make?]

A thought experiment in ethics, the trolley problem in its simplest form is as follows:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:


  • Do nothing, and the trolley kills the five people on the main track.
  • Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

This is where autonomous vehicles cannot, as of now, compete with or surpass humans. Human instinct would allow a driver to factor in a number of things like the apparent age of the people on either track, their social status and so on – but most importantly, the value of life. Even if a vehicle is, on a given day, able to distinguish between the different people on the road, how can it be programmed to weigh the value of life in a situation like this? The answer is not clear – not now, and not, it seems, in the near future.
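To see why this is a programming problem and not just a philosophical one, consider the deliberately crude Python sketch below. The names and numbers in it are ours, not any manufacturer's, but any implementable policy ends up writing something like them down:

```python
# A deliberately crude sketch of why the trolley problem resists
# programming. Every name and weight here is hypothetical: the point is
# that ANY implementable policy forces someone to commit numbers to code.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int

def choose_action(outcomes: dict[str, Outcome]) -> str:
    # A naive utilitarian policy: minimise expected fatalities.
    # Note what it cannot express: age, intent, responsibility,
    # or any notion of the "value" of an individual life.
    return min(outcomes, key=lambda a: outcomes[a].expected_fatalities)

trolley = {
    "do_nothing": Outcome("stay on the main track", expected_fatalities=5),
    "pull_lever": Outcome("divert to the side track", expected_fatalities=1),
}

print(choose_action(trolley))  # -> "pull_lever", by arithmetic, not ethics
```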

To roadkill or not to roadkill?

One of the primary gaps in computer vision right now is the inability to infer what's inside an object. So, if a bag is suddenly thrown in front of a self-driving car, the car has no way of knowing whether it is empty or filled with bricks. The wrong last-moment decision, in either case, could be fatal to the human passengers within.
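A hypothetical sketch of what a typical camera-based detector hands to the planning system makes the gap obvious – there is simply no field for what the object contains:

```python
# Hypothetical sketch: what a typical object detector hands the planner.
# There is no field for "what's inside" - mass, rigidity and contents
# are unobservable to a camera-based classifier.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "bag" - a visual category, nothing more
    confidence: float     # how sure the model is about the LABEL
    time_to_impact_s: float

def plan(detection: Detection) -> str:
    # The planner must act without knowing if the bag is empty (safe to
    # drive through) or full of bricks (must swerve or brake). Either
    # guess can be fatal: hard braking risks the passengers, driving
    # through risks a collision with a solid object.
    if detection.label == "bag":
        return "brake_hard"  # one possible conservative guess
    return "continue"

print(plan(Detection(label="bag", confidence=0.92, time_to_impact_s=0.4)))
```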

[Image: a vandalised road sign. While a human can clearly tell this is a stop sign, an autonomous car might fail.]

This brings us to one of the simplest but most problematic issues with self-driving cars – sensor obstruction. Autonomous vehicles rely on their ability to see and make sense of things on the road – things like road signs – in order to navigate. A recent study by the University of Washington showed that simple acts of vandalism on road signs, like adding a few dots and dashes to a Stop sign or making it read 'Love Stops Hate', were enough to confuse the onboard computer into reading it as a 45 mph speed limit sign or some other sign altogether. In another case, a faded sign was not identified correctly. It does not need to be explained how grim the outcome of such a mistake could be in live traffic.
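The Washington researchers used physical stickers, but the same class of vulnerability has a well-known digital form: the Fast Gradient Sign Method (FGSM) from the adversarial machine learning literature. Here is a minimal PyTorch sketch, with `classifier` standing in for any sign-recognition model – it is an illustration of the attack class, not the study's actual method:

```python
# FGSM (Goodfellow et al., 2015): nudge every pixel slightly in the
# direction that increases the classifier's loss. The perturbation is
# near-invisible to a human but can flip the predicted class.

import torch
import torch.nn.functional as F

def fgsm_attack(classifier, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of a batched image tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in valid range
```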

The way out?

To be fair, there are measures to deal with such situations. In fact, these situations are the very reason that autonomous cars are being designed to take inputs from multiple sources into their decision-making process. For instance, hyper-accurate GPS mapping is used to correlate live imagery with map data on the roads. Endless hours of testing with humans involved (which, going by numbers from the California DMV, seems to be rare among many automobile companies working on autonomous vehicles) could eventually make the onboard computers smart enough to judge human intuition. We could even go as far as to assume that, with sufficiently smart capabilities, non-autonomous vehicles will be able to communicate with autonomous vehicles on the road to understand their intentions and relay that information to their human drivers, avoiding any confusion brought on by the absence of eye contact or body language. But that brings us to the last straw – cyber security.
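As a rough sketch of how such cross-checking might look – the function, values and fallback here are entirely hypothetical – a planner could simply refuse to trust a camera reading that disagrees with the map:

```python
# Hypothetical sketch of the map cross-check described above: if the
# camera and the HD map disagree about a speed limit, trust the more
# conservative reading instead of the possibly vandalised sign.

def effective_speed_limit(camera_kmh: int | None, map_kmh: int | None) -> int:
    FALLBACK_KMH = 30  # assumed conservative default when neither source is usable
    readings = [r for r in (camera_kmh, map_kmh) if r is not None]
    if not readings:
        return FALLBACK_KMH
    # Disagreement => assume the sign may be obstructed or tampered with
    # and obey the lower limit.
    return min(readings)

# A tampered sign reads as 72 km/h, but the map says this road is 40 km/h:
print(effective_speed_limit(camera_kmh=72, map_kmh=40))  # -> 40
```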

A virus on wheels

There is no evidence right now to assure potential driverless-car owners that the computer onboard their vehicle will be any more secure than any other computer on a network. If you saw The Fate of the Furious earlier this year, you might have been sceptical about an entire city's cars going into auto-drive and wreaking havoc. But if you talked to a tech enthusiast or, even better, an automobile engineer, you would find out that it's not really an implausible situation.

While measures are being taken, every single one of us (and history) knows that more often than not there is a gap between regulation and action. Unfortunately, a miss here will not lead to just another batch of emails or accounts being leaked on the internet. It could lead to serious physical damage and even, in the worst situations, death.

We need to think

So before we jump onto the whole driverless bandwagon, automobile companies need to make sure that they are not rushing into something that they do not yet have the technology or ability to implement in its best possible state. Until that's the case, driverless cars should not be touted as the next promised revolution in transportation.

Arnab Mukherjee