The self-driving vehicle revolution is upon us, and it brings with it some serious challenges. One such conundrum is just how much control we will give over to our vehicles. Recently, we’ve had the first fatality^ resulting from the use of this family of technologies. However, it’s important to note that the car wasn’t really self-driving.
The person died due to the (presumably) improper use of the autopilot feature. Before we rush to blame Tesla, we should understand why this sort of half-measure is quite dangerous. Let’s take a look at what another industry — aviation, which has been using autopilot for decades — has learned over time:
Self-driving cars are an “all or nothing” affair
It has become apparent that autopilot features can cause humans to lose some of their skills. What’s even worse is when autopilot is implemented in half-measures: the human is expected to remain alert yet rarely needs to intervene. This inconsistent state of affairs inevitably affects decision-making in the brain, and the results can be disastrous.
Pilots undergo extensive training before using autopilot functions; drivers do not. Expect more such accidents to take place unless serious changes are made to driver education and training. I believe that such changes are difficult to implement, and that the correct way forward is to remove the human from the driver’s seat altogether.
Slowly but surely, seeing a human driving a car on a public street will become like seeing a horse and carriage in the motorway’s fast lane. Of course, this might seem far-fetched now, but check back in 10–20 years.
Things are going to get even more complicated when ethics starts to play a role in all this. One of the essential features of self-driving cars is that they will be in permanent communication with one another. Through this, they will gain an increased awareness of the road conditions ahead of them and of each other’s occupants.
What if, for example, two self-driving vehicles realize that a collision is inevitable? Should your car kill you to save others? What if drivers start hacking their cars to protect them at all costs? Here’s a very interesting article on this subject:
One day, self-driving cars may be able to reduce fatalities in unavoidable accidents by sacrificing the car carrying fewer passengers. Taking this discussion further, let us consider that human lives are more than just numbers. Could self-driving cars quantify the potential of a human life? What if the Artificial Intelligence supervising the travel of multiple cars decides to sacrifice an entire family in order to save a highly skilled doctor?
I believe that at some point, AI will be able to decide between saving a child or a young man already suffering from terminal cancer. There will be those who consider such judgements unfair — letting a “machine” decide whether you live is scary. But we might have to deal with this situation eventually. Accidents will always happen, but that doesn’t mean we can’t do something about reducing their impact on our society.
Putting the drama into perspective
These are very difficult choices. I have little doubt that one day, true Artificial Intelligence will be able to tackle these problems as easily as we solve first-grade math problems. Until then, however, we’ll be left with some serious ethical and logistical challenges to solve.
I also have little doubt that in the coming years, a lot of keystrokes will be spent debating even the smallest mistake made by a self-driving vehicle. But these mistakes will probably pale in comparison with the thousands of people, many of them children, dying at the hands of reckless drivers every year.
It’s a no-brainer that self-driving cars will drastically reduce the number of deaths on our streets. I have to say this bluntly: the sooner we restrict human drivers’ access to our public roads, the better. Not even intelligent animals should be allowed to drive metal bullets at 130 kilometers per hour.
Last but not least, let’s not forget about the security concerns that will arise when we have a bunch of computers zooming around the motorways at high speeds. I recently wrote an article^ on this subject. I don’t even want to imagine what a terrorist attack would look like if hackers started tampering with the software of hundreds of speeding robots weighing a couple of tons (or many more) each.