As the world becomes increasingly connected, so do the devices we use. Vehicles, of course, are no exception. But while a hacked phone or refrigerator won’t be immediately life-threatening, a compromised vehicle can endanger the lives of many.
Nissan found itself in hot water after security researchers managed to hack its electric cars through a web link:
The automaker shuttered the faulty software application, but according to the article above, this might not have been enough, since “attackers don’t even need to use the NissanConnect app, because they can deliver the attack through a web browser by spoofing the app.”
With self-driving cars heading our way fast, this is not my idea of a software ecosystem I would trust with my life. On top of all the immense challenges that companies researching automated driving will have to overcome, the security aspect will have to be handled extremely carefully. Check out what they did to this Jeep.
Self-driving vehicles will, by their very nature, rely on a wealth of external information. By the time they go mainstream, we will already be talking about automatically negotiating traffic lights, combining vehicles into platoons for optimal fuel efficiency, or tragedy-prevention ethics (I highly recommend reading the linked article).
Companies have proved time and again that they care about little other than their profit margins, with very little regard for safety and quality. In a recent post, I wrote that we’re perhaps partially to blame for this. Unless the situation changes, both we and our environment will suffer because of these companies’ neglect.
Some relief comes from the fact that certain governments intend to make hacking vehicles a very serious criminal offense. Michigan, for example, proposes to go as far as life in prison. But what about the companies that allow their applications to be hacked by rushing the development process? I believe those entities should face equally serious punishments.