Safety or security for the autonomous car, or is that the question?
January 25, 2016
3 minute read · security, connected-car
At the beginning of October 2015 I attended the workshop "Autonomous driving in the UK" at Böblingen. It was a really interesting day and I learned a lot. For example, I didn't know that the driverless car was invented in Britain in 1960. Demos of robotics capabilities showed big data analytics in a setting I hadn't thought of before.
If you want to know more details, Natalie Sauber has written an excellent summary of the event.
From a security standpoint, it got me thinking about the consequences this development has for software security:
- Autonomous driving is at least as much a software issue as a manufacturing issue. It seems, though, that the consequences haven't been fully understood yet. While there are approaches to perform safety tests, such as the ones performed by Mira, these do not yet include security testing. They largely follow the current practice of defining certain use cases and expected behavior: if these are met, all is fine. Unfortunately, this approach doesn't really fit software security, where you rather need "abuse cases" or threat models.
- Combining this general gap with the fact that much of the software is written in languages such as C, which do not have the best reputation for secure coding, makes it even more important that companies implement a secure development lifecycle.
- Finally, we need more emphasis on threat modeling of autonomous cars. It's true, in a sense, as Paul Newman from Oxbotica nicely puts it, that "software vulnerabilities are the result of a finite number of stupid mistakes you can make". Nevertheless, these can be disastrous. What's more, we need to understand what can go wrong if the software is changed, deliberately or by mistake. Just imagine the machine learning that helps to recognize pedestrians being altered.
- As if these issues aren't difficult enough, the situation is made more complex because cars are not made by a single vendor but use lots of different suppliers. While each one of them will hopefully take security seriously, this doesn't guarantee at all that the final combination of the different parts is still secure. The consequences will mirror what Bruce Schneier wrote about the wildly insecure and often unpatchable Internet of Things.
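The difference between use-case testing and abuse-case testing from the first point can be sketched in a few lines. Everything here is invented for illustration: the detector interface, the inputs, and the test names are assumptions, not any real automotive API.

```python
# Hypothetical sketch: a use-case test vs. an abuse-case test for a
# made-up pedestrian detector interface. Not a real detector.

def detect_pedestrian(frame: bytes) -> bool:
    """Stand-in for a real detector; a real one would run a trained model."""
    return frame.startswith(b"PED")

# Use-case test (current safety practice): known input, expected behavior.
def test_use_case():
    assert detect_pedestrian(b"PED:crossing") is True

# Abuse-case test (security mindset): deliberately malformed or hostile
# input must not crash the detector, and it must always fail closed.
def test_abuse_case():
    hostile_inputs = [b"", b"\x00" * 10_000, b"\xff\xfe garbage"]
    for frame in hostile_inputs:
        try:
            result = detect_pedestrian(frame)
            assert isinstance(result, bool)  # defined behavior, no surprises
        except Exception as exc:
            raise AssertionError(f"detector crashed on hostile input: {exc!r}")

test_use_case()
test_abuse_case()
```

The point is that the second test enumerates what an attacker (or a fault) might feed the system, not what a driver is expected to do, which is exactly the mindset shift from safety testing to security testing.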
So what needs to be done about it?
I think it’s a combination of different aspects:
- In the short term, real security penetration testing needs to be included in the development process.
- For real improvement, each and every vendor has to use a real software development lifecycle that involves software security from the beginning.
- Car vendors need to take the final liability for the overall security of the autonomous car, which in turn means that they need to require security features from their suppliers and proof of their presence.
- As vulnerabilities will still be present, a way to respond in a timely fashion with proper patching has to be in place. As with existing software, this also means that vendors need to keep track of all the third-party software they include and constantly check whether vulnerabilities are discovered there.
- To make sure that this isn't simply lip service, the threat models and security claims need to be carefully investigated and evaluated by independent bodies.
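Keeping track of third-party software, as the patching point demands, boils down to maintaining an inventory of shipped components and checking it against a vulnerability feed. A minimal sketch, with made-up component data and a local dictionary standing in for a real CVE/NVD feed:

```python
# Hypothetical sketch: a software bill of materials checked against a
# vulnerability feed. The components, versions, and feed are invented;
# a real setup would query CVE/NVD data instead of a local dict.

# The vendor's inventory: component -> shipped version
sbom = {
    "openssl": "1.0.1f",
    "busybox": "1.24.0",
}

# Stand-in vulnerability feed: component -> versions known to be affected
known_vulnerable = {
    "openssl": {"1.0.1f"},
}

def components_needing_patches(sbom, feed):
    """Return the components whose shipped version appears in the feed."""
    return sorted(
        name for name, version in sbom.items()
        if version in feed.get(name, set())
    )

print(components_needing_patches(sbom, known_vulnerable))  # ['openssl']
```

Trivial as it looks, most vendors can't produce the `sbom` dictionary for their cars today, and without it the "constantly check" part is impossible.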
If we achieve this, there will be no more distinction between safety checks and security checks.