Connected and Autonomous Cars Are Wonderful and a Safety-Critical Security Nightmare

By Alan Zeichick, President & Principal Analyst, Camden Associates

Imagine a car whose tires explode because hackers overrode its tire pressure monitoring system, causing a fatal crash. Imagine a billionaire’s limousine being tracked by kidnappers because of bugs in its infotainment system. Imagine a future self-driving car being redirected into a dangerous neighborhood — or tricked into taking the wrong highway exit — because someone hijacked its satellite navigation system.

Connected cars have the potential to improve our lives, offering unprecedented new features for safety, entertainment and always-on communications. I am excited about advanced, context-sensitive voice-activated features, and about being able to look out the window during a long stretch of Interstate highway while listening to Pandora. A beautiful dream: being able to exit the car on a busy San Francisco street, have the vehicle go off to park itself, and have it come back when I need it again.

Yet the security implications of connected and autonomous cars are real. Problems are happening today. The industry making connected, autonomous, and self-driving cars is still in its infancy, and everyone involved – car manufacturers, regulators, parts makers, the supply chain, software app makers, and even consumers – needs to be working as hard on security as on artificial intelligence and exciting consumer features.

Just as we have seen well-known breaches of Windows PCs and Android smartphones, soon we will have viruses and malware that infect our cars. It’s scary to consider the implications of hackers breaking into and altering the behavior of 4,000 pounds of self-driving steel powered by a 300-horsepower engine.

The Threat is Real
At first, the microprocessors in cars – the Engine Control Units (ECUs) – focused on engine and emissions performance: monitoring oxygen sensors, adjusting valve timing, managing fuel injection, and watching for faults, which trigger the dreaded Check Engine light. Since the late 1980s, however, ECUs and other automotive systems have grown in complexity. More sensors, more processing power, and more sophisticated software have led to tighter integration with other vehicular systems.

Everything from airbags to anti-lock brakes to adaptive cruise control is managed by microprocessors. These safety-critical systems are tied together by a digital data bus called Controller Area Network (CAN), conceptually similar to a business’s local area network. In fact, a standard was recently approved to use a version of Ethernet as an automotive bus. Historically, these data buses were protected against security breaches only by the fact that they were not connected to external systems. Meanwhile, the software in cars can comprise hundreds of millions of lines of code – more than even an Air Force jet fighter.
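How little that original design assumes about security becomes clear when you look at a classic CAN frame: an identifier, a length field and up to eight data bytes, with no field for authenticating the sender. The following sketch, a minimal example using the Linux SocketCAN interface with a hypothetical bus name ("can0") and an arbitrary message ID, shows how any node with access to the bus can transmit any identifier it likes:

    /* Minimal SocketCAN sketch (Linux-only). The interface name "can0" and
       the message ID 0x244 are hypothetical placeholders, not values taken
       from any particular vehicle. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main(void)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);      /* raw CAN socket */
        if (s < 0) { perror("socket"); return 1; }

        struct ifreq ifr = { 0 };
        strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);    /* hypothetical bus */
        if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

        struct sockaddr_can addr = { 0 };
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        struct can_frame frame = { 0 };
        frame.can_id  = 0x244;       /* arbitrary 11-bit identifier */
        frame.can_dlc = 2;           /* two payload bytes */
        frame.data[0] = 0x01;
        frame.data[1] = 0x7F;

        /* Nothing in the frame identifies or authenticates the sender. */
        if (write(s, &frame, sizeof(frame)) != (ssize_t)sizeof(frame)) {
            perror("write"); return 1;
        }
        close(s);
        return 0;
    }

The point is not that this snippet is an attack; it is that the bus protocol itself has no notion of trusted versus untrusted senders, so anything that gains a foothold on the bus – a compromised infotainment or telematics unit, for example – speaks with the same authority as a brake or engine controller.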

Traditionally, entertainment and non-safety-critical systems – like the stereo and the remote door-unlock fob – were standalone systems, not connected to anything safety-critical.

That has changed. Telematics systems like BMW’s Assist, GM’s OnStar, Lexus’ Enform, and Mercedes’ mbrace can call for help, via radio, in the case of a collision or airbag deployment. They also allow remote dispatchers and mobile apps to unlock the doors, start the car, and even diagnose engine faults. Meanwhile, hackers can unlock doors with inexpensive Raspberry Pi computers. In 2015, security researchers demonstrated that they could remotely take over a Jeep Cherokee’s air conditioning, radio, and brakes.

Meanwhile, researchers have shown that it’s possible to override a tire pressure monitoring system (TPMS), potentially causing a self-inflating tire system to explode. Satellite navigation signals can be jammed or spoofed with an offset, meaning the car can be redirected. A malicious mobile app on a phone paired over Bluetooth can gain access to vehicle systems. Thanks to these new features, the firewalls between safety-critical and non-safety-critical systems have many holes.

What’s Needed? Regulations, Governance and Testing
The good news is that government and industry standards are attempting to address the security issues with connected cars. The bad news is that those standards don’t address security directly; rather, they merely prescribe good software-development practices that should result in secure code. That’s not enough, because those processes don’t address security-related flaws in the design of vehicle systems. Worse, those standards are a hodge-podge of different regulations in different countries, and they don’t address the complexity of autonomous, self-driving vehicles.

Today, commercially available autonomous vehicles can parallel park by themselves. Tomorrow, they may be able to drive completely hands-free on highways, or drive themselves to parking lots without any human on board. The security issues, the hackability issues, are incredibly frightening. Meanwhile, companies as diverse as BMW, General Motors, Google, Mercedes, Tesla and Uber are investing billions of dollars into autonomous, self-driving car technologies.

At least there are some standards. One of the best is ISO 26262-6:2011, which is “intended to be applied to safety-related systems that include one or more electrical and/or electronic (E/E) systems and that are installed in series production passenger cars with a maximum gross vehicle mass up to 3,500 kg.” That’s a broad brush, but it includes:

  • requirements for initiation of product development at the software level,
  • specification of the software safety requirements,
  • software architectural design,
  • software unit design and implementation,
  • software unit testing,
  • software integration and testing, and
  • verification of software safety requirements.

Unfortunately, not all automakers must conform to ISO standards, including ISO 26262-6:2011; those in the United States, for example, are not required to.

In the U.S. and in other countries, a set of guidelines from MISRA (the Motor Industry Software Reliability Association) tries to ensure that software written in the C programming language is safe and secure. C is the most common language for embedded systems such as those in automobiles.
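To give a flavor of what such guidelines demand, the sketch below contrasts a careless C routine with one written in the defensive style that MISRA-type guidance encourages: explicit bounds on every copy, no reliance on unbounded string functions, and a return value the caller must check. The function names, buffer size and VIN example are illustrative, not drawn from any MISRA document.

    /* Illustrative only: the function names, buffer size and the idea of
       storing a VIN are hypothetical examples, not text from any MISRA rule. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define VIN_BUF_LEN 18U   /* 17-character VIN plus terminating NUL */

    /* Careless: an over-long input silently overruns the destination buffer. */
    void store_vin_unsafe(char *dest, const char *src)
    {
        strcpy(dest, src);               /* unbounded copy, no status returned */
    }

    /* Defensive: explicit bounds, invalid input rejected, result reported. */
    bool store_vin_safe(char dest[VIN_BUF_LEN], const char *src)
    {
        if ((dest == NULL) || (src == NULL)) {
            return false;                /* reject null pointers */
        }
        size_t len = strlen(src);
        if (len >= VIN_BUF_LEN) {
            return false;                /* too long: refuse rather than truncate */
        }
        (void)memcpy(dest, src, len + 1U);   /* copy includes the NUL terminator */
        return true;
    }

Static-analysis tools can flag patterns like the first function automatically, which is why checking for this kind of defensive style is usually built into the automotive software-testing toolchains discussed below.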

Yet there is no single set of guidelines or tests certifying that vehicular subsystems — or entire cars — are safe from being hacked. Software-testing tool companies, such as Parasoft, Intertek, Luxoft and Vector Software, help automakers validate the security of their own products and of those coming up through the supply chain. It’s a necessary start, because ultimately the car makers will be held liable if a successful security attack against one of their vehicles causes property damage, injury or death.

Connected cars are wonderful and enjoyable. Autonomous cars are the stuff of science fiction. Let’s hope that the automotive industry ensures this dream doesn’t turn into a horrific nightmare.

Alan Zeichick is president and principal analyst at Camden Associates, an independent consultancy focusing on software development, cloud software, automotive and networking technologies. A former mainframe systems architect, Mr. Zeichick is a frequently cited expert on technology and speaker at many industry conferences. Camden Associates is based in Phoenix, Arizona. Follow him @zeichick
