
Linux and automotive computing security


By Nathan Willis
October 10, 2012

There was no security track at the 2012 Automotive Linux Summit, but numerous sessions and the "hallway track" featured anecdotes about the ease of compromising car computers. This is no surprise: as Linux makes inroads into automotive computing, the security question takes on an urgency not found on desktops and servers. Too often, though, Linux and open source software in general are perceived as insufficiently battle-hardened for the safety-critical needs of highway-speed computing — reading the comments on an automotive Linux news story, it is easy to find a skeptic scoffing that he or she would not trust Linux to manage the engine, brakes, or airbags. While hackers in other embedded Linux realms may understandably feel miffed at such a slight, the bigger problem is the skeptic's presumption that a modern Linux-free car is a secure environment — which is demonstrably untrue.

First, there is a mistaken assumption that computing is not yet a pervasive part of modern automobiles. Likewise mistaken is the assumption that safety-critical systems (such as the aforementioned brakes, airbags, and engine) are properly isolated from low-security components (like the entertainment head unit) and are not vulnerable to attack. It is also incorrectly assumed that the low-security systems themselves do not harbor risks to drivers and passengers. In reality, modern cars have shipped with multiple embedded computers for years (many of which are mandatory by government order), presenting a large attack surface with numerous risks to personal safety, theft, eavesdropping, and other exploits. But rather than exacerbating this situation, Linux and open source adoption stand to improve it.

There is an abundance of research dealing with hypothetical exploits to automotive computers, but the seminal work on practical exploits is a pair of papers from the Center for Automotive Embedded Systems Security (CAESS), a team from the University of California San Diego and the University of Washington. CAESS published a 2010 report [PDF] detailing attacks that they managed to implement against a pair of late-model sedans via the vehicles' Controller Area Network (CAN) bus, and a 2011 report [PDF] detailing how they managed to access the CAN network from outside the car, including through service station diagnostic scanners, Bluetooth, FM radio, and cellular modem.

Exploits

The 2010 paper begins by addressing the connectivity of modern cars. CAESS did not disclose the brand of vehicle they experimented on (although car mavens could probably identify it from the photographs), but they purchased two vehicles and experimented with them on the lab bench, on a garage lift, and finally on a closed test track. The cars were not high-end, but they provided a wide range of targets. Embedded electronic control units (ECUs) are found all over the automobile, monitoring and reporting on everything from the engine to the door locks, not to mention lighting, environmental controls, the dash instrument panel, tire pressure sensors, steering, braking, and so forth.

Not every ECU is designed to control a portion of the vehicle, but due to the nature of the CAN bus, any ECU can be used to mount an attack. CAN is roughly equivalent to a link-layer protocol, but it is broadcast-only, does not employ source addressing or authentication, and is easily susceptible to denial-of-service attacks (either through simple flooding or by broadcasting messages with high-priority message IDs, which forces all other nodes to back off and wait). With a device plugged into the CAN bus (such as through the OBD-II port mandated on all US vehicles since model year 1996), attackers can spoof messages from any ECU. Higher-level protocols are often layered on top, but CAESS was able to reverse-engineer the protocols in its test vehicles and found security holes that allow attackers to brute-force the challenge-response system in a matter of days.
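Those two properties can be sketched in a toy simulation (plain Python; the frame format and node behavior are simplified, and real CAN arbitration happens bit-by-bit in hardware): the numerically lowest message ID always wins arbitration, so a flooder can starve every legitimate node, and frames carry no sender field, so spoofed messages are indistinguishable from the real ECU's traffic.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    # A CAN frame carries an 11-bit message ID and up to 8 data bytes.
    # Note what it lacks: any source address or authentication field.
    can_id: int
    data: bytes

def arbitrate(contending):
    """Model CAN arbitration: when several nodes transmit at once,
    the frame with the numerically lowest ID wins; the rest back off."""
    return min(contending, key=lambda f: f.can_id)

# A legitimate wheel-speed broadcast and an attacker's flood frame
# (both IDs are made up for illustration).
wheel_speed = Frame(can_id=0x0B4, data=bytes([0x12, 0x34]))
flood = Frame(can_id=0x000, data=bytes(8))  # highest-priority ID

# The flooder wins arbitration every time: denial of service.
print(hex(arbitrate([wheel_speed, flood]).can_id))  # 0x0

# Spoofing: nothing in a frame identifies its sender, so a frame
# injected through the OBD-II port looks exactly like the real one.
spoofed = Frame(can_id=0x0B4, data=bytes([0xFF, 0xFF]))
print(spoofed.can_id == wheel_speed.can_id)  # True
```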

CAESS's test vehicles did separate the CAN bus into high-priority and low-priority segments, providing a measure of isolation. However, this also proved to be inadequate, as there were a number of ECUs that were connected to both segments and which could therefore be used to bridge messages between them. That set-up is not an error, however; despite common thinking on the subject, quite a few features demanded by car buyers rely on coordinating between the high- and low-priority devices.

For example, electronic stability control involves measuring wheel speed, steering angle, throttle, and brakes. Cruise control involves throttle, brakes, speedometer readings, and possibly ultra-sonic range sensors (for collision avoidance). Even the lowly door lock must be connected to multiple systems: wireless key fobs, speed sensors (to lock the doors when in motion), and the cellular network (so that remote roadside assistance can unlock the car).

The paper details a number of attacks the team deployed against the test vehicles. The team wrote a tool called CarShark to analyze and inject CAN bus packets, which provided a method to mount many attacks. However, the vehicle's diagnostic service (called DeviceControl) also proved to be a useful platform for attack. DeviceControl is intended for use by dealers and service stations, but it was easy to reverse engineer, and subsequently allowed a number of additional attacks (such as sending an ECU the "disable all CAN bus communication" command, which effectively shuts off part of the car).

The actual attacks tested include some startlingly dangerous tricks, such as disabling the brakes. But the team also managed to create combined attacks that put drivers at risk even with "low risk" components — displaying false speedometer or fuel gauge readings, disabling dash and interior lights, and so forth. Ultimately the team was able to gain control of every ECU in the car, load and execute custom software, and erase traces of the attack.

Some of these attacks exploited components that did not adhere to the protocol specification. For example, several ECUs allowed their firmware to be re-flashed while the car was in motion, which is expressly forbidden for obvious safety reasons. Other attacks were enabled by run-of-the-mill implementation errors, such as components that re-used the same challenge-response seed value every time they were power-cycled. But ultimately the critical factor was that any device on the vehicle's internal bus can be used to mount an attack; there is no "lock box" protecting the vital systems, and the protocol at the core of the network lacks fundamental security features taken for granted on other computing platforms.
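The seed-reuse flaw is worth spelling out with a toy challenge-response exchange (a hypothetical scheme, not the vendors' actual algorithm): if the ECU's "random" challenge generator is re-seeded with the same constant at every power cycle, an attacker who records one successful exchange can simply power-cycle the ECU and replay it.

```python
import hashlib

class ToyECU:
    """Hypothetical ECU whose challenge generator is re-seeded with the
    same baked-in constant at every power cycle, modeling the flaw
    CAESS observed."""
    FIXED_SEED = 0x1234  # constant in firmware

    def __init__(self):
        self.counter = self.FIXED_SEED

    def challenge(self):
        self.counter += 1
        return self.counter.to_bytes(4, "big")

    def unlock(self, chal, response):
        expected = hashlib.sha256(b"secret-key" + chal).digest()[:4]
        return response == expected

# A legitimate tool performs one exchange while the attacker sniffs it.
ecu = ToyECU()
chal = ecu.challenge()
sniffed = hashlib.sha256(b"secret-key" + chal).digest()[:4]

# Power-cycle the ECU: the seed resets, so the same challenge reappears
# and the recorded response still unlocks the ECU.
ecu = ToyECU()
replayed_ok = ecu.unlock(ecu.challenge(), sniffed)
print(replayed_ok)  # True
```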

Vectors

Of course, all of the attacks described in the 2010 paper relied on an attacker with direct access to the vehicle. That did not necessarily mean ongoing access; they explained that a dongle attached to the OBD-II port could work at cracking the challenge-response system while left unattended. But, even though there are a number of individuals with access to a driver's car over the course of a year (from mechanics to valets), direct access is still a hurdle.

The 2011 paper looked at vectors to attack the car remotely, to assess the potential for an attacker to gain access to the car's internal CAN bus, at which point any of the attacks crafted in the 2010 paper could easily be executed. It considered three scenarios: indirect physical access, short-range wireless networking, and long-range wireless networking. As one might fear, all three presented opportunities.

The indirect physical access attacks involved compromising the CD player and the dealership or service station's scanning equipment, which is physically connected to the car while it is in the shop for diagnosis. CAESS found that the model of diagnostic scanner used (which adhered to a 2004 US government-mandated standard called PassThru) was internally an embedded Linux device, even though it was only used to interface with a Windows application running on the shop's computer. The scanner was equipped with WiFi, however, and broadcast its address and open TCP port in the clear. The diagnostic application API is undocumented, but the team sniffed the traffic and found several exploitable buffer overflows — not to mention extraneous services like telnet also running on the scanner itself. Taking control of the scanner and programming it to upload malicious code to vehicles was little additional trouble.

The CD player attack was different; it started with the CD player's firmware update facility (which loads new firmware onto the player if a properly-named file is found on an inserted disc). But the player can also decode compressed audio files, including undocumented variants of Windows Media Audio (.WMA) files. CAESS found a buffer overflow in the .WMA player code, which in turn allowed the team to load arbitrary code onto the player. As an added bonus, the .WMA file containing the exploit plays fine on a PC, making it harder to detect.
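The class of bug involved can be sketched abstractly (a made-up chunk format; the real flaw was in the player's native .WMA decoder): a parser that trusts a length field read from the file writes past its fixed-size buffer. A Python `bytearray` cannot actually corrupt memory, but slice assignment makes the "overflow" visible as the buffer growing beyond its allocated size.

```python
def parse_chunk(file_bytes, buf_size=16):
    """Toy firmware parser: the first byte is an attacker-controlled
    length field, followed by that many payload bytes. The fixed-size
    bytearray models the decoder's stack or heap allocation."""
    buf = bytearray(buf_size)
    length = file_bytes[0]          # trusted, attacker-controlled
    payload = file_bytes[1:1 + length]
    # BUG: no check that length <= buf_size. In the C decoder this
    # copy would smash adjacent memory; here the buffer visibly grows.
    buf[0:length] = payload
    return buf

benign = bytes([8]) + bytes(8)        # claims 8 bytes: fits
malicious = bytes([64]) + bytes(64)   # claims 64 bytes for a 16-byte buffer

print(len(parse_chunk(benign)))      # 16
print(len(parse_chunk(malicious)))   # 64
```

The fix, of course, is a single bounds check (`if length > buf_size: reject`), which is exactly what the player's decoder lacked.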

The short-range wireless attack involved attacking the head unit's Bluetooth functionality. The team found that a compromised Android device could be loaded with a trojan horse application designed to upload malicious code to the car whenever it paired. A second option was even more troubling; the team discovered that the car's Bluetooth stack would respond to pairing requests initiated without user intervention. Successfully pairing a covert Bluetooth device still required correctly guessing the four-digit authorization PIN, but since the pairing bypassed the user interface, the attacker could make repeated attempts without those attempts being logged — and, once successful, the paired device does not show up in the head unit's interface, so it cannot be removed.
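Since the covert pairing path neither rate-limits nor logs attempts, the four-digit PIN search is trivial. A sketch, with a hypothetical `pair()` function standing in for the head unit's Bluetooth stack and a made-up PIN:

```python
import itertools

SECRET_PIN = "7316"   # hypothetical: whatever the head unit was set to

def pair(pin):
    """Stand-in for one covert pairing attempt against the head unit;
    in the CAESS attack each try went unlogged and unthrottled."""
    return pin == SECRET_PIN

def brute_force():
    # Exhaust the whole four-digit space: at most 10,000 attempts.
    for attempt, digits in enumerate(itertools.product("0123456789",
                                                       repeat=4), 1):
        pin = "".join(digits)
        if pair(pin):
            return pin, attempt
    return None, 10_000

pin, tries = brute_force()
print(pin, tries)
```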

Finally, the long-range wireless attack gained access to the car's CAN network through the cellular-connected telematics unit (which handles retrieving data for the navigation system, but is also used to connect to the car maker's remote service center for roadside assistance and other tasks). CAESS discovered that although the telematics unit could use a cellular data connection, it also used a software modem application to encode digital data in an audio call — for greater reliability in less-connected regions.

The team reverse-engineered the signaling and data protocols used by this software modem, and were subsequently able to call the car from another cellular device, eventually uploading malicious code through yet another buffer overflow. Even more disturbingly, the team encoded this attack into an audio file, then played it back from an MP3 player into a phone handset, again seizing control over the car.

The team also demonstrated several post-compromise attack-triggering methods, such as delaying activation of the malicious code payload until a particular geographic location was reached, or a particular sensor value (e.g., speed or tire pressure) was read. It also managed to trigger execution of the payload by using a short-range FM transmitter to broadcast a specially-encoded Radio Data System (RDS) message, which vehicles' FM receivers and navigation units decode. The same attack could be performed over longer distances with a more powerful transmitter.

Among the practical exploits outlined in the paper are recording audio through the car's microphone and uploading it to a remote server, and connecting the car's telematics unit to a hidden IRC channel, from which attackers can send arbitrary commands at their leisure. The team speculates on the feasibility of turning this last attack into a commercial enterprise, building "botnet" style networks of compromised cars, and on car thieves logging car makes and models in bulk and selling access to stolen cars in advance, based on the illicit buyers' preferences.

What about Linux?

If, as CAESS seems to have found, the state of the art in automotive computing security is so poor, the question becomes how Linux (and related open source projects) could improve the situation. Certainly some of the problems the team encountered are out of scope for automotive Linux projects. For example, several of the simpler ECUs are unsophisticated microcontrollers; the fact that some of them ship from the factory with blatant flaws (such as a broken challenge-response algorithm) is the fault of the manufacturer. But Linux is expected to run on the higher-end ECUs, such as the IVI head unit and telematics system, and these components were the nexus for the more sophisticated attacks.

Several of the sophisticated attacks employed by CAESS relied on security holes found in application code. The team acknowledged that standard hardening measures (like stack cookies and address-space randomization) that are established practice in other computing environments simply have not been adopted in automotive system development, for lack of perceived need. Clearly, recognizing that risk and writing more secure application code would improve things, regardless of the operating system in question. But the fact that Linux is so widely deployed elsewhere means that more security-conscious code is available for the taking than there is for any other embedded platform.

Consider the Bluetooth attack, for example. Sure, with a little effort, one could envision a scenario in which unattended Bluetooth pairing is desirable — but in practice, Linux's dominance in the mobile device space means there is a greater likelihood that developers would quickly find and patch the problem than would any tier-one supplier working in isolation.

One step further is the advantage gained by having Linux serve as a common platform used by multiple manufacturers. CAESS observed in its 2011 paper that the "glue code" linking discrete modules together was the greatest source of exploits (e.g., the PassThru diagnostic scanning device), saying "virtually all vulnerabilities emerged at the interface boundaries between code written by distinct organizations." It also noted that this was an artifact of the automotive supply chain itself, in which individual components were contracted out to separate companies working from specifications, then integrated by the car maker once delivered:

Thus, while each supplier does unit testing (according to the specification) it is difficult for the manufacturer to evaluate security vulnerabilities that emerge at the integration stage. Traditional kinds of automated analysis and code reviews cannot be applied and assumptions not embodied in the specifications are difficult to unravel. Therefore, while this outsourcing process might have been appropriate for purely mechanical systems, it is no longer appropriate for digital systems that have the potential for remote compromise.

A common platform employed by multiple suppliers would go a long way toward minimizing this type of issue, and that approach can only work if the platform is open source.

Finally, the terrifying scope of the attacks carried out in the 2010 paper (and if one does not find them terrifying, one needs to read them again) ultimately traces back to the insecure design of the CAN bus. CAN needs to be replaced; working with a standard IP stack instead means not having to reinvent the wheel. The networking angle raises several issues not addressed in CAESS's papers, of course — most notably the still-emerging standards for vehicular ad-hoc networking (intended to serve as a vehicle-to-vehicle and vehicle-to-infrastructure channel).

On that subject, Maxim Raya and Jean-Pierre Hubaux recommend using public-key infrastructure and other well-known practices from the general Internet communications realm. While there might be some skeptics who would argue with Linux's first-class position as a general networking platform, it should be clear to all that proprietary lock-in to a single-vendor solution would do little to improve the vehicle networking problem.

Those on the outside may find the recent push toward Linux in the automotive industry frustratingly slow — after all, there is still no GENIVI code visible to non-members. But to conclude that the pace of development indicates Linux is not up to the task would be a mistake. The reality is that the automotive computing problem is enormous in scope — even considering security alone — and Linux and open source might be the only way to get it under control.



Linux and automotive computing security

Posted Oct 10, 2012 19:25 UTC (Wed) by fuhchee (guest, #40059) [Link]

"ultimately trace back to the insecure design of CAN bus."

On the other hand, if the CAN bus traffic was secured to some extent, the subsystem manufacturers might become even more blase about buffer overflows and logic errors. After all, no more hostile traffic would be expected.

Linux and automotive computing security

Posted Oct 10, 2012 19:59 UTC (Wed) by drag (guest, #31333) [Link]

"Airspace firewall" will always be infinitely more effective than any other sort of scheme with a networked system. There simply is no comparison.

It's idiotic to wire up any sort of entertainment system or any non-essential system to the engine management or braking systems.

Linux and automotive computing security

Posted Oct 10, 2012 20:26 UTC (Wed) by jimparis (guest, #38647) [Link]

Idiotic? But your entertainment system is the screen where the rear-view backup camera gets displayed. You need the computer controlling the transmission to be able to tell the computer controlling the entertainment system to start displaying the camera feed. Now they're wired up. And I think you'll find that by the time you hit every use case (safety interlocks that prevent changing GPS coordinates while the car is driving, vehicular speed being used to augment the GPS in tunnels, etc) you'll find that just about everything gets connected somehow.

Linux and automotive computing security

Posted Oct 10, 2012 21:26 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

"Back in my days" (tm) we'd just have placed a purely electric connection, i.e. "short these two wires if the reverse gear is engaged". No need for complex digital interface.

Linux and automotive computing security

Posted Oct 10, 2012 23:36 UTC (Wed) by martinfick (subscriber, #4455) [Link]

"Back in my day", people looked behind their cars before putting the car in reverse. I was shocked to be recently hit standing still in a parking lot by someone relying on their reverse warning and not bothering to look; the warning never went off.

I could not help but think of the modern Battlestar Galactica series when reading this article, I am now fairly convinced that I simply don't want such a network in my vehicle. If the authorities mandate it, I will just stick with my used cars for as long as I can (luckily 90s galvanizing makes that more of a possibility). I don't own a vehicle made this millennium and I don't plan to, they simply are less safe and full of BS that no one needs. Every time I rent a car I am shocked at how poor the visibility is due to the large air bag filled columns pushed too far forward impeding the view out the side of the windshield, making a left turn a high risk activity (for me and anyone nearby). It's sad, but soon it will be mandated that we all drive tanks with nothing but a 7 inch screen to view the outside chaos of dead pedestrians left in our wake, and the media will brag about how much safer modern cars are than ever. :(

Linux and automotive computing security

Posted Oct 10, 2012 23:44 UTC (Wed) by jimparis (guest, #38647) [Link]

> "Back in my day", people looked behind their cars before putting the car in reverse. I was shocked to be recently hit standing still in a parking lot by someone relying on their reverse warning and not bothering to look; the warning never went off.

I was referring to the rear-view cameras, which are kind of a necessity on some cars these days due to poor visibility... (see below)

> they simply are less safe and full of BS that no one needs. Everytime I rent a car I am shocked at how poor the visibility is due to the large air bag filled columns pushed too far forward

I think many of the visibility problems stem from pushing to get better gas mileage. Vertical spaces like windows keep getting smaller. Accordingly, some of the technological "improvements" like rear-view cameras are to try to counteract those problems. It's not (necessarily) just some cranky designer having a bad day.

Linux and automotive computing security

Posted Oct 11, 2012 3:39 UTC (Thu) by ncm (guest, #165) [Link]

According to report from inside the automotive industry, what drives the trend to reduced visibility is the desire by female buyers (who now have a predominant influence on new-car purchase decisions) to feel less "exposed". In other words, car makers are making everyone, including buyers, less safe so as to be perceived by buyers as safer.

Linux and automotive computing security

Posted Oct 16, 2012 12:18 UTC (Tue) by wookey (guest, #5501) [Link]

Reduced visibility due to thicker A pillars is due to more stringent crash testing/requirements. 'NCAP tests' in Europe. And a good NCAP rating really does help sell cars. But it also makes them heavier and harder to see out of. The steadily improving motor vehicle injury stats have been coming at the expense of those outside (pedestrians, cyclists, motorcyclists) for some time now. At least in Europe TPTB have finally understood that trying to improve the numbers by simply discouraging those other modes is counter-productive in so many other ways (obesity, congestion, noise, expense and general public realm issues), but rowing back from 50 years of 'the car is king' thinking and development is hard to do. Visibility, crash ratings and excessive tech in cars are just small parts of a much wider issue.

I've been holding on to my 1997 pre-ECU vehicle for a while now, despite its relative inefficiency, hoping to get something with free software in it so I had a least a chance of keeping some control over quality. It looks like it'll have to last at least a few more years before I can actually buy anything I might consider acceptable. But there are at least signs of useful progress in this sphere.

Linux and automotive computing security

Posted Oct 11, 2012 14:42 UTC (Thu) by ortalo (guest, #4654) [Link]

That's too late.
Even if you can avoid the security/safety issue in your car (which I doubt you will be able to), you will not be able to avoid it in the next place where embedded (computer) systems (security) will raise concerns (tubes, trains, planes, houses, nuclear industry, chemical industry, ... put your favourite risk here ...). It's even possible that the automotive industry is not specifically "in advance" on this topic...

The problem is taking computer security seriously into account. I had hoped in the 90s that maybe this could be done before computing invaded everything. It seems I was wrong. [1] So now, what do we do to change that state of affairs (before even your old no-computer car really gets unusable)?
Switching to Linux may be an improvement.

But note that if I had the choice now, I would switch to OpenBSD. Not because of the technical quality, but because of the design target.
(Unless Linus and other developers of the kernel clearly upgrade the priority for security of course.)

PS: Another practical idea, but intended for car manufacturers: offer brand new cars to all Linux kernel developers. Now. And for BSD devs too (come on, that business is not *so* in crisis). Let's remind them that's what Digital did 20 years ago to get Linux on its Alpha CPU.

[1] In the meantime, in my opinion, security only seriously expanded to the gaming industry and to some extent the media/telco. industry. What an irony!

Linux and automotive computing security

Posted Oct 19, 2012 12:53 UTC (Fri) by JEFFREY (guest, #79095) [Link]

"You don't want [CAN bus] in [your] vehicle."

You'd really shudder to know that CAN bus is also used in SCADA/DCS systems that operate dangerous boilers, refineries, and power plants.

Linux and automotive computing security

Posted Oct 19, 2012 13:59 UTC (Fri) by Jonno (subscriber, #49613) [Link]

CAN itself is no worse than Ethernet, except for speed and packet length limitations. On the contrary, it offers several benefits over plain ethernet, such as built-in QoS and a much lower cost to deploy.

The difference is that there are several standard abstraction layers built on top of Ethernet that provide additional features, including some security features. Unfortunately these abstraction layers are way too complex to run on the 20 kHz, 8-bit system with 64 kB RAM you typically see in a sensor, leaving you the options of raw Ethernet, raw CAN, or raw RS-232 for connectivity.

When given those choices, using CAN is usually a pretty good option; you just have to remember its limitations and design your application protocol with security in mind, as you won't "inherit" any from the underlying protocol, like you do with TCP/IP. (Though that is probably true anyway, as the security features of TCP/IP are quite limited.)

Linux and automotive computing security

Posted Oct 15, 2012 14:14 UTC (Mon) by drag (guest, #31333) [Link]

> Idiotic?

Yes.

> But your entertainment system is the screen where the rear-view backup camera gets displayed.

Personally I have learned to turn my head.

> You need the computer controlling the transmission to be able to tell the computer controlling the entertainment system to start displaying the camera feed.

You can have data that goes one way.

For example, it's very common in industrial applications dealing with potentially high voltage to use 'light connectors' to join disparate electrical systems. Basically you just have some infrared transmitters on one side and an infrared sensor on the other, and thus you can transfer information without a direct electrical connection.

So it's very possible to have properly functioning gauges and other devices without the ability for any attacker, no matter how determined or skilled, to use your entertainment system to subvert your automobile remotely.

> And I think you'll find that by the time you hit every use case (safety interlocks that prevent changing GPS coordinates while the car is driving,

Idiotic safety controls. If I had something like that on my car I would just turn the GPS off and use my cell phone and google maps, or other equivalent. I don't need anti-features in my car. Driving is hard enough without having to fight my car for control.

> vehicular speed being to augment the GPS in tunnels, etc) you'll find that just about everything gets connected somehow.

Only if it is designed by moronic engineers.

Linux and automotive computing security

Posted Oct 15, 2012 14:18 UTC (Mon) by fuhchee (guest, #40059) [Link]

"... optical isolation ..."
"So it's very possible [to do one-way communication]"

The second does not follow from the first. The need for two-way communication comes from application requirements, and can be implemented at the physical level with wires, wireless, two unidirectional optical isolators, whatever.

Linux and automotive computing security

Posted Oct 15, 2012 16:37 UTC (Mon) by bronson (subscriber, #4806) [Link]

> Personally I have learned to turn my head.

Check out the new 2012/2013 models. Crash and fuel economy requirements have made deck heights very high and D-pillars very wide. Rearward visibility is suffering mightily.

Linux and automotive computing security

Posted Oct 16, 2012 8:54 UTC (Tue) by njwhite (guest, #51848) [Link]

>> And I think you'll find that by the time you hit every use case (safety interlocks that prevent changing GPS coordinates while the car is driving,
> Idiotic safety controls. If I had something like that on my car I would just turn the GPS off and use my cell phone and google maps, or other equivalent. I don't need anti-features in my car. Driving is hard enough without having to fight my car for control.

I quite agree. I don't know why people want this sort of thing in their cars. Indeed this article in general just made me not want to ever get a car built in the last 10 years. Of all activities, something as dangerous as driving is something I would be least comfortable reducing my control over. Is the only option for those of us who value control in driving now kit cars and antiques?

Linux and automotive computing security

Posted Oct 18, 2012 18:12 UTC (Thu) by TRauMa (guest, #16483) [Link]

Don't worry so much, all these driving helpers are a transient state anyway. Soon you'll just enter your car and relax while it does all the driving, and even if you were tempted to drive yourself it would be a bad idea, because most lanes on the highway will be closed to human drivers for security reasons.

Linux and automotive computing security

Posted Oct 10, 2012 21:55 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link]

This topic is touched on in the article. The problem is that many non-critical systems need information from the critical systems in order to function properly and/or safely. For example, automatic door locking depends on knowing something about the state of the car (different makers choose to lock when the engine is started, when the car is put in gear, or when it exceeds a threshold speed) to operate properly. OTOH, the locks need to be connected to insecure systems that take remote information, like the keyless entry or remote assistance systems. So the locks now need to communicate with both the critical driving systems and the communications systems. Putting an air gap in place will disable some useful feature of the car.

You can't even fix the problem with one-way information flow between critical and non-critical components, because there are valid reasons for wanting to send information the other way. Many security features require sending information from the outside world to the engine computer. For example, my car has a feature that disables the ignition if the doors are locked using the keyless entry system. That's a very desirable feature, but it means giving control over the engine to a system that has to talk to the outside world.

Linux and automotive computing security

Posted Oct 10, 2012 22:44 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Air gap doesn't mean complete absence of any communication. For example, a door lock system can passively listen to CAN bus for speed information messages (the design of CAN makes this easy).

So far I haven't seen an example where you really need complex two-way communication between a critical system and non-critical stuff.

Linux and automotive computing security

Posted Oct 11, 2012 14:52 UTC (Thu) by ortalo (guest, #4654) [Link]

A fireman with his phone wants the engine to stop (GSM -> engine).
Is that a better idea?

Anyway, I *agree* with you: first, why not try to do something good with an air gap. Once manufacturers have demonstrated their ability to design something correct with an air gap, maybe they could be allowed to try to address more complex configurations.

But you know, that was the way certification authorities approached the issue for airplanes and, apparently, the "non-critical -> critical" issue came back onto the table within 2-3 years.
It seems civilian users want to do that. (Maybe users really are the most annoying vulnerability after all...)

Linux and automotive computing security

Posted Oct 10, 2012 22:53 UTC (Wed) by cesarb (subscriber, #6266) [Link]

> You can't even fix the problem with one-way information flow between critical and non-critical components, because there are valid reasons for wanting to send information the other way.

You could combine one-way information flow with a default-deny firewall on the opposite direction, with very strict format checks. If implemented properly, only a few exact packets would be able to pass, with a result similar to a bundle of discrete wires. (It would be a set of rules somewhat like: allow only the exact packet 010203x4, with x being only 1, 2, or 3.)
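A whitelist of that shape is almost trivial to express. A minimal sketch, using the hypothetical packet values from the example rule above:

```python
# Default-deny filter in the spirit described above: only a handful of exact
# frames pass, with the one variable byte restricted to an enumerated set.
ALLOWED_FRAMES = {bytes([0x01, 0x02, 0x03, x]) for x in (1, 2, 3)}

def permit(frame):
    """True only for whitelisted frames; everything else is dropped."""
    return bytes(frame) in ALLOWED_FRAMES
```

The entire policy is a finite set of byte strings, which is what makes the result comparable to a bundle of discrete wires.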

Of course, that adds cost, power, and space usage, since the firewall would have to be a separate discrete component, and you would need one for each device straddling separate integrity domains. You also lose flexibility, since you would have to replace the firewall component if you need to add more functionality in the direction it filters.

Linux and automotive computing security

Posted Oct 10, 2012 20:33 UTC (Wed) by dashesy (guest, #74652) [Link]

I do not understand why we don't think of the CAN bus as just the wire; it is the data that needs protection, not the wire.

Linux and automotive computing security

Posted Oct 10, 2012 20:58 UTC (Wed) by dlang (guest, #313) [Link]

The protocol over the wire matters as well.

Since the CAN bus includes the over-the-wire protocol as well as the electrical requirements, the fact that it doesn't even have the concept of a sender ID is a major problem.

Yes, every device could add its own authentication in the messages, but that is just layering another protocol on top of CAN, and getting all vendors to agree to it would not be trivial.

Switching to a different network protocol (say IP) would then enable a LOT of standard authentication, firewalling, and other tools to be used. Yes, mistakes can still be made, but given standard tools they are less likely.

One thing to remember is that when the CAN bus was created, it took a rather expensive system to run an IP stack. Nowadays this can be done on very cheap hardware.

Linux and automotive computing security

Posted Oct 10, 2012 21:07 UTC (Wed) by fuhchee (guest, #40059) [Link]

"Since the CAN bus includes the over-the-wire protocol as well as the electrical requriements, the fact that it doesn't even have the concept of sender ID is a major problem."

Intra-computer buses like PCI get by without that.

Linux and automotive computing security

Posted Oct 10, 2012 21:12 UTC (Wed) by dlang (guest, #313) [Link]

> Intra-computer buses like PCI get by without that.

and how much security do you have on a PCI bus protecting you from a rogue card?

basically none.

(although, PCI actually does have a slot ID, so the main system can do some source validation. This is actually used with some virtualization systems, but answering the question in the spirit asked, rather than being picky about the particular bus used as an example)

the problem is that the CAN bus is not within a computer, it's connecting many different computers together to form the car's overall network. Not all of the devices on the network should be equally trusted.

Linux and automotive computing security

Posted Oct 10, 2012 21:29 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

So cars need an airgapped CAN bus, inaccessible from 'untrusted' sources. Duh.

I shudder to think that my car's tire pressure sensors would use IPv6 to talk to the central computer. That's just... inelegant.

Linux and automotive computing security

Posted Oct 12, 2012 13:34 UTC (Fri) by peter-b (guest, #66996) [Link]

I assume it would seriously upset you to learn that subsystems within the Falcon 9 launch vehicle use TCP/IP to talk to each other, then...

Linux and automotive computing security

Posted Oct 12, 2012 15:13 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Yes, that's surely insane. Though not quite as insane, because rockets don't usually have cell modems connected to their critical infrastructure.

Linux and automotive computing security

Posted Oct 12, 2012 17:21 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link]

OTOH, security researchers have successfully hacked a car's computer system through the tire pressure sensors, which displays a certain inelegance in the current system. The tire pressure sensors are actually an especially vulnerable point, because the only mechanically elegant way of transmitting information between the wheels and the rest of the car is some kind of wireless communication.

Linux and automotive computing security

Posted Oct 12, 2012 22:06 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

So how is it going to be better if tire sensors now have a TCP/IP stack with OpenSSL for a PKI implementation?

The car-local network is a textbook example of a local airgapped network. It makes no sense to try to make every component secure; it's much better to have a secure perimeter where any external data input is treated as potentially malicious.

Tire-sensors and the law

Posted Oct 13, 2012 2:29 UTC (Sat) by Max.Hyre (subscriber, #1054) [Link]

My understanding (i.e., I'm too lazy to look it up right now) is that the law mandates these radio transmitters for tire sensors, and actually prohibits doing it by comparing wheel rotation rates. Of course, using sensors already in place (for ABS &c.) would markedly reduce the attack surface. I've always wondered whether this was done so that all new cars are now trackable remotely for some small-ish value of remote.

Now who would want that?
(/me puts tinfoil hat back on)

Linux and automotive computing security

Posted Oct 14, 2012 21:57 UTC (Sun) by rgmoore (✭ supporter ✭, #75) [Link]

I think I've actually described it wrong; the problem is not with the tire pressure sensors, per se, but with the receiver. The designers seem to have treated the pressure sensor and receiver as a unit that was entirely inside the car, rather than treating the signal from the pressure sensors as an untrusted input. Researchers were able to crack the receiver by sending a spoof signal.

I think this is a good example of the drawback of relying on perimeter security; it's brittle. If you fail to consider one source of potentially malicious data (or consider it but fail to secure it adequately), the whole system falls apart. I think you'd be much better off with some kind of defense in depth so that a single security failure doesn't bring down the whole system. Otherwise, you're left with a car that can be hacked because the designers didn't think that somebody might spoof the signals from the wireless tire pressure sensors.

Maybe a full encrypted and authenticated TCP/IP stack is overkill, and a better CAN implementation can provide an adequate level of protection. But basing everything, including the internal message bus, on a standardized platform that's known to have good security seems like a big step forward.

Linux and automotive computing security

Posted Oct 15, 2012 1:36 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

What kind of security can a bus provide? CAN is as simple as it gets for its purposes - it's a very simple broadcast-only shared-media bus with prioritized messages.

If you try to replace it with Ethernet then you'll get loads of problems, starting with a requirement to have point-to-point connections between endpoints and switches and then moving on to DoS protection and priority-based transmission.

And security guarantees won't get any better - Ethernet does not guarantee anything.

Linux and automotive computing security

Posted Oct 10, 2012 21:18 UTC (Wed) by smurf (subscriber, #17840) [Link]

Different problem space. Don't compare apples with lemons.

funny - thanks

Posted Oct 11, 2012 5:03 UTC (Thu) by ds2horner (subscriber, #13438) [Link]

funny - thanks

funny - thanks

Posted Oct 16, 2012 17:55 UTC (Tue) by Baylink (guest, #755) [Link]

That's not apples and lemons, that's apples and antifreeze.

Linux and automotive computing security

Posted Oct 10, 2012 21:24 UTC (Wed) by iabervon (subscriber, #722) [Link]

I'm not convinced by those examples of systems that need to bridge the security-critical and IVI networks; all of the stability-control-related systems (plus stability control itself) seem critical, likewise cruise control, while none of the door-lock things are. It seems to me that you would need a device that listened to the critical bus and reported to the non-critical bus, so that the CD player could tell when the car is in motion. However, as far as I can tell, this device doesn't need to be able to affect the critical bus.

I'm not clear as to the intent of suggesting an IP network instead of the CAN network, in any case; IP is not at the same protocol layer. You could switch from CAN to ethernet, but you'd need a custom switch (which knows which sensors are where and what's most important) in order to avoid having the denial of service problem be at least as bad. Sure, you couldn't have the CD player tell the brakes they shouldn't engage, but you couldn't really keep the CD player from pushing 100Mb of audio data at the brakes so packets from the brake pedal don't get through. And CAN has the security advantage that you can build your CD player with a CAN PHY that is only able to use low-priority IDs. It's practically impossible for an ethernet PHY to know that it would be flooding the network.

Linux and automotive computing security

Posted Oct 10, 2012 21:50 UTC (Wed) by bjencks (subscriber, #80303) [Link]

Actually, it's not that hard to do proper QoS with modern switches. Just mark all packets coming from the CD player at a lower priority than the ones coming from the brake system. You can even have the devices emit tagged packets and restrict them to a subset of available priorities.

Or you could put in extra point-to-point links between each especially critical pair of devices. With IP, it's not very hard to just add an extra host route down a different pipe; it doesn't have to have the overhead that a whole new bus would.

This doesn't even get into the possibilities of using non-ethernet transport, some of which can provide more strictly managed performance guarantees.

Linux and automotive computing security

Posted Oct 10, 2012 22:36 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

So the CD player can instead spam all other subsystems? And going to PtP links is distinctly a step back.

Never mind that you now need a complex IP stack capable of supporting PKI on each freaking sensor. If that's not a definition of madness, then I don't know what is.

CAN bus is fine for what it does. It's GREAT. The problem is, it's an internal bus that's being abused to interface with external systems.

Adding PKI to each sensor is like adding PKI to your hard drive to fight against computer viruses.

Linux and automotive computing security

Posted Oct 10, 2012 22:51 UTC (Wed) by SLi (subscriber, #53131) [Link]

I am a Linux geek, and I work with safety critical systems (mostly safety critical methodology research). I think anyone who thinks Linux, or for that matter Windows or any other operating system most people here would have heard of, could run in a safety-critical setting in the foreseeable future simply misunderstands the nature of safety critical systems.

The first thing to understand is that safety is not the same as security. Most of this article talks about security. Security can affect safety, and certainly the safety critical industry should take it better into account, but it's only a very small part of the story. Also, many of the attacks mentioned in this paper do not concern safety at all. For example, someone being able to steal your car is not a safety concern. Safety concerns are exactly those that can lead to bodily harm or death of someone operating the system or other people.

As an example, a window closing mechanism in a car might be considered very slightly safety critical if it would be possible for it to chop off a finger if it malfunctions. Brakes would have a higher safety criticality level, since a car moving at a high speed without functional brakes can cause the death of not only the driver but also other people. The car stereos would generally be considered non-critical, but to certify the entire system, you will have to show sufficient separation between critical and non-critical systems, and also between less critical and more critical systems.

There are generally certain requirements that regulators require for safety critical code. Generally any code to be run in a safety critical context needs to be developed with an extraordinarily thorough and rigorous process. The entire process must be well documented, starting from the design, but also encompassing coding, testing, etc. There is a good rationale for this: Testing will never find all your bugs. Rigorous design and such things won't find them all either, but it won't hurt. The point is this: Safety is something that you need to build in from the beginning; you just cannot add it later.

This necessarily results in safety critical code being comparatively speaking very simple. The requirements also become more stringent when the criticality level (for example, SIL levels, where 1 is the least critical, such as car windows, and 4 would be the most critical, such as aeroplane control systems and nuclear reactors) rises. I would be surprised if there are very many high-criticality systems as complex as a TCP/IP stack, let alone the Linux kernel. You can run them in the car stereos, though.

Also, barring some very significant advance in program verification, the Linux kernel can never even be tested to the level required. Generally the lowest levels of criticality require things like a test suite with 100% coverage. To see the kind of testing required for higher levels, take a look at, for example, Modified Condition/Decision Coverage (or see below). The only open source piece of software I know of that claims to have 100% MC/DC coverage is SQLite, and even they misunderstand it and basically only have plain Condition/Decision Coverage.

One of the requirements in MC/DC is that, for each source code level branching condition (boolean expression), you need to separately show (with tests in your test suite) for each subexpression that there is a pair of inputs where that expression only differs in the value of that subexpression such that different branches are taken. That is, if you have a condition: if (a || (b && c)) { ... } else { ... }

you will have to write the following tests:

  1. A test that makes a true (e.g. a=1, b=0, c=0, branch taken=then)
  2. A test that makes a false with the same b and c as in the above test AND takes the other branch (e.g. a=0, b=0, c=0, branch taken=else)
  3. A test that makes b true (e.g. a=0, b=1, c=1, branch taken=then)
  4. A test that makes b false with the same a and c as in the above test AND takes the other branch (e.g. a=0, b=0, c=1, branch taken=else)
  5. A test that makes c true (the above a=0, b=1, c=1, branch taken=then test suffices for this)
  6. A test that makes c false with the same a and b as in the above test AND takes the other branch (e.g. a=0, b=1, c=0, branch taken=else)
  7. A test that makes (b && c) true (e.g. a=0, b=1, c=1, branch taken=then)
  8. A test that makes (b && c) false with the same a as in the above test AND takes the other branch (e.g. a=0, b=0, c=1, branch taken=else)
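Collapsing the duplicates, those eight obligations reduce to five distinct input vectors. A toy harness (Python here purely for illustration) might look like:

```python
def branch(a, b, c):
    """The condition under test: take the then-branch iff a || (b && c)."""
    return "then" if (a or (b and c)) else "else"

# The five distinct MC/DC vectors from the list above (tests 3, 5, and 7
# share one vector, as do tests 4 and 8).
MCDC_VECTORS = [
    ((1, 0, 0), "then"),  # 1: a alone flips the outcome vs. vector 2
    ((0, 0, 0), "else"),  # 2
    ((0, 1, 1), "then"),  # 3, 5, 7
    ((0, 0, 1), "else"),  # 4, 8: b flips the outcome vs. vector 3
    ((0, 1, 0), "else"),  # 6: c flips the outcome vs. vector 3
]

def run_suite():
    """Return True iff every vector takes its required branch."""
    return all(branch(a, b, c) == expected
               for (a, b, c), expected in MCDC_VECTORS)
```

Five vectors for one three-term condition; now multiply by every condition in a kernel-sized code base.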

I hope you are starting to see the hopelessness of testing Linux, or basically any other piece of code that either is moderately large or not designed from the beginning to be so tested, to such a strict standard...

Note that the safety critical people do not claim this process is perfect. It just is a process that results in a lot of eyeballs staring at the code, the specification, and the test cases, thinking about them and testing them from nearly every possible angle imaginable. It still happens that there are bugs, but they are certainly much more rare than bugs in the Linux kernel :)

Linux and automotive computing security

Posted Oct 11, 2012 1:44 UTC (Thu) by quotemstr (subscriber, #45331) [Link]

Thanks for the interesting explanation of the development process behind safety-critical systems. Would it be safe to say that for these systems, the majority of the actual effort is expended on writing testcases?

Linux and automotive computing security

Posted Oct 11, 2012 8:16 UTC (Thu) by hickinbottoms (subscriber, #14798) [Link]

Being involved in this world as well I can say that whilst testing is a considerable part of the process (the back-end of the development model, if you like), the majority of the effort lies in the front-end during and before the design phase.

You can't design a safety-critical system without knowing what the safety requirements are, and they're often harder to identify than you imagine. For example a hypothetical brake-control system might have a safety requirement that the brakes are applied within X ms of being commanded, with Y reliability, which is a fairly easy requirement to spot. Slightly harder is that it's also likely to be potentially hazardous for the brakes to be applied when not commanded, so you need to spot that and engineer the requirements appropriately -- there have been aircraft losses during landing for such failures if my memory serves me correctly.

It's this identification of the requirements and the associated safety analysis process involving tools such as fault trees, event trees, FMEA/FMECA, hazard analysis/logs, SIL analysis etc that makes safety-critical development really hard and expensive. It is, however, critical to get this right before diving into coding and testing since as we know changing the requirements of systems after they're built is difficult and often leads to unexpected behaviours being implemented. The high-integrity world is littered with examples of failures caused by changed requirements or systems being used to fulfil requirements that were never identified.

Because the resulting design of the system is heavily-influenced by the requirements analysis that got you there it's also very difficult to make a convincing safety case and retrospectively develop a safety substantiation for a system that hasn't been designed that way from the outset.

As the parent poster says, you can't stop non-trivial software from having bugs and crashing, but you can build a confident argument that such failure cannot lead to a hazardous condition with an intolerable frequency. The safety analysis process lets you make such statements with evidence.

It's always a little disappointing that at the end of the day you just end up with 'normal-looking' software that isn't somehow magical and better -- but it's the confidence that it's more likely to do what's expected and that when it doesn't it can't lead to situations you've not at least considered that's important.

Linux and automotive computing security

Posted Oct 11, 2012 15:01 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link]

You can't design a safety-critical system without knowing what the safety requirements are, and they're often harder to identify than you imagine.

Yes, and in this case, it turns out that one of the things the designers failed to identify is that they couldn't necessarily trust all of the other systems on the CAN. It's easy to understand why somebody might make that mistake, but the major thrust of the security researchers' article is that it is a mistake. Now they need to go back to the drawing board and design a better set of specifications for their networking component so it won't let the system be subverted by malicious messages.

Writing tests cases

Posted Oct 11, 2012 11:57 UTC (Thu) by man_ls (guest, #15091) [Link]

Would it be safe to say that for these systems, the majority of the actual effort is expended on writing testcases?
I hope that, in this day and age, the effort on writing and running test cases for any non-trivial system is the majority of the coding effort! In a recent interview Kernighan says that in his classes:
I also ask them to write tests to check their code, and a test harness so the testing can be done mechanically. These are useful skills that are pretty much independent of specific languages or environments.
Given that tests should be about half the size of the system (for a big system), and that they are run repeatedly, they should take the majority of the coding effort. For critical systems this amount should be probably higher.

I am just speaking about coding, but obviously it is not the only development activity. I am not surprised to learn from the above poster that analysis and design take even longer than coding.

Writing tests cases

Posted Oct 18, 2012 18:22 UTC (Thu) by TRauMa (guest, #16483) [Link]

Then again, nobody pays for test cases unless regulations force them to. :(

Linux and automotive computing security

Posted Oct 11, 2012 14:57 UTC (Thu) by ortalo (guest, #4654) [Link]

Certainly. And in some cases, manual coding in a conventional language is even nearly prohibited: code is generated from the specification. (With the testcases, the timing calculations, etc.) And even in this case, the testing effort is paramount.

Linux and automotive computing security

Posted Oct 11, 2012 15:00 UTC (Thu) by ortalo (guest, #4654) [Link]

The last line of the above comment disappeared mysteriously. It was:

But is that enough for security (!= safety)?

Linux and automotive computing security

Posted Oct 11, 2012 13:03 UTC (Thu) by etienne (guest, #25256) [Link]

May I ask, which part of your build chain do you trust? I.e.:
- Do you test the source code (and so trust the compiler)? Then you can reuse that unmodified and tested source code from other parts of the software, and you let the compiler optimise (inline function calls).
- Do you test the libraries (and so trust only the linker)? Then you can call any function of that library from other parts of the software.
- Do you test the hexadecimal code (and so trust only the hardware, i.e. FLASH + processor + memory)? Then it is really difficult to get every "if" fully checked that way...

Linux and automotive computing security

Posted Oct 11, 2012 20:37 UTC (Thu) by SLi (subscriber, #53131) [Link]

Generally speaking the compiler used also needs to be certified to the same safety critical standard, or else you will need to spend a nontrivial effort in showing equivalence of the source code and the machine code. As you can probably imagine, optimizing compilers are not used a lot :) The same goes for other parts of the toolchain that can affect the final output.

And the same goes for the microcontroller used. The hardware needs to be certified. You obviously also cannot use libraries that are not certified to the same standard. Though we're mostly talking about small microcontrollers anyway; generally everything is always linked in statically.

There are ways to incorporate complexity without doing it in safety critical code, though. Generally you develop as little safety-critical code as possible, and specify a simple interface over which it interfaces to non-critical code. Then you certify it with the argument that it will behave safely regardless of what input it gets from the non-trusted source (often you still don't need to consider adversarial situations). How this is accomplished really depends on the application: The simplest case is the one where you can ensure safety simply by shutting down the system in case of invalid input. For example, a nuclear reactor can be shut down, or some other heavy machine may be simply stopped (power cut). As you can imagine, this is not such a good solution for, say, aeroplanes. There the usual safe mode means falling back to manual operation.

Usually this separation also means separate hardware for the critical and noncritical parts. However, if you have a kernel certified to a certain level, where the certification is for noninterference of non-critical processes with critical processes, you might be able to run critical and noncritical tasks on the same microcontroller. In practice this is hard to do, as the kernel is a complex piece of software to develop.

I hear there are all kinds of crazy hardware solutions for this, especially in the automotive industry, as profit margins drive developers towards single-chip solutions. Like microcontrollers where every other instruction has access to some privileged memory areas and operations, and the others do not. Thus you can get a very simple kind of separation of trusted and untrusted code without a full-blown MMU (or even without a full MPU) and without a page table.

Linux and automotive computing security

Posted Oct 16, 2012 18:13 UTC (Tue) by Baylink (guest, #755) [Link]

Let us reflect...

http://lwn.net/Articles/372224/

(Alas, it appears that DejaGoogle now *requires* a login even to read news articles; shame that hasn't garnered more complaint)

Linux and automotive computing security

Posted Oct 17, 2012 9:28 UTC (Wed) by njwhite (guest, #51848) [Link]

> Alas, it appears that DejaGoogle now *requires* a login even to read news articles

In a characteristically sneaky way, that's only half true. It requires a login if you have a Google cookie, so they reckon you *have* a Google login. Otherwise they let you through with no problem. (I haven't tested this extensively; it's just what seems to be happening in my experience. It's possible it's also location-based; as I generally use Tor, I would expect to see inconsistent behaviour if they were doing that.)

Linux and automotive computing security

Posted Oct 17, 2012 17:01 UTC (Wed) by Baylink (guest, #755) [Link]

That. Is.

Evil.

Linux and automotive computing security

Posted Oct 18, 2012 18:26 UTC (Thu) by TRauMa (guest, #16483) [Link]

It is the same for public Google Docs documents. While I understand the rationale (logging in usually gives you more actions on the resources you see, in this case the option to copy the document to your account), making this explicit and providing a "proceed without login" button wouldn't have hurt. Not doing so suggests that the real motivation is to get people to log in as often and for as long as possible for better data tracking (on the other hand, if you have a Google cookie, the tracked data will be high quality anyway).

Linux and automotive computing security

Posted Oct 12, 2012 13:37 UTC (Fri) by peter-b (guest, #66996) [Link]

> I would be surprised if there are very many high-criticality systems as complex as a TCP/IP stack, let alone the Linux kernel.

As I mentioned in another post, there's a very good counter-example: avionics subsystems within the SpaceX Falcon 9 launch vehicle all communicate over TCP/IP. And the Dragon capsule that docked at the ISS this week has avionics that run the Linux kernel exclusively. I think it would be fair to say that those more-or-less *define* "high-criticality"! ;-)

Linux and automotive computing security

Posted Oct 22, 2012 12:47 UTC (Mon) by pflugstad (subscriber, #224) [Link]

Yes, but they probably didn't need to get FAA DO-178 certification (or whatever the equivalent is for automobiles or health care instruments, etc).

Linux and automotive computing security

Posted Oct 13, 2012 2:02 UTC (Sat) by giraffedata (guest, #1954) [Link]

someone being able to steal your car is not a safety concern

I don't mean to take anything away from the conclusion, but I thought I'd point out that some people consider someone being able to steal a car to be a safety concern.

The US agency empowered to restrict, for safety reasons, what kinds of cars can be built (NHTSA, I believe) requires many anti-theft features, such as locking steering wheels. It claims this authority based on statistics that show cars are driven significantly more carefully by their owners than by their thieves. Stolen cars are especially more likely to be in high speed chases with the police that end in bloody crashes.

Linux and automotive computing security

Posted Oct 11, 2012 2:07 UTC (Thu) by daniels (subscriber, #16193) [Link]

As much as I hate to be that 'well actually' guy, there is some GENIVI code available now. The LayerManager (essentially window manager/compositor/shell framework) has just been opened up, as well as the audio routing framework: http://git.projects.genivi.org

Linux and automotive computing security

Posted Oct 16, 2012 12:55 UTC (Tue) by wookey (guest, #5501) [Link]

Yes, and there are ITPs for Debian for the GENIVI parts that are free software but not already in distros. (The basic packaging has already been done, as this code has been used with the Ubuntu Automotive remix as the GENIVI demonstration distro for some time.)

CAN vs Internet-0

Posted Oct 11, 2012 9:36 UTC (Thu) by jnareb (subscriber, #46500) [Link]

Could CAN bus be replaced by Internet-0?

Linux and automotive computing security

Posted Oct 11, 2012 15:56 UTC (Thu) by iabervon (subscriber, #722) [Link]

I think a substantial portion of the actual problem is using the CAN bus for things other than status data. A lot of things become much easier to secure if you only have an ECU with clearly-specified functionality bridging the safety-critical and non-safety-critical busses, and that ECU can't be reprogrammed arbitrarily over either bus. It is relatively straightforward to reduce your attack surface by never bridging packets from one network to the other; the bridge device would sit on both networks and report conditions which it determines from the sensors. So it would look at wheel sensors and report "the car is in motion", and look at the wireless key receiver and report "disable the ignition". The compromised CD player wouldn't be able to DoS or spoof the brake pedal without compromising the bridge ECU, and it should be possible to have the bridge use CAN hardware that can't use high-priority IDs on the safety-critical bus.

Automotive circuit security.

Posted Oct 11, 2012 20:05 UTC (Thu) by smoogen (subscriber, #97) [Link]

I am always amazed by people thinking that cars haven't been hackable until recently.

People have been hacking the "computers" in cars since at least the 1970's, for ill or good intent. While the computer in a 1972 VW 411 was basically a block of circuits dipped in epoxy, if you grounded out certain circuits it would change the performance. By the 1980's people were hacking later systems to get around smog controls or change the various speed controls so that you could get different performance. Supposedly in the same logic paths you could also cause the power brakes in certain cars to fail after the car went over X mph, depending on what resistor you ran between two terminal points. [The original hack was to turn on the horn and lights when that happened, but someone saw that the circuit controlling the brakes was open as well.]

While that was physical hacking, there were chipset hacks once 8-bit controllers got cheap enough to be put in.

In all that time, security wasn't considered an issue because, well, "Who would do that?" and "Well, now we have to replace the parts on the next 8 model years if we do that."

Linux and automotive computing security

Posted Oct 12, 2012 7:39 UTC (Fri) by PinguTS (guest, #87177) [Link]

Sorry, but this article shows, that the author has no glue about embedded network design. He basically suggest to trade robustness and reliability for security.

The funniest thing is, when he writes: "CAN bus needs to be replaced; working with a standard IP stack, instead, means not having to reinvent the wheel."

Because what has he written? If you are familiar with networks, then you know CAN is like Ethernet. So what does he has written? "Ethernet needs to be replaced; working with a standard IP stack, instead, means not having to reinvent the wheel."

Ethernet also has no security designed in. I don't know about any Data Link Layer Protocol that has security designed in. Because security is not part of the Data Link Layer functions. Security is part of the network layer and the session layer.

That is the same for IP. Also IP has no security designed in. The security is added by the higher layers on top of IP. Like where is security in FTP, Telnet, HTTP, SMTP, POP3, SNMP, and so on. It is always added afterwards.

Then, the first paper, from 2010, describes common knowledge. What does the paper describe? It describes that you can perform a firmware download and thereby modify the behavior of an ECU. Actually, it does not matter what type of network I run or what type of OS runs on the ECU: if the authentication protocol is weak, then I can download anything. Anybody in security knows that authentication should not depend on the underlying network protocol or the operating system. So where is the connection between the two?
Authentication in particular is hard in any embedded network. Think about the Xbox 360, the PS3, and so on. You have to put the keys somewhere in the devices themselves, which always means there are some ways to extract those keys.

The second paper also has some imprecise descriptions. For example, writing that the brakes can be disabled remotely implies that braking without electronics is impossible. WRONG. That is why no current car has brake-by-wire implemented (except some prototypes). A mechanical backup is always required, because the worst-case scenario is that the power system fails and the brakes still have to work. FULL STOP. That is a requirement in the US and in Europe (maybe not so much in India or China, I don't know).
What you lose is brake assistance. OK, I think most people would no longer be able to brake a car without brake assistance, but that is a different story.

I could go on and on.

Linux and automotive computing security

Posted Oct 12, 2012 10:05 UTC (Fri) by ortalo (guest, #4654) [Link]

Well, I understand your position (really). However, I do not agree.
Personally, I do not want to trade robustness and reliability for security. I want robustness, reliability, *and* security.

However, the level of open and verifiable guarantees that security necessitates is apparently something that manufacturers are not ready, or not willing, to offer. As a security professional, this makes me distrustful (and not only about security, but also about robustness and reliability, by the way). And I know too that overcoming management's lack of interest in, and funding for, security is hard, so...

I'll certainly concede that it is my job to be doubtful. But I do not think I am being paranoid here, and I am pretty sure that things could be done *much* better in this area.

Linux and automotive computing security

Posted Oct 12, 2012 11:44 UTC (Fri) by gnb (subscriber, #5132) [Link]

"CAN bus needs to be replaced; working with a standard IP stack, instead, means not having to reinvent the wheel."

I don't think that's as silly as you're claiming: yes, taken literally that mixes up layers, but I'd say the author is making the point that CAN is not a suitable layer 2 to run IP over (you could probably make it work, but it would have to be pretty horrible given the frame length limits), so to use an IP stack you would need to replace CAN.

Linux and automotive computing security

Posted Oct 12, 2012 20:25 UTC (Fri) by ggreen199 (subscriber, #53396) [Link]

One thing missed in the discussion of replacing CAN with Ethernet is that CAN is priority based. You can prove which message will get on the bus at a given time (because of the priority-based message IDs), which you can't do on Ethernet. If you look at the design of ARINC 664 (real-time TCP/IP over Ethernet on aircraft), you will see that this is not trivial to solve. For critical systems on the car, replacing CAN with Ethernet is not a slam dunk.
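The arbitration rule being referred to can be sketched in a few lines. This is a simplified illustration, not production code: during CAN arbitration each node transmits its identifier bit by bit, a 0 bit ("dominant") overrides a 1 ("recessive"), and so the numerically lowest ID deterministically wins the bus. The IDs below are made up for the example.

```python
def arbitrate(message_ids):
    """Return the ID that wins CAN arbitration.

    Because a dominant 0 bit beats a recessive 1 at each bit position,
    bitwise arbitration is equivalent to picking the lowest numeric ID.
    """
    return min(message_ids)

# Three nodes start transmitting simultaneously; the lowest
# (highest-priority) hypothetical 11-bit ID wins, every time.
contenders = [0x100, 0x2A0, 0x7FF]
print(hex(arbitrate(contenders)))  # prints 0x100
```

This determinism is exactly what standard Ethernet lacks: two colliding Ethernet senders back off randomly, so no comparable proof about which frame goes first is possible.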

Linux and automotive computing security

Posted Oct 12, 2012 21:05 UTC (Fri) by dlang (guest, #313) [Link]

> You can prove which message is going to get on the bus at a given time (Because of the priority-based message id's)

except for the tiny detail that the priority of the messages is a software thing, so it can be forged.

Besides, an easy workaround for needing priority to 'prove' which message will go on a bus first is simply to over-provision the network speed by a ridiculous amount. If you were to put in gigabit Ethernet, the priority is really unlikely to matter, because the delay from waiting just isn't significant.
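The over-provisioning argument comes down to simple arithmetic. As a back-of-the-envelope sketch (my numbers, not the commenter's): the worst case for a high-priority message on a non-preemptive link is waiting behind one maximum-size frame already in flight.

```python
# Maximum Ethernet frame on the wire: 1500 payload + 38 bytes of
# headers, preamble, and inter-frame gap = 1538 bytes.
FRAME_BYTES = 1538

def worst_case_wait_us(link_bps):
    """Serialization delay of one max-size frame, in microseconds."""
    return FRAME_BYTES * 8 / link_bps * 1e6

print(round(worst_case_wait_us(1e9), 1))   # gigabit: prints 12.3 (us)
print(round(worst_case_wait_us(10e6), 1))  # 10 Mb/s: prints 1230.4 (us)
```

On gigabit the worst-case wait is on the order of 12 microseconds, which supports the "priority is unlikely to matter" point; at lower speeds, or near saturation as the next comment notes, the argument weakens.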

Linux and automotive computing security

Posted Oct 12, 2012 21:15 UTC (Fri) by ggreen199 (subscriber, #53396) [Link]

Well, of course a rogue box can forge the ID, but that is the security aspect; the safety aspect is that the priority is hardware-determined. Two different things.

And of course you can over-provision the network, except when you are already pushing the limits. If you are near the limit, how do you prove which message goes on the bus first? This isn't theoretical: we had this very problem (on Ethernet, not CAN). So where CAN does what you need, I stand by my comment that it is not a slam dunk to replace it. Just putting in Ethernet doesn't prove you will meet your real-time deadline when you HAVE to.

Linux and automotive computing security

Posted Oct 16, 2012 18:20 UTC (Tue) by Baylink (guest, #755) [Link]

Not according to other comments in this thread, which suggest that CAN bus transceivers can be provisioned on a board that is *physically incapable* of generating high-priority addresses.

Linux and automotive computing security

Posted Oct 13, 2012 6:11 UTC (Sat) by alison (subscriber, #63752) [Link]

Those interested in V2V networking and its security could do no better than to subscribe to the new IETF-ITS mailing list:

https://www.ietf.org/mailman/listinfo/its

Not sure if there is an archive, but there have been many substantive discussions already.

GENIVI is releasing its code as fast as its limited support staff can manage. What about the code for the Google self-driving cars? Meanwhile, we can always take refuge in existing open source hardware and software. Check out this amazing U. Sherbrooke project:

http://sourceforge.net/apps/mediawiki/openecosys/index.ph...

H/T ToxicGumbo on #linuxice.

Linux and automotive computing security

Posted Jan 28, 2013 11:27 UTC (Mon) by akostadinov (guest, #48510) [Link]

Isn't it mega-easy? Just have a read-only connector where user requests are filtered to be safe, and another connector where you can flash firmware, etc. That would be much easier than making every sensor/device security-aware.

This could be extended, for example, to filter communications with the entertainment system so that only the traffic expected from that system is accepted: a kind of layer-7 firewalling/proxying. I have not read about the attack vectors available, but I don't see any other sensible solution.
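The filtering idea above could be sketched as a gateway that sits between the entertainment system and the vehicle bus and forwards only whitelisted message IDs. This is a hypothetical illustration of the commenter's suggestion; the IDs, names, and the whitelist itself are all invented for the example.

```python
# Hypothetical whitelist of message IDs the head unit is expected
# to send onto the vehicle bus (made-up values for illustration).
ALLOWED_FROM_HEAD_UNIT = {
    0x244,  # e.g. volume/mute status
    0x3D1,  # e.g. navigation display request
}

def gateway_filter(message_id, payload):
    """Forward a message only if its ID is on the whitelist."""
    if message_id in ALLOWED_FROM_HEAD_UNIT:
        return ("forward", message_id, payload)
    return ("drop", message_id, payload)

print(gateway_filter(0x244, b"\x01")[0])  # prints forward
print(gateway_filter(0x7FF, b"\x00")[0])  # prints drop
```

A real gateway would also have to validate payload contents and rates, not just IDs, which is where the "layer 7" part of the suggestion comes in.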


Copyright © 2012, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds