Does adding an IP address change embedded designs?

Thursday, September 15th, 2011 by Robert Cravotta

A recent analysis from McAfee titled “Caution: Malware Ahead” suggests that the number of IP-connected devices will grow fifty-fold over a ten-year period from last year’s count. The bulk of these devices are expected to be embedded systems. Additionally, connected devices are evolving from a one-way data communication path to a two-way dialog – creating potential new opportunities for hacking embedded systems.

Consider that each Chevy Volt from General Motors has its own IP address. The Volt uses an estimated 10 million lines of code executing over approximately 100 control units, and the number of test procedures to develop the vehicle was “streamlined” from more than 600 to about 400. According to Meg Selfe at IBM, they use the IP-connection for a few things today, like finding a charging station, but they hope to use it to push more software out to the vehicles in the future.

As IP-connected appliances become more common in the home and on the industrial floor, will the process for developing and verifying embedded systems change – or is the current process sufficient to address the possible security issues of selling and supporting IP-connected systems? Is placing critical and non-critical systems on separate internal networks sufficient in light of the intent of being able to push software updates to both portions of the system? Is the current set of development tools sufficient to enable developers to test and ensure their system’s robustness from malicious attacks? Will new tools surface, or will they derive from tools already used in safety-critical application designs? Does adding an IP address to an embedded system change how we design and test them?


15 Responses to “Does adding an IP address change embedded designs?”

  1. T.F. @ LI says:

    Having worked on network-enabled embedded devices, I can say that adding the ability to operate over TCP/IP does change how the devices are designed and, especially, tested. One needs to consider the overhead imposed by TCP/IP and think through all possible scenarios that could potentially “defeat” the device. One must also consider the nature of the incoming and outgoing traffic on the device. How often are messages exchanged with servers or other devices? Could the level of traffic be construed by a server or router as an attack (e.g. DoS)? The need still exists for adequate testing tools that allow traffic to be fully traced and synthesized in a complete network configuration. In one project we relied heavily on Wireshark. Though helpful, we could have benefited from better tools.

  2. S.K. @ LI says:

    The lightweight TCP/IP stack (lwIP) is more suitable for embedded systems.
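
    As an illustration of how compact lwIP’s raw (callback) API is, here is a minimal TCP echo listener sketch. It assumes a recent lwIP build; the choice of port 7 and the stripped-down error handling are assumptions for brevity, not production code.

        /* Minimal lwIP raw-API TCP echo listener (sketch only; error checks trimmed). */
        #include "lwip/tcp.h"

        static err_t echo_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
        {
            if (p == NULL) {                  /* remote side closed the connection */
                tcp_close(pcb);
                return ERR_OK;
            }
            tcp_write(pcb, p->payload, p->len, TCP_WRITE_FLAG_COPY); /* echo first segment */
            tcp_recved(pcb, p->tot_len);      /* tell lwIP the data has been consumed */
            pbuf_free(p);
            return ERR_OK;
        }

        static err_t echo_accept(void *arg, struct tcp_pcb *newpcb, err_t err)
        {
            tcp_recv(newpcb, echo_recv);      /* register the receive callback */
            return ERR_OK;
        }

        void echo_init(void)                  /* call once after lwIP is initialized */
        {
            struct tcp_pcb *pcb = tcp_new();
            tcp_bind(pcb, IP_ADDR_ANY, 7);    /* port 7 is the classic echo port */
            pcb = tcp_listen(pcb);
            tcp_accept(pcb, echo_accept);
        }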

  3. L.H. @ LI says:

    If an embedded device is to be connected to a public network, then security issues must be considered. Not long ago it was revealed that Iranian centrifuges were crippled by a virus thought to have originated in Israel.
    Embedded devices with firmware update capabilities connected to public networks are using certificate-based security schemes to ensure that only authorized accesses are allowed. This means acquiring and storing certificates along with the firmware.
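
    A minimal sketch of that verification step, assuming the Mbed TLS library is available on the device (the functions below are its public-key and message-digest APIs; key provisioning and anti-rollback checks are omitted):

        /* Verify a detached signature over a firmware image before applying it.
         * Sketch only: returns 0 when the signature matches the trusted public key. */
        #include <stddef.h>
        #include "mbedtls/md.h"
        #include "mbedtls/pk.h"

        int verify_firmware(const unsigned char *image, size_t image_len,
                            const unsigned char *sig, size_t sig_len,
                            const unsigned char *pubkey_pem, size_t pubkey_len)
        {
            unsigned char hash[32];
            mbedtls_pk_context pk;
            int ret;

            /* Hash the image with SHA-256. */
            ret = mbedtls_md(mbedtls_md_info_from_type(MBEDTLS_MD_SHA256),
                             image, image_len, hash);
            if (ret != 0)
                return ret;

            mbedtls_pk_init(&pk);
            /* For PEM input, pubkey_len must include the terminating '\0'. */
            ret = mbedtls_pk_parse_public_key(&pk, pubkey_pem, pubkey_len);
            if (ret == 0)
                ret = mbedtls_pk_verify(&pk, MBEDTLS_MD_SHA256,
                                        hash, sizeof(hash), sig, sig_len);
            mbedtls_pk_free(&pk);
            return ret;   /* 0 means the image is authentic; anything else rejects it */
        }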

  4. L.R. @ LI says:

    L., there is no evidence that Stuxnet originated in Israel. Also, having read the details of this malware, I realized its main vector is through the use of a Windows machine running the PLC development tools: it modifies the PLC source code and then relies on the developers to compile and download that modified code into the PLC (embedded device), which does not even have an IP address. The point is that certificates would not help in this case, because the tool would have generated a perfectly valid signature over the maliciously modified source code, which would then be deemed legitimate for all purposes.

    As to the main topic of this discussion, adding TCP/IP networking to an embedded system detracts significantly from its “determinism”, which does pose a problem, and flaws in the designed architecture and code implementation may indeed open new vectors for various forms of attack. But beyond the obvious hype around the subject, there is really nothing new about it.

    Embedded systems with all sorts of communications capabilities have existed for decades, and the fact that the protocols were something other than TCP/IP did not prevent various forms of attack from occurring over the years. There have been incidents where point-of-sale terminals and automatic teller machines were hacked via their communications capabilities.

    Firmware updates “over the air” for embedded systems such as those running vehicles could be a source of assorted quality issues, both inadvertent and malicious, and should not be underestimated.

    The good news, however, is that there is already a lot of experience in the design of networked embedded systems, and it may help prevent repeating past mistakes.

  5. KER @ LI says:

    For me, as an instrumentation engineer, several reasons suggest that security for TCP/IP-enabled embedded devices isn’t a big deal:

    - Usually malware and hacking aim either to force ads on people around cyberspace or to gather information about those people for a future wave of ads, or perhaps for criminal abuse of their information.

    - PC-based node terminals in cyberspace all use the same architectures and protocols, and even the same security techniques to protect themselves, and those architectures (both hardware and software) are widely known to people around the globe. Embedded systems are different: their architectures, and even the software written for them, require a specialist in the area, and in some cases you need to study three or four intensive courses to master one architecture and its software development. This means the usual hacker and malware tools are, so far, not compatible with TCP/IP-enabled embedded devices.

    - In embedded systems, hardware resources and CPU traffic usually aren’t huge or jammed, and they are easy to monitor; I believe the ports are much easier to monitor than on our Windows/Linux computers.

    - Network connectivity for embedded systems is usually limited to sending relatively small amounts of data or receiving commands from another node or server. This limited connectivity isn’t motivating enough for hackers or malware developers to get into; it would be hectic for them, and the results wouldn’t be rewarding.

    I may not have highlighted the main dangers, or other reasons not to worry, but this is my opinion from my limited view and knowledge of security in embedded systems.

  6. A.M. @ LI says:

    @K.: Your logic is correct, but only for short-term applications. This approach to design is very dangerous. It is much easier and cheaper to patch a security hole in a PC or a hand-held device like a cell phone or PDA than in a smaller embedded system. Think cyber-warfare and attacks against national infrastructure. Think hijacking your car’s critical systems via Bluetooth access to its network. The latter has only one kind of attack known to me – Car Whisperer – so far without major impact, since it could only break into the car audio system, which was the only system on the Bluetooth network. /* OK, Bluetooth is not IP, but it is a short-range wireless network with its own protocol stack too. */

  7. KER @ LI says:

    I agree with the importance of security in embedded systems, especially when it comes to connectivity over a network, whether public or private, but it shouldn’t be our main concern right now. Security applications are usually memory hungry, and in embedded systems you don’t have the privilege of large memories or buffers to play with. Besides, as I mentioned previously, the network traffic of an embedded device can easily be monitored for intrusions, strange data transfers, and unusual resource usage, and locks in the application layer of the embedded device can be an efficient, low-cost solution.

    On the other hand, to protect networks that monitor and/or control high-importance systems, such as a network of controllers inside a nuclear reactor or even a car network, the best way is to run the security application (e.g. a firewall) on the main server and make it responsible for intrusion protection; this reduces both the computational load and the resource demands on the embedded device side. Another option is to isolate networks of strategic sensitivity (national-security projects, infrastructure) from global and public networks to minimize the risk of intrusion, viruses, malware, and hacking; that is the safer solution and the one preferred by many organizations.

    My point is to minimize the network-security load required of the embedded device and to save resources, because the risks that endanger networked embedded devices aren’t among the highest concerns and priorities in today’s cyberspace or embedded-systems security; however, I agree it is important, as you said, from a long-term perspective.

    cheers :)

  8. R.M. @ LI says:

    You are less secure than you think and only as secure as you can prove. This issue is addressed in a more formal way with ISA-99 (Industrial Automation and Control Systems Security). http://www.isa.org/MSTemplate.cfm?MicrositeID=988&CommitteeID=6821 . Homeland Security wants to see all control systems, especially for critical infrastructure, become more robust and resistant to assaults (or accidents).

  9. A.A. @ LI says:

    Any system that relies strictly on software security for protection can and will be compromised at some point in time. This is where a mechanical safeguard helps – many motherboards have one to prevent someone from overwriting the BIOS of a PC without the user’s authorization. On mine I have to move a jumper to allow the programming. In the case of an automobile, this would prevent someone from remotely tampering with the vehicle’s operation by requiring something as simple as throwing a “programming allowed” switch, although I would hope it would require some more secure form of authorization.
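
    A sketch of how that kind of physical interlock might gate an update in firmware; the two extern functions are hypothetical stand-ins for whatever the real board support package provides:

        #include <stdbool.h>
        #include <stddef.h>

        /* Hypothetical BSP hooks, named here only for illustration. */
        extern bool board_update_switch_enabled(void);                /* reads the physical jumper/switch  */
        extern int  flash_apply_image(const void *image, size_t len); /* writes the new firmware to flash  */

        int try_firmware_update(const void *image, size_t len)
        {
            /* Refuse to touch flash unless the operator has physically enabled programming. */
            if (!board_update_switch_enabled())
                return -1;                    /* update rejected: switch not thrown */

            /* A fuller design would also verify a signature here before writing. */
            return flash_apply_image(image, len);
        }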

  10. L.H. @ LI says:

    Physical security is great, but not always practical. For example, large banks have thousands of ATMs in the field, and it is not practical to send a tech and a guard out to each branch to enable a firmware upgrade on the equipment. This is where certificate-based security schemes are used to enable pushing firmware updates to the field.

  11. L.R. @ LI says:

    L., you bring up a good point: the whole discussion is about a tradeoff, a compromise between the cost of maintenance (e.g. upgrades) and security (i.e. the chance of malicious code modification). There is a body of knowledge in the insurance industry that knows how to express risk in monetary terms and then put a price on it. With regard to banks and ATM networks, I am pretty sure their equipment has always been insured, and therefore the insurance companies would be the ones to decide whether to allow unattended remote code updates or to require the physical presence of a guard or a bank clerk with authorization codes to allow any modification of the embedded software.

    Karim, security does not have to be an add-on piece of software; this is a misconception arising from the PC industry. If you look at this not just as security (i.e. protection against an intentional harmful act) but as quality (protection against inadvertent malfunction or accident), you may arrive at a better starting point.

    Here is an anecdote: many years ago I worked for an industrial equipment maker that produced machines using extremely dangerous materials – not nuclear, but explosive and toxic at the same time. So, in theory at least, one might see those as prime targets for subversion. At the time, however, our main concern was inadvertent malfunction (i.e. quality), but the solution is equally effective for sabotage prevention. A tiered system of protections was put in place using mechanical, electrical, and software interlock mechanisms, all working to prevent any situation that might lead to an uncontrolled reaction (explosion) or a leak of toxic materials. These machines are networked with TCP/IP, but only on isolated networks. The worst that could happen if any software element misbehaved in any way is that some production batch would be botched, or the machine would need to undergo several hours of “cleaning” before it could resume production, but nothing more. Electrical circuits with hard-wired logic protect the machine from any serious damage to itself or its environment, and on top of that certain extremely dangerous conditions are prevented mechanically. Of course, if an authorized person sabotages the mechanical and electrical interlocks, they do not really need to hack the software or synthesize a DDoS attack to make the thing go boom.

    In summary, the need for reliability and safety in embedded systems has, in most instances, already addressed the security aspect implicitly. This is in contrast to the PC industry, where very little attention has been paid to quality, and where the cost of a failure or accident was considered low enough to ignore.

  12. KER @ LI says:

    I believe the best approach would be to define levels of vulnerability for each category of embedded application, similar to the avionics DO-178B standard, which defines five levels of hazard for aviation software: the fly-by-wire control system gets level A and the coffee machine firmware gets level E. Security for an oscilloscope connected over Ethernet isn’t as critical as for an IP-based surveillance camera in a chemical facility. This way we would have a common ground for defining the “necessity” of security in embedded software.
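
    A sketch of what such a classification might look like in code; the level letters follow DO-178B, while the enum itself and the example assignments are assumptions that restate the cases mentioned above:

        /* Hypothetical security-assurance tag for networked embedded firmware,
         * modeled on the five DO-178B hazard levels. */
        typedef enum {
            SEC_LEVEL_A,   /* catastrophic if compromised, e.g. a fly-by-wire controller   */
            SEC_LEVEL_B,   /* hazardous                                                    */
            SEC_LEVEL_C,   /* major, e.g. an IP surveillance camera in a chemical facility */
            SEC_LEVEL_D,   /* minor, e.g. an oscilloscope connected over Ethernet          */
            SEC_LEVEL_E    /* no effect, e.g. coffee machine firmware                      */
        } security_level_t;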

  13. R.M. @ LI says:

    Ethernet requires low-level and high-level processing. At the low level it uses interrupts, which are always high priority. At the high level you must process incoming data fast enough to avoid buffer overrun under normal conditions. Your security scheme has to manage the processing at both levels to prevent interference with higher-security processes. An Ethernet storm or assault can overwhelm an unprepared system. Avionics handles this with partition scheduling that limits the CPU used by any one process/subsystem.
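
    One way to implement the kind of bound described here is to give the network code a fixed per-slot budget; a sketch, with the budget value and the driver calls as assumptions:

        /* Cap the Ethernet receive work done in each scheduling slot so that a
         * traffic storm cannot starve higher-criticality processing.
         * eth_rx_pending() and eth_rx_process_one() are hypothetical driver hooks. */
        #define RX_BUDGET_PER_SLOT 16          /* assumed budget; tune to the slot length */

        extern int  eth_rx_pending(void);      /* frames waiting in the receive ring */
        extern void eth_rx_process_one(void);  /* hand one frame to the protocol stack */

        void net_partition_slot(void)          /* called once per scheduling window */
        {
            int processed = 0;

            while (eth_rx_pending() && processed < RX_BUDGET_PER_SLOT) {
                eth_rx_process_one();
                processed++;
            }
            /* Frames beyond the budget stay in the ring (or are dropped by hardware)
             * until the next slot, keeping CPU usage bounded under an Ethernet storm. */
        }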

  14. L.R. @ LI says:

    R., many embedded processors have a mechanism to control interrupt priority programmatically, allowing the Ethernet ISR priority to be set lower than other time-critical ISRs. Modern Ethernet supports flow control at the MAC level, and it can be used to throttle incoming traffic when receive buffers are running low.

    In VxWorks, which I have used for many years, the Ethernet ISR does very little work – it wakes up a task dedicated to processing network input and disables the interrupt until that network task has finished processing all the input. It is advantageous to be able to control the scheduling priority of this network task so that unexpectedly high network traffic does not impact real-time-critical processing. This technology has been available in VxWorks since the mid-80s and has given it a great competitive advantage, as did the fact that it was the first RTOS to be integrated with Ethernet and TCP/IP.
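
    The pattern described above can be sketched with standard VxWorks primitives; the three eth* driver routines are hypothetical placeholders, not the actual VxWorks network driver code:

        /* "Thin ISR, fat task": the ISR only masks the interrupt and wakes a task
         * whose priority the designer controls. Sketch only. */
        #include <vxWorks.h>
        #include <semLib.h>
        #include <taskLib.h>

        /* Hypothetical driver hooks, named for illustration. */
        extern void ethIntDisable(void);   /* mask further RX interrupts      */
        extern void ethIntEnable(void);    /* re-arm the RX interrupt         */
        extern void ethDrainRxRing(void);  /* pass queued frames to the stack */

        static SEM_ID rxSem;

        void ethRxIsr(void)                /* attached to the Ethernet RX interrupt */
        {
            ethIntDisable();
            semGive(rxSem);                /* wake the network task; legal from ISR context */
        }

        static void netRxTask(void)
        {
            for (;;) {
                semTake(rxSem, WAIT_FOREVER);
                ethDrainRxRing();
                ethIntEnable();
            }
        }

        void netRxStart(int priority)      /* choose a priority below the hard real-time tasks */
        {
            rxSem = semBCreate(SEM_Q_PRIORITY, SEM_EMPTY);
            taskSpawn("tNetRx", priority, 0, 8192, (FUNCPTR)netRxTask,
                      0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        }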

  15. OAZ @ TI says:

    GSM subscriptions supporting machine-to-machine (M2M) applications are expected to reach 1 billion by 2015 (NSN estimate). So it’s not only IP connections that are opening up doors …
