Security & Encryption Channel

As designers entrust their embedded systems with a larger scope of data so those systems can make better decisions, the need to protect the integrity of the system grows. This series focuses on the challenges and opportunities available to designers to protect their IP (intellectual property), data integrity, and the identity and privacy of the users of end devices such as personal medical devices.

Does adding an IP address change embedded designs?

Thursday, September 15th, 2011 by Robert Cravotta

A recent analysis from McAfee titled “Caution: Malware Ahead” suggests that the number of IP-connected devices will grow fifty-fold over ten years from last year’s count. The bulk of these devices are expected to be embedded systems. Additionally, connected devices are evolving from a one-way data communication path to a two-way dialog, creating potential new opportunities for hacking embedded systems.

Consider that each Chevy Volt from General Motors has its own IP address. The Volt runs an estimated 10 million lines of code across approximately 100 control units, and the number of test procedures used to develop the vehicle was “streamlined” from more than 600 to about 400. According to Meg Selfe at IBM, they use the IP connection for a few things today, like finding a charging station, but they hope to use it to push more software out to the vehicles in the future.

As IP-connected appliances become more common in the home and on the industrial floor, will the process for developing and verifying embedded systems change – or is the current process sufficient to address the possible security issues of selling and supporting IP-connected systems? Is placing critical and non-critical systems on separate internal networks sufficient in light of the intent of being able to push software updates to both portions of the system? Is the current set of development tools sufficient to enable developers to test and ensure their system’s robustness from malicious attacks? Will new tools surface or will they derive from tools already used in high safety-critical application designs? Does adding an IP address to an embedded system change how we design and test them?

What techniques do you use to protect information?

Thursday, June 16th, 2011 by Robert Cravotta

The question of how to protect information on computers and networks has been receiving a lot of visibility with the public disclosures of more networks being hacked over the past few weeks. The latest victims of hacking in the last week include the United States CIA site, the United States Senate site, and Citibank. Based on conversations with people about my own and their experiences with having account and personal information compromised, I suspect each of us uses a number of techniques that would be useful to share with each other on how to improve the protections on our data.

Two techniques that I have started to adopt in specific situations involve the use of secure tokens and the use of dummy email addresses. The secure token option is not available for every system, and it does add an extra layer of passwords to the login process. The secure token approach that I use generates a new temporary passcode every 30 seconds. Options for generating the temporary passcode include using a hardware key-fob or a software program that runs on your computer or even your cell phone. The secure token approach is far from transparent, and there is some cost in setting up the token.
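
For readers curious about how those 30-second passcodes work: schemes like this are typically time-based one-time passwords (TOTP), which hash a shared secret together with the current 30-second interval and truncate the result to a handful of digits. The sketch below is a minimal illustration of that core idea, assuming OpenSSL is available for the HMAC primitive; the hard-coded secret is a placeholder, and this is not the code of any particular token vendor.

```c
/* Minimal TOTP sketch (in the style of RFC 6238): derive a 6-digit
 * passcode from a shared secret and the current 30-second time step.
 * Assumes OpenSSL for HMAC-SHA1; the hard-coded secret is illustrative.
 * Build (Linux): gcc totp.c -lcrypto
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

static uint32_t totp_code(const uint8_t *key, size_t key_len, uint64_t step)
{
    /* Encode the time-step counter as a big-endian 64-bit value. */
    uint8_t msg[8];
    for (int i = 7; i >= 0; i--) {
        msg[i] = (uint8_t)(step & 0xFF);
        step >>= 8;
    }

    uint8_t digest[20];          /* SHA-1 output size */
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(), key, (int)key_len, msg, sizeof(msg), digest, &digest_len);

    /* Dynamic truncation: the low nibble of the last byte picks an offset. */
    int offset = digest[19] & 0x0F;
    uint32_t code = ((uint32_t)(digest[offset]     & 0x7F) << 24) |
                    ((uint32_t)(digest[offset + 1] & 0xFF) << 16) |
                    ((uint32_t)(digest[offset + 2] & 0xFF) << 8)  |
                     (uint32_t)(digest[offset + 3] & 0xFF);

    return code % 1000000;       /* keep six decimal digits */
}

int main(void)
{
    const uint8_t key[] = "illustrative-shared-secret";  /* placeholder */
    uint64_t step = (uint64_t)time(NULL) / 30;  /* new passcode every 30 s */
    printf("passcode: %06u\n", (unsigned)totp_code(key, sizeof(key) - 1, step));
    return 0;
}
```

The login server runs the same computation with the same secret, so an intercepted passcode is only useful to an attacker for the few seconds it remains valid.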

I have only just started playing with the idea of using temporary or dummy email addresses to provide a level of indirection between my login information and my email account. In this case, my ISP allows me to create up to 500 temporary email IDs that I can manage and destroy at a moment’s notice. I can create a separate email address for each service. What makes these email addresses interesting, though, is that there is no way to actually log into an email account with those names: they are merely aliases for my real account, which remains private. I’m not positive this is better than just using a real email address, but I know I was worried the one time I had a service hacked, because I realized that the email address connected to that service was also used by other services, and that represented a potential single point of failure or security access point to a host of private accounts.

One challenge of the dummy email accounts is keeping track of each one; however, because there is no access point available for any of these addresses, I feel more comfortable using a text file to track which email address goes to which service. On the other hand, I am careful to never place the actual email address that I use to access those dummy addresses in the same place.

Do you have some favorite techniques that you have adopted over the years to protect your data and information? Are they techniques that require relying on an intermediary – such as with the secure tokens, or are they personal and standalone like the dummy email address idea? Are any of your techniques usable in an embedded device, and if so, does the design need to include any special hardware or software resources to include it in the design?

Is the cloud safe enough?

Wednesday, May 18th, 2011 by Robert Cravotta

The cloud and content streaming continue to grow as a connectivity mechanism for delivering applications and services. Netflix now accounts for almost 30 percent of downstream internet traffic during peak times according to Sandvine’s Global Internet Phenomena Report. Microsoft and Amazon are entering into the online storage market. But given Sony’s recent experience with the security of their PlayStation and Online Entertainment services, is the cloud safe enough, especially when new exploits are being uncovered in their network even as they bring those services back online?

When I started working, I was exposed to a subtle but powerful concept that is relevant to deciding if and when the cloud is safe enough to use, and that lesson has stayed with me ever since. One of my first jobs was supporting a computing operations group, and one of their functions was managing the central printing services. Some of the printers they managed were huge impact printers that weighed many hundreds of pounds each. A senior operator explained to me that there was a way to greatly accelerate the wear and tear on these printers merely by sending a print job containing particular, yet completely legal, sequences of text.

This opened my eyes to the fact that even when a device or system is being used “correctly,” unintended consequences can occur unless the proper safeguards are added to the design of that system. This realization has served me well in countless projects because it taught me to focus on mitigating legal but unintended operating scenarios so that these systems were robust.

An example that affected consumers more directly is the exploding cell phone batteries of a few years back. In some of those cases, the way the charge was delivered to the battery weakened it; however, if a smarter regulator were placed between the charge input and the battery, charge scenarios that are known to damage a battery could be isolated by the charge regulator instead of being allowed to pass through blindly. This is a function that adds cost and complexity to the design of the device and, worse yet, does not necessarily justify an increase in the price the buyer is willing to pay. However, the cost of allowing batteries to age prematurely or to explode is significant enough that it is possible to justify the extra cost of a smart charge regulator.
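
To make the idea concrete, here is a minimal sketch of the kind of gating check a smart charge regulator might perform. The limit values and the commented-out driver calls are invented for illustration; real limits come from the battery manufacturer’s specification.

```c
/* Sketch of a smart charge regulator's gate: refuse charge requests
 * that are electrically "legal" but known to damage the cell.
 * All limit values are invented for illustration; real limits come
 * from the battery manufacturer's specification.
 */
#include <stdbool.h>

typedef struct {
    float voltage_v;    /* requested charge voltage */
    float current_a;    /* requested charge current */
    float cell_temp_c;  /* measured cell temperature */
} charge_request_t;

static bool charge_request_is_safe(const charge_request_t *req)
{
    if (req->voltage_v > 4.2f)   return false;  /* overvoltage ages the cell */
    if (req->current_a > 2.0f)   return false;  /* excessive charge rate */
    if (req->cell_temp_c < 0.0f || req->cell_temp_c > 45.0f)
        return false;                           /* unsafe charging temperature */
    return true;
}

/* The regulator gates every request instead of passing it through blindly. */
void regulator_step(const charge_request_t *req)
{
    if (charge_request_is_safe(req)) {
        /* apply_charge(req);            -- hypothetical driver call */
    } else {
        /* isolate_battery_and_log(req); -- hypothetical driver call */
    }
}
```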

I question whether the cloud infrastructure, which is significantly more complicated than a mere stand-alone device or function, is robust enough to act as a central access point, because it currently represents a single point of failure: one flaw, exploit, or weakness in its implementation can have huge ramifications. Do you think the cloud is safe enough to bet your product and/or company on?

Is embedded security necessary?

Wednesday, February 16th, 2011 by Robert Cravotta

I recently had an unpleasant experience related to online security issues. Somehow my account information for a large online game had been compromised. The speed with which the automated systems detected that the account had been hacked and locked it down is a testament to how many compromised accounts this particular service provider handles on a daily basis. Likewise, the account status was restored with equally impressive turn-around time.

What impacted me the most about this experience was realizing that there is obviously at least one way that malicious entities can compromise a password-protected system despite significant precautions against such a thing occurring. Keeping the account name and password secret, employing software to detect and protect against viruses, Trojan horses, and key loggers, and ensuring that data between my computer and the service provider was encrypted were not enough to keep the account safe.

The service provider’s efficiency and matter-of-fact approach to handling this situation suggests there are known ways to circumvent these security measures. The service provider offers, and suggests using, an additional layer of security: single-use passwords generated by a device they sell for a few bucks and ship for free.

As more embedded systems support online connectivity, the opportunity for someone to break into those systems increases. The motivations for breaking into these systems are myriad. Sometimes, as in the case of my hacked account, there is the opportunity for financial gain. In other cases, there is notoriety for demonstrating that a system has a vulnerability. In yet other cases, there may be the desire to cause physical harm, and it is this type of motivation that prompts this week’s question.

When I first started working with computers in a professional manner, I found out there were ways to damage equipment through software. The most surprising example involved making a large line printer destroy itself by sending it a particular sequence of characters that caused all of the carriage hammers to repeatedly strike the ribbon at the same time. By spacing the sequence of characters with blank lines, a print job could actually make a printer that weighed several hundred pounds start rocking back and forth. If the printer was permitted to continue this behavior, mechanical parts could be severely damaged.
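
The firmware-side safeguard for this class of problem is to treat mechanically stressful but syntactically legal input as a fault to be contained. The sketch below illustrates one generic way to do that with a simple rate limit; the window and threshold values are invented for illustration.

```c
/* Generic safeguard sketch: throttle command patterns that would drive
 * a mechanism at a damaging rate. The window and threshold values are
 * invented for illustration; a real design derives them from the
 * mechanical limits of the equipment.
 */
#include <stdbool.h>
#include <stdint.h>

#define WINDOW_MS        1000u  /* observation window */
#define MAX_FULL_STRIKES 20u    /* max all-hammer strikes per window */

static uint32_t window_start_ms;
static uint32_t full_strikes_in_window;

/* Called once per print line; now_ms is a monotonic millisecond clock. */
bool line_is_permitted(bool all_hammers_fire, uint32_t now_ms)
{
    if (now_ms - window_start_ms >= WINDOW_MS) {
        window_start_ms = now_ms;   /* start a new observation window */
        full_strikes_in_window = 0;
    }
    if (all_hammers_fire && ++full_strikes_in_window > MAX_FULL_STRIKES)
        return false;  /* insert a cooling delay or reject the job */
    return true;
}
```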

It is theoretically possible to do analogous things to industrial equipment, and with more systems connected to remote or public networks, the opportunities for such mischief are real. Set-top boxes attached to televisions are connecting to the network, offering a path for mischief if the designers of the set-top box and/or television unintentionally left an opening in the system for someone to exploit.

Is considering the security implications in an embedded design needed? Where is the line between when implementing embedded security is important versus when it is a waste of resources? Are the criteria for when embedded security is needed based on the end device or on the system that such device operates within? Who should be responsible for making that call?

‘DRM’ For Systems: Protecting Data and Engineering Intellectual Property

Friday, November 19th, 2010 by Max Baron

Freescale Semiconductor has just launched a low cost chip that can be used to protect network-connected low power systems from unauthorized access to system-internal resources. Freescale’s target: a chip that can secure the growing number of network endpoints.

When it comes to e-books, images, music, TV episodes, and movies, the authors’ and producers’ rights are protected by encryption. Encryption makes it impossible to easily and legally take literature or multimedia on trips or vacations away from the device to which it was originally attached. Further, it makes it impossible to create important backups, even though optical, magnetic, and flash memory media can lose part of their content more easily than books or film.

Priceless art in its different forms must be protected. If, however, we separate the unique talent and genius from the money and time invested, we find that DRM (Digital Rights Management) protects investments ranging from a few tens of thousands of dollars to the two to three hundred million dollars sometimes spent per movie (exception: the cost of Avatar was estimated at $500M).

Yet nobody protects the brainchildren of system architects and software and hardware engineers, or the investments and hard work that have produced the very systems that made feasible the art and the level of civilization we enjoy today. They are not protected by a DRM in which “D” stands for “Designers.” Separating priceless engineering genius and talent from investment, we find similar sums of money invested in hardware and software, aside from the value of the sensitive data in all its forms that can be stolen from unprotected systems.

Freescale Semiconductor’s recent introduction adds two new members to the QorIQ product family (QorIQ is pronounced “coreIQ”): the company’s Trust Architecture-equipped QorIQ P1010 and the less fully featured QorIQ P1014. The QorIQ P1010 is designed to protect factory equipment, digital video recorders, low cost SOHO routers, network-attached storage, and other applications that would otherwise present vulnerable network endpoints to copiers of system HW and SW intellectual property, data thieves, and malefactors.

It’s difficult to estimate the number of systems that have been penetrated, analyzed, and/or cloned by competitors, or modified to offer easy access to data thieves, but some indirect losses that have been published can be used to understand the problem.

In its December 2008 issue, WIRED noted that, according to the FBI, hackers and corrupt insiders have stolen more than 140 million records from US banks and other companies since 2005, accounting for a loss of $67 billion each year. The loss was owed to several factors. In a publication dated January 19, 2006, CNET discussed the results of FBI research involving 2,066 US organizations, of which 1,324 had suffered losses from computer-security problems over a 12-month period. Respondents spent nearly $12 million to deal with virus-type incidents, $3.2 million on theft, $2.8 million on financial fraud, and $2.7 million on network intrusions. The last number mostly represents system end-user loss, since it’s difficult to estimate the annual damage to the system and software companies that created the equipment.

Freescale Semiconductor’s new chip offers two-phase secure access to the internals of network-connected low cost systems. The first phase accepts passwords and checks the authorization of the requesting agent, be it a person or a machine. The second phase provides access to the system’s HW and SW internals if the correct passwords have been submitted.
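
Freescale has not published the details of this protocol, so the sketch below only illustrates the general two-phase pattern: authenticate first, unlock second. The function names are invented, and the constant-time comparison is a common defensive practice standing in for whatever the chip actually implements.

```c
/* Abstract sketch of a two-phase access gate. Everything here,
 * including the function names, is invented for illustration; it is
 * not Freescale's documented protocol.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Phase 1: authenticate the requesting agent, be it person or machine.
 * Compare credentials in constant time so response timing cannot leak
 * how many leading bytes matched. */
static bool credentials_match(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* Phase 2: unlock access to HW and SW internals only after phase 1. */
bool request_internal_access(const uint8_t *submitted,
                             const uint8_t *stored, size_t n)
{
    if (!credentials_match(submitted, stored, n))
        return false;  /* phase 1 failed: internals stay locked */
    /* unlock_debug_and_memory_access();  -- hypothetical unlock hook */
    return true;
}
```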

Fabricated in 45nm SOI, Freescale Semiconductor’s QorIQ P1010, shown in the figure, is configured around an e500 core, a next-generation core that is downward code-compatible with the e300 core that populates the company’s PowerQUICC II Pro communications processors. The QorIQ P1010’s Power Architecture e500 core is designed to be clocked at up to 800MHz and is estimated by Freescale Semiconductor to consume less than 1.1W in some applications.

The chip’s single e500 core configuration follows the same concept employed in Freescale Semiconductor’s higher performance QorIQ chip family where protected operating system and applications are executed by multiple e500 cores at higher frequencies. The common configuration has the family’s processing cores and supporting cache levels surrounded by application-specific high bandwidth peripherals that include communication with the network, system-local resources and system-external peripherals.

The QorIQ P1010 will encounter external peripherals such as cameras in digital video recorders (DVRs) that accept analog video streams from surveillance cameras. The DVRs may employ local storage for locally digitized and encoded video and/or use a network to access higher capacity storage and additional processing.

FlexCAN interfaces are the most recent application-specific on-chip peripherals to appear in the QorIQ P1010; the chip’s architects and marketing experts are probably responding to requests from potential customers building factory equipment. Aside from e500 cores and peripherals, the denominator common to most chips in the family is the little documented Trust Architecture.

An approximate idea of the Trust Architecture’s internals and potential can be gleaned from a variety of documents and presentations by Freescale Semiconductor, from ARM’s Cortex-A series of cores, which employ a similar approach under that company’s TrustZone brand, and from the microcontroller technologies used today to protect smart cards.

The basic components of a secure system must defend the system whether it’s connected to the network or not, under power or turned off, monitored externally or probed for activity.

In the first phase of the QorIQ P1010’s configuration, we should expect to find encrypt-decrypt accelerators used for secure communication with off-chip resources and with on-chip resident ROM. The content of these resources should be inaccessible to anyone who does not have the decryption keys.

Off-chip resources could include system-internal resources such as SDRAM, flash memory, optical drives, and hard drives. Examples of system-external resources are desktops, network-attached storage, and local or remote servers.

The on-chip resident ROM probably contains the encrypted security monitor and boot sequence. A set of security fuses may be used to provide the necessary encryption-decryption keys. Different keys defined for different systems would tend to limit the damage done by malefactors, but in case of a failure they could render encrypted data useless unless the same keys could be used in a duplicate system. Keys of 256 bits or more would make breaking into the system very expensive in both time and money.
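
As a generic illustration of how fuse-anchored protection can work, and explicitly not Freescale’s documented boot flow, the sketch below verifies a boot image against a reference hash burned into one-time-programmable fuses. It assumes OpenSSL for SHA-256, and the fuse accessor is a hypothetical placeholder.

```c
/* Generic sketch of fuse-anchored boot verification: hash the candidate
 * boot image and compare it with a value burned into one-time-
 * programmable fuses at manufacture. Illustrates the general technique
 * only; this is not Freescale's documented flow. Assumes OpenSSL.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

/* Hypothetical accessor for the fused reference hash. */
extern void read_fused_hash(uint8_t out[SHA256_DIGEST_LENGTH]);

bool boot_image_is_trusted(const uint8_t *image, size_t len)
{
    uint8_t computed[SHA256_DIGEST_LENGTH];
    uint8_t fused[SHA256_DIGEST_LENGTH];

    SHA256(image, len, computed);  /* hash the candidate image */
    read_fused_hash(fused);        /* reference burned at manufacture */

    /* Any mismatch means the image was altered or built for another key. */
    return memcmp(computed, fused, sizeof(computed)) == 0;
}
```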

The block diagram of the QorIQ P1010 shows the peripherals incorporated on-chip but, understandably, offers less information about the Trust Architecture protecting the SoC. The QorIQ P1014 should cost less and should be easier to export, since it lacks the Trust Architecture and offers a more modest set of peripherals. (Courtesy of Freescale Semiconductor)

Initial system bring-up, system debug, and upgrade processes require access to chip internals. According to Freescale Semiconductor, the chip’s customers will be provided with a choice among three modes of protected access via JTAG: open access until the customer locks it; access that locks, without notification, after a period that allows debug; and JTAG delivered permanently closed, with no access. Customers also have the option to access internals via the chip’s implementation of the Trust Architecture.

Tamper detection is one of the most important components in keeping a secure chip safe from unauthorized analysis. Smart cards detect tampering in various ways: they monitor voltage, die temperature, light, and bus probing, and some implementations also employ a metal screen to protect the die from probing. We should assume that the system engineer can use the QorIQ P1010 to monitor external sensors through general purpose inputs/outputs that protect the system against tampering, to ensure that the system behaves as intended by the original manufacturer.
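
The sketch below shows the kind of GPIO-based tamper response a system engineer might build around such a part; the sensor inputs and response hooks are hypothetical placeholders, not part of any documented QorIQ API.

```c
/* Generic tamper-response sketch: poll external tamper sensors through
 * GPIOs and destroy key material on any trigger. The sensor and
 * response functions are hypothetical placeholders.
 */
#include <stdbool.h>

extern bool gpio_enclosure_opened(void);      /* hypothetical sensors */
extern bool gpio_voltage_out_of_range(void);
extern bool gpio_temp_out_of_range(void);
extern void zeroize_key_material(void);       /* hypothetical responses */
extern void enter_locked_state(void);

/* Called periodically from the system's monitoring loop. */
void tamper_monitor_poll(void)
{
    if (gpio_enclosure_opened() ||
        gpio_voltage_out_of_range() ||
        gpio_temp_out_of_range()) {
        zeroize_key_material();  /* keys are gone before analysis begins */
        enter_locked_state();    /* refuse to run until re-provisioned */
    }
}
```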

The QorIQ P1010’s usefulness depends on the system configuration using it. A “uni-chip” system will be protected if it incorporates hardwired peripherals such as data converters and compressors but employs only the QorIQ P1010 for all of its programmable processing functions. According to Freescale Semiconductor’s experts, a system’s data communications with other systems on the network will be protected if the other systems also employ the QorIQ P1010 or other members of the QorIQ chip family. Simple systems of this kind can use the QorIQ P1010 to protect valuable system software and stored data, since, except in proprietary custom designs, the hardware may be easy to duplicate.

Note that systems employing the QorIQ P1010 plus additional programmable SoCs are more vulnerable.

Freescale has not yet announced the price of the QorIQ P1010. To gain market share, the added cost of the QorIQ P1010 compared with an SoC lacking protection should be less than 2%-3% of the system cost; otherwise, competing equipment lacking protection will sell at lower prices. Freescale Semiconductor has introduced a ready-to-use chip that can become important to end users and system designers. Now all we need to see is pricing.

Robust Design: Quality vs. Security

Monday, June 14th, 2010 by Robert Cravotta

I had a conversation recently with Nat Hillary, a field application engineer at LDRA Technologies, about examples of software fault tolerance, quality, and security. Our conversation identified many questions and paths that I would like to research further. One such path relates to how software systems that are not fault tolerant may present vulnerabilities that attackers can use to compromise the system. A system’s vulnerability and resistance to software security exploits is generally a specification, design, and implementation quality issue. However, just because secure systems require high quality does not mean that high quality systems are also secure systems, because measuring a system’s quality and measuring its security focus on different metrics.


Determining a system’s quality involves measuring and ensuring that each component, separately and together, fits or behaves within some specified range of tolerance. The focus is on whether the system can perform its function within acceptable limits rather than on the complete elimination of all variability. The tightness or looseness of a component’s permitted tolerance balances the cost and difficulty of manufacturing identical components against the cumulative impact that variability among the components has on the system’s ability to perform its intended function. For example, many software systems ship with some number of known minor implementation defects (bugs) because the remaining bugs do not prevent the system from operating within tolerances during the expected and likely use scenarios. The software in this case is identical from unit to unit, but the variability in the other components in the system can introduce differences in the system’s behavior. I will talk about an exhibit at this year’s ESC San Jose that demonstrated this variability in a future post.


In contrast, a system’s security depends on protecting its vulnerabilities while operating under extraordinary conditions. A single vulnerability, under the proper extraordinary conditions, can compromise the system’s proper operation. However, similar to determining a system’s quality, a system’s security is not completely dependent on a perfect implementation. If the system can isolate and contain its vulnerabilities, it can still be good enough to operate in the real world. The 2008 report “Enhancing the Development Life Cycle to Produce Secure Software” identifies that secure software exhibits:


1. Dependability (Correct and Predictable Execution): Justifiable confidence can be attained that software, when executed, functions only as intended;

2. Trustworthiness: No exploitable vulnerabilities or malicious logic exist in the software, either intentionally or unintentionally inserted;

3. Resilience (and Survivability): If compromised, damage to the software will be minimized, and it will recover quickly to an acceptable level of operating capacity.


An example of a software system vulnerability that has a fault tolerant solution is the buffer overflow, a technique that exploits functions that do not perform proper bounds checking. The Computer Security Technology Planning Study first publicly documented the technique in 1972. Static analysis tools can help developers avoid this type of vulnerability by identifying array overflows and underflows, as well as places where signed and unsigned data types are improperly mixed. Using this fault tolerant approach can allow a software system to exhibit the three secure software properties listed above.
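
A minimal C illustration of the vulnerability and its bounds-checked repair appears below; the packet-handler framing is invented for the example.

```c
/* The classic buffer overflow and its bounds-checked repair. The unsafe
 * version trusts a caller-supplied length; the tolerant version clamps
 * every copy to the destination's capacity. The packet framing is
 * invented for illustration.
 */
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 64

/* Vulnerable: a len larger than BUF_SIZE overwrites adjacent memory,
 * which is exactly the behavior overflow exploits rely on. */
void handle_packet_unsafe(const char *payload, size_t len)
{
    char buf[BUF_SIZE];
    memcpy(buf, payload, len);  /* no bounds check */
    (void)buf;
}

/* Fault tolerant: reject anything that does not fit. */
int handle_packet_safe(const char *payload, size_t len)
{
    char buf[BUF_SIZE];
    if (len > sizeof(buf))
        return -1;              /* refuse malformed input */
    memcpy(buf, payload, len);
    (void)buf;
    return 0;
}
```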

[Editor's Note: This was originally posted on the Embedded Master]

Robust Design: Fault Tolerance – Nature vs. Malicious

Tuesday, June 1st, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

For many applications, the primary focus of robust design principles is on making the design resilient to rare or unexpected real world phenomena. Embedded designers often employ filters to help mitigate the uncertainty that different types of signal noise can cause. They might use redundant components to mitigate or compensate for specific types of periodic failures within various subsystems. However, as our end devices become ever more connected to each other, there is another source of failure that both drives and benefits from a strong set of robust design principles: the malicious attack.

On most of the systems I worked on as an embedded developer, we did not need to spend much energy addressing malicious attacks on the electronics and software within the system, because the systems usually included a physical barrier that our electronics control system could safely hide behind. That all changed when we started looking at supporting remote access and control into our embedded subsystems. The project that drove this concept home for me was a Space Shuttle payload that would repeatedly fire its engines to maneuver around the Shuttle. No previous payload had ever been permitted to fire its engines for multiple maneuvers around the Shuttle; the only engine firing earlier payloads performed was to move away from the Shuttle and into their target orbital position.

The control system for this payload was multiple-fault tolerant, and we often joked among ourselves that the payload would be so safe that it would never be able to fire its own engines to perform its tasks because the fault tolerant mechanisms were so complicated. This was even before we knew we had to support one additional type of fault tolerance: ensuring that none of the maneuvering commands came from a malicious source. We had assumed that because we were working in orbital space and the commands would be coming from a Shuttle transmitter, we were safe from malicious attacks. The NASA engineers were concerned that a malicious ground-based command could send the payload into the Shuttle. The authentication mechanism we adopted was crude and clunky by today’s encryption standards. Unfortunately, after more than two years of working on that payload, the program was defunded and we never actually flew the payload around the Shuttle.

Tying this back to embedded systems on the ground, malicious attacks often take advantage of the lack of fault tolerance and security in a system’s hardware and software design. By deliberately injecting fault conditions into a system or a communication stream, an attacker with sufficient access to, and knowledge of, how the embedded system operates can create physical breaches that provide access to the control electronics or expose vulnerabilities in the software through techniques such as forcing a buffer overflow.
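
A standard modern counterpart to the crude authentication we used on that payload is to attach a keyed message authentication code to every command frame and verify it before acting. The sketch below is a minimal illustration assuming OpenSSL’s HMAC; the frame layout is invented, and a real design would also fold a sequence number into the MAC to defeat replayed commands.

```c
/* Sketch of command-stream authentication with HMAC-SHA256: a command
 * executes only if its tag verifies under a shared secret key. The
 * frame layout and key handling are invented for illustration; assumes
 * OpenSSL for the HMAC primitive.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

#define TAG_LEN 32  /* HMAC-SHA256 output size */

bool command_is_authentic(const uint8_t *cmd, size_t cmd_len,
                          const uint8_t tag[TAG_LEN],
                          const uint8_t *key, size_t key_len)
{
    uint8_t expected[TAG_LEN];
    unsigned int out_len = 0;

    HMAC(EVP_sha256(), key, (int)key_len, cmd, cmd_len, expected, &out_len);

    /* Constant-time comparison so timing cannot leak the expected tag. */
    uint8_t diff = 0;
    for (size_t i = 0; i < TAG_LEN; i++)
        diff |= (uint8_t)(expected[i] ^ tag[i]);
    return diff == 0;
}
```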

Adjusting your design to mitigate the consequences of malicious attacks can significantly change how you approach analyzing, building, and testing your system. With this in mind, this series will include topics that overlap with a series on security issues, but with a mind towards robust design principles and fault tolerance. If you have experience designing for tolerance to malicious attacks in embedded systems, not only with regard to the electronics and software but also from a mechanical perspective, and you would like to contribute your knowledge and experience to this series, please contact me at Embedded Insights.