Articles by Robert Cravotta

As a former Technical Editor covering Embedded Processing at EDN, Robert has been following and commenting on the embedded processing space since 2001 (see article index). His expertise includes software development and system design using microprocessors, microcontrollers, digital signal processors (DSPs), multiprocessor architectures, processor fabrics, coprocessors, and accelerators, plus embedded cores in FPGAs, SOCs, and ASICs. Robert's embedded engineering background includes 16 years as a Member of the Technical Staff at Boeing and Rockwell International working on path-finding avionics, power and laser control systems, autonomous vehicles, and vision sensing systems.

Are you using (or planning to use) Java for programming embedded systems?

Wednesday, October 6th, 2010 by Robert Cravotta

Java is a general-purpose, concurrent, class-based, object-oriented programming language designed to have as few implementation dependencies on the target processor as possible. It targets application developers and provides an abstract platform that lets them write their software once and run it on many different target processors, and it is used for many mobile device and web applications. But is Java appropriate for the trade-offs that embedded developers need to make to build true embedded systems?

Developing application software is different from developing software for embedded systems. I do not think that a system qualifies as an embedded system just because it is small or resource constrained. Rather, embedded systems are small or resource constrained because, being generally invisible to the end user, they do not lend themselves to extracting a premium from that user.

While portions of the application code will operate invisibly to the end user, application software has a real interactive component with the end user, and the strength or weakness of that interaction will affect the success or failure of that application code. In contrast, almost all of the software in an embedded system operates invisibly to, and largely autonomously from, the end user.

To clarify, a mobile device, such as a smart phone or a tablet computer, contains both application- and embedded-level software. The application software is what the end user uses and might select and load onto the application processor. For example, the operating system that the target processor supports is often a key consideration, even indirectly from a branding perspective: Apple devices are positioned as “not Microsoft” or “not Windows” devices, even though there are significant similarities in the embedded components of these devices.

The embedded portion of these systems comprises those parts whose implementation the user has no need to understand. Examples of the embedded parts of these devices include the wireless network controller and the power management controller. There are many embedded systems even in a desktop computer: the hard disk controller, the network controller, and the keyboard and mouse controller are a few examples, and the cooling system and the health checking modules are others. These are items whose implementation does not drive a user’s decision to select that end system.

At this point, I am not aware of embedded systems as I have identified them here being developed with Java code. Is Embedded Java a marketing meme or is it real? Are you using (or planning to use) Java for your embedded designs? If so, what types of designs are you doing this for?

When and how much should embedded developers implement robust defenses against malicious software in their designs?

Wednesday, September 29th, 2010 by Robert Cravotta

It is easy to believe that malware, malicious software designed to secretly access a system without the owner’s informed consent, only affects computers. However, as more devices support connection with other devices through a USB port, malware is able to hide and launch itself from all types of devices. For example, earlier this year, Olympus shipped over 1700 units of their Stylus Tough 6010 digital compact camera with an unexpected bonus – the camera’s internal memory was carrying an autorun worm that could infect a user’s Windows computer when they connected the camera to it.

Another example of malware using a USB device as a transport and infection mechanism involves USB sticks that IBM handed out at this year’s AusCERT (Australian Computer Emergency Response Team) conference on the Gold Coast, Queensland. In this case, the USB sticks carried two pieces of malware and were handed out at a conference that focuses on information security.

Researchers at Stanford’s Computer Security Lab have found that many low-cost devices, such as webcams, printers, and other devices that ship with embedded web interfaces, are not designed to withstand malware attacks despite interfacing with the most sensitive parts of a computer network. According to the researchers, NAS (network-attached storage) devices pose the highest risk because they are susceptible to all five of the attack classes the researchers considered in their study.

There are other examples of companies selling infected end products to users. My key concern is that so many devices are sold at such low margins already that it is unlikely the designs include much attention to preventing malicious software from planting itself in those devices.

The question is, how much energy and attention should embedded designers give to hardening their designs against malicious software – or against becoming an unintended transport for malicious software? I’m not sure an isolated incident that involves using an infected test computer (such as the Olympus example), which infects the end products during the test, is a basis for changing the design flow other than to ensure the test equipment is not itself infected. How would you, or do you, determine how much effort to put into your embedded design to prevent malicious software from using your system in an unintended fashion?

What are your criteria for when to use fixed-point versus floating-point arithmetic in your embedded design?

Wednesday, September 22nd, 2010 by Robert Cravotta

The trade-offs between using fixed-point and floating-point arithmetic in embedded designs continue to evolve. One set of trade-offs involves system cost, processing performance, and ease of use. Implementing fixed-point arithmetic is more complicated than using floating-point arithmetic on a processor with a floating-point unit. The extra complexity of determining scaling factors for fixed-point arithmetic and accommodating precision loss and overflow has historically been offset by allowing the system to run on a cheaper processor and, depending on the application, at lower energy consumption and with more accuracy than on a processor with an integrated floating-point unit.
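The scaling-factor bookkeeping described above can be sketched in C. This is a minimal illustration assuming the common Q15 convention (a 16-bit signed value n represents the real number n / 2^15); the rounding and saturation choices shown are one typical scheme, not tied to any particular processor or library.

```c
#include <stdint.h>

/* Q15: a 16-bit signed integer n represents the real value n / 2^15,
 * so the representable range is [-1.0, +1.0). */
typedef int16_t q15_t;

/* Multiply two Q15 values with rounding and saturation. The 32-bit
 * intermediate holds the exact Q30 product before rescaling. */
static q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;  /* exact product in Q30      */
    p += 1 << 14;                         /* round to nearest          */
    p >>= 15;                             /* rescale back to Q15       */
    if (p > 32767)                        /* only (-1.0 * -1.0) can    */
        p = 32767;                        /* overflow; saturate it     */
    return (q15_t)p;
}
```

Note how even a single multiply forces decisions (intermediate width, rounding, saturation) that a floating-point unit handles in hardware; representing a value such as 3.5 would additionally require choosing a different scaling (for example, Q3.12), which is exactly the manual analysis the fixed-point approach imposes.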

However, the cost of on-chip floating-point units has been dropping for years and crossed a cost threshold over the last few years, as signaled by the growing number of processors that include an integrated floating-point unit (more than 20% of the processors listed in the Embedded Processing Directory device tables now include or support floating-point units). In conversations, processor vendors have shared with me that they have experienced more success with new floating-point devices than they anticipated, and this larger-than-expected success has spurred them to plan even more devices with floating-point support.

Please share your decision criteria for when to use fixed-point and/or floating-point arithmetic in your embedded designs. What are the largest drivers for your decision? When does the application volume make the cost difference between two processors drive the decision? Does the energy consumption between the two implementations ever drive a decision one way or the other? Do you use floating-point devices to help you get to market quickly and then migrate to a fixed-point implementation as you ramp up your volumes? As you can see, the characteristics of your application can drive the decision in many ways, so please share what you can about the applications with which you have experience performing this type of trade-off.

Alternative touch interfaces – sensor fusion

Tuesday, September 21st, 2010 by Robert Cravotta

While trying to uncover and highlight different technologies that embedded developers can tap into to create innovative touch interfaces, Andrew commented on e-field technology and pointed to Freescale’s sensors. While exploring proximity sensing for touch applications, I realized that accelerometers represent yet another alternative sensing technology (versus capacitive touch) that can impact how a user can interact with a device. The most obvious examples of this are devices, such as a growing number of smart phones and tablets, which are able to detect their orientation to the ground and rotate the information they are displaying. This type of sensitivity enables interface developers to consider broader gestures that involve manipulating the end device, such as shaking it, to indicate some type of change in context.
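The orientation-detection behavior described above can be sketched as a simple C routine. This is a hypothetical illustration: the axis names, sign conventions, and raw-count units are assumptions for the sketch, not taken from any specific accelerometer's datasheet.

```c
#include <stdint.h>
#include <stdlib.h>

typedef enum {
    PORTRAIT,
    PORTRAIT_INVERTED,
    LANDSCAPE,
    LANDSCAPE_INVERTED
} orientation_t;

/* Pick a display orientation from the two in-plane gravity components
 * reported by an accelerometer. Whichever axis carries the larger share
 * of gravity wins; its sign selects between the two orientations along
 * that axis. */
orientation_t orientation_from_gravity(int16_t ax, int16_t ay)
{
    if (abs(ax) > abs(ay))
        return (ax > 0) ? LANDSCAPE : LANDSCAPE_INVERTED;
    return (ay > 0) ? PORTRAIT : PORTRAIT_INVERTED;
}
```

A production design would add hysteresis and debouncing so the display does not flicker between orientations when the device is held near a 45-degree angle; richer gestures such as shake detection would similarly filter the raw samples over time rather than react to a single reading.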

Wacom’s Bamboo Touch graphic tablet for consumers presents another example of e-field proximity sensing combined with capacitive touch sensing. In this case, the user can use the sensing surface with an e-field optimized stylus or they can use their finger directly on the surface. The tablet controller detects which type of sensing it should use without requiring the user to explicitly switch between the two sensing technologies. This type of combined technology is finding its way into tablet computers.

I predict the market will see more examples of end devices that seamlessly combine different types of sensing technologies in the same interface space. The different sensing modules working together will enable the device to infer more about the user’s intention, which will in turn, enable the device to better learn and adapt to each user’s interface preferences. To accomplish this, devices will need even more “invisible” processing and database capabilities that allow these devices to be smarter than previous devices.

While not quite ready for production designs, the recent machine touch demonstrations from the Berkeley and Stanford research teams suggest that future devices might even be able to infer user intent from how the user is holding the device – including how firmly or lightly they are gripping or pressing on it. These demonstrations suggest that we will be able to make machines that can discern differences in pressure comparably to humans. What is not clear is whether either of these technologies will be able to detect surface textures.

By combining, or fusing, different sensing technologies, along with in-device databases, devices may be able to start recognizing real-world objects – similar to the Microsoft Surface. It is coming within our grasp for devices to start recognizing each other without requiring explicit electronic data streams flowing between those devices.

Do you know of other sensing technologies that developers can combine together to enable smarter devices that learn how their user communicates rather than requiring the user to learn how to communicate with the device?

What are good examples of how innovative platforms are supporting incremental migration?

Wednesday, September 15th, 2010 by Robert Cravotta

Companies are regularly offering new and innovative processing platforms and functions in their software development tools. One of the biggest challenges I see for bringing new and innovative capabilities to market is supporting incremental adoption and migration. Sometimes the innovative approach for solving a set of problems requires a completely different way of attacking the problem and, as a result, it requires developers to rebuild their entire system from scratch. Requiring developers to discard their legacy development and tools in order to leverage the innovative offering is a red flag that the offering may have trouble gaining acceptance by the engineering community.

I have shared this concern with several companies that brought out multicore or many-core systems over the past few years because their value proposition did not support incremental migration. They required the developer to completely reorganize their system software from scratch in order to fully take advantage of their innovative architecture. In many cases, this level of reorganizing a design represents an extremely high level of risk compared to the potential benefit of the new approach. If a development team could choose to migrate a smaller portion of their legacy design to the new approach and successfully release it in the next version of the product, they could build their experience with the new approach and gradually adopt the “better” approach without taking a risk that was larger than their comfort level.

Based on stories from several software development companies, software development tools are another area where new capabilities can benefit from supporting incremental adoption. In this case, the new capabilities do not require a redesign, but they do require the developer to accept an unknown learning curve just to evaluate the new feature. As the common storyline goes, the software tool vendor has found that describing the new capability, describing how to use it, and getting the developer to understand how that new capability benefits them is not as straightforward as they would like. As a result, some of the newest features they have added to their products go largely unnoticed and unused by their user base. Their frustration is obvious when they share these stories – especially when they talk about the circumstances under which developers do adopt the new features. In many cases, a developer calls with a problem, and the software tool vendor explains that the tool already includes a capability that will help solve it. Only then does the developer try out the feature and adopt it in future projects – but they had to experience the type of problem the feature was designed to address before they even recognized that it was already part of the tool suite.

Rather than harp on the failures of accommodating incremental migration or easing the adoption learning curve, I would like to uncover examples and ideas of how new innovations can support incremental adoption. For example, innovative multicore structures would probably better support incremental migration if they accommodated interprocessor communication mechanisms with cores that exist outside the multicore fabric rather than leaving it to the developer to build such a mechanism from scratch.

Texas Instruments’ recent 0.9V MSP430L092 microcontroller announcement provides two examples. The microcontroller itself is capable of operating the entire digital and analog logic chain from a 0.9V power supply without needing an on-board boost converter. To support legacy tool sets, the available flash emulation tools provide a mechanism to transparently translate the power and signals so that the 0.9V device can be debugged with legacy tools.

The L092 device provides the other example: it includes a new type of analog peripheral block that TI calls the A-POOL (Analog Functions Pool). This analog block combines five analog functions into a common block that shares transistors between the different functions. The supported functions are an ADC (analog-to-digital converter), a DAC (digital-to-analog converter), a comparator, a temperature sensor, and an SVS (system voltage supervisor). The analog block includes a microcode engine that supports up to a 16-statement program to autonomously activate and switch between the various peripheral functions without involving the main processor core. The company tells me that in addition to directly programming the microcode stack, the IAR and Code Composer development tools understand the A-POOL and can compile C code into the appropriate microcode for the A-POOL.

If we can develop an industry awareness of ways to support incremental adoption and migration, we might help some really good ideas get off the ground faster than they would otherwise. Do you have any ideas for how to enable new features to support incremental adoption?

Giving machines a fine sense of touch

Tuesday, September 14th, 2010 by Robert Cravotta

Two articles were published online on the same day (September 12, 2010) in Nature Materials that describe the efforts of two research teams at UC Berkeley and Stanford University that have each developed and demonstrated a different approach for building artificial skin that can sense very light touches. Both systems have reached a pressure sensitivity that is comparable to what a human relies on to perform everyday tasks. The sensitivity of these systems can detect pressure changes that are less than a kilopascal; this is an improvement over earlier approaches that could only detect pressures of tens of kilopascals.

The Berkeley approach, dubbed “e-skin”, uses germanium/silicon nanowire “hairs” that are grown on a cylindrical drum and then rolled onto a sticky polyimide film substrate, though the substrate can also be made from plastics, paper, or glass. The nanowires are deposited onto the substrate to form an orderly structure. The demonstrated e-skin is a 7×7-cm surface containing an 18×19 pixel matrix; each pixel contains a transistor made of hundreds of the nanowires. A pressure-sensitive rubber was integrated on top of the matrix to support sensing. The flexible matrix is able to operate from less than a 5V power supply, and it has continued operating after being subjected to more than 2,000 bending cycles.

In contrast, the Stanford approach sandwiches a thin film of rubber molded into a grid of tiny pyramids, packing up to 25 million pyramids per square centimeter, between two parallel electrodes. The pyramid grid makes the rubber behave like an ideal spring, supporting compression and rebound fast enough to distinguish between multiple touches that follow each other in quick succession. Pressure on the sensor compresses the rubber film and changes the amount of electrical charge it can store, which enables the controller to detect the change in the sensor. According to the team, the sensor can detect the pressure exerted by a 20-mg bluebottle fly carcass placed on the sensor. The Stanford team has been able to manufacture a sheet as large as 7×7 cm, similar to the Berkeley e-skin.

I am excited by these two recent developments in machine sensing. The uses for this type of touch sensing are endless, spanning industrial, medical, and commercial applications. A question comes to mind – these are both sheets (arrays) of multiple sensing points – how similar will the detection and recognition algorithms be to the touch interface and vision algorithms being developed today? Or will it require a completely different approach and thought process to interpret this type of touch sensing?

What is the balance between co-locating and telecommuting for an engineering team?

Wednesday, September 8th, 2010 by Robert Cravotta

My wife’s current job position has me wondering about how to find the balance between going to the office and working remotely. Her engineering team is split between two locations that are three thousand miles apart. She has been doing a lot of flying to the second location because being physically present is often more effective than working strictly through email and phones.

In fact, after examining my own career, I realized that more than half of my time as a member of the technical staff was spent coordinating between two or more locations. My first “real” engineering job after completing my bachelor degree involved working at two different locations for the same company that were 70 miles apart. I later transferred to an embedded controls group and worked in a lab and field facility that were more than 100 miles apart. Several other jobs required me to coordinate between two offices that were three thousand miles apart. In each case, the team dynamics and the network and communication technology available enabled us to manage the need to be able to work remotely a portion of the time.

Contemporary embedded systems continue to increase in complexity, and the number of people on the design teams continues to grow. It is increasingly unrealistic to expect everyone on the design team to be co-located. Design projects need to make some sort of accommodation for team members to work from remote locations – whether those locations are a home office, a remote lab, or out in the field.

In the past, I have seen project managers claim that their projects are too hands-on and evolve too quickly to support members working remotely from the rest of the team. Is this an accurate sentiment? I’m not advocating that remote teams never meet face-to-face, but I wonder if constant co-location is a hard and fast requirement when we have access to so many cheap and pervasive technologies that allow us to share more details with each other than we could face-to-face twenty years ago.

How does your team determine the balance between co-locating and telecommuting?

Alternative Touch Interfaces

Tuesday, September 7th, 2010 by Robert Cravotta

Exploring the different development kits for touch interfaces provides a good example of what makes something an embedded system. To be clear, the human-machine interface between the end device and the user is not an embedded system; however, the underlying hardware and software can be. Let me explain. The user does not care how a device implements the touch interface – what matters to the user is what functions, such as multi-touch, the device supports, and what types of contexts and touch commands the device and applications can recognize and respond to.

This programmable rocker switch includes a display that allows the system to dynamically change the context of the switch.

So, while using resistive and capacitive touch sensors are among the most common ways to implement a touch interface in consumer devices, they are not the only way. For example, NKK Switches offers programmable switches that integrate a push button or rocker switch with an LCD or OLED display. In addition to displaying icons and still images, some of these buttons can display a video stream. This allows the system to dynamically change the context of the button and communicate the context state to the user in an intuitive fashion. I am in the process of setting up some time with these programmable switches for a future write-up.

Another example of alternative sensing for touch interfaces is infrared sensors. The infrared proximity sensing offered by Silicon Labs and the infrared multi-touch sensing offered by Microsoft demonstrate the wide range of capabilities that infrared sensors can support at different price points.

Silicon Labs offers several kits that include infrared support. The FRONTPANEL2EK is a demo board that shows how to use capacitive and infrared proximity sensing in an application. The IRSLIDEREK is a demo board that shows how to use multiple infrared sensors together to detect not only the user’s presence, but also location and specific motion of the user’s hand. These kits are fairly simple and straightforward demonstrations. The Si1120EK is an evaluation platform that allows a developer to explore infrared sensing in more depth including advanced 3-axis touchless object proximity and motion sensing.

By working with these kits, I have gained a greater appreciation of the possible uses for proximity sensing. For example, an end device could place itself into a deep sleep or low power mode to minimize energy consumption. However, placing a system in the lowest power modes incurs a startup delay when reactivating the system. A smart proximity sensing system could give the system a few seconds of warning that a user might want to turn the system on, so it could speculatively activate the device and respond to the user more quickly. In this scenario, the proximity sensor would probably include some method to distinguish between likely power-up requests and an environment where objects or people pass near the device without any intent of powering it up.
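The speculative wake-up scenario above can be sketched as a simple intent filter: require several consecutive above-threshold samples before waking, so a single stray reading from a passer-by does not trigger the device. The threshold and sample count below are illustrative assumptions, not values from any shipping sensor.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical intent filter for a proximity-triggered speculative
 * wake-up. Both constants are illustrative assumptions. */
#define PROX_THRESHOLD   600u  /* raw counts suggesting a nearby hand */
#define CONSECUTIVE_HITS 5u    /* samples required before we trust it */

static uint8_t hit_count;

/* Feed one raw proximity sample; returns true once enough consecutive
 * samples exceed the threshold. A below-threshold sample resets the
 * count, filtering out objects that merely pass by the device. */
bool prox_should_wake(uint16_t sample)
{
    if (sample >= PROX_THRESHOLD) {
        if (hit_count < CONSECUTIVE_HITS)
            hit_count++;
    } else {
        hit_count = 0;
    }
    return hit_count >= CONSECUTIVE_HITS;
}
```

In a real design the filter would run from a low-power timer interrupt while the rest of the system sleeps, and the threshold would likely be calibrated against ambient conditions rather than fixed at compile time.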

Finally, Microsoft’s Surface product demonstrates the other end of touch sensing using an infrared camera system. In essence, the Surface is a true embedded vision system – an implementation detail that the end user does not need to know anything about. In the case of the Surface table, several infrared cameras view a diffusion surface. The diffusion surface has specific optical properties that allow the system software to identify when any object touches the surface of the display. This high-end approach provides a mechanism for the end user to interact with the system using real-world objects found in the environment rather than just special implements such as a stylus with specific electrical characteristics.

The point here is to recognize that there are many ways to implement touch interfaces – including sonic mechanisms. They may not support touch interfaces in the same way, nor be able to support a common minimum command set, but taken together, they may enable smarter devices that are better able to predict the end user’s true expectations and prepare accordingly. What other examples of alternative touch sensing technologies are you aware of?

When do you use your compiler’s inline assembler feature and for what reasons?

Wednesday, September 1st, 2010 by Robert Cravotta

I am working on a mapping for software that is analogous to the mapping I developed to describe the different types of processing options. The value of this type of mapping is that it improves visibility into the assumptions and optimization trade-offs that drive the design and implementation details of a given tool or architecture. A candidate mapping criterion is the coupling between the different layers of abstraction between the software and the hardware target. I will be asking questions that try to tease out the assumptions and trade-offs behind the tools you use to move between different layers of abstraction in your designs.

For example, a compiler allows a software developer to write instructions in a high-level language that generally allows the developer to focus on what the software needs to accomplish without having to worry about partitioning and scheduling the execution engine resources such as register reads and writes. For the mapping model, a compiler would have a strong coupling with the high-level language. Additionally, if the developer is using an operating system, the compiler may also support targeting the software to the operating system API (application programming interface) rather than a privileged mode on the target processor. This would constitute another layer of coupling that the compiler must account for.

However, most compilers also include an inline assembler that allows the developer to break these abstraction layers and work at the level of the target processor’s assembly instructions and resources. Using the inline assembler usually means more complexity for the software developer to manage because the compiler is no longer directly controlling some of the target processor’s resources. Using assembly language can also reduce the portability of the software, so developers usually have a good reason to break the abstraction layer and work at the level of the target processor. Reasons for using an inline assembler include improving the execution speed of the software, optimizing the memory usage in the system, and directly controlling special hardware resources in the processor such as co-processors, accelerators, and peripherals.
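As a concrete illustration of breaking the abstraction layer, here is a minimal sketch using GCC's extended-asm syntax. The x86-64 instruction is assumed purely for illustration; an embedded target would substitute its own instructions, such as a saturating DSP multiply or a coprocessor register access that C cannot express directly.

```c
#include <stdint.h>

/* A deliberately trivial example of GCC extended asm: add two values
 * with an explicit x86-64 instruction. The constraints tell the
 * compiler which registers it must hand over, which is exactly the
 * resource management the developer takes on when bypassing the
 * compiler's abstraction. */
static inline uint32_t add_with_asm(uint32_t a, uint32_t b)
{
    uint32_t result;
    __asm__ ("addl %2, %0"         /* result = a + b */
             : "=r" (result)       /* output: any general register   */
             : "0" (a), "r" (b));  /* "0" ties a to result's register */
    return result;
}
```

Even this toy example shows the portability cost: the function now compiles only for one architecture and one compiler's asm dialect, so real projects typically isolate such code behind `#ifdef` guards or a C fallback.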

Under what conditions do you use the inline assembler (or a separate assembler) for your software? What are the project management and technical trade-offs you consider when choosing to work at the assembly level? What features would a compiler need to support to allow you to avoid using assembly language? Your answers will help refine the software sweet spot mapping that I am currently developing.

Identifying sweet spot assumptions

Monday, August 30th, 2010 by Robert Cravotta

I am continuing to develop a taxonomy to describe the different types of software tools. Rather than waiting until I have a fully fleshed out model, I am sharing my thought process with you in the hopes that it will entice you to share your thoughts and speed up the process of building a workable model.

I am offering up the following processing mapping as an example of how an analogous software mapping might look. The mapping identifies two independent characteristics, in this case, the number of states and the amount of computation that the system must handle. One nice thing about mapping the design characteristics like this is that it provides an independence from the target application and allows us to focus on what an architecture is optimizing and why.

For example, a microcontroller’s sweet spot is in the lower end of the computation load but spans from very simple to complicated state machines. Microcontroller architectures emphasize excellent context switching. In contrast, DSP architectures target streaming problems where context switching is less important and maximizing computation for the same amount of time/energy is more important.

I suspect that if we can identify the right characteristics for the axes of the mapping space, software tools will fall into analogous categories of assumptions and optimizations. The largest challenge at this moment is identifying the axes. Candidate characteristics include measures of productivity, efficiency, reusability, abstraction, coupling, and latency tolerance.

An important realization is that the best any software can accomplish is to avoid stalling the hardware processing engine. In practice, the software’s data manipulations and operations will leave the processing engine stalled, or idle, some percentage of the time. As a result, all software tools are productivity tools that strive to help the developer produce software that is efficient enough to meet the performance, schedule, and budget requirements of the project. This includes operating systems, which provide a layer of abstraction from the underlying hardware implementation.

I propose using a measure of robustness or system resilience and a measure of component coupling as the two axes to map software development tools to a set of assumptions and optimization goals.

The range for the component coupling axis starts at microcode and moves toward higher levels of abstraction such as machine code, assembly code, BIOS, drivers, libraries, operating systems, and virtual machines. Many embedded software developers must be aware of multiple levels of the system in order to extract the required efficiency from the system. As a result, many software tools also target one or more of these layers of abstraction. The more abstraction layers that a tool accommodates, the more difficult it is to build and support.

Consider that while a compiler ideally allows a developer to work at a functional and/or data flow level, it must also be able to provide the developer visibility into the lower level details in case the generated code performs in an unexpected fashion that varies with the hardware implementation. The compiler may include an inline assembler and support #pragma statements that enable the developer to better specify how the compiler can use special resources in the target system.

The robustness axis is harder to define at this moment. The range for the robustness axis should capture the system’s tolerance to errors, inconsistent results, latency, and determinism. My expectation for this axis is to capture the trade-offs that allow the tool to improve the developer’s productivity while still producing results that are “good enough.” I hope to be able to better define this axis in the next write-up.

Do you have any thoughts on these two axes? What other axes should we consider? The chart can go beyond a simple 2D mapping.