What should compilers do better?

Wednesday, August 11th, 2010 by Robert Cravotta

Is assembly language programming a dead-end skill? So much marketing material for embedded development tools seems geared toward suggesting that developers no longer need to work at the assembly language level in their designs. How true is this message? My observation of the embedded market suggests that the need to program in assembly language is quite alive and well.

I think I have an explanation as to why compilers have enabled the majority of software development to occur in a higher-level language and yet still fall short of eliminating that “final mile” of assembly programming. One goal for compilers is that they generate code that is as good as hand-coded assembly but requires less time, energy, and resources to produce.

While faster compilation is a good thing, I think we have reached a point of diminishing returns for many types of embedded software projects. Automated project and configuration management frees developers to spend more minutes on the software itself, but it does not actually make those reclaimed minutes more productive. Making those minutes more productive is essential for today’s software development tools because embedded systems are growing in complexity, and a significant portion of that additional complexity (more than 50% by my estimate) manifests in the software portion of the design.

The problem I see is that while compilers are strong at scheduling and sequencing the loading and execution of opcodes and operands, they are horrible at partitioning and allocating global resources, such as memory spaces beyond the register set and possibly tightly coupled memories. This limits the compiler’s ability to increase a developer’s productivity in precisely the area that contributes a significant portion of the additional complexity in new designs.

Processor architects perform at herculean levels to deliver memory architectures that minimize silicon cost, minimize energy dissipation, and hide the latency of data reads and writes in the memory system so that the instruction execution engines do not stall. A problem is that programming languages do not capture, beyond register files, how the memory subsystem presents different access times for each of the different types of limited memory resources. Currently, it is only through manual assembly language coding that the software can match the different types of data with the access times of each memory resource that the processor architect provided.
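
In practice, developers often approximate this placement today with toolchain-specific attributes or linker-script tweaks rather than pure assembly, but the decision still rests entirely with the developer. Here is a minimal sketch, assuming a GCC-style toolchain and a linker script that defines a “.dtcm” tightly coupled memory section; both the attribute syntax and the section name are assumptions, not a universal solution:

/* Sketch: steering hot data into tightly coupled memory with a GCC-style
   section attribute. The ".dtcm" section name is an assumption and must be
   defined in the project's linker script; other toolchains use pragmas or
   keywords instead. */
#include <stdint.h>

__attribute__((section(".dtcm")))
static int16_t filter_state[64];             /* frequently accessed: fast, tightly coupled RAM */

static const int16_t cal_table[256] = { 0 }; /* rarely touched: flash or slower RAM is fine    */

int16_t filter_step(int16_t sample)
{
    /* Illustrative processing only; the point is that the placement above is
       decided by the developer, not inferred by the compiler. */
    filter_state[0] = (int16_t)((filter_state[0] + sample) / 2);
    return (int16_t)(filter_state[0] + cal_table[(uint8_t)sample]);
}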

Matching data with the different memory resources is an involved skill set, and it may be a long while before compilers can perform that partitioning with any level of proficiency. However, what other things should compilers do better to improve the productivity of software developers and offset the increasing amount of complexity they must account for in their embedded designs?

First and second generation touch sensing

Tuesday, August 10th, 2010 by Robert Cravotta

I recently proposed that technologies reach a tipping point at the third generation of a technology or product, and I observed that touch technology seems to be following a similar pattern as more touch solutions integrate third generation capabilities. It is useful to understand the differences between the generations of touch sensing to better understand the impact of the emerging third generation capabilities for developers.

First generation touch sensing relies on the embedded host processor for support, and the application software must understand how to configure, control, and read the touch sensor. The application software is aware of and manages the details of the touch sensor drivers and the analog-to-digital conversion of the sense circuits. A typical control flow to capture a touch event consists of the following steps (sketched in code after the list):

1) Activate the X driver(s)

2) Wait a predetermined amount of time for the X drivers to settle

3) Start the ADC measurement(s)

4) Wait for the ADC measurement to complete

5) Retrieve the ADC results

6) Activate the Y driver(s)

7) Wait a predetermined amount of time for the Y drivers to settle

8) Start the ADC measurement(s)

9) Wait for the ADC measurement to complete

10) Retrieve the ADC results

11) Decode and map the measured results to an X,Y coordinate

12) Apply any sensor specific filters

13) Apply calibration corrections

14) Use the result in the rest of the application code
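
A minimal C sketch of that polling sequence follows; every function name, register access, and settle time in it is a hypothetical placeholder rather than a real vendor API:

/* Sketch of the first-generation polling flow above. All functions and the
   settle times are hypothetical placeholders for vendor-specific hardware
   access; a real sensor's registers and timing will differ. */
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint16_t x; uint16_t y; bool valid; } touch_point_t;

extern void activate_drivers(char axis);        /* 'X' or 'Y' */
extern void delay_us(uint32_t us);
extern void adc_start(void);
extern bool adc_busy(void);
extern uint16_t adc_read(void);
extern touch_point_t map_to_coords(uint16_t raw_x, uint16_t raw_y);
extern touch_point_t apply_filter_and_cal(touch_point_t p);

touch_point_t read_touch(void)
{
    activate_drivers('X');          /* steps 1-2: drive X, wait to settle */
    delay_us(50);
    adc_start();                    /* steps 3-5: measure and read back   */
    while (adc_busy()) { }
    uint16_t raw_x = adc_read();

    activate_drivers('Y');          /* steps 6-10: repeat for Y           */
    delay_us(50);
    adc_start();
    while (adc_busy()) { }
    uint16_t raw_y = adc_read();

    /* steps 11-13: decode, filter, and calibrate */
    return apply_filter_and_cal(map_to_coords(raw_x, raw_y));
}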

Second generation touch sensing usually encapsulates this sequence of steps to activate the drivers, measure the sensing circuits, and apply the filters and calibration corrections into a single touch event. Second generation solutions may also offload the sensor calibration function, although the application software may need to know when and how to initiate that calibration. A third generation solution may provide automatic calibration so that the application software does not need to know when or how to recalibrate the sensor in response to changes in the operating environment (more in a later article).

A challenge for providing touch sensing solutions is striking a balance between meeting the needs of developers who want low and high levels of abstraction. For low-level design considerations, the developer needs an intimate knowledge of the hardware resources and access to the raw data to build and use custom software functions that extend the capabilities of the touch sensor or even improve its signal-to-noise ratio. For developers using the touch sensor as a high-level device, the developer may be able to work through an API (application programming interface) to configure the touch sensor, as well as turn it on and off.

The second and third generation touch APIs typically include high-level commands to enable and disable the sensor, calibrate it, and read and write its configuration registers, as well as low-level commands to access the calibration information for the touch sensor. The details of configuring the sensor and the driver for event reporting differ from device to device. Another important capability that second and third generation solutions may include is the ability to support various touch sensors and display shapes without requiring the developer to rework the application code. This matters because, for many contemporary touch and display solutions, the developer must be separately aware of the display, touch sensing, and controller components; there are not many options for fully integrated touch and display systems. In short, we are still in the Wild West era of embedded touch sensing and display solutions.
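
As a rough illustration of the difference in abstraction, a second- or third-generation controller might expose an interface along these lines; the names and the event structure are hypothetical, since every vendor’s API differs:

/* Hypothetical second/third generation touch API sketch. None of these names
   come from a real vendor library; they only illustrate the level of
   abstraction described above. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t x;
    uint16_t y;
    bool     pressed;
} touch_event_t;

bool touch_enable(void);                     /* power up and configure the sensor */
void touch_disable(void);
bool touch_calibrate(void);                  /* explicit in second generation;    */
                                             /* automatic in third generation     */
bool touch_read_config(uint8_t reg, uint8_t *value);
bool touch_write_config(uint8_t reg, uint8_t value);
bool touch_get_event(touch_event_t *event);  /* one call replaces the 14-step flow */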

Exploring multiprocessing extremes

Friday, August 6th, 2010 by Robert Cravotta

Extreme multiprocessing is an interesting topic because it can mean vastly different things to different people depending on what types of problems they are trying to solve.

At one end of the spectrum, there are multiprocessing designs that maximize the amount of processing work the system performs within a unit of time while staying within an energy budget to perform that work. These types of designs, often high-compute parallel processing, workstation, or server systems, are able to deliver a higher processing throughput rate at lower power dissipation than if they used a hypothetical single-core processor running at significantly faster clock rates. The multiple processor cores in these types of systems might operate in the GHz range.

Although multiprocessing architectures are an approach to increasing processing throughput while maintaining an energy budget, for the past few years I have been unofficially hearing from high performance processor suppliers that some of their customers are asking for faster processors despite the higher energy budget. These designers understand how to build their software systems using a single instruction-stream model. Contemporary programming models and tools fall short in enabling software developers to scale their code across multiple instruction streams. The increased software complexity and risk outweigh the complexity of managing the higher thermal and energy thresholds.

At the other end of the spectrum, there are multiprocessing designs that rely on multiple processor cores to partition the workload among independent resources to minimize resource dependencies and design complexity. These types of designs are the meat and potatoes of the embedded multiprocessing world. The multiple processor cores in these types of systems might operate in the tens to hundreds of MHz range.

Let me clarify how I am using multiprocessing to avoid confusion. Multiprocessing designs use more than a single processing core, working together (even indirectly) to accomplish some system-level function. I do not assume what type of cores the design uses, nor whether they are identical, similar, or dissimilar. I also do not assume that the cores are co-located in the same silicon die, chip package, board, or even chassis, because the primary differences among these implementation options are energy dissipation and the latency of the data flow. The design concepts are similar at each scale as long as the implementation meets the energy and latency thresholds. To further clarify, multicore is a subset of multiprocessing where the processing cores are co-located in the same silicon die.

I will try to identify the size, speed, energy, and processing width limits of multiprocessing systems for each of these types of designers. In the next extreme processing article, I will explore how scaling multiprocessing upwards might change basic assumptions about processor architectures.

Impacts of touchscreens for embedded software

Thursday, August 5th, 2010 by Ville-Veikko Helppi

No question, all layers of the embedded software are affected when a touchscreen is used on a device. A serious challenge is finding space to visually show a company’s unique brand identity, since it is the software running on the processor that places the pixels on the screen. From the software point of view, the touchscreen removes one abstraction level between the user and the software. For example, many devices have removed ‘OK’ buttons from dialogs because the user can tap anywhere on the dialog instead of clicking a button.

Actually, software plays an even more critical role as we move into a world where the controls on a device are virtual rather than physical. At the lowest level of software, the touchscreen driver provides a mouse emulation, which basically amounts to clicking a mouse cursor on certain pixels. However, the mouse driver reports its data as “relative” movement while the touchscreen driver reports its data as “absolute” position. Writing the touchscreen driver is usually trivial, as this component only passes information from the physical screen to higher levels of software. The only inputs the driver needs are a Boolean indicating whether the screen is touched and the x- and y-coordinates where the touch has taken place.
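
A minimal sketch of that reporting interface follows, contrasting the relative mouse report with the absolute touch report; the structures and field names are illustrative rather than any particular operating system’s input API:

/* Sketch contrasting a relative (mouse) report with an absolute (touch)
   report. The structures are hypothetical; real operating systems define
   their own input-event formats. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    int16_t dx;            /* relative: movement since the last report */
    int16_t dy;
} mouse_report_t;

typedef struct {
    bool     touched;      /* is the screen currently being touched?   */
    uint16_t x;            /* absolute: position on the panel          */
    uint16_t y;
} touch_report_t;

/* The driver's main job: pass the absolute sample up the stack unchanged. */
void touch_driver_report(bool touched, uint16_t x, uint16_t y,
                         touch_report_t *out)
{
    out->touched = touched;
    out->x = x;
    out->y = y;
}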

At the operating system level, a touchscreen user interface means more frequent operating system events than the typical icon or widget-based user interface. In addition to a touchscreen, there may also be a variety of different sensors (e.g., accelerometers) inputting stimuli to the operating system through their drivers. Generally, the standardized operating system can give confidence and consistency to device creation, but if it needs to be changed, the cost of doing so can be astronomical due to testing the compatibility of other components.

The next layer is where the middleware components of the operating system are found, or in this context, where the OpenGL/ES library performs. Various components within this library do different things, from processing the raw data with mathematical algorithms, to providing a set of APIs for drawing, to interfacing between software and hardware acceleration, to providing services such as rendering, font engines, and so on. While this type of standardization is generally a good thing, in some cases it can lead to non-differentiation; in the worst case, it might even kill the inspiration for an innovative user interface creation. Ideally, a standardized open library, together with rich and easily customizable user interface technology, yields superb results.
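
For a sense of what “a set of APIs for drawing” looks like at this layer, here is a minimal OpenGL ES 1.1 frame sketch; it assumes the platform layer has already created an EGL context, made it current, and will swap buffers after the call:

/* Minimal OpenGL ES 1.1 frame sketch. The triangle stands in for a simple
   UI element; real UI toolkits build on calls like these. */
#include <GLES/gl.h>

static const GLfloat tri[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f,  0.5f };

void draw_frame(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);    /* background color              */
    glClear(GL_COLOR_BUFFER_BIT);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, tri);    /* 2D vertices, tightly packed   */
    glDrawArrays(GL_TRIANGLES, 0, 3);        /* one triangle, e.g., a UI glyph */
    glDisableClientState(GL_VERTEX_ARRAY);
}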

The application layer is the most visible part of the software and forms the user experience. It is here where developers must ask:

1) Should the application run in full-screen mode, or should it use widgets distributed around the screen?

2) What colors, themes, and templates are the best ways to illustrate the behavior of the user interface?

3) How small or large should the user interface elements be?

4) In what ways will the user interface elements behave and interact?

5) How intuitive do I want to make this application?

Compelling UI design tools are essential for the rapid creation of user interfaces.

In the consumer space, there are increasingly more competitive brands with many of the same products and product attributes. Manufacturers are hard-pressed to find any key differentiator among this sea of “me too” offerings. One way to stand out is by delivering a rich UI experience via a touchscreen display.

We are starting to see this realization play out in all types of consumer goods, even in white goods as pedestrian as washing machines. Innovative display technologies are now replacing physical buttons and levers. Imagine a fairly standard washing machine with a state-of-the-art LCD panel. This would allow the user to easily browse and navigate all the functions on that washing machine, and perhaps learn a new feature or two. With an attractive touchscreen display in place, any customization work can be accomplished simply by changing the software running on the display. Therefore, things like changing the branding or adding compelling video clips and company logos all become much simpler because it is all driven by software. If the manufacturer uses the right technology, they may not even need to modify the software to change the user experience.

Driven by the mobile phone explosion, the price point of display technology has come down significantly. As a result, washing machine manufacturers can add more perceived value to their product without necessarily adding too much to the BoM (bill of materials). Thus, before the machine leaves the factory, a display technology may increase the BoM by $30, but this could increase the MSRP by at least $100. No doubt, this can have a huge impact on the company’s bottom line. This results in a “win-win” for the manufacturer and for the consumer. The manufacturer is able to differentiate the product more easily and in a more cost effective manner, while the product is easier to use with a more enhanced UI.

The final part in this four-part series presents a checklist for touchscreen projects.

How much of an engineer’s job is writing?

Wednesday, August 4th, 2010 by Robert Cravotta

My experience suggests that engineers do a lot more writing than the rest of the world realizes. We had a saying where I worked that the engineering effort was not complete until the documentation outweighed the final product. At one time, I was writing (more like typing) so much material for system specifications, proposals, and trade-off studies, that I actually developed an unpleasant case of acute tendinitis. To be fair, the tendinitis was not so much a function of the amount of writing I did, but more a function of the layout of my workspace. Eighteen months of physical therapy and reading everything I could find about ergonomics taught me how to continue writing without hurting myself.

As a boy, I recall hearing how my father was in the top of his class for math and science, but that he was also in the bottom of his class for writing and language. His teachers passed him through the language courses because they figured he would get by fine on his technical skills and would not need to know how to write well. How wrong they were. I remember sensing his frustration that the skill he was weakest in, writing, was the one thing he had to spend significant portions of his time doing to produce specifications, interface documents, and data analysis reports.

Jon Titus has been writing about how engineers can write better, and I suspect he will be posting about this topic a few more times. Jon offers a nice list of types of writing that engineers might need to engage in, such as status reports, technical articles, application notes, marketing material, manuals, instructions, proposals and justifications, as well as blogs or columns. I would like to expand on that list and include logs or lab journals, specifications, as well as design justification and review documents.

Ambiguity is unacceptable in most of these documents, and as a result, engineering writing can exhibit structures that language majors find humorous or frustrating. I know that when I transitioned to article writing, it took me some time to adopt an active voice rather than a passive voice. Even then, there are times when the passive voice just makes more sense. I found an article that discusses the passive engineer in a useful fashion. What I appreciate about the essay is that it avoids being dogmatic about never using the passive voice, allowing for cases such as when you want to emphasize results or when the actor is unimportant to the concept you are communicating.

I would like to uncover whether my experience with writing as an engineer is niche to the aerospace market or if engineers in many or all other fields also engage in a significant amount of writing. What kind of writing do you do as an engineer, and what percentage of your time do you spend doing it? Do you consider yourself a fast writer or a slow writer?

The Next Must-Have System

Friday, July 30th, 2010 by Max Baron

The 9th annual Research@Intel Day held at the Computer History Museum showcased more than 30 research projects demonstrating the company’s latest innovations in the areas of energy, cloud computing, user experience, transportation, and new platforms.

Intel CTO Justin Rattner made one of the most interesting announcements about the creation of a new research division called Interaction and Experience Research (IXR).

I believe IXR’s task will be to determine the nature of the next must-have system and the processors and interfaces that will make it successful.

According to Justin Rattner, you have to go beyond technology; better technology is no longer enough since individuals nowadays value a deeply personal information experience. This suggests that Intel’s target is the individual, the person who could be a consumer and/or a corporate employee. But how do you find out what the individual who represents most of us will need beyond the systems and software already available today?

To try to hit that moving target, Intel has been building up its capabilities in the user experience and interaction areas since the late nineties. One of the achievements was the Digital Health system, now a separate business division. It started out as a research initiative in Intel Labs (formerly the “Corporate Technology Group,” or CTG), with the objective of finding how technology could help in the health space.

Intel’s most recent effort has been to assemble the IXR research team, consisting of both user interface technologists and social scientists. The IXR division is tasked with helping define and create new user experiences and platforms in many areas, some of which are the end use of television, automobiles, and signage. The new division will be led by Intel Fellow Genevieve Bell, a leading user-centered design advocate at Intel for more than ten years.

Genevieve Bell is a researcher who was raised in Australia. She received her bachelor’s degree in Anthropology from Bryn Mawr College in 1990 and her master’s and doctorate degrees in Anthropology in 1993 and 1998 from Stanford University, where she was also a lecturer in the Department of Anthropology. In her presentation, Ms. Bell explained that she and her team will be looking into the ways in which people use, re-use, and resist new information and communication technologies.

To envision the next must-have system, Intel’s IXR division is expected to create a bridge of research incorporating social research, design enabling, and technology research. The team’s social science, design, and human-computer interaction researchers will continue the work that’s already been going on, asking questions to find what people will value and what will fit into their lives. New systems, software, user interactions, and changes in media content and consumption could emerge from using the obtained data on one hand, and from research into the next generation of user interfaces on the other.

Bell showed a photo of a child using his nose to key in data on his mobile, an example of a type of user-preferred interface that may seem strange but can be part of the data social scientists use to define an innovation that may create the human I/Os for 2020. The example also brought out a different aspect that was not addressed: how do you obtain relevant data without placing in the hands of your population sample a scale model or an actual system to help review it, improve it, or even reject it and start from scratch?

In addition to the very large investment Intel makes in US-based research, it also owns labs abroad, and it collaborates with or supports over 1,000 researchers worldwide. According to Intel, 80% of Intel Labs China focuses on embedded system research. Intel Labs Europe conducts research that spans the wide spectrum from nanotechnologies to cloud computing. Intel Labs Europe’s website shows research locations and collaboration at 17 sites, and the list doesn’t even include the recent announcement of the ExaScience Lab in Flanders, Belgium.

But Intel is not focused only on the long term. Two examples that speak of practical solutions for problems encountered today are the Digital Health system, which can become a link between the patient at home and the doctor at the clinic, and the connected vehicle (misinterpreted by some reporters as an airplane-like black box intended for automobiles).

In reality, according to Intel, the connected vehicle’s focus was on personal and vehicle safety. For example, when an attempt is made to break into the vehicle, captured video can be viewed via the owner’s mobile device. Or, for personal safety and experience, a destination-aware connected system could track vehicle speed and location to provide real-time navigation based on information about other vehicles and detours in the immediate area.

Both life-improving systems need to find wider acceptance from the different groups of people that perceive their end-use in different ways.

Why is Intel researching such a wide horizon of disciplines and what if anything is still missing? One of the answers has been given to us by Justin Rattner himself.

I believe that Rattner’s comment, “It’s no longer enough to have the best technology,” reflects the industry’s trend in microprocessors, cores, and systems. The SoC is increasingly taking on the exact configuration of the OEM system that will employ it. SoCs are being designed to deliver precisely the price-performance needed at the lowest power for the workload, and for most buyers the workload has become a mix of general-purpose processing and multimedia, a mix in which the latter is dominant.

The microprocessor’s role can no longer be fixed or easily defined since the SoCs incorporating it can be configured in countless ways to fit systems. Heterogeneous chips execute code by means of a mix of general purpose processors, DSPs and hardwired accelerators. Homogeneous SoC configurations employ multiple identical cores that together can satisfy the performance needed. And, most processor architectures have not been spared; their ISAs are being extended and customized to fit target applications.

Special-purpose ISAs have emerged, trying, and most of the time succeeding, in reducing power and silicon real estate for specific applications. Processor ISA IP owners and enablers are helping SoC architects who want to customize their core configuration and ISA. A few examples include ARC (now Virage Logic and possibly soon Synopsys), Tensilica, and suppliers of FPGAs such as Altera and Xilinx. ARM and MIPS offer their own flavors of configurability. ARM offers so many ISA enhancements across different cores that, aside from its basic compatibility, it can be considered a “ready-to-program” application-specific ISA, while MIPS leaves most of its allowed configurability to the SoC architect.

In view of this rapidly morphing scenario, the importance of advanced social research for Intel, and for that matter for anybody in the same business, cannot be overstated. Although it may not be perceived as such, Intel has already designed processors to support specific applications.

The introduction of the simple, lower performance but power-efficient Atom core was intended to populate mobile notebooks and netbooks. The Moorestown platform brings processing closer to the needs of mobile low-power systems, while the still-experimental SCC (Intel’s Single-chip Cloud Computer) is configured to best execute data searches in servers.

It’s also interesting to see what can be done with an SCC-like chip employed in tomorrow’s desktop.

If Intel’s focus is mainly on the processing platform, as it may be, what seems to be missing and who is responsible for the rest? The must-have system of the future must run successful programs and user interface software. While Intel is funding global research that’s perceived to focus mostly on processors and systems, who is working on the end-use software? I don’t see the large software companies engaging in similar research by themselves or in cooperation with Intel’s or other chip and IP companies’ efforts. And we have not been told precisely how a new system incorporating hardware and software created by leading semiconductor and software scientists will be transferred from the drawing board to system OEMs. An appropriate variant on Taiwan’s ITRI model comes to mind, but only time will tell.

How does a “zerg-to-market” strategy affect your design cycle?

Wednesday, July 28th, 2010 by Robert Cravotta

The term “zerg” serves several purposes in this week’s question. It is: a nod toward the recent release of Blizzard’s Starcraft 2 real-time strategy game; a reference to a rush strategy to win a Starcraft match that has found its way into general usage among online gamers; and a way to blend the terms time-to-market and first-to-market into a single phrase in a serious question for embedded developers. Despite the connotation of a rush strategy, there is more than a ten-year gap between the launches of the prior and current releases of Starcraft.

Time-to-market is the length of time for the design and development process of transforming a concept from an idea to a deliverable product in the market. It applies to new concepts/products as well as incremental improvements to existing products. Being able to get an idea to market quickly is especially important when products undergo a rapid obsolescence cycle.

First-to-market is a special-case of time-to-market, and it is usually the implied message when marketers tout how using their embedded components will shrink your time-to-market. Adopting a first-to-market strategy relies on the assumption that time-to-market matters most for first-of-a-kind products because it provides a first mover advantage.

A time-to-market emphasis assumes that you can shrink your design cycle so that it is shorter than those of your competitors who chose a different way to complete their design effort. There are several strategies for shrinking the development cycle. The easiest, and most risky, strategy is to skip “low-value” steps in the development process, but this can cause the development team to miss flaws early in the design cycle and allow those flaws to become expensive, in terms of both resources and schedule, to fix.

A more robust, but also more costly, approach is to have more developers working in parallel so that you do not skip any steps in your design process. This is an approach taken by many teams aiming for a first-to-market product release; however, it has its own unique set of risks. By virtue of trying to be the first to build something, the components you have access to have not yet been characterized against your project’s requirements. In essence, you become an early adopter working with preproduction components. Likewise, your team usually lacks access to mature, domain-targeted tools, documentation, and training because you are completing your first-to-market design while these are being built and certified. This means your team will need to build some of the components, such as drivers, from scratch because the production versions may not exist until you are ready to ship your product.

When marketing material touts that it shrinks your time-to-market, it usually relies on a third approach: using pre-engineered and pre-certified components and tools. In essence, you can safely skip steps because you have outsourced those steps to someone else. On the other hand, you are not going to find production-ready components for novel first-to-market ideas, because building these pre-engineered solutions is costly, and the companies that provide them build them because they believe they can recover the development cost across many customers and designs, your competitors among them.

How important is a first-to-market strategy to your projects? What trade-offs do you consider when trying to shrink your time-to-market? What types of risks are you willing to accept to get the product out the door sooner, and what steps do you take to mitigate the consequences of those accepted risks?

Email me with any questions you would like to suggest for future posts.

Robotics and autonomous systems

Tuesday, July 27th, 2010 by Robert Cravotta

Robotics is embedded design made visible. It is one of the few ways that users and designers can see and understand the rate of change in embedded technology. The various sensing and actuating subsystems are not the end-system, nor does the user much care how they are implemented, but both user and designer can recognize how each of the subsystems contributes, at a high level of abstraction, to the behavior of the end-system.

The state of the art for robotic systems keeps improving. Robots are not limited to military applications. Robots are entrenched in the consumer market in the form of toys and cleaning robots. Aquaproducts and iRobot are two companies that sell robots into the consumer market that clean pools, carpets, roof gutters, and hard floors.

A recent video from the GRASP (General Robotics Automation Sensing and Perception) Laboratory at the University of Pennsylvania demonstrates aggressive maneuvers for an autonomous, flying quadrotor (or quadrocopter). The quadrotor video demonstrates that it can autonomously sense and adjust for obstacles, as well as execute and recover from performing complex flight maneuvers.

An even more exciting capability is groups of autonomous robots that are able to work together toward a single goal. A recent video demonstrates multiple quadrotors flying together to carry a rigid structure. At this point, the demonstration only involves rigid structures, and I have not yet been able to confirm whether the cooperative control mechanism can work when carrying non-rigid structures.

Building robots that can autonomously work together in groups is a long-term goal. There are robot soccer competitions that groups such as FIRA and RoboCup sponsor throughout the year to promote interest and research into cooperative robots. However, building packs of cooperating robots is not limited to games. Six development teams were recently announced as finalists for the inaugural MAGIC (Multi Autonomous Ground-Robotic International Challenge) event.

Robotics relies on the integration of software, electronics, and mechanical systems. Robotic systems need to coordinate sensing of the external world with their own internal state to navigate through the real world and accomplish a task or goal. As robotic systems continue to mature, they are incorporating more context recognition of their surroundings, self-state, and goals so that they can perform effective planning. Lastly, multiprocessing concepts are put to practical tests, not only within a single robot but also within packs of robots. Understanding what does and does not work with robots may strongly influence the next round of innovations within embedded designs as they adopt and implement more multiprocessing concepts.

The User Interface Paradigm Shift

Thursday, July 22nd, 2010 by Ville-Veikko Helppi

Touchscreens are quickly changing the world around us. Tapping an image on a touchscreen requires much less thinking and relies more on user intuition. Touchscreens are also said to be the fastest pointing method available, but that isn’t necessarily true; it all depends on how the user interface is structured. For example, most users accept a ten millisecond delay when scrolling with cursor and mouse, but with touchscreens, this same period of time feels much longer, so the user experience is perceived as less smooth. Also, multi-touch capabilities are not possible with mouse emulation, at least not as intuitively as with a touchscreen. The industry has done a good job providing a screen pen or stylus to assist the user in selecting the right object on smaller screens, thus silencing the critics of touchscreens who say they are far from ideal as a precise pointing method.

The touchscreen has changed the nature of UI (user interface) element transitions. When looking at the motions of different UI elements, these transitions can make a difference in device differentiation and, if implemented properly, tell a compelling story. Every UI element transition must have a purpose and context, as it usually reinforces the UI elements. Something as simple as a buffer is effective at giving a sense of weight to a UI element, and moving these types of elements without a touchscreen would be awkward. For UI creation, the best user experience is achieved when UI element transitions are natural and consistent with other UI components (e.g., widgets, icons, menus) and deliver a solid, tangible feel for that UI. Also, 3D effects during the motion provide a far better user experience.

3D layouts enable more touchscreen friendly user interfaces.

Recent studies in human behavior, along with documented consumer experiences, have indicated that the gestures of modern touchscreens have expanded the ways users can control a device through its UI. As we have seen with the “iPhone phenomenon,” the multi-touch screen changes the reality behind the display, allowing new ways to control the device through hand-eye coordination (e.g., pinching, zooming, rotating). But it’s not just the iPhone that’s driving this change. We’re seeing other consumer products trending toward simplifying the user experience and enhancing personal interaction. In fact, e-books are perfect examples. Many of these devices have a touchscreen UI where the user interacts with the device directly at an almost subconscious level. This shift in improved user experience has also introduced the idea that touchscreens reduce the number of user inputs required for the basic functioning of a device.

The third part in this four-part series explores the impact of touchscreens on embedded software.

To MMU or not to MMU?

Wednesday, July 21st, 2010 by Robert Cravotta

Responses to previous questions of the week, most notably “What matters most when choosing an embedded processor?” and “Do you use or allow dynamic memory allocation in your embedded design?”, have uncovered at least two different developer biases with regard to memory management hardware. One type of developer considers an MMU (memory management unit) an expensive and usually unnecessary component, while the other type considers an MMU an essential and necessary component in their design.

After looking through the processors listed in the Embedded Processing Directory, I think it is safe to say that processors that include MMUs are predominantly limited to the 32-bit processor space. While there are some 8- and 16-bit processors that lay claim to an MMU, the majority of these processors, listed across approximately 60 pages in the directory, do not include one. These smaller processors support smaller memory sizes and are not likely targets for consolidating many functions within a single processing core.

Even in the 32-bit processor space, there is a lot of activity at the small end of the processing spectrum. Consider the ARM Cortex-M0, which only hit the market within the last year and does not include an MMU. The Cortex-M0 is the smallest 32-bit core ARM offers, and it has experienced the fastest adoption rate of any ARM core ever. The Cortex-M3 does not support an MMU, but it does support an optional MPU (memory protection unit). In fact, MMU support only exists in the Cortex-Ax class processors, with the Cortex-Rx processors supporting only an optional MPU.
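
For a sense of scale, configuring a single MPU region on an ARMv7-M part such as the Cortex-M3 is only a handful of register writes. The following simplified sketch uses the architectural register addresses; the region base, size, and attributes are illustrative, and production code would normally use the vendor’s CMSIS definitions and add the required memory barriers:

/* Simplified sketch: mark one 32 KB SRAM region read/write but no-execute
   on an ARMv7-M (Cortex-M3/M4) MPU. The register addresses follow the
   architectural map; region base, size, and attributes are illustrative
   only, and real code should follow the enable with DSB/ISB barriers. */
#include <stdint.h>

#define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94u)
#define MPU_RNR   (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0u)

void mpu_protect_sram(void)
{
    MPU_RNR  = 0u;                      /* select region 0                   */
    MPU_RBAR = 0x20000000u;             /* region base: start of SRAM        */
    MPU_RASR = (1u << 28)               /* XN: no instruction fetches        */
             | (3u << 24)               /* AP = 011: full access             */
             | (14u << 1)               /* SIZE: 2^(14+1) = 32 KB            */
             | 1u;                      /* region enable                     */
    MPU_CTRL = (1u << 2)                /* PRIVDEFENA: default map for priv. */
             | 1u;                      /* enable the MPU                    */
}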

I do not believe there is a universal answer to using an MMU; rather, it seems that when to use an MMU depends greatly on the choice of processor and the type of software the end-device requires. Is using an operating system an essential ingredient to using an MMU? Is there a size threshold for when using an MMU makes sense? Does the use of dynamic or static memory allocation affect when it makes sense to insist on an MMU? Does an MMU make sense in systems that have deterministic hard real-time requirements?

In other words, where is the line between using an MMU and not using an MMU? The embedded space is too large to generalize an answer to this question, so I ask that you share what type of systems you work with and any specific engineering reasoning you use when deciding whether or not to use an MMU in your design.

If you have a question you would like to see in a future week, please contact me.