Entries Tagged ‘Low Power’

What is important when looking at a processor’s low power modes?

Wednesday, June 1st, 2011 by Robert Cravotta

Low power operation is an increasingly important capability of embedded processors, and many processors support multiple low power modes to enable developers to accomplish more with less energy. While low power modes differ from processor to processor, each mode enables a system to operate at a lower power level either by running the processor at lower clock rates and voltages or by removing power from selected parts of the processor, such as specific peripherals, the main processor core, and memory spaces.

An important characteristic of a low power or sleep mode is the current draw while the system operates in that mode. However, comparing low power modes on different processors requires you to look at more than just the current draw to perform an apples-to-apples comparison. For example, the time it takes the system to wake up from a given mode can disqualify a processor from consideration in a design. The wake-up time depends on such factors as the settling time of the clock source and of the analog blocks. Some architectures offer multiple clock sources to allow a system to perform work at a slower rate while the faster clock source is still settling – further complicating wake-up time comparisons between processors.
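
To make the trade-off concrete, here is a minimal sketch in C of how wake-up time folds into the average current of a duty-cycled system. All figures are invented for illustration – none come from any real datasheet – but they show how a part with a higher sleep current and a much faster wake-up can still average lower:

```c
/* Two hypothetical MCUs over one duty cycle: sleep for most of a
   period, wake once, run briefly, and sleep again. The wake-up time
   is spent at run-mode current but does no useful work. */

typedef struct {
    double sleep_uA;   /* deep-sleep current                       */
    double active_uA;  /* run-mode current, also drawn while waking */
    double wakeup_ms;  /* time to wake from deep sleep             */
} mcu_t;

/* Average current over one period_ms cycle containing one wake-up
   and active_ms of useful work. */
double avg_current_uA(mcu_t m, double period_ms, double active_ms)
{
    double awake_ms = m.wakeup_ms + active_ms;
    double sleep_ms = period_ms - awake_ms;
    return (m.active_uA * awake_ms + m.sleep_uA * sleep_ms) / period_ms;
}
```

With a 1 s period and 1 ms of work, a part sleeping at 1 µA but taking 5 ms to wake averages worse than one sleeping at 3 µA that wakes in 10 µs – the sleep current alone does not decide the comparison.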

Another differentiator for low power modes is the granularity of control they support – whether the developer can turn individual peripherals or coprocessors on and off, or only whole blocks of them. Some low power modes remove power from the main processor core and leave an autonomous peripheral controller operating to manage and perform data collection and storage. Low power modes can also differ in which circuits they leave running, such as brown-out detection, whether the contents of RAM or registers are preserved, and whether the real time clock remains active. The architectural decisions about which circuits can be powered down depend greatly on the end application, and they provide opportunities for specific processors to best target niche requirements.

When you are looking at a processor’s low power modes, what information do you consider essential? When comparing different processors, do you weigh wake-up times, or does current draw trump everything else? How important is your ability to control which circuits are powered on or off?

Boosting energy efficiency – Energy debugging

Monday, April 4th, 2011 by Oyvind Janbu

Using an ultra low power microcontroller does not, alas, by itself mean that an embedded system designer will automatically arrive at the lowest possible energy consumption. To achieve this, the important role of software also needs to be taken into account. Code needs to be optimized not just in terms of functionality but also with respect to energy efficiency. Software has perhaps never really been formally identified as an ‘energy drain’, and it needs to be: every clock cycle and line of code consumes energy, and this needs to be minimized if the best possible energy efficiency is to be achieved.

While the first two parts of this article proposed the fundamental ways in which microcontroller design needs to evolve in the pursuit of real energy efficiency, this third part considers how the tools that support them also need to change. Having tools available that provide developers with detailed monitoring of their embedded systems’ energy consumption is becoming vital for many existing and emerging energy sensitive battery-backed applications.

As a development process proceeds, code size naturally increases and optimizing it for energy efficiency becomes a much harder and more time-consuming task. Without proper tools, identifying a basic code error such as a while loop that should have been replaced with an interrupt service routine can be difficult. Such a simple oversight causes a processor to stay active waiting for an event instead of entering an energy saving sleep mode – it therefore matters! If these ‘energy bugs’ are not identified and corrected during the development phase, they are virtually impossible to detect in field or burn-in tests. Add together a good collection of such bugs and they will have an impact on total battery lifetime, perhaps critically so.
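
A toy model in C of the while-loop bug described above makes the cost visible. The cycle counts are invented, and the two functions merely tally the cycles the core spends awake; the commented-out loop and `__WFI()` stand in for the real code paths:

```c
/* An event arrives after EVENT_DELAY cycles. Polling keeps the core
   active for the entire wait; an interrupt-driven design sleeps
   (e.g. via a WFI-style instruction) and pays only a fixed wake-up
   cost. All cycle counts are invented for illustration. */

#define EVENT_DELAY  100000UL  /* cycles until the event fires   */
#define WAKEUP_COST     200UL  /* cycles to wake from deep sleep */
#define HANDLER_COST     50UL  /* cycles to service the event    */

/* while (!event_flag) { } -- core burns a cycle every iteration */
unsigned long active_cycles_polling(void)
{
    return EVENT_DELAY + HANDLER_COST;
}

/* __WFI(); -- core sleeps until the event interrupt fires */
unsigned long active_cycles_interrupt(void)
{
    return WAKEUP_COST + HANDLER_COST;
}
```

In this model the polling version keeps the core active for 400 times as many cycles – exactly the kind of disparity an energy profiler makes obvious and a functional test never will.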

In estimating potential battery lifetimes, embedded developers have been able to use spreadsheets provided by microcontroller vendors to get a reasonable estimation of application behavior in terms of current consumption.  Measurements of a hardware setup made by an oscilloscope or multimeter and entered into spreadsheets can be extrapolated to give a pretty close estimation of battery life expectancy.  This approach however does not provide any correlation between current consumption and code and the impact of any bugs – the application’s milliamp rating is OK, but what’s not known is whether it could be any better.
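
The spreadsheet arithmetic can be sketched in a few lines of C. The capacity, currents, and duty cycle below are invented for illustration:

```c
/* Duty-cycled average current and the battery lifetime it implies,
   mirroring the vendor-spreadsheet estimation approach. */

double duty_avg_mA(double active_mA, double sleep_mA, double duty)
{
    /* duty is the fraction of time spent in the active state */
    return active_mA * duty + sleep_mA * (1.0 - duty);
}

double battery_life_hours(double capacity_mAh, double avg_mA)
{
    return capacity_mAh / avg_mA;
}
```

For example, 5 mA active, 1 µA asleep, and a 0.1% duty cycle gives roughly 6 µA average – over four years from a 225 mAh coin cell. The arithmetic says nothing, however, about whether the 0.1% duty cycle could have been 0.05% with better code.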

With a logic analyzer a developer gets greater access to the behavior of the application and begins to recognize that ‘something strange’ is going on.  Yes it’s a ‘code view’ tool, and shows data as timing diagrams, protocol decodes, state machine traces, assembly language or its correlation with source level software, however it offers no direct relationship with energy usage.

Combine the logic analyzer, the multimeter, and the spreadsheet and you do start to make a decent connection between energy usage and code, but the time and effort spent in setting up all the test equipment (and possibly repeating it identically on numerous occasions), making the measurements and recording them into spreadsheets can be prohibitive if not practically impossible.

Low power processors such as the ARM Cortex-M3, however, already offer an SWO (serial wire output) that supports quite sophisticated and flexible debug and monitoring capabilities, which tool suppliers can harness to enable embedded developers to directly correlate code execution with energy usage.

Simple development platforms can be created that permanently sample microcontroller power rail current consumption, convert it, and send it along with voltage and timing data via USB to a PC-based energy-to-code profiling tool. Courtesy of the ARM SWO pin, the tool can also retrieve program counter information from the CPU. Bringing these two streams of data together enables true ‘energy debugging’ to take place.

Provided that current measurements have a fairly high dynamic range, say from 0.1µA to 100mA, then it’s possible to monitor a very wide and practical range of microcontroller current consumption.  Once uploaded with the microcontroller object code, the energy profiling tool then has all the information resources it needs to accurately correlate energy consumption with code.

The energyAware Profiler tool from Energy Micro shows the relationship between current consumption, C/C++ code, and the energy used by a particular function. Clicking on a current peak, top right, reveals the associated code, bottom left.

The tool correlates the program-counter value with machine code, and because it is aware of the functions of the C/C++ source program, it can readily indicate how energy use changes as various functions run. So the idea of a tool that can highlight an energy-hungry code problem to a developer in real time comes to fruition. The developer watches a trace of power versus time, identifies a surprising peak, clicks on it, and is immediately shown the offending code.

The ability to identify and rectify energy drains in this way, at an early stage of prototype development, will certainly help reduce the overall energy consumption of the end product – and far from adding to the development time, it is likely to shorten it.

We would all be wise to consider the debug process of low power embedded systems development as becoming a 3-stage cycle from now on:  hardware debugging, software functionality debugging, and software energy debugging.

Microcontroller development tools need to evolve to enable designers to identify wasteful ‘energy bugs’ in software during the development cycle. Discovering energy-inefficient behavior that endangers battery lifetime during a product field trial is, after all, rather costly and really just a little bit too late!

Low Power Design: Energy Harvesting

Friday, March 25th, 2011 by Robert Cravotta

In an online course about the Fundamentals of Low Power Design I proposed a spectrum of six categories of applications that identify the different design considerations for low power design for embedded developers. The spectrum of low power applications I propose is:

1) Energy harvesting

2) Disposable or limited use

3) Replaceable energy storage

4) Mobile

5) Tethered with passive cooling

6) Tethered with active cooling

This article focuses on the characteristics that affect energy harvesting applications. I will publish future articles that will focus on the characteristics of the other categories.

Energy harvesting designs represent the extreme low end of the low power design spectrum. In an earlier article I identified some common forms of energy harvesting that are publicly available and the magnitude of energy (typically in the μW to mW range) that is usually available for harvesting.

Energy harvesting designs are ideal for tasks that take place in locations to which it is difficult to deliver power. Examples include remote sensors, such as might reside in a manufacturing building where the sheer quantity of devices makes regular battery replacement infeasible. Many of the sensors may also be in locations that are difficult or dangerous for an operator to reach. For this reason, energy harvesting systems usually run autonomously, and they spend the majority of their time in a sleep state. Energy harvesting designs often trade off computation capabilities to fit within a small energy budget because the source of energy is intermittent and/or not guaranteed on demand.

Energy harvesting systems consist of a number of subsystems that work together to provide energy to the electronics of the system. The energy harvester is the subsystem that interfaces with the energy source and converts it into usable and storable electricity. Common types of energy harvesters are able to extract energy from ambient light, vibration, thermal differentials, as well as ambient RF energy.

The rate of energy captured from the environment by the energy harvester may not be sufficient to operate the system directly; rather, the output of the energy harvester feeds into an energy storage and power management controller that conditions and stores the captured energy in an energy bladder, buffer, capacitor, or battery. Then, when the system is in an awake state, it draws energy from the storage module.

The asymmetry between the rate of collecting energy and the rate of consuming it means that the functions the system needs to perform can only be executed on a periodic basis that allows enough new energy to be captured and stored between operating cycles. Microcontrollers that offer low active power consumption, as well as the ability to switch quickly between on and off states, are key considerations for energy harvesting applications.
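
As a sketch of that budgeting exercise, the following C function computes the minimum interval between operating cycles so that consumption never outruns the harvester. The harvest rate, sleep current, and per-cycle energy are invented for illustration:

```c
/* The harvester delivers harvest_uW continuously, the system draws
   sleep_uW while asleep, and each operating cycle costs
   wake_energy_uJ. The surplus harvested while sleeping must pay for
   the next wake-up, which bounds how often the system may operate. */
double min_period_s(double harvest_uW, double sleep_uW,
                    double wake_energy_uJ)
{
    double surplus_uW = harvest_uW - sleep_uW; /* net charging rate */
    return wake_energy_uJ / surplus_uW;        /* µJ / µW = s      */
}
```

With 50 µW harvested, 5 µW of sleep drain, and 900 µJ per operating cycle, the system can afford to wake no more than once every 20 seconds.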

A consideration that makes energy harvesting designs different from the other categories in the low power spectrum is that the harvested energy must undergo a transformation to be usable by the electronics. This is in contrast to systems that can recharge their energy storage – these systems receive electricity directly in quantities that support operating the system and recharging the energy storage module.

If the available electricity ever becomes insufficient to operate the energy harvesting module, the module may not be able to capture and transform ambient energy even when there is enough energy in the environment. This operating condition means the decision of when and how the system turns on and off must include extra precautions against drawing too much energy during operation, or the system risks starving itself into an unrecoverable condition.

Energy harvesting applications are still an emerging application space. As costs continue to decrease and the efficiency of harvesting modules continues to improve, more applications will become worth pursuing – much as microcontrollers have been replacing mechanical controls within systems for the past few decades.

Boosting energy efficiency – Sleeping and waking

Friday, March 18th, 2011 by Oyvind Janbu

While using a 32-bit processor can enable a microcontroller to stay in a deep-sleep mode for longer, there is nevertheless some baseline power consumption which can significantly influence the overall energy budget. Historically, however, 32-bit processors have not been available with useful sub-µA standby modes. With the introduction of power efficient 32-bit architectures, standby options are now complementing the reduced processing and active time.

Despite the relatively low power consumption many microcontrollers exhibit in deep sleep, the functionality they provide in these modes is often very limited. Since applications often require features such as real time counters, power-on reset / brown-out detection, or UART reception to be enabled at all times, many microcontroller systems are prevented from ever entering deep sleep because such basic features are only available in an active run mode. Many microcontroller solutions also have limited SRAM and CPU state retention in sub-µA standby modes, if any at all. Other solutions need to turn off or duty-cycle their brown-out and power-on reset detectors in order to save power.

In the pursuit of energy efficiency, microcontrollers need to provide product designers with a choice of sleep modes offering the flexibility to scale basic resources, and thereby the power consumption, across several defined levels or energy modes. While energy modes constitute a coarse division of basic resources, additional fine-grained tuning of resources within each energy mode should also be possible by enabling or disabling individual peripheral functions.

There’s little point, though, in offering a microcontroller with impressively low sleep mode energy consumption if those gains are lost to the time it takes the microcontroller to wake up and enter run mode.

When a microcontroller goes from a deep sleep state, where the oscillators are disabled, to an active state, there is always a wake-up period, where the processor must wait for the oscillators to stabilize before starting to execute code.  Since no processing can be done during this period of time, the energy spent while waking up is wasted energy, and so reducing the wake-up time is important to reduce overall energy consumption.

Furthermore, microcontroller applications impose real time demands which often mean that the wake-up time must be kept to a minimum to enable the microcontroller to respond to an event within a set period of time.  Because the latency demanded by many applications is lower than the wake-up time of many existing microcontrollers, the device is often inhibited from going into deep sleep at all – not a very good solution for energy sensitive applications.

A beneficial solution is to use a very fast RC oscillator that wakes the CPU instantly and then optionally transfers the clock source to a crystal oscillator if needed. This meets real time demands and encourages run and sleep mode duty cycling. Although the RC oscillator is not as accurate as a crystal oscillator, it is sufficient as the CPU’s clock source during crystal start-up.

We know that getting back to sleep mode is key to saving energy. Therefore the CPU should preferably use a high clock frequency to solve its tasks more quickly and efficiently.  Even if the higher frequency at first appears to require more power, the advantage is a system that is able to return to low power modes in a fraction of the time.
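
The race-to-sleep argument can be put into numbers. In the simple C model below, active current has a fixed base component (regulators, leakage) plus a component proportional to clock frequency; the fixed base term is what makes finishing fast and sleeping sooner cheaper overall. All figures are invented:

```c
/* Charge (in µC) drawn from the supply to complete a task of a
   given cycle count at a given clock frequency. Active current is
   modeled as base_uA + uA_per_MHz * f. */
double charge_per_task_uC(double cycles, double f_MHz,
                          double base_uA, double uA_per_MHz)
{
    double t_s  = cycles / (f_MHz * 1e6);       /* seconds to finish */
    double i_uA = base_uA + uA_per_MHz * f_MHz; /* run-mode current  */
    return i_uA * t_s;                          /* µA * s = µC      */
}
```

For a 32,000-cycle task with a 500 µA base and 200 µA/MHz, running at 32 MHz costs about 6.9 µC against 10.4 µC at 4 MHz – the faster clock draws more instantaneous current yet consumes less charge per task, and frees the core to sleep sooner.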

Peripherals however might not need to run at the CPU’s clock frequency.  One solution to this conundrum is to pre-scale the clock to the core and peripherals, thereby ensuring the dynamic power consumption of the different parts is kept to a minimum.  If the peripherals can further operate without the supervision of the CPU, we realize that a flexible clocking system is a vital requirement for energy efficient microcontrollers.

The obvious way for microcontrollers to use less energy is to allow the CPU to stay asleep while the peripherals are active, and so the development of peripherals that can operate with minimum or no intervention from the CPU is another worthy consideration for microcontroller designers.  When peripherals look after themselves, the CPU can either solve other high level tasks or simply fall asleep, saving energy either way.

With advanced sequence programming, routines for operating peripherals previously controlled by the CPU can be handled by the peripherals themselves.  The use of a DMA controller provides a pragmatic approach to autonomous peripheral operation.  Helping to offload CPU workload to peripherals, a flexible DMA controller can effectively handle data transfers between memory and communication or data processing interfaces.

Of course there’s little point in using autonomous peripherals to relieve the burden of the CPU if they’re energy hungry.  Microcontroller makers also need to closely consider the energy consumption of peripherals such as serial communication interfaces, data encryption/decryption engines, display drivers and radio communication peripherals.  All peripherals must be efficiently implemented and optimized for energy consumption in order to fulfill the application’s need for a low system level energy consumption.

Taking the autonomy ideal a step further, the introduction of additional programmable interconnect structures into a microcontroller enable peripherals to talk to peripherals without the intervention of the CPU, thereby reducing energy consumption even further.  A typical example of a peripheral talking to another peripheral would be an ADC conversion periodically triggered by a timer. A flexible peripheral interconnect allows direct hardware interaction between such peripherals, solving the task while the CPU is in its deepest sleep state.
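
A sketch of that timer-to-ADC setup follows, using invented register and bit names (modeled here as plain variables so the fragment runs on a host; a real part would map them to fixed peripheral addresses):

```c
#include <stdint.h>

/* Host-side stand-ins for hypothetical peripheral registers -- the
   names and bit layouts are invented for illustration. */
static uint32_t TIMER_CTRL, TIMER_TOP, PRS_ROUTE, ADC_TRIGGER;

enum {
    TIMER_EN              = 1u << 0, /* start the timer            */
    TIMER_OVERFLOW_TO_PRS = 1u << 4, /* publish overflow on PRS    */
    PRS_CH0_TO_ADC        = 1u << 0, /* route channel 0 to the ADC */
    ADC_TRIG_PRS_CH0      = 1u << 0  /* ADC starts on channel 0    */
};

/* Route a timer overflow through the peripheral interconnect to the
   ADC so conversions start periodically with no CPU involvement
   after this one-time setup -- the core can stay in deep sleep. */
void setup_timer_triggered_adc(uint32_t sample_period_ticks)
{
    TIMER_TOP    = sample_period_ticks;
    TIMER_CTRL  |= TIMER_OVERFLOW_TO_PRS | TIMER_EN;
    PRS_ROUTE   |= PRS_CH0_TO_ADC;
    ADC_TRIGGER  = ADC_TRIG_PRS_CH0;
}
```

After the one-time configuration, the ADC samples on every timer overflow with the CPU asleep; a DMA channel could likewise be wired to drain the results.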

The third part of this three part article explores the tools and techniques available for energy debugging.

Low Power Design Course

Tuesday, March 8th, 2011 by Robert Cravotta

In last week’s Question of the Week I asked “what makes an embedded design low power?” I asked if there was a way to describe low power design such that it accommodates moving the threshold definitions as technology continues to improve. The reader responses encompassed a common theme – that the magnitude of the system’s power consumption is not what defines it as a low power design. Rather, you must take into account the end-use requirements and compare the final implementation with other implementations for analogous functions to confirm that the “low power” implementation completes the function at less cost and/or less energy.

The impetus for this question comes from a new information center about the basics of design that we have added to Embedded Insights to supplement an online course that I recently put together and presented. The course is hosted at EE Times as part of their Fundamentals feature, and it is called Fundamentals of Low Power Design using the Intel Atom Processor.

A challenge in creating the course was to create an approach that imparted useful information to every type of embedded developer – not just the people that were interested in the target processor (the Intel Atom in this case). I developed a spectrum of low power design that expands the sweet spot concept that I have proposed for processor architectures. In this case, the spectrum identifies six (6) different target implementations that share a common goal and available techniques for low power/energy designs – but they may define their thresholds and optimization approaches differently. The categories identified in the spectrum are energy harvesting, disposable, replaceable, mobile, tethered with passive cooling, and tethered with active cooling. I briefly describe each of these categories in the course, and I will follow-up with articles that focus on each one.

The Basics Information Center aggregates and organizes information on the basics for design topics. The inaugural page includes a set of low power optimization datasheets for a number of processor architectures that I researched when developing the course – however, including them in the final version of the course material would have disrupted the flow of the sponsored material, so we are providing them as supplemental material. The concepts in the datasheets are a work in progress, so we welcome any comments that help us tweak and fill in the content so that the low power spectrum and datasheets become a more useful tool for describing embedded processors. The end goal is to take what we learn from this effort and incorporate it into a parts search engine for the Embedded Processing Directory.

The datasheets currently reflect the organization of the course material; we will probably need to change them to make the information more generally accessible. Please share any ideas or specific information that we can use to refine the datasheets.

Green with envy: Why power debugging is changing the way we develop code

Friday, March 4th, 2011 by Shawn Prestridge

As time passes, consumers demand more from their mobile devices in terms of content and functionality, and this demand has eroded the ability of battery technology to keep up with our insatiable appetite for capability.  The notion of software power debugging is assisting the development engineer to create more ecologically sound devices based on the ability to see how much power is consumed by the device and correlating this to the source code.  By doing statistical profiling of the code with respect to power, an engineer has the ability to understand the impact of their design decisions on the mobile devices that they create.  Armed with this information, the engineer will be able to make more informed decisions about how the code is structured to both maximize battery life and minimize the impact on our planet’s natural resources.

Anyone who has a modern smartphone can attest to their love/hate relationship with it – they love the productivity boost it provides, the GPS functionality that helps them find their destination, and the ability to stay connected to all aspects of their lives, be it via text messaging, e-mail, or social networking. But all of this functionality comes at a great cost: it is limited by the capacity of the battery and can even have a deleterious impact on the life of the battery, as the battery can only withstand a certain number of charge cycles. There are two ways to approach this problem: either increase the energy density of the battery so that it can hold a greater mAh rating for the same size and weight, or pay special attention to eliminating extraneous power usage wherever possible. The problem with the former is that advances in energy storage technology have been far outstripped by the power requirements of the devices they serve. Thus, we are left with the choice of minimizing the amount of power consumed by the device.

Efforts to reduce the power footprint of a device have been mostly ad hoc or out of the control of the device’s development engineers – for example, improvements in wafer technology allow transistors to be spaced closer together, cutting power consumption via reduced capacitances. However, power debugging gives a modern development engineer the ability to see how their code decisions impact the overall power consumption of the system by tying measurements of the power supplied to the system to the program counter of the microcontroller. Power debugging can reveal potential problems before production hardware is manufactured. For example, the engineer may have a peripheral they thought was deactivated in their code but that is in reality still active and consuming power. By looking at the power graph, the engineer gets the contextual clue that the device is consuming more power than it should, which warrants an inspection of which devices in the system are active and consuming energy.

Another example of how power debugging can assist an engineer is in examining the duty cycles of the microcontroller. A common design paradigm in battery-powered electronics is to wake up from some sort of power-saving sleep mode, do the required processing, and then return to the hibernation state. This is relatively simple to code, but the engineer may not be aware that an external stimulus is rousing the microcontroller from sleep mode prematurely, causing the power consumption to be higher than it should be. It is also possible that an external signal occurs more often than the original design specification planned for. While this case can be traced with a very judicious use of breakpoints, the problem may persist for quite some time before the behavior is noticed. A timeline view of the power consumption can expose this latent defect early because it makes these current spikes visible and lets the engineer double-click a spike to see where in the code the microcontroller was executing when it occurred, providing the information necessary to divine what is driving the power requirements so high.

Power debugging can also provide statistical information about the power profile of a particular combination of application and board. This can be used to baseline the power consumption, so that if the engineer adds or changes a section of code and then sees the power differ drastically from the baseline, the engineer knows that something in the section just added or modified caused the change and can investigate what is happening and how to mitigate it. Moreover, an engineer can swap microcontroller devices to see whether the power consumption of one device is lower or higher than that of another, giving a like-for-like comparison between the two. This allows the engineer to make well-founded decisions about how the system is constructed with respect to power consumption.

It is evident that our consumer society will begin to rely increasingly on mobile devices which will precipitate demand for more capability and – correspondingly – more power from the batteries which drive these devices.  It behooves an engineer to make their design last as long as possible on a single battery charge, so particular attention must be paid to how the design is constructed – both in hardware and software – to maximize the efficiency of the device.  Power debugging gives the engineer the tools necessary to achieve that goal of making a more ecologically-friendly device that makes every electron count.

What makes an embedded design low power?

Wednesday, March 2nd, 2011 by Robert Cravotta

It seems that nearly everything these days is marketed as a low power device/system. I see it so much in marketing material and in so many unsubstantiated contexts that it has become one of those phrases that become invisible on the page or screen I am reading. It is one of those terms that lack a set-in-concrete context – rather, it is often used as an indication of the intent of a device’s designers. Is it reasonable to declare a milliwatt-range device as low power when there are μW devices in existence? Is it ever reasonable to declare a multi-watt system as low power?

The fact that low power thresholds are moving targets makes it more difficult to declare a system as low power – meaning that what is considered low power today soon becomes normal and the threshold of what constitutes low power necessarily shifts.

I recently was asked to build an online course about low power design for a processor that consumes power on the order of Watts. When I think of low power designs, I usually think of power consumption that is several orders of magnitude lower than that. While low power is not defined as a specific threshold, it can be addressed with appropriate techniques and strategies based on the context of the end design. I came up with an energy consumption spectrum that consists of six major categories. Even though the specific priorities for low power are different for each category, the techniques to address those priorities are similar and combined in different mixes.

We will be rolling out a new approach (that will eventually become fully integrated within the Embedded Processing Directory) for describing and highlighting low power features incorporated within microprocessors (including microcontrollers and DSPs) to enable developers to more easily identify those processors that will enable them to maximize the impact of the type of low power design they need.

What do you think is necessary to consider an embedded design as low power? Are there major contexts for grouping techniques and strategies for a set of application spaces? For example, energy harvesting applications are different from battery powered devices, which are different again from devices that are plugged into a wall socket. In some cases, a design may need to complete a function within a maximum amount of energy, while another may need to limit the amount of heat that is generated while performing a function. What are the different ways to consider a design as a low power one?

Boosting energy efficiency – How microcontrollers need to evolve

Monday, February 21st, 2011 by Oyvind Janbu

Whatever the end product, all designers have specific tasks to solve and their solutions will be influenced by the resources that are available and the constraints of cost, time, physical size and technology choice.  At the heart of many a good product, the ubiquitous microcontroller often has a crucial influence on system power design and particularly in a brave new world that’s concerned with energy efficiency, users are entitled to demand a greater service from them.  The way microcontrollers are built and operate needs to evolve dramatically if it is to achieve the best possible performance from limited battery resources.

Bearing in mind that the cost of even a typical coin cell battery can be relatively high compared to that of a microcontroller, there are obvious advantages in designing a system that offers the best possible energy efficiency. First, it can enable designers to reduce the cost and size of the battery. Secondly, it can enable designers to significantly extend the lifetime of a battery, consequently reducing the frequency of battery replacement and, for certain products, the frequency, cost, and ‘carbon footprint’ associated with product maintenance call-outs.

Microcontrollers, like many other breeds of electronic components, are these days very keen to stress their ‘ultra low power’ credentials, which is perfectly fine and appropriate where a device’s dynamic performance merits it; however, with a finite amount of charge available from a battery cell, it is how a microcontroller uses energy (i.e. power over the full extent of time) that needs to be more closely borne in mind.

Microcontroller applications improve their energy efficiency by operating in several states – most notably active and sleep modes that consume different amounts of energy.

Product designers need to minimize the product of current and time over all phases of microcontroller operation, throughout both active and sleep periods (Figure 1). Not only does every microamp count, but so does every microsecond that every function takes.  This relationship between current and time makes the comparison of 8- and 16-bit microcontrollers with 32-bit microcontrollers less straightforward. Considering only their current consumption in a deep-sleep mode, it is easy to understand why 8-bit and 16-bit microcontrollers have been attractive in energy-sensitive applications, where microcontroller duty cycles can be very low.  A microcontroller may, after all, stay in a deep sleep state for perhaps 99% of the time.

However, if product designers are concerned with every microamp and every microsecond each function takes, then a 32-bit microcontroller should be considered for even the ‘simplest’ of product designs.  The higher performance of 32-bit processors enables the microcontroller to finish tasks quicker, so it can spend more time in low-power sleep modes, which lowers overall energy consumption.  32-bit microcontrollers are therefore not necessarily ‘application overkill’.

More than that though, even simple operations on 8-bit or 16-bit variables can need the services of a 32-bit processor if system energy usage goals are to be achieved.  By harnessing the full array of low-power design techniques available today, 32-bit cores can offer a variety of low-power modes with rapid wake-up times that are on par with 8-bit microcontrollers.

There is a common misconception that switching from an 8-bit microcontroller to a 32-bit microcontroller will result in bigger code size, which directly affects the cost and power consumption of end products.  This stems from the impression that 8-bit microcontrollers use 8-bit instructions and 32-bit microcontrollers use 32-bit instructions.  In reality, many instructions in 8-bit microcontrollers are 16 or 24 bits long.

The ARM Cortex-M3 and Cortex-M0 processors are based on Thumb-2 technology, which provides excellent code density.  Thumb-2 microcontrollers have both 16-bit and 32-bit instructions, with the 32-bit instructions a functional superset of the 16-bit ones.  Typical output from a C compiler is 90% 16-bit instructions; a 32-bit instruction is used only when the operation cannot be performed with a 16-bit one.  As a result, most of the instructions in an ARM Cortex microcontroller program are 16 bits, smaller than many of the instructions in 8-bit microcontrollers, so a 32-bit processor typically produces smaller compiled code than 8- or 16-bit microcontrollers do.

The second part in this three-part series looks more deeply at the issues around microcontroller sleep modes.

Energy Management in Power Architecture

Tuesday, December 21st, 2010 by Fawzi Behmann

Embedded computing applications, such as printers, storage, networking infrastructure and data center equipment, continue to face the challenge of delivering increased performance within a constrained energy budget. In February 2009, the Power Architecture added power management features in the Power ISA v.2.06 (the most recent specification). The largest gains in performance within a constrained energy budget come from creating systems that can pace the workload with energy consumption in an intelligent and efficient manner.

In general, the work performed in embedded computing applications is done in cycles – a combination of active states, management states, and dormant states – and different areas of the system may have a higher demand for energy than others at different points in the workflow. It becomes important for system architects to model the system from an energy consumption perspective and to apply energy saving techniques to the building blocks (CPUs, ASICs and I/Os) of their computing system.

The processor is the heart of the system.  There will be times when high frequencies are required, but these will likely be very short cycles in the workflow. The vast majority of the time, the processor is being asked to perform relatively low-performance tasks. Reducing the processor’s clock frequency during these management periods saves energy, which can in turn be used by the ASICs or I/Os that are working harder. Throughout the workflow, energy demands vary among system components. Some devices need more power than others, and the system needs to tightly control and manage the power sharing. It is also important that software save the last known states in non-volatile memory so that the processor can retrieve those states upon entering a more active state.

In many applications, high computing performance during periods of activity must be balanced against low power consumption when there is less workload. Microprocessor cores typically operate at higher frequencies than the rest of the system, so power consumption can be best minimized by controlling the core frequency.  Software can dynamically increase or decrease the core’s clock frequency while the rest of the system continues operating at its previous frequency.

The Power ISA v.2.06 includes specifications for hypervisor and virtualization support on single- and multi-core processor implementations. The Power Architecture also supports dynamic energy management, some of it enabled internally in the core. For example, it is common for execution units in the processor pipeline to be power-gated when idle. Furthermore, Power Architecture cores offer software-selectable power-saving modes. These power-saving modes reduce the function available in other areas, such as limiting cache and bus-snooping operations, and some modes turn off all functional units except for interrupts. These techniques are effective because they reduce switching on the chip and give operating systems a means to exercise dynamic power management.

Sometimes only the application software running on the processor has the knowledge required to decide how power can be managed without affecting performance. The Power ISA v.2.06 added the wait instruction to provide application software with a means to optimize power by letting it initiate power savings when it is known that there is no work to do until the next interrupt. This instruction enables power savings through user-mode code, and it is well matched to the requirements of the LTE market segment, which requires that the total SoC power be managed effectively. The combination of CPU power-saving modes, the wait instruction, and the ability to wake on an interrupt has been demonstrated to achieve deep-sleep power savings with wake-up on external events.

Considerations for 4-bit processing

Friday, December 10th, 2010 by Robert Cravotta

I recently posed a question of the week about who is using 4-bit processors and for what types of systems. At the same time, I contacted some of the companies that still offer 4-bit processors. In addition to the three companies that I identified as still offering 4-bit processors (Atmel, EM Microelectronics, and Epson), a few readers mentioned parts from NEC Electronics, Renesas, Samsung, and National. NEC Electronics and Renesas have since merged, and Renesas Electronics America now sells the combined set of those companies’ processor offerings.

These companies do not sell their 4-bit processors to the general developer community in the same way that 8-, 16-, and 32-bit processors are sold. Atmel and Epson told me their 4-bit lines support legacy systems; Epson’s lines most notably support timepiece designs. I was able to speak with EM Microelectronics at length about their 4-bit processors and gained the following insights.

Programming 4-bit processors is performed in assembly language only. In fact, the development tools cost in the range of $10,000, and the company loans the tools to its developer clients rather than selling them. 4-bit processors are made for dedicated high-volume products, such as the Gillette Fusion ProGlide. The 4-bit processors from EM Microelectronics are available only as ROM-based devices, and this somewhat limits the number of designs the company will support because the process to verify the mask sets is labor intensive. The company finds the designers that can make use of these processors – not the other way around. The company approaches a developer and works to demonstrate how the 4-bit device can provide differentiation to the developer’s design and end product.

The sweet spot for 4-bit processor designs is single-battery applications that have a 10-year lifespan and where the device is active perhaps 1% of that time and in standby the other 99%. An interesting differentiator for 4-bit processors is that they can operate at 0.6V. This is a substantial advantage over the lowest-power 8-bit processors, which are still fighting over the 0.9 to 1.8V space. Also, 4-bit processors have been supporting energy harvesting designs since 1990, whereas 8- and 16-bit processor vendors have only within the last year or so begun to offer development and demonstration kits for energy harvesting. These last two revelations strengthen my claim in “How low can 32-bit processors go” that smaller processors will reach lower price and energy thresholds years before the larger processors can feasibly support those same thresholds – and that time advantage is huge.

I speculate that there may be other 4-bit designs out there, but the people using them do not want anyone else to know about them. Think about it, would you want your competitor to know you were able to simplify the problem set to fit on such a small device? Let them think you are using a larger, more expensive (cost and energy) device and wonder how you are doing it.