Energy Management in Power Architecture

Tuesday, December 21st, 2010 by Fawzi Behmann

Embedded computing applications, such as printers, storage, networking infrastructure, and data center equipment, continue to face the challenge of delivering increased performance within a constrained energy budget. In February 2009, the Power Architecture added power-management features in the Power ISA v.2.06 (the most recent specification). The largest gains in performance within a constrained energy budget come from creating systems that can pace the workload against energy consumption in an intelligent and efficient manner.

In general, the work performed in embedded computing applications is done in cycles, a combination of active states, management states, and dormant states, and different areas of the system may demand more energy than others at different points in the workflow. It therefore becomes important for system architects to model the system from an energy-consumption perspective and to apply energy-saving techniques to the building blocks (CPUs, ASICs, and I/Os) of their computing system.
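To make that modeling step concrete, here is a minimal, illustrative sketch in C of a per-component energy budget. The component names, power figures, and duty cycles are assumptions chosen purely for illustration, not measured data.

/*
 * Illustrative sketch only: a back-of-the-envelope energy model for the
 * building blocks of a system (CPU, ASIC, I/O). All numbers are assumed.
 */
#include <stdio.h>
#include <stddef.h>

struct component {
    const char *name;
    double active_mw;     /* power while the block does useful work */
    double idle_mw;       /* power while the block is idle or clock-gated */
    double active_ratio;  /* fraction of the workflow spent active (0..1) */
};

static double avg_power_mw(const struct component *c)
{
    return c->active_mw * c->active_ratio +
           c->idle_mw * (1.0 - c->active_ratio);
}

int main(void)
{
    /* Hypothetical figures for a printer-class workload. */
    struct component blocks[] = {
        { "CPU core", 1500.0, 200.0, 0.10 },
        { "ASIC",      800.0, 100.0, 0.60 },
        { "I/O",       300.0,  50.0, 0.30 },
    };
    double total = 0.0;

    for (size_t i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++) {
        double p = avg_power_mw(&blocks[i]);
        printf("%-8s %7.1f mW average\n", blocks[i].name, p);
        total += p;
    }
    printf("Total    %7.1f mW average\n", total);
    return 0;
}

Even a model this crude makes it visible which block dominates the energy budget over a full workflow cycle, and therefore where energy-saving techniques will pay off most.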

The processor is the heart of the system. There will be times when high frequencies are required, but these will likely be very short cycles in the workflow. The vast majority of the time, the processor is being asked to perform relatively low-performance tasks. Reducing the processor's clock frequency during these management periods saves energy, which can in turn be used by an ASIC or I/O block that is working harder. Throughout the workflow, energy demands vary among system components: some devices need more power than others, and the system needs to tightly control and manage how power is shared. It is also important that software save its last known state in non-volatile memory so that the processor can retrieve that state upon entering a more active state.
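A minimal sketch of that save-and-restore pattern follows. The nvm_write, nvm_read, and set_core_frequency helpers, and the layout of the saved state, are assumptions standing in for whatever a real board support package would provide.

/*
 * Sketch of saving known state to non-volatile memory before a management
 * period and restoring it when activity resumes. Helper functions and the
 * NVM offset are hypothetical.
 */
#include <stdint.h>

struct saved_state {
    uint32_t job_id;        /* work item in progress */
    uint32_t progress;      /* how far that job had advanced */
    uint32_t core_freq_mhz; /* frequency to restore on wake */
};

/* Hypothetical board-support functions; the signatures are assumptions. */
extern void nvm_write(uint32_t offset, const void *buf, uint32_t len);
extern void nvm_read(uint32_t offset, void *buf, uint32_t len);
extern void set_core_frequency(uint32_t mhz);

#define STATE_NVM_OFFSET 0x0000u

void enter_management_period(const struct saved_state *st, uint32_t low_mhz)
{
    /* Preserve the last known state so it survives a deep power-down. */
    nvm_write(STATE_NVM_OFFSET, st, sizeof(*st));
    /* Drop the core clock; the ASIC and I/O keep their own clocks. */
    set_core_frequency(low_mhz);
}

void resume_active_period(struct saved_state *st)
{
    /* Recover the saved context and return to the previous frequency. */
    nvm_read(STATE_NVM_OFFSET, st, sizeof(*st));
    set_core_frequency(st->core_freq_mhz);
}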

In many applications, high computing performance during periods of activity must be balanced with low power consumption when there is less workload. Microprocessor cores typically operate at higher frequencies than the rest of the system, so power consumption can best be minimized by controlling core frequency. Software can dynamically increase or decrease the core's clock frequency while the rest of the system continues operating at its previous frequency.
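As a sketch of what that software control might look like, the fragment below scales only the core clock through a hypothetical memory-mapped divider register (CORE_CLK_DIV_REG, at an assumed address), leaving bus and peripheral clocks at their previous settings.

/*
 * Sketch of software-driven core frequency scaling. The register address,
 * divider encoding, and thresholds are illustrative assumptions.
 */
#include <stdint.h>

#define CORE_CLK_DIV_REG  ((volatile uint32_t *)0xFF000010u) /* assumed address */

enum core_speed { SPEED_FULL = 1, SPEED_HALF = 2, SPEED_QUARTER = 4 };

static void set_core_divider(enum core_speed div)
{
    *CORE_CLK_DIV_REG = (uint32_t)div;  /* core clock = PLL output / div */
}

/* Pick a core speed from the depth of the pending-work queue. */
void scale_core_to_load(unsigned int pending_jobs)
{
    if (pending_jobs > 8)
        set_core_divider(SPEED_FULL);
    else if (pending_jobs > 2)
        set_core_divider(SPEED_HALF);
    else
        set_core_divider(SPEED_QUARTER);
}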

The Power ISA v.2.06 includes specifications for hypervisor and virtualization support on single- and multi-core processor implementations. The Power Architecture also includes support for dynamic energy management, some of which is enabled internally in the core. For example, it is common for execution units in the processor pipeline to be power-gated when idle. Furthermore, Power Architecture cores offer software-selectable power-saving modes. These modes reduce the functionality available in other areas, such as limiting cache and bus-snooping operations, and some modes turn off all functional units except interrupt handling. These techniques are effective because they reduce switching on the chip and give operating systems a means to exercise dynamic power management.
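Conceptually, an operating system's idle handler might choose among such modes as sketched below. The mode names and enter_*() helpers are placeholders rather than actual Power Architecture register interfaces, since the real mode-control mechanism is implementation-specific.

/*
 * Conceptual sketch of an OS idle handler selecting a software-selectable
 * power-saving mode. All functions here are hypothetical placeholders.
 */
extern unsigned int next_timer_event_us(void);  /* assumed OS service */
extern void enter_doze(void);   /* core idles; caches and snooping stay live */
extern void enter_nap(void);    /* snooping limited; most clocks stopped */
extern void enter_sleep(void);  /* everything off except interrupt wake-up */

void os_idle(void)
{
    unsigned int idle_us = next_timer_event_us();

    /* Deeper modes save more power but cost more wake-up latency,
     * so pick a mode that fits the expected idle window. */
    if (idle_us < 100)
        enter_doze();
    else if (idle_us < 10000)
        enter_nap();
    else
        enter_sleep();
    /* Execution resumes here after the wake-up interrupt is taken. */
}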

Sometimes only the application software running on the processor has the knowledge required to decide how power can be managed without affecting performance. The Power ISA v.2.06 added the wait instruction to give application software a means to optimize power: software can initiate power savings when it knows there is no work to do until the next interrupt. This instruction enables power savings from user-mode code, and it is well matched to the requirements of the LTE market segment, which requires that total SoC power be managed effectively. The combination of CPU power-saving modes, the wait instruction, and the ability to wake on an interrupt has been demonstrated to achieve deep-sleep power savings with wake-up on external events.
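A minimal user-mode sketch of that pattern follows, assuming a toolchain and CPU target that accept the wait mnemonic; have_work() and process() are hypothetical placeholders for the application's work queue.

/*
 * User-mode sketch: when the work queue is empty, issue the Power ISA
 * v.2.06 "wait" instruction so the core stops until the next interrupt.
 */
#include <stdbool.h>

extern bool have_work(void);   /* hypothetical: checks the work queue */
extern void process(void);     /* hypothetical: handles one work item */

void worker_loop(void)
{
    for (;;) {
        while (have_work())
            process();

        /* No work queued: pause until an interrupt (for example, the
         * arrival of new data) wakes the core, then re-check the queue. */
        __asm__ volatile("wait" ::: "memory");
    }
}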