<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: What is driving lower data center energy use?</title>
	<atom:link href="http://www.embeddedinsights.com/channels/2011/08/03/what-is-driving-lower-data-center-energy-use/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.embeddedinsights.com/channels/2011/08/03/what-is-driving-lower-data-center-energy-use/</link>
	<description>Shedding Light on the Hidden World of Embedded Systems</description>
	<lastBuildDate>Mon, 28 Jul 2014 16:18:37 -0400</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0</generator>
	<item>
		<title>By: P.M. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2011/08/03/what-is-driving-lower-data-center-energy-use/#comment-7384</link>
		<dc:creator>P.M. @ LI</dc:creator>
		<pubDate>Fri, 05 Aug 2011 16:52:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=614#comment-7384</guid>
		<description>A significant improvement in efficiency has come from hard disk drive manufacturers prioritizing power management in their products. Some years ago large data center users were reporting that they had hit capacity limits purely because the local power grid could not support any growth in consumption. The HDD manufacturers responded with a large investment in data caching algorithms, power management within the drive, vastly increased storage capacity per unit and faster host interfaces.</description>
		<content:encoded><![CDATA[<p>A significant improvement in efficiency has come from hard disk drive manufacturers prioritizing power management in their products. Some years ago large data center users were reporting that they had hit capacity limits purely because the local power grid could not support any growth in consumption. The HDD manufacturers responded with a large investment in data caching algorithms, power management within the drive, vastly increased storage capacity per unit and faster host interfaces.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: L.R. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2011/08/03/what-is-driving-lower-data-center-energy-use/#comment-7378</link>
		<dc:creator>L.R. @ LI</dc:creator>
		<pubDate>Fri, 05 Aug 2011 04:54:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=614#comment-7378</guid>
		<description>Although I fail to see how this discussion relates to embedded or real-time technology, I will add my 2 cents:
The reduction in energy consumption is, in my opinion, due to technology, ideology and, most importantly, economics.
Electricity is one of the largest OpEx lines for a data center, so with increased demand and competition, downward price pressure motivated operators to optimize and save on energy costs.
Environmental ideology calling for CO2 emission reduction, which has expanded into the mainstream, has added significantly to the motivation.
Technologies such as virtualization and storage area networks enabled a significant improvement in efficiency - instead of dedicating entire machines to a service, hardware resources can now be allocated based on actual load and usage, while unused hardware can be turned off.
Processor technology had to overcome a barrier at about the same time - Intel could no longer simply increase the clock of its new processors - heat density had reached a critical level, which required a methodical redesign of the logic circuits and in turn resulted in a leap in MIPS/watt efficiency.
These and undoubtedly other advances of the past decade have all contributed to the trend, but I doubt the economic downturn had much impact at all - this market is far too conservative to change so quickly.</description>
		<content:encoded><![CDATA[<p>Although I fail to see how this discussion relates to embedded or real-time technology, I will add my 2 cents:<br />
The reduction in energy consumption is, in my opinion, due to technology, ideology and, most importantly, economics.<br />
Electricity is one of the largest OpEx lines for a data center, so with increased demand and competition, downward price pressure motivated operators to optimize and save on energy costs.<br />
Environmental ideology calling for CO2 emission reduction, which has expanded into the mainstream, has added significantly to the motivation.<br />
Technologies such as virtualization and storage area networks enabled a significant improvement in efficiency &#8211; instead of dedicating entire machines to a service, hardware resources can now be allocated based on actual load and usage, while unused hardware can be turned off.<br />
Processor technology had to overcome a barrier at about the same time &#8211; Intel could no longer simply increase the clock of its new processors &#8211; heat density had reached a critical level, which required a methodical redesign of the logic circuits and in turn resulted in a leap in MIPS/watt efficiency.<br />
These and undoubtedly other advances of the past decade have all contributed to the trend, but I doubt the economic downturn had much impact at all &#8211; this market is far too conservative to change so quickly.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Andy T</title>
		<link>http://www.embeddedinsights.com/channels/2011/08/03/what-is-driving-lower-data-center-energy-use/#comment-7366</link>
		<dc:creator>Andy T</dc:creator>
		<pubDate>Wed, 03 Aug 2011 21:18:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=614#comment-7366</guid>
		<description>I think the doc missed it completely. Use of electricity by data centers has been capped by the utilities since 2007/8 because the grid can&#039;t support any more power going into supercomputer and server farm facilities. That&#039;s the main reason.</description>
		<content:encoded><![CDATA[<p>I think the doc missed it completely. Use of electricity by data centers has been capped by the utilities since 2007/8 because the grid can&#8217;t support any more power going into supercomputer and server farm facilities. That&#8217;s the main reason.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
