Entries Tagged ‘Low Price’

Extreme Processing: New Thresholds of Small

Friday, May 21st, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

While the recent stories about the DNA-based robot and the synthetic organism describe techniques that are not available to current embedded developers, I think they point out the kind of scale that future embedded designs may encompass. In short, the stories relate to building machines that designers can program to perform specific tasks at the molecular or cellular level. Before I relate them to this series, let me offer a quick summary of the two announcements.

The synthetic organism is a synthetic cell that its creators at the J. Craig Venter Institute claim is completely controlled by man-made genetic instructions. The new bacterium is solely a demonstration project that tests a technique that may be applied to other bacteria to accomplish specific functions, such as developing microbes that help make gasoline. The bacterium’s genetic code began as a digital computer file, with more than one million base pairs of DNA, which was sent to Blue Heron Bio, a DNA synthesis company, where the file was transformed into hundreds of small pieces of chemical DNA. Yeast and other bacteria were used to assemble the DNA strips into the complete genome, which was transplanted into an emptied cell. The team claims that the cell can reproduce itself.


There are two types of DNA-based robots that were announced recently. Each is a DNA walker, also referred to as a molecular spider, that moves along a flat surface made of folded DNA, known as DNA origami, binding and unbinding with the surface to move around. One of the walkers is able to “follow” a path, and there is a video of the route the walker took to get from one point to another. The other type of walker is controlled by single strands of DNA to collect nanoparticles.

These two announcements relate to this series both from a size scale perspective and to our current chapter about energy harvesting. The synthetic organism article does not explicitly discuss how the bacterium obtains energy from the environment, but the molecular robot article hints at how the robots harvest energy from the environment.

“The spider is fueled by the chemical interactions its single-stranded DNA “legs” have with the origami surface. In order to take a “step,” the legs first cleave a DNA strand on the surface, weakening its interaction with that part of the origami surface. This encourages the spider to move forward, pulled towards the intact surface, where its interactions are stronger. When the spider binds to a part of the surface that it is unable to cleave, it stops.”

Based on this description, the “programming” is built into the environment, and the actual execution of the program is subject to random variability in the positioning of the molecular material on the surface. Additionally, the energy that enables the robot to move is also embedded in the surface material. This setup is analogous to designing a set of tubes and ruts for water to follow rather than actually programming the robot to make decisions. When our hypothetical water reaches a gravity minimum, it stops, in a similar fashion to the robot. Interestingly though, in the video, the robot does not actually stop at the end point; it jumps out of the target circle just before the video ends.
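To make the “program in the environment” idea concrete, here is a toy simulation I put together. It is not a model of the actual DNA chemistry; the track layout, the ‘F’ (fuel) and ‘S’ (stop) site labels, and the walker rules are all hypothetical, chosen only to show how the surface itself can encode both the route and the halting condition.

/* Toy 1-D model of the "program in the environment" idea: the track itself
 * encodes the behavior. Sites are either cleavable fuel ('F') or an
 * uncleavable stop site ('S'); the walker consumes fuel as it moves and
 * halts at the first site it cannot cleave. Purely illustrative. */
#include <stdio.h>

int main(void)
{
    /* The "program": a path of fuel sites ending at a stop site. */
    char track[] = "FFFFFFFFSFFF";
    int pos = 0;

    while (track[pos] == 'F') {   /* the walker can only cleave fuel sites */
        track[pos] = '.';         /* cleaving weakens the bond behind it... */
        pos++;                    /* ...so it is pulled toward intact fuel ahead */
    }
    printf("walker halted at site %d (marked '%c')\n", pos, track[pos]);
    return 0;
}

Changing the string changes where the walker goes and where it stops, which is the point of the analogy: the “decision making” lives in the surface, not in the walker.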

I’m not trying to be too critical here; this is exciting stuff. I will try to get more information about the energy and programming models for these cells and robots. If you would like to participate in a guest post, please contact me at Embedded Insights.

Extreme Processing Thresholds: Low Price

Friday, March 26th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Exploring processing thresholds is a tricky proposition. There is a certain amount of marketing specmanship when a vendor releases a product that extends some limit of a processing option – say price, power, performance, or integration. It is helpful to understand how the semiconductor vendor is able to meet the new threshold so you can better judge how those trade-offs will or will not affect any design in which you might consider that new part.

To lead off this series, I am looking at processors that cross new low price thresholds because there have been a handful of announcements for such parts in the past few months. Texas Instruments’ 16-bit MSP430 parts represent the lowest publicly priced parts of the group, starting at $0.25. Moving up the processing scale, NXP’s 32-bit Cortex-M0 processors start at $0.65. Rounding out the top end of this batch of value-priced processors are STMicroelectronics’ 32-bit Cortex-M3 processors, which start at $0.85.

In looking at these announcements, be aware that the pricing information is not an apples-to-apples comparison. While all of the parts in the announced processor families can address a range of application spaces and overlap with each other, each of these specific announcements is significant to a different application space. What is most relevant is that each processor potentially crosses a cost threshold for a given level of processing capacity: existing designs that use a processor at the same price point, but with less capability, can now consider incorporating new features with a larger processor than was previously available at that price. The other relevant opportunity is for applications that did not use processors before because they cost too much; these can now economically implement a function with a processor.

When looking at these types of announcements, there are a few questions you might want answered. For example, what volume of parts must you purchase to get that price? The Cortex-M0 and -M3 pricing is for 10,000 units. This is a common price point for many processor announcements, but you should never assume that all announced pricing is at that level. For example, the MSP430 announcement pricing is for 100,000 units. The announced 1,000-unit pricing for the MSP430G2001 is $0.34. To get an idea of how much volume purchasing can drop the price, VC Kumar, MSP430 MCU product marketing at Texas Instruments, shares that the pricing for the G2001 part drops to around $0.20 at 1,000,000 units. Fanie Duvenhage, Director of Product Marketing/Apps/Architecture for the Security, Microcontroller & Technology Development Division at Microchip, points out that for around the past five years, very high-volume, small microcontrollers have been available at unit prices in the $0.10 to $0.15 range. So there is a wide range of processing options at a variety of price points.
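As a back-of-the-envelope aid, the sketch below shows one way to fold published volume price breaks into a bill-of-materials estimate. The quantities and prices are just the MSP430G2001 figures quoted above (1,000, 100,000, and roughly 1,000,000 units); treat the table as an illustration, not a current price list.

#include <stddef.h>
#include <stdio.h>

/* One published volume price break: "this quantity or more costs this much per unit". */
struct price_break { unsigned long min_qty; double unit_price; };

/* Figures quoted in this post for the MSP430G2001, highest volume first. */
static const struct price_break g2001_breaks[] = {
    { 1000000UL, 0.20 },   /* approximate 1,000,000-unit price */
    {  100000UL, 0.25 },   /* announced 100,000-unit price */
    {    1000UL, 0.34 },   /* announced 1,000-unit price */
};

static double unit_price(unsigned long qty)
{
    for (size_t i = 0; i < sizeof g2001_breaks / sizeof g2001_breaks[0]; i++)
        if (qty >= g2001_breaks[i].min_qty)
            return g2001_breaks[i].unit_price;
    return -1.0;           /* quantity is below the smallest published break */
}

int main(void)
{
    unsigned long qty = 250000UL;   /* hypothetical production run */
    printf("%lu units: $%.2f each, $%.2f for the build\n",
           qty, unit_price(qty), qty * unit_price(qty));
    return 0;
}

The point of the exercise is simply that the headline price and the price your design actually sees can differ substantially depending on where your volumes land.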

So what do these suppliers have to do to be able to sell their processors at these lower prices? According to Joe Yu, Strategic Business Development at NXP Semiconductors, choosing the right core with the right process technology has the largest impact on lowering the price threshold of a processor. The packaging choice represents the second-largest impact on pricing thresholds. After that, reducing Flash, then RAM, and then individual features are choices that a processor supplier can make to lower the price point further.

VC Kumar shares that the latest MSP430 part uses the same process node as other MSP430 devices. The lower price point is driven by smaller on-chip resources and by taking into account the boundary conditions that the processor will have to contend with. By constraining the boundary conditions, certain value-priced parts can use less expensive, but lower fidelity, IP blocks for different functions. As an example, standard MSP430 parts can include a clock module configuration that supports four calibrated frequencies with ±1% accuracy, while the value-line sister parts use a clock module configuration that supports a single calibrated frequency with no guarantee of ±1% accuracy.
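To show what that clock-module trade-off looks like from the firmware side, here is a minimal sketch for a value-line MSP430G2xx part that loads the single factory DCO calibration (1 MHz). The register and constant names (WDTCTL, BCSCTL1, DCOCTL, CALBC1_1MHZ, CALDCO_1MHZ) follow TI’s standard msp430.h device headers, but check your specific device header and datasheet; on the standard parts, additional calibration constants cover the other supported frequencies.

/* Minimal clock setup sketch for an MSP430G2xx value-line device. */
#include <msp430.h>

static void clock_init_1mhz(void)
{
    if (CALBC1_1MHZ == 0xFF) {   /* calibration constants erased? */
        for (;;) ;               /* trap rather than run at an unknown frequency */
    }
    DCOCTL  = 0;                 /* select the lowest DCO settings first */
    BCSCTL1 = CALBC1_1MHZ;       /* set the calibrated DCO range */
    DCOCTL  = CALDCO_1MHZ;       /* set the calibrated DCO step and modulation */
}

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;    /* stop the watchdog timer */
    clock_init_1mhz();
    for (;;) ;                   /* application code would go here */
}

On the value-line parts this single calibrated setting is all you get from the factory, which is exactly the kind of constrained boundary condition Kumar describes.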

Another area of controversy for processors that push the low end of the pricing spectrum is how many on-chip resources they provide. To reach these price points, the on-chip resources are quite constrained. For example, the Cortex-M3 part includes 16 kbytes of Flash, while the Cortex-M0 part includes 8 kbytes of Flash. The MSP430 part includes 512 bytes of Flash and 128 bytes of SRAM. These memory sizes are not appropriate for many applications, but there are growing application areas, including thermometers, metering, and health monitoring, that might be able to take advantage of these resource-constrained devices.

One thing to remember when considering those devices at the lowest end of the pricing spectrum is that they might represent a new opportunity for designs that do not currently use a processor. Do not limit your thinking to tasks that processors are already doing or you might miss out on the next growth space. Are you working on any projects that can benefit from these value-priced processors or do you think they are just configurations that give bragging rights to the supplier without being practical for real world use?

Extreme Processing Thresholds

Friday, March 19th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Just in the past few weeks there have been two value-line processor announcements that push the lower limit for pricing. STMicroelectronics’ 32-bit Cortex-M3 value line processors are available starting at $0.85, and Texas Instruments’ 16-bit MSP430 are available starting at $0.25. These announcements follow the earlier announcement that NXP’s 32-bit Cortex-M0 processors are available for as low as $0.65.

These value pricing milestones map out the current extreme thresholds for pricing for a given level of processing performance. These types of announcements are exciting because every time different size processors reach new pricing milestones, they enable new types of applications and designs to incorporate new or more powerful processors into their implementation for more sophisticated capabilities. An analogous claim can be made when new processor power and energy consumption thresholds are pushed.

There are many such thresholds that determine whether it is feasible to include a given level of processing performance in a design. Sometimes the market is slower than desired in pushing a key threshold. Consider, for example, the Wal-Mart mandate to apply RFID labels to shipments. The mandate began in January 2005, and progress toward full adoption has been slow.

In this new series, I plan to explore extreme processing thresholds such as pricing and power efficiency. What are the business, technical, hardware, and software constraints that drive where these thresholds currently are and what kinds of innovations or changes does it take for semiconductor companies to push those thresholds a little bit further?

I am planning to start this series by exploring the low-end or value pricing thresholds, followed by low-energy device thresholds. However, there are many other extreme thresholds that we can explore, such as the maximum amount of processing work that you can perform within a given time or power budget. This might be addressed through higher clock rates as well as parallel processing options, including hardware accelerators for vertically targeted application spaces. Examples of other types of extreme thresholds include interrupt service response latency; how much integrated memory is available; how much peripheral integration and CPU offloading is available; higher I/O sampling rates as well as accuracy and precision; wider operating temperature tolerances; and how many integrated connectivity options are available.

I need your help to identify which thresholds matter most to you. Which types of extreme processing thresholds do you want to see more movement on and why? Your responses here will help me to direct my research to better benefit your needs.