What matters most when choosing an embedded processor?

Wednesday, July 14th, 2010 by Robert Cravotta

I remember the first embedded project that I worked on where I had visibility into choosing the processor. I was a junior member of the technical staff and I “assisted” a more senior member in selecting and documenting our choice of processor for a proposal. I say assisted because my contribution consisted mostly of writing up the proposal rather than actually evaluating the different options. What stuck with me over the years about that experience was the large number of options and the apparent ease with which the other team member chose the target processor (an 80C196KB). I felt that processor had been chosen mostly based on his prior experience and familiarity with the part.

Today, the number of processing options available to embedded developers is vastly larger; just check out the Embedded Processing Directory to get a sense of the current market players and types of parts they offer. While prior experience with a processor family is valuable, I suspect it is only one of many considerations when choosing a target processor. Today’s device families offer many peripherals and hardware accelerators in a single package that were not available just a few years ago. Today’s devices are so complex that it is insufficient for processor vendors to just supply a datasheet and cross assembler. Today, most processor suppliers provide substantial amounts of software to go with their processors. I view most of this software as “low-hanging integration fruit” rather than a “necessary evil” to sell processors, but that is a topic for another day.

I suspect that while instruction set architecture and maximum processing performance are important, they are not necessarily the same level of deciding criteria that they used to be. There are entire processor platforms built around low energy or value pricing that trade processing performance to enable entirely different sets of end applications. There is a growing body of bundled, vertically targeted, software that many processor platforms support, and I suspect the bundled software is playing a larger role in getting a processor across the line into a design win.

With the recent launch of the Embedded Processing Directory, I would like to ask you what matters most to you when choosing an embedded processor? Is having access to the on-chip resources in a table format still sufficient, or are there other types of information that you must evaluate before selecting a target processor? We have a roadmap planned for the directory to incorporate more application-specific criteria, as well as information about the entire ecosystem that surrounds a processor offering. Is this the right idea, or do you need something different?

Please share your thoughts on this and include what types of applications you are working on to provide a context for the criteria you examine when selecting an embedded processor.

38 Responses to “What matters most when choosing an embedded processor?”

  1. SKV @ LI says:

    - The type of operations (DSP / non-DSP).
    - Maximum speed the processor can support.
    - Any additional components that come with the processor, like cache, DMA, etc.
    - If your team is new to developing such a system, it is better to go for a processor with a lower pin count.
    - Your final device size also matters, hence the footprint.
    - Temperature.
    - The points above are only with respect to hardware; you need to consider your platform/RTOS as well.

  2. RVDW @ LI says:

    1. Decide what software it is to run.
    2. Then select a core with peripherals that can run that software.

    Cost, development tools, core speed and peripherals are really all determined by 1, because if there are several ICs that fulfill 1, all that’s left at 2 is picking the least expensive option.

    The only “secret” is that memory is the largest expense in a computer, and cores can differ substantially in code density, so sometimes a slightly more expensive core can reduce the system’s total cost. In my tests last year, popular modern cores differed by less than 5% in code density when using leading compilers.

    So, that leaves the costs of silicon and licensing the IP. IP vendors seem to use similar technologies, so the silicon areas are similar for similar classes of CPU.

    Of course, the most popular cores have the highest licensing costs. So, it boils down to popular, but not -too- popular. We picked ColdFire.

  3. M.K. @ LI says:

    It is the requirements of the product that you are building that matter most when choosing an embedded processor, not the processor itself.

    Product requirements, both functional and non-functional (i.e. business requirements), should serve to influence all of your design decisions, including the choice of processor. The silicon itself is of little importance, as long as it speaks to the requirements.

    For example, how many units do you expect to make? A high-volume, low-price-point product might need to cost as little as possible to make it marketable. If you are making 10 million power supplies, it may be necessary to use an 8051 processor at 40 cents each, despite the additional engineering effort it takes to implement a phase-locked loop in software.
    However, if you are making, for example, 20 cardiac stress testers per year, the product has a high price point and you can afford to choose ANY processor that has all the facilities you need in order to minimize engineering effort and design cycle time.

    Other factors to consider:

    - Good vendor support. It helps a lot if the manufacturer is domestic. It’s a pain when you ask a question that only the processor design team is qualified to answer, and they are in Japan.
    - Do you have immediate and unrestricted access to design change notices and errata?
    - Good tool chain support. Choice of processor is secondary to the tools available to make it go.
    - Is the processor actually available to buy? If the manufacturing yields are low, or the part is on allocation because everyone else is using it too, think again.
    - Do you have environmental requirements? Operating temperature, electromagnetic compatibility, lead-free soldering
    - Do you have quality requirements? Automotive, life support?
    - Do you have existing in-house expertise/equipment on an existing platform and tool chain? If not, what are the training and tooling start-up costs
    - When does the product need to be delivered in order to be considered useful? Pick a processor that you can afford the time and effort to design with, but that is still affordable to buy. It’s a balance.

    So far I’ve said nothing about what’s inside the chip.

    - It goes without saying that it must have all the I/O you require.
    - Is it electrically compatible with the circuit it has to go into? This is especially important in mixed analog designs. What additional support will your circuit have to provide? Voltages, oscillators.
    - Benchmark candidate processors with samples of your application code and a few different compilers to see if they can meet your throughput and real-time requirements.
    - Memory is not cheap, so select the least you need, but remember that as your application grows and you begin to run out of memory, this will double and triple your design time and future maintenance time. Try to establish system requirements as soon as possible, with the help of benchmarking.

    In summary, it is the requirements of the product that you are building that matter most when choosing an embedded processor.

  4. R.A. @ LI says:

    M. is correct that it is “the requirements of the product that you are building that matters most”.

    The most common mistake made in choosing a processor is in the choice of who the decision maker should be. In general, the software team responsible for the application should be the decision maker for the processor. The reason for this is simple: the processor is there for no other reason than to support the application software, and the software is directly answerable to the vast majority of product requirements (typically more than 90% of documented product requirements these days are implemented by the software).

    Items that M. didn’t mention:

    1. Does the processor have an MMU?

    With the complexity of device software today, an MMU is mandatory for pretty much every application.

    2. Does the processor have a cycle counter?

    Again, because of the complexity of devices, advanced debugging support for the software is required. Cycle counters allow the debugging of race conditions that are exceedingly difficult (if not impossible) to debug otherwise.

    3. Is the instruction set a target for any binary software components that might be used?

    If the application uses any off-the-shelf 3rd party binaries, then the processor must support that instruction set.

    4. What calling conventions does the processor support? This is related to the tools support that M. mentioned, but it requires analysis by the software team.

    A good example of this is the Freescale e500 cores. They do not have a floating-point unit, but do have an SPE unit that can be used to do hardware floating point. On more than one occasion I have seen the selection team/individual choose this processor because they felt the SPE could be used for the floating-point requirement, only to learn that the calling convention is different, and thus incompatible with the system software.

  5. A.S. @ LI says:

    IMHO … the processor itself isn’t that important if you have to choose an “embedded” processor. What is important is the functionality of the whole SoC that embeds the processing core. The whole SoC must fit the requirements optimally …

  6. M.P. @ LI says:

    I agree with M. that it is “the requirements of the product that you are building that matters most,” and one of those requirements will be cost. I have not been involved in a project where cost does not matter, and the CPU tends to be one of the more expensive items.

    I will also look at the support; I like to have local support available directly from the manufacturer if possible, especially if I am using a new part, which might have some undocumented device faults.

    My company has used a lot of Freescale parts and I have found their support to be very good. This gives them a competitive advantage in my company, as we have a strong historical relationship, so they tend to be the company I will look at first for a solution.

  7. G.D. @ LI says:

    All remarks very practical, but how about some examples? In the past, some of the best info I got was from other engineers’ successes… and failures. That is, which companies gave good support, and which gave poor support. Companies that had bugs in their chips, and ones where few if any bugs were discovered. Etc.

    FYI, I’m considering designing a “1/4DIN” industrial controller with a ¼ VGA touch screen.


  8. M.K. @ LI says:

    Obviously the requirements of the product are key (the processor must be able to do the job), then cost factors come in; in small-volume applications, experience or the use of existing tools may matter more than the cost of the parts.
    But there is only one universal requirement: availability. There is no point in choosing a perfect fit if you can’t meet production deadlines. Right now this is a big issue. On a recent project we could use one of at least three different processor families with little influence on product cost or performance, but the key decider was the family with the largest range of pin-compatible parts!
    I do have to take exception to R.’s comment “With the complexity of device software today, an MMU is mandatory for pretty much every application.”
    This just reflects the area he works in. I’ve never designed a system with an MMU and don’t see one on the horizon for the kind of work I do. My current project is ARM based, burns < 5 mW, and has about 9k of code; the customer hopes to make a few million of them.
    (Actually R., I’m not sure about letting software people choose a chip for low power either; a flat battery trumps even totally bug-free software!)

    M. K.

  9. R.A. @ LI says:

    “I’ve never designed a system with an MMU and don’t see it on the horizon for the kind of work I do”

    That’s why I said “pretty much”. There are, of course, always exceptions, but the vast majority of embedded developers today are working on systems that have significantly more than 9KB of code throughout the product.

    “Actually R., I’m not sure about letting software people choose a chip for low power either – a flat battery trumps even totally bug free software”

    What makes you think that using MMU enabled chips increases the power consumption of the total system?

    The truth is that having a single MMU-based chip that replaces 15 MMU-less micro-controllers actually reduces the system’s overall power consumption; sure, if your “system” has only a single micro-controller with 9KB of code, then an MMU is pointless; but unless you are building something on the scale of a musical greeting card or a microwave oven, you aren’t likely to have the luxury of this degree of simplicity.

    In any case, such simple applications are universally well understood, and while this may have been an interesting discussion in 1974, it really isn’t all that interesting in 2010…

  10. W.P. @ LI says:

    If product requirements are correctly understood, then the CPU selection process has a shot at getting you somewhere. I have been on a few projects where the software team selected a CPU that was 2x more powerful than what the initial requirements needed, and had a bunch of peripherals that we needed, even spare serial ports and I2C and everything, and by the end of the project the requirements had changed enough that we should have picked something on a different end of the marketplace scale. CPU selection isn’t the tough job; it’s requirements gathering and market intelligence that seem to have caused a lot of the headaches later on that get laid at the feet of “whoever picked this durn’ MCU”.

  11. J.H. @ LI says:

    R. – There are plenty of realtime embedded systems out there without MMUs, and there will be for a very long time.

    Maybe if someone is coming to embedded applications from a big-systems programming background, or developing things with graphical touchscreen interfaces and TCP stacks for networking, then there could be a point in having an MMU, but plenty of realtime embedded applications do not require one.

  12. M.Z. @ LI says:

    :-) The most stable will have MMUs though. I completely agree with R. on this one. You *can* code on a system without one, but if you want stability, you have to be very, very careful.

  13. M.K. @ LI says:

    R. – I just can’t let this go unanswered:

    “In any case, such simple applications are universally well understood, and while this may have been an interesting discussion in 1974, it really isn’t all that interesting in 2010…”

    In effect you are saying that my work is so trivial that it has no place in a discussion of embedded engineering – which is quite simply silly.

    My point, which seems to have skimmed effortlessly below your radar, is that embedded engineering is a much wider field and topic than your own work.

    Your ‘truth’ that merging many micro-based subsystems into one mega-system is better is an assertion which would require some actual evidence to be convincing. There are many reasons, some commercial, some regulatory and some performance based, for building large systems from a collection of smaller components.
    The sensor I mentioned which uses the 9k of code represents many years of development (and this is not unusual). It is delivered to the system builder fully specified, tested and calibrated. Whilst it would be possible to carry the raw analogue data into some central acquisition and processing block, it would be infeasible because:

    The analogue data would need expensive cables and connectors (and an ADC on the sensor would cost more than the processor).
    The whole system would need to be calibrated (24-hour temperature cycle in dry nitrogen!!!!)
    Every system builder would need to be entrusted with the IP which processes the raw data into something useful.
    etc etc

    There are times when mopping up many processors into one is a good plan but there are many where it is not.

    My assertion, (unsupported by hard facts – but I still think it is true) is that building complex systems from many simple, proven components gets you to a reliable result faster and cheaper.

    malloc, mmu, dynamic linking — just say NO !

    Have a nice day !

    M. K.

  14. W.P. @ LI says:

    I think it is safe to say that embedded engineers can be more easily sorted into the types of work they do by the things that they “say no” to, with deliberate engineering reasoning behind their choices, than the things they say “yes” to. Embedded engineering is a tricky field to make generalizations in, precisely because of what M. points to — embedded engineering is indeed a very wide field. Some guys are doing “embedded CPU” using an IP core on an SoC prototype on a bunch of high-end FPGAs, and their end goal is a new silicon ASIC. Some are doing microcontroller work on small 8-bit cores at 16 MHz, and some are using >1 GHz cores and specs that look better than your average 1998 desktop PC in every category. Choosing an RTOS, choosing a CPU. Do you want to start a religious war with embedded engineers? It’s not hard. I pity anybody who tries to hire someone for one of the many sub-niches in the embedded world and expects an engineer schooled in one of the other sub-niches to fit in well with the rest of the engineering team. WHAT? You used an FPGA? What? You didn’t include an FPGA? What? You don’t have an MMU? What? You put a Linux OS on this thing?

  15. R.A. @ LI says:

    “Your ‘truth’ that merging many micro based sub systems into one mega system is an asssertion which would require some actual evidence to be convincing.”

    If you perform an electronic systems reliability prediction computation such as SR332 on two systems, one implemented out of many distributed processors, and one implemented using a single processor, you will have your evidence (unless you elect to discount the decades of research, development and testing that went into these analysis techniques).

    “There are many reasons, some commercial, some regulatory and some performance based for building large systems from a collection of smaller components. ”

    As long as cost, development effort, simplicity, debugability and overall system reliability aren’t concerns, then systems composed of multiple extraneous processors are always an option.

    “My assertion, (unsupported by hard facts – but I still think it is true) is that building complex systems from many simple, proven components gets you to a reliable result faster and cheaper.”

    If by components you are referring to hardware components, then not only is this assertion not supported by facts, it flies in the face of many decades of research and analysis which, in fact, show the opposite conclusion to be true. It also seems intuitively obvious that the more components there are, the less reliable the overall system will be (i.e. this isn’t one of those cases where the reality is counter-intuitive).

    One counter-intuitive result of adding complexity is that if one builds two systems that are functionally identical and implements software that synchronizes one system as a back-up, the overall SERVICE AVAILABILITY increases (though, as intuition would indicate, because of the additional complexity the overall reliability of the system does indeed decrease).

  16. M.K. @ LI says:

    I’m going to duck out of the religious war that W. has warned us of – we have enough of them already.
    There are many paths to true enlightenment – each engineer must walk his own.

  17. sathesh says:

    Requirements, and how good the tool support is… After that it’s in your hands.

  18. J.H. @ LI says:

    M.: Actually, I’d take code for an 8-bit micro not written with any dynamic memory usage at all over a huge overblown contrivance with an MMU for stability. (OK, I admit you can’t do everything this way, and without dynamic memory things like comms protocols have to be written to survive throwing away data sometimes, but simple systems written well this way are pretty much bulletproof.)
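    That “survive throwing away data” style can be sketched as a fully static FIFO with an explicit overflow policy: no malloc, no MMU, loss is counted so the protocol layer can request a resend. All names are illustrative, and concurrency between an ISR producer and a main-loop consumer needs the usual single-producer/single-consumer care:

```c
#include <stdint.h>

/* Power of two so the free-running index math wraps cheaply. */
#define RX_FIFO_SIZE 32u

static uint8_t  rx_buf[RX_FIFO_SIZE];
static uint32_t rx_head, rx_tail;   /* head: producer, tail: consumer */
static uint32_t rx_dropped;         /* bytes discarded on overflow */

/* Called from the receive ISR (conceptually). Returns 0 if stored,
   -1 if the byte had to be thrown away because the FIFO was full. */
int rx_fifo_put(uint8_t byte) {
    if (rx_head - rx_tail >= RX_FIFO_SIZE) {
        rx_dropped++;               /* loss is visible, not silent */
        return -1;
    }
    rx_buf[rx_head++ % RX_FIFO_SIZE] = byte;
    return 0;
}

/* Called from the main loop. Returns the byte, or -1 when empty. */
int rx_fifo_get(void) {
    if (rx_tail == rx_head)
        return -1;
    return rx_buf[rx_tail++ % RX_FIFO_SIZE];
}
```

    Because every buffer is sized at build time, worst-case memory use is known exactly, which is much of what makes such systems "pretty much bulletproof".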

    R.: there’s a whole world of embedded engineering you clearly don’t have the faintest clue about. Good luck with that…

    As for the religious war, what I want to know is who let all the computer programmers into embedded engineering anyway? Shouldn’t you all be sitting about wearing turtlenecks and writing iPad apps or something?

  19. R.A. @ LI says:

    “M.: Actually, I’d take code for an 8-bit micro not written with any dynamic memory usage at all over a huge overblown contrivance with an MMU for stability”

    The presence or absence of dynamic memory allocation is completely unrelated to the absence or presence of an MMU.

  20. W.P. @ LI says:

    See what you’ve started? Internecine geek warfare.

  21. J.H. @ LI says:

    I’m not talking about an MMU that’s just for simple bank switching… in that case we should also be listing the importance of the processor having pins or pads to solder to, a VCC and a GND connection, and some kind of packaging to allow transport of the part from the manufacturer to the assembly house and loading into a pick-and-place machine…

    “The presence or absence of dynamic memory allocation is completely unrelated to the absence or presence of an MMU.”

    So professor, what exactly would you want an MMU for in a system (say 16k of code space and 1k of RAM) where you statically allocated all the memory that was needed?

    MMUs do have a place, but given the massive volumes of devices built around small microcontrollers sold these days that use an MMU for nothing more than bank switching, if at all, assuming that systems without them are irrelevant is just plain ignorant.

  22. R.A. @ LI says:

    “So professor, what exactly would you want an MMU for in a system (say 16k of code space and 1k of RAM) where you statically allocated all the memory that was needed?”

    It seems you have misunderstood. I never suggested that micro-controllers should have MMUs; I am suggesting that micro-controllers are becoming less and less relevant (by the minute, it seems) as they simply become components in more complex systems, where a micro-controller’s functionality can be replicated for zero incremental cost as a thread in a process within the protection domain of an MMU on a larger processor. The benefit is not only cost but reliability, since an appropriately sized 32-bit processor with an MMU amortizes the cost of the supporting circuitry, reducing the overall gate count and therefore improving reliability.

  23. T.W. @ LI says:

    After quite a few years working around embedded systems, the only use I saw for an MMU was to run a hypervisor on a multiple-core system and delegate each core to run another OS in its own address space, plus stack-overflow checking (not sure if the second reason is still valid; it was there primarily for diagnosis). Right now even VxWorks still plans to support memory protection within a hypervisor client, but they still don’t have it (I guess it’s going to show up somewhere early next year, when 6.9 is out?).

    There have been some topics on embedded systems and OS functions to manage memory here in the past, and I agree that dynamic memory allocation is not exactly going to do any good to anybody as long as we talk about embedded systems. You’d profit from an MMU mostly during debug, when you have to catch something that messes around in your software (later in the field it won’t help you recover: if a page fault, DSI, ISI or such kills your surveillance task and everything goes havoc, you’d need to reboot anyway; you know all your applications already), but even that can be done without an MMU. Besides, maintaining TLBs with all the rights and coherency requires you to be disciplined all the time, even when it’s not required (or even unwanted, say, in an interrupt or exception handler); but still, since your memory will most of the time be preallocated, an MMU is something you don’t really need to care about.

    Apart from what has been said already, I would only add one more point: just think carefully about what you need to achieve, what the software is supposed to do. There are plenty of projects around that still utilize good old 8-bit CPUs with no ‘real’ OSes or MMU, and they fit just perfectly :-)

  24. P.P. @ LI says:

    You can sometimes find creative ways to use an MMU.
    I for one have used the MMU on an ARM926EJ-S to map the same area of memory twice: once with the write buffer and cache enabled, and once with them disabled.
    That area was used for various communication ring buffers where I needed both DMA and code access. Having the write buffer enabled was convenient when writing from code, as was processing messages with random access using the cache (it needs to be invalidated before starting to process the message, though), while DMA accesses proceeded seamlessly in the background, touching the data without any of these.
    Essentially it seems that MMUs are very well suited to use with a preemptive RTOS, where they can be used to allocate physical or logical memory to threads, but they can still be useful with a wrap-your-own, one-of-a-kind, ad-hoc solution.
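    That double-mapping trick boils down to two first-level “section” page-table entries pointing at the same physical megabyte with different cache/buffer bits. A sketch of what such ARMv5-style descriptors look like; the addresses, table placement, and the TTBR/TLB plumbing needed to activate them are illustrative and omitted:

```c
#include <stdint.h>

/* ARMv5 (ARM926EJ-S-era) first-level 1 MB "section" descriptor bits. */
#define SECTION      0x2u          /* bits [1:0] = 0b10: section entry */
#define BIT4         (1u << 4)     /* must be set on ARMv4/v5 sections */
#define BUFFERABLE   (1u << 2)     /* B bit: write buffer enabled */
#define CACHEABLE    (1u << 3)     /* C bit: cache enabled */
#define AP_RW        (0x3u << 10)  /* full read/write access */

static uint32_t section_entry(uint32_t phys, uint32_t flags) {
    /* Top 12 bits select the physical megabyte; the rest are flags. */
    return (phys & 0xFFF00000u) | AP_RW | BIT4 | SECTION | flags;
}

/* 4096 entries cover 4 GB in 1 MB sections; in a real system this
   table must be 16 KB-aligned and loaded into TTBR. */
uint32_t l1_table[4096];

void map_ring_buffers(void) {
    uint32_t phys = 0x20100000u;    /* illustrative ring-buffer RAM */
    /* Cached, buffered view for CPU message parsing (identity map). */
    l1_table[0x201] = section_entry(phys, CACHEABLE | BUFFERABLE);
    /* Uncached, unbuffered alias of the SAME megabyte for DMA-
       coherent bookkeeping, at a different virtual address. */
    l1_table[0xA01] = section_entry(phys, 0);
}
```

    Writes through the first mapping still require a cache clean/invalidate before handing a buffer to the DMA engine, as the comment above notes; the second mapping avoids that at the cost of slower CPU access.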

  25. D.G. @ LI says:

    First, print the processor selection chart of the semiconductor company whose local sales rep takes you to lunch the most.
    Second, throw a dart at the chart. If that processor is too big/expensive, go down one row. If it is too small, go up one.
    Is a development board available? If the answer is no, repeat step #2.
    Now, get back to work. The software will take much, much longer than the hardware and board layout. You can get a long way down the road, even on the wrong horse.
    If changing the hardware platform is a major complication for your embedded software, you are doing it wrong.

  26. L.R. @ LI says:

    There is no “right” answer, and this is what is so special about the embedded space: it all depends on the APPLICATION, where the application market considerations ultimately dictate the criteria for selecting the components (including the processor) and development tools.
    Some applications target a high-volume opportunity, so cost of parts (BOM) will be the bigger concern, while others target medium volume where the cost of development could easily overshadow any savings in BOM.
    Some applications targeting the consumer are in cut-throat competition and must have the latest and greatest technology, while other applications targeted at some of the more conservative industrial markets will be much more concerned with product longevity and obsolescence mitigation.
    There are applications where the software will change after the product release (updates, user apps) and must maintain maximum flexibility, while other fields that lack a human operator must operate without a glitch for months and thus need to maximize stability and carefully control any changes.

    It is thus that many quite diverse processor offerings are on the market and will continue to exist, each in its particular niche. While the higher-end CPUs that can run protected OSes (e.g. Linux) quickly improve their price/performance, there are lower-end microcontrollers (all the way down to 8-bit micros with 4KB of program flash) that also make great improvements, with even lower cost and power consumption measured in micro-watts.

    Many projects do not succeed because the designers failed to keep an “open mind” and adhered to the persuasion that considers only the tools (and parts) they are already familiar with –

    If all you’ve got is a hammer, then everything looks like a nail!

  27. K.P. @ LI says:

    First, I eliminate all vendors that do not have a strong balance sheet– (I want them to be around for decades). Then, I eliminate all vendors that do not have a LONG track-record of supplying parts for at least a decade after design-in. Then, using my list of vendors, I select those parts that might have a reasonable chance of having the features and computational power needed to adequately support the application I am designing for. With that list, I eliminate all parts that will not result in a SYSTEM cost that will meet the final pricing budget. After that, I look at the development tools– The tools do not have to be free, but they must be of reasonable cost, and they must work well. Since more development time will be spent in software debug and verification, the debugging and/or emulation tools (both hardware and software) get most of my attention. After I do all of these things, there is usually a very short list of parts to choose from– and I can then look at performance-to-power ratios, performance-to-price ratios, etc. to further reduce the size of the list. Then, it is a matter of choosing the part that can very likely also be used in other projects as well, (to reduce the cost of the tool chain if any, and to reduce the cost of the “learning-curve”.) After all of that (if I have more than one part left on my list), I look at the public and vendor support network– (active user-to-user forums, availability of applications engineers, cooperative emulation and evaluation tool supplies, etc.)

    With all of that, I usually end up with the “Right Choice” very quickly in the process. I use a similar process for the other components in my designs.

    If I had to pick the one thing that has the most influence on my selection decision of any of the above, I would choose “LONG track-record of supplying parts for at least a decade after design-in”– (and I don’t mean PROMISES, I mean an actual track-record– very few vendors can meet this requirement). I hope this helps you and is applicable to what you were asking.

  28. L.R. @ LI says:

    K.P. has described a process typical of the medium-volume, conservative persuasion –
    The silicon vendor’s track record is very useful for predicting product obsolescence policies. The product life term is critical for people whose embedded systems must pass certifications and then be produced for several years without any changes.
    The other useful aspect of a vendor’s track record is for evaluating the quality of their newly released products, which is important for estimating the cost of development due to the risks involved in device errata.

    In contrast, projects that expect high production volumes are not nearly as concerned with either of these considerations – they can pre-order their components in 100k-unit quantities, which pretty much guarantees them product availability at any time. The cost of development in a large-volume project is less significant when compared to the COGS over the product life cycle, hence they are more likely to take risks and even work with a fresh new silicon startup if it saves them a few bucks on the BOM.

  29. B.P. @ LI says:

    “What matters most when choosing an embedded processor?” The most important thing when choosing any component is that you can get it. If you can’t get it, nothing else really matters. Lots of parts are on allocation these days.

  30. R.M. @ LI says:

    EMC performance, current draw, low-power modes, the selection (or lack) of on-board peripherals, what the rest of the family looks like for both upgrades and downgrades, the forecast end of life… the list goes on and on.

  31. D.M. @ LI says:

    Availability into the future is something that also needs due consideration.

    A product that I currently work on was designed over 15 years ago. The embedded processor chosen at the time is no longer available and yet the product is still being manufactured and sold.

  32. R.A. @ LI says:

    “A product that I currently work on was designed over 15 years ago. The embedded processor chosen at the time is no longer available and yet the product is still being manufactured and sold. ”

    It is certainly an important factor.

    Longevity of design can be aided through selection of operating system as well. The O/S can abstract the processor, allowing the hardware to be re-engineered with current components, and minimal changes to the application code (assuming that the code is also well designed and implemented).
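    The abstraction idea above can be sketched in plain C as well as through an O/S. In this minimal example (all names are hypothetical, not from any particular RTOS), the application calls a generic interface struct, so only the board-specific implementation file changes when the processor is re-engineered:

    ```c
    /* Minimal sketch of hardware abstraction in C (hypothetical names).
       The application depends only on the generic interface; the
       board-specific implementation is swapped per hardware revision. */
    #include <stdio.h>

    /* Generic interface the application code depends on */
    typedef struct {
        void (*putc)(char c);   /* send one character to the console */
    } serial_ops;

    /* Board-specific implementation; on a real target this would write
       to a UART register, here it just writes to stdout */
    static void host_putc(char c) { putchar(c); }

    static const serial_ops board_serial = { host_putc };

    /* Application code: no direct register access, so it ports unchanged */
    static void app_log(const serial_ops *s, const char *msg)
    {
        while (*msg)
            s->putc(*msg++);
    }

    int main(void)
    {
        app_log(&board_serial, "boot ok\n");  /* prints: boot ok */
        return 0;
    }
    ```

    The same indirection is what an O/S driver model provides at larger scale: re-targeting the product means rewriting `host_putc` and its siblings, not `app_log`.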

  33. W.S. @ LI says:

    Key things to consider:
    #1 Tool support. If you work in C (and are porting existing C code), don’t even think about choosing a part that doesn’t have a fully compliant C compiler. Otherwise, you’ll have a huge engineering job to port and re-write the code. There are certain major vendors that fail miserably in this regard. Also look at tool efficiency. Some tools are good and efficient. Others, well, are not.

    #2 Development support. Make sure you have a good, reliable emulator chain available. This makes debugging much simpler. Without this, your software development will be very, very difficult.

    #3 Long term support. Is the part going to be here in the future? Is there an upgrade path? Yes, you may be able to get by with a small part now, but if your design exhibits feature creep (which all embedded products do), can you easily expand to a larger part?

    #4 Environmental concerns. Clock frequency, temperature ranges, I/O, RAM/ROM. All of those things are important. While it may be possible to run a chip faster, does a faster crystal cause EMC issues? Does the chip have a PLL if you need a faster clock speed but can not have a faster crystal? How about ESD protection? Is your part susceptible to ESD problems? All parts are in theory, but some manage this better.

  34. P.V. @ LI says:

    Lots of interesting comments, so here is a small consideration. If you are using C in an embedded system with just a single PROM space for non-volatile storage and limited RAM, you may want to consider a non-Harvard architecture. This facilitates non-volatile data access more readily in C. I have found that architectures in which the code and data spaces are addressed differently engender special handling for access in C, which tends to make the code less portable.
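    The portability problem described above is commonly worked around by hiding the architecture-specific access behind a macro. The sketch below assumes an AVR-style Harvard part as the example (the `FLASH`/`FLASH_RD8` macro names are hypothetical; `PROGMEM` and `pgm_read_byte` are the real avr-libc mechanisms):

    ```c
    /* Sketch: hiding Harvard-architecture flash access behind one macro
       so the application code compiles unchanged on either architecture. */
    #include <stdio.h>
    #include <stdint.h>

    #ifdef __AVR__                    /* Harvard part: const tables live in code space */
    #include <avr/pgmspace.h>
    #define FLASH         const PROGMEM
    #define FLASH_RD8(p)  pgm_read_byte(p)
    #else                             /* von Neumann part (or host): plain access works */
    #define FLASH         const
    #define FLASH_RD8(p)  (*(p))
    #endif

    static FLASH uint8_t crc_table[4] = { 0x00, 0x1D, 0x3A, 0x27 };

    int main(void)
    {
        /* Application code uses only the macro, so it compiles either way */
        for (int i = 0; i < 4; i++)
            printf("%02X ", FLASH_RD8(&crc_table[i]));  /* on the host prints: 00 1D 3A 27 */
        printf("\n");
        return 0;
    }
    ```

    On a von Neumann part the macros collapse to ordinary C, which is exactly the convenience the comment is pointing at.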

  35. J.G. @ LI says:

    The only simple answer to this question is: How well does it meet the requirements of the project? As you can see, that’s another question. The reason for that is that there isn’t any one factor to be considered.

    Like buying a house, you’ll need to list a number of needs and wants. Then you’ll look at the available options then select the one that meets all the needs and as many of the wants as possible.

  36. L.R. @ LI says:

    J. is right, except that one should not assume that all of their requirements can be met by any one specific product, where a certain cost range is one of these requirements. Hence, in order to make a smart decision, one must be prepared to compromise, and to do that, the requirements must be given a weight or priority; then every specific product being considered can be graded on how well it answers the requirements as a set, with the more important requirements contributing a bigger portion of this grade. I believe there is in fact a whole theoretical basis for how to make decisions, which can be readily applied to this field too (as well as buying a house, if one takes out the emotional component): http://en.wikipedia.org/wiki/Decision_theory
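    The weighted-grading process described above is easy to sketch in code. The weights, candidate names, and scores below are purely illustrative, not taken from any real selection:

    ```c
    /* Sketch of a weighted decision matrix: each candidate part is scored
       per requirement (0..10) and the weighted sum ranks the candidates. */
    #include <stdio.h>

    /* Weighted sum of n requirement scores */
    static double score(const double *weight, const double *scores, int n)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += weight[i] * scores[i];
        return total;
    }

    int main(void)
    {
        /* Priorities for: tools, longevity, unit cost (sum to 1) */
        const double weight[3] = { 0.5, 0.3, 0.2 };

        /* Illustrative scores for two hypothetical candidates */
        const double partA[3] = { 9, 7, 4 };   /* great tools, pricey */
        const double partB[3] = { 5, 8, 9 };   /* cheap, weaker tools */

        printf("part A: %.1f\n", score(weight, partA, 3));  /* 7.4 */
        printf("part B: %.1f\n", score(weight, partB, 3));  /* 6.7 */
        return 0;                              /* higher total wins */
    }
    ```

    With these particular weights the better-tooled part wins; shift the weight toward unit cost (as a high-volume project would) and the ranking flips.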

  37. R.K. @ LI says:

    In descending order of priority:

    Application to be implemented
    Speed required for the application
    Features available – does it support the application?
    and, obviously, price

  38. L.R. @ LI says:

    R., you are right for medium-volume products (100 to 1,000 per year), where the cost of development can easily overtake the cost of a processor if the development tools are of poor quality or unfamiliar to the developers. In those cases the processor cost will not be at the top of your considerations.
    If however the expected production volume is in the 100,000 range (or more), the equation changes, and it may make sense to use the cheapest processor even if you have to write your own tools to support it, or write it all in assembly – any extra R&D expenditure will be compensated by the savings in cost of parts.
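    The trade-off in that comment is a simple break-even calculation. The figures below are made up purely for illustration:

    ```c
    /* Sketch of the volume trade-off: extra up-front R&D spent to use a
       cheaper part pays off once production passes the break-even volume. */
    #include <stdio.h>

    /* Units needed before the per-unit BOM saving repays the extra R&D */
    static double breakeven_units(double extra_rd, double bom_saving)
    {
        return extra_rd / bom_saving;
    }

    int main(void)
    {
        double extra_rd   = 150000.0;  /* e.g. custom tools, assembly rewrite */
        double bom_saving = 2.0;       /* dollars saved per unit on the cheaper part */

        printf("break-even volume: %.0f units\n",
               breakeven_units(extra_rd, bom_saving));  /* 75000 units */
        return 0;
    }
    ```

    At 1,000 units a year this never pays back; at 100,000 units a year it pays back in the first year, which is exactly the distinction the comment draws.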
