Question of the Week: Do you use or allow dynamic memory allocation in your embedded design?

Wednesday, March 24th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Back when I was deep into building embedded control systems (and snow was always 20 feet deep and going to and from school was uphill both ways), the use of dynamic memory allocation was forbidden. In fact, using compiler library calls was also forbidden in many of the systems I worked on. If we needed to use a library call, we rewrote it so that we knew exactly what it did and how. Those systems were highly dependent on predictable, deterministic real-time behavior that had to run reliably for long periods of time without a hiccup of any kind. Resetting the system was not an option, and often the system had to keep working correctly in spite of errors and failures for as long as it could – in many cases lives could be on the line. These systems were extremely resource constrained both from a memory and processing duty-cycle time perspective and we manually planned out all of the memory usage.

That was then, this is now. Today’s compilers are much better than they were then. Today’s processors include enormous amounts of memory and peripherals compared to the processors back then. Processor clock rates support much more processing per period than before, leaving room to waste a few cycles on “inefficient” tasks. Additionally, some of what were application-level functions back then are low-level, abstracted function calls in today’s systems. Today’s tools are more aware of memory leaks and are better at detecting such anomalies. But are they good enough for low-level or deeply embedded tasks?

Do today’s compilers generate good enough code with today’s “resource rich” microcontrollers to make the static versus dynamic memory allocation a non-issue for your application space? I believe there will always be some classes of applications where using dynamic allocation, regardless of rich resources, is a poor choice. So in addition to answering whether you use or allow dynamic memory allocation in your embedded designs, please share what types of applications your answer applies to.

191 Responses to “Question of the Week: Do you use or allow dynamic memory allocation in your embedded design?”

  1. S.T. @LI says:

    Hi Robert, in my opinion embedded system designers are still very, very careful about using dynamic memory allocation, because dynamic memory allocation can be a “blocking” call (in a multi-threaded app) in any system, which will decrease or “change” the normal system’s flow / performance. I used to work on a telecom application that was not mission-critical; even then we made sure all the “dynamic” memory was statically allocated during system bootup.

  2. A.T. @EM says:

    NFW.

    One crash in machine control takes out hundreds of dollars in tooling. Just to save $3 in RAM? And if you need more RAM than $10 worth of RAM in a real time system, it’s time to fire the firmware people.

  3. D.D. @LI says:

    In my experience in telecommunications systems the use of dynamic memory is allowed only during system initialization. At the end of this phase dynamic memory allocation is disabled. This behaviour is defined in the software design.

  4. B.Z. @LI says:

    Almost never. I’m looking for the most deterministic behavior possible and using dynamic allocation during normal operation adds more complexity to any recovery routine. It has been easier to gracefully degrade operation with static allocations. Checkpointing and swapping cause unpredictable delays in processing that generate random problems, not my favorite operation for an embedded system.

    That said, a situation with wildly variable message sizes can be more efficient with a dynamic size allocation pool, but consider the effects of fragmentation and the possibly random defrag time that may occur if messages are not processed mostly in order of appearance.

  5. K.R. @LI says:

    Depends on what the design is for. These days embedded systems deploy much faster processors and larger memories than they did 10 years ago. Therefore I have seen many embedded designs using full operating system services, let alone dynamic memory management.

  6. P.O. @EM says:

    My designs all have 100% uptime requirements (I am not allowed to count on the user ever restarting the system during the lifetime of the product). The risk of a memory leak outweighs the benefits of dynamic memory allocation in my applications.

    I do use library functions, but my compiler vendor has lived in the deeply embedded world for a long time (Keil). In addition, we are careful to avoid using functions that involve dynamic memory allocation. We also code in C, not C++. We do write OO code, but manually encapsulate data and methods instead of using C++ constructs.

  7. G.E. @EM says:

    I use dynamic memory, but I use it in private heaps. A private heap allows only a few functions to use it, and it can be reset without taking down the entire system.

    Constructing a private heap is easy; see the code example on pages 140-150 of the K&R book.

  8. M.P. @LI says:

    Hi Robert,

    I have to say for me it depends on the embedded system you are trying to produce, the term applies to such a wide range of devices.

    I personally break them down into OS and non OS systems. An OS system generally for me has more resources (MMU, Memory) especially if it is Linux/Windows CE and therefore I tend to think that dynamic memory allocation is acceptable.

    However, in non-OS-based systems it is almost never, as pointed out earlier; I generally want this type of system to be deterministic. Also, these types of systems tend to be constrained in terms of resources, so I don’t want the library code that implements dynamic memory management in my system, as I may have limited program memory.

  9. R.A. @LI says:

    I see many instances of dynamic memory allocation in mobile phone software. So I think it is okay in embedded systems, but not so okay in hard real-time systems.

  10. A.P. @LI says:

    Use of dynamic memory allocation in embedded systems is still, generally, considered a violation of best practices. It’s not a question of processor size or speed but rather that it is considered risky behavior that can result in memory fragmentation, memory leaks and depletion of resources. If your system runs for hours-to-days between initializations (and many of mine have), you probably don’t want to do dynamic allocation.

    That said, it seems that it would be difficult-to-impossible to use C++ without dynamic memory allocation and there are certain classes of applications (text messaging comes readily to mind) where dynamic allocation is virtually required.

    So be aware of the risks and act accordingly.

  11. G.H. @LI says:

    OK, I also attempt to avoid dynamic memory allocation as long as I can. Restricting its use to initialization only is also a good practice, but the problem arises when portions of your code must rely on third-party libraries which use dynamic memory. How do you deal with this kind of problem?

    With a full-featured OS (Linux for instance) you can split your application into different processes, each of them with a different memory map. The risky portion (the one relying on dynamic memory) can run in a completely different address space from the most critical parts. With this approach, I expect that if the system runs out of memory due to fragmentation, only the “risky” processes are affected. They can die gracefully or be killed by the OS, and be born again a few milliseconds later by a supervisor. During this unhappy period, the critical processes (which only allocated static memory) have been working fine. For a short time you get a diminished but still working system which recovers quickly.

    What do you think of this approach? My knowledge of OS internals is not very deep, so I’m not completely sure that address-space isolation gives enough reliability to the static-memory portion.

  12. M.F. @LI says:

    I would like to make a distinction in type of embedded systems – real time and non-real time.

    I would never use dynamic memory allocation in real-time systems, no matter how fast and powerful the CPU is. Dynamic memory allocation is not guaranteed to complete within any time interval (actually, it is not guaranteed to complete at all), so no matter how long your deadlines are or how fast your micro is, you cannot use it. In fact, one of my favorite interview strategies is to gently probe a prospective employee on their use of dynamic memory allocation in real-time apps. If they have no problem with it, they are not going to be hired.

    I don’t have a major issue with allocation during initialization, but I do wonder why? How about a simple array declaration at compile time? It takes a lot less time and code, is simple to write, and there is no need to deal with rogue code that does memory allocation later. That’s what the preprocessor is for…

    That said, many OSs will call malloc, etc., when creating threads, queues and other such things. C++ might also call it behind the scenes. Intimate knowledge of tools and environment is extremely important in avoiding such unintended behavior.

    For non-real-time embedded apps (such as iPhone apps) I see nothing wrong with it. Those apps are not that different from PC apps, so who cares.

  13. B.S. @EM says:

    We only allow dynamic allocation… never allowed to let it go.

  14. S.B. @LI says:

    Hi,
    I work on set-top box software. There are parts of the system where memory allocation is deterministic, like the buffers used for holding streaming data. These are allocated once and reused as the buffers get consumed. On the other hand, storing EPG data is dynamic; I would choose to allocate and deallocate it depending on when I enter and exit the use case. This is to optimize the memory usage. If I had unlimited memory I would have allocated the memory to store EPG data once.

  15. E. @EM says:

    Like a previous poster, I have never designed an embedded device where system failure was an acceptable performance metric (this ain’t Windows).

    In my present industry (UPSs), the typical system is installed by the customer, turned on, and may run until it is retired (5-7 years) without a re-boot, making it extremely intolerant of memory leaks.

    I also follow a personal standard that if you run out of a resource, the device must degrade gracefully. For example, I must maintain a log of past events that the customer can scroll through on the front panel. Making it finite in length forces me to confront the only choices when I run out of RAM:
    - Stop logging, or
    - Overwrite the oldest event.

    Dynamic allocation schemes make it much more difficult to handle such scenarios and increase the chance of a catastrophic failure.

  16. K. @EM says:

    My experience from the embedded world (only one ‘real’ project so far): good, clean object-oriented C++, with the lower C layer wrapped up for easy C++ access.

    NO DYNAMIC memory allocations were used. But the machine was very limited in resources.

  17. S.H. @LI says:

    Question: whether you use or allow dynamic memory allocation in your embedded designs (or not)?
    Answer: It is considered sacrilege to use dynamic memory in hard real-time (embedded) systems. The non-deterministic behavior of typical implementations of ‘malloc’ is detrimental in the design of hard real-time systems, where reliable upper bounds on certain events must ALWAYS be met.
    In such instances, use memory pools instead of dynamic memory allocation.

    For deeply embedded systems (referring to soft real-time systems here) devoid of VM (no hardware MMU), uClinux is a case in point. It uses mmap to obtain memory from the kernel free-memory pool and disposes of it by calling munmap, using one system call, with an overhead of 56 bytes per allocation; if there are a lot of smaller allocations over a period of time, the overhead is a factor along with memory fragmentation. We can’t defragment as there is no VM.

    Newer refinements, including kmalloc2, malloc-simple from uClibc, etc., are available. Many commercially successful consumer gadgets use uClinux/uClibc.

  18. J.S. @LI says:

    Totally depends on the context: where the software will be used (or reused), what its operational requirements are, etc. If the requirements include “five-nines” (99.999% uptime) reliability, hard real-time, or a small memory footprint, the answer is “never”. But I’ve also worked on embedded systems with a fast CPU, MMU and large memory, an SSD, and hundreds of thousands of lines of C++ code, in which case the answer is “always”. Frequently all of these things are on the same product, even in the same CPU; it just depends on what level in the software stack you are working at or what the restrictions are.

    I’ve written ISRs in both assembler and C++, so it’s a complicated question to answer for the general case.

    Hey, if it were easy, anybody could do it, and I would be in management.

  19. S.S. @LI says:

    A fixed-size memory management scheme is deterministic in time, and there is no external fragmentation (though there is wasted memory inside the allocated block). You always take and free from the head of the list.

    Typically, I will create an array of fixed size buckets where each bucket has a fixed size. The size increases from one bucket to the next in powers of 2. (First bucket is 64 bytes, next is 128, and then 256, etc.). As with anything, YMMV. We used this algorithm on supercomputers at Convex. This blocks like any other shared resource. When you are out of memory, you need to wait or recover.

    OTOH, I prefer greedy algorithms where most memory is grabbed at the beginning. It makes testing simpler.

  20. L.R. @LI says:

    Like all engineering, it is a tradeoff, a compromise.
    Like several comments before have stated, it very much depends on the application and its priorities.
    If the application requires tight determinism, i.e. low deviation of worst-case response time from typical, then static allocation and fixed-size buffer management schemes are the right fit.
    If however memory is very tight, and every percentage point of utilization is more critical than determinism, then both static and fixed-size allocation schemes may be too wasteful, and dynamic allocation could be what is needed.
    Static allocation’s space inefficiency comes from its temporal invariance, while fixed-size allocation schemes have to round up unit size to the pre-determined pool sizes.
    Tools have little to do with this, except in the case of C++, which strongly suggests dynamic allocation without offering memory efficiency either.

  21. M.S. @LI says:

    I think the term “dynamic memory allocation” is a bit too unspecific. I have no problem with using dynamic memory allocation in an embedded environment as long as the allocation is deterministic.

    It is the “dynamic memory recycling” that kills applications, i.e. when “dynamic memory deallocation” occurs as well as “allocation”. Especially if it is from a heap shared by several processes *shudder!*. Even when using a deterministic allocation/deallocation heap, it is very hard – if not impossible – to show that you don’t eventually run out of core.

    As long as you only allocate, either only at start-up or in a deterministic and known rate (you known you have memory enough e.g. for the flight time of the missile), I see no problem with it.

    Still, who hasn’t used a classic lock-free producer-consumer round-robin buffer in an embedded system? That’s both dynamic allocation and deallocation. Or the FIFO of a UART? Same thing – allocated by the device and emptied by the application code. This proves that it is not only feasible to use dynamic allocation/deallocation, but sometimes it is also the only way.

  22. J.L. @LI says:

    I personally have seen many linked lists fail because an unforeseen system state allowed corrupt pointers to occur, especially during garbage collection. Having worked on several large projects where the system is supposed to run forever without intervention, I consider dynamic allocation to be against best practice. The closest I have seen work was a window-shade model of fixed, finite packets, and this was an easily testable solution. However, it was found at first that it was always running into its default recovery state and working that way. Being a deterministic model made the fix simple to implement.

  23. D.T. @LI says:

    Wow, we have touched on a bit of a religious issue here, haven’t we. :)

    I think that the claim of “best practices”, or rejecting candidates out of hand because they would consider dynamic memory management for embedded, is throwing the baby out with the bath water.

    Quoting Marco Tabini of Dr. Dobbs Journal…
    “C programmers have to handle pointers in much the same way that explosive experts handle nitroglycerin—as little as possible and with a lot of respect.”

    This is the bathwater. We have all been burned by the challenge of using pointers. Oh, it sounds easy at first, but a little experience can lead one to start thinking of malloc (or, heaven forbid, realloc!) as the plague.

    However, I must counter that:
    1. It is “user-real-time” even when it takes 250ms, so that does not automatically preclude DMM for user interface tasks.
    2. Because embedded is usually on constrained-resource systems, you are doing your customers a favor when you leverage minimal memory to maximum benefit.

    In a nutshell: don’t throw out DMM just because pointer allocation/use is hazardous, or because it is inappropriate for some tasks; you should be free to use that option when the context merits it.

    -D.

  24. T.E. @LI says:

    Dynamic memory allocation is acceptable even for hard real-time systems if the code
    a) is designed for test (C2 coverage suggested)
    b) includes exception handling strategies (not enough memory) e. g. leading to a safe state
    c) includes timeout handling to stay within the timeslice
    d) is proven not to compromise mission-critical code running on the same target

  25. R.S. @LI says:

    The question “… do you allow dynamic memory allocation …” seems to me a bit outdated. Take e.g. hardware memory management units and cache memory into account: you may not know where your data reside physically. Consider further whether it is wise to relinquish proven-in-use dynamic allocation and garbage collection and opt, on a per-project basis, for some kind of overlay or re-use concept.

    The only question may be: how can we do it in a safe “best practice” way without “opening the door too wide”? In that respect Thomas’s list seems to me not exhaustive.

    But as I experienced – and from the contributions above – there will be no application without facing and answering this question: how!

  26. K.P. @LI says:

    I (personally) would never use dynamic memory allocation for safety-critical systems. However, I like D. D.’s “telecom” solution– only allow dynamic allocation during initialization. After that, if you need some sort of dynamic structures, then use memory-pools (that are statically allocated or were allocated from dynamic memory at initialization time.) This will guarantee deterministic behavior.

    Many systems (these days) need to have graphical user interfaces, and other functionality that requires dynamic memory. Some languages require the availability of dynamic memory. These systems may also have some safety-critical functions. It would seem that this is an untenable situation, but there are ways to get this done. One solution is to use a hypervisor kernel (like the one from OK Labs for example). The hypervisor then allocates a guaranteed amount of CPU time and memory to a hard-real-time system running on one virtual machine (some high-reliability RTOS, for example). This follows all of the rules as stated above. On another virtual machine, you run an OS (like Linux for example) that may then use the remainder of CPU time and/or memory. The OS can do all of the non-critical things, without having to worry about causing problems with the safety-critical things running on the other virtual machine. The virtual machines communicate with each other through the hypervisor kernel.

  27. R.K. @LI says:

    In automotive embedded systems there is little scope for using dynamic memory allocation, even when a proprietary or standard OS that allows effective memory management is chosen.
    In safety-critical systems it is more important to have deterministic behavior, in terms of software reliability, than effective utilization of memory. It is also expensive to deploy advanced software tools to test large software applications for memory leaks and meet safety standards, although at least the static memory is tested.
    In non-safety-critical systems like infotainment applications, the dynamic memory allocation model is followed. This is because not all the software applications are running at the same time, and response time is not a constraint. When another software application is chosen through user interaction, the earlier one is terminated and the next is loaded.
    There is definitely a tradeoff between the software design chosen versus quality, reliability and cost.

  28. M.Y. @LI says:

    I don’t like to be dogmatic about anything, so I would say it depends completely upon the product and what the consequences of dynamic allocation failure are.

    That being said, when I need to use “dynamic” allocation in an embedded device or device driver, my general solution is to calculate my maximum memory requirements, allocate a chunk of memory of that size during startup, and then dynamically allocate from that pool at runtime. I also like to use fixed-size blocks to avoid pool fragmentation issues.

    However, in my opinion, the simplest and most reliable memory allocation method is to simply dedicate memory for specific purposes at build time so that you are guaranteed that the resource is going to be there when you need it.

    M.

  29. D.S. @LI says:

    Interesting question, but too broad to answer quickly.

    As others have stated, to some, “embedded” means Windows CE, 1GB of DRAM and a Pentium class processor. To others, it’s an 8-bit or 16-bit CPU with real-time constraints and minimal RAM.

    I’ve worked on systems that use dynamic allocation, and those that don’t. The embedded systems I work on tend to be hard real-time and safety critical, so use of malloc() is pretty much non-existent.

    On those that do, sometimes the allocation is only done at initialization time (in which case, you could ask “Why not just use static allocation?”), sometimes it’s done at run time.

    On those that allocate throughout the run time, rarely is malloc()/free() used — too many perils (re-entrancy, fragmentation, non-deterministic behavior, etc…) Usually some type of memory pool/buffer pool system is used (many RTOSs also provide such services).

    Embedded systems using C++ tend to be more likely to use dynamic allocation, but even there, the programmer can override the standard new & delete operators, instead of allocating from the heap/free-store.

    A few links that might be interesting:

    Dan Saks recently wrote about providing your own malloc to ensure that the toolset’s own malloc library routine isn’t used: http://www.embedded.com/columns/programmingpointers/223300112

    Michael Barr recently wrote about heap fragmentation & buffer pools on his blog ( http://embeddedgurus.com/barr-code/2010/03/firmware-specific-bug-5-heap-fragmentation/ ) and at Embedded.com ( http://www.embedded.com/columns/barrcode/224200699?pgno=4 )

    Miro Samek discusses the perils of heap-based dynamic allocation in his book ( http://books.google.com/books?id=XrWPsLzH9WoC&lpg=PP1&dq=samek%20practical&pg=PA290#v=onepage&q=heap%20problems&f=false ) and on his blog here ( http://embeddedgurus.com/state-space/2010/01/heap-of-problems/ ) and here ( http://embeddedgurus.com/state-space/2010/01/free-store-is-not-free-lunch/ ).

  30. A.R. @LI says:

    Hi,
    Yes, I use dynamic memory. Not very often, of course. But I use it in the soft real-time C++ applications.

  31. A.F. @LI says:

    I normally use both kinds of memory allocation – static and dynamic – depending on how I’m going to use the memory.
    For things like buffers and global structures that I know I need all the time, I use static allocation. All the time-critical stuff is done either the static way or using pre-defined pools of memory where only one task has access. Each task either gets its own pool, to avoid races and memory fragmentation, or uses a common large memory space. For all non-realtime stuff, using a common memory pool is perfectly fine. To do so I override the standard malloc()/free() with my own versions.

  32. S.G. @LI says:

    We are an in-memory database vendor for a database designed for embedded systems. We’ve incorporated several custom memory allocators into the database run-time to eliminate dependence on malloc and deal with issues like blocking in a multi-threaded app, fragmentation, and excessive overhead. We produced a webinar on the topic and that explains the various memory managers (list, block, stack, bitmap and thread-local), typical use cases, and advantages/disadvantages, and offer some of the code, here: http://www.mcobject.com/index.cfm?fuseaction=download&pageid=705&sectionid=115

  33. A.K. @LI says:

    I have experience in real-time embedded applications for airborne systems (aircraft upgrades). Dynamic allocation is forbidden, except in some cases during the initialization process. A lot of words have been said about the determinism issues of real-time applications, and I agree with them. I want to add my opinion. Generally an embedded application is a predictable application: the number of IOs is known, the messages are known, the maximum number of objects is known, etc. So there is no real need to use dynamic allocation; use static allocation instead. Another issue is certification. In our industry customers require at least DO178B; for civil avionics the certification criteria are more pedantic. I am not sure that this kind of application can pass the certification criteria if it uses dynamic allocation.

  34. A.P. @LI says:

    A.:

    I’m also pretty sure that dynamic allocation won’t pass DO178B Level A or B; not sure about Level C, but it probably would pass Level D or E (we don’t do much D or E but they’re fairly straight-forward).

    I’m told (by people who do the testing) that C++ code won’t pass Levels A or B, either. Given how much C++ relies on dynamic allocation (among other “risky” behavior), that doesn’t surprise me in the least.

  35. M.S. @LI says:

    The language C++ does not rely on dynamic allocation. Period.

  36. A.K. @LI says:

    A. :
    Code that requires DO178 Level D certification we write in C++. In my opinion even DO178B could be passed with C++ code, with more effort. Actually for this kind of certification we use C or Ada. But dynamic allocation is allowed in C and Ada just as in C++. Dynamic allocation isn’t a language issue; it is a design issue and an implementation issue. Personally, I like C++ and have deep knowledge of how to use it in real-time projects. Because C++ is a multi-paradigm language, it is possible to use it in a wide range of applications, including real-time applications.

  37. A.M. @LI says:

    Even in a real-time embedded application, not all the code is mission-critical; it is the developer who can decide in which parts he can do dynamic memory allocation and in which parts he cannot.

    It also depends on the frequency of the need to allocate memory dynamically: if allocation is quite frequent then he should allocate the memory at the beginning and hold it, but if it is a once-in-a-while kind of scenario then we should go for dynamic memory allocation rather than holding the memory.

  38. S.K. @LI says:

    Best practice would be dedicated memory pools per task (or purpose), with each pool configured for the absolute maximum requirements of that task (max number and max size). This also helps catch memory leaks better, because you know who to blame for a given pool. Also, allocating fixed-size blocks instead of bytes will help avoid memory fragmentation.

  39. S.M. @LI says:

    Hi,
    I don’t use dynamic memory allocation for embedded systems. The problem is that you can write into a bad allocation. If you use dynamic memory, you have to implement a special routine that checks the available space in memory every time you increase the size dynamically.

  40. D.S. @LI says:

    M. S.: Absolutely correct, well said. Unless the code explicitly performs dynamic allocation, or it uses library code that does, there is no reliance on dynamic memory in C++.

  41. R.S. @LI says:

    At first I thought M. S. and D. S. were too terse in their posts about one of the least understood aspects of C++ by C programmers (and by many C++ programmers). But I reread the thread and saw they both had prior (verbose) posts (and provided links). I’ll just add: Google “placement new” and “operator new” along with “embedded”.

  42. K.H. @LI says:

    I do not think dynamic allocation is a design issue. But if what you mean is whether I use dynamic allocation in an embedded application, the answer is that I never use dynamic allocation in a real-time embedded application; but the GUI side and other non-critical sections widely use this method.
    As a matter of fact, I used dynamic allocation in a simulated environment for the latest project I have done, but replaced alloc, malloc, realloc and all the others with my own memory-management routines, based on static allocation, for the target environment.

  43. R.S. @LI says:

    Hi, K.

    let me amend two comments:

    1. there are GUIs around covering safety-critical functions.
    2. how do you know that your own implementation of “buffer management” is safer than the globally tested “malloc” etc., or something of the like?

    I would advocate the question “how” instead of “yes or no” about using dynamic memory in safety- or mission-critical applications.

  44. K.H. @LI says:

    Hi R.,

    Sorry if I answer you specifically; others may get to read your comment first.
    Answer to 1: In my applications the GUI does not cover safety aspects. We always have a big red STOP button which stops the process safely.
    Answer to 2: As I said, it is based on static allocation; as you can guess, it’s usually a big array of blocks (like disk management). Every time, your process takes a block (of the same size) from an already (statically) allocated buffer.
    Hope this helps to understand.

    static unsigned char blocks[FIX_BUFFER_SIZE][FIX_BLOCK_SIZE];
    // malloc() returns a block from this array and free() returns it back

  45. D.R. @LI says:

    For space systems, not a chance. All memory is allocated during the system startup process. If, for some problematic reason, something fails and the preallocated memory gets full, ground operators command the system recovery somehow.

  46. E.W. @LI says:

    At the risk of seeming facetious, it’s a similar problem to managing the stack in a system with external interrupt sources.
    As many people have pointed out, you can’t judge whether using dynamic memory allocation is acceptable or not without looking at the wider context of:
    - the development process (including the analysis tools)
    - the criticality of the system
    - the architecture of the software

  47. P.B @LI says:

    There is one big difference between a stack and a heap. A stack cannot suffer from fragmentation because allocation and release of memory is strictly LIFO. A heap – by which I mean an area from which allocations of arbitrary size can occur – *will* eventually have fragmentation problems if the order of allocations and releases is also arbitrary. And arbitrary it will be if these operations depend upon external events for their timing.

    Fragmentation is too big a risk for most stand-alone embedded systems to allow a heap of this kind to be used except, perhaps, for initialisation, using strictly sequential code, where all the memory used is freed before the system goes live and becomes vulnerable to fragmentation. The most effective way to provide dynamic memory management, if it is needed at all, is to use pools of fixed-size blocks. This is very much easier to manage in C++ than it is in C.

    Hardware memory management might be of use in larger systems, provided its indeterminate performance can be tolerated, but the relatively large page sizes used mean it is usually not viable for smaller systems. And, of course, you have to pay for the MMU hardware.

    There is one further use for a heap: it can be used as a source of memory to set up the pools, or for other purposes, provided the memory allocated from the heap is never freed and reallocated. There are more efficient ways of providing such a “quasi-heap”, however.
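    P.B.'s “quasi-heap” – allocate during initialisation, never free – can be as small as a bump pointer over a static arena. A sketch, with an assumed arena size and hypothetical names:

    ```c
    #include <stddef.h>
    #include <stdint.h>

    #define ARENA_SIZE 4096  /* illustrative */

    static uint8_t arena[ARENA_SIZE];
    static size_t  arena_used = 0;

    /* Allocate-once, never-free memory, aligned to pointer size.
     * Intended for start-up/initialisation only; returns NULL when
     * the arena is exhausted. There is deliberately no free(). */
    void *quasi_heap_alloc(size_t n)
    {
        size_t aligned = (arena_used + sizeof(void *) - 1)
                         & ~(sizeof(void *) - 1);
        if (aligned + n > ARENA_SIZE)
            return NULL;
        arena_used = aligned + n;
        return &arena[aligned];
    }
    ```

    Since nothing is ever released, fragmentation is impossible by construction, which is exactly the property that makes it safe to use before the system “goes live”.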

  48. S.R. @LI says:

    Generally, there are two problems with dynamic memory allocation in embedded systems. The first is that it may take a long time to allocate the memory (the allocator should find a free slot in deterministic O(1) time to avoid this), and the task may be put to sleep when not enough memory is available (where MMU support exists). The second is that the allocation may fail outright.

    The first problem could be alleviated using memory pool which can be initialized at the system startup.

    There’s no definitive solution to alleviate the second problem. If the section of the code is running critical task, it should preallocate the memory before entering into the section or use static/global memory.

    For multi-processor system, one could use per-cpu variable (mainly for avoiding locks between processors to access the same kind of objects).

    If some data element in a structure remains valid across several subsequent uses, that piece of data should be stored in a static/global variable so it can stay in a cache line. Dynamic memory allocation will lose the cached data in this case.

  49. E.M. @LI says:

    As stated by others, in non-mission-critical UI applications it is fine. I do not use dynamic allocation on flight control or mission management systems for my projects. Any memory allocation is done and validated at system startup. One failure out of 200 million or even a billion instructions is two failures too many for these applications.

  50. D.G. @LI says:

    In my opinion, dynamic memory allocation can be safe, but I would recommend using C99 features when available and deciding, for every allocation, whether it is best to use a variable-length array on the stack (lifetime within the scope of the allocating function) or an allocation on the heap (lifetime beyond the scope of the allocating function).
    Today's toolchain-supplied memory allocators are smart enough to leave as few holes in your heap as possible, and nowadays most individual allocations are considerably smaller than the total amount of memory. Back when I wrote software for 8-bit microcontrollers, this was rarely true.

  51. R.I. @LI says:

    If your OS supports dynamic memory allocation, you have to use it judiciously.
    - Not all allocation will be dynamic. Some processes will have higher reliability requirements than others; such processes should avoid dynamic allocation.
    - If the frequency of allocation and deallocation is very high, consider not using dynamic allocation. The overhead of the memory manager may override the benefits you get.
    In summary, dynamic memory management is a reality and cannot be escaped; it has to be used carefully. Most embedded systems get loaded exponentially with additional features. It becomes practically impossible to support all of them with static allocation (however, if you manage to do it, it will be a very good example of how not to design a feature-rich system).

  52. J.M. @LI says:

    Same opinion for safety-critical.
    You use dynamic memory if you have to share the memory space with other applications (e.g. on your desktop) and you want to be a ‘good neighbor’. Mission- and safety-critical applications get allocated to dedicated processor(s); they typically have full access to the whole platform. The next question is: do you have enough memory? Do you even know how much memory you are going to need? Say the amount of memory needed is reasonable, but your platform does not have it. The first question is: do I really need that much? You might be able to optimize the memory usage (e.g. do all static strings need to be as big as they are?). If you do need the memory, can you increase it? (If your project includes designing the hardware, increasing the memory is a reasonable option.) If you are limited in memory size and cannot reduce what you need, there is a chance something is fundamentally wrong with your project. Only as a last resort, use dynamic memory allocation.

    Consider that dynamically allocating memory is not the problem, especially when you do all of it at startup, which is practically the same as static allocation. It is when you start freeing up the memory for re-use later that you get in trouble. Fragmentation is the main problem of dynamic allocation. When using dynamic memory, consider using predefined buffer lengths from buffer pools. That will reduce fragmentation.

  53. D.T. @LI says:

    This has been a fascinating discussion. I have seen many intelligent and experienced folks stump for one way or the other. Often, the decision is as simple as figuring out the context, i.e. the need to share memory vs. the risks of sharing memory.

    Let me propose a codification of this tribal knowledge into a design-time decision tree.
    if ((MallocBenefit > OtherSolutions) && (MallocCost

  54. H.S. @LI says:

    In TI’s cable modem software reference design, dynamically allocating memory is allowed. We are aware of the real-time implications and memory-leak issues that may occur, so the instruction is to use static buffers in the case of a high-priority task. We try to prevent memory leaks and buffer overflows by holding code reviews and applying code-inspection utilities. It is also possible to debug memory leaks with the dmalloc library (open source).

    -H.
    http://www.rt-embedded.com/

  55. S.T. @LI says:

    Aside from size, the issues are (1) How far is the reset button? and (2) What is the consequence of an unforeseen lockup/reset? In the embedded business risk analysis is the name of the game. ALL development decisions are derived from that analysis, or should be.

  56. B.P. @LI says:

    Dynamic versus static is a big question; it is really determined by the application or function of the software – how mission-critical it is. If it is safety-critical, then static is the only way, depending on the criticality of the requirements.

    The other question we must ask is what happens when we perform malloc() and free(). Do we trust our source code to be programmed correctly to manage dynamic allocation of memory? Do we trust the OS to manage the allocation and release of memory blocks for us successfully? There are stories of operating systems that leak memory until finally they need to be restarted…

    IMHO, to be safe, I would try to use Static memory, but it does force the designer of the software to calculate min/max and optimum use of static buffers… this is a good thing though in my opinion.

    I have recently written a new MIL STD 1553 library and I have taken care to allow use of Static or Dynamic memory allocation, 1553 these days might be used for DO-178B type applications and needs to use Static memory allocation at levels A & B.

    Please feel free to PM me if you want more information.

    B.

  57. P.V. @LI says:

    – Lessons learned –

    I used a (subset of) C++ for an application on embedded devices such as cell phones, PDAs, and car navigation. The application had to handle input of varying size, so the maximum size of various intermediate data structures could not be determined. We could not use over-spec’ed fixed-size data structures due to overall limited memory resources. So we used dynamic allocation…

    One advantage we had was that the application would typically (but not guaranteed) get input in bursts which helped with memory allocation. In addition, it would defragment memory whenever idling. Up-time of the application was easily in the order of days and we would run tests for as long as we could without running out of memory due to fragmentation.

    Having said this, dynamic memory allocation turned out to consume a (still acceptable but) very significant part of the CPU time. On one particular platform the standard heap manager was so bad (~10x slower than usual) that I had to write our own fast memory manager to take over new/delete/malloc/free…

    Overall, I think use of dynamic memory allocation depends on the application spec. It certainly introduces overhead but has its advantages in flexibility. Use with caution :-)

  58. G.T. @LI says:

    I agree with K.; it is better to use dynamic memory allocation in one place and manage it inside the code. At Clarinox we have multi-platform middleware in which we do what you suggest! The user doesn’t know it, because we provide new/delete and malloc/free as usual; they can even be careless. Our PC-based debugger can connect to the target platform and help optimize memory pool usage, detect memory leaks, identify when/where they were allocated, etc.

    The following is an extract from our white paper; http://115.30.39.108:8080/clarinox/downloads/ForEngineers/Whitepaper_06_CrossPlatformDiscussion.pdf

    • dynamic memory management. SoftFrame provides a simple and effective
    memory management module to replace C-style (malloc/free) or C++-style
    (new/delete) calls. Inside these calls is smart, adaptive memory and pool management that does not cause memory fragmentation, yet is fast and efficient.

  59. G.L. @LI says:

    I work with software defined radios that also implement media access control (the MAC of a LAN like network).

    These systems would be very difficult to make work without dynamic memory allocation. We use two dynamic memory allocation systems, one for communications buffers and one for general control structures. Of course most of the control structure allocation occurs at initialization time. But the facilities are used throughout the system operation.

    We have constructed our own allocation systems that track usage. All software is required to recover from an inability to allocate a buffer after system initialization.

    I strongly disagree with those that frown on the use of dynamic allocation. It’s a useful technique; though like most powerful techniques it requires some care and discipline in its use.

  60. D.T. @LI says:

    As one who likes to challenge conventional assumptions, I’m going to kick the anthill here.

    I wonder if those who have zero tolerance for dynamic memory in safety critical applications, could please share their specific examples of an initial design which used dynamic allocation, and which during testing was proven to not meet the safety requirements. And of course, more importantly… why?

    I’m throwing down this gauntlet because I don’t believe that dynamic allocation is in and of itself inherently unsafe. As long as you have covered the exceptional cases where memory is unavailable (due to either out-of-memory or fragmented-taking-too-long situations), and you can still provide the needed safety features, then dynamic allocation is still an option.

    I concede the point that it is often easier to create a hard-real-time solution by eliminating dynamic allocation. But is it really a 100% absolute?

    In the end, the only true hazard with dynamic memory is programmer complacency: such as not checking for a null pointer returned, or assuming that the function will return within an expected time-frame.

    -D.

  61. J.L. @LI says:

    Someone mentioned using pointers in an application. IMHO a statically declared memory structure, using the dot operator as a pointer generator from the compiler, is best. Dynamic allocation should only be allowed where there is a human to reset the machine with no loss. An application that is expected to run unattended and error-free for the life of the machine should never use dynamic allocation. Ever find out what happens when a customer loses functionality in their business phone system? How about a robot expected to handle semiconductor wafers without breaking any, with no human intervention? That’s right: no repeat sales and a mad customer base.

  62. S.R. @LI says:

    Just to kick the proverbial anthill once more…. Whose home-brewed memory management system has undergone the number of hours of testing of the one in the C/C++ library that accompanies your compiler, or of the specialized one that accompanies your commercial RTOS? Put another way, how many bugs did you address getting yours to production, and how sure are you that you’ve seen the last? I honestly don’t recall ever having found a bug in malloc/free or new/delete, though instrumenting them has been essential in identifying misuse by in-house developers. While arguments based on fragmentation and non-deterministic behavior hold true for many systems, there is a lot to be said for sticking to your company’s core competency, and having some trust (with verification) in your tools.

  63. T.B. @LI says:

    I think dynamic memory allocation won’t harm you if you completely understand your system. I am not sure about mission-critical projects, but for the rest of embedded systems, like mobile devices and media players, it can be safely used. However, one must pay thorough attention to the third-party applications running on your device. Sometimes you might be busy avoiding malloc and new while some other application is using the STL without your knowledge.

  64. G.L. @LI says:

    @S. R.
    As I said we have two memory managers, one for buffers and one for control structures. We also have several implementation environments, with some portions of the software running on several different embedded systems.

    In most cases the control structure allocator is a set of instrumentation and tracking code running on top of the manufacturer-supplied memory allocator. I don’t think we have ever had a bug in any version of that.

    The buffer allocator comes in a couple of flavors: one emphasizing speed of allocation, the other emphasizing efficient use of memory. So far we have not had a deployed bug in either one, though the high-speed one had a few problems during development.

    We prefer to use the manufacturer’s code as a base when it meets our needs, but we are not afraid to replace it when it’s too slow or unpredictable. It’s just another case of picking your battles.

  65. J.B. @LI says:

    I would just like to point out that 99.9% of the systems any of us will develop actually use “dynamic memory allocation”. Most of those who claim they do not use it are simply refusing to call malloc() and free() (and their ilk); they are NOT avoiding dynamic memory allocation. If you “allocate all your memory at initialization”, aren’t you really just substituting your own purpose-built dynamic memory allocator for the general purpose one provided by the malloc library?

    There are several axes to consider: First, software bugs: you probably consider your linked-list of buffers (or whatever particular management data structures you use) to be simpler and easier to test for bugs than the general purpose — and hence significantly more complex — malloc library. That it is simpler is quite probably true, though as others have pointed out above, whether it has fewer bugs is something that might be debatable, given the long history and extensive testing of some of the malloc implementations.

    Second, fragmentation: this is definitely something you need to consider if you have a relatively tight memory arena and/or are apt to execute code that makes many differing size allocations and also holds onto those allocations for a long time. But then you are simply building that consideration into your own custom-built memory allocation strategy again.

    Third, performance (“real-time” response). Obviously, this is also a continuum along which each application falls and the designer must take into account the effect of the algorithm used to manage memory.

    Dynamic memory allocation does not mean that the system *must* eventually fall into ruin due to fragmentation. Consider an operating system kernel as a paradigm (e.g. linux, or even windows): the OS may be providing the illusion of virtual memory beyond the actual physical memory available to applications, but for critical internal data, the OS itself almost always has a finite limit to the amount of memory it can utilize. Yet, there are systems that can go for years on end without being rebooted even while being heavily used every day. [I now have several linux systems that have been running for more than a year, and on several other occasions have had systems that ran continuously for two or three years and could have gone longer except for power outages, kernel upgrades, etc.] Operating systems aggressively manage memory in order to achieve this: usually adopting various best-fit allocation strategies and ensuring that memory is recovered / coalesced upon release.

    Admittedly, if you require frequent allocation of buffers that need to be made available on a real-time basis, a linked list of buffers is likely to give you more predictable behavior and timing, but don’t argue that this isn’t dynamic memory allocation.

  66. N.D. @LI says:

    In real-time systems, dynamic allocation can eat you alive! Static analysis tools like Coverity and Klocwork, and libraries like the STL and Boost, have come a long way over the years, as have testing and monitoring techniques. However, with real-time safety-critical systems, are you willing to put your customers’ lives or national security on the line with dynamic allocation?

    The article _A Comparative Study of Industrial Static Analysis Tools_ by Par Emanuelsson and Ulf Nilsson has some relevant observations about the efficacy of tools. They are not a panacea.

    At the cost of a little more memory, allocating “dynamic” resources at start-up (or elaboration) time is well worth it! For non-safety-critical systems, there is more room for variation. Tools, your organization’s Capability Maturity Model level, the expectations of your industry, how you define “fail safe”, etc. are factors in the answer that is right for you.

  67. D.T. @LI says:

    Regarding the comment: “However, with real-time safety critical systems, are you willing to put your customers lives or national security on the line with dynamic allocation?”

    The best engineering approach will of course always be to step out of the fear-uncertainty-and-doubt ruts, and put down actual numbers.

    It is true that race-condition related bugs are high on the list of Embedded Engineering Mistakes which Caused Death, and dynamic allocation is one potential ingredient to said race conditions. However, you will find that in each case use of dynamic allocation in and of itself was not the cause of death. Feel free to present an example that proves me wrong here, I would like to read about it.

    But here’s the real problem with calling dynamic allocation a monster and throwing it out without further consideration. FALSE SENSE OF DESIGN SECURITY. If your design cannot handle the vagaries of dynamic allocation, I contend it will not handle sensor variations, broken protocols, invalid signals, unanticipated user inputs, and surprising hardware failures.

    We must design with margins. For safety critical systems, the margins must be larger. Dynamic allocation can (and for some designs, should) fit within those margins.

    -D.

  68. K.F. @LI says:

    As was said by others, the question is a bit simplistic for the real issue and is therefore hard to answer as stated. I agree with the opinion of others that there are two main factors to consider: determinism and correctness.

    If the dynamic memory allocation is in all cases deterministic, and its use is in all cases correct (i.e. no leaks, no faults), then dynamic memory allocation may be useful for some applications.

    This is why I often balk at development projects that want to use programming languages with non-deterministic memory management/garbage collectors. You can rely on neither their determinism nor their correctness.

  69. K.P. @LI says:

    No one will dispute the fact that a software engineer will be more productive when using a language that has a higher level of abstraction (like Python, Ruby, etc. over say, ‘C’.) If you are building a system with a GUI, life becomes very difficult if you cannot use an object-oriented language and dynamic memory. (You can still design a GUI, it is just harder to do with statically allocated memory.) The problem is that almost all of these higher level languages require (either in the language definition itself, or in the companion libraries) some kind of dynamic memory mechanism, and many of these also use a garbage collector.

    As was already mentioned many times, no matter how you do it, module-scope dynamic memory allocation is not deterministic; and I won’t even mention what a GC can do to a real-time system. Because of this *KNOWN FACT*, safety-critical systems REQUIRE the use of statically allocated memory. [See the relevant safety-critical standards, such as DO-178B, etc.]

    If you can partition your safety-critical system into a safety-critical portion and a non-critical portion (and do so absolutely with some kind of hardware memory protection), then you can have the best of both worlds. This would mean that you would need some kind of mechanism that will guarantee that the safety-critical code will always have some minimal access to the CPU time required to maintain a real-time response profile, and will have a guaranteed protected area of memory that the non-safety-critical code cannot corrupt. As mentioned before, one way to do this is to use a hypervisor (like the one from OK-Labs for example)– that provides two virtual machines– one for the safety-critical code, and one for everything non-critical. The non-safety-critical code can be restarted by the hypervisor if it fails, but meanwhile the safety-critical code keeps on going. There are other ways to accomplish the same thing, but I am certain you get the idea now. In the non-safety-critical section, we may now use higher-level languages, object-oriented programming techniques, dynamic memory allocation, garbage collectors, etc.– all of which makes life much easier for things like GUI’s, etc.

    So, we don’t need to choose to use dynamic memory (or not)– we really can have the best of both worlds– it just has to be done in a way that is guaranteed to work. Of course in a non-safety-critical (or non-mission-critical) system, there is no reason why you can’t use dynamic memory, as long as there is enough memory to have a heap in the first place.

  70. R.S. @LI says:

    There are times when operators have to make quick safety-related decisions based on what the computer’s screen tells them about real conditions. And there may be no other feasible way to identify the real situation, because it may be far away, or more uncertain than relying on the computer.

    With the recommended segregation of software of differing safety levels you would surely solve the problems of preparing the safety case. But you will not be able to deliver intuitive graphics and state-of-the-art animations on a single screen you can rely on, given feasible development budgets.

    However, if someone comes up with a smart solution for using unsafe software to build safe systems (as an unsafe channel is used to build safe communication, or unsafe complex CPUs are used to build safe hardware), it would become possible to deliver situation displays with the intuitive quality every kid enjoys on their home PC, at almost no cost.

    How we look at the question of dynamic memory and pre-developed software will continue to be the subject of investigation, innovation, and new experience. With all precautions in mind, I do not believe that we are able (or allowed) to come up with a final answer based on the safety thinking of the past.

    I would like to discuss the question “how”, not “whether”, and I’m looking forward to DO-178C.

  71. L.S. @LI says:

    We preallocate memory at initialization, using linked lists (LLs) of different-size buffers; at run time the allocate function decides on a buffer size, grabs a buffer from the appropriate LL, uses it, and returns it to the LL afterwards.

    This way is predictable and has no fragmentation issue. There is still some memory waste from the fixed-size buffers, but it can be tuned by how many LLs you choose (more LLs mean better memory optimization).

    The cost is not high compared to the advantages; even malloc wastes memory.
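    L.S.'s run-time size decision can be sketched as a walk over the size classes, smallest first. The class sizes here are hypothetical, and a real allocator would go on to pop a buffer from the chosen pool's linked list:

    ```c
    #include <stddef.h>

    /* Hypothetical size classes, smallest first. */
    static const size_t pool_block_size[] = { 32, 128, 512 };
    enum { NUM_POOLS = sizeof pool_block_size / sizeof pool_block_size[0] };

    /* Map a request of n bytes to the smallest size class that fits.
     * Returns the pool index, or -1 if no class is large enough.
     * The caller would then take a preallocated buffer from that pool. */
    int pool_index_for(size_t n)
    {
        for (int i = 0; i < NUM_POOLS; i++)
            if (n <= pool_block_size[i])
                return i;
        return -1;
    }
    ```

    The waste L.S. mentions is visible here: a 33-byte request consumes a whole 128-byte buffer, and adding more size classes shrinks that gap at the cost of more pools to size up front.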

  72. D.D. @LI says:

    There already are embedded systems in production, for critical mission, having been certified to DO-178B, and so on, that make use of dynamic memory allocation. This is a first fact.

    Some are based on JamaicaVM, which is a Java VM, with hard real time capabilities and a deterministic garbage collector, specialized for embedded systems. This is a second fact.

    Most of the issues I could read in this discussion look outdated: conservative measures such as disallowing dynamic memory management are a bit bizarre when you consider the big advances in cache memories and multicores invading our good old embedded territories, making former rules obsolete – even dangerous to rely on.

    This is the last fact: it is a surprise if you don’t know about JamaicaVM – go and check it now!

  73. T.W. @LI says:

    @D., looking through your post I have a feeling you address the topic mostly from an application-software perspective – maybe because that is what you’re most familiar with; yet this topic reaches even lower than that.

    Right now, where I work, we have a number of pieces of equipment with “predictable” memory utilization, running only and purely embedded software. For us, using the OS’s native calls (on systems like VxWorks) is asking for trouble – you wouldn’t even be able to keep track of memory fragmentation when your target system runs something like 80 cores and around 40 times more tasks, where most of the tasks use memory for communication (so these are relatively small allocations). Don’t forget that every allocation costs you a bit of time, too ;-) .

    Running through various implementations, the one I liked most used a slab allocator or a similar solution; you can keep track of all your memory allocations and disposals (without having to mess with the OS), and your allocations are much faster and safer (no fragmentation). We still allow dynamic memory allocations, but only where time is not at stake and where it’s safe (say, de/compression, socket I/O, etc. – usually allocating fixed-size blocks of something like 64 KB).

    Now, to summarize, I completely agree with your approach as long as we’re talking about applications. However, if we talk about systems that are not meant to be ‘user programmable’, run a very specific list of tasks, and are required to operate fast and be reliable (plus potential memory issues should be easy to debug to prevent resource shortage), dynamic memory allocation is still out of the question (there’s not even a way the customer could add extra memory modules, btw ;-) ).

    [Naturally we are also working on hardware that has enough horsepower and memory to run Linux, and at that point most guys don't really care about memory allocations. Too bad they had a hard time finding leaks ;-) But then, the tasks running on these are not really time-critical; that's why we don't even stick to RTOSes there.]

  74. B.G. @LI says:

    I do mission-critical systems. When I do not, I still treat the system as mission-critical. This keeps things simpler, because it makes design, implementation, and debugging easier.

    Sometimes I have the luxury of an RTOS (ThreadX and uCos are my top 2), but sometimes I have to do control loops with state machines and interrupts. Most systems do not use MPUs with an MMU, or it is simple, ie a Coldfire.

    I also do not like MPUs with cache – determinism goes out the window.

    I have to agree with Louay – I create private buffer pools and malloc/free out of them. I always know the size of buffers I need. I usually create 2..6 different buffer pools.

    There is one trick you can use if you have a bunch of malloc()s followed by a general free() (i.e. malloc(), malloc(), free_all()). This is most useful when doing a transaction of some sort with a bunch of pieces: once the transaction is completed, you start from scratch with the next transaction.

    To use this technique, start with a single buffer large enough for all cases. Create a data struct to manage the malloc()s. Malloc() as needed from it. When all done, re-init your data struct. (All malloc()ed references become void – you could use some sort of tag to verify whether a reference is “live” or “old”.)
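    The malloc(), malloc(), free_all() trick can be sketched like this; the struct, names, and buffer size are illustrative, not B.G.'s actual code:

    ```c
    #include <stddef.h>
    #include <stdint.h>

    #define TXN_BUF_SIZE 1024  /* "large enough for all cases" -- assumed */

    struct txn_arena {
        uint8_t buf[TXN_BUF_SIZE];
        size_t  used;
    };

    /* malloc(): carve n bytes out of the transaction buffer;
     * NULL when the buffer is full. */
    void *txn_malloc(struct txn_arena *a, size_t n)
    {
        if (a->used + n > TXN_BUF_SIZE)
            return NULL;
        void *p = &a->buf[a->used];
        a->used += n;
        return p;
    }

    /* free_all(): one store re-inits the struct; every outstanding
     * reference becomes void, and the next transaction starts fresh. */
    void txn_free_all(struct txn_arena *a)
    {
        a->used = 0;
    }
    ```

    Individual frees are impossible by design, which is exactly what makes the scheme both fast and fragmentation-free for transaction-shaped workloads.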

    If you manage your own buffer pool, you can return pointers, indexes, or tags (opaque UINT16/UINT32 with hidden indexes, time stamps, pool numbers, etc for safety and verification). Supply the functions to use the indexes or tags for the actual access – this reduces the dangers of pointers by restricting them to your functions. You can also do easy bounds checking on indexes.
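    The opaque UINT32 tag B.G. describes might pack a pool number, a generation stamp, and an index; the field widths below are an assumption for illustration:

    ```c
    #include <stdint.h>

    /* Pack: [ pool:4 | generation:12 | index:16 ] -- illustrative layout. */
    typedef uint32_t buf_tag;

    buf_tag tag_make(unsigned pool, unsigned gen, unsigned idx)
    {
        return ((buf_tag)(pool & 0xFu)    << 28) |
               ((buf_tag)(gen  & 0xFFFu)  << 16) |
                (buf_tag)(idx  & 0xFFFFu);
    }

    unsigned tag_pool(buf_tag t) { return (t >> 28) & 0xFu;   }
    unsigned tag_gen (buf_tag t) { return (t >> 16) & 0xFFFu; }
    unsigned tag_idx (buf_tag t) { return  t        & 0xFFFFu; }
    ```

    A stale (“old”) handle is detected when its generation field no longer matches the pool's current generation for that slot, and bounds checking the index is a cheap comparison – the safety properties B.G. gets from avoiding raw pointers.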

    Another advantage of private buffer pools is keeping metrics – number allocated, number free, mean time allocated, which caller allocated, etc.

    I also do not see the advantage of malloc() at boot vs statically allocating at compile time. I statically allocate.

    The problem with C++ is that malloc()/new can happen silently. I will not use C++ for an embedded system for exactly that reason – I always use C. Even if one C++ compiler does not silently malloc(), another might, so the code is not portable.

  75. K.V. @LI says:

    Not on my PIC16 project with 128 bytes of RAM!

    Probably on an embedded linux project

  76. J.L. @LI says:

    I allow only statically allocated RAM in embedded systems, as I have seen a major project at an international corporation fail commercially because of dynamically allocated, user-editable memory with garbage collection!
    I’ll not bore you all with the details – ask me if you like.

  77. D.S. @LI says:

    @b. — I also work on mission- and life-critical systems, and I share many of your views (buffer pools, etc…)

    The one thing I want to point out is the issue with C++. C++ (the language) does not do any “behind the scenes” dynamic allocation, at least that I’m aware of. A non-conforming compiler might violate the C++ standard, but that’s no different than a compiler which increments an integer variable by 2 with the post-increment operator ++ — it’s just wrong.

    There are plenty of programs (and library calls) which do make use of dynamic memory, but that’s a different issue altogether. If I use an STL container or the new operator, then I fully expect dynamic allocation to take place.

    I work mostly on ARM, PowerPC and MIPS targets, and I don’t recall encountering any such problems in at least the last 10 years. Depending on the target and/or timeframe you’re talking about, your mileage might have been different.

    Endnotes: Dan Saks, who has probably forgotten more about C++ than I know, in his article “Poor reasons for rejecting C++” (at http://www.embedded.com/design/219401085 ), says the following:

    “I know of no place where the C++ language performs dynamic allocation or recursion behind the scenes. Indeed, your code might call a function that uses dynamic allocation or recursion, but this is no more a problem in C++ than in C. In fact, C++ supports simple compile- and link-time techniques you can use to explicitly prevent using dynamic allocation, which I’ll cover in an upcoming column.”

    I don’t want to hijack this thread, but I’d be happy to pass along other sources of info on C++ & dynamic allocation – just contact me.

  78. B.G. @LI says:

    @D.:

    I don’t want the thread to start into the weeds on C++ either. That said, I *do* have several biases against C++. On the dynamic allocation front, it used to be that C++ could create a new object if you declared one as a local variable in a function. I understand this new object now gets created on the stack (ie constructor called upon entry into the function, with all data on the called function stack), then the destructor gets called on function exit.

    The issue is with compilers that *do not* put the local object (for whatever reason) on the stack, but call malloc() instead. Another is ill-formed classes from “re-use” that call malloc(), perhaps in a library. Not the compiler’s or the language’s fault, but it can still get you.

    If this has changed in the last while (ie 10 years), then good – it will save other folks’ systems from *that* particular failure. I am not planning on changing my bias against C++ anytime soon, because I think it is half-baked. YMMV.

    On sprintf() and malloc(): I like the library call to provide your own buffer to sprintf(), and use it when I can. The problem is: it is not thread-safe. To get around this, I have modified the library code or written my own thread-safe versions of several library calls, and avoid the “bad” ones (strtok()).

    I have read most of the comments on this thread and do not recall anyone bringing up the concept of having your own malloc() and free() calls in a library which gets linked in early. The functions complain when called, so you can find offending code. Of course, one should also look at the link map to see if malloc() is being called – one of the advantages of statically declaring your buffers, because you should then never see malloc().

    I have yet to find any case in any embedded system I have worked on that would require a call to malloc() … ever.

  79. M.W. @LI says:

    The C++ language itself does not allocate memory per se. However, there are C++ language features that will do memory allocation for the (somewhat) naive user. In particular, I refer to the STL container templates. Not all embedded compilers use the standard version of the STL (see the IAR tools), but they are close. Some containers can and will do memory allocation. This is definitely a gotcha hidden in the (not so deep) bowels of the STL and may cause non-deterministic behavior.

    I favor memory pools and doing memory allocation early (i.e. at startup) and then never again. If there’s a memory allocation taking place from a stack-declared object, then you (or somebody else) did something wrong in the declaration or definition of that class/object (or somewhere down the line). In fact, this bit me recently… Fortunately, because I had my own memory-pool wrapper class, I was able to change it without affecting all the users (hurray for encapsulation!) and resolved the issue quickly.

    It seems that we have more and more memory/heap space available to us… I say use it, but use it wisely. Allocate/reserve it early, whatever the language, and then manage it accordingly. This helps make systems more scalable and data-driven, if applicable. C++ has numerous features to help you do this without reinventing the wheel! As both Dans (Smith and Saks) have stated, people tend to dismiss C++ because they think it does things behind the scenes and has a lot of memory and CPU overhead. Yes, there are some cases where C++ will use a little more memory or cause an extra pointer dereference. However, C++ can save you CPU usage, code size, and memory in several cases while taking advantage of OO features.

  80. [...] of the week, most notably “What matters most when choosing an embedded processor?” and “Do you use or allow dynamic memory allocation in your embedded design?”, uncover at least two different developer biases with regards to memory management hardware. [...]

  81. W.K. @ LI says:

    Unless you’re using FORTRAN or you’re not using variables local to functions, you’re using dynamic allocation. You’re just dynamically allocating from the stack rather than the heap. By using a language like C++ (properly), you can have memory deallocation happen almost as automatically for the heap as it does for the stack.

    The big problem with dynamic allocation (of memory or other resources) is that it’s very challenging to have test cases that can produce the worst-case sequence of events that maximizes the high-water mark of resource usage. Most embedded systems have many events to handle, but only one or two or some small number of CPUs. We dynamically allocate CPUs to the handling of particular events. So it’s a hard problem to get away from entirely. We tend to deal with it by having large safety margins of CPU horsepower, stack space, heap space, or whatever resource is dynamically allocated.

  82. A.F. @ LI says:

    Certainly at the device driver level you will use pre-allocated buffers. VxWorks was handy in providing pre-allocated buffers.

  83. R.R. @LI says:

    Excellent question. I read many interesting answers, most of which I agree with, and all have good points.

    Although I’m okay with allocating memory during initialization, dynamic memory allocation for a real-time embedded system is not a good idea. I also like the idea of allocating a memory pool, which can also be used for Producer/Consumer buffers.

    Some people mentioned that you can do it as long as you are very careful, well, I’d like to compare it to playing with gasoline (or Nitro as Dan mentioned). It may be safe and you can be careful, but what about someone else? What if someone decides to light a cigarette while you are handling the gasoline??

    In most (if not all) programming projects, the code will not stay with the original developer. Code has to be written such that it is maintainable, scalable and readable. That means that someone will implement new features, fix bugs that suddenly appear or arise as a side effect of new features, etc. Often, this is passed down to someone else. The more mature the code, the more junior the person who handles it. Now, what happens to the amount of care that is taken by the original developer with dynamically allocated memory? How many people actually take the time to design or revisit the original design before parachuting into the code and implementing a solution in less time than it took management to decide to have the new feature in the first place?

    I can’t speak for embedded applications for smart phones and such, where if it crashes, you simply power the device off and on and keep going, while cursing at the unit for having to re-type the message. I’m sure we have all seen such examples.

    I’m not convinced that exception handling is a solution for performance and memory issues (fragmentation, leaks, depletion, etc) caused by dynamic allocation. Isn’t that considered masking a problem? Dealing with unusual events is one thing, but expecting problems due to architecture and patching it is another. What happened to the concept of “robustness”? Why should only mission critical software be coded with care? Why should the same level of care not be taken with any application?

    I agree with D. that “we have touched on a bit of a religious issue here”. My comments are not to add fuel to the fire, but rather as food for thoughts. I very much enjoy topics like this one, where we read various points of view.

  84. P.B. @LI says:

    R., I think you have summed up the general concerns – mine included – rather well. Here is a summary of my stance:

    (1) When programming in C, my strategy is to avoid dynamic allocation if possible. Yes, it can be done safely (pools of fixed-size buffers), but the management of it all can be tricky, and all the infrastructure code is plainly visible for anyone to misunderstand and abuse when “maintaining” the software under pressure! I would NEVER use the standard malloc(), etc., unless it were just for initialisation, as you mention.

    (2) I regard dynamic allocation as an important aspect of many OO designs, so when programming in C++ I accept that and override new and delete for any class which requires pools of objects. Luckily, this can be done by deriving each such class from a predefined template class (see my blog at http://software-integrity.com/ ), so the management and maintenance issues just go away. As with malloc() (which is lurking under the covers), the standard new and delete are dangerous, and new[] and delete[] are abominations which should be banned – and are in my code!

  85. Rick Matz says:

    MISRA C guidelines frown on dynamic memory allocation.

  86. D.H. @ LI says:

    As others have pointed out it all depends. I have worked on a number of embedded systems where dynamic memory allocation worked very well. You can break that down further into systems that use fixed size buffer allocations which are very deterministic and avoid fragmentation issues and systems that use variable size memory allocators which tend to be non-deterministic and subject to fragmentation. Both of these problems can be managed in the design. In today’s environment you need to be flexible and go with what works for your specific application.

  87. I.B. @ LI says:

    To me, the answer is “nearly never”.
    In embedded design you nearly always have a process which must not crash, no matter what, or must react in a given time frame – either of which is incompatible with the generic malloc/free (and with C++’s operator new, which is based on them). So at least for these critical processes you have to employ the buffer-pool memory management scheme – and isolate these pools from the other, non-critical processes as best you can. For everything else you can use the generic malloc/free – provided the crash of these processes is not critical AND you have a mechanism in place to restart a crashed process without the need to restart the whole system.
    B.T.W. most third-party libraries accept a malloc/free replacement and even provide the hooks to do that. C++ provides these hooks at the language level in the form of an overloaded operator new. So isolate these libraries, with their memory management infrastructure, in a sandbox to keep the critical parts of your system protected.
    As an added benefit of this separated-memory-pools model, memory-leak debugging is greatly simplified. In case a pool is exhausted, only the ill-behaved subsystem will be affected. Another related benefit is that the system behavior is more acceptable from the user’s perspective. An operation rejected with a message like “this data set is too big for the device to handle” is far more acceptable than a crash in a seemingly unrelated subsystem – especially if it happens hours later.

  88. B.R. @ LI says:

    My experience (perhaps similar to S. S. comments) has generally been that memory pools can be allocated for specific purposes.
    - If each pool employs an appropriate fixed block size, fragmentation issues can be avoided and guaranteed response time can be achieved.
    - If pools are limited to single-thread usage (e.g. a GUI thread), it is possible to achieve better performance than using a general memory pool with thread-safe mechanisms.
    - The pools can be instrumented during development to determine normal and worst case behavior and to verify that resource leaks do not exist.
    - During operation a full pool condition can be treated as a major fault and the system placed in a safe state.
    So I don’t see dynamic memory allocation (from fixed pools) as being problematic.

  89. D.C. @ LI says:

    In critical embedded systems, dynamic memory allocation is rarely if ever used. I wrote about the reasons for this in my blog on verification of safety critical software last year, see http://critical.eschertech.com/2010/07/30/dynamic-memory-allocation-in-critical-embedded-systems/ .

  90. A.S. @ LI says:

    Hello ladies & gents,

    I’ll stick to the facts. The question as submitted is clear to me:
    do you use or allow dynamic memory allocation…?

    So my answer comes in two parts:

    First: in my job as an architect of embedded systems, I basically use it only for what it is suitable for:
    - the initialization phase, once the boot and the raw-level BIOS have started…
    - OS set-up

    Second: I allow it depending on where the code is acting…
    I have never used it in the firmware phase, let’s say when you are booting the device…
    As a hardware engineer I have designed many ASICs in the automotive field – not for control subsystems, but for telematics and radios… Every critical aspect of boot reliability lies in the states which the device issues during the boot: currents, node voltages, register values, register out-states… and so forth.
    All this stuff depends dramatically on the device behaving like a deterministic system, i.e. on whether you can describe exactly the chronogram followed by all the controlled parameters.
    It is dangerous, and very expensive, to have designed firmware code whose use of dynamic memory makes its timing unpredictable. How would you design the test program to check it at EWS?
    In fact the boot is important even for the test coverage of the systems we are developing nowadays: one or more CPUs and many memory devices (ROM, RAM, FLASH, D-RAM, …).
    Basically the boot firmware is embedded in a ROM memory, so it is part of the device!
    Finally, I allow it only once the boot is accomplished and you need to configure and manage the application. Of course, some critical application code would never be designed with such a memory-allocation scheme, given that you must take care of every deadline and recovery-time aspect.

  91. S.M. @ LI says:

    An interesting white paper on memory allocation for embedded systems topic is available at:
    http://www.ittia.com/resources/white-paper-confident-memory-management-embedded-devices

  92. S.M. @ LI says:

    Hello,
    I used dynamic memory allocation for putting messages on a bus.
    The function was used in a railway application; the bus is MVB and the function is part of the TCMS design.

  93. D.C. @ LI says:

    Memory management can be very dangerous if you don’t know what you are doing, but sometimes it is the best shot you have. I often use dynamic memory allocation, but we use a continuous process of searching for memory leaks and double frees.

  94. Dynamic memory allocation makes complicated programs simpler. Yes, it does introduce the possibility of memory leaks but those leaks must be guarded against by (1) strong coding standards, (2) the use of analysis tools such as Coverity, and (3) testing, testing, testing.

    Static memory allocation is fast from an execution point-of-view, though.

  95. J.C.R. @ LI says:

    I’ve seen lots of projects that start-out with static-only memory, but as their needs grow, they find themselves in need for a more efficient use of the available memory. Dynamic memory management can be very helpful in reusing your memory, in particular in cases where processing happens in stages, or for occasional processes that need a lot of memory for a short period of time.

    Dynamic memory doesn’t necessarily need to be malloc and free. Fixed-size memory block pools are a very simple and efficient way to manage dynamic memory, and they don’t suffer from fragmentation problems.

  96. M.G. @ LI says:

    As I am in the aerospace industry, I would not recommend using dynamic memory allocation. Due to the safety-critical and hard real-time nature of the work, dynamic allocation is a high-risk trade-off.

    Having said that, in a Level E application like passenger entertainment, dynamic allocation can be allowed, which can reduce cost drastically due to the lower criticality.

  97. N.F. @ LI says:

    In the automotive area, especially in safety products such as brakes, dynamic memory allocation is not used, or at least is strongly discouraged, because of the risk involved. Debugging a problem caused by dynamic allocation is also time-consuming, and such problems are hard to detect. If you are focused on quality (0 failures ppm), you must make some compromises and find other ways to increase system performance (a better hardware architecture for the system). In the end it can be much cheaper than shipping a product that reaches the customer with a lot of problems.

  98. V.S. @ LI says:

    I feel that systems like aerospace, elevators, etc., which involve risk to human lives, should avoid dynamic memory allocation. This is because its behavior is highly unpredictable if the software is not designed to handle all kinds of errors and exceptions.

  99. B.B. @ LI says:

    Generally, allocation is fine for startup. But I don’t ever deallocate it, so I don’t have issues. If I need to manage memory, I do it myself. “Most programmers think dynamic memory management is too important to be left to the programmer, but a real-time programmer thinks it is too important to be left to the system.”

  100. M.S. @ LI says:

    In the EmbeddedGurus.com blog “A Heap of Problems” ( http://embeddedgurus.com/state-space/2010/01/heap-of-problems/ ) I have compiled a list of problems the free store can cause in real-time embedded systems. In the following post “Free Store is not free lunch” ( http://embeddedgurus.com/state-space/2010/01/free-store-is-not-free-lunch/ ) I summarize common sense guidelines for dealing with the heap.

  101. M.H. @ LI says:

    Some architectures, for example partitioning messaging and control protocols into carefully encapsulated entities with controlled messaging between them, naturally call for buffer allocation/deallocation from a common pool. We chose to implement our own slotted pools of specific buffer sizes, “dynamically allocated” at boot. The pools are used both for timing-critical processes and for less timing-sensitive processes. This also allowed us to implement our own custom overrun checks, usage stats, and debug-dump code. It does take some effort to analyze and optimize the buffer sizes and the number of buffers of each size in a tight RAM situation.
    A discussion of the pros and cons of dynamic memory allocation is also a good interview topic.

  102. D.B. @ LI says:

    It seems to me that to do dynamic memory management in a hard real-time system, you have to be able to show several things: memory allocation is deterministic, memory freeing is deterministic, and the memory management function is pre-emptible. The last point is needed for systems using real-time operating systems, so that the highest-priority thread is always running (subject to scheduling latency).
    Using dynamic memory obviously makes programming much easier.
    Although I don’t have a solution for C or C++, there is a formally proven, and industry proven solution for embedded, deterministic Java. Today, developers can use the Java programming language with the RTSJ (real-time specification for Java), and the JamaicaVM deterministic memory manager, RTGC, to develop embedded, “hard real-time” applications.
    For a paper on the formal proof, or other information on JamaicaVM, please contact me at dbeberman@aicas.com
    Disclaimer: I am associated with the company, Aicas, Inc.

  103. D.T. @ LI says:

    @D.,

    Early on in this thread, I made the point that although it is difficult to use dynamic memory in a hard real time system, it can be done properly, and sometimes it SHOULD be. Clearly the votes for Fear Uncertainty and Doubt have weighed heavy in many of the comments… so thank you very much for your offer of a formal proof that it can be done right. Perhaps you could post a link to avoid the “email harvesting” aspect of being required to send you an email?

    @All,

    One more plug for using dynamic memory in HRT apps. I have worked in avionics, and during safety analysis reviews and discussions it was regularly found that bad user interfaces were more likely to cause hazard to the aircraft than dynamic memory problems. That’s right, confusing or misinforming the pilot is much more hazardous than simply making it clear that the device isn’t currently working.

    Yes, dynamic memory problems can cause a device to cease functioning properly. So can a failed resistor, or connector, or even inaccurate data stored in flash. You have to do the work to anticipate these problems and design a way for the system to handle them appropriately.

    So, design the hard real-time portions of your project to not use malloc (or to use a custom designed malloc). Then allow the lower-priority threads to use malloc in order to gain safety, reliability, or even just a better user interface. Don’t succumb to FUD, do your homework and then use malloc whenever there is more benefit than cost.

    Take-away points:
    1. Eliminating dynamic memory does not imply safety.
    2. High safety/reliability requirements do not automatically preclude dynamic memory.
    3. It’s a complex issue, but it can be worth the extra work.

    -D.

  104. D.G. @ LI says:

    Hi Robert, it depends on the application field of your system. I work in the area of transportation (car manufacturers, buses, railways, etc.) and we don’t allow the use of dynamic memory allocation. Our software must be compliant with certain norms (e.g. MISRA for cars) that don’t allow any dynamic stuff. This has nothing to do with the size of the memory or the performance of the microcontroller. Nevertheless, we use an MPU in order to improve the safety of the system.

  105. D.B. @ LI says:

    Hi D., I’m not sure if this is relevant for you, but the latest aviation safety-critical spec, DO-178C, will allow both dynamic memory management, as long as it meets the specified criteria, and object-oriented languages, again as long as the criteria are met.
    Agree with you about user interfaces having a critical role to play. By the way, I have heard the same comments coming from the medical device industry about user interface related error.
    Safety in software is not just about meeting hard deadlines, I would agree.

    Per your request, I’ve also just put up a paper that describes some of the technology in a blog (jamaicavm.blogspot.com), and since the cut&paste messes up the formatting, a link is at the top of the post and below.

    http://www.aicas.com/papers/jtres07_siebert.pdf

    There is a book available for order on the website: “Siebert, Fridtjof: Hard Realtime Garbage Collection in Modern Object Oriented Programming Languages.”
    That goes into detail on the mathematics and algorithms of the technology.

  106. S.G. @ LI says:

    Military Embedded Systems journal has published an article by yours truly: “Justifiably taboo: Avoiding malloc() / free() calls in military/aerospace code”

    You can read it here: http://goo.gl/YWFK9

  107. E.D.J. @ LI says:

    Pretty tough to write the ~10M line real-time applications I deal with every day without dynamic memory allocation.

    Intercepting cruise missiles is obviously mission- and safety-critical but can not be a hard real-time application and cannot be done deterministically.

  108. D.B. @ LI says:

    Hi D.,

    Responding to your comment about ~10M lines, and determinism:
    With respect to your memory management, and general program constructs overhead, there are solutions available that are provably deterministic and can be used for safety-critical (DO-178 related) and general real-time systems. One such solution does support dynamic memory while maintaining determinism and can be used in real-time systems.

    If you mean that the algorithms to meet your requirements can not be implemented to run deterministically, such as recursion with a varying boundary condition, or iteration to meet a minimizing error condition, for example, the issue would be the algorithms, not the use or non-use of dynamic memory, I think.

    Of course, using a memory management system that is non-deterministic obviously would make it impossible to implement a deterministic system with it.

  109. M.K. @ LI says:

    Hi all,
    The malloc() function used for dynamic allocation is a complete no-go. What “real-time dynamic systems” do instead is calculate and rewrite calibration tables in the reset phase or init phase of the system.
    The question is why dynamic allocation is blocked:
    the question that arises is what happens if an overflow occurs when a value is assigned to the allocated memory.
    This is difficult in fixed-point systems where tight data typing is needed. However, with a good understanding of offset and resolution this can be extended to use in counters for statistics logging, etc.
    This is also an issue according to the norms devised by the IEC and the ISO working draft of the 26262 norms.
    Except where really required, for 2D or 3D tables or matrices and for self-extracting macros in code generation (and only in code-generation methods, to keep code generation from being too time-consuming; such code should be checked before road-release tests of the vehicle), pointers are used; otherwise they are purely avoided in automotive.
    So, for me as a safety-systems architect and designer, dynamic allocation in real-time systems, and especially in functional-safety systems, is a no-go.

    Any solution to this would be a real learning opportunity for the safety-systems domain in automotive, wind, and aerospace.

  110. E.D.J. @ LI says:

    Hi D. My main point actually was that there are many systems and applications (I’m thinking here about military ones) that are safety critical in the extreme, but they are inherently dynamic and non-deterministic. So the state of the art in formal certification isn’t able to deal with them, you have to do lots (years) of testing. You’re right, the main issue isn’t dynamic memory allocation, which is really needed in multi-million line real-time applications.

  111. E.S. @ LI says:

    In my humble opinion, no MMU = no malloc; with an MMU (+OS) = malloc. The line between an embedded system and a full system is really blurred by now. But definitely, if you want to keep your application safe, you can consider not using malloc a must.

  112. E.D.J. @ LI says:

    Either a) not using malloc is a must, or b) it is not a must and there are a lot of big weapon systems that are not safe in the sense you mean.

    Obviously (to me) neither a nor b is true. That said, dynamic memory management is a major reason that new big weapon systems (and some traditional small-scale embedded subsystems, but I don’t care about those, they are well understood) are being written in Java to exploit automatic garbage collection. Some parts of these systems can employ regular Java, and other parts use “real-time Java” (usually compliant with Sun’s RTSJ) plus one form or another of real-time garbage collection. Most real-time GC techniques are not deterministic, but that is usually down in the noise compared to how non-deterministic the system and application software inherently are.

  113. J.H. @ LI says:

    Dear D.,

    As you are aware, there are ways of using dynamic memory without using malloc. DO-178C recognizes the need for dynamic memory management and has a section on it in the Object-Oriented Technology Supplement. It provides a complete list of vulnerabilities for dynamic memory management and objectives to address these vulnerabilities.

    Malloc, as you assert, is the least acceptable means covered in the supplement, since it is generally difficult to prevent failure due to fragmentation. It also relies on the programmer always knowing when an object can be deallocated. Other means include pooling, stack allocation, and scoped memory.

    Traditionally, garbage collection has been a problem for realtime systems due to pauses caused by the collector usurping the CPU at unpredictable times. Modern collectors do better. Paced collectors schedule the collector at specific intervals, but determining what the pace should be is difficult. Just look at some of the tools that IBM has developed to address this problem. Slack-based collectors are not much better, and both have a relatively large blocking period.

    Our JamaicaVM has a work-based collector that does a bit of garbage collection work, some mark or sweep steps, at each allocation. The advantage of this approach is that the collector automatically paces the allocation rate, and the blocking time is extremely short. By using fixed-size allocation blocks, we have made our collector completely deterministic. With this collector, we can provide both the safety of garbage collection (no dangling pointers, memory leaks, or false object aliasing) and the ability to react at the same granularity as the underlying operating system.

    The RTSJ has many good features, but scoped memory is not one of them. It is quite difficult to use properly. We support scoped memory, but very few find it to be enough faster than using our collector and being careful with where allocations occur in the application. This is because a high priority thread that does not allocate will never be interrupted by the collector.

    There are other arguments for Java, especially when programming for a multicore platform. The memory model of Java is quite strict, so that one can understand what happens when two threads access the same memory location. Java also has built-in synchronization constructs, which in an RTSJ implementation are priority-aware. C and C++ have neither a well-defined memory model nor synchronization mechanisms; one has to use some library or OS API. Furthermore, much of the cutting-edge work on formal verification is being done on Java, because the language syntax and semantics are much cleaner than C or C++.

    Java is not a perfect language, but given the level of commercial support and experience, it is quite powerful for realtime and embedded applications.

  114. J.H. @ LI says:

    Dear E.S.,

    There is no relationship between having an MMU and doing malloc. Even garbage collection can be used without an MMU. We support linking our JamaicaVM into an OS kernel for small systems.

  115. K.F. @ LI says:

    In my opinion, much like everyone here is expressing opinion, dynamic memory allocation is possible and powerful. But with power comes responsibility.

    In the past I have worked on projects that absolutely disallowed it, and those that allowed it. When it was disallowed, it made sense and wasn’t a big deal; they were small, well-defined, contained systems. But when it was allowed, it was needed because we were dealing with systems that had to handle bursts of activity/data but normally did not require so much memory. So rather than over-provisioning the hardware with memory, and thus inflating the cost, we accounted for burstiness with statistical models to determine the optimum memory, and then managed that memory strictly.

    So when you do include dynamic memory allocation, it is a good idea to put fences around it and keep a close eye on it. We wrapped every malloc and free call inside something else so that it was used in a controlled manner and could be accounted for.

    Also, on some kernels, we wrote our own memory managers to provide dynamic memory features, but with very tight controls. So instead of having to walk the memory heap and do all the hole-merging with free(), we tailored our memory allocation to the needs of the system and eliminated the need for so much activity during malloc/free. It was a compromise between performance, control, and cost. And it usually worked well.

    In short, yes, you can use dynamic memory. But if you simply have mallocs scattered around the code, you are already doing it dangerously.
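
    A minimal C sketch of the kind of wrapper described above: every allocation and release goes through one module that counts bytes and enforces a budget, so memory use can be accounted for and capped. All names (mem_alloc, mem_free, mem_usage) and the budget value are illustrative assumptions, not the original project's code.

```c
#include <stdlib.h>
#include <stddef.h>

/* Illustrative per-subsystem budget; a real project would size this
 * from its statistical load models. */
#define MEM_BUDGET (64 * 1024)

static size_t mem_in_use = 0;

/* Small header prepended to each allocation so free() calls can be
 * accounted against the budget. */
typedef struct { size_t size; } mem_hdr;

void *mem_alloc(size_t size)
{
    if (mem_in_use + size > MEM_BUDGET)
        return NULL;                  /* refuse rather than exhaust the heap */
    mem_hdr *h = malloc(sizeof *h + size);
    if (!h)
        return NULL;
    h->size = size;
    mem_in_use += size;
    return h + 1;                     /* caller sees only the payload */
}

void mem_free(void *p)
{
    if (!p)
        return;
    mem_hdr *h = (mem_hdr *)p - 1;    /* recover the hidden header */
    mem_in_use -= h->size;
    free(h);
}

size_t mem_usage(void) { return mem_in_use; }
```

    Because every caller goes through mem_alloc/mem_free, a monitoring hook or debug build can log or bound allocations in one place instead of chasing mallocs scattered around the code.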

  116. J.H. @ LI says:

    K., if you can write your system as a state machine, especially for safety-critical systems, then by all means do so. You then do not need to allocate memory dynamically. If your program is more complex, then dynamic memory can be essential. It is possible to make a system safe with malloc and free (so long as you do not expect it to run for a long time, or you always allocate memory chunks of the same size), but this requires a large amount of additional effort. Effort that I would rather spend on ensuring the system is correct.

    Now a garbage collector can ensure that there are no classical memory leaks, i.e., there are no unreachable objects that cannot be recycled; however, it is still possible to hold onto objects that are no longer needed. What I like about Java is that it is easy to write tools to help ensure that this does not happen, because one can programmatically walk through the data structures of a system and give feedback about which objects are linked to which other objects. This is much harder to do in a C or C++ environment.

    I agree that one needs to understand what one is allocating and why. There are too many Java programmers that just create garbage because someone else will pick it up. Care still needs to be taken, but do you really want to redesign the memory architecture for each system?

  117. K.F. @ LI says:

    J., I think you may be overstating my position, perhaps because I was not clear.

    No, I do not want to redesign the memory architecture for each system. Yes I will redesign a memory handler for each system that really needs it. As has been the case on some RTOS kernels I have used in the past.

    Wrapping malloc/free of “objects” (not saying OO coding here, just using the term as it was originally intended), is just good design and allows for the use of memory checking tools.

    I would posit that spending the time to ensure the system is “correct” would by definition include ensuring memory allocation and deallocation are used correctly.

    Again, yes you can use dynamic memory in systems that really need it. Just be careful. And if necessary, to ensure your system is correct, you may need to create your own memory allocation manager or at a minimum wrap the dynamic allocation into controlled modules.

  118. J.H. @ LI says:

    What do you mean by wrapping malloc/free of objects? Are you referring to mallocing memory at the beginning of some block of code and freeing the object at the end? If this is the case, could you not use a stack, or are you referring to something else? What do you do in rolling algorithms where, say, three data sets are needed at any one time and their lifetimes overlap but their beginnings and ends are sequential?

    I do not think we are so far apart in our opinions, but I would rather use a well tested, exact garbage collector than malloc and free in the vast majority of cases. Note that it is impossible to write a garbage collector for C or C++ that cannot be subverted by an application error, as long as one can add an arbitrary value to an arbitrary pointer and then dereference the result.

  119. K.F. @ LI says:

    Real-world example: I worked on a comms controller and had to manage “transmit buffers”. These buffers normally served an average load of, for example, something like 100 messages per second, but occasionally could burst to a load of 500 messages per second. Now, with available memory, I could simply allocate 500 transmit buffers. But this was a small-scale device and memory was not so abundant. While my task could allocate those 500 transmit buffers as static storage, we did not want all tasks provisioning statically for their maximum bursts, as that would have severely inflated the total memory requirements.

    So what we did was create a module, call it “txBuffer” for the lack of a better term – this was not OO and the language was C – and provide a function like “txBuffer_Get(void)” which would return a pointer to a transmit buffer structure. It did not matter to the caller if that buffer was reused or recently allocated (or even static). At the end of the chain of processing, the txBuffer having been passed through a stack with zero-copy, the final owner of the message would call “txBuffer_Release(ptr)” to release the transmit buffer back into the pool.

    Using this simple, and possibly overly simple, mechanism, we were able to use memory monitoring hooks and manage the amount of memory allocated. The txBuffer module maintained some static buffers and when needed would allocate additional buffers to handle the burst. Eventually when the transmit buffer was released, the txBuffer module knew whether it was static, should keep it in a local pool, or release the memory.

    If the burst was long enough, the txBuffer would simply return null, having exhausted its allowed resources, which is what would happen in any system in such cases.

    In essence, we took advantage of statistical multiplexing of the demands on memory to handle bursts of activity across multiple tasks while keeping overall memory requirements constrained.

    The combination of the wrapping of malloc/free and the semantics of the txBuffer pseudo-object (i.e. module) was something that in the end simplified memory management in the comms controller.

    That is what I mean by “wrapping the malloc/free calls”. Put them in a managed object to prevent mallocs spilling out everywhere and possibly being forgotten to be freed.
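
    A hedged C sketch of how such a txBuffer module might look, using the txBuffer_Get()/txBuffer_Release() names from the post: a few static buffers cover the average load, malloc’d extras absorb bursts, and a hard cap bounds the total. The buffer size, pool sizes, and the simplification of freeing (rather than locally pooling) burst buffers on release are assumptions for illustration.

```c
#include <stdlib.h>
#include <stdbool.h>

#define TX_PAYLOAD 128   /* assumed message size */
#define TX_STATIC    8   /* static buffers covering the average load */
#define TX_MAX      32   /* hard cap covering the worst allowed burst */

typedef struct txBuffer {
    struct txBuffer *next;          /* free-list link, valid while free */
    bool is_static;
    unsigned char payload[TX_PAYLOAD];
} txBuffer;

static txBuffer  tx_static_pool[TX_STATIC];
static txBuffer *tx_free_list = NULL;
static int       tx_outstanding = 0;
static bool      tx_initialized = false;

static void txBuffer_Init(void)
{
    for (int i = 0; i < TX_STATIC; i++) {
        tx_static_pool[i].is_static = true;
        tx_static_pool[i].next = tx_free_list;
        tx_free_list = &tx_static_pool[i];
    }
    tx_initialized = true;
}

txBuffer *txBuffer_Get(void)
{
    if (!tx_initialized)
        txBuffer_Init();
    if (tx_outstanding >= TX_MAX)
        return NULL;                /* burst has exhausted the allowance */
    txBuffer *b = tx_free_list;
    if (b) {
        tx_free_list = b->next;     /* reuse a pooled buffer */
    } else {
        b = malloc(sizeof *b);      /* grow dynamically for the burst */
        if (!b)
            return NULL;
        b->is_static = false;
    }
    tx_outstanding++;
    return b;
}

void txBuffer_Release(txBuffer *b)
{
    if (!b)
        return;
    tx_outstanding--;
    if (b->is_static) {             /* static buffers return to the pool */
        b->next = tx_free_list;
        tx_free_list = b;
    } else {
        free(b);                    /* burst buffers go back to the heap */
    }
}
```

    Callers never see malloc or free; the module is the single place where memory use is counted, capped, and monitored.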

  120. G.L. @ LI says:

    I have been designing embedded systems for over 30 years. My
    typical application:
    - Has all executables stored in non-volatile memory,
    - Has no off-line storage,
    - Is embedded in an appliance in an industrial environment.
    - The customer may plug my system in and expect it to run
    for 20 years without maintenance.

    I realize that the definition of embedded system has changed
    in the years I’ve been working at it, and now encompasses
    devices indistinguishable from PCs, so my rules only apply
    to my systems.

    Dynamic memory allocation is only safe to use in a
    device for which total system failure is an acceptable
    performance metric (i.e. Windows).

    Even if you are able to head off all memory leaks, there is
    very little data available on dealing with heap
    fragmentation.

    I teach OOP at a community college and it bothers me that
    dynamic memory allocation is drilled into our youth as the
    standard method for managing memory. At that level no one is
    taught to deal with an allocation failure, other than to
    shut down the application.

    What would you do if you pushed on the brake pedal and your
    dashboard told you it was time to re-boot?

  121. E.D.J. @ LI says:

    We should also be bothered that schools do not teach how to do fixed point arithmetic (scaling etc.), people take for granted that processors have floating point units.

  122. T.Z. @ LI says:

    I personally have no problem with dynamic allocation, but in certain cases it is just not needed. The decision whether or not to allocate dynamically depends on your system requirements, scalability, and performance constraints. I’ll give one example for each case.

    1. Memory pooling: let’s assume that your system is a router capable of receiving packets of constant sizes s1, s2, etc. (for example DVB audio/video streaming, whose format is well known), and that you have to buffer them before processing them. Let’s assume as well that you allow a pre-defined amount of memory for buffering. In that case, dynamic allocation is not needed and is expensive. Pre-allocating a pool of n blocks of size s1, s2, etc. will be the most efficient and cost-effective way to do so.
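
    As a concrete illustration of point 1, here is a minimal C sketch of a pre-allocated fixed-block pool with O(1) get/put and no run-time malloc. The block size (188 bytes, one MPEG transport-stream packet, matching the DVB example) and the block count are illustrative assumptions, as are the pool_* names.

```c
#include <stddef.h>

#define BLOCK_SIZE  188   /* e.g. one MPEG transport-stream packet */
#define BLOCK_COUNT  64

/* While a block is free, its storage doubles as the free-list link. */
typedef union block {
    union block *next;
    unsigned char data[BLOCK_SIZE];
} block;

static block  pool_storage[BLOCK_COUNT];  /* all memory reserved up front */
static block *pool_head = NULL;

void pool_init(void)
{
    pool_head = NULL;
    for (int i = 0; i < BLOCK_COUNT; i++) {
        pool_storage[i].next = pool_head;
        pool_head = &pool_storage[i];
    }
}

void *pool_get(void)
{
    block *b = pool_head;
    if (b)
        pool_head = b->next;
    return b;                     /* NULL when the pool is exhausted */
}

void pool_put(void *p)
{
    block *b = p;
    b->next = pool_head;
    pool_head = b;
}
```

    Because blocks are never merged or split, there is no fragmentation, and pool_get()/pool_put() run in deterministic constant time.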

    2. Dynamic allocation: in networking, packets flowing through a router have variable and unpredictable sizes. If you have to process these packets and buffer them, memory pooling seems unrealistic. (I have seen it tried, though, in a project I worked on in the past, a solution for transferring data over GSM, if anybody remembers that from the last century :-) : we allocated many pools of blocks with different sizes (128, 256, 512 bytes, etc.) and used a kind of best-fit algorithm to optimize memory utilization in these blocks.) Well, in this case, I would definitely allocate dynamically.

    Scalability is also a factor: let’s assume that our system is a switch capable of learning up to X MAC addresses and supporting VLAN functionality. Each address is owned by the interface (on which it was received) and each interface is configured and owned by one VPLS instance.
    System requirements allow allocating up to Y interfaces and Z instances. These maximum numbers are pre-defined, so you could easily create a pool of X blocks of data to store MAC addresses, Y blocks for interfaces, and Z blocks for instances.

    Reliability: if your system is “critical real time” and provides, for instance, security services (e.g. an Intrusion Prevention System aimed at blocking cyber-crime attempts on your private networks and on servers storing sensitive information), do your best to use memory pools. Allocating and freeing resources is without doubt time consuming, and in this case might reduce all your technology to dust…

    To be continued …

  123. E.D.J. @ LI says:

    I should be able to assume that everyone on this list keeps up with the embedded systems workshop and conference announcements. Thus in the past I haven’t thought there was a need to post such announcements here, but perhaps I’ll start doing so in case someone missed seeing them.

    The 8th IEEE International Conference on Embedded Software and Systems
    (IEEE ICESS-11), Changsha, China, November 16-18, 2011.
    URL: http://trust.csu.edu.cn/conference/icess2011/

  124. D.B. @ LI says:

    Responding to G.L.:
    Thought I’d follow up a couple of your comments:

    “Even if you are able to head off all memory leaks, there is
    very little data available on dealing with heap
    fragmentation.”

    As J.H. has mentioned in this thread, there has been technology developed for dynamic memory management that addresses both the memory leak issue and the heap fragmentation problem. This is the realm of automatic garbage collection and real-time garbage collection. A language like C or C++ cannot directly take advantage of such technology, but other languages with built-in automatic garbage collection can.

    “I teach OOP at a community college and it bothers me that
    dynamic memory allocation is drilled into our youth as the
    standard method for managing memory. At that level no one is
    taught to deal with an allocation failure, other than to
    shut down the application.”

    Agree with you that in most computer science courses the computer is viewed as an “ideal” machine with unlimited resources.
    Memory usage, it seems to me, is independent of the approach chosen and has to be determined either way.
    Assuming the system in question uses a dynamic memory technology that eliminates memory leaks and fragmentation, the question still remains: what is the maximum memory commit?
    Assuming the system in question uses a static memory approach (static arrays of buffers, or even dynamic memory allocated at startup only), the question still remains: what is the maximum memory commit?

    Responding to K.F., and T.Z.:
    Networking traffic at the datalink layer does have the advantage of having known maximum sized buffers, or alternatively known minimum sized buffers. Memory pools for these are commonly used. Although the Windows NDIS driver API forces the allocation of new receive buffers for each buffer sent up the stack to be processed (at least it did back in the NT 4.0 days), in high performance protocol stack applications, network packets are commonly handled in buffer pools in my experience. Receive buffer pools would be one place that dynamic memory management does not make sense in my opinion. This does not preclude the use of dynamic memory for any sort of state management within the networking device, or for other applications that are not high performance network traffic related.

    I don’t think anyone would argue the advantages to using dynamic memory. I think the overall thrust of the original question, or discussion thread is, is it safe to use dynamic memory in an embedded system, with the concept of “safe” measured in a context. Conventional wisdom would indicate that for safety-critical applications (human life dependency), or similar high-reliability systems, or network devices, the answer is, it is not safe to use dynamic memory. Up until the last decade, going back to mainframe computing, I would have agreed with this position; and, for systems built with languages supporting pointer arithmetic and user-controlled memory management (e.g. malloc/free), I would continue to agree.

    I think I disagree with the conventional wisdom when user-controlled memory management is replaced by automatic memory management using deterministic allocation and garbage collection, and pointer arithmetic is eliminated. Any approach that uses a buffer pool or other statically allocated solution requires an application-specific, specialized approach to memory management. This opens up the possibility of hard-to-find coding errors, and future maintenance errors due to lost knowledge of the specialized memory management requirements. I have seen off-by-one errors cause sporadic corruptions, static array sizing errors cause similar sporadic corruptions, receive buffer exhaustion states cause empty receive buffer descriptor lists, and other errors.
    continued….

  125. D.B. @ LI says:

    continued…
    Some of the systems in question were network protocol stacks, robotic systems, and avionics systems. I’ve also seen, as I’m sure others have, pointer arithmetic mistakes corrupt stacks, corrupt data structures, and cause CPU faults.

    Given a memory management system that has deterministic attributes – allocation and freeing – automatic garbage collection to eliminate memory leaks, elimination of heap fragmentation, and a language that eliminates pointer arithmetic without sacrificing functionality, I would submit that the conventional wisdom on the prohibition of using dynamic memory for embedded systems with some sort of safety attribute should be reexamined.
    As I’m sure you can tell, my argument is that the safest (with whatever measure desired) solution and most reliable solution (with respect to coding considerations, maintenance, fielded bugs, determinism etc.), is in fact to use a current language that has these memory management system attributes and language attributes. From my perspective, I would feel much more comfortable knowing that the software running my systems on the next plane I take is built with current memory management and current language attributes including real-time dynamic memory, instead of specialized buffer pools, with pointer arithmetic and other adhoc solutions. (I would include pressing the brake pedal on my next car in that comment too.)

  126. K.F. @ LI says:

    @D.B. – Regarding NDIS drivers… I don’t know how they work. I don’t do Windows. And the real-world example I was referring to was in an RTOS environment. And these “buffer pools” as described are what I implemented, because they were not available in the kernel we had. Still, there are times you cannot allocate memory for max conditions, and having an expanding buffer system really helps.

    But… it always depends on the requirements of the system you are working on. Sometimes, having dynamic memory burdens the system with too much processing time walking the heap or collapsing fragments or other factors. Other times, it solves problems.

    In reality, some real-time/embedded/whatever systems should not have dynamic memory allocation and others should use it. There is no hard and fast rule. If you do not take a systems approach to these issues, the myopic views of “never use dynamic memory” or “dynamic memory is no problem” will cause pain and agony, and those things suck.

  127. T.Z. @ LI says:

    Responding to D.B.-
    Regarding buffer pool at the data link layer, after reading your comment, it makes sense, agree.

    However, how do we manage memory in case we do have a high number of different types of objects, in such a way that the total memory does not exceed a certain amount K, given an existing dependency between them (O1 contains up to O2 objects, which contain up to O3 objects, etc.)?

    N1 * S1 + N2 * S2 + … + Ni * Si ≤ K (Si being the size of object type Oi, and Ni the number of objects to be allocated for each type)

    Are you going to allocate a pool of T * MaxSize blocks? (MaxSize being the greatest size among all object types: MaxSize = max { S1, S2, …, Si }.) Isn’t that a waste of memory to allocate statically?

    I agree that if you can afford the cost of unlimited memory in your system, you should use pooling, since it does reduce overhead.

    However, refusing to use dynamic memory at any price, even when justified, just out of fear of leaks is not valid reasoning to me: it is the job of a software engineer to make the right choice and ensure zero memory leaks.

    ( to be continued … interesting subject)

  128. D.B. @ LI says:

    So does the original question about using dynamic memory really come down to this:
    Either you prohibit dynamic memory because of the perceived risk of memory leaks and heap fragmentation, or
    you allow dynamic memory with the caveats that the operating environment eliminates memory leaks and fragmentation, and/or the developer uses static and dynamic testing tools to verify proper memory use?

    I would agree with all of the comments about fixed sized block memory pools when the embedded system is primarily related to managing those fixed sized blocks. However, without dynamic memory, I find object-oriented languages difficult to use in embedded systems. I have overridden the new() and delete() operators in C++, with a best-fit allocator from pools of different fixed sized blocks, which from the source code point of view maintains the strengths of dynamic memory, code portability, and still supports the use of static and dynamic test tools.
    The Class, Object, and Method paradigm pretty much demands a dynamic memory approach to the operating environment in my opinion. Although, I’m sure you can statically allocate all of the objects needed at initialization, I think the programmer always ends up doing the work of determining if a particular object is available or in use; whereas the new() operator guarantees that the object returned is always available.

    If the concern is memory exhaustion, the Try/Throw/Catch paradigm or simple null return value checking are appropriate. Again, even with memory pools, isn’t the programmer doing the same work, although the programming constructs may look different, and specialized?

    To avoid misunderstanding, I am not advocating using the heap manager provided by a runtime library arbitrarily. I know of plenty of implementations that have had fragmentation problems. I would strongly recommend against using a heap manager that has a known risk of fragmentation (e.g. using malloc()/free() on certain flat memory model systems).

    To answer the original question directly though – I have successfully developed and deployed many embedded systems that use dynamic memory. The majority of these were for high performance data networking systems. The up-time on these systems was measured in months, possibly years. Dynamic memory was not used in critical path packet handling, but in system and session state management, monitoring and control.

  129. P.B. @ LI says:

    @D.B.
    “…without dynamic memory, I find object-oriented languages difficult to use…”

    I agree.

    “I have overridden the new() and delete() operators in C++, with a best-fit allocator from pools of different fixed sized blocks…”

    While this gets rid of the “direct” fragmentation problem, it leaves open two possibilities for exhaustion:

    (1) The need for a common set of buffer pools with a first-fit algorithm implies (to my mind, at least) that the “quart from a pint pot” aspect of dynamic memory management is being used. That is to say that because the total memory can be shared for allocation of objects of different classes, we can somehow get away with less memory than we would otherwise need if it were statically allocated. This might or might not be true in the worst case. But even establishing what the worst case might be in such circumstances – and re-evaluating it for each non-trivial change during maintenance – is problematic.

    (2) If the first-fit algorithm allows a fit to be made with a block greater than the originally intended size (e.g. a 100-byte request discovers that there are no more 128-byte buffers left, so a 256-byte buffer is allocated instead), we shift the exhaustion problem from the current request (where the problem really lies) to some subsequent request requiring a 256-byte buffer. This is “fragmentation by stealth”.

    I, too, have overridden new() and delete() but have done it at the class level. This is optimal (each class’s pool contains buffers of exactly the proper size) and fast. Also, somewhat to my surprise, I found that it could be done in such a way (with a template class) that the operators are supplied to the user by simple inheritance and can be used quite transparently thereafter. Of course, this approach doesn’t allow reuse of memory by dissimilar objects, and each pool size must be explicitly given as a template parameter when declaring the pooled class. But those are precisely the dangerous things. The big advantage of my approach is also summarised in D.B.’s post:

    “The Class, Object, and Method paradigm pretty much demands a dynamic memory approach to the operating environment in my opinion. Although, I’m sure you can statically allocate all of the objects needed at initialization, I think the programmer always ends up doing the work of determining if a particular object is available or in use; whereas the new() operator guarantees that the object returned is always available.”

    In other words, I agree with most of what D.B. says, but my approach, I believe, refines his to eliminate its residual dangers. It retains the convenience of dynamic memory allocation to which he alludes, but requires the designer to use the same rigour in calculating the worst-case memory requirements *per class* as he would have had to use with conventional static allocation.

  130. D.B. @ LI says:

    P., thanks for the follow up.

  131. P.B. @ LI says:

    You’re welcome, D.

    If you or anyone else is interested in how I did what I did, please browse my blog:

    http://software-integrity.com/blog/

    It’s quite old stuff, so some digging in the archives is necessary.

    To get the actual code (which is overdue for a little refreshing), you have to become a subscriber, though :-)

  132. S.M. @ LI says:

    Hi all. My experience in a soft or hard real-time environment is “never use dynamic memory allocation”. It is forbidden in some cases of risk reduction (EN 61508). It is simply dangerous: when you have limited resources you must know how your process grows, especially when the hardware stays up for a long time. There are some techniques for using a kind of dynamic allocation, but they are always based on a static pool of resources; you simply play with pointers to mark an element free or used.

  133. T.W. @ LI says:

    Ravenscar Ada and Safety-Critical Java both share the following definition.
    (MISRA-C heads in the same direction.)
    Mission mode: once the mission starts, dynamic allocation is not allowed.

    In more complex modes of operation of SCJ, there can be nested missions.
    I encourage people to investigate DO-178C (Aviation safety critical software standard, pending revision), which is addressing OO languages.
    A lot of clear thinking on reliability and predictability has gone into it.

  134. E.D.J. @ LI says:

    “Safety critical” is another widely misunderstood term, like “real-time,” “predictability,” and “embedded.”

    As is normally used, “safety critical” does not literally refer to all safety critical systems. Instead, it is an idiom:

    “Idiom … is an expression, word, or phrase that has a figurative meaning that is comprehended in regard to a common use of that expression that is separate from the literal meaning or definition of the words of which it is made.” –Wikipedia

    If safety critical were a literal term, it would refer to:

    “A life-critical system or safety-critical system is a system whose failure or malfunction may result in:

    * death or serious injury to people, or
    * loss or severe damage to equipment or
    * environmental harm.”
    –Wikipedia

    In reality, the idiom is inevitably used to refer to a small but important subset of safety critical systems which are governed by regulatory processes such as DO-178B.

    Only highly stylized systems — e.g., very small scale and static, such as digital avionics flight control — are amenable to these regulatory processes.

    There are a great many large, complex, dynamic systems — such as most military weapon systems — that are literally safety and life critical in the extreme, but are far beyond the current or even foreseeable state of the art for these regulatory processes. (However, many of these systems include figuratively safety critical subsystems).

  135. T.W. @ LI says:

    @E.D.; “far beyond the [..] state of the art for these regulatory processes” that has been my experience working on military weapon systems. When I’ve worked on embedded systems in more relaxed environments, the same techniques from safety systems had useful guidance to provide. I’m still prepared to dynamically allocate memory, but never in “mission mode”. If you put my fire control system (M-some-number) into setup mode, I’d allocate memory whenever I felt the need to. (But not willingly when the gun is active.)

  136. S.M. @ LI says:

    @T. Sure. It’s very dangerous. In some embedded systems we have an exact number of kilobytes for the stack. Local variables in a function, parameters, or bad use of large structures, for example, could cause a stack overflow; now imagine adding dynamic allocation on top of that.

  137. E.D.J. @ LI says:

    T., your reference to a gun prompts me to use the Phalanx (on Aegis and other ships) as an example. Phalanx is a good example of a hard real-time embedded literally safety-critical subsystem that is amenable to current formal development processes. The Aegis Weapon System is part of the Aegis Combat System. The ACS is a safety critical system that is too dynamic to be amenable to current formal development processes. The ACS is also a “real-time” system that is more accurately described as a time-critical system, again because it is dynamic — as much as the vendor or some in the USN might want to claim it is a hard real-time system (which would be highly desirable), NSWC has performed detailed timing measurements (with logic analyzers) that show the ACS is a soft real-time system in the sense that as a whole it necessarily has non-deterministic timeliness. The timeliness is considered good enough, but that is not achieved with tools and techniques for hard real-time systems.

    Another example of extreme safety criticality is cruise and ballistic missile defense. Say you can put 500 people on an Airbus 380 and that failure of its digital avionics flight control system causes one to crash into a building with, say, another 500 people. We all (especially those of us who fly a lot) are very glad that the flight control system can be developed in accordance with safety regulatory processes (DO-178B etc.). But a nuclear-armed cruise or ballistic missile can easily kill tens or hundreds of thousands of people, so it is as literally safety critical as you can get. And unlike plane crashes, they rarely come one at a time. Missile defense, for example the F2T2EA kill chain, is highly dynamic, concurrent, and asynchronous — and hence far beyond even the foreseeable eventual state of the art of “safety critical” development processes.

  138. M.H. @ LI says:

    I come from a very conservative background on dynamic memory using C/C++. I found that in multi-threaded apps many developers are simply clueless when it comes to dynamic memory.

    Then I began developing a perl/Event based data acquisition system. Yes, perl is a slow and bloated beast, but I can do things in perl in 5 minutes which take hours in C/C++.

    Perl is totally dynamic; things come and go constantly. I’ve been using this design now for several years and have had no problems with memory leaks, except self-inflicted boo-boos in my C perl mods. This is because perl handles all the memory management in a consistent way.

    Had I done this in C/C++, memory management would have been a problem and required much more attention. I did sacrifice a huge amount of RTOS performance, but the system requirements were such that rapid development was far more important than high-speed performance.

  139. P.B. @ LI says:

    @M. H.
    Leaks are but one problem. However the intractable problem with all generic heap-based allocation systems is fragmentation. Some people pretend that this problem can be eliminated by various design tricks, but it can’t. It can only be avoided by using a suitable number of pools, each with a suitable number of blocks of suitable fixed size (determinate timing) or by some kind of MMU in hardware or, for interpreted languages, in software (indeterminate timing and usually too coarse-grained to be practical).

    What does your Perl run-time system documentation say about fragmentation?

  140. G.L. @ LI says:

    I don’t see a problem with dynamic memory as long as it is not used for deterministic components, though I do insist that appropriate mechanisms be in place to prevent memory leaks and overallocation. Systems today can be much more complex than those of 10-20 years ago. For simple systems where the memory requirements are known and can be partitioned between the various components, it only makes sense to use static allocation. For complex systems with deterministic and non-deterministic components, it makes more sense (to me) to allow sharing the memory pool as needed amongst the higher level components. However, deterministic components, like interrupt driven bit sending, would need to have their memory allocated at object startup.

    For example, I wrote the forward link component for a wide area one-way paging system. This supported from one to many outbound links. At the lowest level were individual drivers for each link transport type (serial, TCP/IP, UDP/IP, UDP broadcast, R.F., etc.). These drivers get created at run time or when the user reconfigures to add a link. Each driver gets a fixed memory allocation when it is created. Above these drivers sits a manager to handle communications between them and other system components via message passing. The messages passed to/from the link manager are variable and may contain additional data. At this level the message passing is from dynamic allocation, which allows buffering significant amounts of data without permanently hogging all that memory. There are of course memory limiters and failsafe timers to prevent stale data from causing a memory leak.

  141. M.H. @ LI says:

    Memory fragmentation has the same end effect as leaking, but is far more subtle. Yes, perl might do something bad fragmentation-wise. It uses simple power-of-2 buckets to keep allocations of similar size together, which works fairly well. In practice, some incarnations of this system have run for years at a time, having reached their high-water mark in the first few minutes of operation. I do not claim this is an all-around solution to everything, and I easily concede that it is a bloated pig relative to a much tighter C/C++ design.

    Some details:
    The board platform is an AT91 200 MHz ARM with 16 MB SDRAM, 16 MB NOR flash, Ethernet, and 12 UARTs, running Linux. It reads and parses serial data from various sensors, does some calculations, produces messages, logs data to flash, and presents TCP and UDP interfaces to the world. The average incoming message rate is < 10 40-byte msgs/sec. Some messages are much larger, but infrequent. Fairly simple, but it needs to be highly configurable – and is.

  142. D.B. @ LI says:

    I’ve got an idea… I don’t know that there is a final correct answer on dynamic allocation in embedded systems, or on the best programming language to use.
    What I propose is this: to anyone who wants to try out an embedded, real-time Java toolchain on whatever your application happens to be, I will provide a 60-day evaluation copy of JamaicaVM. You can decide for yourself whether dynamic allocation in an environment that supports automatic garbage collection is suitable for your applications. We are a commercial product, but happy to support anyone looking to experiment with alternative approaches to embedded systems development.
    Please feel free to contact me by email directly or through LinkedIn.

  143. J.B. @ LI says:

    Yes, dynamic memory allocation should be allowed. There are limits, however. Size relative to the amount of available memory is a factor. Many systems have found that memory is cheap and abundant. The old paradigm no longer applies in many instances. As embedded Java and embedded C# make their way in, the issue should become moot. Proper code reviews and proof of memory usage and cleanup/recovery are an acceptable means of monitoring memory management and holding it accountable. If these practices are held to strictly, there is no reason why allocation can’t be done. I tend to lean on newer software packages and analysis tools, as well as code reviews, to get to the same end.

    The excuse that there are system blocks during memory management calls falls short, since many mutex, semaphore, and critical timing calls do the same thing. As for dynamic allocation in time-critical code, such as ISRs, it does not belong there due to performance issues alone. Other methods are available in such cases.

    I have produced products for deep space missions, hand-held radios, and extremely small form factor designs. All modern designs can be built with the same rules. Let us evolve.

  144. M.L. @ LI says:

    What a great thread. Taking all of these comments and collected wisdom and distilling them all to their essence would make a good paper or chapter for a book ;-)

    I agree with everyone who says, basically, that it depends. I would argue that one should avoid dynamic memory in the inner, critical loop of an application, for all of the reasons that folks have mentioned above. However, many embedded systems are multi-mode nowadays, and so long as care is taken to insure against memory leaks, reallocation of memory during mode changes should be safe. There is usually a defined period of time for such a mode change to occur, and so long as the housecleaning of freeing memory from the old mode and allocating memory for the new mode can be accomplished in that time, where is the harm?

  145. A.H. @ LI says:

    It all depends on the hardware. I might do it if I were working on hardware that had to handle very dynamic string processing, but in the general case I won’t use dynamic allocation in an embedded system.

  146. A.G. @ LI says:

    I have nothing to say but… it depends.

  147. P.S. @ LI says:

    R. and T. have listed the most important things.

    Also keep in mind that most operating systems (even of the RT embedded variety) perform dynamic allocation when you use any of a number of their calls (e.g. sockets, files, and pipes). So avoiding dynamic allocation in this context is usually a moot point.

    The suggestion to move the dynamic allocation to initialization always simplifies the testing consequences, but the requirements do not always allow for this.

    Some tricks:
    - If you are concerned about fragmentation, only allocate in bursts, with the associated reverse-order burst freeing. Try to prevent allocation bursts from occurring simultaneously.
    - If you are concerned about the blocking nature of the malloc() call in an RT system, call malloc() within the context of a non-RT-critical thread. If the system design makes the allocation of variable-sized resources necessary within the context of some RT-critical task, I would suggest that you go have a long, hard discussion with the system designer about the basics of RT design.

  148. G.L. @ LI says:

    This is really hard to say.
    My rule is:
    For hard real-time control systems, the answer is NO
    For soft real-time systems, the answer is IT DEPENDS
    For commercial embedded systems, the answer is MOST-LIKELY
    For non-embedded systems, the answer is WHO-CARES

  149. A.A. @ LI says:

    In a couple of projects I’ve worked on, I solved the need for dynamically allocated, variable-sized data buffers for data communication by creating a specialized buffer allocation function that used a statically defined section of memory provided at start-up. This solved the memory fragmentation problem that would have occurred if the system buffer allocation functions had been used: since the manager controlled not only the memory pool but also the pointers to the buffers in the pool, it could perform cleanup on the pool if necessary. It also made the data in the dynamically allocated buffers easier to manage and process by the different data processing threads. Using the system allocation functions to manage the memory was ruled out during initial testing because of the fragmentation that would occur, since buffers were not deallocated in the same order, or at the same rate, at which they were allocated.

  150. J.L. @ LI says:

    My programs always start out with static memory allocation, which is cleared when used – no problems with memory or automated garbage collection. This method keeps complete control of memory.

  151. J.F. @ LI says:

    The underlying operating system can dictate the choice and/or the rules of engagement when it comes to memory management. If it’s already handling dynamic memory allocation, then it becomes a question of the problem you’re trying to solve.

    In general, this is a very context-sensitive question, IMO. If you have a lot of CPU power and free cycles and use some kind of memory-handle approach, dynamic memory with a garbage collection scheme is great. If you can’t afford to spend cycles collecting free blocks, or your system has fixed processing requirements in which the need for memory can be determined at start-up, static memory allocation is the way to go.

    Again, I think static versus dynamic is all about context. It’s like asking whether a car should be fitted with 4-wheel drive, or have a manual versus an automatic transmission.

  152. L.G. @ LI says:

    Good discussion. I think the real answer is that it depends on which operating system you are using. Enea’s OSE RTOS is a hard real-time system. It actually has 3 different memory management systems. The main memory allocation system is deterministic. It achieves this by allowing the user to pre-configure the buffer pool size and a set of fixed buffer sizes. The buffer pool is then set up at init time. At run time, the application is guaranteed to get a buffer in a deterministic amount of time. There are special calls to allocate and free this type of memory, to distinguish it from the more conventional heap-style manager. So yes, the OS is essentially doing pre-allocation at init time, but the application doesn’t have to worry about it at run time.
    Also, as mentioned, there is a standard heap-style memory manager available which implements all the usual ANSI malloc/calloc variants. This system provides the programmer with standard memory allocation APIs. I consider this pseudo-deterministic, in that allocation time is slightly variable, but there is a guaranteed upper limit on the amount of time any allocation will take.
    Disclaimer – I work for Enea. Not trying to sell anything, but I wanted to offer up some ideas on how other RTOSes implement memory managers.

  153. D.B. @ LI says:

    MISRA C++ has the following rule:
    Rule 18-4-1 (Required): Dynamic heap memory allocation shall not be used.
    This is for the motor trade, and it’s something I like to follow in the medical field.

  154. D.R. @ LI says:

    Well… dynamic memory allocation has been a taboo word in the real-time embedded software market for a long time. In mil-aero applications, where reliability and predictability are important aspects of any software design, dynamic memory allocation is an outcast, as it runs counter to those goals.

    We develop mainly in Ada, where no garbage collection has existed for many years, and so dynamic memory allocation was always first on the list of inhibited language features.

    With the emergence of object-oriented software, dynamic memory allocation is an almost necessary part of the implementation; however, we still restrict the allocation of objects on the heap to the elaboration phase. To ensure dynamic allocation was not used (in programs developed in Ada83) we had to resort to stringent code review; however, pragma Restrictions can now be employed to ensure that programs containing dynamic allocation of data will not compile.

  155. E.D.J. @ LI says:

    In _low level static subsystems_ in “Mil Aero,” not true of mil aero in general. Try doing track association on AWACS, or scheduling radar service requests for missile defense kill chains without dynamic memory management.

  156. C.M. @ LI says:

    I have used dynamic memory allocation in a major computer vision application implemented in LabVIEW Real-time. LabVIEW sets some ground rules for this to work, but even with that, my advice is that if you should choose to use dynamic memory allocation, make sure all the possible scenarios are exercised during startup testing. All the buffers need to be exercised.

    Just to add to what Ken Peek has written, there is a trend these days to remove memory allocation concerns from the programmer. That is where “languages” like LabVIEW and others come in.

  157. B.S. @ LI says:

    Otherwise the application may block, or there may be no more resources for applications. I like to use dynamic memory to allocate a block of memory as a pool in the application initialization phase, and then implement a deterministic memory management algorithm, such as TLSF, to manage the memory pool.

  158. M.S. @ LI says:

    @B.: why would you do that? Why not just allocate the same block statically? I mean, with static allocation you know exactly how much memory you need at *link time*. If you blow your memory budget, your application won’t link. With dynamic memory you defer such errors to *run time*. Why is that better?

  159. J.S.W. @ LI says:

    I am willing to use D-A. Like anything, get into the guts of it. Know how it works and be sure it works. An RTOS VM hypervisor sounds unproven to me for safety-critical apps, if you’re doing a spacecraft or an automobile. The language issue is usually settled if you have to get into safety-critical work. Don’t use the languages that can’t show correctness. There was some work on this for Java translation and JVM execution, as I recall around 2000 to 2004. Layered Abstract State Machines were used to trace the successive translations from source code down to actual execution. I’ve not seen any recent papers, but they may exist. This work uncovered a number of flaws in the Java system and suggested cures. I don’t know if the ideas were incorporated into the Java system.

  160. J.H. @ LI says:

    There seems to be a fair amount of variation in the use of the term “dynamic memory”. For this reason, the Object-Oriented Supplement for DO-178C not only defines dynamic memory but also gives examples. It includes techniques such as memory pooling, stack allocation, scope allocation, malloc and free (or new and delete in C++), and garbage collection. Of all these techniques, malloc and free is the most error-prone and difficult to manage.

    Furthermore, it discusses the vulnerabilities of the use of dynamic memory.

    a) Ambiguous References

    An allocator returns a reference to live memory, for example, an object that is still
    reachable from program code. This could cause failure by allowing the program to
    use the given memory in an unintended manner.

    b) Fragmentation Starvation

    An allocation request can fail due to insufficient logically contiguous free memory
    available.

    c) Deallocation Starvation

    An allocation request can fail due to insufficient reclamation of unreferenced
    memory (for example, objects, or structures). This might also be caused by losing
    references.

    d) Heap Memory Exhaustion

    The simultaneous heap memory requirements of a program could exceed the memory
    available.

    e) Premature Deallocation

    A memory fragment could be reclaimed while a live reference exists.

    f) Lost Update and Stale Reference

    In a system where objects are moved to avoid fragmentation of memory, an update
    could be made to the old copy after the new copy has been created or to the new copy
    before it is initialized, or a read could be made from the old copy after the new copy
    has been created and changed or from the new copy before it has been initialized.

    g) Time-Bound Allocation or Deallocation

    An application could encounter unexpected delay due to dynamic memory
    management.

    It should be clear that memory pooling is subject to at least five of these vulnerabilities, and all of them must be addressed in each and every application. The advantage of a proper deterministic, exact garbage collector is that all but one can be addressed once and for all in the collector itself. The difficulty is that most collectors are not deterministic, and many, particularly for C++, are not exact. This means that most developers have never really seen a proper deterministic, exact garbage collector, and for popular languages such as C++ one would be hard-pressed to design an exact, let alone a deterministic, garbage collector. That is because as soon as one can directly manipulate a pointer, one can easily subvert the collector.

    On the other hand, just because one has a garbage collector, does not mean one need not worry about creating objects after initialization. Many Java programmers forget that creating objects has a cost. If your problem can be described as a state machine, then there is little need for dynamic memory, but if your application is complex, it can really simplify your code.

  161. M.S. @ LI says:

    Static memory allocation is applicable even in the case where an object constructor takes parameters that are known only later, at runtime. (Constructors of static objects in C++ run even before main().)

    What you can do is to statically allocate memory like this:

    void *mem_for_MyClassXYZ[(sizeof(MyClassXYZ) + sizeof(void *) - 1) / sizeof(void *)];

    Please note that I’ve used (void *) to achieve the memory alignment suitable for (void *). You could use a different type if it has more strict memory alignment requirements.

    Later you can use *placement new* to actually call the constructor passing the memory block:

    MyClassXYZ *ptr = new (mem_for_MyClassXYZ) MyClassXYZ(foo, bar, tar…);

    The point is that you *can* eliminate the waste of the heap and know your memory requirements at link time.

  162. C.S. @ LI says:

    The standard implementations of malloc and new are non-deterministic, so they should not be used in real-time critical code sections. Also, the consequences of memory exhaustion and failure to allocate must be considered, as must the unpredictability caused by memory fragmentation.

    In some circumstances, static allocation of C++ objects is not possible; for example, if a constructor has a dependency on some RTOS object that cannot be initialised until after RTOS initialisation. In these circumstances you would likely dynamically instantiate at system initialisation and never delete, in which case it is little different from static allocation but provides more deterministic initialisation (in terms of the order in which constructors are called, and when).

    So I would say that it is entirely OK for initialisation objects that are instantiated and never deleted in order to control the order and timing of initialisation, but allocation/deallocation cycles should be carefully considered because of fragmentation, leakage, and determinism issues. The argument is far less compelling for C code, where typically no application code executes before main() unless you customise your runtime startup.

  163. C.S. @ LI says:

    @M. Good point. If you choose to prohibit dynamic memory allocation (or the prohibition is imposed), that is a good solution.

  164. P.S. @ LI says:

    Dynamic memory allocation is unavoidable if the use-cases dictate that you cannot account for all permutations of memory use at compile-time. Avoiding malloc() and free() or new and delete in C++ also doesn’t mean that you don’t have dynamic allocation. Any form of memory pool of your own creation also implies dynamic allocation.

    If you can get away with the simple use-cases where everything can be accounted for at compile-time, do so!

    If your use-cases make this impossible, use good common-sense in your design:
    * Design for the safe use of well-tested dynamic allocation mechanisms (i.e. deal with allocation failures, allocate and de-allocate in such a way that fragmentation is avoided, etc.). Don’t create your own allocators unless you know what you’re doing.
    * Test to the boundaries of memory use (stimulate allocation failures so that the associated recovery/failure mechanisms are well tested).
    * Make sure that real-time critical processes do not make use of allocation / deallocation. Allocate and deallocate in a different context.
    * Design with an RTOS so that the real-time paths cannot be disrupted by the processes performing the allocation / deallocation.

  165. B.S. @ LI says:

    @M. S.: allocating memory is more flexible: you can change the memory pool size when you initialize memory. Static allocation is a good idea too, when the memory will not change. But that’s not the core question. Using a deterministic allocator to replace indeterminate alloc/free methods is an option in a real-time critical system when you don’t trust the system’s alloc/free.

  166. M.M. @ LI says:

    It’s always a challenge to use dynamic memory allocation. I tried this kind of approach to make a multi-boot system, where two or three program images were stored as data in some external memory. The system runs one application at a time; at some point the first program is replaced by the second one, where “replace” means burning the code into the controller. One thing more: I am talking about Microchip PIC microcontrollers, which have very little code memory. A second approach is to replace a complete function while the program is running – and you do that because of the small RAM.

  167. T.Z. @ LI says:

    As others have pointed out, the stack itself is dynamically allocated memory with a very simple model. It tends to be easy to determine a worst case, at least without recursion. And it is usually possible to replace actual recursion with a loop and incrementing/decrementing buffer pointers, so you know when the buffer overruns instead of blowing your stack.

    Once, for a bootloader, I had to implement the zlib decompress code, which does use malloc/free, and I didn’t have malloc. But I discovered that the mallocs and frees were always done in the same order, so I could just declare what amounted to a second stack: malloc would move the partition pointer one way and free would restore it.

    It was calling malloc, but it was technically using a second stack.
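    That second-stack trick can be sketched in a few lines of C. This is a generic reconstruction under the stated assumption that frees mirror mallocs, not the original bootloader code (names and sizes are made up):

```c
#include <stddef.h>

/* "Second stack": a bump allocator over a static buffer.  It only works
 * when, as in the zlib case above, frees happen in the exact reverse
 * order of their mallocs, so freeing is just moving the partition
 * pointer back to a saved mark. */
#define ARENA_SIZE 4096

static unsigned char arena[ARENA_SIZE];
static size_t arena_top;            /* partition pointer: bytes in use */

void *stack_malloc(size_t n)
{
    /* round the request up to pointer alignment */
    n = (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    if (n > ARENA_SIZE - arena_top)
        return NULL;                /* arena exhausted */
    void *p = &arena[arena_top];
    arena_top += n;
    return p;
}

size_t stack_mark(void)             /* remember the partition pointer */
{
    return arena_top;
}

void stack_release(size_t mark)     /* "free" everything since the mark */
{
    arena_top = mark;
}
```

    Both operations are a pointer adjustment, so the timing is deterministic and there is nothing to fragment.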

    In other cases, sometimes you just need a large scratch buffer for more than one thing and can share it – it will be deterministic and will not delay or deadlock if you design it right. Instead of a bucket-brigade set of mallocs and frees that will fragment, just use two buffers, A and B, and go A to B, then B to A, then A to B as the steps are performed.
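    The two-buffer idea can be sketched like this; the byte-increment “stage” is only a stand-in for whatever processing the real steps perform:

```c
#include <stddef.h>

/* Ping-pong processing: each stage reads one static buffer and writes
 * the other, then the roles swap.  There is no malloc/free chain, so
 * there is nothing to fragment.  The stage shown (add 1 to every byte)
 * is a placeholder for real work. */
#define BUF_SIZE 512
static unsigned char buf_a[BUF_SIZE], buf_b[BUF_SIZE];

static void stage(const unsigned char *src, unsigned char *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (unsigned char)(src[i] + 1);
}

/* Run `steps` stages A->B, B->A, ...; input starts in buf_a, and the
 * returned pointer names the buffer holding the final output. */
unsigned char *run_pipeline(int steps)
{
    unsigned char *src = buf_a, *dst = buf_b;
    for (int i = 0; i < steps; i++) {
        stage(src, dst, BUF_SIZE);
        unsigned char *tmp = src;   /* swap roles for the next step */
        src = dst;
        dst = tmp;
    }
    return src;                     /* latest output after final swap */
}
```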

    If you eliminate all but one or maybe two uses of malloc, the remaining ones will be far easier to track, making it possible to validate that the free()s are being called and that fragmentation will be limited. You can’t model dozens of mallocs and frees from multiple threads, in any order, from any place, but you can if there are only one or two places.

    So you should avoid the temptation of “Well, I’m already using malloc here because I have to, so I should just use it everywhere else”. No, you should replace malloc – or for that matter every non-determinism – with determinism every place it is possible.

    If you need a heap, use malloc since that will have been tested (but read the details from your OS docs!). But you can often get away with any of several other memory structures which are simpler and thus can be validated far more easily.

  168. P.S. @ LI says:

    @B. S.: Ah, yes: determinism. As with anything, make sure that you understand the characteristics of your OS well, and design within its limitations. Only if the limitations are debilitating should you create something of your own that you can live with.
    If you are using an OS to design something that requires determinism, and the behavioral documentation is inadequate, you are using the wrong OS.

  169. S.T. @ LI says:

    Many systems implement dynamic memory allocation in a non-deterministic way, so it is unacceptable for hard real-time systems. In addition, memory leakage is another big problem which dynamic memory users may experience. Some people have worked on a real-time dynamic memory allocator, called “TLSF” (Two-Level Segregated Fit), which provides real-time response for memory allocation and deallocation. There are also descriptions of the working principles of other real-time allocators at their site.

  170. E.D.J. @ LI says:

    @S.: Of course, it helps to have a crisp (non-ambiguous) — preferably formal — definition of “determinism.” The real-time practitioner community uses (U.S. Supreme Court Justice) Potter Stewart’s model: “I don’t know what [pornography] determinism is, but I know it when I see it.”

  171. P.L. @ LI says:

    @K. – totally agree, your design is much more modern than what I was thinking – I’d suggest running the GUI on a different hardware platform and using a communications channel with a clearly defined protocol. If it’s hard-realtime instrumentation, then I’d do things using 1) or 2) below.

    I tend to mentally break memory allocation into classes:

    1) Statically allocated

    2) Statically allocated, with a runtime limit on dynamically allocated data structures. What I mean by this is that one component can’t eat all the system’s memory and cause other components to fail.

    3) Dynamically allocated Malloc / Free everything on the heap.

    The idea of 2) being you statically allocate buffers with a known size for each data structure which may grow. Just like a character buffer on a UART.

    3) I can’t imagine doing this for any kind of hard real-time instrumentation. I’d at least do 2), where I determined sane maximum run-time quantities so one producer couldn’t gobble all the memory.
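    Class 2) can be sketched as a statically sized ring buffer, like the UART character buffer mentioned above (the capacity here is an arbitrary example):

```c
#include <stdbool.h>
#include <stddef.h>

/* A statically sized ring buffer: the structure "grows" at run time only
 * up to its fixed capacity, so one producer can never eat all of the
 * system's memory and cause other components to fail. */
#define RING_CAPACITY 64

static unsigned char ring[RING_CAPACITY];
static size_t head, tail, count;

bool ring_put(unsigned char c)      /* producer side */
{
    if (count == RING_CAPACITY)
        return false;               /* full: this producer hit its limit */
    ring[head] = c;
    head = (head + 1) % RING_CAPACITY;
    count++;
    return true;
}

bool ring_get(unsigned char *c)     /* consumer side */
{
    if (count == 0)
        return false;               /* empty */
    *c = ring[tail];
    tail = (tail + 1) % RING_CAPACITY;
    count--;
    return true;
}
```

    The failure mode is explicit and local: when the buffer is full, ring_put() returns false to that one producer instead of starving the rest of the system.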

  172. G.P. @ LI says:

    In systems where you may not get away without dynamic memory allocation the architecture should account for that, and act in a defined way. For example, on a TV set the user may tolerate the loss of the digital program guide if audio, video and channel change still work. Quite different requirements from a safety-critical system, but still the same process of defining outcomes from failures.

  173. J.F. @ LI says:

    I am finding myself more and more frequently writing my own application-specific version of “malloc” to ensure that I know exactly what it does and I am in complete control. I am probably just a glutton for punishment….sigh….

  174. T.O. @ LI says:

    @K. QNX OS has a nice approach to running critical and non critical processes on the same system, called Adaptive Partitioning.

  175. A. @ LI says:

    With the demands on embedded systems for enhanced capabilities (especially in multicore environments), I believe that dynamic memory allocation is absolutely necessary. With enhanced capabilities comes compliance with modern standards (such as DO-178).

    For a good discussion on this topic see the article “Memory Allocation Strategy in Safety-Critical Mil Systems” at the link: http://www.cotsjournalonline.com/articles/view/101217

  176. M. @ LI says:

    Yes, I use dynamic memory allocation. It’s a good technique for utilizing memory space.

  177. T. @ LI says:

    In my experience, it is not so much a question of whether it can be used as it is a matter of whether the application necessitates it. In about 25 years of engineering embedded designs, I have had only one situation where dynamic memory was actually necessary. In that particular design, I rewrote the standard C library malloc function to prevent some of the problems that could have occurred, at the expense of slightly slower execution. Most of my work has been on 8 and 16 bit micro-controllers using only internal RAM. I would say that if the application justifies it, then dynamic allocation and de-allocation can be used, but with a great deal of caution.

  178. J.S. @ LI says:

    Malloc rewrites have to be a concern. If you’re doing a real-time application with hard scheduled deadlines, then dynamic memory management could get in the way. If a scheduled deadline can override – that is, take control as needed – dynamic memory management could be included in a design. That means dynamic memory would not be used by those routines that must run at a scheduled time. There would be the overhead of a context switch to those routines, which then run to perform the scheduled task(s).

  179. A. @ LI says:

    I think dynamic memory allocation is unavoidable in some embedded systems which are not hard real-time, like DTV and STB (which are rich in CPU, memory, etc.).

    We will not know how much memory we require during system initialization, object creation, function initialization, and so on.
    For example, in DTV I don’t know in advance stream-specific information such as programs, descriptors in tables, EPG data, etc.

    So use the resource diligently when required.

  180. M. @ LI says:

    Most of my embedded experience has been with safety-critical embedded applications, such as air data computers, navigation, and defense systems. In such applications, it is common practice to avoid dynamic memory allocation because of the high cost of testing and certification. A memory leak in an air data computer could cause an aircraft to lose its airspeed, altitude, and similar information. So dynamic allocation is avoided to ensure that the resulting system is testable and predictable. This does not mean that all embedded applications cannot use dynamic allocation. If I were doing a consumer product with a simple GUI application, I would prefer using an OO system, and most of those have libraries which use dynamic memory allocation. I have not yet had to write any embedded applications which demanded dynamic memory allocation.

  181. D. @ LI says:

    I would strongly recommend against using dynamic memory allocation in C or C++ embedded systems other than in the initialization phase. The reason is memory fragmentation, which isn’t so much of a problem in PC applications with virtual memory available, but is a killer in long-running embedded systems. Memory leaks can also be difficult to eliminate completely in complex C systems.

    For non-critical embedded systems without hard real-time requirements, Java (or C# on a Microsoft platform) may be more suitable. These languages provide copying garbage collectors, which avoid fragmentation (provided you don’t allocate and free large objects) as well as memory leaks.

  182. D. @ LI says:

    I guess I have to chime in when I read about hard realtime, garbage collection and Java. Our patented hard realtime, preemptible, multithreaded, garbage collector technology is designed specifically to support critical embedded systems. Contrary to general consensus, it does not use copying or compaction, it does not fragment, and it is deterministic with the ability to do worst-case-execution-time analysis. As has been noted elsewhere in this thread, and is well documented, memory management is only one aspect of deterministic and safety critical systems. We address many other aspects as well.
    Our products specifically are targeted at safety-critical applications. We believe that it is safer to use a programming language and runtime environment that eliminates pointer arithmetic errors, program stack errors, null pointer errors, memory leaks, and similar errors, than a so-called “stronger” system-level language that includes direct pointer manipulation and programmer-controlled memory freeing. We also believe that static analysis plays an important role in good software development practices and results in more reliable programs.
    I know there is a lot of disbelief out there, but our technology is not new. We have been an operating business for over 10 years. There are tens of thousands of devices using our products. It may be a question of personal choice whether to use C/C++ for safety-critical and realtime systems or not. But if someone wants to explore using Java as an alternative that eliminates the type of memory-related mistakes I just mentioned, to reduce their development time and increase the reliability of their code, I would encourage that person to review the technical papers on our website, to contact me directly for further information, and to obtain a personal edition license of our products.
    My apologies for the blatant product pitch, but I am responding to what I perceive as misinformation that directly contradicts some of the exact reasons our company, aicas, was founded.
    my contact info: David Beberman, dbeberman@aicas.com.

  183. J. @ LI says:

    Interesting discussion.
    Short answer: Don’t.

    Long answer:
    As many have pointed out, dynamic memory allocation is almost taboo for real-time embedded systems. This is generally true when referring to a traditional global heap, using the allocate/free mechanisms available in C and C++. Memory fragmentation, due to the non-deterministic lifetime and granularity of data objects, makes it unacceptable to use DM for mission-critical systems.

    I design bare-metal systems with over 180K lines of C code for mission-critical industrial systems, using no heap. All real-time tasks have static allocation.
    But it is also true that such systems make wide use of FIFOs, transmit/receive buffers, and queues, as Mattias pointed out.
    You have to analyse the usage data patterns and prove that you will never run out of memory for the critical part of the system.
    However, use of fixed-block heaps and dynamic allocation for the non-critical portions of the code, like sockets, servlets, and user-interface objects, can reduce the RAM footprint of the system without jeopardizing the core system. In this way, you can even use more than one heap – one for each concern that has a different usage pattern.

    When designing deeply embedded systems, you need to know the details of memory allocation and the RAM usage patterns and behavior over extended periods of time. The same applies to automatic data objects that allocate RAM from the stack through long call chains.

  184. D. @ LI says:

    I would encourage people to please read what I just wrote regarding C/C++ versus garbage collected languages and our realtime garbage collector technology, in the above discussion thread. I would expect that many of us in this group have written or currently write large real-time systems (myself included). Analysis of deeply embedded systems memory usage is a requirement regardless of the language in use, as is worst-case-execution-time analysis, and the use of static analysis tools. Academic papers on the work-paced automatic memory management technology can be found on our website under the technology section.

  185. R. @ LI says:

    I prefer preallocating memory (fixed at build time or at startup; the result is the same) and using managed memory pools per application. That gives us a lot of confidence that this is one issue we will not have to debug. One issue that no one has touched on is resource deadlock. When dynamic memory is shared between applications and there is not enough for everything at once (a legitimate case in resource-limited systems), the code must be robust. Each application must assume that any allocation can and will fail. It must be able to back out, release any additional resources it has allocated, and suspend until it can safely try again. It may be helpful to have a tool that analyzes the dynamic memory pool and identifies the owner of each allocation; it should also provide statistics for identifying fragmentation.
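    The back-out discipline described above can be illustrated with a small sketch. This is a hypothetical example (the function name and buffer sizes are invented): an operation that needs two resources acquires them in order, and if the second acquisition fails it releases the first before reporting failure, so the caller can suspend and retry without leaking anything.

    ```c
    #include <stdbool.h>
    #include <stdlib.h>

    /* Hypothetical operation needing two buffers.  On any failure,
     * everything acquired so far is released before returning, so the
     * caller can back off, suspend, and safely try again later. */
    bool start_transfer(size_t hdr_len, size_t body_len)
    {
        char *hdr = malloc(hdr_len);
        if (hdr == NULL)
            return false;            /* nothing acquired yet, nothing to undo */

        char *body = malloc(body_len);
        if (body == NULL) {
            free(hdr);               /* back out the first allocation */
            return false;            /* caller suspends and retries later */
        }

        /* ... perform the transfer using hdr and body ... */

        free(body);
        free(hdr);
        return true;
    }
    ```

    The key point is that partial acquisition is never left dangling: every failure path releases exactly what that path has already taken.
    
    
    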

  186. C. @ LI says:

    I also prefer pre-allocating memory. But, going back to the original question, I think the simple answer is: it depends on the application. The key issue in real time systems is determinism. If you can use dynamic memory allocation and still ensure determinism, you should be alright. I like to play it safe and pre-allocate memory.

  187. D. @ LI says:

    It’s all horses for courses really. I’ve used pre-allocated arrays of objects with in-place construction in some circumstances, written specialist memory managers by overloading operator new sometimes and used ordinary heap management other times. It really boils down to the size and dynamics of the system concerned.

    One thing I am always dubious about, though, is garbage collection and the languages that rely upon it. IMO the control of object lifetimes should be in the hands of the engineer, not a compiler run-time system, and relaxed disposal schemes actually need more thought than simply using new and delete or malloc and free.

  188. R. @ LI says:

    “Mise en place” is a French phrase meaning “everything in its place”, typically used in cooking: the technique of having all ingredients pre-measured, chopped, sifted, etc., in little bowls around the work area. This allows the chef to create his masterpiece dishes without worrying about the (non-deterministic) ingredient preparation. You see this on TV cooking shows all the time.

    A similar philosophy can be used in memory allocation. Take the actual allocation and freeing of the memory out of the critical response path. Do this by always having enough memory “chunks” pre-allocated for usage by the real-time events. The “softer realtime tasks” can ensure there are memory blocks available at all times, and can do the garbage collection and defragmentation.
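    That philosophy can be sketched as a single-producer/single-consumer ring of pre-allocated chunks. This is an illustrative sketch, not code from the thread: the critical path only pops a ready chunk in O(1), while a soft-realtime task keeps the ring topped up outside the time-critical window. All names and sizes here are assumptions.

    ```c
    #include <stddef.h>

    /* "Mise en place" for memory: the real-time consumer only pops
     * pre-prepared chunks; a soft-realtime producer refills the ring.
     * Single producer, single consumer assumed. */
    #define RING_SLOTS 8
    #define CHUNK_SIZE 128

    static char chunks[RING_SLOTS][CHUNK_SIZE]; /* statically reserved storage */
    static void *ring[RING_SLOTS];
    static volatile unsigned head, tail;        /* head: consumer, tail: producer */

    /* Soft-realtime task: keep chunks available, outside the critical path. */
    int ring_refill(void *chunk)
    {
        unsigned t = tail;
        if (t - head == RING_SLOTS)
            return 0;                           /* ring already full */
        ring[t % RING_SLOTS] = chunk;
        tail = t + 1;
        return 1;
    }

    /* Critical path: O(1), no allocation, no blocking. */
    void *ring_take(void)
    {
        unsigned h = head;
        if (h == tail)
            return NULL;                        /* replenisher fell behind */
        void *chunk = ring[h % RING_SLOTS];
        head = h + 1;
        return chunk;
    }
    ```

    Garbage collection and defragmentation, if needed at all, then happen entirely on the producer side, never in the real-time response path.
    
    
    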
