Dynamic Memory Allocation in Embedded Apps?

shootTheMessenger asks: "My company is porting our C++ Windows app to C in an embedded device and the question of whether to use dynamic memory allocation continues to come up. So far I have resisted malloc/free use but it gets tedious having the same argument with the next set of managers to take an interest in the project. Is there a definitive answer on the subject, especially one to counter the 'we have plenty of RAM - 16MB - so why not use dynamic allocation' argument? A quick google search finds that some sites frown on allocations within embedded applications, while others say it is OK in some contexts and yet others hack around it with pseudo-static allocations. How do you feel about this particular subject?"
This discussion has been archived. No new comments can be posted.


  • by HotNeedleOfInquiry ( 598897 ) on Wednesday November 16, 2005 @08:32PM (#14048618)
    Over to comp.arch.embedded

    Lots of seasoned pros that can give you a good answer.
    • What? c.a.e? Are you saying... you don't trust a bunch of sweaty 15 year olds to give advice on serious embedded coding?

      "Hey d00d- why not just write it in Python? Python is am embedded lang, and you can write hard math parts in C no prob mmkay?"

      "Stfu! Perl > C anyday."

      "I heard you could do embedded stuff in PILOT, with a PILOT interpreter running as a kernel module on Linux.... I know a guy who know a guy who runs Linux, so I would reccomend it holehartedly!!1"
  • Tradeoff (Score:4, Insightful)

    by addaon ( 41825 ) <addaon+slashdot.gmail@com> on Wednesday November 16, 2005 @08:36PM (#14048636)
    If you have dynamic memory allocation, some day you're going to run out. If you're prepared to handle that case, either by (a) crashing or (b) recovering, then doing dynamic memory allocation will save you a good amount of development effort for a cost you've already decided to pay. If you're not going to allow (a), the question is whether (b) is more or less effort than just doing static allocation. That depends heavily on exactly what you're doing... but 16MB of memory is gargantuan, for many things.
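A minimal sketch of what option (b), recovering, can look like (the names here are hypothetical, not from the original post): the allocation wrapper reports exhaustion explicitly, so each call site must decide how to degrade instead of dereferencing NULL.

```cpp
#include <cstdlib>

// Hypothetical sketch: an allocation wrapper that reports exhaustion
// explicitly, so each call site chooses between option (a)
// (reset/crash) and option (b) (degrade gracefully).
enum AllocStatus { ALLOC_OK, ALLOC_OUT_OF_MEMORY };

AllocStatus checked_alloc(std::size_t n, void **out) {
    *out = std::malloc(n);
    if (*out == NULL) {
        return ALLOC_OUT_OF_MEMORY;  // caller drops the request, logs, retries later...
    }
    return ALLOC_OK;
}
```

The point is that the failure path is part of the interface, so "some day you're going to run out" becomes a handled case rather than a crash.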
    • Agreed, there is no absolute answer here. In many cases the overhead involved with malloc() or whatever you're using is too high, or the data structure is simple enough that you don't need it in the first place. I'm used to dealing with 2-4K of memory, in which case the cards are pretty much all laid out in front of me; only in special situations would I use dynamic allocation.

      However, if you're doing a complex application and have 16 megs of memory (HUGE!) with an ARM or some higher end processor that has t
    • Re:Tradeoff (Score:2, Insightful)

      by ksheff ( 2406 )
      I'm sure anyone doing DOS application development a couple decades ago would have loved a flat address space with 16MB of memory. Many applications used dynamic memory allocation then, so what's the problem with modern embedded applications? If you're getting lots of memory fragmentation, either the implementation of free() on the dev kit is broken (like it was with a version of Microsoft C in the late 80s), and/or you need to re-think the design of your application. I guess many programmers don't care mu
      • Re:Tradeoff (Score:3, Insightful)

        by bunratty ( 545641 )
        One problem is that it's often desirable for embedded applications to behave predictably. If you're using malloc, it can sometimes take more time than others, and the real-time behavior is not deterministic. Another problem is that embedded systems need to be very reliable over long periods of time with no user intervention. The smallest memory leak could lead to crashes or other erratic behavior.

        When I was programming embedded systems and was lucky enough to have actual dynamic memory allocation, I reser

        • I'll put in a me too on this, at least for anything more than a toy.

          In the stuff I've worked on, items like linked lists were really an array with a class wrapped around it to provide equivalent functions. In the end it was finite (60 items) and as a result had to raise an exception when filled.

          But this meant that the memory could be accounted for at all times, instead of running away or being orphaned.

          In a deterministic project, the keyword "new" is always replaced by finite static instances of obje
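The array-backed "list" described above might look like the following sketch (FixedList is a hypothetical name; the compile-time capacity plays the role of the 60-item limit): storage is fully accounted for up front, and push() reports failure instead of allocating.

```cpp
#include <cstddef>

// Hypothetical sketch of the pattern described above: a "list" that is
// really a statically sized array wrapped in a class. Capacity is fixed
// at compile time, so memory is fully accounted for; push() reports
// failure instead of allocating more.
template <typename T, std::size_t Capacity>
class FixedList {
public:
    FixedList() : count_(0) {}
    bool push(const T &item) {          // false == "exception when filled"
        if (count_ == Capacity) return false;
        items_[count_++] = item;
        return true;
    }
    std::size_t size() const { return count_; }
private:
    T items_[Capacity];                 // all storage is static/automatic
    std::size_t count_;
};
```

Because the container can never grow, memory can never run away or be orphaned, exactly as the comment describes.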
        • One problem is that it's often desirable for embedded applications to behave predictably.

          Strange. This could be construed to mean that it doesn't matter whether or not a full blown desktop or server application behaves predictably. Since nobody seems to think this is wrong, something is seriously broken in this industry.

          Another problem is that embedded systems need to be very reliable over long periods of time with no user intervention.

          This is next to impossible in C, the reality of the software industry
          • I have personally created several extremely reliable apps in embedded hard real time systems. It is not easy.

            You don't do it by coding the way they tell you to in CS school (by and large). And you don't do it by coding the way many self-taught programmers hack it up. It's a combination of a good CS education and experience.

            C has a lot of problems but the alternatives all have problems of their own. C++ is not straightforward and can lead to over-reliance on the heap. There are ways to work around t

            • You don't do it by coding the way they tell you to in CS school

              I wish that was true. They don't tell you about coding in CS school at all, which is a shame. Sometimes they throw buzzwords at you and call that coding, which is even worse. So yes, due to lack of any serious programming being taught, writing robust/working/efficient, basically just good software takes a background in CS and then lots of experience.

              C...C++...Java

              Was that supposed to be an exhaustive list of all programming languages? I cert
    • The larger problem, in my mind, is potential memory leaks. In an embedded application, presumably, this thing is going to run forever. Also, not dynamically allocating memory gives you more deterministic behavior, so you can put a little more faith in your testing.

      Of course, a lot will depend on how many limitations avoiding dynamic memory will impose on your system. Keeping it to a minimum in an embedded system is always a good idea.

  • Why? (Score:3, Interesting)

    by ObsessiveMathsFreak ( 773371 ) <obsessivemathsfreak.eircom@net> on Wednesday November 16, 2005 @08:44PM (#14048681) Homepage Journal
    My company is porting our C++ Windows app to C in an embedded device and the question of whether to use dynamic memory allocation continues to come up.

    OK, I realise I know precious little about embedded devices, but exactly why do you even need to port the code from C++ to C. Surely there's a C++ compiler for the device, but if there isn't, naturally I see the issue.

    Secondly, dynamic memory in an embedded device?! Something seems awry here. Would preallocating memory work just as well? You, in theory, know all the parameters and tolerances beforehand.

    Lastly, going back to the first paragraph, I'd keep away from C if I could, especially considering that the C++ system is likely object oriented. On top of that, new/delete is much superior to malloc/free. But that is just my humble C++ centric opinion!
    • Re:Why? (Score:3, Interesting)

      I second sticking with C++. The poster may have a good reason for switching to C, but he should have explained it, because the difference between C and C++ is extremely relevant to his question! Thanks to RAII and smart pointers like boost::scoped_ptr (see boost.org) C++ is way ahead of C for using dynamic memory allocation safely and readably. Switching to dynamic allocation will likely involve much greater cost in bugs and readability if C is used instead of C++.

      Also, if performance is important, it'

      • One problem with C++ is that it's difficult to know that an operation always will take the same amount of time. There can be quite a lot going on in the background whenever you create a new instance of a class, for example, and figuring that out can be almost impossible (especially if things like overloading have been used). For embedded applications (where you frequently must have an upper bound on reaction time) this can be a deal-breaker.

        Of course, if that is the case, he shouldn't be looking at dynamic
        • Re:Why? (Score:2, Insightful)

          C++ imposes hardly any run-time cost, and in most cases none at all, if you turn off exception handling and RTTI. This has been an explicit design goal of C++ since the beginning. Any problem that forces you into the situation you describe (which I think is inheritance with virtual functions) would require an equally complex C solution which would be harder to read and would probably compile to slower code since C++ compilers use implementation tricks that can't be expressed in C.

          The proof of this is that
        • Re:Why? (Score:5, Informative)

          by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday November 17, 2005 @12:08AM (#14049699) Journal

          One problem with C++ is that it's difficult to know that an operation always will take the same amount of time.

          Absolute nonsense. C++ doesn't have a garbage collector or anything like that running that can introduce random delays. The same bit of code following the same path will take the same amount of time to execute every time... just like C (well, assuming you don't run into cache misses, etc., but those typically don't crop up on embedded processors).

          There can be quite a lot going on in the background whenever you create a new instance of a class, for example, and figuring that out can be almost impossible (especially if things like overloading have been used).

          Dead wrong again. If you know the language you can quite easily see what precisely is going to happen. You might have to go read the constructor, and if the class contains other objects you might have to read their constructors, but all of that code is code that would have to be executed in a C implementation as well -- you have to initialize your data somewhere.

          C++ does impose some "hidden" costs, but the language is specifically designed to make sure they're insignificant. Virtual function invocations have a hidden cost: an extra pointer dereference, which is dwarfed by the cost of the function call, even if the function is a no-op. If you're going to worry about that cost, you should probably avoid function calls altogether. The biggest potential hit is exceptions. The compiler has to generate extra code in every function to handle cleanup in the event of an exception, and it can cost a bit. In practice, not using exceptions also costs something, because you have to write a lot more manual error handling, which also has to be compiled in; still, stack frames with exception unwinding may add noticeable run-time to functions that don't do much but do create objects that must be destroyed. If that's a concern, just compile with -fno-exceptions and write the error-handling code yourself. Some people like to turn off RTTI also, but that's actually very cheap, and it allows for some very clean solutions to complex problems (but use it sparingly, because it's easy to abuse).

          The most likely source of apparent non-determinism in C++ code is the same in C code: memory allocation and deallocation. One way to fix that is to get a real-time implementation of malloc()/free(), but even without doing that, C++ provides great tools for making allocation more reliable: by overloading 'new' on key classes and implementing a pooled allocation scheme, you can ensure that allocations and deallocations are both constant-time and faster than any general-purpose allocator could be.
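A hedged sketch of the pooled-allocation idea above, assuming a hypothetical Message class: overloading operator new/delete to draw from a fixed static pool through a free list makes both operations constant-time and independent of general heap state.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical Message class whose operator new/delete draw from a
// fixed static pool via a free list: allocation and deallocation are
// both O(1) and never touch the general-purpose heap.
class Message {
public:
    int id;

    static void *operator new(std::size_t) {
        init_pool_once();
        assert(free_head_ != NULL && "pool exhausted -- size it at design time");
        Slot *s = free_head_;
        free_head_ = s->next;
        return s;
    }
    static void operator delete(void *p) {
        if (p == NULL) return;
        Slot *s = static_cast<Slot *>(p);
        s->next = free_head_;          // push the slot back on the free list
        free_head_ = s;
    }

private:
    union Slot {                       // a slot is either a free-list link or object storage
        Slot *next;
        char storage[sizeof(int)];     // big enough for Message's data members
    };
    static const int kPoolSize = 32;
    static Slot pool_[kPoolSize];
    static Slot *free_head_;
    static bool inited_;
    static void init_pool_once() {
        if (inited_) return;
        for (int i = 0; i < kPoolSize - 1; ++i) pool_[i].next = &pool_[i + 1];
        pool_[kPoolSize - 1].next = NULL;
        free_head_ = &pool_[0];
        inited_ = true;
    }
};

Message::Slot Message::pool_[Message::kPoolSize];
Message::Slot *Message::free_head_ = NULL;
bool Message::inited_ = false;
```

Because freed slots go back on the free list in LIFO order, the next allocation reuses the most recently freed slot; there is no search, no coalescing, and no fragmentation.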

          Where C++ is a big win is in the libraries, especially the template libraries. The STL's sort routine, for example, is usually significantly faster than qsort(), and never slower. High-performance, typesafe, THOROUGHLY DEBUGGED collections implementing a variety of different data structures are a big win for any environment (though platforms with very tight constraints on code size may need to avoid them, since they can bloat the binary -- not likely an issue for our friend with 16MB RAM).
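A small illustration of the sort comparison above (function names are made up): both calls produce the same result, but std::sort's comparison can be inlined by the compiler, while qsort() pays an indirect function call per comparison.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdlib>

// qsort() goes through a function pointer for every comparison...
int compare_ints(const void *a, const void *b) {
    int x = *static_cast<const int *>(a);
    int y = *static_cast<const int *>(b);
    return (x > y) - (x < y);
}

void sort_with_qsort(int *v, std::size_t n) {
    std::qsort(v, n, sizeof(int), compare_ints);
}

// ...while std::sort is type-safe and its comparison (here, the
// default operator<) can be inlined at the call site.
void sort_with_stl(int *v, std::size_t n) {
    std::sort(v, v + n);
}
```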

          For embedded applications (where you frequently must have an upper bound on reaction time) this can be a deal-breaker.

          For _hard_real-time_ applications, you mean -- lots of embedded apps are not real-time, much less hard real-time. And if you need that level of assurance, you're not going to trust plain C, either. For hard real-time apps what you do is:

          1. Pick a processor and platform that has guaranteed maximum instruction times (which means you want to avoid a lot of cache memory and no deep pipelines that may stall badly on a wrong branch prediction).
          2. Write your code in a language that compiles to assembler/machine code.
          3. Identify the time-critical sections and calculate the cycles that they will consume. (Well, actually you profile first -- it's only necessary to actually count cycles if you're getting close to the limit).
          • Absolute nonsense. C++ doesn't have a garbage collector or anything like that running that can introduce random delays. The same bit of code following the same path will take the same amount of time to execute every time... just like C (well, assuming you don't run into cache misses, etc. but those typically don't crop up on embedded processors.).

            Cache misses crop up on many embedded processors. But let's ignore that. The real problem is that it's harder to estimate timing up front in C++ than in C. If you have a function and you pass it an object, is it going to call the constructor once? Twice? Once and the operator equals once? It differs by compiler.

              If you have a function and you pass it an object, is it going to call the constructor once? Twice? Once and the operator equals once? It differs by compiler.

              No, it doesn't differ by compiler, unless the compiler is broken. The ISO C++ standard defines the semantics quite precisely.

              To answer the specific question: Pass by value? It will call the copy constructor once. Period. And when the function terminates, it will call the destructor once. But why would you pass objects by value? Do you commonly pass structs around by value in your C code? I've often wished C had an equivalent to const reference parameters, so I wouldn't have to pass pointers.

              • No, it doesn't differ by compiler, unless the compiler is broken. The ISO C++ standard defines the semantics quite precisely.

                To answer the specific question: Pass by value? It will call the copy constructor once. Period. And when the function terminates, it will call the destructor once. But why would you pass objects by value? Do you commonly pass structs around by value in your C code? I've often wished C had an equivalent to const reference parameters, so I wouldn't have to pass pointers. That's probably
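The claim above can be checked with a small instrumented class (Tracked is hypothetical, written for illustration): passing an lvalue by value invokes the copy constructor exactly once, and passing by const reference invokes it not at all.

```cpp
// Tracked counts its own copy-constructions, making the calls visible.
struct Tracked {
    static int copies;
    Tracked() {}
    Tracked(const Tracked &) { ++copies; }
};
int Tracked::copies = 0;

// Pass by value: one copy-constructor call per invocation.
void by_value(Tracked t) { (void)t; }

// Pass by const reference: no constructor calls at all.
void by_const_ref(const Tracked &t) { (void)t; }
```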

                  Read Effective C++, and you can see all the weird ways constructors and destructors get called. It's unobvious, and it can be a huge performance hit.

                  I read Meyers' book when it first came out and found it tedious and often wrong. Perhaps later editions got better, but I haven't looked.

                  As far as the "unobviousness" goes, I guess I just have to disagree, with a caveat. I've been writing C++ code professionally since 1992 or thereabouts, so it's possible that what's obvious to me is not obvious to others



                  It's not that simple. Read Effective C++, and you can see all the weird ways constructors and destructors get called. It's unobvious, and it can be a huge performance hit.


                  Obviously you know that, so no one will convince you that you only think you know that :D

                  Read Scott Meyers again and you will figure out that most of what you find "weird" is only weird if you lack knowledge. It's completely well defined under what circumstances what is happening. The problem with C++ is that this number of circumstances easy
                    This is nonsense. Debugging means you use a debugger. That means you set break points, watch the program running, and break at those break points. There you examine variable values, stack traces, registers, memory dumps, whatever you want. It's completely independent from your language.

                    First off- we're talking embedded here. We're lucky if we have serial output. Forget about being able to run a debugger on it.

                    Secondly- hitting the debugger is rarely the best way to debug a problem. It can help, but

                      Forget about being able to run a debugger on it.

                      /me looks at the blue device shaped like a hockey puck, made by Microchip, that's on his floor.

                      True, it's a little on the feature-poor side compared to a modern desktop debugger, but it certainly exists.
                    • They exist for some platforms, assuming your hardware has the necessary special ports. Not always the case.
            • Seems you have no clue about C++, and that's why you write stuff like this:


              The real problem is that C++ is harder to estimate the time up front than C. If you have a function and you pass it an object, is it going to call the constructor once? Twice? Once and the operator equals once? It differs by compiler.


              First of all: you can't pass an object in C++.
              Second: if you like to do this, you either pass a reference or a pointer.
              Third: if you can't do that, you indeed pass it like it is, and then it's like any other value, e.g. an integer; that means there is a copy created on the stack for the called function. And in this case the CTOR is called, guess what .... ONCE.
                First of all: you can't pass an object in C++.
                Second: if you like to do this, you either pass a reference or a pointer.
                Third: if you can't do that, you indeed pass it like it is, and then it's like any other value, e.g. an integer; that means there is a copy created on the stack for the called function. And in this case the CTOR is called, guess what .... ONCE.

                Umm, you sure as hell can pass an object in C++. Go ahead, code up a test.

                For the third issue- read Meyers' Effective C++, and see all the hidden place

                  I wrote: First of all: you can't pass an object in C++.
                  Second: if you like to do this, you either pass a reference or a pointer.
                  Third: if you can't do that, you indeed pass it like it is, and then it's like any other value, e.g. an integer; that means there is a copy created on the stack for the called function. And in this case the CTOR is called, guess what .... ONCE.


                  You wrote:

                  Umm, you sure as hell can pass an object in C++. Go ahead, code up a test.

                  No, you can't. If you attempt that, a copy is created, as I

            • If you have a function and you pass it an object, it it going to call the constructor once?

              It calls the copy constructor exactly once. If you pass a constant reference instead, nothing is called. But of course, if you *do* pass by value, you know what you're doing.

              Unless of course you're a dimwitted Java programmer retrofitted to crank out C++ because the JVM was larger than the available RAM and you thought you could code anything that had curly braces.

              (Returning an object by value calls the copy constructo
          • Identify the time-critical sections and calculate the cycles that they will consume. (Well, actually you profile first -- it's only necessary to actually count cycles if you're getting close to the limit).

            I've never done hard real-time applications, so I might be wrong here; but it seems to me that trusting profiling is asking for trouble. How do you know that the longest execution time seen by the profiler is the longest possible time? You don't. Which means that you don't know if the function might,

            • it seems to me that trusting profiling is asking for trouble.

              It can be. You have to be smart about it. But actually calculating cycles is so much work that in many cases it makes sense to optimize developer time by trusting the profiler when it says that this very straightforward, non-looping function runs in 1% of the time budgeted for it.

          • The biggest potential hit is exceptions. The compiler has to generate extra code in every function to handle cleanup in the event of an exception, and it can cost a bit.

            I don't have time to write a long comment here, but I'll quickly note that the above isn't necessarily true any more. Modern compilers are pretty good at generating efficient code for exceptions. Although to the programmer they're quite an open-ended design tool and might be thrown in many places if you don't know exactly what someone el

            • That's awesome! What compilers perform this optimization?
              • Sorry, I'm not sufficiently expert to tell you for sure who's implementing the technique already. IIRC, I first saw it described in some sort of research paper. It was a while back now, but I think it came out of somewhere like HP or Microsoft Research.

                I'm pretty sure it's also been mentioned in discussions on a couple of the more serious C++ Usenet groups, so you might like to search archives of things like comp.lang.c++.moderated if you're interested in more details. If nothing else, there's probably a

      • The poster may have a good reason for switching to C, but he should have explained it, because the difference between C and C++ is extremely relevant to his question!
        I agree with your post, but when storage is minimal the CRT wins every time over the C++ runtime. Nothing against C++. You do know you can do OOP using C?

        Enjoy.
          when storage is minimal the CRT wins every time over the C++ runtime

          No, it doesn't. C++ doesn't need much runtime, basically just memory management (same as in C) and some support for exception handling (optional, but worth it, imho). The rest is the standard library, and there you only pay for what you use, as in C.

          If you compare specific parts of the two libraries, C++ wins. Most striking example: C's printf (ever written a program without it?) needs to include all the formatting code for dealing with inte
    • Re:Why? (Score:5, Informative)

      by AiY ( 175830 ) on Thursday November 17, 2005 @12:10AM (#14049711) Homepage
      First off, I'll just note that all my professional development career has been working with embedded systems.

      Second, I'm gonna stay out of the whole "... but <language x> is not an issue because..."

      There are many, many reasons to use C instead of C++ on an embedded system. The two biggest are
        a) Portability
        b) Stability/Conformance

      C has a much better record of consistency between embedded platforms. It carries little baggage so most vendors get it right. If the same codebase has to run on several platforms, the specific C compiler for each platform probably produces code that behaves more consistently than C++ compilers.

      Stability and conformance come from the fact that embedded vendors don't always spend the time keeping their compilers up to date, and you may be forced to use the vendor's compiler.

      As a specific case from my place of business: A new project was developed entirely in an emulated environment, beautifully done in C++. Object oriented and all that - this fit the project very nicely. The code was quite stable and was continuously tested with a dedicated rig that ran a huge battery of sample cases against the product. Then the port to the first target platform began. Turns out that it wasn't x86 and Microsoft C++ didn't run there. No problem - the vendor had a C++ compiler. But wait - it didn't support complicated features like "templates" so there was fiddling. By the time I saw the codebase, there were two sets of #ifdefs - one for WIN32 (which worked) and one for the platform (which did not). Later we took that code base, stripped out a bunch of features, converted to C and had it running on about 4 different embedded architectures.

      Oh, and the low-end platform was a 25MHz 68331 with 2 MB total memory, 1.5MB total that our middleware could use, so effectively about 700 KB. And we used dynamic memory allocation.

      The moral: C compilers are simpler so they tend to be better supported. If your team is more comfortable with some other language and the target supports it, go with that. Remember that maintenance will be the major portion so the more obvious the code is to the developers the better.
      • Oh, and the low-end platform was a 25MHz 68331 with 2 MB total memory, 1.5MB total that our middleware could use, so effectively about 700 KB. And we used dynamic memory allocation.
        Ah. Porting to the Motorola DCT2000 eh? I feel for you.
      • Why were you forced to use the vendor's compiler instead of GCC?
    • One reason I might be tempted to stay away from C++ is that once you use it, you might want to use STL or Boost. C++ compilers for off-the-mainstream architectures aren't usually as good about code reduction as, say, GCC for x86. I've seen embedded communications apps go from 40 KB C executables to 1 MB for C++, all so we would be able to use STL streams and such. The problem... there's only 2 MB of flash on these devices, and we can't upgrade to the new uClinux because we used up so much room.

      If you rea
      • One reason I might be tempted to stay away from C++, is once you use it, you might want to use STL or boost.

        Good idea, stay away from languages with big libraries. That way you have to use third party libraries or implement everything yourself, and then you'll know everything is correct and efficient ;-)

        I'm mostly kidding. I was going to blast you until I remembered that code size, unlike execution time, can't be dramatically improved by making isolated tweaks. I'm guessing it is not pleasant to go t

    • > OK, I realise I know precious little about embedded devices, but exactly why do you even need to port the code from C++ to C. Surely there's a C++ compiler for the device, but if there isn't, naturally I see the issue.

      I have written lots of embedded code, and some of it in C++ (with 512kB RAM!). You may want to be careful with template instantiation and the standard libraries, but otherwise there is little reason not to do it.

      And I notice how old I am suddenly feeling. My first (own!) PC had 4 MB of me
  • by NullProg ( 70833 ) on Wednesday November 16, 2005 @08:55PM (#14048741) Homepage Journal
    No one here can answer it without more information. What's the target platform: Linux, WinCE, QNX? What type of data needs to be accessed? What's the storage media: DOC, compact flash, hard disk?

    You really can't expect to receive a practical answer without giving us more information.

    Enjoy,
  • Depends on whether you need to use it or not. Do you? Only you can tell!
  • Gee Only 16MB? (Score:1, Flamebait)

    by Q-bert][ ( 21619 )
    Wow you're just so limited. Bah, when I started learning C I used dynamic allocation and the computer didn't even have 1MB of memory. So wtf are you complaining about? I wouldn't consider 16MB an embedded device. I call that a high end 386 or low end 486. Hell they used to sell the first pentiums with only 8MB of ram and windows 95 on it. Geeze.

  • by hackwrench ( 573697 ) <hackwrench@hotmail.com> on Wednesday November 16, 2005 @09:10PM (#14048833) Homepage Journal
    Yeah, I always love it when a person comes to a public forum, being purposely vague for whatever reason and asks a question that really depends on the unmentioned.
    For example, if your embedded device runs embedded Windows, I don't really see the problem. On the other hand, a Windows GUI app really can't be ported to the vast majority of embedded devices out there.

    Speaking of embedded Windows, the subsystem is going to affect whatever it is you write.

    Given that you are talking about a "next set of managers", I figure it isn't really a flying leap to consider the possibility that you don't really have a specific embedded device in mind, or perhaps your people have looked at a WinCE device and said "Hey these things pretty much come with 16MB standard nowadays, wouldn't it be cool if we could get our application onto one of these babies!"

    The fact is that your app isn't going to be occupying that device alone, so look at how the rest of the programming treats the device.

    I'm bored now, so bye!
  • Do you know exactly how much memory you need? If so, allocate it statically.

    If you don't, there's no good way to avoid dynamic allocation. The best you can do is allocate a certain amount statically and try to get by with that. If you need more, you have to throw an error. If you need less, you're a memory hog - not a good thing in an embedded program.

    Dynamic memory management isn't so scary in C++. If you absolutely have to use C, God be with you.
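The "allocate a certain amount statically and throw an error if you need more" approach above can be sketched as a bump allocator over one fixed arena (sizes and names here are illustrative, not from the original post):

```cpp
#include <cstddef>

// A statically budgeted bump allocator: one fixed arena, handed out in
// aligned pieces. Exhaustion is an explicit error, never a surprise.
// Individual pieces cannot be freed, which suits init-time allocation.
static char arena[4096];              // the statically budgeted amount
static std::size_t arena_used = 0;

void *arena_alloc(std::size_t n) {
    n = (n + 7) & ~static_cast<std::size_t>(7);       // round up to 8-byte granularity
    if (arena_used + n > sizeof(arena)) return NULL;  // budget exceeded: report, don't crash
    void *p = arena + arena_used;
    arena_used += n;
    return p;
}
```

The NULL return is the "throw an error" step: the caller sees the budget overrun at a defined point instead of via heap corruption later.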
  • Don't do stupid stuff with your hardware that makes it crap out if you copy a "bad file name" onto it.

    I had a dumb-as-shit problem with the RCA (real crappy audio) Lyra. I had some non-English file names, which every OS handles, as does any USB media.

    Unlike every other MP3 player, this one CRASHES on reading the filename. I called RCA's helpline about getting it not to crash. They said "Take it back, we don't know how to help with that".
    • I guess the helpdesk couldn't think of "rename the file to only use ASCII characters" or something similar. Probably not on their script.
      • First to the poster who said "dont buy garbage", it was a gift from my girlfriend, so taking it back was hard...

        As to you about renaming the files, well, the device bootstrapped and scanned all the mp3s and wma's. While booting, it hits the bad names and crashes.

        The USB disk was accessible AFTER booting of the RCA Lyra.

        So, no renaming for you.

        Other than crashing on a "bad file name", it wasn't that bad, but no memory slot (I then got one with a memcard slot).

  • what i would do (Score:4, Insightful)

    by fred fleenblat ( 463628 ) on Wednesday November 16, 2005 @09:44PM (#14049044) Homepage
    It's fine to malloc/free in 16M, just keep a grasp on the situation.

    * install a malloc debug library on all test boxes to find all unfreed, double freed, or overwritten chunks. don't ship until all mem is accounted for.

    * don't allow, or at least discourage, "little" mallocs like strdup() or allocation of singleton structs.

    * do something sensible when malloc starts returning NULL. don't just seg fault or abort.

    * destruction test: malloc 15.5MB (and incr until failure) of space at the start just to establish the limit, then back it down 50K and see how long your embedded app can live in that space and watch how it dies. as you fix things, expect it to die in different places each time.

    * nm (or elfdump these days) your c lib and look for library functions that link to malloc or calloc. if your embedded OS ships with source, you can just grep for it.

    * if your app has a log file, log the value of sbrk(0) (e.g. printf("%p\n", sbrk(0))) every few minutes (or less as appropriate) so you can watch for unexpected growth over time and spikes related to usage.

    just some ideas.
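The destruction test in the list above might start with a probe like this (thresholds and names are illustrative): grow a trial allocation until malloc() fails, freeing each probe, to establish the practical ceiling before backing off and stress-testing in the remaining space.

```cpp
#include <cstddef>
#include <cstdlib>

// Probe for the practical malloc() ceiling: try ever-larger blocks,
// freeing each one, until malloc() returns NULL or the cap is reached.
// Returns the largest size that succeeded (assumes the first probe,
// `start`, succeeds and start >= step).
std::size_t find_malloc_limit(std::size_t start, std::size_t step, std::size_t cap) {
    std::size_t n = start;
    while (n <= cap) {
        void *p = std::malloc(n);
        if (p == NULL) break;      // found the ceiling
        std::free(p);
        n += step;
    }
    return n - step;
}
```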
    • > * don't allow, or at least discourage, "little" mallocs like strdup() or allocation of singleton structs.

      You have no IDEA how hard that is to debug. I have been working on something that has two allocators, and while trying to work out which one to use for a case, I find that a guy has used strdup() completely freely, without thinking about whether that code path needs shared memory or the local process heap. And sadly he got it right, so I have to write yet another 200+ line function duplicating the fun

  • Memory allocation is a black art, and fiddling with it is best left up to the wizards.

    That said, the first article linked to gives some good recommendations on memory allocation. But I would go one step further and say that coding the app using malloc then analyzing the behavior is your best bet.

    For example, code the application using just malloc but tag each block of memory with an identifier so you can figure out where the memory was allocated from. Next, wrap the malloc/free calls with timing code that
  • 'we have plenty of RAM - 16MB - so why not use dynamic allocation'

    It's not just about RAM; it's about speed. malloc() is slow enough, even on, for instance, the PS2 CPU (the EE), to make it worth avoiding whenever possible. If you have to do it, consider writing your own memory manager, or adapting someone else's. malloc() does a lot of complicated stuff you may not need, and in this case simpler may be a lot faster. Also, you can go a long way with pools of fixed-size objects, where the pool mallocs when it n
  • by WolfWithoutAClause ( 162946 ) on Wednesday November 16, 2005 @10:38PM (#14049309) Homepage
    I worked on a *big* embedded telecoms project, with about the same amount of memory available; where initially we used dynamic memory as little as possible. Eventually though, we used it almost everywhere, and the places where we hadn't used it were rather awkwardly written and a source of bugs.

    I never personally saw it fragment to the point of failure, but another engineer said he had had to debug that situation on this system- a particular sequence of allocation did this once (in many years).

    This problem was solved by using a different memory allocator. That was a rare problem on a huge, long lived project.

    Overall, I wouldn't sweat it too much: fragmentation causing the memory allocator to fail is rare enough, and there are things you can do to solve the problem if it does occur. But you'll need a guru to solve it if it does happen.

  • by mcgroarty ( 633843 ) <brian DOT mcgroarty AT gmail DOT com> on Wednesday November 16, 2005 @11:51PM (#14049619) Homepage
    Memory fragmentation is murder on embedded devices. Do your best to avoid dynamic allocation unless static pools result in huge memory waste because your app is highly modal. Where dynamic allocation is a must, set up multiple heaps and find points where you can add code to free everything on a heap when switching modes, to guarantee a fresh slate. If you have tons of small allocations, also consider setting up special heaps that handle fixed-size allocations. Having these parcelled off into their own area goes a long way toward keeping the space where the bigger allocations reside from fragmenting, and you might find it's easier to find points where you can wipe heaps without the small allocations in the way.
    • Fragmentation is a problem if you do small allocs/frees frequently, but then need to do a large alloc. It is possible to design around that. For example, large objects can be preallocated.
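A fixed-size pool of the kind described in this thread can be sketched in a few lines of C. The pool and block sizes here are illustrative; the point is that same-size blocks make alloc and free O(1) list operations and rule out fragmentation within the pool:

```c
#include <stddef.h>

#define POOL_BLOCKS 32
#define BLOCK_SIZE  64   /* must hold at least a pointer */

/* Each block is free-list link or payload, never both at once. */
typedef union block {
    union block *next;            /* valid only while the block is free */
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t pool_storage[POOL_BLOCKS];  /* statically reserved backing store */
static block_t *free_head;

void pool_init(void) {
    free_head = NULL;
    for (int i = 0; i < POOL_BLOCKS; i++) {
        pool_storage[i].next = free_head;  /* push every block on the free list */
        free_head = &pool_storage[i];
    }
}

void *pool_alloc(void) {
    block_t *b = free_head;
    if (b)
        free_head = b->next;
    return b;                     /* NULL when the pool is exhausted */
}

void pool_free(void *p) {
    block_t *b = (block_t *)p;
    b->next = free_head;          /* push back onto the free list */
    free_head = b;
}
```

One such pool per common object size, as the parent suggests, keeps the variably-sized heap traffic out of the way of the big allocations.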
  • If the system can do page remapping (is that the right term?), then dynamic allocation is acceptable for the most part. Non-contiguous free pages can be grouped together to satisfy requests for large contiguous chunks of RAM. malloc() is pretty fast in some implementations. A very, very fast alternative you can try in some cases is alloca(), which in gcc is an intrinsic function that produces only around 2-3 instructions that allocate a variable amount of RAM from the stack, which is freed when the function
  • by pslam ( 97660 ) on Thursday November 17, 2005 @12:14AM (#14049730) Homepage Journal
    My company is porting our C++ Windows app to C in an embedded device

    Why would you do that? There's nothing about C++ that rules it out on embedded devices. I smell a bad vendor toolchain. Rule no.1 for sane embedded device development: USE THE GNU TOOLCHAIN. Then you can use C++ and half your porting task is gone. You can use the same toolchain with an x86 target for testing on PCs. You wouldn't happen to be using Green Hills, or god forbid Tasking would you?

    So far I have resisted malloc/free use but it gets tedious having the same argument with the next set of managers to take an interest in the project.

    Removing dynamic memory management is a noble goal, but it goes deep into your coding style, to the extent that you basically end up with forked code instead of portable code - one version for each target inside big #ifs. One nice alternative I've used in the past is a stack separate from the call stack. You can either allocate a fixed-size stack using malloc (say, 4MB for a task), or from a fixed location such as on-chip RAM. Allocations are strictly last-alloc'd-first-free'd, which actually fits most usage patterns. The key advantage is you can throw away the entire pool in a single call (pop all), for example when an error occurs, or if you run out of memory. This makes error handling much, much simpler than having a ton of deletes depending on how far you got. It also makes allocation overhead extremely small - it's basically just pointer arithmetic. It's just a souped-up "alloca", where the stack isn't the function call stack, so it doesn't go away when the function does.

    Best of all, if you use this method you can have a non-embedded version of the "stack" allocator which just uses malloc instead. I've got an example app which does no dynamic memory allocation linked from my profile (it's a Vorbis decoder).
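The separate allocation stack described above might look something like this minimal sketch (the names and the arena size are invented for illustration; a real one would allocate the arena per task):

```c
#include <stddef.h>

#define STACK_BYTES 4096

static unsigned char stack_mem[STACK_BYTES];
static size_t stack_top;                 /* bytes currently in use */

/* Allocation is just pointer arithmetic: bump the top of the stack. */
void *stack_alloc(size_t n) {
    /* round up to pointer alignment */
    n = (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    if (stack_top + n > STACK_BYTES)
        return NULL;                     /* out of arena space */
    void *p = &stack_mem[stack_top];
    stack_top += n;
    return p;
}

/* Save a position so everything allocated after it can be released
 * in one shot - e.g. on an error path. */
size_t stack_mark(void)            { return stack_top; }
void   stack_release(size_t mark)  { stack_top = mark; }
void   stack_pop_all(void)         { stack_top = 0; }   /* throw it all away */
```

The strict LIFO discipline is the trade-off: you can never free an allocation in the middle, only pop back to a mark.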

  • My company is porting our C++ Windows app to C in an embedded device and the question of whether to use dynamic memory allocation continues to come up. So far I have resisted malloc/free use but it gets tedious having the same argument with the next set of managers to take an interest in the project. Is there a definitive answer on the subject...?

    You're asking if there's a definitive answer on whether your particular application should use dynamic memory allocation?

    Apparently not. If you don't have
  • by AiY ( 175830 ) on Thursday November 17, 2005 @12:37AM (#14049839) Homepage
    Ahh yes, everyone comments with strong opinions without stating important assumptions. The simple answer is "yes, of course you can dynamically alloc memory". The important question not revealed is "why?". Once you have figured out exactly what the memory-related requirements are for your application, then you can determine if you need dynamic allocations.

    To answer that you need to ask questions like:

    Will the app be long-running? If yes, you probably don't want a scheme that will fragment memory, because eventually your heap will be too fragmented to allocate the necessary contiguous blocks and it will have to reset.

    Does the app need to allocate quickly? If yes, then you'll want to avoid allocating at all. Size your buffers ahead of time and never allocate dynamically. Also note that this is not to be confused with hard real-time requirements. Many real-time applications need only bounded time on operations, so an ordinary allocator would perform very well on average and have known maximums because of the bounded initial heap size.

    Do you have to supply your own dynamic allocator? If there is an allocator available on the system, that may be the best route.

    What are the patterns of your allocations? Does the app allocate many small chunks, or a mixture of small and large; are they long-lived or short-lived? Allocation sizes affect the performance and memory usage of dynamic allocators.

    I'll stop now because I can't think of any other good reasons off-hand. Static allocation is the best thing for many reasons - easy to use, easy to analyze, very fast. That's not practical for all applications, so my advice is to go with a simple allocator next - for C, malloc is good because everyone should know how to use it. Don't worry about speed or efficiency too much - performance usually isn't a problem. Look up dlmalloc (Doug Lea's malloc). It's a good public-domain allocator that looks like malloc and works very well allocating small chunks.

    Malloc isn't a great fit if your app is constantly running out of space. In that case, find a good garbage collector or memory reordering scheme. Several garbage collectors make life easy by making allocations fast and solving the problem of free'ing memory.

    Remember, don't pick an allocation scheme based on the problems you *think* you'll have. Pick an allocator whose major benefit matches your major issue.
  • I agree with other posters in that we would really need to know more about the project before we could give the answer that is best for your particular set of problems. However, my first inclination is to say "no" to dynamic memory allocation in an embedded application. Can you provide more details on the device?
  • by kbielefe ( 606566 ) <karl.bielefeldt@gma[ ]com ['il.' in gap]> on Thursday November 17, 2005 @03:39AM (#14050365)
    If you're going for high standards of reliability like DO-178B certification, then the guidelines are usually to statically allocate as much as possible, use malloc if unavoidable, but never free the memory. This goes contrary to everything they drill into you in college, but it makes the maximum memory usage easily measurable and predictable and also makes the execution time of malloc very consistent. And trust me, you find memory leaks fast.

    Yes, it is possible to calculate your memory needs using dynamic allocation and deallocation, but it is a lot harder to prove and a lot easier to make a mistake. If you really can reliably put an upper bound on the amount of memory your app uses, then there is usually no need for dynamic allocation in the first place. If you don't care about predicting your memory usage or malloc execution time, then why are you even asking the question? Just go with whatever is easier. However, consider that some extra effort now will pay off in the long run.

    You have to let go of your assumptions from the non-embedded world. The natural instinct is to save as much memory as possible, since your memory is so limited. Your undergrad algorithms class taught you that dynamic allocation is the way to have the lowest possible memory usage at any given time. However, in an embedded application, unallocated memory is just as wasted as extra memory allocated in a static buffer, but in the latter you always know it is available as soon as you need it.
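The allocate-but-never-free discipline described above can be wrapped so the maximum footprint stays measurable. This is a hypothetical sketch (real certified code would also record call sites and follow the project's policy on allocation failure); the function names are invented:

```c
#include <stdlib.h>
#include <stdio.h>

static size_t total_allocated;

/* Hand out memory that is never returned. Because nothing is freed,
 * the running total IS the high-water mark, and every leak is
 * permanent growth that shows up immediately in testing. */
void *alloc_forever(size_t n) {
    void *p = malloc(n);
    if (p)
        total_allocated += n;
    else
        fprintf(stderr, "alloc_forever: out of memory (%zu bytes)\n", n);
    return p;
}

size_t alloc_high_water(void) {
    return total_allocated;   /* never decreases */
}
```

Comparing alloc_high_water() against the budget after a full test run gives the easily-measured maximum usage the parent describes.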

  • 2 Points:

    1) A lot of clients in the embedded aviation world that still use C like to follow SaferC and MISRA guidelines. SaferC basically says dynamic memory allocation can be bad, but not using it when needed is worse. The point was that a lot of modern algorithms and designs rely on dynamic allocation. If this applies to you, then you will have more problems as a result of trying to fit them into a static allocation methodology than you would have had by using dynamic memory in the first place.

    2) Lockheed
  • One big problem is handling the case when malloc returns 0 for no-more-memory. In hard real time there simply is no way to allow this to ever, ever happen.

    One workaround is to only use malloc when the system boots, or maybe when a port opens (free when the port closes). The benefit is you can guarantee you cover the failure case during testing. For buffers and messages, malloc a fixed number of fixed-size buffers at system boot. Put them on a simple free list. Use them by pulling one off the free list and

  • malloc and glibc (Score:3, Informative)

    by LWATCDR ( 28044 ) on Thursday November 17, 2005 @05:52PM (#14056819) Homepage Journal
    The standard way that glibc works is that the heap grows until the task ends. This can cause problems so yes dynamic memory allocation should be avoided but your question leaves out a lot of variables.
    1. Does the CPU have an MMU.
    2. What OS.
    3. Single or multi tasking?

    Without knowing more no one can give you a firm answer.
  • ...is that you ask the wrong question. Whether people "feel good" or "feel bad" about dynamic allocation in "embedded applications" is completely uninteresting to you. What is interesting is whether it makes sense for your embedded application. And given the overwhelming lack of information you have provided about your project, the answer to that can only be known by you.
  • The question is not easily answered ...

    But the development is easily approached.

    Instead of thinking about whether you write x = malloc(sizeof(X)); you simply do: x = new_X();

    So, you write helper functions that "allocate" you a bunch of memory when you need it. The helper functions of course use dynamically allocated memory and do nothing else but call malloc(). For every allocating function like new_X() you write a deallocation function delete_X() as well, which obviously only calls free().

    So you get a "running" system
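The new_X()/delete_X() convention described above, with a hypothetical struct X for illustration; the value is that every allocation goes through one place, so the underlying scheme (malloc, a pool, static buffers) can be swapped later without touching callers:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical type, invented to show the wrapper convention. */
typedef struct X {
    int  id;
    char name[32];
} X;

/* All knowledge of how an X is obtained lives here. */
X *new_X(void) {
    X *x = malloc(sizeof *x);
    if (x)
        memset(x, 0, sizeof *x);  /* hand back a zeroed object */
    return x;
}

/* ... and all knowledge of how it is released lives here. */
void delete_X(X *x) {
    free(x);                      /* free(NULL) is a no-op, so no guard needed */
}
```

Replacing the malloc/free bodies with pool_alloc/pool_free calls later is then a two-line change.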
  • Couple observations from using C++ in embedded soft-real-time systems (and games):

    * The smaller your subset of C++, the more portable. You can make a perfectly functional C++ app without using operator overloading, RTTI, exceptions, even virtual functions.

    * The smaller your subset of C++, the fewer bugs you'll blame on the compiler that will turn out to be your misinterpretation of the language (this applies unless you're Stroustrup, or maybe the author of more than one C++ compiler)

    * Stack allocations a
