Dynamic Memory Allocation in Embedded Apps? 102
shootTheMessenger asks: "My company is porting our C++ Windows app to C in an embedded device and the question of whether to use dynamic memory allocation continues to come up. So far I have resisted malloc/free use but it gets tedious having the same argument with the next set of managers to take an interest in the project. Is there a definitive answer on the subject, especially one to counter the 'we have plenty of RAM - 16MB - so why not use dynamic allocation' argument? A quick google search finds that some sites frown on allocations within embedded applications, while others say it is OK in some contexts and yet others hack around it with pseudo-static allocations. How do you feel about this particular subject?"
Take your question... (Score:5, Informative)
Lots of seasoned pros that can give you a good answer.
Re:Take your question... (Score:2)
"Hey d00d- why not just write it in Python? Python is am embedded lang, and you can write hard math parts in C no prob mmkay?"
"Stfu! Perl > C anyday."
"I heard you could do embedded stuff in PILOT, with a PILOT interpreter running as a kernel module on Linux.... I know a guy who know a guy who runs Linux, so I would reccomend it holehartedly!!1"
Tradeoff (Score:4, Insightful)
Re:Tradeoff (Score:2)
However, if you're doing a complex application and have 16 megs of memory (HUGE!) with an ARM or some higher end processor that has t
Re:Tradeoff (Score:2, Insightful)
Re:Tradeoff (Score:3, Insightful)
When I was programming embedded systems and was lucky enough to have actual dynamic memory allocation, I reser
Re:Tradeoff (Score:1)
In the stuff I've worked on, items like linked lists were really arrays with a class wrapped around them to provide equivalent functions. In the end each was finite (60 items) and as a result had to raise an exception when filled.
But this meant that the memory could be accounted for at all times, instead of running away or being orphaned.
In a deterministic project, the keyword "new" is always replaced by finite static instances of obje
Re:Tradeoff (Score:1)
Strange. This could be construed to mean that it doesn't matter whether or not a full blown desktop or server application behaves predictably. Since nobody seems to think this is wrong, something is seriously broken in this industry.
Another problem is that embedded systems need to be very reliable over long periods of time with no user intervention.
This is next to impossible in C, the reality of the software industry
Re:Tradeoff -- Don't use C (Score:1)
You don't do it by coding the way they tell you to in CS school (by and large). And you don't do it by coding the way many self-taught programmers hack it up. It's a combination of a good CS education and experience.
C has a lot of problems but the alternatives all have problems of their own. C++ is not straightforward and can lead to over-reliance on the heap. There are ways to work around t
Re:Tradeoff -- Don't use C (Score:1)
I wish that was true. They don't tell you about coding in CS school at all, which is a shame. Sometimes they throw buzzwords at you and call that coding, which is even worse. So yes, due to lack of any serious programming being taught, writing robust/working/efficient, basically just good software takes a background in CS and then lots of experience.
C...C++...Java
Was that supposed to be an exhaustive list of all programming languages? I cert
Re:Tradeoff (Score:1)
Of course, a lot will depend on how many limitations avoiding dynamic memory will impose on your system. Keeping it to a minimum in an embedded system is always a good idea.
Why? (Score:3, Interesting)
OK, I realise I know precious little about embedded devices, but exactly why do you even need to port the code from C++ to C? Surely there's a C++ compiler for the device; if there isn't, naturally I see the issue.
Secondly, dynamic memory in an embedded device?! Something seems awry here. Would preallocating memory work just as well? You, in theory, know all the parameters and tolerances beforehand.
Lastly, going back to the first paragraph, I'd keep away from C if I could, especially considering that the C++ system is likely object oriented. On top of that, new/delete is much superior to malloc/free. But that is just my humble C++-centric opinion!
Re:Why? (Score:3, Interesting)
Also, if performance is important, it'
Re:Why? (Score:1)
Of course, if that is the case, he shouldn't be looking at dynamic
Re:Why? (Score:2, Insightful)
The proof of this is that
Re:Why? (Score:5, Informative)
One problem with C++ is that it's difficult to know that an operation always will take the same amount of time.
Absolute nonsense. C++ doesn't have a garbage collector or anything like that running that can introduce random delays. The same bit of code following the same path will take the same amount of time to execute every time... just like C (well, assuming you don't run into cache misses, etc., but those typically don't crop up on embedded processors).
There can be quite a lot going on in the background whenever you create a new instance of a class, for example, and figuring that out can be almost impossible (especially if things like overloading have been used).
Dead wrong again. If you know the language you can quite easily see what precisely is going to happen. You might have to go read the constructor, and if the class contains other objects you might have to read their constructors, but all of that code is code that would have to be executed in a C implementation as well -- you have to initialize your data somewhere.
C++ does impose some "hidden" costs, but the language is specifically designed to keep them insignificant. Virtual function invocations have a hidden cost: an extra pointer dereference, which is dwarfed by the cost of the function call itself, even if the function is a no-op. If you're going to worry about that cost, you should probably avoid function calls altogether. The biggest potential hit is exceptions. The compiler has to generate extra code in every function to handle cleanup in the event of an exception, and that can cost a bit. In practice, not using exceptions also costs something, because you have to write a lot more manual error handling, which also has to be compiled in. Still, stack frames with exception unwinding may add noticeable run-time to functions that don't do much but do create objects that must be destroyed. If that's a concern, just compile with -fno-exceptions and write the error-handling code yourself. Some people like to turn off RTTI as well, but that's actually very cheap, and it allows for some very clean solutions to complex problems (use it sparingly, though, because it's easy to abuse).
The most likely source of apparent non-determinism in C++ code is the same in C code: memory allocation and deallocation. One way to fix that is to get a real-time implementation of malloc()/free(), but even without doing that, C++ provides great tools for making allocation more reliable: by overloading 'new' on key classes and implementing a pooled allocation scheme, you can ensure that allocations and deallocations are both constant-time and faster than any general-purpose allocator could be.
Where C++ is a big win is in the libraries, especially the template libraries. The STL's sort routine, for example, is usually significantly faster than qsort(), and never slower. High-performance, typesafe, THOROUGHLY DEBUGGED collections implementing a variety of different data structures are a big win for any environment (though platforms with very tight constraints on code size may need to avoid them, since they can bloat the binary -- not likely an issue for our friend with 16MB RAM).
For embedded applications (where you frequently must have an upper bound on reaction time) this can be a deal-breaker.
For _hard_ real-time applications, you mean -- lots of embedded apps are not real-time, much less hard real-time. And if C++ worries you on that score, you're not going to trust C either. For hard real-time apps what you do is:
Re:Why? (Score:2)
Cache misses crop up on many embedded processors. But let's ignore that. The real problem is that it's harder to estimate timing up front in C++ than in C. If you have
Re:Why? (Score:2)
If you have a function and you pass it an object, is it going to call the constructor once? Twice? Once, plus operator= once? It differs by compiler.
No, it doesn't differ by compiler, unless the compiler is broken. The ISO C++ standard defines the semantics quite precisely.
To answer the specific question: Pass by value? It will call the copy constructor once. Period. And when the function terminates, it will call the destructor once. But why would you pass objects by value? Do you common
Re:Why? (Score:2)
Re:Why? (Score:2)
Read Effective C++, and you can see all the weird ways constructors and destructors get called. It's unobvious, and it can be a huge performance hit.
I read Meyers' book when it first came out and found it tedious and often wrong. Perhaps later editions got better, but I haven't looked.
As far as the "unobviousness" goes, I guess I just have to disagree, with a caveat. I've been writing C++ code professionally since 1992 or thereabouts, so it's possible that what's obvious to me is not obvious to others
Re:Why? (Score:2)
It's not that simple. Read Effective C++, and you can see all the weird ways constructors and destructors get called. It's unobvious, and it can be a huge performance hit.
Obviously you know that, so no one will convince you that you only think you know it.
Read Scott Meyers again and you will figure out that most of what you find "weird" is only weird if you lack the knowledge. It is completely well defined under what circumstances what happens. The problem with C++ is that this number of circumstances easy
Re:Why? (Score:2)
First off- we're talking embedded here. We're lucky if we have serial output. Forget about being able to run a debugger on it.
Secondly- hitting the debugger is rarely the best way to debug a problem. It can help, but
Re:Why? (Score:2)
true its a little on feature poor side compared to a modern desktop debugger but it certainly exists.
Re:Why? (Score:2)
Re:Why? (Score:2)
The real problem is that it's harder to estimate timing up front in C++ than in C. If you have a function and you pass it an object, is it going to call the constructor once? Twice? Once, plus operator= once? It differs by compiler.
First of all: you can't pass an object in C++.
Second: if you like to do this, you either pass a reference or a pointer.
Third: if you can't do that, you indeed pass it like it is, and then it's like any other
Re:Why? (Score:2)
Umm, you sure as hell can pass an object in C++. Go ahead, code up a test.
For the third issue- read Meyers' Effective C++, and see all the hidden place
Re:Why? (Score:2)
Second: if you like to do this, you either pass a reference or a pointer.
Third: if you can't do that, you indeed pass it like it is, and then it's like any other value, e.g. an integer; that means there is a copy created on the stack for the called function. And in this case the CTOR is called, guess what
You wrote:
No, you can't. If you attempt that, a copy is created, as I
Re:Why? (Score:1)
It calls the copy constructor exactly once. If you pass a constant reference instead, nothing is called. But of course, if you *do* pass by value, you know what you're doing.
Unless of course you're a dimwitted Java programmer retrofitted to crank out C++ because the JVM was larger than the available RAM and you thought you could code anything that had curly braces.
(Returning an object by value calls the copy constructo
Re:Why? (Score:2)
I've never done hard real-time applications, so I might be wrong here; but it seems to me that trusting profiling is asking for trouble. How do you know that the longest execution time seen by the profiler is the longest possible time ? You don't. Which means that you don't know if the function might,
Re:Why? (Score:2)
it seems to me that trusting profiling is asking for trouble.
It can be. You have to be smart about it. But actually calculating cycles is so much work that in many cases it makes sense to optimize developer time by trusting the profiler when it says that this very straightforward, non-looping function runs in 1% of the time budgeted for it.
Exceptions (Score:2)
I don't have time to write a long comment here, but I'll quickly note that the above isn't necessarily true any more. Modern compilers are pretty good at generating efficient code for exceptions. Although to the programmer they're quite an open-ended design tool and might be thrown in many places if you don't know exactly what someone el
Re:Exceptions (Score:2)
Re:Exceptions (Score:2)
Sorry, I'm not sufficiently expert to tell you for sure who's implementing the technique already. IIRC, I first saw it described in some sort of research paper. It was a while back now, but I think it came out of somewhere like HP or Microsoft Research.
I'm pretty sure it's also been mentioned in discussions on a couple of the more serious C++ Usenet groups, so you might like to search archives of things like comp.lang.c++.moderated if you're interested in more details. If nothing else, there's probably a
Re:Why? (Score:2)
I agree with your post, but when storage is minimal the CRT wins every time over the C++ runtime. Nothing against C++. You do know you can do OOP in C?
Enjoy.
Re:Why? (Score:2)
And C does have virtual functions- it's called a function pointer.
Re:Why? (Score:2)
Exactly, you don't pay for what you don't use in C++. It's all about knowing what the costs are. Granted, there is some really small 2k-4k stuff where it might not make much sense to use C++. Then you almost get into the realm of assembly. But this guy has 16MB to play around with. RAM sizes only
Re:Why? (Score:1)
No, it doesn't. C++ doesn't need much runtime, basically just memory management (same as in C) and some support for exception handling (optional, but worth it, imho). The rest is the standard library, and there you only pay for what you use, as in C.
If you compare specific parts of the two libraries, C++ wins. Most striking example: C's printf (ever written a program without it?) needs to include all the formatting code for dealing with inte
Re:Why? (Score:1)
You'll catch it doing braindead stuff like inserting data into the stream one byte at a time.
No, you don't. You catch it putting single bytes into a buffer (yes, I did look at the code). And these calls can be inlined (while printf("%c",...) cannot). Any more strawmen you want to put up?
Re:Why? (Score:5, Informative)
Second, I'm gonna stay out of the whole "... but <language x> is not an issue because..."
There are many, many reasons to use C instead of C++ on an embedded system. The two biggest are
a) Portability
b) Stability/Conformance
C has a much better record of consistency between embedded platforms. It carries little baggage so most vendors get it right. If the same codebase has to run on several platforms, the specific C compiler for each platform probably produces code that behaves more consistently than C++ compilers.
Stability and conformance come from the fact that embedded vendors don't always spend the time keeping their compilers up to date, and you may be forced to use the vendor's compiler.
As a specific case from my place of business: a new project was developed entirely in an emulated environment, beautifully done in C++. Object oriented and all that - it fit the project very nicely. The code was quite stable and was continuously tested with a dedicated rig that ran a huge battery of sample cases against the product. Then the port to the first target platform began. Turns out it wasn't x86, and Microsoft C++ didn't run there. No problem - the vendor had a C++ compiler. But wait - it didn't support complicated features like "templates", so there was fiddling. By the time I saw the codebase, there were two sets of #ifdefs - one for WIN32 (which worked) and one for the platform (which did not). Later we took that code base, stripped out a bunch of features, converted it to C and had it running on about 4 different embedded architectures.
Oh, and the low-end platform was a 25MHz 68331 with 2 MB total memory, 1.5MB total that our middleware could use, so effectively about 700 KB. And we used dynamic memory allocation.
The moral: C compilers are simpler so they tend to be better supported. If your team is more comfortable with some other language and the target supports it, go with that. Remember that maintenance will be the major portion so the more obvious the code is to the developers the better.
Re:Why? (Score:1)
Re:Why? (Score:1)
Re:Why? (Score:1)
If you rea
Re:Why? (Score:1)
Good idea, stay away from languages with big libraries. That way you have to use third party libraries or implement everything yourself, and then you'll know everything is correct and efficient ;-)
I'm mostly kidding. I was going to blast you until I remembered that code size, unlike execution time, can't be dramatically improved by making isolated tweaks. I'm guessing it is not pleasant to go t
Re:Why? (Score:1)
I have written lots of embedded code, and some of it in C++ (with 512kB RAM!). You may want to be careful with template instantiation and the standard libraries, but otherwise there is little reason not to do it.
And I notice how old I am suddenly feeling. My first (own!) PC had 4 MB of me
Is this a Trick Question? (Score:3, Informative)
You really can't expect to receive a practical answer without giving us more information.
Enjoy,
Re:Is this a Trick Question? (Score:1)
Uh... (Score:1)
Gee Only 16MB? (Score:1, Flamebait)
Re:Gee Only 16MB? (Score:2, Funny)
Grumpy
Trouble is, I can't decide if it should be +1 or -1.
Re:Gee Only 16MB? (Score:2, Insightful)
Meta-Moderators do your job.
Enjoy.
About this particular subject? I feel funny! (Score:3, Insightful)
For example, if your embedded device runs embedded Windows, I don't really see the problem. On the other hand, a Windows GUI app really can't be ported to the vast majority of embedded devices out there.
Speaking of embedded Windows, the subsystem is going to affect whatever it is you write.
Given that you are talking about a "next set of managers", I figure it isn't really a flying leap to consider the possibility that you don't really have a specific embedded device in mind, or perhaps your people have looked at a WinCE device and said "Hey these things pretty much come with 16MB standard nowadays, wouldn't it be cool if we could get our application onto one of these babies!"
The fact is that your app isn't going to be occupying that device alone, so look at how the rest of the programming treats the device.
I'm bored now, so bye!
Take exactly what you need (Score:2, Insightful)
If you don't, there's no good way to avoid dynamic allocation. The best you can do is allocate a certain amount statically and try to get by with that. If you need more, you have to throw an error. If you need less, you're a memory hog - not a good thing in an embedded program.
Dynamic memory management isn't so scary in C++. If you absolutely have to use C, God be with you.
Re:Cluestick (Score:1)
Given the C code above, you have to search and replace on malloc and free if you want to tinker with allocation. Very bad.
Second, a C++ programmer would not write that C++ code. He or she would use std::auto_ptr or boost::scoped_ptr instead of calling delete.
Re:Cluestick (Score:1)
Re:Cluestick (Score:1)
void process_file(const string& filename)
{
    boost::scoped_ptr<DataFile> file(open_data_file(filename));
    // ... use *file; the DataFile is deleted automatically on every exit path
}
Re:Cluestick (Score:2)
Oh yeah, but in reality... (Score:1)
But in C++:
See the difference? Same profit much sooner and someone even does the cleaning up for you.
Re:Oh yeah, but in reality... (Score:1)
It's easy to build a C++ system with what looks like nice clean code. And it works fine when you run it in your debug environment, usually as an emulation in Windows. Then you build for target and it fails 1% of the time because of some subtlety like exceptions being thrown in a constructor in a weird error case or something. And here you have no JTAG and hence no debugging, so the only thing you have is printfs to the serial port. Which alter the timing enough to hide the bug.
Of course, big
I dont care, but.... (Score:2)
I had a dumb-as-shit problem with the RCA (real crappy audio) Lyra. I had some non-English file names, which every OS handles, as does any USB media.
Unlike every other MP3 player, this one CRASHES on reading the filename. I called RCA's helpline about getting it not to crash. They said "Take it back, we don't know how to help with that".
Re:I dont care, but.... (Score:1)
Re:I dont care, but.... (Score:2)
As to your point about renaming the files: the device bootstrapped and scanned all the mp3s and wmas. While booting, it hit the bad names and crashed.
The USB disk was only accessible AFTER the RCA Lyra finished booting.
So, no renaming for you.
Other than crashing on a "bad file name", it wasn't that bad, but it had no memory slot (I then got one with a memcard slot).
Re:I dont care, but.... (Score:1)
what i would do (Score:4, Insightful)
* install a malloc debug library on all test boxes to find all unfreed, double freed, or overwritten chunks. don't ship until all mem is accounted for.
* don't allow, or at least discourage, "little" mallocs like strdup() or allocation of singleton structs.
* do something sensible when malloc starts returning NULL. don't just seg fault or abort.
* destruction test: malloc 15.5MB (and incr until failure) of space at the start just to establish the limit, then back it down 50K and see how long your embedded app can live in that space and watch how it dies. as you fix things, expect it to die in different places each time.
* nm (or elfdump these days) your c lib and look for library functions that link to malloc or calloc. if your embedded OS ships with source, you can just grep for it.
* if your app has a log file, log the value of sbrk(0) (e.g. printf("%ld\n", (long)sbrk(0))) every few minutes (or more often as appropriate) so you can watch for unexpected growth over time and spikes related to usage.
just some ideas.
Re:what i would do (Score:2)
You have no IDEA how hard that is to debug. I have been working on something which has two allocators and am trying to work out which one to use for a case - when I find that a guy has used strdup() completely freely, without thinking about whether that codepath needs shared memory or the local process heap. And sadly he got it right, so I have to write yet another 200+ line function duplicating the fun
Black Art (Score:2)
That said, the first article linked to gives some good recommendations on memory allocation. But I would go one step further and say that coding the app using malloc then analyzing the behavior is your best bet.
For example, code the application using just malloc but tag each block of memory with an identifier so you can figure out where the memory was allocated from. Next, wrap the malloc/free calls with timing code that
speed (Score:1)
I think in many situations you have no choice (Score:4, Informative)
I never personally saw it fragment to the point of failure, but another engineer said he had had to debug that situation on this system- a particular sequence of allocation did this once (in many years).
This problem was solved by using a different memory allocator. That was a rare problem on a huge, long lived project.
Overall, I wouldn't sweat it too much: fragmentation causing the memory allocator to fail is rare enough, and there are things you can do to solve the problem if it does occur. But you'll need a guru to solve it if it does happen.
All about the frags (Score:5, Insightful)
Re:All about the frags (Score:1)
I don't know, but here's a guess (Score:2)
Take a step back, try something else (Score:4, Informative)
Why would you do that? There's nothing about C++ that rules it out on embedded devices. I smell a bad vendor toolchain. Rule no.1 for sane embedded device development: USE THE GNU TOOLCHAIN. Then you can use C++ and half your porting task is gone. You can use the same toolchain with an x86 target for testing on PCs. You wouldn't happen to be using Green Hills, or god forbid Tasking would you?
So far I have resisted malloc/free use but it gets tedious having the same argument with the next set of managers to take an interest in the project.
Removing dynamic memory management is a noble goal but it goes deep into your coding style, to the extent that you basically end up with forked code instead of portable code - one version for each target inside big #ifs. One nice alternative I've used in the past is a stack separate from the call stack. You can either allocate a fixed-size stack using malloc (say, 4MB for a task), or from a fixed location such as on-chip RAM. Allocations are strictly last-alloc'd-first-free'd, which actually fits most usage patterns. The key advantage is you can throw away the entire pool in a single call (pop all), for example when an error occurs, or if you run out of memory. This makes error handling much, much simpler than having a ton of deletes depending on how far you got. It also makes allocation overhead extremely small - it's basically just pointer arithmetic. It's just a souped-up "alloca", where the stack isn't the function call stack, so it doesn't go away when the function does.
Best of all, if you use this method you can have a non-embedded version of the "stack" allocator which just uses malloc instead. I've got an example app which does no dynamic memory allocation linked from my profile (it's a Vorbis decoder).
No definitive answer... (Score:2)
You're asking if there's a definitive answer on whether your particular application should use dynamic memory allocation?
Apparently not. If you don't have
To Dynamically Alloc or Not (Score:5, Informative)
To answer that you need to ask questions like:
Will the app be long-running? If yes, you probably don't want a scheme that will fragment memory, because eventually your heap will be too fragmented to allocate the necessary contiguous blocks and it will have to reset.
Does the app need to allocate quickly? If yes, then you'll want to avoid allocating at all. Size your buffers ahead of time and never allocate dynamically. Also note that this is not to be confused with hard real-time requirements. Many real-time applications need only bounded time on operations, so an ordinary allocator would perform very well on average and have known maximums because of the bounded initial heap size.
Do you have to supply your own dynamic allocator? If there is an allocator available on the system, that may be the best route.
What are the patterns of your allocations? Does the app allocate many small chunks, or a mixture of small and large; are they long lived or short lived? Allocation sizes affect the performance and memory usage of dynamic allocators.
I'll stop now because I can't think of any other good reasons off-hand. Static allocation is the best thing for many reasons - easy to use, easy to analyze, very fast. That's not practical for all applications, so my advice is to go with a simple allocator next - for C, malloc is good because everyone should know how to use it. Don't worry about speed or efficiency too much - performance usually isn't a problem. Look up "dlmalloc" (Doug Lea's malloc). It's a good public-domain allocator that looks like malloc and works very well allocating small chunks.
Malloc isn't a great fit if your app is constantly running out of space. In that case, find a good garbage collector or memory reordering scheme. Several garbage collectors make life easy by making allocations fast and solving the problem of freeing memory.
Remember, don't pick an allocation scheme based on the problems you *think* you'll have. Pick an allocator whose major benefit matches your major issue.
more please (Score:2)
Static, if you need reliability (Score:4, Informative)
Yes, it is possible to calculate your memory needs using dynamic allocation and deallocation, but it is a lot harder to prove and a lot easier to make a mistake. If you really can reliably put an upper bound on the amount of memory your app uses, then there is usually no need for dynamic allocation in the first place. If you don't care about predicting your memory usage or malloc execution time, then why are you even asking the question? Just go with whatever is easier. However, consider that some extra effort now will pay off in the long run.
You have to let go of your assumptions from the non-embedded world. The natural instinct is to save as much memory as possible, since your memory is so limited. Your undergrad algorithms class taught you that dynamic allocation is the way to have the lowest possible memory usage at any given time. However, in an embedded application, unallocated memory is just as wasted as extra memory allocated in a static buffer, but in the latter you always know it is available as soon as you need it.
Safety Critical (Score:1)
1) A lot of clients in the embedded aviation world that still use C like to follow SaferC and MISRA guidelines. SaferC basically says dynamic memory allocation can be bad, but not using it when needed is worse. The point is that a lot of modern algorithms and designs rely on dynamic allocation. If this applies to you, then you will have more problems as a result of trying to fit them into a static allocation methodology than you would have had by using dynamic memory in the first place.
2) Lockheed
I've been bitten by malloc in embedded systems (Score:1)
One workaround is to only use malloc when the system boots or maybe when a port opens (free when the port closes). The benefit is you can guarantee you cover the case during testing. For buffers and messages malloc at system boot a fixed number of buffers of a fixed size. Put them on a simple free list. Use them by pulling one off the free list and
Re:I've been bitten by malloc in embedded systems (Score:1)
malloc and glibc (Score:3, Informative)
1. Does the CPU have an MMU?
2. What OS?
3. Single or multi-tasking?
Without knowing more no one can give you a firm answer.
My feeling... (Score:2)
Easy and not easy ... (Score:2)
But the development is easily approached.
Instead of writing x = malloc(sizeof(*x)); you simply do: x = new_X();
So, you write helper functions that "allocate" you a bunch of memory when you need it. The helper functions of course use dynamically allocated memory and do nothing else but call malloc(). For every allocating function like new_X() you write a deallocation function delete_X() as well; that one obviously only calls free().
So you get a "running" system
Meager advice (Score:2)
* The smaller your subset of C++, the more portable. You can make a perfectly functional C++ app without using operator overloading, RTTI, exceptions, even virtual functions.
* The smaller your subset of C++, the fewer bugs you'll blame on the compiler that will turn out to be your misinterpretation of the language (this applies unless you're Stroustrup, or maybe the author of more than one C++ compiler)
* Stack allocations a