Pros and Cons of Garbage Collection?
ers asks: "Most new programming languages are using garbage collection, rather than programmer-controlled memory management. The advantages are obvious: programmers no longer have to worry about forgetting to delete allocated memory, leading to far fewer memory leaks. The disadvantages are often glossed over by programming language designers - aside from the performance issues, predictable memory management can be used for controlling access to files and similar resources, creating safer thread-locking code, and even providing better error messages. Some programming languages that usually use predictable memory management can also be made to behave as if they were garbage collected - for example, Boost provides various C++ smart pointer classes. So, given the choice between garbage collection and manual memory management, which would you choose, and why? When using a manual memory management language, when do you consider the performance and syntactic overhead of faked garbage collection to be worthwhile?"
C++ basically has it right (Score:3, Insightful)
Of course, the C++ model is not perfect either. Lack of virtual and const constructors can be a nuisance (the workaround being the pimpl idiom and a shared pointer), and not being able to use shared pointers to functions without nasty syntactic hackery occasionally breaks the "stuff pretending to be a pointer" illusion. Still, the power it gives over the Java model is definitely worth the occasional bit of extra effort.
Then again, if you're coding some quick scripting hack rather than a proper program, who cares about memory allocation?
Re:C++ basically has it right (Score:3, Insightful)
Re:C++ basically has it right (Score:2)
Re:C++ basically has it right (Score:2)
Um... Just where do you think the method-local object references are stored ?-) And anyway, the decision on whether to store the fields (data) of an object on the stack or on the heap is completely mechanical: if you pass references to other functions, the obj
Re:C++ basically has it right (Score:2)
Re:C++ basically has it right (Score:2)
Re:C++ basically has it right (Score:2)
True. However, you can simply analyze the constructor too - it only needs to be done once per constructor. And you could recurse through all the methods the object gets passed to, although this might be less useful since the exact same calling sequence is unlikely to repeat often
Re:C++ basically has it right (Score:2)
or whether you want to allocate it on the heap: long-time allocation.
Disadvantage: this is in most cases class-based, so all objects of a given class are either value objects allocated on the stack, or "reference" objects allocated on the heap.
In Java the GC is a generational GC, so it automatically distinguishes between short- and long-time allocation by propagating young uncollected objects t
Re:C++ basically has it right (Score:3, Informative)
No, they're allocated "inline" with the variable (if there's one involved) or on the stack if they're part of an intermediate expression. The mantra of "struct => stack" is one which has confused many people in my experience. See http://www.pobox.com/~skeet/csharp/memory.html [pobox.com] for more details. (I'm only picky about this because of the confusion that this has caused.)
For limited-lifetime objects, C# actually has a built-in keyword. It's called using and it works al
Re:C++ basically has it right (Score:3, Insightful)
I have mixed feelings regarding garbage collection. Sure enough, when people are learning how to write programs, it's far better to use a language without garbage collection, so that one really has to understand what's happening. Also, having to manually keep track of your data can lead to cleaner code (I know one can wri
Re:C++ basically has it right (Score:4, Insightful)
On the contrary, the C++ model is basically correct for some applications.
A "proper program" is programmed in the appropriate language for the job. Sometimes this is a domain-specific language. Sometimes you need the close-to-the-metal-yet-still-maintainable-for-large-applications qualities that C++ provides. And sometimes you don't.
Very few people write web applications in C++, and for good reason. Web servers run at the speed of the network card, not the speed of the L1 cache. Pulling out extra cycles is pointless especially if you lose the maintainability that a general purpose language like C++ provides. And yet you wouldn't call many of these "quick scripting hacks".
Depends (Score:5, Insightful)
If you are trying to make something where performance is important, like a 3D game, then manage memory yourself. If you are making a simple business application where reliability and security are important, use garbage collection. If your program uses lots of RAM and you need every last drop, either find an expert at RAM management to squeeze out every last bit, or use garbage collection if your programmers are not so awesome.
And so on and so on...
Re:Depends (Score:3, Insightful)
When debugging a program with a leak (yes, garbage-collected programs have leaks too; they're just nastier, because a reference persisting somewhere doesn't look like a bug), if memory is program-managed, finding the leak is a deterministic process. You're guaranteed success in a well-defined and finite amount of time (the amount of time it t
Re:Depends (Score:3, Insightful)
Yes, garbage collected programs have leaks too, they're just na
Re:Depends (Score:2, Interesting)
No, the caller of your module creates a bug when they fail to free the object that you have clearly defined in the interface to be their responsibility. It's no different than any other violation of an interface condition. (If you don't clearly define your interfaces, then yes, you have of course created a bug.)
Are you saying that leaks are not a form of sloppy programmin
Re:Depends (Score:3, Insightful)
Re:Depends (Score:3, Insightful)
I said they become more difficult to debug because of garbage collection. They're certainly not caused by the garbage collection. They're caused (usually) by poor programming.
Garbage collection is a tool. It makes your job as a programmer easier, but it does not free you from the need to understand things like scope. Even though you don't have to worry about the mechanics of managing your memory, you still need
Re:Depends (Score:3, Interesting)
And try to pinpoint which of the hundred thousand totally unrelated functions has modified my data because it happens to use a bad pointer?
I had to debug a C program that started crashing after an unused variable declaration had been removed. The reason? A dangling pointer.
The program was compiled without any optimization, so the memory for the vari
Re:Depends (Score:2)
There are dozens of simple rules you can follow when you write C code, any one of which would have prevented that problem. Either way, having your memory managed for you doesn't imply that you don't have access to the raw data anyway. Protection is only enforced in some languages.
Re:Depends (Score:5, Insightful)
It depends on what you are trying to make, duh.
Agreed.
If you are trying to make something where performance is important, like a 3d game, then manage memory yourself.
It's not that simple.
In most cases, the total run-time cost of garbage collection is lower than that of malloc/free memory management, at the cost of higher on-average memory usage (which can obviously destroy performance if you end up having to swap). On the other hand, application-tuned manual memory management using pooled allocation is generally faster than GC. Whether or not pooled allocation increases memory usage as much or more than GC depends on many things. Another consideration is that although GC often consumes fewer total CPU cycles than malloc/free, non-incremental collectors tend to use those cycles in big batches, which can produce GC 'pauses'. That's bad for some applications. Incremental collectors can minimize this effect, but only with some cost in CPU cycles.
Then there's also the whole issue of the effect of different approaches on the multi-tiered memory caching in modern systems.
In short: yes it depends on what you're trying to make. No, it's not nearly as simple an analysis as you describe.
Not only that, in practice other constraints usually dictate the choice anyway. Using GC generally means using something like Java, C#, Python, etc. rather than C or C++, which brings in a whole raft of other considerations, many of them more important than the memory management discussion. Platform, target environment and libraries will often dictate language selection, which will dictate much of memory management approach.
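To make the pooled-allocation point above concrete, here is a minimal free-list pool in C++. This is a sketch of the technique, not code from any poster; the class name and layout are my own:

```cpp
// Minimal fixed-type free-list pool: all blocks come from one
// preallocated slab, and freeing is an O(1) list push with no
// trip back to the general-purpose allocator.
#include <cassert>
#include <cstddef>
#include <vector>

template <typename T>
class FreeListPool {
    // Each slot is either live storage for a T or a free-list link.
    union Slot { Slot* next; alignas(T) unsigned char storage[sizeof(T)]; };
    std::vector<Slot> slots_;
    Slot* free_ = nullptr;
public:
    explicit FreeListPool(std::size_t n) : slots_(n) {
        // Thread every slot onto the free list up front.
        for (std::size_t i = 0; i + 1 < n; ++i) slots_[i].next = &slots_[i + 1];
        slots_[n - 1].next = nullptr;
        free_ = &slots_[0];
    }
    void* allocate() {             // O(1): pop the free-list head
        if (!free_) return nullptr;
        Slot* s = free_;
        free_ = s->next;
        return s->storage;
    }
    void deallocate(void* p) {     // O(1): push back onto the list
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_;
        free_ = s;
    }
};
```

A freed block is handed out again on the very next allocation, which is where the cache-locality benefit mentioned above comes from.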
Garbage collection efficiency overstated (Score:3, Interesting)
In addition, an array of objects on the heap requires only a single memory allocation in C or C++, whereas Java has to allocate and track each separately. As one luminary once said, "C++ is better because there i
Re:Garbage collection efficiency overstated (Score:5, Interesting)
No one (including the compiler) dynamically allocates objects in C/C++ when they can place them on the stack instead.
Are you certain of that? Here:
What would the compiler do? What *could* it do, if it were smarter? And have you really never seen any code that does this? Or written it?
Lots of C and C++ programs dynamically allocate many objects that could be stack allocated. In particular, many C++ objects that are placed on the stack immediately allocate storage on the heap. Think std::string. Many programmers do make an attempt to allocate as much on the stack as possible, but I think most don't really consider it. And keep in mind when I say this that I've been writing C and C++ (mostly C++) professionally for nearly 15 years -- I've seen more than a little code.
Garbage collected languages like Java, on the other hand, require practically everything to be managed on the heap.
Interestingly, Java does *not* require that at all... it's just the most obvious way to implement it. In fact, I read a while back that the next generation of Java compilers will perform escape analysis, looking for objects whose lifetime is associated with a stack frame. Here's a link [ibm.com]. When they find such an object, it will be allocated on the stack. If such an object creates other objects, as long as the analysis can prove that their lifetimes are also frame-associated, they will also be allocated on the stack.
The same analysis will often allow Java objects and their sub-objects to be allocated as a single block. Since the compiler can see that the constructor of class Foo always allocates objects of Bar and Baz, all of fixed size, it can allocate a single block, just like a C++ compiler would be able to for a class like:
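The class example itself appears to have been lost in formatting; presumably it was something along these lines - a C++ class whose Bar and Baz sub-objects live inline, so a single allocation covers all three objects (this reconstruction is my guess at the intent):

```cpp
#include <cassert>
#include <cstddef>

struct Bar { int x; };
struct Baz { double y; };

// Because bar and baz are direct members rather than pointers,
// one allocation (or one stack slot) of Foo holds all three
// objects in a single contiguous block.
struct Foo {
    Bar bar;
    Baz baz;
};

static_assert(sizeof(Foo) >= sizeof(Bar) + sizeof(Baz),
              "one block holds all sub-objects");
```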
The same sort of analysis should also allow your other point to be addressed: An array of objects can be allocated as a single block. The compiler can recognize code like:
And allocate a single block that is n*(sizeof(Foo)+sizeof(Bar)+sizeof(Baz)) in size, and if 'f' has a stack-associated lifetime, allocate the whole pile on the stack.
All of the above is still theoretical, of course, but it's coming quickly.
That might be acceptable, but the worst part is random application pauses of arbitrary duration for garbage collection. Unless that problem can be resolved, garbage collected languages will always be a poor match for latency-sensitive applications, even where the net throughput is otherwise adequate.
As I pointed out in my previous post, whether or not that problem exists depends on the GC implementation. Incremental GCs keep the pauses small, and there are GCs designed for real-time usage that further guarantee maximum latencies. It's worth pointing out also that normal malloc() and free() implementations don't provide any run-time guarantees. Real-time code that uses a heap uses special versions that do provide guaranteed latencies, at the expense of worse average performance.
Re:Garbage collection efficiency overstated (Score:2)
Re:Depends (Score:2)
As you say, it all depends on context. (Score:2)
Transaction programs are event-driven entities, though, and they have very short lifetimes -- they are load
Re:Depends (Score:3, Insightful)
Because in many systems that employ GC, they try to free resources on background threads for performance. The problem is that a resource can be held way beyond what the developer expects, and suddenly they get faults happening in totally unrelated sections of code. I've seen it a million times before, and I personally think that it's one of the biggest weaknesses of the CLR. When a function is done with a resource, clean it up right then and there.
Re:Depends (Score:2)
Re:Depends (Score:3, Insightful)
Not at all. I was just citing the CLR as one example since it's fairly widely used. You'd also think with all that we've learned about GC on a background thread that Microsoft would have done something different for their new programming environment, but that wasn't the case.
I never heard of anyone having a GC-related debugging problem (as in real bug, not performance) for programs written in one of those languages.
Do t
Mainly GC but sometimes... (Score:3, Interesting)
Re:Mainly GC but sometimes... (Score:2)
For the most part, GC for memory is a good thing. (I do business apps, so the immediate performance of memory typically isn't a problem.) But why couldn't they give us a default "going-out-of-scope" method? I love the whole C++ constructor/destructor idiom because it makes using the native classes for resource management a breeze. Want a class to wrap a file handle? Sure, no problem, we'
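In C++ terms, the file-handle wrapper this comment starts to describe takes only a few lines; here is a hedged sketch (the class name and details are illustrative, not from the thread):

```cpp
// RAII file wrapper: the destructor closes the handle the instant
// the object goes out of scope, with no finalizer uncertainty.
#include <cassert>
#include <cstdio>

class File {
    std::FILE* f_;
public:
    explicit File(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {}
    ~File() { if (f_) std::fclose(f_); }  // deterministic cleanup
    File(const File&) = delete;           // exactly one owner per handle
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
    bool ok() const { return f_ != nullptr; }
};
```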
Re:Mainly GC but sometimes... (Score:2)
They did.
protected void finalize()
Re:Mainly GC but sometimes... (Score:3, Informative)
Re:Mainly GC but sometimes... (Score:2)
Though, since GC is not guaranteed, finalize() is not guaranteed to be called either (like a C++ destructor is).
Re:Mainly GC but sometimes... (Score:2)
In theory it might not be. In reality, considering the rate at which most Java programs I've seen create new objects, it is (except for objects created just before the program exits) ;).
Re:Mainly GC but sometimes... (Score:2)
Re:Mainly GC but sometimes... (Score:2)
That would be the "IDisposable" interface and the "using" statement in C#.
Re:Mainly GC but sometimes... (Score:2)
Re:Mainly GC but sometimes... (Score:2)
I wish Java had that, even if I didn't get to clean up memory with it. If only so I could use it to close files that go out of scope. Even if I had to declare the vari
Re:Mainly GC but sometimes... (Score:3, Insightful)
They have. Visual Studio 2005 adds syntax to Managed C++ (C++/CLI) to allow you to manage your lifetime and memory separately. Herb Sutter has been talking about this for at least a year IIRC. Dinkumware even made the STL work with it.
See for instance this article [codeguru.com]. I'm not currently developing on .Net, but I'm hoping that these extensions can be considered at some point for standard C++.
Re:Mainly GC but sometimes... (Score:2)
What "performance issues"? (Score:5, Insightful)
Re:What "performance issues"? (Score:2)
Re:What "performance issues"? (Score:5, Insightful)
Re:What "performance issues"? (Score:2)
Re:What "performance issues"? (Score:3, Insightful)
Find the middle ground (Score:2)
A lot of posts in this discussion almost imply that there is 100% manual memory management, or some sort of super-generational-buzzwordy-GC, and nothing in between. That simply isn't the case.
I write C++ for a living. I work with intricate, graph-like data structures, using performance-sensitive algorithms, with pointers all over the place. And yet I can't remember the last time I had to use the delete operator, nor any sort of super-ref-coun
not true (Score:3, Funny)
Re:What "performance issues"? (Score:3, Insightful)
The reason is that most programmers tend to not realize that the free() operation actually takes up a decent amount of CPU cycles, and when you're freeing a bunch of little things all over the place, the overhead tends to add up.
This depends entirely on the underlying memory manager. Using pooled allocation or other "zone-based" allocators can obviate the hit of these frees. As with many things, it's a tradeoff between the time spent putting a block back on its free list (naive implementation) to storin
Pros and cons (Score:5, Insightful)
Re:Pros and cons (Score:5, Insightful)
I've always thought that the use of the term "memory leak" to describe resource management problems in Java is a really poor choice, as it's quite a different problem from a memory leak in (say) C.
Keeping memory allocated and referenced for longer than you need it isn't really a leak, to my mind. It's just bad programming. To me, a memory leak is when you lose the pointers to a piece of allocated memory, so the code is no longer able to deallocate it.
In other words, your developers might give a better answer if you ask "Are there objects you keep around longer than necessary?", rather than "Are there memory leaks?"
Or maybe I'm the only one.
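The distinction can be shown concretely in C++ (all names here are illustrative): a true leak loses the last pointer to a block, while the Java-style problem is a live reference kept longer than necessary:

```cpp
#include <cassert>
#include <memory>
#include <vector>

std::vector<std::shared_ptr<int>> cache;   // a long-lived container

// The "Java-style leak": the object outlives its usefulness because
// a reference persists in the cache, but it is still reachable and
// clearing the cache would free it. Bad programming, not a leak.
std::weak_ptr<int> retained_not_leaked() {
    auto obj = std::make_shared<int>(42);
    cache.push_back(obj);
    return obj;
}

// A true leak in the C sense: the last pointer to the first block
// is overwritten, so nothing can ever free it.
void true_leak() {
    int* p = new int(7);
    p = new int(8);   // first allocation is now unreachable: a real leak
    delete p;         // only the second block can still be freed
}
```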
Re:Pros and cons (Score:2)
Beware, for by openly objecting to this usage, you open yourself to the Real Java Programmer for characterization as an old-school programmer (the bad, bit-flipping kind) or worse, a n00b in need of a lecture. The best approach, rather, is
Re:Pros and cons (Score:2)
Re:Pros and cons (Score:2)
So those objects in your program are garbage, and you do have a memory leak.
People keep confusing concepts (Score:2)
It's not just you. A lot of people in this discussion are confusing fundamentally different concepts whose implementations often happen to coincide.
In particular, whether or not something is ever cleaned up is different from whether or not it is cleaned up promptly. Also, releasing memory is not the same as destroying/finalising an object that happens to be stored in that memory.
Garbage collection addresses exactly one of the four possible combinations: making sure that memory is always released.
The m
Re:Pros and cons (Score:2)
Re:Pros and cons (Score:2)
If you can, your garbage collector is broken. The whole point of garbage collection is that it reclaims the memory used by objects that can no longer be accessed. If your collector is doing its job, your program can't leak memory.
Re:Pros and cons (Score:2)
Re:Pros and cons (Score:2)
The article you link to provides a nice example of how to write incredibly bad software. Instead of fixing the problem at its root by put
Re:Pros and cons (Score:4, Insightful)
Reference-counted garbage collection models are inherently flawed. Leaks are harder to find and easier to provoke. You might as well not have them if you've got to "delete" the references to the other objects.
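The classic failure mode is a reference cycle. A minimal C++ illustration using std::shared_ptr, the standard reference-counted smart pointer (names are mine):

```cpp
#include <cassert>
#include <memory>

struct Node {
    std::shared_ptr<Node> next;
};

// Builds a two-node cycle and drops the external handles. Each node
// keeps the other's use count at one, so neither destructor ever runs.
// The weak_ptr lets the caller observe that nothing was freed.
std::weak_ptr<Node> make_cycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;   // cycle: counts can never reach zero
    return a;      // a and b go out of scope here, but the cycle remains
}
```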
Modern garbage collection algorithms do not have this sort of problem.
What bothers me about garbage collection is that it only solves part of the problem: memory is not the only resource your application holds onto, and the kludges you have to make to deal with them in garbage collected languages are just annoying (hey, you don't have to worry about cleaning up after an allocation
Re:Pros and cons (Score:2)
Could you elaborate on that a bit? I'm too tired right now to think of any cases that cannot be easily dealt with.
Garbage Collection vs. Manual Memory Management (Score:2, Interesting)
But then, the question is rather amb
It all depends (Score:3, Interesting)
GC v. Direct allocation (Score:2, Insightful)
When I programmed professionally, I craved the control of memory management. Objects did _exactly_ what they were _explicitly_ told to do.
Now I'm a ruby junkie, and love the OO, GC, Etc.
Still, yes, for performance reasons, there are good reasons to do it yourself.
For programming reasons, there are reasons to go GC.
All in all, GC tends to be great. I wouldn't work without it. But there are times I'm mystified as to why an object left
Re:GC v. Direct allocation (Score:2)
Short-run applications don't necessarily need any memory management - since the application is going to exit shortly, you can just leak the memory and let the kernel reclaim it on exit. Might actually be faster that way than wasting time freeing memory that's going to be reclaimed soon anyway.
After all, if the application is short-run enough, the GC doesn't necessarily have time to run even once.
Check this out. (Score:3, Informative)
It's definitely worth checking out before people go spouting off the traditional rants against garbage collection.
Of course, determining which one is best always depends on your application and your available resources, among other things. There are good arguments for both in various situations. I code C++ for embedded devices for a living, which means that I am working with the new/delete/malloc/free model, but for school projects I really like to work with Java, because it lets me focus entirely on implementing an algorithm without having to spend any time thinking about memory allocation or the underlying hardware.
Getting it backwards (Score:2)
aside from the performance issues, predictable memory management can be used for controlling access to files and similar resources, creating safer thread locking code and even providing better error messages.
This is silly. None of these have any connection to garbage collection; you can write "destructors" in a garbage-collected language, and do everything in them just as you would have in a non-GC language.
The advantage comes from the RAII style of coding, not from the absence of a garbage collector.
Re:Getting it backwards (Score:3, Interesting)
You mean like Java's Object.finalize()?
The same one that causes significant performance problems fundamental to how GCs work, and is not guaranteed to execute in any specific order, or even at all?
C++ and others.... (Score:5, Interesting)
C++ programmers should be making very little use of new and delete, though; they should be using smart pointers. I think the article poster misunderstands smart pointers. boost::shared_ptr is a reference counted pointer, but std::auto_ptr and boost::scoped_ptr have nothing to do with garbage collection - they certainly aren't "faked garbage collection" and they certainly aren't unpredictable. They use C++'s object scoping and copying mechanisms to manage memory in a way completely unlike garbage collection. scoped_ptr is the simplest and most predictable memory management tool of all. Taking programmer error into account, it's more predictable than using delete. Even shared_ptr is predictable; when the reference count falls to zero, the object is immediately destroyed, not just marked for destruction.
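The predictability claim is easy to demonstrate with the modern std::shared_ptr (the standardized descendant of boost::shared_ptr); this small sketch is illustrative only:

```cpp
#include <cassert>
#include <memory>

// A type whose destruction is observable: the destructor flips a flag
// the instant the last shared_ptr releases the object.
struct Tracked {
    bool* destroyed;
    explicit Tracked(bool* d) : destroyed(d) {}
    ~Tracked() { *destroyed = true; }
};
```

When the use count hits zero, the destructor runs right there, at a point you can read off the source code, rather than at some later collection pass.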
Sadly, although C++ is a very powerful language and can be used to write code with few errors, the language as used by beginners is as dangerous as C, perhaps even more dangerous. It takes programmers years to become proficient in all the methods and idioms that make C++ a usable language.
(I would love to see a language that allows programmers to choose scoped allocation, smart pointer heap allocation, or garbage-collected heap allocation, and uses types to avoid dangerous combinations such as garbage-collected objects pointing to scoped objects or an object pointing to an object in an unrelated scope. Every object would have two types - the object type (int, file, circle, etc.) and the memory management type (scoped with scope S1, scoped with scope S2, garbage-collected, etc.))
Re:C++ and others.... (Score:3, Interesting)
RIAA (Score:2)
Because the concept is more fundamental than merely "resource release is destruction", although the latter is arguably the most important aspect of it.
What you're doing in C++ is tying the period between allocation and release of a resource to the lifetime of an object. If you like, the resource-owning class's invariant conditions include the fact that the resource is allocated
Re:C++ and others.... (Score:2)
usually, I prefer GC (Score:2, Informative)
Most new programming languages are using garbage collection
You mean like Lisp and Smalltalk? ;-)
The advantages are obvious: programmers no longer have to worry about forgetting to delete allocated memory, leading to far fewer memory leaks.
In other words: the computer is perfectly capable of figuring out what to do, so let it! This is almost always the best thing.
When using a manual memory management language, when do you consider the performance and syntactic overhead of faked garbage collectio
GC (Score:5, Informative)
If you don't CONS, you never need to collect garbage. *rimshot*
More seriously, GC isn't so much about pros and cons, as it is about tradeoffs between the various GC algorithms: time vs. space, low-latency vs. high-throughput, parallelism, etc.
If you're designing a new language, it should include garbage collection, or nobody will use it (i.e., your target audience can already program in C). You may wish to have multiple GC implementations available for different purposes, perhaps to be selected at compile-time.
For a good overview of what's available, see http://www.memorymanagement.org/ [memorymanagement.org]
My personal favorite is the good old Cheney semi-space collector (and Ephemeral/Generational Garbage Collectors, which are more advanced versions designed to generally have low latency), as it is very straightforward (both to understand and to implement), compacting (it defragments memory, and can perhaps improve cache locality by grouping related objects), and it has high throughput (work is proportional to the amount of live data, not total data).
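For the curious, a toy Cheney collector can be sketched in surprisingly little code. This is a heavily simplified illustration (fixed-size two-slot cells, a single root, no real mutator interface), not production GC code:

```cpp
// Toy Cheney two-space copying collector. Live cells are evacuated
// from-space to to-space breadth-first, so work is proportional to
// live data only, and the survivors end up compacted together.
#include <cassert>
#include <cstddef>
#include <vector>

struct Cell {
    Cell* slot[2] = {nullptr, nullptr};  // references to other cells
    Cell* forward = nullptr;             // forwarding pointer during GC
};

class SemiSpaceHeap {
    std::vector<Cell> from_, to_;
    std::size_t next_ = 0;
public:
    explicit SemiSpaceHeap(std::size_t cap) : from_(cap), to_(cap) {}

    Cell* alloc() { return &from_.at(next_++); }  // bump allocation

    // Evacuate everything reachable from *root, then flip the spaces.
    void collect(Cell*& root) {
        std::size_t scan = 0, free = 0;
        root = copy(root, free);
        while (scan < free) {                     // Cheney's scan pointer
            Cell& c = to_[scan++];
            for (Cell*& s : c.slot)
                if (s) s = copy(s, free);
        }
        from_.swap(to_);
        for (std::size_t i = free; i < from_.size(); ++i)
            from_[i] = Cell{};                    // fresh allocation area
        next_ = free;
    }

    std::size_t live() const { return next_; }
private:
    Cell* copy(Cell* c, std::size_t& free) {
        if (c->forward) return c->forward;        // already evacuated
        to_[free] = *c;                           // compacting copy
        c->forward = &to_[free];
        return &to_[free++];
    }
};
```

Note that garbage is never touched at all: the third cell in the test below simply stays behind when the spaces flip.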
If memory usage is of more concern than fragmentation and throughput, a mark-sweep collector may be more your style.
There are also "real-time" (and "soft-real-time", i.e. bounded latency [see Henry Baker's Treadmill]) collectors, parallel collectors [including an interesting case for reference counting, usually considered a dog performance-wise, as a viable parallel/remote GC method], "conservative" collectors for C/C++ (see Hans-J Boehm's libgc), collectors for real and hypothetical computers with special hardware and/or OS support for GC features, and some collectors that are just plain weird.
Note also that garbage collection algorithms are considered hard to measure for performance, especially with regard to wall-time latency, so just because a paper(*) claims that a certain GC has certain performance characteristics, be sure to benchmark if it really matters.
(*) Did I mention papers? If you're serious about implementing GC, getting comfortable reading CS research papers is a must. The book "Garbage Collection" [kent.ac.uk] is your best friend here, as it provides a very good overview/survey of said papers and algorithms, and it discusses a lot of pros and cons between various algorithms, and useful variants or adaptations that have been applied to previously-published work.
Also check out Henry Baker's papers, because he is a memory management demigod: http://home.pipeline.com/~hbaker1/home.html [pipeline.com].
VM aware GC (Score:3, Interesting)
It needs a modification of the VM, but IMHO this is better than having to handtune the memory used by the GC. (Note: I'm not an expert in GC)
http://www.cs.umass.edu/~emery/pubs/04-16.pdf [umass.edu]
Cocoa and Objective-C (Score:3, Interesting)
Re:Cocoa and Objective-C (Score:2, Insightful)
RAII is a bad reason for manual memory management (Score:4, Insightful)
Re:RAII is a bad reason for manual memory manageme (Score:3, Insightful)
RAII-like techniques, GC, and closures (Score:2)
(Note: Those comments were indented properly, but Slashdot messed
Re:RAII-like techniques, GC, and closures (Score:2)
Java supports closures just fine, as objects. A java anonymous class is a closure. If that really offends you, try calling them multi-closures, since they combine multiple functions with a lexical scope instead of being limited to just one function.
In some languages you have a closure of just one f
Re:RAII-like techniques, GC, and closures (Score:2)
they combine multiple functions with a lexical scope instead of being limited to just one function.
I prefer to add such complication only when it is actually needed. In a typical functional language you can always group a bunch of closures into an aggregate if that's what you want.
Explicit management has its own costs (Score:5, Insightful)
The answer, as always, is "it depends". I'm firmly inside the "right tool for the job" camp.
Manual memory management is not free. In some circumstances, it can be quite expensive. There is a group of programmers who are best described as "rabidly anti-GC". These people are almost all completely unaware of the costs that manual memory management can impose on your code.
A multi-threaded program, for example, can allocate memory from any arena, but it MUST return a block to the arena from whence it came, which can cause all sorts of difficult lock contention problems, making free() much more expensive than malloc(). (Ask anyone who has written high-performance memory-intensive multi-threaded programs.)
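A sketch of that contention point in C++ (structure and names are illustrative only): each block must return to the arena that issued it, so every cross-thread free() takes that arena's lock:

```cpp
// Per-thread arena with a locked free list. alloc() is usually called
// by the owning thread, but free() may come from any thread, and every
// such free contends on this arena's mutex. With heavy producer/consumer
// traffic, that lock - not malloc() - becomes the hotspot.
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

class Arena {
    std::mutex m_;
    std::vector<void*> free_list_;   // assumes uniform block sizes, pool-style
    std::vector<char> storage_;
    std::size_t next_ = 0;
public:
    explicit Arena(std::size_t bytes) : storage_(bytes) {}

    void* alloc(std::size_t n) {
        std::lock_guard<std::mutex> g(m_);
        if (!free_list_.empty()) {           // reuse a returned block first
            void* p = free_list_.back();
            free_list_.pop_back();
            return p;
        }
        if (next_ + n > storage_.size()) return nullptr;
        void* p = &storage_[next_];          // otherwise bump-allocate
        next_ += n;
        return p;
    }

    // Must be called with a block from *this* arena, whichever thread
    // happens to be done with it - hence the cross-thread contention.
    void free(void* p) {
        std::lock_guard<std::mutex> g(m_);
        free_list_.push_back(p);
    }
};
```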
In some languages, like C, the situation is even worse. In structure-hungry programs, you can end up structuring your code around data lifetimes, which precludes you from using the most natural, maintainable and efficient algorithms. Garbage collection frees you from this, as the GCC people have discovered.
I do recommend reading Paul Wilson's excellent survey paper [utexas.edu] on the topic. It answers a lot of your questions, though it's by no means the final word.
Bad examples (Score:2)
GC is DRY (Score:2, Interesting)
I wonder how many of the people who use the "C++ model" bother to unit test that they have freed all their resources.
C has problems too (Score:3, Informative)
Re:C has problems too (Score:5, Insightful)
The weirdest thing is C++ programmers. They freak out about every single cycle, but modern C++ idioms push the use of smart pointers, which are usually quite slow compared to a good generational GC.
I went to a job interview . . . (Score:2)
I said "That would of course depend heavily on the project"
This got me the job, because I was the only person who didn't answer "Java" or "PHP" - a clear indicator that the prospective employee was either feeding you the line they'd gotten in CSCI 102 at the local university, or reacting against that line.
The same thing about garbage collection. Come on, if I'm writing a web applicat
There's more than one way to skin a cat. (Score:2, Informative)
garbage collection is often very nice... but I don't really mind the "lack" of garbage collection in C and especially don't miss it in C++.
My opinion is that it takes some effort on the programmer's part to learn to use C safely. I'm not sure why, but this answer seems to surprise some people. Do they seriously expect that in the real world -- of software or of anything else -- they should be able to pick up any tool they want and u
Personally... (Score:4, Funny)
Sure the hell beats me keeping the trash around, remembering where it is, and putting it in my truck and hauling it to the heaping landfill myself. I'm not here to manage trash, I'm here to get something done.
Is this post about programming?
False dichotomies (Score:4, Interesting)
For cases where static analysis can't do this automatically, it isn't that hard to use a design methodology that achieves the same result; it's certainly still much easier than doing manual allocation and deallocation and ensuring that the deallocation is done (or not done) correctly in all cases.
And if you are using a reference-counting GC, or a hybrid GC that includes reference-counting, you don't have to do anything special at all.
The same applies to the claimed mutex and error message disadvantages, since those are just specific uses of RAII.
Java GC != No leaks (Score:3, Insightful)
No doubt some reference is left in a persistent collection of some sort (hash, list, array, etc)
Just as C/C++ programmers must remember to free when done, so Java programmers must remember to undo such "life-maintaining" references when they are done.
Sam
Re:Java GC != No leaks (Score:4, Insightful)
Yes, but it is unlikely that somebody you know is trying to track down a Java double free error.
Refactoring and Program Evolution (Score:2)
So they get refactored. Classes get reused in unexpected places. References to objects are kept in places where it was not anticipated. Calling delete is now inappropriate at the old point, as it can't take the new references and the changed lifetime into account.
So the memory management needs to get refactored just because you "reuse" a class?
Simple example (controversial because it shows where GC leads to problems also
For some reason you implement a cache for a ce
Depends on the job. (Score:2)
But for longer-running programs which launch other programs - like root processes, server processes, and such - they might hang around long enough to run out of memory.
To me the crux is, how does a garbage collector itself allocate memory? Somewhere down the line something has to keep solid track of resources, GC is an option for many subsystems,
Re:Situational (Score:5, Insightful)
Wha? The evidence is against you. It's not the GC'ed languages that have buffer overflows, and that's the number one security flaw at the moment (though #2, "improperly escaped strings resulting in spilling across a boundary", i.e., XSS, SQL injection, etc. is coming up on it fast as more people use GC'ed languages).
If security is an issue, you want GC and automatic buffer management like Java, Python, Perl, what have you, not manual management and the resulting opportunities for misallocation like in C and C++.
(Yeah, yeah, if you program perfect C++ code it's possible to get it right. But I'm not talking theory, I'm talking about what happens in the real world, and in the real world, there seems to be quite a supply of less-than-perfect C/C++ programmers allocating buffers. You have to be on crack to argue otherwise.)
Memory Access vs. Memory Allocation Re:Situational (Score:2, Informative)
Re:Memory Access vs. Memory Allocation Re:Situatio (Score:2)
Not inherently. It is perfectly possible to write a GC implementation that stores data that can only be accessed in a certain scope on the stack, and frees it automatically and immediately when the scope exits. From what I've understood, Sun's upcoming JVM does just this.
Ga
Re:Memory Access vs. Memory Allocation Re:Situatio (Score:2)
This is why I mentioned I was talking real-world, not theory. I can conceive of a "safe" (out-of-scope data not accessible by any in-language construct) language that uses manual allocation. But I am aware of no such beast, which doesn't prove it doesn't exist but is pretty strong evidence that it's not very popular if it does.
And going back to the ori
Re:If C++ Memory Management (Score:2, Insightful)
Re:If C++ Memory Management (Score:2, Interesting)
Manual memory management is a control issue. Unchecked memory access is a matter of asceticism.
Buffer overruns happen because the devotion to performance and minimalism among a certain crowd is religious. Because of this, the C++ standards guys were terrified of encouraging the use of anything slower and safer than what a C programmer woul