How Do You Deal w/ "Heisenbugs"?
horos1 asks: "I was wondering how people out there deal with 'Heisenbugs': bugs that have no logical, programmatic cause (mostly from C and C++ programs and especially threaded C/C++ programs), that may change or disappear if you modify the state of the program.
For example: we have a multi-threaded C++ app which cores in about 5 places, at a different memory address each time, and the crashes disappear if we turn threading off. They seem to be caused by a memory overrun error, but this too is exceedingly hard to track down with tools like Purify, because they tend to give several 'false positives' on memory errors, as well as core when we link with certain libraries. Anyway, this is getting very annoying... Any help with this, as well as pointers on how to deal with bugs like this, would be greatly appreciated."
Thread safety (Score:3)
Mutexes
Critical Sections
printf() statements
Breakpoints
Logging
Also keep in mind that many C and C++ built-ins are not necessarily thread-safe.
Dancin Santa
Initialization. (Score:4)
Rig macro wrappers for malloc() and calloc() and the like that initialize new memory to 0xdeadbeef if you have a memory debug flag active. This should make your program crash more abruptly when it starts doing something questionable.
This won't help much for finding the race condition or non-thread-safe code causing the problem, but it may give you some idea of what's being stomped, and make the heisenbug more predictable.
when I bend my arm like this... (Score:5)
Patient: "Doctor, when I write multi-threaded programs in C++ they dump core all over the place and I don't understand why!"
Doctor: "Don't do that."
I know it sounds glib, but it really is the heart of the matter. Writing proper multi-threaded programs is difficult and takes a fair amount of skill. If you or your programming staff don't have the skill to do it, you are better off sticking to something less challenging. This isn't meant as a put-down, either: there are plenty of simple solutions to problems that, at first blush, may look like they require threads (or dynamic memory allocation, or any of a number of other complex and error-prone tactics). You could replace your multiple threads with a polling loop and a switch statement, or spawn completely separate processes and communicate through pipes (you might be surprised how much performance you can get out of either solution compared to the threaded code).
If you feel that you really must use the cool threaded code (or a complex, dynamically allocated data structure, or whatever), then you may need to dedicate a week or two to a careful re-examination of the code and the accompanying re-implementation to eliminate race conditions, memory leaks, mutual exclusion errors, etc. While there are a lot of cool debugging tools out there that can help you find some kinds of errors, there is really no substitute for a deep and thorough understanding of your code. You can either spend the time understanding the mess you have, or try replacing it with something less complex and easier to maintain (but, maybe, harder to extend/scale/etc.).
Here are some of my favorite high-tech problems with low-tech solutions:
Here are a few tips: (Score:4)
Debugging tips
0. If you are running linux: apply the patch that causes the thread that had the segfault to dump core. The default Linux semantics (under 2.2.x at least) are for the threads to exit STARTING with the one that had the problem. Then, the LAST thread dumps its program counter / stack info into your core file. The result: you get what look like "random" crashes when really they aren't very random.
There is a patch which fixes this behavior. Try this patch [dynhost.com].
1. See if you can get your program to crash in a debugger or dump core. I presume from your comments that you already are.
Record the places that you get crashes. Each time anyone gets a crash, have them record where, as best as they can.
2. Try to figure out what's getting overwritten, even if it's not clear when or how.
3. Try to increase the frequency of the crash (e.g. by running on an SMP machine). This usually provides people with more incentive and a better chance to test if any given change really fixed things.
4. The next few hints fall into the category of "reduce the problem code". It sounds to me like you don't have a good feeling as to where the problem is happening. Try to eliminate sections of the code by any means necessary. Examples include one big lock to force serialization, test programs that only exercise certain modules of your code, etc. I know there is often a temptation to just jump in, but some extra scaffolding to narrow down the possible problems is almost always valuable on hard debugging problems.
Reduce the amount of data shared between threads. We are using a message passing interface, where each thread more or less has its own data. This has been a big win. We often copy data before passing on to the next thread, just to make sure.
5. Understand what is and isn't thread-safe in your libs. For example, did you know that it is almost impossible to make the C++ std::string thread-safe without changing the implementation? That's because the implementation is copy-on-write. So, even when you think you're not sharing any data between threads, you are....
I hope some of these help. Threading problems aren't easy, particularly in C++...
Use a memory profiler (Score:3)
Now, one answer to this problem is to use smart pointers and automatic garbage collection. That won't help array boundaries, but you can use an equivalent wrapper for arrays. It's not a bad practice to get into for large-scale C++ development.
Another solution, and one I have found to be *incredibly* useful, is to use a memory profiler. There are loads of memory profilers on the market today. Visual C++ has one included, the GNU Foundation has one [gnu.org], and loads of commercial companies offer profilers. Most of the commercial ones have nice functions, such as pretty (and useful) graphical displays and the ability to profile code that has already been compiled. I remember profiling Solaris (the OS itself) back in the early 90s, finding loads of memory leaks and memory management problems.
Even though I'm not a betting man, I would lay money that if you rid yourself of memory management problems, your code will no longer contain these "Heisenbugs".
Preventing and Detecting Bugs (Score:3)
Consider using something other than C/C++.
You can never be too paranoid. Check all passed parameters for validity. Do range checks on indices. Verify that a pointer to foo actually points to an object of type foo.
Don't use dynamic memory allocation.
Have a recovery strategy for resource exhaustion.
Check for things that "can't happen".
For every global data structure, you should be able to say which procedures access/modify it, and under what conditions.
Read some good books on real-time and concurrent programming.
Re:Use a memory profiler (Score:4)
"Heisenbugs", as you call them, are almost always the result of memory management bugs.
Absolutely. In about 75% of the cases I've seen, they were from clobbering something on the stack. In one case I built malloc() and free() wrappers and preceded every array or memory reference with a (#ifdef DEBUG) check to make sure the index was in bounds. I found dozens and dozens of cases where the indexes went out of bounds.
One of my favorites is code sort of like this:
main()
{
    int x[100];
    int i;

    for ( i = 0; i != 101; i++ ) {
        x[i] = i * i;
    }
}
Where overflowing the array steps on the loop index.
Another case I saw that my team chased off and on for weeks was one where we didn't initialize one field of a time-related structure.
It probably doesn't impact anyone these days, but we spent an hour one day stripping some 16-bit Windows or DOS code down to just a dang printf("Hello world."); and it cored inside the printf(). Finally we noticed that there were a lot of large arrays declared locally in main(), so the stack was almost completely used up. The next function call would core no matter what.
Can't happen, and understanding. (Score:4)
I have a lot of can't-happen checks. They never, or only rarely, trigger (anymore). I also have to maintain code written by someone less paranoid than I am about buggy hardware: for every 100 can't-happen checks that triggered on what should only have been hardware errors, I've fixed 110 software bugs. This is on new, untested hardware where I have found hardware bugs by other means.
I've also fixed several crashes because I knew the code well enough to know where it should be and what it should be doing. Once I proved it wasn't in that state, I had to figure out why not, and from there the fix was easy. Unfortunately, figuring out why it was in the wrong state is hard.
Duplicatable problems are easy to fix. If you can crash in one of 5 places, then splatter printf's all over those areas. Consider writing your own printf which just writes to memory, not the screen, so you don't block. Then when your program crashes you can pull that memory from the core file and you know where each function was last. Just knowing which function each thread was in last is a big clue.
Finally, code inspections are a must. Get some good programmers who have never seen that section of code and have them inspect it. If nothing else, it will ensure that your comments are meaningful before the programmer quits.
Thread-safe libraries (Score:1)
My own approach, at least in C, matches the suggestion of a previous post: a polling loop and a switch statement (or equivalent). Breaking code up into appropriate blocks to handle the underlying events (I got a packet; I got a button click; I got something from a pipe) can certainly yield some ugly code. But it does let you guarantee that critical sections are handled properly, and any code that invokes library routines you can't verify as thread-safe is a critical section.
I wouldn't want to try this approach if there are a lot of threads (dozens, hundreds) involved.
There are no heisenbugs. (Score:4)
My two favorite examples of past debugging:
My one biggest debugging trick... (Score:3)
Quite seriously. There are lots of debugging things you could do where your first thought is "Ugh, that would take me forever." True, it might. But if you start now, you'll be done that much sooner.
Consistency, too.
The worst thing in the world for debugging is to sit back, go "Hmmmm, I wonder if this is it?", throw in a debug statement and see what happens. That's what you do when you don't really want to find the problem (I used to do that in the old days when I knew that firing up the system took 10 minutes each time). Be thorough and consistent. If you know that your program crashes in one of 12 methods, don't assume it happens in method foo() and only breadcrumb that one -- sit down and do them all. You can always take it out later.
(I have something of a reputation on my team that after a bug has stumped the developers for days, I am given it and I solve it in hours. Normally the first thing I do is turn debugging statements on for every component. Sure, it takes me a little while to do it, and when I'm done I get megs of output, but I almost always find the problem straightaway, too.)
Allocation & testing (Score:3)
Built-in test code that periodically checks structures to make sure the values in them make sense is also very useful, and easy to do in C++.
Also, having a few "signature" fields in your structures (essentially, unused data items that are initialized to a specific value and never changed) can help locate tough-to-find memory overwrites. Just check the signature field of a structure before using the other data in that structure. If the signature isn't right, the memory has been overwritten.
It can also be helpful to have a pointer check routine that looks at the value of a pointer and determines if it is legal for the platform you are on. For example, checking to make sure it falls in a range where the hardware actually has memory, or making sure it doesn't overlap system-reserved areas.
Re:Another solution (Score:1)
Or look at Ada 95 [adapower.com] next time you need to design a piece of multithreaded software. Ada has threading capabilities built into the language, and has proper array datatypes, so there is no need to fiddle with pointers or worry about bounds checking.
Ada [cam.ac.uk] rights many of the wrongs of Pascal and has a strong, rich type system built in.
I am amazed that so many insist on staying away from modern languages that make it easy to find bugs at compile time. Oh well.
The GNAT [gnat.com] compiler is GPL'ed and available for Win32 and Linux and quite a number of other platforms.
Consider co-routines instead of threads (Score:2)
If you have a number of "parallel" calculations or procedures to perform, nothing beats a good implementation of co-routines, hunks of code that are called from a master scheduler or process loop. When done correctly, the response time of a co-routine implementation can be better than a multi-threaded implementation because the operating system never has to worry about context switching.
Now, that doesn't mean that threads don't have their place. In my programs, the only use for threads (or processes, for that matter) is real-time I/O monitoring, and I do the absolute minimum I can in each thread/process so as to avoid tromping on shared variables. Everything else is under the control of the main program.
The primary purpose of forking is to deal with human-time activities -- that's one reason some Web servers will fork a process to handle a request, so that each user sees some response early. Another good reason to fork is to reduce the number of file pointers that have to be monitored. (There has been discussion of this in a number of different places.)
Just remember that threading imposes an overhead that you can ill afford in some settings. Within embedded systems with real-time implications (single-CPU modems are a great example), the allocation of time is critical to balance the need to stream data against the absolute requirement to maintain synchronization with the modem line.
I believe you will also find that an appropriate co-routine implementation is much easier to debug, because the time relationships between the routines are strictly controlled by the routines themselves -- it is very easy to implement "critical path" segments without the use of semaphores or other tricks.
The down side? In real-time applications, the key to successfully using co-routines is to design each co-routine to do a useful amount of work without hogging the CPU and still relinquish control often enough to let other time-critical routines run appropriately. Judicious use of event trigger flags set by interrupts and periodically tested by your co-routines can ease this task. The same can be done in Unix applications with judicious implementation of signal-based I/O.
One aspect of co-routine implementation coupled with the correct use of select() timeouts is that it becomes very simple to implement an "I'm-sane" indicator in the user interface, reassuring your user that just because nothing appears to be happening doesn't mean the program is napping on the job.