Do Programmers Actually Use Assertions?

P.Chalin queries: "Do programmers actually use assertions (like the assert statement of various programming languages)? If so, what should be done when errors or exceptions are raised during the evaluation of an assertion? I am collecting opinions and stats via a short questionnaire. Thanks."
  • by dtfinch ( 661405 ) * on Monday March 28, 2005 @04:51PM (#12069434) Journal
    And to avoid putting my email in a survey: I use debugging messages as needed while debugging. If it's a web site, I'll also have it catch errors and forward them to my email. I don't really use assertions, except that I write a lot of code to deal with unusual input in a non-fatal fashion. I was an On Error Resume Next sort of guy back in the VB days.

    My philosophy has been that unless security issues are involved, failing on an error is not much better than not checking for an error at all, if the program crashes in both instances, as happens when you use an assert(). Ideally, either errors are handled in the least-fatal manner possible, or you develop the right abstractions to enable you to write error-free code.
    • by Phleg ( 523632 ) <stephen AT touset DOT org> on Monday March 28, 2005 @06:06PM (#12070426)
      Sorry, but this is an extremely naive way of coding, in my opinion. Errors occur for a reason; if you don't do something about them, you're likely to start experiencing unexpected problems. "Crash early, crash often" isn't a joke; it's far better to die gracefully while you still can than to ignore all errors and crash later, losing data.
      • It probably depends a lot on the type of software you write, and the language used. I certainly test all my code, inspect code for errors I might have missed while testing, and fix any errors as they are found.

        I've lost data to assert() checks in other people's software, over trivial things that could have been safely ignored. I've also recovered from crashed software by loading it in a debugger and skipping over the failed instruction. I've never lost data that would have been saved by a fatal assert().
        • by Metasquares ( 555685 ) <slashdot.metasquared@com> on Tuesday March 29, 2005 @12:23AM (#12073212) Homepage
          If an assertion can fail under normal circumstances in released code, assert is being misused. IMHO, the idea behind an assertion is to prevent violations of preconditions set down in the code from occurring. These preconditions do not cover runtime conditions, such as memory errors; that's the sort of thing that exception handling should be used for. An assertion indicates something that you are doing wrong rather than a problem with the environment.
          • There are preconditions, and there are preconditions. Then there are preconditions.

            There are preconditions which are dictated by the design: a priori conditions which, by design, should not be violated. It is easy to identify two broad subcases:

            There are preconditions which, when violated, will cause a catastrophic failure of the code which assumes them.

            There are preconditions which, when violated, may or may not cause a degraded mode of operation.

            I use conditionalized asserts liberally,
    • The value of an assert is that you can #define assert() to mean different things in different builds. In development mode, assert may halt the system and print an error. In release builds, assert can write an error file and continue without comment. Or just be defined as a no-op.
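      For instance, one common pattern looks roughly like this (DEV_BUILD, report_and_break, and log_to_error_file are made-up names standing in for whatever the project provides):

      /* Declared somewhere in the project's error-reporting code. */
      void report_and_break(const char *expr, const char *file, int line);
      void log_to_error_file(const char *expr, const char *file, int line);

      #if defined(DEV_BUILD)
      #define MY_ASSERT(cond) \
          do { if (!(cond)) report_and_break(#cond, __FILE__, __LINE__); } while (0)
      #elif defined(RELEASE_LOGGING)
      #define MY_ASSERT(cond) \
          do { if (!(cond)) log_to_error_file(#cond, __FILE__, __LINE__); } while (0)
      #else
      #define MY_ASSERT(cond) ((void)0) /* no-op: compiled out entirely */
      #endif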
    • If it's a web site, I'll also have it catch and forward them to my email.

      This is ok for production systems, where you are accessing a database or something like that, but otherwise I'd just output debug info within <!-- --> pairs.

      I don't really use assertions except that I write a lot of code to deal with unusual input in a non-fatal fashion.

      You are really, really asking for trouble coding that way. Not only will it make your software unnecessarily slow, it will also mask problems on different

  • unit testing only (Score:3, Interesting)

    by anomalous cohort ( 704239 ) on Monday March 28, 2005 @04:51PM (#12069443) Homepage Journal

    I only use assertions in unit test code such as here [junit.org] and here [nunit.org].

    • This is the way to do it -- abstract out the error-testing code (assertions and all that) and use unit tests enough to catch the assert-style bugs before the software hits a user's desktop, not after. Remember, YOU know what to do with an "assert failed" message. The user doesn't have a clue.
  • Test Drivers (Score:4, Interesting)

    by frooyo ( 583600 ) on Monday March 28, 2005 @04:54PM (#12069481)
    I use them like crazy when I write 'test drivers' for functional units of code (typically object-oriented classes, etc.).
  • I write programs for algebra (in Maple). I never use assertions. The programs must work correctly for all valid input. Invalid input is caught with a type check, and an appropriate error is returned. Assertion failures can only frustrate users, who typically do not understand what is going wrong.
    • Re:No (Score:4, Insightful)

      by ciroknight ( 601098 ) on Monday March 28, 2005 @05:07PM (#12069676)
      You also must realize that with Maple it's quite easy to detect errors in calculations and in coding. In practical programming with an object-oriented language, it's almost essential to have some kind of error-handling routines. Whether you use try-catch or the actual "assert" function, or some other error-checking setup, it will save you ages in debugging a larger code base.

      I think the newer Object Oriented languages (Java, C#, Objective C) have great error handling that foolishly gets ignored when younger coders go to work, myself included.
    • (in Maple). I never use assertions.

      That's because Maple uses plenty of assertions. I don't suppose you have to deal with being passed a null pointer or not having enough memory, do you? The coders of Maple took care of these situations with plenty of error-handling code, so you're guaranteed that what you think is a number is indeed a number.

  • Indubitably (Score:5, Interesting)

    by Screaming Lunatic ( 526975 ) on Monday March 28, 2005 @04:58PM (#12069529) Homepage
    At work we have custom assert, breakpoint, trace, and crash handler code. This is in C/C++. Whenever an assert fires off, it will break into the debugger if it is present. If not, it will do a stack trace, package up an email, and send it to our symbol server. The symbol server looks up function names and line numbers and sends out an email to the developers. We get an email within about 5 minutes of when an assert fires on a user's machine.

    Design-by-contract is tha shiznit. It keeps your code base quite stable.

    • That sounds super cool. Is that all in-house stuff? Does something like that exist in the form of an SDK?

      • Re:Indubitably (Score:3, Interesting)

        by NullProg ( 70833 )
        For gcc (AIX/Linux), look up signal handlers and the backtrace functions. For Win32 (Borland/Watcom/MSC), look up __try/__except statements with exception filters.

        You don't need an SDK to implement structured exception handling.
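        A bare-bones sketch of the gcc/glibc route, just to show the shape of it (a real handler would write to a log file or pipe rather than stderr):

        #include <execinfo.h>
        #include <signal.h>
        #include <unistd.h>

        static void crash_handler(int sig)
        {
            void *frames[64];
            (void)sig;
            int count = backtrace(frames, 64);                   /* capture the call stack */
            backtrace_symbols_fd(frames, count, STDERR_FILENO);  /* dump it; good enough for a sketch */
            _exit(1);                                            /* never return into the broken code */
        }

        int main(void)
        {
            signal(SIGSEGV, crash_handler);  /* crashes */
            signal(SIGABRT, crash_handler);  /* failed assert()s call abort() */
            /* ... the rest of the program ... */
            return 0;
        }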

        Enjoy,
      • Re:Indubitably (Score:5, Interesting)

        by Screaming Lunatic ( 526975 ) on Monday March 28, 2005 @06:23PM (#12070589) Homepage
        It is super sexy. If Paris Hilton was a coder she would say that it was hot. It's all in house. Using a bunch of Win32 calls. Here's some info:

        Symbol Server [microsoft.com] and IsDebuggerPresent [microsoft.com] and look in imagehlp.h.

        Our crash handler works similarly to Netscape/Mozilla Talkback.

        Maybe someday I'll put together something equivalent at home for Linux.

    • I agree wholeheartedly with your comment on Design by Contract. So much so, in fact, that I'm writing a DBC processor for .NET languages. Put in contracts through attributes, and the DBC processor will go in and insert the correct assertions and checks for you.
  • by Dr. Bent ( 533421 ) <ben&int,com> on Monday March 28, 2005 @04:59PM (#12069555) Homepage
    Assertions (usually) are a perversion of the failfast principle [failfast.com], because they can be turned off. For the same reason you can't fit a 120v plug in a 240v outlet, error checking in software should be a permanent and reliable function of the design of the system.

    Either your software is broken, or it isn't. If it is, you (and the users) need to know about it as soon as possible to prevent little errors from causing big errors.

    Now, if you want to have an assertion that cannot be disabled, and is basically just syntax sugar for if(condition) throw new SomeException();, that could be useful. But assertions that can be disabled only lead to a false sense of security.
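    A non-disableable version is easy enough to sketch in C++ (using std::logic_error where SomeException would go):

    #include <stdexcept>

    /* Always compiled in, regardless of NDEBUG or build type. */
    #define REQUIRE(cond) \
        do { if (!(cond)) throw std::logic_error("requirement failed: " #cond); } while (0)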
    • Assertions can turn little errors into big errors.
      • Assertions can turn little errors into big errors.

        It turns out that that's exactly what assertions are supposed to do.
      • More seriously, assertions that can be disabled can potentially hide little errors altogether. Consider what happens if evaluating the condition for your assertion has side effects, but in release builds that condition is never evaluated. Oh, dear: now you can test quite happily on your diagnostic build, but the one you ship to your customer has an extra bug. :-(
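        For example, something along these lines (read_next_record is hypothetical), where the release build silently skips the read because assert() expands to nothing:

        #include <assert.h>

        struct record;
        int read_next_record(struct record *rec);  /* returns 0 on success */

        void example(struct record *rec)
        {
            /* BUG: with NDEBUG defined, the whole call disappears. */
            assert(read_next_record(rec) == 0);

            /* Safer: do the work unconditionally, then assert on the result. */
            int status = read_next_record(rec);
            assert(status == 0);
            (void)status;  /* quiet unused-variable warnings in release builds */
        }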

      • I think that the idea behind them is to turn little errors into big errors so that a "little" error to the program (loaded garbage data for an account balance, for example) becomes a big error before any harm is done. It is generally accepted that errors that halt execution are preferable to logic errors, which seems to be the philosophy behind assert().
    • We know our software is broken (all large software is). So an assert is just a way to detect that our software is broken in a way we did not know about. That way we can hopefully fix the bug, and have a program that is a little less broken. At least until we add a new feature.
    • by AltaMannen ( 568693 ) on Monday March 28, 2005 @05:27PM (#12069921)
      There are two kinds of error checking, one for debugging and one for using the final product.

      Relying on assert() for a final product is not helpful for the user; the user does not generally have a way to fix and rebuild the product, even knowing the line and file that found the error. For the developer, assert() is very useful for finding places where bad conditions exist.

      The end user needs more sophisticated error checking that visibly explains what is wrong, or that simply refuses to create spheres with a zero or negative radius, for example (or simply makes sure the sphere radius is positive).
      • by vadim_t ( 324782 ) on Monday March 28, 2005 @07:57PM (#12071605) Homepage
        Hell, no way.

        Assertions are for internal self-checking. Code like checking that a pointer isn't NULL can't be turned into a useful error message for a user. At best, it comes out as "Internal error in module foo", which isn't really helpful to anybody. You could remove it, but that's even worse: now the application will just continue and crash at some random point.

        Error checking should ideally be done in layers. By that I mean that the DrawSphere function, Sphere class or whatever should either FAIL HORRIBLY the moment you try to use a negative radius, or do nothing and return an error to the caller, but definitely NOT pretend that everything is going fine. That's a sure recipe for getting some really strange bugs. The user interface should prevent that from ever happening anyway.

        Now, why? Because checks in the lower levels like the drawing function are there to make sure everything works as intended. If a negative radius somehow got passed it means that the UI is bad, or the caller did something wrong. Once at the lowest level you've determined that something is wrong, very often the only sensible option is a fatal abort.

        The sphere drawing code doesn't know what it's drawing, or what the consequences are if something doesn't draw as it should. The application's UI is a lot more qualified to control these things, and that's where it should be done.
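        A rough sketch of that layering (DrawSphere, show_message, and the return codes are all hypothetical):

        #include <assert.h>

        enum { OK = 0, ERR_BAD_RADIUS = -1 };
        void show_message(const char *msg);

        /* Low level: internal self-check. It never quietly "fixes" bad input. */
        int DrawSphere(double radius)
        {
            assert(radius > 0.0);        /* fires in debug builds: a caller bug */
            if (radius <= 0.0)
                return ERR_BAD_RADIUS;   /* in release: refuse, don't pretend */
            /* ... actual drawing ... */
            return OK;
        }

        /* UI level: the layer that knows how to talk to the user. */
        void OnRadiusEntered(double radius)
        {
            if (radius <= 0.0) {
                show_message("Radius must be positive.");
                return;                  /* the bad value never reaches DrawSphere */
            }
            DrawSphere(radius);
        }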
        • You could remove it, but that's even worse, now the application will just continue and crash at some random point.

          Or not. A lot of times, things that are asserted just don't cause a problem later on. Especially if the rest of the code is fairly well written and free of lots of intermodule dependencies. Why have a probability of crashing at 1.0, when you can have one that is less than 1.0? Yes data corruption is an issue, but again, see the comment above about intermodule dependencies. Good code check
          • Assertions are simply not for cases where graceful recovery is easy. For instance, suppose an assertion in code that implements a doubly linked list trips because of some impossible state, like having a list where the first item is NULL, but the last isn't.

            Well, what exactly do you do about a linked list where there is an end but no beginning? That kind of thing either indicates a serious error in the logic or memory corruption. Either way, graceful recovery is next to impossible.

            Now, if this is happening
  • They drive me nuts (Score:3, Insightful)

    by n1ywb ( 555767 ) on Monday March 28, 2005 @05:01PM (#12069585) Homepage Journal
    I'm working on some old library code for my job now and it's chock full of assertions. I think it's ridiculous that a library call should cause the calling application to exit because of a failed assertion. In a "normal" application, assertions don't belong outside of main().
    • by treerex ( 743007 )

      I think it's ridiculous that a library call should cause the calling application to exit because of a failed assertion.

      Really? Seems to me that the library is telling you that you are calling it wrong, and you should fix your code. Which is, of course, the whole point.

      Assertions are a Good Thing on many levels, especially when developing new code, since they will enforce the invariants in your API during development. Obviously you would have to include conditional code that checks arguments and returns an

      • by Anonymous Brave Guy ( 457657 ) on Monday March 28, 2005 @08:52PM (#12072073)
        Seems to me that the library is telling you that you are calling it wrong, and you should fix your code.

        Perhaps, but no library code should ever cause the whole application to abort, ever. There are few absolutes in software development, but this is one of them.

        The library code is entitled to screw up whatever internal data it likes, and to give me whatever garbage out if I put garbage in, but it is not allowed to screw the rest of my program (and thus to circumvent any graceful-shutdown error-handling system I have in place there).

    • by miu ( 626917 ) on Monday March 28, 2005 @06:43PM (#12070829) Homepage Journal
      Assertions do not detect runtime errors; they do not even detect incorrect assumptions about data formatting; they detect programming errors. An assertion is a statement you make about the state of some portion of the program at that point. If the library is asserting on you, then either the assert (and the library itself) is wrong or your use of the library is wrong. Either way the assert has done exactly what it was supposed to do - alerted you to a programming error.
  • Assuming that assertions are used only for the "debug" version of the program, they are ideal for code that needs to be as fast as possible but can still work if there are slowdowns, such as device drivers. They're great for parameter checking during development. Once the code has been verified to work, they can be automatically compiled out for the "release" version of the program.
  • No. (Score:5, Funny)

    by Saeed al-Sahaf ( 665390 ) on Monday March 28, 2005 @05:01PM (#12069593) Homepage
    Real programmers do not use them. Code works fine the first time. You have "bugs" in your code???
  • by ulatekh ( 775985 ) on Monday March 28, 2005 @05:04PM (#12069627) Homepage Journal

    I read the Eiffel [eiffel.com] book, but I've never been in a position to actually write code in it. But I love the concept of programming by contract [eiffel.com].

    I just use assertions to do preconditions, postconditions, and checks. Invariants are a nice idea, but in practice seem to be a big performance hit. I just do invariant-like assertions as needed.

    I assert the heck out of my code. You can see some of it here [sourceforge.net].

    I don't see too many assertions in other people's code. Then again, I don't see too much that looks like planning or insight in other people's code most of the time, so why should I be surprised? I can't believe how sloppy we are as a profession. Like my coffee cup says... if builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.

    • Programming by contract is all very nice until your methods start suing one another...
    • What strikes me as insane is how industry best-practice is to take a foot-shooting language like C++, Java, or C# and incrementally add (and pay for) braces to keep you from being able to point the gun at your foot and pull the trigger -- when you could just not give your programmers guns!

      Unit tests are an attempt to duct tape stronger typing and contracts on to languages that are unsuited for them, leading to the necessity of kludges like mock objects and test generators. But no matter what tools you add,
  • by c0d3h4x0r ( 604141 ) on Monday March 28, 2005 @05:11PM (#12069714) Homepage Journal
    At my place of employment we use Asserts liberally but with an emphasis on using them properly. Specifically, asserts are not a substitute for appropriate error handling. An assert should be used only as a mechanism for bringing developer or tester attention to a special case or flaw and making it convenient to debug (by providing a chance to break in). Subsequent error handling should still follow. Another way to look at this is that asserts should always be ignorable (the product doesn't crash, corrupt data, or enter an unrecoverable state if the assert is ignored).
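    In code, an ignorable assert followed by real handling looks something like this (ASSERT here is a project macro that breaks into the debugger and then lets execution continue, rather than aborting):

    struct record;

    int process_record(struct record *rec)
    {
        if (rec == NULL) {
            ASSERT(!"caller passed a NULL record");  /* grab a developer's attention */
            return -1;                               /* ...but still handle it gracefully */
        }
        /* ... normal processing ... */
        return 0;
    }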

  • by HopeOS ( 74340 ) on Monday March 28, 2005 @05:11PM (#12069719)
    Session Timeout
    Your session has timed out. Please register again.

    Perhaps posting to slashdot wasn't the best way to collect data, but it will give them something to think about with regards to server application reliability.

    -Hope
    • Their site requires cookies for state management. I had them turned off for all but specific sites. Perhaps an assert(cookie_is_valid) would be in order before that session lookup.

      -Hope
  • Quite a bit actually; if I'm starting a new project one of the first things I do is add a few simple assert-like macros that use the appropriate error-reporting infrastructure.

    I find that they can help quite a bit to catch less obvious bugs.
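    The macros themselves are nothing fancy - roughly this kind of thing, with REPORT_ERROR standing in for whatever error-reporting infrastructure the project already has:

    #include <stdio.h>

    /* Fallback so the sketch is self-contained; real projects route this elsewhere. */
    #define REPORT_ERROR(...) fprintf(stderr, __VA_ARGS__)

    #define CHECK(cond)                                      \
        do {                                                 \
            if (!(cond))                                     \
                REPORT_ERROR("check failed: %s (%s:%d)\n",   \
                             #cond, __FILE__, __LINE__);     \
        } while (0)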

    If I'm adding code to an existing large project though, I tend to follow whatever the general convention of that project is (i.e., I don't add my own assert macros :-).

    [I was gonna fill in your survey too, but I'm not at work, so I can't reply to the email check...]
  • My Ass Erts (Score:5, Funny)

    by lbmouse ( 473316 ) on Monday March 28, 2005 @05:16PM (#12069767) Homepage
    ...when I sit and code too long.
  • Yes, look into GCC (Score:5, Informative)

    by norwoodites ( 226775 ) <pinskia@ g m a il.com> on Monday March 28, 2005 @05:17PM (#12069793) Journal
    Yes, and in fact if you look into GCC there is more checking code than most people think, because most of the time in a released compiler these checks are not enabled (--disable-checking). In fact, in some cases even on the main development branch some checking is not enabled by default because it just takes so long (like 5 days) to bootstrap the compiler. I am talking about gcac checking. RTL checking takes a long time too.

    --enable-checking=assert,fold,gc,gcac,misc,rtlflag,rtl,tree is fun, as it takes 5 days to build a compiler even on a fast computer.
  • by a1englishman ( 209505 ) on Monday March 28, 2005 @05:20PM (#12069827) Journal
    Tom Yager of InfoWorld had an article [infoworld.com] that spoke to this issue a few weeks ago. He was talking about how most OSes fail to guard applications against timeouts and hardware failures. They leave it up to application developers to bloat their code with all kinds of handlers. Most applications simply die when faced with these kinds of problems. It would take far too long to code for all the possibilities, and cost too much.
    • by AuMatar ( 183847 ) on Monday March 28, 2005 @07:19PM (#12071230)
      The problem with the OS doing it is that it means the OS enforces too much policy. Let's say I have a multifunction printer hooked up to my PC. The fax fails. What should the OS do? Should it eliminate all access to the device? Just to the fax? Should it cut off scan and copy too, in case the scanner is what broke? What about print? It may have been the printer that broke.

      Timeouts? How does the OS know what's a reasonable timeout? Program A may expect data over TCP every 30 seconds or it signifies a failure - streaming data, for example. Program B may be a telnet session being used infrequently. It may not have data for minutes, even hours. What timeout should the OS enforce? When it occurs, what should the OS do - shut down the connection? Ping the other machine? Send a keepalive message?

      The OS *can't* handle these because every application requires different handling. Adding in such behavior would make it impossible to write apps that need other behavior.

      The correct answer is not to push it into the OS. It's to write better abstractions and libraries on top of the OS, so programmers can reuse the code and not have to write the same thing repeatedly. For example, I use my own networking library that replaces recv() with a custom recv. My version loops calling recv on the socket until either n bytes have been read, an error occurs, the socket is closed, or a timeout (passed in as an int) expires. It then returns to the caller which condition occurred.

      Other layers then go from there. An HTTP or FTP layer would handle those timeouts and failures in a standard way. Some situations may require it to chain an error on up, saying the protocol layer failed. And so on and so forth. This allows maximum flexibility and reuse at all levels, because all layers leave the policy decisions up to the calling code rather than making their own. If each layer decided how to handle every situation, you'd end up having to rewrite each layer for each application, since the same solutions would not be applicable. Worse, some applications would need different decisions for different parts of the application. Then you have a real nightmare.
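      The wrapper itself is short; roughly like this (names and the per-attempt timeout handling are illustrative, not the actual library):

      #include <sys/select.h>
      #include <sys/socket.h>
      #include <sys/types.h>

      enum recv_result { RECV_OK, RECV_CLOSED, RECV_ERROR, RECV_TIMEOUT };

      /* Read exactly n bytes, or report which condition stopped us. */
      enum recv_result recv_n(int sock, char *buf, size_t n, int timeout_secs)
      {
          size_t got = 0;
          while (got < n) {
              fd_set fds;
              struct timeval tv = { timeout_secs, 0 };  /* timeout per read attempt */
              FD_ZERO(&fds);
              FD_SET(sock, &fds);

              int ready = select(sock + 1, &fds, NULL, NULL, &tv);
              if (ready == 0) return RECV_TIMEOUT;      /* caller decides what that means */
              if (ready < 0)  return RECV_ERROR;

              ssize_t r = recv(sock, buf + got, n - got, 0);
              if (r == 0)     return RECV_CLOSED;       /* peer closed the socket */
              if (r < 0)      return RECV_ERROR;
              got += (size_t)r;
          }
          return RECV_OK;
      }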
      • "Lets say I have a multifunction printer hooked up to my PC. The fax fails."

        Just to say, the computer doesn't see a "multifunction printer". It just sees a USB hub with several devices connected. If the fax fails, then it's just that device that has failed.
  • by Dormann ( 793586 ) on Monday March 28, 2005 @05:25PM (#12069901)
    If you are developing for large, flexible software environments, the best coding practice is to hard-code logic for all situations, right? Write alternate code paths if that float is NaN or if that array size is negative. Catch, throw, and all that.

    As your software environment and processing power shrink (think game consoles, PDAs, cell phones...or simply low-level code that can't afford the bloat), you have to make assumptions. It's no longer realistic to believe that every code module should handle every set of states and inputs like a perfect black box.

    As you increase the lifespan of the code and the number of coders working with it, the usefulness of asserts also increases. When you write your brilliant, super-fast method that only works on normalized floats or ints with 22 significant bits, you'll know the calling code is following your rules... but in 6 months when the function becomes more popular without your knowledge, you're going to want it to blow up the moment it's misused.

  • by JPyObjC Dude ( 772176 ) on Monday March 28, 2005 @05:33PM (#12070000)
    I write very large applications that have a huge customer base. Null Pointer exceptions or the like are totally unacceptable errors for the user to see.

    Assertions allow the developers of large and complex applications to support their code over the long run, which is always the biggest weak spot in most large applications.

    It is always better to have a slower, verbose application than a fast and silent one. Example:
    --} IE - Overly optimized for performance improvements and pretty much impossible to debug complex JavaScript with.
    --} Mozilla - Well written and fantastic to debug complex JavaScript with.

    Obviously, simple, small, run-a-few-times applications should not need assertions.

    JsD
    • You mentioned that assertions are valuable in large applications, but then you talk about IE vs. Mozilla and debugging JavaScript -- that's not assertions at all; that's error-handling.

      Any browser takes HTML, JavaScript, CSS, etc. as user input -- as such, it could be *anything*, and the application must parse it safely. Handling errors in this input should never involve assertions, though, and assertions should always be turned off before distribution, because they are by definition a very unpleasant way
      • When I am speaking of IE vs Moz, it is on the issue that assertions and *other* good programming methodologies are often skipped for improved performance. This paradigm has burned MS many times in the past and will probably continue to do so. Stable, usable, and maintainable is more important than faster. (Unfortunately it's harder to get the marketing gurus pumped up about stability and user friendliness.)

        IE JavaScript and many other IE frameworks are poorly written as a trade-off for speed. For instance, any
  • I am collecting opinions and stats via a short questionnaire. Thanks.

    No...no this is not right at all. I already got tricked into giving out my personal info in exchange for a candy bar...no more surveys for me.

    Wait, are there movie passes being given away?
  • by Alomex ( 148003 ) on Monday March 28, 2005 @05:50PM (#12070229) Homepage
    I checked out a copy of shipping code and asserted it all over, including some asserts about which other developers said "now you are just being silly; how can this assert not be true?" Within hours we found tons of dormant bugs all over the place, and two of the "silly" asserts were triggered.

    Our bug count went down by 50% within a week of asserting the code, and later on, on several occasions when customers reported bugs, all we had to do was run the instrumented, asserted version and the asserts caught the bug at once.
  • semantic (Score:5, Insightful)

    by BinLadenMyHero ( 688544 ) <binladen@9[ ]ls.org ['hel' in gap]> on Monday March 28, 2005 @05:54PM (#12070267) Journal
    Many people use assert() like an exception, and it's not made for that.
    You should use assert() to check for situations when the condition should never be false, unless there's a serious flaw in the software logic.

    For example, assert(malloc() != NULL) is bad, but something like this is ok:

    if (list->head != NULL) {
        void *last = get_last_element_of_list(list);
        assert(last != NULL);
    }
    • For example, assert(malloc() != NULL) is bad
      It's bad, but code that blindly uses the result of malloc() without checking it at all is much worse, and code like that can be found all over the place.
      • Re:semantic (Score:3, Informative)

        Sure, better assert() than nothing, but using assert() in this case is semantically wrong. The right thing to do is to check for the error condition:


        void *p = malloc(size);
        if (p == NULL) {
            perror("malloc");
            exit(1); /* or whatever */
        }


        Again, assert() is for checking for situations that should never happen (but can happen due to a fault in programming logic, which the assert() is there to catch), not for possible runtime errors.
  • Abort (Score:4, Interesting)

    by file-exists-p ( 681756 ) on Monday March 28, 2005 @06:00PM (#12070347)

    If assert fails, I abort().

    I use assert to detect the problem instead of detecting its byproducts.

    --
    Go Debian!
  • by Ozwald ( 83516 ) on Monday March 28, 2005 @06:04PM (#12070404)
    - Assertions are for notifying you that something occurred during debugging/testing that should be impossible. This could be notifications of bad data that's slipped past validation. Note that assertions are stripped out of release builds.

    - Exceptions are errors that cannot be ignored. For example, failing to open a file before reading it.

    - Error returns are errors that occur that are not a big deal to ignore. This could be parsing an empty or invalid string.

    For example, you might have a constructor that allocates space for a private pointer and a function (call it SetIt) that copies data to it.

    In this case, the constructor would throw an Exception if new[] fails. This is an error that cannot be avoided.

    SetIt should assert if the private pointer is invalid. It could also assert if the incoming data is NULL, which should alert you that there's a way to send in invalid pointers.

    SetIt can then return an error if the incoming data (user-typed, I assume) is too long or in an incorrect format.
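    A minimal sketch of that split, with the names (Buffer, SetIt) made up for illustration:

    #include <cassert>
    #include <cstddef>
    #include <cstring>

    class Buffer {
    public:
        explicit Buffer(std::size_t capacity)
            : data_(new char[capacity]),   // new[] throws std::bad_alloc on failure
              capacity_(capacity) {}
        ~Buffer() { delete[] data_; }
        Buffer(const Buffer &) = delete;
        Buffer &operator=(const Buffer &) = delete;

        // Asserts catch caller bugs; the return value handles bad (user-typed) input.
        bool SetIt(const char *src, std::size_t len)
        {
            assert(data_ != nullptr);      // only possible if construction is broken
            assert(src != nullptr);        // a NULL here means someone can pass in invalid pointers
            if (src == nullptr || len > capacity_)
                return false;              // too long or invalid: just an error to the caller
            std::memcpy(data_, src, len);
            return true;
        }

    private:
        char *data_;
        std::size_t capacity_;
    };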

    Oz
  • Did anyone else notice you can take it multiple times?

  • grep -R assert /usr/src/linux-2.6.10 | wc -l
    2490

    Unless of course you're convinced you can write perfect bug-free code... Am I the only one who thinks this is too obvious a question to constitute an "Ask Slashdot"?

    oh well
  • I use C++. I don't literally use "assert" (which is a macro, not a statement, in C++), because in C++ it is throw-away code. It stops working once NDEBUG is defined. Someone wrote that turning off checks in production code is like unbuckling your seatbelt when you leave the driveway. I read that, and believed it. So much for assert.

    I do use "throw", which isn't a statement either (it's an expression). Using throw leads to the question "what does the catch do?" Simple rule: If a catch can't fix an program

    • by nytes ( 231372 ) on Monday March 28, 2005 @08:28PM (#12071889) Homepage
      I disagree. To co-opt your analogy, asserts are more like the walk-around you do on your car before you get in to drive away. (Er, you did read the owner's manual for your car, didn't you?)
      e.g.
      assert(CarStillHasFourWheels);
      assert(HoodIsLatchedDown);

      These are assumptions that the program, once it is debugged, should be able to rely upon. They check that the program is internally consistent. However, it pays during debugging to have the program test those assumptions. It could just be that what you assumed to be true, isn't.

      When an assumption proves to be wrong (the assert fires) the program stops and tells you where the fault was found, lets you bring up a debugger, etc.

      Checks are a different thing from asserts. Checks should, once the program is released, keep the program from processing corrupt/illegal data, when such data might be expected (like input from an operator). If your program is (truly) internally consistent, then only the inputs (and status of output operations) should need to be checked for the program to run.

      You never turn off checks in production code, but at some point you can take off the training wheels (assertions).

      And before anyone can jump on me about assuming anything while programming, take a look at your own code. Unless you test every freakin' index and/or pointer every freakin' time you use it, you are making assumptions on almost every line (at least, with C/C++ code).

  • Assertions are good (Score:2, Informative)

    by BagMan2 ( 112243 )
    Assertions allow the programmer to check for conditions that would indicate a bug in the code, in an attempt to identify the problem sooner; the sooner you see the problem, the easier it is to find and fix.

    Having assertions exist only in debug code is also an important principle that actually allows them to be more useful. For example, if I have some very low-level routine that is time critical, I am not going to want to do parameter validation in the low level routine, particularly if bad param
  • by BenjyD ( 316700 )
    Yes, of course. Almost all functions should have at least one assert (ErrFatalDisplayIf in PalmOS) in the debug build, IMO. I generally add sanity checks as well - if I know an integer argument should never be much above 200, assert that it's not.
    • How do you assert "should never be much above 200" in C/C++? I guess you could use assert(!(x>200+MUCH)), but how much is MUCH?
      • Whatever's sensible, I guess. In my Free software work (PalmOS text editor) I use a lot of unsigned 16-bit integer values, for example, so if the number of characters that fit across the screen becomes 65,000, then I've probably done something like "x=3-5" and overflowed x.
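        So the check ends up being nothing more exotic than this (SANITY_SLACK and chars_per_line are made-up names):

        #define SANITY_SLACK 50   /* how much "much above 200" is, is a judgment call */
        assert(chars_per_line <= 200 + SANITY_SLACK);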
  • by Brandybuck ( 704397 ) on Monday March 28, 2005 @09:06PM (#12072174) Homepage Journal
    If so, what should be done when errors or exceptions are raised during the evaluation of an assertion?

    Scream and throw up. Or in other words, loudly dump core. That's what the asserts are for.

    Asserts should always be turned off in production software, so it doesn't matter how noisy the asserts are. If you're an open source project and are worried about l4m3r distros building your stuff with debug on (I've had it happen to me), then turn it off by default.
  • One of the most important things an assertion does is to document the code. Even if they were only comments, they would still be useful. It's just an added bonus that you can make them "live," either at debug time or all the time.

    I would consider it a mistake to assert that memory allocation was successful; that's the kind of thing that has to be handled by code, not an assertion. But you might assert that a pointer is non-null if you require it to be initialized.

    In other words, an assertion should e

  • by digitalEric ( 527320 ) on Tuesday March 29, 2005 @12:03AM (#12073131)

    Assert() is a big asset to program maintenance and debugging.

    Bad uses of assert() are like bad comments: they do nothing to help you. Good uses of assert() serve two purposes: (1) to document assumptions made by the code, and invariants that must be maintained, and (2) to make debugging easier when an assumption/invariant is violated.

    When reading code, you know there are probably some corner cases that don't work correctly. There are going to be assumptions embedded in the code. Future maintainers of the code need to know these assumptions; they can either find them by violating them and then tripping over the resulting bugs, or by reading comments.

    assert(foo < columns);
    assert(hash_contains_key(bar, foo));

    is about as readable as a comment like

    /* foo is less than the # of columns, and bar is a hashtable which contains a value for the key foo */

    to the same effect, except that the assert() statements are machine-checkable.

    Assert() follows the principle of "fail fast": when something goes wrong, you want the program to stop right away, before it starts corrupting things. When you get a backtrace at (or soon after) the point where the problem occurred, it is much easier to track down what's gone wrong than if the program crashes from a null pointer exception a few million instructions later. Or worse, the corrupt data might not be noticed for a long time; at that point all you know is that _some_ piece of code corrupted data. An assert() can significantly narrow the search for the offending line(s).

  • I don't think you can assert that.
  • I work on kernel device drivers. Some functions assume that they are called with a lock held. Having an assert at the beginning of this block of code adds a guarantee - if a new routine calls this function, it will fail if the lock is not held.

    I use asserts to guarantee future maintenance won't break the code.
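    Outside the kernel, the same idea can be sketched with a lock wrapper that remembers its owner (purely illustrative; inside the Linux kernel itself, lockdep_assert_held() serves this purpose):

    #include <atomic>
    #include <cassert>
    #include <mutex>
    #include <thread>

    class CheckedMutex {
    public:
        void lock()   { m_.lock(); owner_.store(std::this_thread::get_id()); }
        void unlock() { owner_.store(std::thread::id()); m_.unlock(); }
        void assert_held_by_me() const { assert(owner_.load() == std::this_thread::get_id()); }
    private:
        std::mutex m_;
        std::atomic<std::thread::id> owner_;
    };

    // This routine assumes its caller already holds the lock; the assert makes
    // that assumption enforceable when new callers are added later.
    void update_device_state(CheckedMutex &lock)
    {
        lock.assert_held_by_me();
        /* ... touch shared state safely ... */
    }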
  • The only valid place to use assertions in a real application is in unit tests. Aside from that, this concept should be in the history books. There is simply no valid reason to use assertions otherwise. Why?

    1) In general, programs should not simply terminate because they are not happy. They should recover gracefully and continue on. In the cases where it is appropriate to terminate, an assertion is an exceptionally poor way to handle it.

    2) Some say to use them during development. I say no fucking way
