
Optimizations - Programmer vs. Compiler? 1422

Saravana Kannan asks: "I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former code snippet is because they believe it would result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize the particular code snippet. IMHO the latter code snippet is clearer than the former, and I would use it in my code if I knew for sure that the compiler would optimize it and produce machine code equivalent to the former code snippet. The previous example was easy. What about code that is more complex? Now that compilers have matured over the years and have had many improvements, I ask the Slashdot crowd: what do you believe the compiler can be trusted to optimize, and what must be hand optimized?"
"How would your answer differ (in terms of the level of trust in the compiler) if I'm talking about compilers for desktops vs. embedded systems? Compilers for which of the following platforms do you think are more optimized at present - desktops (because they are more commonly used) or embedded systems (because of the need for maximum optimization)? It would be better if you could stick to free (as in beer) and Open Source compilers. Give examples of code optimizations that you think the compiler can/can't be trusted to do."
This discussion has been archived. No new comments can be posted.

  • by xlv ( 125699 ) on Friday February 25, 2005 @04:50PM (#11781223)
    Donald Knuth wrote "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

  • Code Twiddling (Score:3, Informative)

    by bsd4me ( 759597 ) on Friday February 25, 2005 @04:53PM (#11781269)

    There really is no one answer to this, as it depends on the compiler itself, and the target architecture. The only real way to be sure is to profile the code, and to study assembler output. Even then, modern CPUs are really complicated due to pipelining, multilevel cache, multiple execution units, etc. I try not to worry about micro-twiddling, and work on optimizations at a higher-level.

  • Huh (Score:5, Informative)

    by NullProg ( 70833 ) on Friday February 25, 2005 @04:54PM (#11781290) Homepage Journal

    As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'.

    Both forms resolve to the same opcode. Even under my 6502 compiler.

    CMP register,val

  • Wrong, wrong, wrong (Score:5, Informative)

    by JoeBuck ( 7947 ) on Friday February 25, 2005 @04:54PM (#11781299) Homepage
    Don't give advice when you don't know C. C requires that when a 0 is converted to a pointer, the result is NULL, so it is absolutely false to claim that NULL could be defined as -1.

    "ptr == 0" must give the same result as "ptr == NULL", always.

  • by Sycraft-fu ( 314770 ) on Friday February 25, 2005 @04:56PM (#11781333)
    Each compiler is different. Some will optimise things others won't.

    In general, however, systems are now fast enough that when in doubt, write the clearest code possible. For most apps, speed is not critical; but for all apps, stability and lack of bugs are important, and obscure code leads to problems.

    Also, for things that are time critical, it's generally just one or two little parts that make all the difference. You only need to worry about optimizing those inner loops where all the time is spent. Use a profiler, since programmers generally suck at identifying what needs optimising.

    Keep it easy to read and maintain, unless speed is critical in a certain part. Then you can go nuts on hand optimization, but document it well.
  • by hpa ( 7948 ) on Friday February 25, 2005 @04:58PM (#11781368) Homepage
    The former is a lot more concise, and conciseness helps getting the *big* concepts down.

    Anyone with any familiarity with C will consider the latter form unnecessarily more verbose, and therefore less clear.

    There is one exception to that, and that is when "ptr" is in fact a complex expression that isn't obviously a pointer expression at a glance. In that case, == NULL or != NULL spells out to the reader of the code "oh, and by the way, it's a pointer." That is the ONLY reason to write this for clarity.

    There is a whole category of "bad commenting" in which comments that are only useful to someone who doesn't know the programming language actually make the code a lot harder to read. A comment like:

    a += 2; /* Add two to a */

    ... is not helpful in any shape, way or form, and just provides mental clutter.

  • by Dan Ost ( 415913 ) on Friday February 25, 2005 @04:59PM (#11781371)
    Donald Knuth was quoting Tony Hoare when he said that.
  • by EccentricAnomaly ( 451326 ) on Friday February 25, 2005 @05:01PM (#11781413) Homepage
    Trying to optimize in your head rarely does any good... Use a good profiling tool (like Apple's Shark []) to find out what part of your code uses the most time and then just concentrate on making that part faster.

    Using a profiler and your own brain you can often significantly improve over what a compiler can do.
  • Re:NULL not always 0 (Score:4, Informative)

    by HeghmoH ( 13204 ) on Friday February 25, 2005 @05:04PM (#11781468) Homepage Journal
    NULL isn't necessarily equal to 0 at the machine level. However, at the language level, 0 is always interpreted exactly the same as NULL if it's being converted to a pointer, and vice versa. This is explicitly spelled out in the C standard, so any compiler that doesn't obey this is breaking the standard. Saying !ptr and ptr == NULL is always identical on a conforming compiler (which they pretty much all are).
  • by Medievalist ( 16032 ) on Friday February 25, 2005 @05:08PM (#11781545)

    Back when I was doing real-time programming in FORTRAN-II on a PDP-11/34, I was able to very slightly decrease execution time by changing "divide by two" instructions to arithmetic shifts and "square root" operations to "power of 1/2" (because x^.5 = sqrt(x)).

    Since those instructions were in a loop that was running 80,000 times per second, this meant that I could get more data when destructively testing rocket motors... in the end, I got a 40% increase in processing speed that was worth a whole lot of money to the company. AND it got me a raise ;) so don't tell ME little increases in efficiency aren't worthwhile!

    The answer to your question is TEST your unique combination of hardware and software, find out what works better/faster/more-elegantly, and use the most human-readable form that you can afford to given your unique requirements.

    If you are doing real-time digital data acquisition involving usecond events, you will probably end up with some pretty cryptic source code, so be nice and put in lots of comments.

    If you are programming an office automation application, your code should probably be readable by kindergarteners, since you won't have very difficult performance requirements.
  • Re:NULL not always 0 (Score:2, Informative)

    by LiquidCoooled ( 634315 ) on Friday February 25, 2005 @05:10PM (#11781568) Homepage Journal
    You're right about non-zero memory pointers (I was just re-reading a FAQ [] compiled from the K&R bible).

    BUT as part of a C compiler, using a "0" in a pointer context will always compile to a null pointer on the target system.

    The only problems with zeros come from the cast applied when passing parameters, and in such cases even a NULL definition may require explicit casting.

  • Re:NULL not always 0 (Score:1, Informative)

    by Anonymous Coward on Friday February 25, 2005 @05:11PM (#11781590)
    Well, just to be a bit anal, the standard does *not* guarantee that a NULL pointer will be == to an integer 0. It guarantees that it is always == to a *constant* 0, *in the source*. So, this will not work:

    int main(void) {
        int a = 0;
        int *p = NULL;
        if (p == (int *)a)
            printf("Equal!\n");
        return 0;
    }
  • by Anonymous Coward on Friday February 25, 2005 @05:17PM (#11781679)
    From the ANSI C specification, section

    An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.

    Once again, it should be said:

    Don't give advice not to give advice to not give advice when you don't know C when you don't know C when you don't know C.
  • Re:Huh (Score:2, Informative)

    by gatkinso ( 15975 ) on Friday February 25, 2005 @05:18PM (#11781685)
    Sort of correct.

    According to the language definition, a constant 0 in a pointer context is converted into a null pointer at compile time. That is, in an initialization, assignment, or comparison when one side is a variable or expression of pointer type, the compiler can tell that a constant 0 on the other side requests a null pointer, and generate the correctly-typed null pointer value.

    Check out section 5.2
  • by Anonymous Coward on Friday February 25, 2005 @05:19PM (#11781703)
    Actually, it doesn't. ANSI C, at least, does not define NULL - that is system-dependent. There may not be any systems that choose to use anything except 0, but this is not standardized at all (unless C99 covers it).
  • by opticsguy ( 862820 ) on Friday February 25, 2005 @05:19PM (#11781709)
    I recently attended a code optimization workshop for the IBM PPC compiler for Mac OS X. The compiler designer stressed that hand-optimized code (i.e. unrolled loops, register variables, stupid pointer tricks, etc.) only confused the optimizing compiler and would usually result in slower code overall when -O4 and higher optimizations are enabled. He provided a number of examples of why this was the case and convinced us that modern compilers are much better at optimizing than humans. He also stressed the need to profile code and look for things the compiler cannot optimize. Examples are using double floats where single precision will suffice and, in the case of the PPC, unnecessary conversions between floats and integers.
  • Re:Huh (Score:3, Informative)

    by PitaBred ( 632671 ) <> on Friday February 25, 2005 @05:21PM (#11781734) Homepage
    As people said above, the language definition dictates that 0 == NULL. Mod parent idiot.
  • by ron_ivi ( 607351 ) <(sdotno) (at) (> on Friday February 25, 2005 @05:21PM (#11781738)
    Using cryptic, short variable names also shaves valuable microseconds off compile time and run time.

    There's nothing wrong with short variable names.

    Thus quoth Linus in the Linux Coding Style guide.

    Chapter 3: Naming

    C is a Spartan language, and so should your naming be. Unlike Modula-2
    and Pascal programmers, C programmers do not use cute names like
    ThisVariableIsATemporaryCounter. A C programmer would call that
    variable "tmp", which is much easier to write, and not the least more
    difficult to understand.

    HOWEVER, while mixed-case names are frowned upon, descriptive names for
    global variables are a must. To call a global function "foo" is a
    shooting offense.

    GLOBAL variables (to be used only if you _really_ need them) need to
    have descriptive names, as do global functions. If you have a function
    that counts the number of active users, you should call that
    "count_active_users()" or similar, you should _not_ call it "cntusr()".

    Encoding the type of a function into the name (so-called Hungarian
    notation) is brain damaged - the compiler knows the types anyway and can
    check those, and it only confuses the programmer. No wonder MicroSoft
    makes buggy programs.

    LOCAL variable names should be short, and to the point. If you have
    some random integer loop counter, it should probably be called "i".
    Calling it "loop_counter" is non-productive, if there is no chance of it
    being mis-understood. Similarly, "tmp" can be just about any type of
    variable that is used to hold a temporary value.

    If you are afraid to mix up your local variable names, you have another
    problem, which is called the function-growth-hormone-imbalance syndrome.
    See next chapter.
  • by papaver1 ( 846310 ) on Friday February 25, 2005 @05:22PM (#11781742)
    I took a compilers class a year or so ago. We had to make a compiler in C using lex/yacc for Pascal. We also put some optimization into it. But our professor, who has been doing this for around 10 years or so, always said to never trust a compiler to optimize your code. Most don't do a very good job of it, and each compiler optimizes stuff differently. Some even broke code. His comparisons were done using gcc, the IBM compiler, and a couple of others, as well as optimizing in the code itself. The hand-optimized code ran better in 90% of the runs.
  • Re:Clear Code (Score:2, Informative)

    by jallen02 ( 124384 ) on Friday February 25, 2005 @05:22PM (#11781748) Homepage Journal
    Not to mention if you write "correct" and cohesive code you can easily optimize it.

    As long as you don't over-architect or go to extremes of abstraction, you can write well-abstracted code that pretty nicely separates the things that change from the things that stay the same. When you create this kind of separation, optimization becomes easier and easier.

    Good (simple) examples are creating a generic interface to sort. You may start off with a simple insertion sort (why bother with heap/quick if you don't need it?). Anyone can understand insertion sort and its fine for relatively small sets of data. Then you expand the scope of the program and realize you need a better sorting algorithm. No problem! You go in and change the behind the scenes sort. Your interface to sort did not change and your program happily works just how it did. You apply this very same idea to as much of the software you write as is reasonable. Separate out data models and logic and displays. Keep each part of the system doing one task and doing it well.

    If the system is written well enough, you don't even need anything but high-level comments explaining the "big picture" of what is going on. I hate comments like the following:

    /* Set foo to 10 */
    int foo = 10;

    Well thank you captain obvious!

    Much better is an explanation over a block of 5-10 lines giving you an idea of what you are trying to achieve. Comment anything that is not clear, like if you're using bitwise shifts to multiply and divide, for example.

    Strive for simplicity :)
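A rough C sketch of the swap-the-sort-behind-an-interface idea above (the names sort_ints, USE_QSORT, etc. are invented for illustration): callers only ever see sort_ints(), so the algorithm behind it can be upgraded without touching any call site.

```c
#include <stddef.h>
#include <stdlib.h>

/* Simple backend: fine for small inputs, easy to understand. */
static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

/* Comparator for the qsort backend. */
static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* The stable public interface. Define USE_QSORT when the data
 * outgrows insertion sort; no caller changes. */
void sort_ints(int *a, size_t n)
{
#ifdef USE_QSORT
    qsort(a, n, sizeof *a, cmp_int);
#else
    (void)cmp_int;
    insertion_sort(a, n);
#endif
}
```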

  • ANSI C (Score:2, Informative)

    by bAdministrator ( 815570 ) on Friday February 25, 2005 @05:24PM (#11781784)

    ANSI C: "The macro NULL is defined in <stddef.h> (and other headers) as a null pointer constant."

    The definition of a null pointer constant: "An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant."

    Source: Programming languages - C (ISO/IEC 9899:1999)
  • Re:Indeed (Score:3, Informative)

    by Rew190 ( 138940 ) on Friday February 25, 2005 @05:41PM (#11782037)
    Variables named "i" or "j" are typically used for iterating through loops. I find most programmers tend to stick to this convention, so if I'm reading through someone else's code and glance at an "i" or a "j," I know it's going to be used as a loop or iterator of some sort.

    For example, if you just saw "FIELD_COUNT" by itself, without knowing anything else, would you have a good idea what sort of variable that was? (A static variable.) If you saw "_count"? (Instance variable.) "getCount"? (Accessor method.)

    So in that sense, yes, it is meaningful. You'll pick up on things like that when you start coding larger projects with other programmers.
  • by zephc ( 225327 ) on Friday February 25, 2005 @05:44PM (#11782079)
    Maybe use 'gcc -S sample.c' (is that the right option? don't have GCC in front of me), where sample.c is a short program with the !ptr, and have another file where you use ptr==NULL, and see which produces tighter code. Unless optimizations occur *afterwards*, in which case discard this message :-)
  • Re:NULL not always 0 (Score:3, Informative)

    by Malc ( 1751 ) on Friday February 25, 2005 @05:46PM (#11782106)
    Maybe you've been listening to C++ people? Stroustrup certainly has a chip on his shoulder about NULL:

    5.1.1 Zero

    Zero (0) is an int. Because of standard conversions, 0 can be used as a constant of an integral, floating-point, pointer, or pointer-to-member type. The type of zero will be determined by context. Zero will typically (but not necessarily) be represented by the bit pattern all-zeros of the appropriate size.

    No object is allocated with the address 0. Consequently, 0 acts as a pointer literal, indicating that a pointer doesn't refer to an object.

    In C, it has been popular to define a macro NULL to represent the zero pointer. Because of C++'s tighter type checking, the use of plain 0, rather than any suggested NULL macro, leads to fewer problems. If you feel you must define NULL, use

    const int NULL = 0;

    The const qualifier prevents accidental redefinition of NULL and ensures that NULL can be used where a constant is required.
  • by rjstanford ( 69735 ) on Friday February 25, 2005 @05:56PM (#11782249) Homepage Journal
    LOCAL variable names should be short, and to the point. If you have
    some random integer loop counter, it should probably be called "i".
    Calling it "loop_counter" is non-productive, if there is no chance of it
    being mis-understood.

    That last clause is an important one that often gets neglected. In fact, you should never, ever call a variable loop_counter. That's as bad as pure reverse Hungarian - it tells you how it's used, not what it means.

    I suggest that, for all non-trivial cases (and I'd prefer to see people err verbosely than compactly), you should use descriptive names. Not loop_counter, but maybe something like curRow? It doesn't have to be long, but at least then, as the loop grows over time, someone can understand a piece of code more easily than having to scroll back up to check that you are indeed in the "i" loop. It's even more critical when someone comes along and adds a nested (or containing) loop. Or whatever.

    Same with "tmp". If it's truly temporary, such as:

    int tmp = getFooCount();

    then it should be removed and rewritten as:


    If it's not that temporary, give it a real name. If you insist that it is temporary, then you may have a scoping issue - having variables useful in only part of your function could indicate that your function is doing too much work. If you insist it's truly temporary, scope it down:
    ...
    {
        int foo = getFoo();
        moreCode();
    }
    ...

    At least now you've guaranteed that it is temporary. Better yet, just name it usefully.
  • Easy one (Score:2, Informative)

    by Anonymous Coward on Friday February 25, 2005 @05:57PM (#11782265)
    $ cat one.c
    int main( int argc, char* argv[] ) {
    if( argv == NULL ) return 0; else return 1;
    }
    $ cat two.c
    int main( int argc, char* argv[] ) {
    if( !argv ) return 0; else return 1;
    }
    $ gcc -S one.c
    $ gcc -S two.c
    $ diff one.s two.s
    - .file "one.c"
    + .file "two.c"
    $ gcc -O3 -S one.c
    $ gcc -O3 -S two.c
    $ diff one.s two.s
    - .file "one.c"
    + .file "two.c"
    $ gcc --version
    gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)

    No difference at all...
  • Re:Clear Code (Score:2, Informative)

    by B'Trey ( 111263 ) on Friday February 25, 2005 @05:58PM (#11782271)
    Some experts seem to disagree [] with you.
  • valgrind (Score:4, Informative)

    by cyco/mico ( 178541 ) on Friday February 25, 2005 @06:12PM (#11782431)
    If in doubt, use valgrind and kcachegrind []. One run with callgrind gives you all the information you want:
    • How often are functions called (and branches taken)
    • Which functions take most of the time
    • See the assembler code for each line with a mouse click (no need to guess anymore)
    callgrind/kcachegrind is by far the easiest profiling solution I've ever tried, and it seems to answer more or less all of your questions.
  • by Anonymous Coward on Friday February 25, 2005 @06:17PM (#11782486)
    For C++, use the STL algorithms. That is all.
  • i, j, k, ... (Score:5, Informative)

    by bsd4me ( 759597 ) on Friday February 25, 2005 @06:17PM (#11782487)

    I think that most people forget that the reason that i, j, k, etc. are used for loop counters is that, unless otherwise declared, variables beginning with I through N default to INTEGER in FORTRAN. This convention just carried over as programmers migrated from FORTRAN to other languages and has been passed down through the ages.

  • by swillden ( 191260 ) * <> on Friday February 25, 2005 @06:31PM (#11782640) Homepage Journal

    Althoug the comparision (0 == ptr) might for some twisted reason com out as a non-zero value, you are not guaranteed that (!ptr) will have the same value.

    Wrong []

    Don't give advice... ah, screw it. You get the idea.

  • by r00t ( 33219 ) on Friday February 25, 2005 @06:38PM (#11782697) Journal
    If by "const references" you really mean a C++ reference, OK. If you mean a pointer, though, and you use C, the compiler is prohibited from performing this optimization unless you also use the "restrict" keyword.

    Since you did mention "restrict", it appears that you are working with C. "restrict" is not a C++ keyword.

    BTW, inline-damnit is __attribute__((__always_inline__)) for gcc and __forceinline for Microsoft.
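A small sketch of the restrict point above, assuming C99: without the qualifiers the compiler must assume dst and src might overlap and re-read src conservatively; with them it may hoist loads and vectorize. (The function name is invented.)

```c
#include <stddef.h>

/* The C99 restrict qualifiers promise the caller that dst and src
 * do not alias, which is what licenses the optimization described
 * above. Passing overlapping arrays here would be undefined. */
void scale_noalias(size_t n, double *restrict dst,
                   const double *restrict src, double k)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```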

  • by beelsebob ( 529313 ) on Friday February 25, 2005 @06:50PM (#11782797)
    That's not really the point being made though - the question really is "is it worth spending the next week trying to make optimisations, or coding a better algorithm to do this."

    And while we're at it - n^2 vs 2^n *is* a big deal when working on 1GHz+ systems... If we have 1000 objects, an operation that takes 10 cycles, and an n^2 algorithm, then we get a runtime of 0.01 seconds (10 x 1000 x 1000 cycles); if we have a 2^n algorithm, then we get a runtime that is astronomically large, and no amount of optimizing the code in the loop (even down to one instruction) is going to get us anywhere near the n^2 algorithm.

  • by Anonymous Coward on Friday February 25, 2005 @06:53PM (#11782833)
    Heap sort can be done in place on an array of random data, since it's easy to interpret an array as a binary tree:
    parent(i) = i/2
    children(i) = i*2, i*2+1
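That mapping (1-based: parent i/2, children 2i and 2i+1; with 0-based C arrays: parent (i-1)/2, children 2i+1 and 2i+2) is all an in-place heapsort needs. A hedged sketch, with an invented function name:

```c
#include <stddef.h>

/* Restore the max-heap property for the subtree rooted at start,
 * within a[0..end) -- 0-based indices, children 2i+1 and 2i+2. */
static void sift_down(int *a, size_t start, size_t end)
{
    size_t root = start;
    while (2 * root + 1 < end) {
        size_t child = 2 * root + 1;            /* left child */
        if (child + 1 < end && a[child] < a[child + 1])
            child++;                            /* pick larger child */
        if (a[root] >= a[child])
            return;
        int t = a[root]; a[root] = a[child]; a[child] = t;
        root = child;
    }
}

void heapsort_ints(int *a, size_t n)
{
    if (n < 2)
        return;
    for (size_t i = n / 2; i-- > 0; )           /* heapify in place */
        sift_down(a, i, n);
    for (size_t end = n - 1; end > 0; end--) {  /* pop max repeatedly */
        int t = a[0]; a[0] = a[end]; a[end] = t;
        sift_down(a, 0, end);
    }
}
```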
  • by roman_mir ( 125474 ) on Friday February 25, 2005 @06:56PM (#11782867) Homepage Journal
    I got this job as a contractor 4 years ago now, where the project was developed by over 30 junior developers and one crazy overpaid lady (hey, Julia) who wouldn't let people touch her code, so fragile it was (and it was the main action executor); she would rather fight you for hours than make one change in the code (she left 2 months before the project release). Now, I have never witnessed such a monstrosity of a code base before - the business rules were redefined about once every 2 weeks for 1.5 years straight. You can imagine.

    So, the client decided not to pay the last million of dollars because the performance was total shit. On a weblogic cluster of 2 Sun E45s they could only achieve 12 concurrent transactions per second. So the client decided they really did not want to pay and asked us to make it at least 200 concurrent transactions per second on the same hardware. If I may make a wild guess, I would say the client really did not want to pay the last million, no matter what, so they upped the numbers a bit from what they needed. But anyway.

    Myself and another developer (hi, Paul) spent 1.5 months - removing unnecessary db calls (the app was incremental, every page would ask you more questions that needed to be stored, but the app would store all questions from all pages every time,) cached XML DOM trees instead of reparsing them on every request, removed most of the session object, reduced it from 1Mb to about 8Kb, removed some totally unnecessary and bizarre code (the app still worked,) desynchronized some of the calls with a message queue etc.

    At the end the app was doing 320 and over concurrent transactions per second. The company got their last million.

    The lesson? Build software that is really unoptimized first and then save everyone's ass by optimizing this piece of shit and earn total admiration of the management - you are a miracle worker now.

    The reality? Don't bother trying to optimize code when the business requirements are constantly changing, the management has no idea how to manage an IT dep't, the coders are so newb there is a scent of freshness in the air, and there is a crazy deadline right in front of you. Don't optimize; if the performance becomes an issue, optimize then.

  • Re:Clear Code (Score:3, Informative)

    by eric76 ( 679787 ) on Friday February 25, 2005 @06:57PM (#11782878)
    Because of the way the data is stored.

    It is faster to process the data sequentially than to skip around.

    That is especially true if the data elements are small such as chars. You would end up having to fetch the same data from memory multiple times in the processing of one loop compared to only once in the other loop.
  • by neil.pearce ( 53830 ) on Friday February 25, 2005 @07:03PM (#11782942) Homepage
    Paradox man, you're wrong. You are so wrong.
    Trust me - or not; just PLEASE google for the C++ FAQ, and read what they have to say about NULL and the null pointer...
    Then google for any C/C++ spec you can grab (without having to pay out $200) and check what it has to say...
    Then google for the source code of GCC/TCC (nice and complete) and examine it...
    Then try to explain how "if (ptr)" works using your logic...
    It will enlighten you as to why "NULL" MUST always be defined as 0, not just as a "convention", and why comparing *ANY* pointer to 0 is strictly necessary and GUARANTEED allowable and ABSOLUTELY FUNDAMENTAL to all C/C++ compilers.
    Learn the difference between NULL (always 0) and the null-pointer representation (any bit pattern), and why it matters not whether your system's base address (0) is a valid memory address or not.
    Incidentally, look up the specification for the Motorola 680x0 processor. Memory addresses (bytes) 0 through 7 are NOT valid. They are BY DEFINITION the initial stack pointer and code start addresses. It matters not...
  • Re:Clear Code (Score:5, Informative)

    by GryMor ( 88799 ) on Friday February 25, 2005 @07:39PM (#11783262)
    It's working on a 2d array of data and is presuming that it is ordered row-major, i.e. a[row][col] with N columns.

    This, in memory, is:

    a[0][0], a[0][1], ..., a[0][N-1], a[1][0], a[1][1], ...

    The loop that varies the column index innermost hits those addresses in exactly that sequential order; the loop that varies the row index innermost jumps N elements between consecutive accesses.

    The sequential loop is good for caching, as each cache line stores a large chunk of sequential memory.

    Considering hitting the cache as opposed to hitting main memory is at least 100 times faster, you'll be lucky if the strided loop is only 50 times slower.

    This still presumes data stored in the specified order in memory (which is common for image formats, but not the only way things are done).
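A sketch of the two traversal orders under discussion, assuming a row-major array (function names and sizes are invented); both compute the same sum, but the first touches consecutive addresses while the second strides a whole row per access:

```c
#include <stddef.h>

enum { ROWS = 512, COLS = 512 };

/* Inner loop walks consecutive addresses: cache friendly. */
long sum_row_major(int a[ROWS][COLS])
{
    long s = 0;
    for (size_t r = 0; r < ROWS; r++)
        for (size_t c = 0; c < COLS; c++)
            s += a[r][c];
    return s;
}

/* Inner loop strides COLS ints between accesses: cache hostile. */
long sum_col_major(int a[ROWS][COLS])
{
    long s = 0;
    for (size_t c = 0; c < COLS; c++)
        for (size_t r = 0; r < ROWS; r++)
            s += a[r][c];
    return s;
}
```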
  • Re:Not always. (Score:3, Informative)

    by clarkcox3 ( 194009 ) <> on Friday February 25, 2005 @08:11PM (#11783514) Homepage
    In C, this is because 0 != NULL. NULL = (void*) 0

    Actually, in C, NULL can be (0), just like in C++

    This is because C++ has pointers to members, I believe (not sure, don't use void* pointers).

    Actually, it's because, in C++, there is no implicit conversion from (void*) to any other pointer type; as there is in C

  • by nerdsv650 ( 175205 ) <nerdsd.nerdy1@com> on Friday February 25, 2005 @08:55PM (#11783865) Homepage
    A couple of points.

    First off *any* compiler will make that particular optimization.

    You should only think of instruction level optimization when you know with certainty that it will pay off, for example because you've run your code under a profiler and found the areas where it will actually make a difference. Once you've found the (probably tiny) areas where optimizing actually helps, do whatever it takes, and document your reasons as well as your methods.

    You can always ask your compiler to output assembly and look it over, if you aren't fluent in your proc's assembly you probably shouldn't be trying to out-optimize the compiler anyhow.

    That being said, "if (!ptr)" is legitimate and bears a different connotation from "if (ptr == NULL)", at least in my mind. One is truth, the other is zeroness. In some cases the former is actually the more obvious test. There are also cases where compactness yields more readable code because the whole idea fits in a space easily acquired at a glance, for example, "if (structp && structp->member == VAL)" is natural and obvious to anyone who's been at this for any amount of time.

    All of this, of course, IMO.

  • by Anonymous Coward on Friday February 25, 2005 @09:24PM (#11784058)

    NULL equals (void*)0


    (void*)0 may or may not have the same bitpattern as (int)0 due to implementation... stuff.

    The standard does make one thing clear:
    If null pointers are not represented as the value 0 then they must be explicitly converted to 0 when converting the null pointer to an integer.

    In general, anything converted to (void*) and back again shall retain its original value.

    HOWEVER, I can't find where the standard enforces the idea that the size of a pointer type must be less than or equal to that of int or long. Yet when a pointer type is converted to an arithmetic type, it is treated as an unsigned integer. Thus, the ability to convert pointer types to long - without losing information - is implementation dependent. Hence, the standard allows that object and function pointers could have different representations. If arithmetic and pointer types have different representations then some sort of conversion is done, and it is up to the programmer to understand what is going on.

  • by bonniot ( 633930 ) on Friday February 25, 2005 @09:30PM (#11784091) Homepage Journal
    Oh look, a null pointer... Do you want me to go into an infinite loop or throw an exception right where I'm standing?

    What about using a language where null pointer errors are caught at compile time []? Oh, I guess our compiler [] must have a brain the size of a whole solar system to be able to do that.

  • Re:Not always. (Score:2, Informative)

    by BasilBrush ( 643681 ) on Friday February 25, 2005 @11:00PM (#11784665)
    You're not old enough to have been working for 20 years, I can tell by your use of language. And you certainly haven't been programming commercially for long if at all. Drop the pretence. And pick up that copy of Code Complete, it will help you. Believe it or not I'm giving you a hint that will help you.
  • Amen (Score:3, Informative)

    by Bozdune ( 68800 ) on Friday February 25, 2005 @11:43PM (#11784899)
    Let's not forget how long it took them, either. I worked with some of the Shuttle programmers. I shared an office with one pudgy little 40-something bald guy who wrote about three lines of code per month. He had a big loose-leaf notebook full of all his test cases and his test jigs and his interfaces and his error checks. He worked for another guy who used to hold 1/2 day meetings every two days. In the time I shared a cubicle with him, probably 3 months, he had accomplished a whole lot of nothing.

    As far as how together and structured the Shuttle group was, I remember the day there was a head crash on the 3330 drive that held all their source code. It was like turning over an ant nest, programmers scurrying around the halls screaming, etc. Don't believe everything those people write about how well they were organized and how wonderful everything was.

  • Readability (Score:3, Informative)

    by Kell_pt ( 789485 ) on Saturday February 26, 2005 @01:02AM (#11785306) Homepage
    I must say I don't agree with the author's remarks on readability. I've used
    if ( !ptr )
    throughout my code for ages and have always felt it to be far more readable, and so do my colleagues. It's also faster to type.
    It reads as "if no pointer", which is far more understandable than "if pointer is null". Because pointers aren't really null, they are zero.

    Mostly, it's just a matter of style. IMHO, if you have trouble reading either of these, you're using the wrong language - for your own safety, stay away from Perl as well. ;)
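    For the record, the two spellings are defined to mean exactly the same thing in C - !ptr is true precisely when ptr compares equal to a null pointer - so any difference really is pure style, not codegen. A minimal demonstration:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    static const char *describe(const char *ptr)
    {
        /* Equivalent by definition: !ptr is 1 exactly when ptr == NULL,
           so a compiler emits the same test for either spelling. */
        if (!ptr)
            return "null";
        if (ptr == NULL)        /* never reached: already handled above */
            return "unreachable";
        return ptr;
    }

    int main(void)
    {
        printf("%s\n", describe(NULL));    /* prints "null" */
        printf("%s\n", describe("hello")); /* prints "hello" */
        return 0;
    }
    ```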
  • HLSL (Score:4, Informative)

    by Saville ( 734690 ) on Saturday February 26, 2005 @01:56AM (#11785490)
    As far as I'm concerned, compilers are better than 99% of the programmers out there. Just write clear code and let the compiler do its trick. However, there are a couple of cases I can think of where things aren't automatically optimized.

    It's not really a coding trick like an XOR swap, but most compilers don't yet seem to fully unroll parallel loops into good SIMD instructions or multiple threads.
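    As an illustration of the kind of loop in question, here is a hedged sketch in C; whether it actually becomes SIMD code depends entirely on the compiler, the flags (e.g. -O3 with GCC), and the target:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    /* A trivially parallel loop: iterations are independent, so a
       vectorizing compiler *may* unroll it into SIMD instructions.
       Nothing in the C source guarantees that - check the assembly. */
    static void add_arrays(float *restrict dst, const float *restrict a,
                           const float *restrict b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    int main(void)
    {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, d[4];
        add_arrays(d, a, b, 4);
        printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);  /* prints "11 22 33 44" */
        return 0;
    }
    ```

    The restrict qualifiers matter here: without them the compiler must assume the arrays might overlap, which can block vectorization.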

    The only time I've needed to bother to look at assembly output in recent years (other than debugging a release mode program) is when writing HLSL shaders. HLSL is the high-level shading language (C-like) for shaders that is part of Direct3D 9. HLSL can be compiled to SM1, SM2, or SM3 assembly.

    With pixel shader 2.0 you've only got 64 instruction slots, and some important instructions like POW (power), NRM (normalize), and LRP (interpolate) take multiple slots. 64 slots is not enough for a modern shader. I curse ATI for setting the bar so low. ;)

    There are two flaws I've found with the Dec 04 HLSL compiler in the DirectX SDK. Sometimes it will not automatically detect a dot-product opportunity. I had some colour code to convert to black and white in a shader and wrote it as y = colour.r*0.299 + colour.g*0.587 + colour.b*0.114; as I thought that was the clearest way to write it. Under certain circumstances the compiler didn't want to convert it to a single dot instruction, so I had to write it as y = dot( colour.rgb, half3(.299h, .587h, .114h)); I'm not sure if that bug still exists in the current compiler release or not.

    Another: often a value in the range of -1 to +1 is passed in as a colour, which means it must be packed into the 0-1 range. To get it back you've got to double it and subtract 1.
    a = p*2-1; gets converted into a single MAD instruction, which takes one slot.
    a = (p-0.5)*2; gets converted into an ADD and then a MUL.
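    The same pack/unpack pair written out in C (the shader math is identical); the point is that the two unpack spellings both compute 2p - 1, so the instruction-count difference is purely a pattern-matching limitation in the shader compiler, not a semantic one:

    ```c
    #include <stdio.h>
    #include <assert.h>

    /* Pack a value from [-1, 1] into [0, 1], as done when smuggling
       signed data through a colour channel; the unpacks reverse it. */
    static float pack(float v)        { return v * 0.5f + 0.5f; }
    static float unpack_mad(float p)  { return p * 2.0f - 1.0f; }   /* one MAD  */
    static float unpack_two(float p)  { return (p - 0.5f) * 2.0f; } /* ADD, MUL */

    int main(void)
    {
        /* Both unpacks compute 2p - 1; for these exactly-representable
           inputs the results agree bit-for-bit. */
        for (float v = -1.0f; v <= 1.0f; v += 0.25f)
            assert(unpack_mad(pack(v)) == unpack_two(pack(v)));
        printf("ok\n");
        return 0;
    }
    ```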

    Also conventional wisdom says you've got to write assembly to get maximum performance out of pixel shader 1.1 as it is basically just eight instruction slots. I don't have any snippets to verify this though.

    I think this thread demonstrates that either compilers are mature enough that you don't need any code tricks to help them do their job, or /. posters just aren't aware of the shortcomings of compilers (see the first sentence of this post) and would rather post obvious advice than not post at all. :)
  • In most cases (Score:3, Informative)

    by Z00L00K ( 682162 ) on Saturday February 26, 2005 @02:56AM (#11785666) Homepage
    I try to write code that is simple to read and avoid complex statements. This is often easily optimized by the compiler and, more importantly, readable for whoever is doing maintenance on the code.

    For the rare performance-critical parts it is, however, worth the effort to try various constructions to get the best performance out of the code. The most problematic issue is identifying the hotspots in the code and figuring out which variables should be declared as 'register' and which shouldn't. Ordering of statements is also important in order to match the various performance improvements the CPU can offer. One very good document on this is actually found at AMD [].
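    On the 'register' point, a brief hedged sketch (the function and values here are made up for illustration); with today's compilers the keyword is mostly documentation, so profiling remains the only reliable guide:

    ```c
    #include <stdio.h>

    /* 'register' is only a hint; most modern compilers do their own
       register allocation and quietly ignore it. Its one hard effect is
       that taking the variable's address becomes a compile error. */
    static long checksum(const unsigned char *buf, long len)
    {
        register long sum = 0;
        register const unsigned char *p = buf;
        while (len-- > 0)
            sum += *p++;
        return sum;
    }

    int main(void)
    {
        /* 'a' + 'b' + 'c' = 97 + 98 + 99 */
        printf("%ld\n", checksum((const unsigned char *)"abc", 3));  /* prints "294" */
        return 0;
    }
    ```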

    One code construct that I use and have found very useful is to place the matching '{' and '}' in the same column in the code. This makes it easier to find where a block begins and ends.


    if (a == b)
    {
        ...
    }
    In my opinion this produces code with better readability than the constructs placing the '{' on the same line as the if-statement, where it is much easier to miss.
  • by R1ch4rd ( 710276 ) on Saturday February 26, 2005 @06:21AM (#11786109)
    That's 99% true, but then there are those 1% of applications where it does matter. I'm writing real-time computer vision applications, and half a millisecond still counts. And you have a piece of code running thousands of times which MUST be as fast as possible. So it's a valid question after all. Richard
  • by bonniot ( 633930 ) on Saturday February 26, 2005 @06:24AM (#11786114) Homepage Journal
    Did you check the links? I'm actually pointing to a specific language where null pointer (OK, reference) errors are caught at compile time. It *is* possible [].
  • by flok ( 24996 ) <> on Saturday February 26, 2005 @08:55AM (#11786360) Homepage Journal
    Why? There are platforms (maybe there were, not sure) where NULL is actually not the same as 0.
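    A supporting note: even on such a platform, the standard guarantees that the *constant* 0 in pointer context converts to a null pointer, whatever bit pattern null has at run time, so both spellings stay correct there. A minimal sketch:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        int *p = 0;   /* the *constant* 0 in pointer context always yields
                         a null pointer, whatever its runtime bit pattern */

        /* All three tests are therefore equivalent on every conforming
           implementation, even one where null is not all-bits-zero. */
        printf("%d %d %d\n", p == NULL, p == 0, !p);  /* prints "1 1 1" */
        return 0;
    }
    ```

    What is *not* guaranteed is that zeroing a pointer's bytes with memset produces a null pointer; that only holds where null happens to be all-bits-zero, which is true on mainstream systems today but not promised by the standard.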
  • by bonniot ( 633930 ) on Saturday February 26, 2005 @10:48AM (#11786772) Homepage Journal
    Correct. Although ideally, if it tells you the variable may be null, then there exists some situation where the variable *will* be null. So you'd better handle that case. Clearly, the problem of reporting an issue exactly in the cases where it will happen is undecidable, so a safe compiler has to be a bit restrictive. In practice, it seems that most cases where it reports a spurious problem are those that depend on the way clients call your code. And in that case, even if it cannot happen now, it could happen later because of a client change, so it's a good idea to handle that case now anyway.