Programming Technology

Optimizations - Programmer vs. Compiler?

Saravana Kannan asks: "I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former snippet is that they believe it would result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize that particular snippet. IMHO the latter snippet is clearer than the former, and I would use it in my code if I knew for sure that the compiler would optimize it and produce machine code equivalent to the former. The previous example was easy. What about code that is more complex? Now that compilers have matured over the years and have had many improvements, I ask the Slashdot crowd what they believe the compiler can be trusted to optimize and what must be hand-optimized."
"How would your answer differ (in terms of the level of trust on the compiler) if I'm talking about compilers for Desktops vs. Embedded systems? Compilers for which of the following platforms do you think is more optimized at present - Desktops (because is more commonly used) or Embedded systems (because of need for maximum optimization)? Would be better if you could stick to free (as in beer) and Open Source compilers. Give examples of code optimizations that you think the compiler can/can't be trusted to do."
This discussion has been archived. No new comments can be posted.

  • Clear Code (Score:5, Insightful)

    by elysian1 ( 533581 ) on Friday February 25, 2005 @04:48PM (#11781211)
    I think writing clear and easy to understand code is more important in the long run, especially if other people will have to look at it.
  • Re:Clear Code (Score:5, Insightful)

    by normal_guy ( 676813 ) on Friday February 25, 2005 @04:50PM (#11781228)
    That should be "especially _since_ other people will have to look at it."
  • by American AC in Paris ( 230456 ) * on Friday February 25, 2005 @04:50PM (#11781229) Homepage
    This is marginally away from the submitter's question, but it warrants attention:

    The sad truth is that, as far as optimization goes, this isn't where attention is most needed.

    Before we start worrying about things like saving two cycles here and there, we need to start teaching people how to select the proper algorithm for the task at hand.

    There are too many programmers who spend hours turning their code into unreadable mush for the sake of squeezing a few milliseconds out of a loop that runs on the order of O(n!) or O(2^n).

    For 99% of the coders out there, all that needs to be known about code optimization is: pick the right algorithms! Couple this with readable code, and you'll have a program that runs several thousand times faster than it'll ever need to and is easy to maintain--and that's probably all you'll ever need.
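
    A minimal illustration of the algorithm-first point (the function names are mine, not the poster's): the same value computed with an O(2^n) algorithm and an O(n) one; no amount of micro-tuning the first will ever catch the second.

    #include <stdint.h>

    /* O(2^n): recomputes the same subproblems exponentially many times. */
    uint64_t fib_slow(unsigned n) {
        return n < 2 ? n : fib_slow(n - 1) + fib_slow(n - 2);
    }

    /* O(n): same answer; a better algorithm beats tuning the old one. */
    uint64_t fib_fast(unsigned n) {
        uint64_t a = 0, b = 1;
        while (n--) {
            uint64_t t = a + b;
            a = b;
            b = t;
        }
        return a;
    }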

  • Re:Clear Code (Score:5, Insightful)

    by daveho ( 235543 ) on Friday February 25, 2005 @04:52PM (#11781262)
    I agree 100%. Write code that is easy to understand and modify, then optimize it, but only after you have profiled it to find out where optimization will actually matter.
  • NULL not always 0 (Score:3, Insightful)

    by leomekenkamp ( 566309 ) on Friday February 25, 2005 @04:53PM (#11781266)

    Aren't there machines out there where the C compiler specifically defines NULL as a value that is not equal to 0? I recall reading that somewhere, and that was my reason for using ==NULL instead of !. My C days are long gone though...

  • Tradeoffs (Score:5, Insightful)

    by Black Parrot ( 19622 ) on Friday February 25, 2005 @04:53PM (#11781273)


    Hard to measure, but what is the tradeoff between increased speed and increased readability (which is a prerequisite for correctness and maintainability)? And if you can estimate that tradeoff, which is more important to the goals of your application?

    As a side note, it is far more important to make sure you are using efficient algorithms and data structures than to make minor local optimizations. I've seen programmers use bizarre local optimization tricks in a module that ran in exponential time rather than log time.

  • Re:Bad example (Score:3, Insightful)

    by hpa ( 7948 ) on Friday February 25, 2005 @04:53PM (#11781274) Homepage
    Bullshit.

    Read the C standard about the definition of a null pointer constant.
  • by El Cubano ( 631386 ) on Friday February 25, 2005 @04:53PM (#11781277)

    What about code that is more complex? Now that compilers have matured over years and have had many improvements, I ask the Slashdot crowd, what they believe the compiler can be trusted to optimize and what must be hand optimized?

    Programmers cost lots more per hour than computer time. Let the compiler optimize and let the programmers concentrate on developing solid, maintainable code.

    If you make code too clever in an effort to pre-optimize, you end up with code that other people have difficulty understanding. This leads to lower-quality code as it evolves if the people that follow you are not as savvy.

    Not only that, but the vast majority of code written today is UI-centric or I/O bound. If you want real optimization, design a hard drive/controller combo that gets you 1 GBps off the physical platter (and at a price that consumers can afford).

  • by slavemowgli ( 585321 ) * on Friday February 25, 2005 @04:53PM (#11781278) Homepage
    The most important optimization is still the optimization of the algorithms you use. Except under the most extreme circumstances, it doesn't really matter anymore whether the compiler might generate code that takes two cycles more than the optimal solution on today's CPUs; instead of attempting to work around the compiler's perceived (or maybe real) weaknesses, it's probably much better to review your code on a semantic level and see if you can speed things up by doing them differently.

    The only exception I can think of is when you're doing standard stuff where the best (general) solution is well-known, like sorting; however, in those cases, you shouldn't reinvent the wheel, anyway, but instead use a (presumably already highly-optimized) library.
  • $.02 (Score:5, Insightful)

    by MagicM ( 85041 ) on Friday February 25, 2005 @04:54PM (#11781296)
    1) Code for maintainability
    2) Profile your code
    3) Optimize the bottlenecks

    That said, (!ptr) should be just as maintainable as (ptr == NULL), simply because it is a frequently used 'dialect'. As long as these 'shortcuts' are used throughout the entire codebase, they should be familiar enough that they don't get in the way of maintainability.
  • micro optimization (Score:5, Insightful)

    by fred fleenblat ( 463628 ) on Friday February 25, 2005 @04:54PM (#11781298) Homepage
    What you're talking about is micro-optimization.
    Compilers are pretty good at that, and you should let them do their job.

    Programmers should optimize at a higher level: by their choice of algorithms, organizing the program so that memory access is cache-friendly, making sure various objects don't get destroyed and re-created unnecessarily, that sort of thing.
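
    A small sketch of the "don't destroy and re-create unnecessarily" point (hypothetical function names, not from the parent post): hoisting one allocation out of a loop instead of paying malloc/free on every iteration.

    #include <stdlib.h>
    #include <string.h>

    /* Wasteful: a fresh allocation and free on every iteration. */
    void process_all_slow(const char **lines, size_t n, size_t maxlen) {
        for (size_t i = 0; i < n; i++) {
            char *buf = malloc(maxlen);
            if (!buf) continue;
            strncpy(buf, lines[i], maxlen - 1);
            buf[maxlen - 1] = '\0';
            /* ... work on buf ... */
            free(buf);
        }
    }

    /* Friendlier: one buffer, reused for the whole loop. */
    void process_all_fast(const char **lines, size_t n, size_t maxlen) {
        char *buf = malloc(maxlen);
        if (!buf) return;
        for (size_t i = 0; i < n; i++) {
            strncpy(buf, lines[i], maxlen - 1);
            buf[maxlen - 1] = '\0';
            /* ... work on buf ... */
        }
        free(buf);
    }
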
  • by smug_lisp_weenie ( 824771 ) * <cbarski.4503440@bloglines.com> on Friday February 25, 2005 @04:55PM (#11781303) Homepage
    ...are doomed to repeat the biggest trap in computer programming over and over again:

    "Premature optimization is the root of all evil"

    If there's only one rule in computer programming a person ever learns, "Hoare's dictum" is the one I would choose.

    Almost all modern languages have extensive libraries available to handle common programming tasks and can handle the vast majority of optimizations you speak of automatically. This means that 99.99% of the time you shouldn't be thinking about optimizations at all. Unless you're John Carmack, or you're writing a new compiler from scratch (and perhaps you are), or involved in a handful of other activities, you're making a big, big mistake if you're spending any time worrying about these things. There are far more important things to worry about, such as writing code that can be understood by others, can easily be unit tested, etc.

    A few years ago I used to write C/C++/asm code extensively and used to be obsessed with performance and optimization. Then, one day, I had an epiphany and started writing code that is about 10 times slower than my old code (different in computer language and style) and infinitely easier to understand and expand. The only time I optimize now is at the very, very end of development, when I have solid profiler results from the final product that show noticeable delays for the end user, and this only happens rarely.

    Of course, this is just my own personal experience and others may see things differently.
  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Friday February 25, 2005 @04:55PM (#11781306) Journal

    With regard to your example, I can't imagine any modern compiler wouldn't treat the two as equivalent.

    However, in your example, I actually prefer "if (!ptr)" to "if (ptr == NULL)", for two reasons. First, the latter is more error-prone, because you can accidentally end up with "if (ptr = NULL)". One common solution to avoid that problem is to write "if (NULL == ptr)", but that just doesn't read well to me. Another is to turn on warnings and let your compiler point out code like that -- but that assumes a decent compiler.

    The second, and more important, reason is that to anyone who's been writing C for a while, the compact representation is actually clearer because it's an instantly-recognizable idiom. To me, parsing the "ptr == NULL" format requires a few microseconds of thought to figure out what you're doing. "!ptr" requires none. There are a number of common idioms in C that are strange-looking at first, but soon become just another part of your programming vocabulary. IMO, if you're writing code in a given language, you should write it in the style that is most comfortable to other programmers in that language. I think proper use of idiomatic expressions *enhances* maintainability. Don't try to write Pascal in C, or Java in C++, or COBOL in, well, anything, but that's a separate issue :-)

    Oh, and my answer to your more general question about whether or not you should try to write code that is easy for the compiler... no. Don't do that. Write code that is clear and readable to programmers and let the compiler do what it does. If profiling shows that a particular piece of code is too slow, then figure out how to optimize it, whether by tailoring the code, dropping down to assembler, or whatever. But not before.
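
    A sketch of the pitfall and the two defenses discussed above (illustrative only; the function name is hypothetical):

    #include <stddef.h>
    #include <stdio.h>

    void handle(const char *ptr) {
        /* The classic typo: "if (ptr = NULL)" ASSIGNS NULL to ptr and then
         * tests the result, so the branch is never taken.  "gcc -Wall"
         * flags it with a "suggest parentheses" warning. */

        if (NULL == ptr)    /* "Yoda" order: the same typo fails to compile */
            puts("null, spelled out");

        if (!ptr)           /* the idiom the parent prefers: nothing to mistype */
            puts("null, idiomatic");
    }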

  • by Anonymous Coward on Friday February 25, 2005 @04:55PM (#11781316)
    I second that.

    Optimisation at such a low level (especially without profiler evidence to prove it) is often a complete waste of time when the remainder of the code is slow due to crappy algorithm or structure choices.

    ...I remember a guy I worked with wrote a "faster" atol-type function. His had less code and did much less. I suggested we profile it to demonstrate his coding prowess. Of course, his executed slower than the shipped crt version... his suggestion of taking the crt version and "hacking out the junk" amused the rest of us for a while hehe (Lee, you know who you are)

  • Re:Clear Code (Score:1, Insightful)

    by Anonymous Coward on Friday February 25, 2005 @04:56PM (#11781321)
    Other people don't have to look at it if it works. If it doesn't work, you'd better write new code to replace it. Modifying legacy code is always a security risk.
  • by SamSeaborn ( 724276 ) on Friday February 25, 2005 @04:56PM (#11781323)
    "Programs should be written for people to read, and only incidentally for machines to execute."
    - Structure and Interpretation of Computer Programs [tinyurl.com]
  • by flynt ( 248848 ) on Friday February 25, 2005 @05:01PM (#11781414)
    But this would require people to actually get computer science degrees, or have enough self-motivation to read books on algorithms and do the exercises. For most, that's too much to ask, since they cannot see how to apply the theory they learn in school to practice. The ones that can apply the theory are the good programmers. The ones that can't, or never learned the theory in the first place, probably aren't.
  • by Felonious Monk ( 784998 ) on Friday February 25, 2005 @05:01PM (#11781421)
    A good point, but code for embedded devices, or any code that has to interface with real-time physical processes, is really a different ball game.
  • by HeghmoH ( 13204 ) on Friday February 25, 2005 @05:02PM (#11781435) Homepage Journal
    Putting the value on the left side prevents == VS = mixups

    So does using -W -Wall -Werror, and that lets you write your statements naturally and protects you when comparing two variables.

    Relying on NULL #define-ition to be compatible with all pointer types is risky.

    Relying on behavior that is explicitly specified in the ISO/ANSI standard is never risky.
  • Indeed (Score:2, Insightful)

    by Man in Spandex ( 775950 ) <prsn DOT kev AT gmail DOT com> on Friday February 25, 2005 @05:04PM (#11781473)
    Short variable names indeed. Some book authors make me laugh.

    A few programming books that I've used all use the infamous i counter name in their for loops, and then they turn around and say that you have to give variables names that make sense - and then you see the 'for (int i...' again.
  • Stupid question (Score:2, Insightful)

    by PrismaticBooger ( 103265 ) on Friday February 25, 2005 @05:04PM (#11781476) Homepage
    I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former code snippet is because they believe it would result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize the particular code snippet.
    That's simply inane. Why don't you check the assembly your compiler generates? If you're really up for shits and giggles, compare it to a C compiler from 10 years ago.
    IMHO the latter code snippet is clearer than the former, and I would use it in my code if I know for sure that the compiler will optimize it and produce machine code equivalent to the former code snippet.
    So why are you asking here? Check what your compiler generates. Incidentally, I find the former more readable. While you might be under the illusion that people do use it as an optimization technique, many simply find it easier to read and write. It's a widely accepted and understood idiom for checking pointer validity. And in C++, it has the benefit of being able to look the same whether ptr is a smart pointer or a raw pointer.
    The previous example was easy. What about code that is more complex? Now that compilers have matured over years and have had many improvements, I ask the Slashdot crowd, what they believe the compiler can be trusted to optimize and what must be hand optimized?
    Write readable code. Ask a profiler what you need to optimize.
  • by jmcmunn ( 307798 ) on Friday February 25, 2005 @05:04PM (#11781484)

    I think that the more important question is "Should I bother to hand optimize my code at all?" since as you pointed out we don't really know how the compiler is going to optimize everything. It could take your perfectly optimized code and ruin it completely, thus wasting all of the time you spent optimizing.

    Personally, I try to write code that is easily readable by myself and others. If it isn't readable by someone in the future, it does no good IMHO. I say write the code how it is easy to read, and let the speed of modern processors, and the advancement of compilers do the hard work.

    Now, of course, I don't mean you should write terribly slow algorithms just to be neat and tidy; you should still take the time to think of a good/clean/fast snippet of code as well.
  • by HalWasRight ( 857007 ) on Friday February 25, 2005 @05:06PM (#11781503) Journal
    • "Premature optimization is the root of all evil" -- C.A.R. Hoare

      "This mission is too important to allow you to jeopardize it." -- HAL

    Seriously, why would you waste your time obfuscating your code when you don't have to? Unless you know through profiling that a particular piece of statement-level code is a problem, you are shooting yourself in the foot.

    This isn't to say that when making architecture level decisions that you shouldn't optimize. O(N^2) is bad, Um'Kay? O(N) is alright for small N, but O(log N) is better when you know you'll have a significant N. That's the stuff a compiler can't do for you today.

    Once you've profiled, and you know something is critical and can be done better and matters, then start obfuscating. There is a lot you can do in C to optimize, especially with DSP codes, so resorting to ASM should only be done for the most extreme cases.
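
    A throwaway illustration of the O(N) vs. O(log N) point (my names; assumes the array is already sorted, which is exactly the kind of architectural decision a compiler can't make for you):

    #include <stddef.h>

    /* O(N): fine for small N, painful once N gets significant. */
    int find_linear(const int *a, size_t n, int key) {
        for (size_t i = 0; i < n; i++)
            if (a[i] == key)
                return (int)i;
        return -1;
    }

    /* O(log N): binary search over a sorted array. */
    int find_binary(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;          /* search the half-open range [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < key)
                lo = mid + 1;
            else if (a[mid] > key)
                hi = mid;
            else
                return (int)mid;
        }
        return -1;
    }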

  • Re:Huh (Score:1, Insightful)

    by slashjames ( 789070 ) on Friday February 25, 2005 @05:06PM (#11781514)
    if (ptr==NULL) is better in my opinion. There is no guarantee that NULL == 0 for all platforms!
  • by Anonymous Coward on Friday February 25, 2005 @05:07PM (#11781535)
    Yeah, well, programming those old 8- and 16-bit CPUs in C has always been a bit of an exercise in masochism. Take cc65, the C compiler for the 6510, for example. The only reason that thing produces working code is that it emulates a kind-of 8/16-bit machine through scads of subroutines which implement a sort of stack machine, apart from the memory store and load operations and basic things like that. Actual code produced by it looks like piles upon piles of JSRs to the library routines.

    Then again, I wouldn't want to have to construct an optimizing compiler for an 8-bit, "one accumulator and 2 index registers" load-store architecture either...
  • Re:Clear Code (Score:2, Insightful)

    by SIGPUNKT ( 853627 ) on Friday February 25, 2005 @05:08PM (#11781543)
    Amen! If you're not already sweating optimization (i.e., you've got some supremely high-performance code), then it's better to write straightforward code. Especially with C, which has pretty mature compilers.

    Another thing to consider is that compilers for a modern RISC architecture have pretty intense optimization built in just to handle instruction scheduling (re-ordering instructions to avoid pipeline stalls, etc.), so any trivial optimizations you might make would be "lost in the noise" anyway.

    That said, the big optimizations will always be worthwhile: caching results so you don't have to read from a file/database again, using lazy initialization to avoid populating data structures you may not use, validating inputs so you don't get halfway through an expensive operation and then have to roll back the transaction and throw an error, etc. But moving loop invariants? Maybe in a new language with an immature compiler, or a scripting language (just how efficient is PHP, anyway? Python?), but any modern compiler will make that irrelevant.
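
    For reference, the "moving loop invariants" transformation being discussed looks like this; the hoisted version is only a sketch of what the optimizer already does on its own, not something worth writing by hand:

    /* As written: scale * offset never changes inside the loop. */
    void scale_all(double *v, int n, double scale, double offset) {
        for (int i = 0; i < n; i++)
            v[i] = v[i] * (scale * offset);
    }

    /* What loop-invariant code motion effectively turns it into. */
    void scale_all_hoisted(double *v, int n, double scale, double offset) {
        const double k = scale * offset;
        for (int i = 0; i < n; i++)
            v[i] = v[i] * k;
    }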

  • Re:Clear Code (Score:3, Insightful)

    by pz ( 113803 ) on Friday February 25, 2005 @05:08PM (#11781549) Journal
    This is absolutely true. Even if you are the only programmer who will ever look at the code. 10 years from now, when you're called on to fix a bug in something you wrote, you'll be extra glad that you took the time to write clearly, and comment liberally. Anyone else who comes across your code will thank you as well. And I'm speaking with nearly 30 years' experience as a programmer (and two CS degrees from MIT).

    In particular, unless you have very specific efficiency needs, modern CPUs are more than up to the task for nearly anything we can think of these days, further reducing the need to trade optimization against clarity. That said, there still remain applications which are CPU-bound. In cases where hand optimization makes a difference, I usually first write a clear, general-purpose version of the code to make sure it works correctly. Then I'll special-case highly optimized versions where all bets are off for readability and maintainability, but will retain the clear version for the general-purpose case. This does two things: first, it provides a soft intro to the algorithm in unoptimized fashion, aiding future maintenance; second, it provides a benchmark against which the performance and correctness of the optimized code can be tested. If code size becomes a serious issue (for example in embedded applications), the clearly written reference code gets put in a comment above the highly optimized code. But in any case, the clear, correct version is retained.

    The importance of clearly written code (and the process of writing code clearly) is difficult to overstate.
  • by LakeSolon ( 699033 ) on Friday February 25, 2005 @05:09PM (#11781554) Homepage
    You do realize slashdot is written in perl, yes? Or have you not noticed the contents of the URL - your own comment, for example: http://ask.slashdot.org/comments.pl?sid=140256&cid=11781397

    ~Lake
  • Re:Huh (Score:5, Insightful)

    by DunbarTheInept ( 764 ) on Friday February 25, 2005 @05:11PM (#11781584) Homepage
    Not true. Many CPUs have a unary jump-if-zero or jump-if-nonzero operation. Thus the comparison step can be bypassed, since you know you're comparing to zero.

    However, any compiler worth anything should find that and optimize it very easily in the case where you're comparing to a constant that evaluates to zero.

  • by jdunn14 ( 455930 ) <jdunn&iguanaworks,net> on Friday February 25, 2005 @05:13PM (#11781607) Homepage
    I agree with you to a point. Do not try to squeeze a couple cycles here and there. Pick the right algorithm, but realize that n vs. n^2 vs. 2^n is not a big deal in 99% of applications. Code it so it works first. Make it go. Then, if it sucks performance-wise, replace those slow algorithms. I've seen too many people, myself included, who try to ensure that they've picked the perfect algorithm for the job, but take 4 times longer to design and write the code. Write the damn thing first. Keep it readable and maintainable first.

    The worst people are those college students completing early algorithms classes. They've just been shown the power of the algorithms, and now they feel that everything must use them. Realistically, if you're working with 10 objects, 100 objects, 1000 objects, n^2 often isn't a big deal on a 1 GHz+ processor. Learn the tools called profilers that enable you to find where the slow points are, and optimize after it basically works. Of course, the original design does need to be reasonably modular to allow you to replace algorithms as needed.

    Anyway, just my 2 cents after having not finished some projects because I got too caught up in the algorithm and speed details that on reflection were probably not necessary.
  • Re:Huh (Score:1, Insightful)

    by Anonymous Coward on Friday February 25, 2005 @05:13PM (#11781612)
    That's no kind of 6502 code I have ever seen. I think you meant:

    LDA ptr
    bne $1
    lda ptr+1
    bne $1
  • Re:Clear Code (Score:1, Insightful)

    by Anonymous Coward on Friday February 25, 2005 @05:14PM (#11781630)
    Now... back to my realtime system... gotta make those blade servers smoke!

    It sounds unlikely that those blade servers are a realtime system, and it sounds even more unlikely you even know what a real time system is. Hint - blade servers almost certainly have far too many components with uncertain timing characteristics to be used in a real time system.

    I think writing code that executes SO FAST would be useful only in real time systems and large servers.

    It's perhaps even more valuable in high-volume low-cost systems. A DVD player (a very soft real-time system) that can get by with 1 microcontroller instead of 2 is worth tens to hundreds of millions of dollars to DVD manufacturers.

  • by Cthefuture ( 665326 ) on Friday February 25, 2005 @05:15PM (#11781637)
    I have seen this syntax used sometimes. Personally, I find it difficult to read. The problem is that you have reversed the logic in a LISP-like way. There is a reason LISP is not mainstream, think about it.

    IMHO, a good programming language is an extension of your thought processes. No current language is particularly great, but some are better than others because they work more like the way we think. This is a huge part of Object-Oriented programming; its purpose is mostly grouping and categorizing things. That's what humans do: group and categorize. Note that I'm not saying OOP is the answer to everything; I believe all programming idioms have their place. However, there are good reasons why OOP got popular.

    When I'm working with a particular variable, in my head I'm thinking "I need to check this variable against NULL" (ie. variable == NULL). I absolutely do not think "NULL, what kinds of things are NULL." That would be backwards.

    Anyway, back to the optimization thing. I use things like if (!ptr) all the time but not for optimization purposes. People use that test so often that I don't think anyone thinks it's confusing. Sometimes I will use the more verbose test if the code is particularly complex but the thought of it being less optimal never entered my mind because even if it were slower, it would be such a small difference that it wouldn't matter.

    Too often I see people "optimizing" code that doesn't need to be optimized, because there are other places in the code that are much, much slower, making the optimization such a small benefit that there is no reason to do it in the first place.

    On the other side of that, I see people ignoring optimization thinking that if they need to make it faster they can worry about that later. Then after 10,000 lines of code they realize that the system is too slow and there is nothing they can do about it because of bad (slow) design decisions made throughout the process.
  • by lgw ( 121541 ) on Friday February 25, 2005 @05:15PM (#11781645) Journal
    Wasn't that long ago that every other guy's homegrown 3D engine (software rendering, mind you, this was the 100 MHz Pentium era) had an ultra-optimized version of bubblesort doing the depth sorting of polygons in a painter's algorithm type affair.

    I certainly hope those people are better educated these days.


    It's almost impossible to beat an optimised assembly bubble-sort when sorting small data sets (up to hundreds, sometimes even thousands). It takes a while for better algorithmic performance to overcome a good static efficiency multiplier.

    It's becoming less of an issue as CPU instruction pipelines get larger (and on-chip cache gets faster), but at that time fitting your entire algorithm into the instruction pipeline could mean execution 10 to 30 times as fast.
  • Language paradigms (Score:3, Insightful)

    by alexo ( 9335 ) on Friday February 25, 2005 @05:16PM (#11781662) Journal
    > I have been coding in C for a while (10 yrs or so) and tend to use short code snippets.
    > As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'.
    > The reason someone might use the former code snippet is because they believe it would result
    > in smaller machine code if the compiler does not do optimizations or is not smart enough
    > to optimize the particular code snippet.


    No programmer believes that.
    In C, NULL is #define-d to 0 and the "!" operator also compares against zero, so every compiler should generate exactly the same code for both.

    > IMHO the latter code snippet is clearer than the former, and I would use it in my code

    Actually I prefer to write (and read) the former and I do find it clearer, mostly because it is idiomatic in C et al.

    Another good reason is that the former works better in C++, because it enables you to substitute "smart" objects for plain pointers and use them in a more natural way (especially in templates).

    (Aside: most platforms that have C compilers also have decent C++ compilers)

    > if I know for sure that the compiler will optimize it and produce machine code equivalent to the former code snippet.

    See above. There is nothing to optimize.
  • Re:Clear Code (Score:3, Insightful)

    by Anonymous Coward on Friday February 25, 2005 @05:18PM (#11781683)
    Naturally. However, the example is retarded. I use the simpler form precisely because it's clearer and more expressive.

    "if (!ptr)" translates perfectly clear into english as "if no (valid) pointer" while "if (ptr==NULL)" involves some spurious special case value that I need to spend extra tinkering with.

    It's like comparing booleans with "if (foo==true)" instead of "if (foo)". If that's better why not go all the way and write "if (((...((foo==true)==true)==true)...==true)==true)" ? For extra clarity you should probably make a recursive function out of it.
  • by Chemisor ( 97276 ) on Friday February 25, 2005 @05:22PM (#11781756)
    > Every programmer worth his/her salt knows that
    > source code is self documenting...

    And it's true too. Although comments are indeed a good thing, writing code that does not require them is a much better one. If your code needs comments, it's probably too complex for continued maintenance.
  • by soft_guy ( 534437 ) on Friday February 25, 2005 @05:23PM (#11781764)
    Bullshit. Some basic checks on performance are always appropriate as part of your debugging. For example, on MacOS X, I recommend you at least do two things in your app:

    1. Run top and look at the amount of CPU usage your app has during different parts of its operation. It should not, for example, run at 99% CPU usage while idle.
    2. Run QuartzDebug to make sure you aren't doing gratuitous amounts of extra drawing. Examples: redrawing more often than necessary, redrawing more area than necessary.

    And yes, for the average application, I still care about these things.

    If certain operations seem to be slow, run an optimization tool and see what "low hanging fruit" you can address.

    I've worked on several professional applications and while some of them are "weird", some level of optimization has always been important.
  • by DunbarTheInept ( 764 ) on Friday February 25, 2005 @05:24PM (#11781786) Homepage
    I agree that you should write the more clear form, and damn the optimization. But I disagree that the second form is the more clear one. The first one reads as "if not pointer", which very concisely and completely conveys the meaning that is intended, which is "if this thing isn't really a pointer to anything."

    The problem comes from the fact that some functions that return integers do so in a way that has the inverse of the intuitive boolean interpretation (zero means true). One example is strcmp(); I'd much rather see if( strcmp(s1,s2) == 0 ) than if( ! strcmp(s1,s2) ), since the boolean version has 100% inverted meaning from what it looks like. System calls (man section 2) typically have the same problem. It's not that the calls themselves are bad (they have good reasons to return zero for success - because they have more than one kind of failure), but the people using them should never have gotten into the habit of using inverted boolean symbology to interpret them in their code.

    If an integer doesn't behave like a boolean, then just treat it as an integer. Don't take advantage of the loose typing of C to treat it like a boolean that means the opposite of what it means.
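
    The strcmp() point in code, for anyone skimming (both forms are legal and common; the parent's argument is that the first reads truer to what strcmp actually returns):

    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        /* strcmp returns 0 on a match, so spelling the comparison out keeps
         * the inverted-boolean surprise visible. */
        if (strcmp(name, "root") == 0)
            puts("hello, admin");

        /* This fires in exactly the same case, but reads as "if NOT equal"
         * to the unwary. */
        if (!strcmp(name, "root"))
            puts("hello, admin (terse form)");
    }
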
  • by dpbsmith ( 263124 ) on Friday February 25, 2005 @05:30PM (#11781872) Homepage
    ...then the code isn't important enough to optimize. Plain and simple.

    Never try to optimize anything unless you have measured the speed of the code before optimizing and have measured it again after optimizing.

    Optimized code is almost always harder to understand, contains more possible code paths, and is more likely to contain bugs than the most straightforward code. It's only worth it if it's really faster...

    And you simply cannot tell whether it's faster unless you actually time it. It's absolutely mindboggling how often a change you are certain will speed up the code has no effect, or a truly negligible effect, or slows it down.

    This has always been true. In these days of heavily optimized compilers and complex CPUs that are doing branch prediction and God knows what all, it is truer than ever. You cannot tell whether code is fast just by glancing at it. Well, maybe there are processor gurus who can accurately visualize the exact flow of all the bits through the pipeline, but I'm certainly not one of them.

    A corollary is that since the optimized code is almost always trickier, harder to understand, and often contains more logic paths than the most straightforward code, you shouldn't optimize unless you are committed to spending the time to write a careful unit-test fixture that exercises everything tricky you've done, and write good comments in the code.
  • by wizbit ( 122290 ) on Friday February 25, 2005 @05:31PM (#11781895)
    Or a paranoid, depressed android.
  • My experience (Score:4, Insightful)

    by pclminion ( 145572 ) on Friday February 25, 2005 @05:36PM (#11781950)
    First, let me say what sort of code I write. I work almost exclusively with high-performance, 2D graphics code. Most of what I do involves manipulating bits, worrying about cache utilization, and squeezing the last bits of performance out of a three-line inner loop. I'm just going to rattle off what I know from my experience with gcc and VC++:

    The compiler will perform strength reduction in all reasonable instances.
    The compiler will raise invariant computations from inner loops in almost all cases that do not involve pointers.
    The compiler knows how to optimize integer division in ways I wouldn't have even thought of.
    The compiler sometimes "forgets" about a register and produces sub-optimal code for inner loops.
    The compiler can't always tell what variable is most important to keep in a register in an inner loop.

    Other stuff:

    x^=y; y^=x; x^=y; optimizes to an XCHG instruction with gcc on x86. I was amazed that it could do that. (Yes, that piece of code exchanges x and y). On the other hand, tmp=x; x=y; y=tmp; doesn't get optimized to an XCHG. Obviously, the compiler is using a Boolean simplifier or identity-prover.

    The compiler always assumes a branch will be taken (unless you use certain compiler switches to change this behavior). Thus you should always arrange your conditional tests so that the less-often executed code is within the braces.

    Don't be afraid to write complex expressions. Subexpression elimination is almost foolproof in all instances where pointers are NOT involved. It's better to leave your code clear, and let the compiler optimize it.

    And ABOVE ALL:

    No matter how much the compiler optimizes your code, you can throw it all down the toilet with bad design by screwing the cache utilization. This is EXTREMELY important especially in graphical applications which process huge raster buffers. Row-wise processing is always more efficient than column-wise. Random access will kill your performance. Do not trust the memory allocator to keep your allocations together. Write your own allocator if you are dealing with thousands or millions of small, related chunks of information.

    I could go on... But I must also second what others have said, which is to perform algorithmic optimizations FIRST and do not bother with constant-factor optimizations until you are CERTAIN that you are using the best algorithm. If you ignore this advice you might waste a week optimizing a three-line inner loop and then come up with a better algorithm the next week which makes all your hard work redundant.
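
    The exchange trick mentioned above, spelled out (whether it actually becomes an XCHG depends on the compiler version, flags, and target; the parent reports gcc on x86 recognized the first form but not the second at the time):

    /* Three XORs exchange *x and *y without a temporary.  Caveat: if x and y
     * point at the same object, this zeroes it instead of swapping. */
    void swap_xor(int *x, int *y) {
        *x ^= *y;
        *y ^= *x;
        *x ^= *y;
    }

    /* The plain version with a temporary. */
    void swap_tmp(int *x, int *y) {
        int tmp = *x;
        *x = *y;
        *y = tmp;
    }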

  • exactly! (Score:2, Insightful)

    by pyrrho ( 167252 ) on Friday February 25, 2005 @05:43PM (#11782070) Journal
    comments can be misleading, but the code never lies, it always works exactly as written.
  • Re:Clear Code (Score:2, Insightful)

    by joshdick ( 619079 ) on Friday February 25, 2005 @05:47PM (#11782120) Homepage
    I've heard this very example used many times in the past, and I think it's ridiculous. Any programmer knows that (!ptr) is the same as (ptr == NULL), so what makes you think one is clearer than the other?
  • by neonstz ( 79215 ) * on Friday February 25, 2005 @05:47PM (#11782123) Homepage
    A few years ago I used to write C/C++/asm code extensively and used to be obsessed with performance and optimization. Then, one day, I had an epiphany and started writing code that is about 10 times slower than my old code (different in computer language and style) and infinitely easier to understand and expand. The only time I optimize now is at the very, very end of development, when I have solid profiler results from the final product that show noticeable delays for the end user, and this only happens rarely.

    It is important to be aware that there are different types of optimizing. Optimizing code where the compiler probably does a good job is just stupid, unless the code turns out to be a major bottleneck.

    However, not thinking about optimization/speed early can IMHO be very dangerous. If the project is a bit large and complex, a nice design on the whiteboard may very well turn out to be dead slow, with no chance in hell of making it run significantly faster without redesigning/rewriting the entire thing (this doesn't really have anything to do with compiler optimization though).

    I've been working on a project (I wasn't in it from the beginning) where the design probably looked good to some people in the design document (although I don't really agree on that either), but the performance aspect was neglected until the application turned out to be quite slow. Adding mechanisms to make it run faster has been quite "challenging". (My personal opinion on this piece of software is that performance issues were ignored even in the early design because the wrong people were making the decisions. Basically, they didn't focus on where performance really was needed.)

    So after a few years my experience boils down to: "If you have a performance requirement, make sure your code keeps up the entire time" and "You can't get both high performance and general-purpose stuff in the same piece of code".

  • Re:Not always. (Score:4, Insightful)

    by zaffir ( 546764 ) on Friday February 25, 2005 @05:49PM (#11782142)
    I make my code easy to read for my own sanity. I've lived out this bash.org quote [bash.org] way too many times.
  • Re:Clear Code (Score:2, Insightful)

    by Smallpond ( 221300 ) on Friday February 25, 2005 @05:52PM (#11782191) Homepage Journal
    My rule is never comment what the program does, comment why it does it.
    // Will crash if no files are open
    if (count == 0) {
  • Re:Clear Code (Score:2, Insightful)

    by oliverthered ( 187439 ) <oliverthered@nOSPAm.hotmail.com> on Friday February 25, 2005 @05:54PM (#11782211) Journal
    I don't see where there is a contradiction - well, unless the guy reading your code can't understand a quicksort, even with enough comments to write a book on the subject.

    The optimisation rules are:

    Good algorithms beat any optimisation of bad ones, hands down.

    Then make sure you know what the algorithm does; that way you can possibly minimise work
    (e.g. a div is just a lot of subtracts and shifts).

    Then good 'hints' for the compiler.

    Then do it by hand if the compiler is making a mess of things.

    Consider that I could write a 'fast' word processor in VB3 (all interpreted) compared to one written in fully optimised C, because I make good algorithm choices.

    There's an HSV colour picker in one application that is as slow as a dog; they could have used a look-up table and made it possibly a hundred times faster (near real-time vs. visible delay).

    As far as the code examples go:

    I think you should use if(NULL == something); that way the compiler will choke if you type if(NULL = something) by accident.

    Some people say to just use HeapFree because it checks for NULL and is fast; I say check for NULL when it's expected and not when it isn't.
  • Re:Clear Code (Score:5, Insightful)

    by Rei ( 128717 ) on Friday February 25, 2005 @05:56PM (#11782241) Homepage
    An important lesson that I wish I had learned when I was younger ;) It is crazy to start optimizing before you know where your bottlenecks are. Don't guess - run a profiler. It's not hard, and you'll likely get some big surprises.

    Another thing to remember is this: the compiler isn't stupid; don't pretend that it is. I had senior developers at an earlier job mad at me because I wasn't creating temporary variables for the limits of my loop indices (on unprofiled code, no less!). It took actually digging up an article on the net to show that all modern compilers automatically dereference any const references (be they arrays, linked lists, const object functions, etc.) before starting the loop.

    Another example: function calls. I've heard some people insist that the way to speed up an inner loop is to manually inline the code from function calls so that you don't have function call overhead. No! Again, compilers will do this for you. As compilers were evolving, they added the "inline" keyword, which does this for you. Eventually, compilers got smart enough that they started inlining code on their own when not told to, and not inlining it when coders marked it inline but it would be inefficient. Due to coder pressure, at least one compiler that I read about had an "inlinedamnit" (or something to that effect) keyword to force inlining when you're positive that you know better than the compiler ;)

    Once again, the compiler isn't stupid. If an optimization seems "obvious" to you, odds are pretty good that the compiler will take care of it. Go for the non-obvious optimizations. Can you remove a loop from a nested set of loops by changing how you're representing your data? Can you replace a hack that you made with standard library code (which tends to be optimized like crazy)? Etc. Don't start dereferencing variables, removing the code from function calls, or things like this. The compiler will do this for you.

    If possible, work with the compiler to help it. Use "restrict". Use "const". Give it whatever clues you can.
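
    A small example of the "work with the compiler" suggestion, using restrict (C99) and const as hints; whether it changes the generated code depends on the compiler and flags:

    #include <stddef.h>

    /* Promising that dst and src never overlap (restrict) and that src is
     * read-only (const) frees the compiler from worst-case aliasing
     * assumptions, so it can reorder or vectorize the loop more aggressively. */
    void add_arrays(float *restrict dst, const float *restrict src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }
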
  • Small Potatoes (Score:1, Insightful)

    by VeryApt ( 852702 ) on Friday February 25, 2005 @05:57PM (#11782268)
    You are seriously worried about a C compiler optimizing a NULL compare? How mind-numbingly unproductive. You should see some of my code. I program in a real high-level language (SML). I use a real high-level compiler (MLton). I count on the compiler to do things like flatten my data structures, turn unknown function calls into switches, and do closure conversion of my inner functions. The computer industry needs to stop living in the 1950s. If we had 1/10 the man-hours put into compilers for high-level languages like SML/Haskell/OCaml as have been put into C compilers, we would be able to produce code 2x as fast without _ever_ worrying about abstracting.
  • Re:Clear Code (Score:2, Insightful)

    by Coz ( 178857 ) on Friday February 25, 2005 @05:59PM (#11782285) Homepage Journal
    Remember - compilers don't care about comments!
  • by snipercat ( 649263 ) <erik_k_anderson@yahoo.com> on Friday February 25, 2005 @06:01PM (#11782318) Homepage

    First, reading through the existing comments, the general opinion appears to be: write clear code, unless you *really* need to optimize it. Ounce for ounce, I have to agree with this.

    Second, regarding the embedded system portion of the question, we have to remember that the rules for embedded systems are different from the rules for general-purpose systems. Specifically, embedded systems are resource-constrained and (more often than not) have real-time deadlines.

    At least so far, I have never programmed an embedded system where I needed to optimize my code for speed (best-case execution time) or for space. I have needed to change an algorithm around for complexity reasons, but never for minor incremental speed improvements.

    Real-time systems are more about executing on time than executing fast. And yes, there is a difference. Pay close attention to your worst-case execution time. If you're missing deadlines occasionally, it is most likely due to unpredictable interrupts and other events in your system, not because the compiler couldn't optimize your code.

    In short, regarding any compiler/code optimization you may want to do on your embedded system, write your code first to be dependable, predictable, and on time. Worry about raw speed later.

  • Re:Clear Code (Score:5, Insightful)

    by DJStealth ( 103231 ) on Friday February 25, 2005 @06:05PM (#11782359)
    Take the following example: both versions are clear, but only one is considered optimized.

    Let's say you're traversing a 2D array of data (e.g., an image).

    for(x=0; x < width; x++)
    {
        for(y=0; y < height; y++)
        {
            ...
        }
    }

    versus

    for(y=0; y < height; y++)
    {
        for(x=0; x < width; x++)
        {
            ...
        }
    }

    The latter piece of code is just as clear as the first; however, it will likely run about 50 times faster than the first, due to caching issues.

    Will the compiler optimize the first piece of code to look like the second? Probably not (tell me if I'm wrong), as there may be a reason to process things in a particular order.

    In addition, the latter piece of code may actually be less clear, as in some cases, it may not read well to do height before width in the for loop.

    As a result, you'll still need to write code thinking about optimization.
  • by fizban ( 58094 ) <fizban@umich.edu> on Friday February 25, 2005 @06:06PM (#11782376) Homepage
    Premature Optimization is the DEVIL! I repeat, it is the gosh darn DEVIL! Don't do it. Write clear code so that I don't have to spend days trying to figure out what you are trying to do.

    The biggest mistake I see in my professional (and unprofessional) life is programmers who try to optimize their code in all sorts of "733+" ways, trying to "trick" the compiler into removing 1 or 2 lines of assembly, yet completely disregard that they are using a map instead of a hash_map, or doing a linear search when they could do a binary search, or doing the same lookup multiple times when they could do it just once. It's just silly, and goes to show that lots of programmers don't know how to optimize effectively.

    Compilers are good. They optimize code well. Don't try to help them out unless you know your code has a definite bottleneck in a tight loop that needs hand tuning. Focus on using correct algorithms and designing your code from a high level to process data efficiently. Write your code in a clear and easy to read manner, so that you or some other programmer can easily figure out what's going on a few months down the line when you need to add fixes or new functionality. These are the ways to build efficient and maintainable systems, not by writing stuff that you could enter in an obfuscated code contest.
  • by lgw ( 121541 ) on Friday February 25, 2005 @06:08PM (#11782390) Journal
    The places code most needs optimization are unlikely to be helped by the compiler anyway:
    • Unnecessary use of malloc/new instead of just using the stack
    • Unnecessary disk I/O
    • Not considering on which side of the network a calculation should be performed
    There are a few places where algorithmic optimization can help, but generally only with very new programmers. Far more often, it's the unintended and quite expensive consequences of library calls that cause much performance grief.
  • by Trillan ( 597339 ) on Friday February 25, 2005 @06:10PM (#11782409) Homepage Journal

    With the greatest respect to Linus, writing a kernel does not make you the authority on programming. It does make you the authority on what particular style you allow in your CVS tree, but that's it.

    I certainly agree that loop_counter is a bad name, though. But rather than use i, I prefer to at least make a note of what sort of objects I'm looping through.

    For instance:

    int taskI;
    int taskCount = GetTaskCount();
    for (taskI = 0; taskI < taskCount; taskI++)
    {
        ...
    }

    Code can never be 100% self-documenting, but that's no reason to settle for 0%. Whether you use CamelCase or words_broken_with_underscores is a matter of style, and you should stick with the style of the code base you're working on.

    Anyone who can't or won't work with multiple languages or adopt the necessary style for an existing project is a poor programmer. When you create a project, you create the rules. When you work on someone else's project, you follow the rules.

  • Algorithm (Score:4, Insightful)

    by mugnyte ( 203225 ) on Friday February 25, 2005 @06:29PM (#11782615) Journal
    For cheap, fast batch lookups I once wrote a hashed matrix using the STL. Loaded all the cells, dynamically typed, added indexes on the data for that run, and then passed around this collection of in-memory tables to our routines. Ran fast and was simple to debug, since all the traversing was O(log n) based (or a variant thereof). Adding serialization, we could distribute to machines overnight dynamically and cut the run to a few minutes - from almost 8 hours.

    Until it came time to dispose of the memory. The STL slowly crawled tons of our objects, and the C++ dispose pattern was just too inefficient for all the stack hits. So we pointed the library at a custom heap and never disposed of the dictionary - we just disposed of the heap in bulk.

    All written without hesitation in "longhand" syntax. (And btw, it's "if ( NULL == var )" to those that care.) The code optimized fine, with just a few choice inlines we got to stick. No reg vars, no assembly piles littering the code.

    But this was an in-house business app, and the lifecycles/requirements are different than for other products. However, because of the nice algorithms, optimization wasn't difficult and didn't rely on code tricks. If you're squabbling over code tricks for optimization, you're choosing the wrong algorithm, to me.
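
    The "dispose the heap in bulk" idea reads roughly like a bump/arena allocator; a minimal sketch under that assumption (my names, not the poster's code):

    #include <stdlib.h>

    /* A trivial arena: allocations are bumped out of one big block and never
     * freed individually; tearing everything down is a single free(). */
    typedef struct {
        char  *base;
        size_t used;
        size_t cap;
    } arena_t;

    int arena_init(arena_t *a, size_t cap) {
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    void *arena_alloc(arena_t *a, size_t size) {
        size = (size + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
        if (a->used + size > a->cap)
            return NULL;                    /* no growth in this sketch */
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    void arena_destroy(arena_t *a) {
        free(a->base);                      /* the "dispose in bulk" step */
        a->base = NULL;
        a->used = a->cap = 0;
    }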

  • Re:Clear Code (Score:3, Insightful)

    by Matt Perry ( 793115 ) <perry DOT matt54 AT yahoo DOT com> on Friday February 25, 2005 @06:34PM (#11782663)
    An important lesson that I wish I had learned when I was younger ;) It is crazy to start optimizing before you know where your bottlenecks are.
    It's like that saying:

    Rules of Optimization:
    Rule 1: Don't do it.
    Rule 2 (for experts only): Don't do it yet.

  • Re:Not always. (Score:1, Insightful)

    by Anonymous Coward on Friday February 25, 2005 @06:37PM (#11782693)
    you all keep talking about clarity, but: why, oh why, is NULL considered to be more clear than 0? it's a #define, for chrisake, and a cast to boot. what could be less clear? equal to zero, not equal to 0, this is a fundamental concept in C. NULL is an artificial conceit added to the language by people who want to create some artificial nullity for pointers. I don't get it. Yes, write for clarity, use a zero
  • Re:Clear Code (Score:2, Insightful)

    by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Friday February 25, 2005 @06:41PM (#11782717) Homepage
    Indeed.. the example is totally backward.

    if(!ptr) is 'if not pointer' - which makes perfect sense to any programmer and anyone who has even a passing knowledge of boolean logic.

    I've seen the '== true' and '== false' things around and really hate them, as they're not nearly as legible. Worse is '!= false' and '!= true' - you have to actually stop and think about what that means, as it's a double negative.
  • Re:Clear Code (Score:2, Insightful)

    by oliverthered ( 187439 ) <oliverthered@nOSPAm.hotmail.com> on Friday February 25, 2005 @06:41PM (#11782719) Journal
    That should be "especially since _I_ will have to look at it _later_"

    Don't just write nice code for other people to read, write it for yourself.
  • Re:Clear Code (Score:2, Insightful)

    by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Friday February 25, 2005 @06:45PM (#11782753) Homepage
    How will one run 50 times faster than the other? All you did was swap the variable names around.
  • by eric76 ( 679787 ) on Friday February 25, 2005 @06:47PM (#11782763)
    I've known some very good, first-rate programmers who religiously put the constants on the left. I've never known a second-rate programmer who did.
  • Re:Clear Code (Score:5, Insightful)

    by lubricated ( 49106 ) <michalp.gmail@com> on Friday February 25, 2005 @06:48PM (#11782780)
    Well, the first thing to optimize is the algorithm. Use an O(n^2) algorithm that does the same job as an O(e^n) algorithm if you can. Algorithmic optimization makes the most difference. I am working on a program whose speed is directly proportional to how often a particular function is called. Well, I try to reduce calls to this function by various means; no compiler I've seen can optimize an algorithm, only the implementation of it. With that, I'm happy to have the compiler do the work.
  • by MSBob ( 307239 ) on Friday February 25, 2005 @06:56PM (#11782860)
    First: Avoid doing what you don't have to do. Sounds obvious but I rarely see code that does the absolute minimum it needs to. Most of the code I've seen to date seems to precalculate too much stuff, read too much data from external storage, redraw too much stuff on screen etc...

    Second: Do it later. There are thousands of situations where you can postpone the actual computation. Imagine writing a Matrix class with an invert() method. You can actually postpone calculating the inverse of the matrix until there is a call to access one of the fields in the matrix. Also, you can calculate only the field being accessed. Or, at some sensible threshold, you may assume that the user code will read the entire inverted matrix and you can just calculate the remaining inverted fields... the options are endless.


    Most string class implementations already make good use of this rule by copying their buffers only when the "copied" buffer changes.

    Third: Apply minimum algorithmic complexity. If you can use a hashmap instead of a treemap, use the hash version; it's O(1) vs. O(log n). Use quicksort for just about any kind of sorting you need to do.

    Fourth: Cache your data. Download or buy a good caching class or use some facilities your language provides (eg. Java SoftReference class) for basic caching. There are some enormous performance gains that can be realized with smart caching strategies.

    Fifth: Optimize using your language constructs. Use the register keyword, use language idioms that you know compile into faster code, etc... Scratch this rule! If you're applying rules one to four you can forget about this one and still have fast AND readable code.
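
    Rule two ("do it later") in miniature: a hedged sketch of compute-on-first-use with a cached flag, standing in for the poster's Matrix example (sqrt is just a placeholder for the expensive step):

    #include <math.h>
    #include <stdbool.h>

    typedef struct {
        double input;
        double cached;
        bool   valid;
    } lazy_value;

    /* Only pay for the computation when someone actually asks for it. */
    double lazy_get(lazy_value *v) {
        if (!v->valid) {
            v->cached = sqrt(v->input);   /* the deferred, expensive work */
            v->valid  = true;
        }
        return v->cached;
    }

    /* Changing the input just invalidates the cache; nothing is recomputed
     * until the next lazy_get(). */
    void lazy_set(lazy_value *v, double input) {
        v->input = input;
        v->valid = false;
    }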

  • Screw comments (Score:4, Insightful)

    by rs79 ( 71822 ) <hostmaster@open-rsc.org> on Friday February 25, 2005 @06:56PM (#11782863) Homepage
    "My rule is never comment what the program does, comment why it does it."

    Bah. Comments lie. Code never lies.

  • Re:Not always. (Score:3, Insightful)

    by shaitand ( 626655 ) on Friday February 25, 2005 @07:08PM (#11782995) Journal
    I do too. Unfortunately some of the coding standards floating around make code very difficult to write and to read. There are people claiming that mixed case is a good idea.

    Dear god no, mixed case leads to gobs of errors from nothing more than incorrect case. All code should be lowercase unless some idiot has set an ancient precedent that Thingy(R) should always be all caps, period. This one thing can double the speed at which you write and debug code.

    Next is this BS about self-documenting code. Code is not meant to be self documenting. Proper scoping will prevent names from clashing. Write the most efficient function you can and choose names that make sense in the smallest scope possible.

    And then *gasp* COMMENT the code well. You can even include comments near where a variable or function is to indicate in plain English what it is for! Believe it or not, code should not be self commenting, code is not a spoken language and is a poor medium to use to express messages between people who speak one. Nothing works better than your actual spoken language in complete sentence structures in a comment to remind you what a variable or function is or what a block of code does.

    By_using_variable_nms_that_dnt_lk_like_this 40 times in 6 lines of code, you will save enough time writing those 6 lines that you can add a nice comment that says "# vnameshort is used to demonstrate a horribly verbose and lengthy variable name in this function." And please, for god's sake, comment EVERY call of a function that is not part of the standard C libraries within 20 lines, explaining where it came from! Doing this will not only make it easier for people who know the project well to see what is happening and where to refer, but it will also help those who are NOT familiar jump in and possibly change one thing.

    Believe it or not, I do not want to read your 100,000 lines of source to make one change. I want to be able to look at the main routine and beeline right from there to the portion of the code I need, using the commented function calls.
  • Must be nice (Score:5, Insightful)

    by peccary ( 161168 ) on Friday February 25, 2005 @07:09PM (#11783003)
    one product
    one customer
    420,000 lines
    260 staff
    no competition
    no trade shows
    no salespeople selling new features that have never been discussed

    It's interesting to talk about their attention to detail, but to hold it up as a model for all software development neglects to consider that they are working under an entirely different set of constraints from most everyone else.
  • by GunFodder ( 208805 ) on Friday February 25, 2005 @07:18PM (#11783079)
    I think the example is fine; you just displayed an assumption that highlights one of the quirks of C.

    ! means "not" or "inverse of"; it is a boolean function. The variable ptr is a pointer; it is a reference to data, which means it isn't really data itself. !ptr shouldn't compute; a boolean operator should only work on boolean data. But C logical comparators are designed to work on everything. You are just supposed to know that 0 == NULL == false. This supposition is totally arbitrary and doesn't hold up in any language with strong typing.

    This is what makes C difficult for beginners. Bad code compiles even though it has logical flaws, and ends up failing in mysterious ways.

    The second case makes more sense. Equality is an operator that should work on all types of data. NULL is necessary if you are going to abstract data through the use of pointers or objects. Doing away with NULL would be equivalent to eliminating true and false and using 1 and 0 instead. Or eliminating strings and using sequences of ASCII codes. These substitutions are technically correct but in reality they make code unreadable.
  • Re:Clear Code (Score:3, Insightful)

    by severoon ( 536737 ) on Friday February 25, 2005 @07:28PM (#11783176) Journal

    At every place I've worked, I've always had a standing bet open to any and all takers. You branch your code and I'll branch mine, you preoptimize and I'll write legible code that sticks religiously to good OO design principles. After the development cycle ends, if a profiling tool proves that your way is better in a significantly demonstrable fashion to the product, you win $100. If not, you owe me $50. (You can hammer out the terms of "significantly demonstrable," but it basically means that the code we're talking about falls into the 20% of the codebase--per the 80/20 rule about where execution flow spends most of its time--and my way is so much more grossly inefficient than yours it actually causes consternation in the end user.)

    Put like this, no one ever takes the bet. It's easy money for me. Even the strongest preoptimization advocates get pretty silent once the discussion is framed this way.

    Here's the thing. Except in certain, specific circumstances, e.g. real-time or embedded high-performance systems, lack of performance comes from bad design 99.9% of the time. If the design of the product accommodates the performance requirements properly, no optimization is usually necessary.

    Does optimization help in these situations? It depends on what you mean by "help". If you mean, does it make it faster, then yes. If you mean, does it make a better product, then the answer in most cases is indubitably: no. Faster, more fragile, less maintainable, and more poorly documented usually means worse for the product. And the kicker is, in most cases it doesn't improve performance in any meaningful way.

    Much better is to approach your architecture and design with an eye towards the specific performance factors and requirements of your product, and design things properly. This will get the big stuff (that you can't usually optimize your way out of anyway). Then somewhere near code freeze, set aside time to do profiling to the extent that gives you a good idea of where the CPU is burning most of its cycles. Usually that's somewhere between 5% and 10% of your code, and usually it's only reasonable to optimize in 5%-10% of that code, and usually there are only opportunities to do intelligent optimization in 5%-10% of that code. (Of course, this doesn't take into account all of the boneheaded code that somehow found its way into the product where just sticking to best practices shows not only markedly better code in terms of performance, but also in every other sense.)

    If you're still finding problems performance-wise, chances are you started off without addressing the right requirements in the first place. Go back to architecture, do not pass Go, do not collect $200.

  • Dear Lord (Score:5, Insightful)

    by sholden ( 12227 ) on Friday February 25, 2005 @07:35PM (#11783221) Homepage
    Ten years of programming in the language and you:

    1) Don't know when two things are obviously equivalent to any non-brain dead compiler.

    2) Think something other than readability matters.

    3) Think the non-idiomatic way of doing something is more readable.

    But I'm sure I'm just repeating the comments I can't be bothered reading.
  • by Ninja Programmer ( 145252 ) on Friday February 25, 2005 @07:42PM (#11783290) Homepage
    Saravana Kannan asks: "I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former code snippet is because they believe it would result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize the particular code snippet. IMHO the latter code snippet is clearer than the former, and I would use it in my code if I know for sure that the compiler will optimize it and produce machine code equivalent to the former code snippet. The previous example was easy. What about code that is more complex? Now that compilers have matured over years and have had many improvements, I ask the Slashdot crowd, what they believe the compiler can be trusted to optimize and what must be hand optimized?"
    Most compilers come with something called a disassembler. Or better yet, you can trace the code with an assembly level debugger. If you want to know whether or not your compiler produces good code, why don't you just look at your code and find out? I'll bet dollars to donuts that you have one of these tools sitting on your hard drive that will tell you what your compiler did. Seriously, if you don't know how to get the answer to the question for yourself, then you don't deserve to know the answer.

    Most compilers today will get all the simple stuff like the if (!ptr) vs. if (NULL == ptr) optimization. It's the more complex things, which the compiler cannot "prove", where it has trouble. For example:

    void h(int x, int y) {
        for (int i = 0; i < N; i++) {
            if (0 != (x & (1 << y))) {
                f(i);
            } else {
                g(i);
            }
        }
    }

    Very few compilers will dare simplify this to:

    void h(int x, int y) {
        if (0 != (x & (1 << y))) {
            for (int i = 0; i < N; i++) f(i);
        } else {
            for (int i = 0; i < N; i++) g(i);
        }
    }

    Because the compilers have a hard time realizing that the conditional is loop-invariant and should be hoisted to the outside of the for loop. The compiler also has the opportunity to perform loop unrolling in the second form that it may not try in the first.

    You can learn these things from experience, or you can simply figure it out for yourself with the aforementioned decompilation tools.
  • Re:Not always. (Score:3, Insightful)

    by BasilBrush ( 643681 ) on Friday February 25, 2005 @07:44PM (#11783302)
    I'd never employ you. You are wrong on just about every point in that post. You sound like you've not been through many code review processes, if any.

    If you are serious about coding, I recommend you pick up a copy of the book "Code Complete" and read it cover to cover. You need it.

  • Re:Clear Code (Score:3, Insightful)

    by Ninja Programmer ( 145252 ) on Friday February 25, 2005 @07:53PM (#11783379) Homepage
    Unfortunately, I've posted -- otherwise I would have used my mod points to indicate this as a troll or overrated or something.

    The two expressions are semantically identical. There is no difference between the two. So "any programmer" who sees a distinction between the two, is just a defective programmer. Just use the one that follows your "coding conventions" if you've got them, or just use the one you are used to.

    This level of inane minutia is not what I would call "real programming", any more than typing skills are.
  • by doktor-hladnjak ( 650513 ) on Friday February 25, 2005 @08:03PM (#11783451)
    Heap sort can be worth it if you don't need all of the elements to be sorted. For example, say you wanted only the smallest 1% of elements from an array sorted.

    A heap can be constructed in linear time, but extracting each next-smallest element takes log time. Hence, getting the m smallest/largest elements out of an array of n elements takes O(n + m log n). If m is small, we're talking linear time with respect to the total size. If m = n, the whole thing becomes O(n log n), which is the provable lower bound for comparison-based sorting. However, in practice the heap overhead usually makes something like quicksort faster when m is close to n.

    A classical example of where you might use this is Kruskal's minimum spanning tree algorithm. For a large graph, typically only a smallish fraction of the edges is ever needed. With heap sort, you avoid having to fully sort all the edges by weight.
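
    A hedged C sketch of that idea, assuming the data are plain doubles (the function names are mine, not from the post): build the heap bottom-up in O(n), then pop only the m values actually needed, each in O(log n).

    #include <stddef.h>

    static void sift_down(double *h, size_t n, size_t i)
    {
        for (;;) {
            size_t l = 2 * i + 1, r = l + 1, smallest = i;
            if (l < n && h[l] < h[smallest]) smallest = l;
            if (r < n && h[r] < h[smallest]) smallest = r;
            if (smallest == i) return;
            double tmp = h[i]; h[i] = h[smallest]; h[smallest] = tmp;
            i = smallest;
        }
    }

    /* Copies the m smallest values of a[0..n-1] into out[0..m-1], ascending,
       rearranging a in the process. */
    void m_smallest(double *a, size_t n, double *out, size_t m)
    {
        for (size_t i = n / 2; i-- > 0; )  /* Floyd's build-heap: O(n) */
            sift_down(a, n, i);
        for (size_t k = 0; k < m && n > 0; k++) {
            out[k] = a[0];                 /* current minimum */
            a[0] = a[--n];                 /* move the last element to the root */
            sift_down(a, n, 0);            /* restore the heap: O(log n) */
        }
    }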
  • Re:Not always. (Score:3, Insightful)

    by Foz ( 17040 ) on Friday February 25, 2005 @08:18PM (#11783561)
    Bullshit. He's spot on in many cases (although admittedly a tad overzealous).

    Comments should be used LIBERALLY, albeit intelligently as well. If you do something that isn't intuitively obvious to even the most casual observer, just take 30 seconds to write a fucking COMMENT explaining why you did what you did.

    Believe it or not, eschewing comments because "oh, well, if you want to understand it just read the code" just pisses those of us off who have to come along and clean up your miserable excuse for a codebase... and it sure as hell doesn't prove how studly a programmer you are.

    That doesn't mean half your codebase should be comments, but it does mean that you should at least make a passing nod to demystifying your own attempts at cleverness. I have lots of better things to do than to spend all fucking day picking apart your rabbit's nest of code before I can make a change, add a feature or fix a bug.

    People that honestly believe that "if it's well written it doesn't NEED comments" should be strangled with their mousecord and hung in their cubicles as a warning to the rest.

    -- Gary F.
  • by pboulang ( 16954 ) on Friday February 25, 2005 @08:42PM (#11783739)
    Tighter code? Is that how you are defining optimized? Hmmm... I beg to differ.
  • by LoveMe2Times ( 416048 ) on Friday February 25, 2005 @09:21PM (#11784038) Homepage Journal
    I'm going to presume that you've *already* picked a reasonably efficient algorithm, 'cause otherwise there's no point. Second, I'm going to presume that you've already run the profiler, so you know which lines of code are important.

    Here's my "guide to optimizing":

    1) Are you disk I/O bound? You might need to switch to memory mapped files, or you might need to tweak the settings on the ones you have. You might need to use a lower level library to do your I/O. Many C++ iostreams implementations are slow, and many similar libraries involve lots of copying.

    2) Are you socket I/O (or similar) bound? If so, you may need to rewrite with asynchronous I/O. This can be a PITA. Suck it up.

    3) Are your threads spending all their time sitting in locks waiting for other threads? One, make sure you're using an appropriate number of worker threads optimized by the number of CPUs the host has. If you've already got the right number of threads, this can be a really tough decision. Presumably, the threads are helping your program readability, and trying to rework things into fewer threads is often a *bad idea*.

    4) Are you spending all your time in malloc/new/constructors and free/delete/destructors? Maybe you need to keep things on the stack, use a garbage collector, use reference-counted objects, use pooled memory techniques, etc. (see the pool sketch after this list). In the right places, switching from some "string" library to const char* and stack buffers can give a huge benefit. Make sure, of course, that you use the "n" version of all standard string functions (the ones that take the size of the buffer as an argument) to avoid buffer overruns.

    5) Are you spending all of your time in some system call? Like maybe some kind of WriteTextToScreen or FillRectangleWithPattern type of thing? For drawing code in general, try buffering things that are algorithmically generated in bitmaps, and only regenerate the parts that change. Then just blit together the pieces for your final output. Perhaps you need to rely on hardware transparency support for fast layer compositing. You might need fewer system level windows so you draw more in one function. Maybe you need to reduce your frame rate.

    6) Are you using memcpy as appropriate?

    If any of the previous items are true, you have no business worrying about the compiler. However, once you've gotten this far, you can start worrying about optimizing your code line by line.

    7) Since you've gotten this far, the line(s) of code you're worried about are all inside some loop that gets run. A lot. They may be inside a function that's called from a loop too, of course. So, a few things to consider. A) You may need to use templates to get code that is optimized for the appropriate data type. B) You may need to split off a more focused version of the function from the general-purpose function if it's also used in non-critical areas. This has negative maintenance ramifications. C) Do the bonehead-obvious stuff like moving everything out of the loop that you can. D) Look at the assembly actually generated by your compiler. If you're not comfortable with this, you have no business doing further optimization.
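
    For item 4, here is one hypothetical shape a fixed-size object pool can take in C (the names and layout are illustrative assumptions, not a real library): objects are carved out of one big block and recycled through an intrusive free list, so the hot path never touches malloc/free. It assumes obj_size is a multiple of the objects' required alignment.

    #include <stddef.h>
    #include <stdlib.h>

    struct pool {
        void  *block;       /* one big allocation backing every object */
        void **free_list;   /* next free slot, threaded through the slots */
        size_t obj_size;
    };

    int pool_init(struct pool *p, size_t obj_size, size_t count)
    {
        if (obj_size < sizeof(void *)) obj_size = sizeof(void *);
        p->obj_size = obj_size;
        p->block = malloc(obj_size * count);
        if (!p->block) return -1;
        p->free_list = NULL;
        for (size_t i = 0; i < count; i++) {        /* thread the free list */
            void **slot = (void **)((char *)p->block + i * obj_size);
            *slot = p->free_list;
            p->free_list = slot;
        }
        return 0;
    }

    void *pool_alloc(struct pool *p)
    {
        if (!p->free_list) return NULL;  /* pool exhausted; could fall back to malloc */
        void **slot = p->free_list;
        p->free_list = (void **)*slot;   /* pop the head of the free list */
        return slot;
    }

    void pool_free(struct pool *p, void *obj)
    {
        void **slot = (void **)obj;      /* push the slot back on the free list */
        *slot = p->free_list;
        p->free_list = slot;
    }

    void pool_destroy(struct pool *p) { free(p->block); }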

    After looking at the assembler, then you'll know if the following are important. In my experience, they are.

    1) Change array indexing logic to pointer logic:

    MyType stuff[100];
    for (int i = 0; i < (int)(sizeof(stuff) / sizeof(stuff[0])) - 1; i++)
    {
        stuff[i] = abs(stuff[i+1] / PI);
        if (stuff[i] < 0)
            stuff[i] = 0;
        else if (stuff[i] > maxval)
            stuff[i] = maxval;
    }

    can change to:

    MyType stuff[100];
    for (MyType *ptr = stuff; ptr < &stuff[99]; ptr++)
    {
        *ptr = abs(*(ptr+1) / PI);
        if (*ptr < 0)
            *ptr = 0;
        else if (*ptr > maxval)
            *ptr = maxval;
    }

    This eliminates lots of redundant addition. All of those stuff[i] = val type of statements tend to generate:

    mov

  • Re:Not always. (Score:5, Insightful)

    by Foz ( 17040 ) on Friday February 25, 2005 @09:37PM (#11784121)
    No, you're adopting a black or white approach. You are, in essence, saying that you don't need to comment at all. The original poster was saying that comments needed to be everywhere, on everything. I believe in a middle ground approach.

    I comment things that are non intuitive. I comment things that I *think* may be non intuitive. I comment things that I think someone else might have some difficulty understanding, because I happened to be deep into a code burn and consequently wrote something pretty tight, pretty sweet, but also pretty obfuscated. Finally, I comment things that I think *I* may not understand when I go back and look at the code again 3 months from now.

    I don't comment every single line... I don't comment simple data structures or loops ("/* this is a for loop using the integer variable I */"), etc., which would be stupid. I do, however, dissect the complex portions of my code, describe how I'm dispatching events and, best of all, *why* I decided to do things a certain way instead of a different way.

    I have, however, been handed 30k lines of code with zero documentation and not a single comment anywhere in it, with absolutely no clue how it worked and no access to the original programmer, been told "We need such and such fixed|updated|added by Friday", and had to spend the entire week basically tracing every single line of code, concluding that the original programmer must have been smoking crack, with NO indication of why he wrote things how he did and NO help where he decided to be exceedingly "clever" in his code. That time was wasted.

    Would it have killed him to simply put a comment block explaining his event dispatch model? Or to tell me what his functions and methods did and best of all why they did it?

    There *is* a middle ground, believe it or not.

    -- Gary F.
  • by Catullus ( 30857 ) on Friday February 25, 2005 @09:38PM (#11784129) Journal
    The grandparent poster was completely right. The implied meaning of "if (!ptr)" is "if ptr is not valid". The fact that NULL is equivalent to "not valid" is essentially irrelevant to understanding the statement.

    The key aspect - and the interesting thing - about coding style is that you are writing something for other humans to read. Everything you write contains hints to those humans about what you mean. Saying "a == NULL" is subtly different to saying "!a".

    Being able to read programs and pick up stuff like that is possibly something that takes a long time to learn, but (imho) it's very important. Code written by true experts is fascinating because of the way that they make the meaning of what they're writing clear.

    This is why (again imho) programming is an art, not a science.

    Incidentally - pointers are not references to data. They are data like anything else. Unless you understand this, pointers to pointers are fairly meaningless. Always remember: in C, everything is a bunch of bytes.
  • by omb ( 759389 ) on Friday February 25, 2005 @10:19PM (#11784424)
    As someone who has been in the industry for a long time:

    The issue breaks down like this:

    You need to understand the language you are using, both syntax AND semantics;

    this ranges from the simple to the mind-bending, e.g. C++ (I am convinced that not even Bjarne Stroustrup understands this evil language);

    at that point you have three branches: (a) interpreted languages, e.g. Java, Perl, PHP and Python, vs. (b) compiled languages, and (c) finally DIY (do-it-yourself) assembler.

    So: what does it amount to in practice? A) Rock bottom: understand the architecture, including virtual memory, architecture and instruction set issues; read and understand the chip data sheet. Hard! See, for example, the architecture-dependent code in Linux, BSD...

    B) Use 'gcc -S': write the code in C, then hand-improve the assembler output. This is what I normally do, but you need to keep an open mind or you will miss things. I once took a compute-intensive algorithm for the M68020 and made it run 10,000 times faster using this approach.

    C) Consider hardware optimisation; strictly price/performance.

  • by gidds ( 56397 ) <slashdot.gidds@me@uk> on Friday February 25, 2005 @10:50PM (#11784606) Homepage
    The knack is knowing when to go for something flash, and when to use something simple, even if it theoretically performs worse.

    And that knack is called profiling.

    It doesn't need to be anything fancy, or use flash tools -- in fact, when it's most needed, the best method is counting seconds in your head!

    For example, an application I've worked on recently started with a bubble sort, which was taking the best part of a minute to run (handheld machine). We tried a quicksort, but the slowness of recursion in this language made it hardly any faster. So I ended up with a combsort, which is a bubble sort variation -- much simpler than the quicksort, and with a higher big-O order, but the much lower overhead made it run in a fraction of the time. It was nowhere near as flash, but it was a better choice for the app.

    The important points here are a) I wouldn't have realised how inappropriate quicksort was if I hadn't compared it, and b) an advanced algorithm can run slower than a simpler one, especially with small data sets or bad language support. Don't rely on preconceptions.
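
    For reference, a rough C rendition of the combsort idea (the original code was in another language; this sketch is mine): it is just a bubble sort whose comparison gap shrinks by a factor of about 1.3 each pass, so the per-iteration overhead stays tiny.

    #include <stddef.h>

    void combsort(int *a, size_t n)
    {
        size_t gap = n;
        int swapped = 1;
        while (gap > 1 || swapped) {
            gap = (gap * 10) / 13;        /* shrink the gap by ~1.3 */
            if (gap == 0) gap = 1;
            swapped = 0;
            for (size_t i = 0; i + gap < n; i++) {
                if (a[i] > a[i + gap]) {  /* compare elements one gap apart */
                    int tmp = a[i]; a[i] = a[i + gap]; a[i + gap] = tmp;
                    swapped = 1;
                }
            }
        }
    }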

  • Optimization (Score:4, Insightful)

    by AaronW ( 33736 ) on Friday February 25, 2005 @11:03PM (#11784685) Homepage
    In terms of optimizing, compilers generally do a pretty good job; however, there are several areas where no compiler I know of can help.

    1. Choose the right algorithm. For example, in an embedded project I worked on, an engineer used a linked list to store thousands of fields that had to be added and deleted. While adding was fast, it didn't scale for deleting. Changing it to a hash table sped things up significantly.

    2. Know your data and how it is used. Knowing how to organize your data and access it can make a huge difference. As a previous poster pointed out, sequential memory accesses are much faster than random accesses. I had to do some 90-degree image rotation code. The simple solution just used a couple of for loops to copy the pixels from one buffer to another. In the other, I took into account the processor cache and how memory is accessed and broke the image down into tiles (a sketch of the tiled approach appears after this list). The first algorithm, while simple and elegant, ran at 30 frames per second. The other ran at over 200 frames per second. Looking at the code, you would expect the first algorithm to be faster since the code is simpler. Both algorithms operate in O(N) time, where N = width * height.

    Further optimization attempts to hint to the CPU cache about memory made no difference (Athlon XP 1700+). The only possible way I see to speed it up further would be to write it in hand-coded assembler.

    3. Reduce the number of system calls if possible. Some operating systems can be very painful when calling the kernel. Group reads and writes together so fewer calls are made.

    4. Profile your code to find bottlenecks.

    5. Try and keep a tradeoff between memory usage and performance. A smaller tightly packed data set will execute faster with CPU caches and will reduce page faults when loading and starting up.

    6. Try debugging your code at the assembler level, stepping through it. It will help you better understand your compiler.

    7. Don't bother trying to squeeze every ounce of performance out of code when the next thing you call will be very slow. E.g., one section of MS-DOS's source code, hand-coded assembly, calculated the cluster or sector of the disk to access. First the code checked whether it was running on a 16-bit or 32-bit CPU, then it took the 16-bit or 32-bit path for the multiplication, then it read from the disk. Why the hell write all this code to check the CPU for the multiply when the frigging disk is going to be slow anyway? They should have just stuck with the 16-bit multiply rather than be clever.
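
    As promised in point 2, here is a hedged sketch of the cache-tiled rotation in C (the parameter names and tile size are my assumptions, not the original code): rotating 90 degrees clockwise one small tile at a time keeps both the sequential source reads and the strided destination writes inside the cache.

    #include <stddef.h>
    #include <stdint.h>

    #define TILE 32   /* tile edge, sized to fit comfortably in L1 cache */

    /* Rotates a width x height 32-bit image 90 degrees clockwise into dst,
       whose dimensions are height x width. */
    void rotate90_tiled(const uint32_t *src, uint32_t *dst,
                        size_t width, size_t height)
    {
        for (size_t ty = 0; ty < height; ty += TILE) {
            for (size_t tx = 0; tx < width; tx += TILE) {
                size_t ymax = (ty + TILE < height) ? ty + TILE : height;
                size_t xmax = (tx + TILE < width)  ? tx + TILE : width;
                for (size_t y = ty; y < ymax; y++) {
                    for (size_t x = tx; x < xmax; x++) {
                        /* source pixel (x, y) lands at column (height-1-y) of
                           destination row x; the destination row length is height */
                        dst[x * height + (height - 1 - y)] = src[y * width + x];
                    }
                }
            }
        }
    }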

    In general applications with GCC, I rarely see much difference between -O2 and -O3. For that matter, I often don't see a noticeable difference between -O0 and -O3 for a lot of code.

    I only see improvements in some very CPU intensive multimedia code. I also saw a significant improvement in some multimedia code when I told the compiler to generate code for an Ultrasparc rather than the default, but that's because the pre-ultrasparc code didn't use a multiply instruction.

    -Aaron
  • by dumbnose ( 190140 ) on Friday February 25, 2005 @11:18PM (#11784773)
    Writing less clear code because you believe it is more efficient is the worst thing you can do for your code. It will only cause bugs in the short-term and create less manageable code in the longer-term.

    Do not perform minor optimizations without first: a) determining there is a performance problem, and b) profiling your code to determine which areas should be optimized.

    This does not mean that you should choose naive algorithms; choosing the proper algorithm for the problem at hand is always important.

    Hand-optimized code should be reserved for those times when you have profiled your code with reasonable inputs and have shown that the lack of clarity is compensated for by the increased performance.

    The example you gave is a perfect example of a hand optimization that is completely worthless with today's compilers.

  • Re:Clear Code (Score:2, Insightful)

    by Impy the Impiuos Imp ( 442658 ) on Friday February 25, 2005 @11:19PM (#11784776) Journal
    ==true and ==false can cause bugs if what you happen to be using isn't a C++ bool but some BOOL that's #defined as an int somewhere.

    Proper naming helps out here.
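
    A small, self-contained illustration of that failure mode (hypothetical code, not from the post): with BOOL typedef'd to int, a perfectly "true" return value that isn't exactly 1 slips straight past an explicit ==TRUE test.

    #include <stdio.h>

    typedef int BOOL;
    #define TRUE  1
    #define FALSE 0

    BOOL device_is_functioning(void)
    {
        /* Plausible real-world slip: returning the result of a bitmask test,
           which is logically true but not equal to 1. */
        int status_register = 0x04;
        return status_register & 0x04;    /* returns 4, not 1 */
    }

    int main(void)
    {
        if (device_is_functioning())
            puts("implicit test: device OK");    /* prints */
        if (device_is_functioning() == TRUE)
            puts("==TRUE test: device OK");      /* silently skipped */
        return 0;
    }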

    Keep in mind people also follow other standards that say DON'T PUT CONSTANTS ON THE RIGHT in a Boolean expression.

    Hence instead of:

    if (!DeviceIsFunctioning)

    you get the "improved" crap of:

    if (false == DeviceIsFunctioning)

    where rules pile on each other to make things even less intelligible. Now throw in some Hungarian munging, and life is grand!
  • by aixou ( 756713 ) on Saturday February 26, 2005 @12:26AM (#11785131)
    Why not stick with insertion sort then? It has a run time of O(n) on perfectly sorted data...
  • /. posters (Score:5, Insightful)

    by Saville ( 734690 ) on Saturday February 26, 2005 @01:21AM (#11785383)
    "I ask the Slashdot crowd, what they believe the compiler can be trusted to optimize and what must be hand optimized? Give examples of code optimizations that you think the compiler can/can't be trusted to do."

    Somehow 99% of the readers took this to mean "What is the difference between NULL and the zero bit pattern and do you think it is a good idea to write clear code and do the profile/algorithm change cycle until there is nothing left to optimize or should I write low level optimized code from the start?"

    sigh.. I've only found two comments with code so far after going through hundreds of posts. This is possibly the worst signal to noise ratio I've witnessed on /.
  • by Anonymous Coward on Saturday February 26, 2005 @01:26AM (#11785400)
    'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former code snippet is because they believe it would result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize the particular code snippet. IMHO the latter code snippet is clearer than the former, and I would use it in my code if I know for sure that the compiler will optimize it and produce machine code equivalent to the former code snippet.


    The example given is downright stupid!!!
    Who cares if it takes an extra microsecond to process ptr != NULL? How irrelevant. It won't change the efficiency of the program at all!!

    This is what bugs me about "the C culture": always concerned about micro-efficiency. It is what stopped the C and C++ crowd from ever getting to the real questions of overall design and efficiency (which are more often the cause of project failure).

    Projects fail because the data requires moving/transformation, because you do more drawing than necessary, because you use bad algorithms and can't change them because you didn't design for that, because code is messy. None of these things can be solved by a compiler.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Saturday February 26, 2005 @03:40AM (#11785799) Homepage Journal
    People have 4 GHz CPUs these days and somebody is worried about the difference between !p and p==NULL? If any compiler actually assembles code with a speed difference for those examples these days, it might well be that the speed difference is one billionth of a second. And even if that code is in a tight loop, it's entirely irrelevant given that the memory bandwidth is probably the governing factor rather than how fast the CPU can do integer vs. 0 tests.

    Hey, my phone has a 400 MHz processor.

    Bruce

  • by mobiGeek ( 201274 ) on Saturday February 26, 2005 @10:13AM (#11786594)
    As someone else who has been in the industry a long time, I find that only a very small amount of code actually needs to be optimized in the method you mention above.

    The biggest problem I run into is programmers who "know the compiler" so well that they write impossible-to-decipher, all-in-one-if-statement code blobs.

    Write the damn code in a clear and precise way. Compile and run it. If performance is an issue (which for the majority of s/w it is not), then profile the code and make sure you know where the problem is.

    Then, and only then, should the programmer consider rewriting code for optimization. And even then, it is often the algorithm that needs to be fixed, not the case that the compiler's optimization is missing something obvious. These compiler thingies tend to be pretty decent these days.

    One of my favourite quotes [billharlan.com] I share with new grads as they come on-board with their fancy compiler theory classes under belt:

    In "Literate Programming," Donald Knuth wrote "We should forget about small efficiencies, about 97% of the time. Premature optimization is the root of all evil."
  • Compiler wins (Score:3, Insightful)

    by Lars Clausen ( 1208 ) on Saturday February 26, 2005 @02:41PM (#11788202)
    I go by the First Rule of Optimization: "Don't Do It" (occasionally, I will follow the Second Rule of Optimization: "Don't Do It Yet"). Two reasons:

    1) Hand-optimized code tends to be harder to write, debug, understand, and maintain.
    2) The compiler frequently does a better job anyway. Try comparing the standard strcpy function (while (*s != 0) *t++ = *s++;) with one that uses array indexing (while (s[i] != 0) { t[i] = s[i]; i++; }) using gcc -O3. On some versions and CPUs, the array-indexing code will actually use fewer instructions because the compiler gets more chances for optimization when you tell it that you're working with arrays. Pointer manipulation is for stupid compilers.
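
    For anyone who wants to try the comparison, here are the two loops written out as complete functions (with the terminating NUL copy added so both behave like strcpy); compile each with gcc -O3 -S and diff the assembler.

    #include <stddef.h>

    void copy_ptr(char *t, const char *s)
    {
        while (*s != 0)          /* pointer-bumping version */
            *t++ = *s++;
        *t = 0;                  /* copy the terminating NUL, as strcpy would */
    }

    void copy_idx(char *t, const char *s)
    {
        size_t i = 0;
        while (s[i] != 0) {      /* array-indexing version */
            t[i] = s[i];
            i++;
        }
        t[i] = 0;
    }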

    Of course, compilers cannot save you from bad design. Make sure to think about your O() factors.

    -Lars

Say "twenty-three-skiddoo" to logout.

Working...