Optimizations - Programmer vs. Compiler? 1422
Saravana Kannan asks: "I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former snippet is that they believe it will result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize that particular snippet. IMHO the latter snippet is clearer than the former, and I would use it in my code if I knew for sure that the compiler would optimize it and produce machine code equivalent to the former. The previous example was easy. What about code that is more complex? Now that compilers have matured over the years and have had many improvements, I ask the Slashdot crowd: what do you believe the compiler can be trusted to optimize, and what must be hand-optimized?"
"How would your answer differ (in terms of the level of trust in the compiler) if I'm talking about compilers for desktops vs. embedded systems? Compilers for which of the following platforms do you think are more optimized at present - desktops (because they are more commonly used) or embedded systems (because of the need for maximum optimization)? It would be better if you could stick to free (as in beer) and Open Source compilers. Give examples of code optimizations that you think the compiler can/can't be trusted to do."
Time to post the famous Knuth quote... (Score:5, Informative)
Code Twiddling (Score:3, Informative)
There really is no one answer to this, as it depends on the compiler itself, and the target architecture. The only real way to be sure is to profile the code, and to study assembler output. Even then, modern CPUs are really complicated due to pipelining, multilevel cache, multiple execution units, etc. I try not to worry about micro-twiddling, and work on optimizations at a higher-level.
Huh (Score:5, Informative)
As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'.
Both forms resolve to the same opcode. Even under my 6502 compiler.
CMP register,val
JNE
Enjoy,
Wrong, wrong, wrong (Score:5, Informative)
"ptr == 0" must give the same result as "ptr == NULL", always.
Not a question that can be asked generally (Score:5, Informative)
In general, however, systems are now fast enough that when in doubt, you should write the clearest code possible. For most apps, speed is not critical; but for all apps, stability and lack of bugs are important, and obscure code leads to problems.
Also, for things that are time critical, it's generally just one or two little parts that make all the difference. You only need to worry about optimizing those inner loops where all the time is spent. Use a profiler, since programmers generally suck at identifying what needs optimising.
Keep it easy to read and maintain, unless speed is critical in a certain part. Then you can go nuts on hand optimization, but document it well.
Re:write clearer code first (Score:3, Informative)
Anyone with any familiarity with C will consider the latter form unnecessarily more verbose, and therefore less clear.
There is one exception to that, and that is when "ptr" is in fact a complex expression that it isn't obvious at a glance is a pointer expression. In that case, == NULL or != NULL spells out to the reader of the code "oh, and by the way, it's a pointer." That is the ONLY reason to write this for clarity.
There is a whole category of "bad commenting" in which comments that are only useful to someone who doesn't know the programming language actually make the code a lot harder to read. A comment like:
a += 2; /* add 2 to a */
That's a Tony Hoare quote, not Donald Knuth (Score:5, Informative)
use a good profiler!! (Score:3, Informative)
Using a profiler and your own brain you can often significantly improve over what a compiler can do.
Re:NULL not always 0 (Score:4, Informative)
Devise an appropriate test (Score:2, Informative)
Back when I was doing real-time programming in FORTRAN-II on a PDP-11/34, I was able to very slightly decrease execution time by changing "divide by two" instructions to arithmetic shifts and "square root" operations to "power of 1/2" (because x^.5 = sqrt(x)).
Since those instructions were in a loop that was running 80,000 times per second, this meant that I could get more data when destructively testing rocket motors... in the end, I got a 40% increase in processing speed that was worth a whole lot of money to the company. AND it got me a raise.
The answer to your question is TEST your unique combination of hardware and software, find out what works better/faster/more-elegantly, and use the most human-readable form that you can afford to given your unique requirements.
If you are doing real-time digital data acquisition involving microsecond events, you will probably end up with some pretty cryptic source code, so be nice and put in lots of comments.
If you are programming an office automation application, your code should probably be readable by kindergarteners, since you won't have very difficult performance requirements.
Re:NULL not always 0 (Score:2, Informative)
BUT as part of a C compiler, using a "0" in a pointer context will always compile to a null pointer on the target system.
The only problems with zeros come from passing them as parameters, where no pointer conversion may be applied; in such cases, even a NULL definition may require explicit casting.
Re:NULL not always 0 (Score:1, Informative)
#include <stdio.h>

int main(void)
{
    int a = 0;
    int *p = NULL;
    if (p == (int *)a)
        printf("Equal!\n");
    return 0;
}
Re:Wrong, wrong, wrong (Score:4, Informative)
An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality with a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. The macro NULL expands to a null pointer constant.
Once again, it should be said:
Don't give advice not to give advice to not give advice when you don't know C when you don't know C when you don't know C.
Re:Huh (Score:2, Informative)
According to the language definition, a constant 0 in a pointer context is converted into a null pointer at compile time. That is, in an initialization, assignment, or comparison when one side is a variable or expression of pointer type, the compiler can tell that a constant 0 on the other side requests a null pointer, and generate the correctly-typed null pointer value.
Check out section 5.2
http://www.faqs.org/faqs/C-faq/faq/
Re:Wrong, wrong, wrong (Score:1, Informative)
IBM says let the compiler optimize (Score:2, Informative)
Re:Huh (Score:3, Informative)
Re:You should always... (Score:5, Informative)
There's nothing wrong with short variable names.
Thus quoth Linus in the Linux Coding Style guide.
Never trust a compiler (Score:2, Informative)
Re:Clear Code (Score:2, Informative)
As long as you don't over-architect or go to extremes of abstraction, you can write very well-abstracted code that nicely separates the things that change from the things that stay the same. When you create this kind of separation, optimization becomes easier and easier.
A good (simple) example is creating a generic interface to sort. You may start off with a simple insertion sort (why bother with heap/quick if you don't need it?). Anyone can understand insertion sort, and it's fine for relatively small sets of data. Then you expand the scope of the program and realize you need a better sorting algorithm. No problem! You go in and change the behind-the-scenes sort. Your interface to sort did not change, and your program happily works just as it did. You apply this very same idea to as much of the software you write as is reasonable. Separate out data models and logic and displays. Keep each part of the system doing one task and doing it well.
If the system is written well enough, you don't even need anything but high-level comments explaining the "big picture" of what is going on. I hate comments like the following:
int foo = 10; /* set foo to 10 */
Well thank you captain obvious!
Much better is an explanation over a block of 5-10 lines giving you an idea of what you are trying to achieve. Comment anything that is not clear - if you're using bitwise shifts to multiply and divide, for example.
Strive for simplicity
Jeremy
ANSI C (Score:2, Informative)
ANSI C: "The macro NULL is defined in <stddef.h> (and other headers) as a null pointer constant."
The definition of a null pointer constant: "An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant."
Source: Programming languages - C (ISO/IEC 9899:1999)
Re:Indeed (Score:3, Informative)
For example, if you just saw "FIELD_COUNT" by itself without code, would you have a good idea what sort of object that was without knowing anything else? (A static variable.) If you saw "_count"? (Instance variable.) "getCount"? (Accessor method.)
So in that sense, yes, it is meaningful. You'll pick up on things like that when you start coding larger projects with other programmers.
Re:Ask the compiler... (Score:1, Informative)
Re:NULL not always 0 (Score:3, Informative)
5.1.1 Zero
Zero (0) is an int. Because of standard conversions, 0 can be used as a constant of an integral, floating-point, pointer, or pointer-to-member type. The type of zero will be determined by context. Zero will typically (but not necessarily) be represented by the bit pattern all-zeros of the appropriate size.
No object is allocated with the address 0. Consequently, 0 acts as a pointer literal, indicating that a pointer doesn't refer to an object.
In C, it has been popular to define a macro NULL to represent the zero pointer. Because of C++'s tighter type checking, the use of plain 0, rather than any suggested NULL macro, leads to fewer problems. If you feel you must define NULL, use
const int NULL = 0;
The const qualifier prevents accidental redefinition of NULL and ensures that NULL can be used where a constant is required.
Re:You should always... (Score:5, Informative)
some random integer loop counter, it should probably be called "i". Calling it "loop_counter" is non-productive, if there is no chance of it being mis-understood.
That last clause is an important one that often gets neglected. In fact, you should never, ever, call a variable loop_counter. That's as bad as pure reverse Hungarian - it tells you how it's used, not what it means.
I suggest that, for all non-trivial cases (and I'd prefer to see people err verbosely than compactly), you should use descriptive names. Not loop_counter, but maybe something like curRow? It doesn't have to be long, but at least then, as the loop grows over time, someone can understand a piece of code more easily than having to scroll back up to check that you are indeed in the "i" loop. It's even more critical when someone comes along and adds a nested (or containing) loop. Or whatever.
Same with "tmp". If it's truly temporary, such as:
int tmp = getFooCount();
doSomething(tmp);
then it should be removed and rewritten as:
doSomething(getFooCount());
If it's not that temporary, give it a real name. If you insist that it is temporary, then you may have a scoping issue - having variables useful only in part of your function could indicate that your function is doing too much work. If you insist it's truly temporary, scope it down:
someRandomCode();
{
int foo = getFoo();
doSomething(foo);
doSomethingElse(foo);
}
moreCode();
At least now you've guaranteed that it is temporary. Better yet, just name it usefully.
Easy one (Score:2, Informative)
#include <stdio.h>
int main( int argc, char* argv[] ) {
if( argv == NULL ) return 0; else return 1;
}
$ cat two.c
#include <stdio.h>
int main( int argc, char* argv[] ) {
if( !argv ) return 0; else return 1;
}
$ gcc -S one.c
$ gcc -S two.c
$ diff one.s two.s
1c1
<         .file   "one.c"
---
>         .file   "two.c"
$ gcc -O3 -S one.c
$ gcc -O3 -S two.c
$ diff one.s two.s
1c1
<         .file   "one.c"
---
>         .file   "two.c"
$ gcc --version
gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)
No difference at all...
Re:Clear Code (Score:2, Informative)
valgrind (Score:4, Informative)
Re:The algorithm that must not be named! (Score:1, Informative)
i, j, k, ... (Score:5, Informative)
I think that most people forget that the reason that i, j, k, etc. are used for loop counters is that unless otherwise declared, I..N default to INTEGER in FORTRAN. This convention just carried over as programmers migrated from FORTRAN to other languages and has been passed down through the ages.
Re:Wrong, wrong, wrong (Score:4, Informative)
Although the comparison (0 == ptr) might for some twisted reason come out as a non-zero value, you are not guaranteed that (!ptr) will have the same value.
Wrong [eskimo.com]
Don't give advice... ah, screw it. You get the idea.
illegal optimization there, for C at least (Score:4, Informative)
C++ reference, OK. If you mean a pointer though, and you use C, the compiler is prohibited from performing this optimization unless you also use the "restrict" keyword.
Since you did mention "restrict", it appears that you are working with C. "restrict" is not a C++ keyword.
BTW, inlinedamnit is __attribute__((__always_inline__))
for gcc and __forceinline for Microsoft.
Re:Algorithms, Not Stupid Processor Tricks (Score:5, Informative)
And while we're at it - n^2 vs 2^n *is* a big deal even on 1 GHz+ systems... If we have 1000 objects and an operation that takes 10 cycles, an n^2 algorithm gives a runtime of 0.01 seconds (10 x 1000 x 1000 cycles), while a 2^n algorithm gives a runtime beyond any physical timescale, and no amount of optimizing the code in the loop (even down to one instruction) is going to get it anywhere near the n^2 algorithm.
Re:Algorithms, Not Stupid Processor Tricks (Score:1, Informative)
The never overused example that I have (Score:4, Informative)
So, the client decided not to pay the last million dollars because the performance was total shit. On a WebLogic cluster of 2 Sun E450s they could only achieve 12 concurrent transactions per second. So the client decided they really did not want to pay and asked us to make it at least 200 concurrent transactions per second on the same hardware. If I may make a wild guess, I would say the client really did not want to pay the last million, no matter what, so they upped the numbers a bit from what they needed. But anyway.
Myself and another developer (hi, Paul) spent 1.5 months removing unnecessary DB calls (the app was incremental - every page would ask you more questions that needed to be stored, but the app would store all questions from all pages every time), caching XML DOM trees instead of reparsing them on every request, removing most of the session object (reducing it from 1 MB to about 8 KB), removing some totally unnecessary and bizarre code (the app still worked), desynchronizing some of the calls with a message queue, etc.
At the end, the app was doing 320+ concurrent transactions per second. The company got their last million.
The lesson? Build software that is really unoptimized first and then save everyone's ass by optimizing this piece of shit and earn total admiration of the management - you are a miracle worker now.
The reality? Don't bother trying to optimize code when the business requirements are constantly changing, the management has no idea how to manage an IT dep't, the coders are so green there's a scent of freshness in the air, and there's a crazy deadline right in front of you. Don't optimize up front; if performance becomes an issue, optimize then.
Re:Clear Code (Score:3, Informative)
It is faster to process the data sequentially than to skip around.
That is especially true if the data elements are small such as chars. You would end up having to fetch the same data from memory multiple times in the processing of one loop compared to only once in the other loop.
Re:(ptr == NULL) is wrong in C, bad in C++ (Score:2, Informative)
Trust me - or not, just PLEASE google for the C++ FAQ, and read what they have to say about NULL and the null-pointer...
Then, google for any C/C++ spec you can grab (without having to pay out $200) and check what they have to say...
Then google for the source code of GCC/TCC (nice and complete) and examine it...
Then try to explain how "if (ptr)" works using your logic...
It will enlighten you as to why "NULL" MUST always be defined as 0, not just as a "convention", and why comparing *ANY* pointer to 0 is strictly necessary, GUARANTEED allowable, and ABSOLUTELY FUNDAMENTAL to all C/C++ compilers.
Learn the difference between NULL (always 0) and the null-pointer representation (any bit pattern), and why it matters not whether your system base address (0) is a valid memory address or not.
Incidentally, look up the specification for the Motorola 680x0 processor. Memory addresses (bytes) 0 through 7 are NOT valid. They are BY DEFINITION the initial stack pointer and code start addresses. It matters not...
Re:Clear Code (Score:5, Informative)
123
456
789
data[y][x];
This, in memory, is:
123456789
Similarly, the access order for the second loop is:
123456789
But for the first one, it is:
147258369
The second one hits memory sequentially, which is good for caching, as each cache line stores a large chunk of sequential memory.
Considering that hitting the cache as opposed to hitting main memory is at least 100 times faster, you'll be lucky if the first loop is only 50 times slower.
This still presumes data stored in the specified order in memory (which is common for image formats, but not the only way things are done).
Re:Not always. (Score:3, Informative)
Actually, in C, NULL can be (0), just like in C++
Actually, it's because, in C++, there is no implicit conversion from (void*) to any other pointer type, as there is in C
should you be optimizing? (Score:2, Informative)
First off *any* compiler will make that particular optimization.
You should only think of instruction level optimization when you know with certainty that it will pay off, for example because you've run your code under a profiler and found the areas where it will actually make a difference. Once you've found the (probably tiny) areas where optimizing actually helps, do whatever it takes, and document your reasons as well as your methods.
You can always ask your compiler to output assembly and look it over, if you aren't fluent in your proc's assembly you probably shouldn't be trying to out-optimize the compiler anyhow.
That being said, "if (!ptr)" is legitimate and bears a different connotation from "if (ptr == NULL)", at least in my mind. One is truth, the other is zeroness. In some cases the former is actually the more obvious test. There are also cases where compactness yields more readable code because the whole idea fits in a space easily acquired at a glance, for example, "if (structp && structp->member == VAL)" is natural and obvious to anyone who's been at this for any amount of time.
All of this, of course, IMO.
-michael
Re:Wrong, wrong, wrong (Score:1, Informative)
NULL equals (void*)0
Yet...
(void*)0 may or may not have the same bitpattern as (int)0 due to implementation... stuff.
The standard does make one thing clear:
If null pointers are not represented as the value 0 then they must be explicitly converted to 0 when converting the null pointer to an integer.
In general, anything converted to (void*) and back again shall retain its original value.
HOWEVER, I can't find where the standard enforces the idea that a pointer type must be no larger than int or long. When a pointer type is converted to an arithmetic type, it is treated as an unsigned integer; thus the ability to convert pointer types to long without losing information is implementation dependent. Hence, the standard allows that object and function pointers could have different representations. If arithmetic and pointer types have different representations, then some sort of conversion is done, and it is up to the programmer to understand what is going on.
Re:Ask the compiler... (Score:3, Informative)
What about using a language where null pointer errors are caught at compile time [sf.net]? Oh, I guess our compiler [sf.net] must have a brain the size of a whole solar system to be able to do that.
Re:Not always. (Score:2, Informative)
Amen (Score:3, Informative)
As far as how together and structured the Shuttle group was, I remember the day there was a head crash on the 3330 drive that held all their source code. It was like turning over an ant nest, programmers scurrying around the halls screaming, etc. Don't believe everything those people write about how well they were organized and how wonderful everything was.
Readability (Score:3, Informative)
if ( !ptr )
throughout my code for ages and always felt that to be far more readable, and so do my colleagues. It's also faster to type.
It reads "if no pointer", which is far more understandable than "if pointer is null". Because pointers aren't really null, they are zero.
Mostly, it's just a matter of style. IMHO, if you have trouble reading either of these, you're using the wrong language - for your own safety, stay away from Perl as well.
HLSL (Score:4, Informative)
It's not really a coding trick like an XOR swap, but most compilers don't yet seem to fully unroll parallel loops into good SIMD instructions or multiple threads.
The only time I've needed to bother looking at assembly output in recent years (other than debugging a release-mode program) is when writing HLSL shaders. HLSL is the high-level shading language (C-like) for shaders that is part of Direct3D 9. HLSL can be compiled to SM1, SM2, or SM3 assembly.
With pixel shader 2.0 you've only got 64 instruction slots, and some important instructions like POW (power), NRM (normalize), and LRP (interpolate) take multiple slots. 64 slots is not enough for a modern shader. I curse ATI for setting the bar so low.
There are two flaws I've found with the Dec 04 HLSL compiler in the DirectX SDK. Sometimes it will not automatically detect a dot product opportunity. I had some colour code to convert to black and white in a shader and wrote it as y = colour.r*0.299 + colour.g*0.587 + colour.b*0.114; as I thought that was the clearest way to write it. Under certain circumstances the compiler didn't want to convert that to a single dot instruction, so I had to write it as y = dot( colour.rgb, half3(.299h,
Another is that often a value in the range of -1 to +1 is passed in as a colour, which means it must be packed into the 0-1 range. To get it back you've got to double it and subtract 1.
a = p*2-1; gets converted into a single MAD instruction, which takes one slot.
a = (p-0.5)*2; gets converted into an ADD and then a MUL.
Also conventional wisdom says you've got to write assembly to get maximum performance out of pixel shader 1.1 as it is basically just eight instruction slots. I don't have any snippets to verify this though.
I think this thread demonstrates that either compilers are mature enough you don't need any code tricks to help them do their job or
In most cases (Score:3, Informative)
For the rare performance-critical parts, it is however worth the effort to try various constructions to get the best performance out of the code. The most problematic issue is identifying the hotspots in the code and figuring out which variables should be declared as 'register' and which shouldn't. Ordering of statements is also important, in order to match the various performance improvements the CPU can offer. One very good document on this is actually found at AMD [amd.com].
One code construct that I find very useful is to place the matching '{' and '}' in the same column in the code. This makes it easier to find where a block begins and ends.
Example:
In my opinion this produces code that has improved readability compared to the constructs placing the '{' on the same line as the if-statement, where it is much easier to miss.
Re:Does anyone think this even matters? (Score:2, Informative)
Re:Ask the compiler... (Score:3, Informative)
Comment removed (Score:2, Informative)
Re:Ask the compiler... (Score:3, Informative)