CPUs/Compilers for Numerical Simulations?
X43B asks: "I'm building a 'luggable' computer for numerical simulation work (very niche, I know). My goal is the best single precision floating point performance for under $1000. I have decided on a Shuttle XPC layout. I can build an AMD 3500+ for ~$80 less than a Prescott 3.4 GHz. I know the AMD is supposed to be a better 'general purpose' CPU; however, I found this comparison which says the Intels are better for floating point. Additionally, even though the AMD is somewhat cheaper, I have found the free Intel Linux FORTRAN compiler quicker than gfortran. So even if the AMD had similar performance with the same compiler, the Intel would be ~10% faster with the free compiler. Does anyone have any recommendations on AMD vs Intel for single precision floating point operations? If you recommend the AMD, what (cheap or free) compiler can be used that is comparable to the Intel's?"
My Beliefs (Score:5, Informative)
The P4 has amazing floating point performance, but you have to use packed SSE2/3 to get it. For general (non-packed SSE, plain x87) floating point performance, the Athlon lines are strong.
If you can get a low-end Athlon 64 (like one of the single-channel versions), that might be great for you. They are the "budget" versions but have great FPUs, more registers if your software can use them, and are true 64-bit.
As for the Athlon (non-64), I wouldn't personally. I would think you could get a low end Athlon 64 (like I said above) for a reasonable price that would smoke it.
Last of all, the Intel compiler is designed for Intel chips (duh), but the code can be run by Athlons and Opterons, and even on the AMD chips its code often outperforms GCC's. That said, if you get a P4, using that compiler is probably a must because it is so good at setting up floating point work and gets much better performance (but then again, what do you expect?). So give it a try no matter what you buy; it will probably help your performance.
So those are my theories/impressions. You can get an SFF PC that will hold just about any processor. Too bad money is an object, because that dual-CPU Iwill Opteron SFF coming out later this year would kill anything else in an SFF (assuming you can take advantage of the 2nd CPU with whatever you're doing, which I assume you can).
Re:My Beliefs (Continued) (Score:5, Informative)
The last thing I have to say: as another poster pointed out, are you stuck in the Wintel world? Because the G4 and G5s (the latter especially) are supposed to be VERY good at this kind of thing. So you could use an Xserve G5 (pretty small) or just a normal G5 (not as small). They aren't that cheap (you probably couldn't get one in your budget, but maybe used), but they should perform great. They are also true 64-bit. Also, IBM sells G5 computers, so you're not stuck buying an Apple (you might be able to get a cheaper one that way too). Not sure about the sizes of those, though.
Just more stuff to think about.
Re:My Beliefs (Continued) (Score:2)
Re:My Beliefs (Continued) (Score:2)
Re:My Beliefs (Continued) (Score:1)
The iMac G5s provide only a 600 MHz FSB, not to mention topping out at a single 1.8 GHz G5.
The PowerMac G5, on the other hand, *starts* at DUAL 1.8 GHz G5s and has up to a 1.25 GHz front side bus per processor.
An utterly oversimplified model based on no real data would estimate that reducing the G5 CPU to flat-panel dimensions has cost a factor of 2--4 in
Re:My Beliefs (Continued) (Score:2)
I don't know what caused it, but our units were in P3 cycles. One P3 cycle is worth one AMD (x86-32) cycle, and a P4 cycle is worth 1/2 a P3 cycle. A G5 cycle is worth about one P3 cycle as well (they just ported the program).
Re:My Beliefs (Continued) (Score:1)
The picture is similar in 3D rendering (OpenGL) - the AMD Athlon XP's three FPU units helped it outstrip the Pentium 4, which has two. Roughly, you can apply the following equation:
Performance = Clock Speed x Operations/Cycle
This equation helps explain the theory behind why the AMD Athlo
Re:My Beliefs (Score:1)
Things are different if you can vectorize your code and use Single Instruction Multiple Data - there the Pentium4 is supposed to be faster. But this was not the case for my code.
The other point is that single-precision FP is not faster than double-precision FP, because the ix86 and PowerPC FPUs always use full precision internally - so it only makes a difference if you do
$80? (Score:5, Insightful)
I think your best bet is to spend the little extra for the Prescott model.
Re:$80? (Score:3, Interesting)
And the little extra you spend on the massive heatsink you buy to keep it cool in an enclosed space.
A few SFF systems come with their own heatsink, most notably Shuttle XPCs with their proprietary ICE heatsink. I'd go for one of those if using a Prescott.
save the money for tools (Score:5, Insightful)
Also, factor in your work time. A few percentage points of difference between CPUs won't count nearly as much as the 2 hours you saved because you had a good debugger, or the 4 hours saved because your editor makes it easier to write and jump around your code.
Your accuracy will be pretty much the same, you just have to understand how computers represent floats and plan accordingly. Use accurate representations even if they're slower to get the numerical accuracy you need, then optimize the slow parts.
Only optimize the stuff that runs slow. That means profile, don't just guess. You'll often be surprised by where the bottlenecks are.
Higher accuracy usually means more memory (going with doubles rather than floats) or work (converting to integers within your desired floating range to control floating point accuracy). CPU won't be a biggie, but having lots of RAM will help.
If you have a choice between spending 3 months writing and optimizing code, or spending 1 month writing code that isn't optimized, think of what 2 months of runtime will do. If it takes you a while to write the code, just buy your super-accurate machine _after_ coding, when it's time to do your real runs (since chip speeds will have increased by then).
In fact, if it takes you 3 months to optimize, you'd be better off keeping the slow code, doing another project, and 3 months later just buying a faster PC to run the slow old code
All this off the top of my head, hope it helps.
Re:save the money for tools (Score:2)
Wow, you must have been on the Windows development team, working on KDE and Gnome on the side. What ever happened to writing efficient code? I miss the days of Linux 1.2.13, when a kernel could still fit on a floppy, and when window managers didn't have 500 features constantly running that you never use.
Re:save the money for tools (Score:4, Insightful)
If what you want is....results, then efficient code is one of many ways of getting it, and not necessarily the most efficient one.
Re:save the money for tools (Score:2)
>"if it takes you 3 months to optimize, you'd be better off keeping the slow code, doing another project, and 3 months later just buying a faster PC to run the slow old code"
innosent asked:
>What ever happened to writing efficient code?
There's slow, and there's efficient. Sometimes brute force and ignorance _is_ better. Here's an example.
We had a routine using an ephemeris. It loaded in the data at 1 minute intervals, since we needed 1 minute resolution accuracy. 3 coordinates for eac
Re:save the money for tools (Score:2)
Re:save the money for tools (Score:2)
Bwah ha ha. Tossing around 12MB as a basic data structure is huge -- in a real program, that basic data structure is used to create others. Plus that's not including visualizing it. Think 2000 targets, each with a dozen mathematical alterations to that structure. Then plot them. 12MB * 2000 * 12 * plotting overhead = a lot more than you want to store in RAM.
750,000 points when we coded took 12 seconds to gronk through subsequent calculations. On a realtime system, do you want a late
Re:save the money for tools (Score:1)
I know I know... you don't own monitors, or you work underground or near a particle accelerator that destroys CRTs....
Re:there is a difference (Score:2)
because what you're saying "does not compute".
especially the last bit.
and come on, "I've heard of people comparing the results and saying the output produced by Intel appears more attractive, but I haven't seen it." yeah right!
Re:there is a difference (Score:1)
Re:there is a difference (Score:2)
The floating point units in AMD and Intel processors don't have to deal with infinitely many decimal places (like mathematicians often do) but with 32 bits of precision, or 64 (if you use double registers, or on an AMD64 system), and so on. It's entirely down to the programming.
Since both processors use the x86 instruction set, the calculations they do on the data are carried out in identical ways; if there were a difference on
Re:there is a difference (Score:2)
Re:there is a difference (Score:2)
You could do the calculus by hand and get to the same endpoint.
Different strategies don't necessarily mean different results, either.
Please, don't spread misinformation and cyber legends.
Re:there is a difference (Score:2)
Re:there is a difference (Score:5, Interesting)
No FPU meeting this standard will produce different results than any other FPU. They're just faster or slower at doing it.
You'll only start getting differences when you hack non-standard speed optimisations into your code. It's unfair to blame Intel and AMD for people writing incompetently coded software - they just provide the stick, it's the coder who's beating you with it.
Re:there is a difference (Score:3, Insightful)
Correct as far as arithmetic operations go, but not for other functions. Trigonometric functions are quite a different story, and the results will vary between processors -- older Intel (co-)processors were accurate to 4.5 ulp, whereas recent ones are accurate to 1.5 or 1.0 ulp, for example.
For that matter, as far as I'm aware IEEE 754 doesn't make *any* requirements of the trigonometric functions; they might beha
Re:there is a difference (Score:2)
That may change though - 754 is under revision right now...
Re:there is a difference (Score:4, Informative)
AMD and Intel both subscribe to the IEEE 754 standard for FPU units,
This is true for "normal" floating-point operations and SSE, but 3DNow! is not IEEE-compliant. There are also some ways to introduce non-compliance in SSE, such as the LDMXCSR, RCP, and RSQRT instructions. (The first can change over/underflow behavior, e.g. by enabling flush-to-zero, and the other two are approximation instructions.)
References (Score:2)
http://developers.slashdot.org/comments.pl?sid=10
Re:References (Score:2)
Re:References (Score:2)
Are you good with your hands? (Score:2)
This will be the fastest solution for x86 and should outperform a single 3500+, Intel, or Opteron by a considerable amount on 32-bit apps -- a fifty to eighty percent increase can be expected from a 2-processor system. Tyan makes some interesting, relatively inexpensive SMP mobos; check them out: http://www.tyan.com
If you must use standard off-the-shelf cases and such, the
g4 (Score:2)
Intel has heard of /. (Score:1)
IWILL Dual Opteron SFF (Score:4, Informative)
Even if you don't have the money for both CPUs right now, it's a good start and you could add the 2nd CPU later. This would be the most powerful small form factor number cruncher.
opteron (Score:1)
Re:opteron (Score:1)
I would prefer an Opteron over an Athlon 64 because of the extra reliability that ECC offers. If I recall correctly, the A64 has the ECC circuitry built in, but many motherboards do not use it. With an Opteron you know that you get true ECC and also registered memory (which is SLOWER but more reliable).
IMHO reliability is more important than speed in numerical apps. You could lose months of work just because of a single bit error. The Opteron is reliable in this respect and has been used successfully
Intel also has great libraries & VTune (Score:4, Interesting)
Intel also provides the VTune Performance Analyzer, which allows you to trace the path through your programs and determine where the bottlenecks are.
I've used the Intel Linux Fortran compiler and I am very happy with it. Code that runs fine on my Sun workstation (950 MHz, 6 GB RAM) at school runs 4-5x faster on my home PC (2.8 GHz, 1 GB RAM). It's got all the fancy optimization options, but a simple -O3 -ipo will get you 90% of the way there.
My two bits.
ICPC and AMD (Score:3, Interesting)
don't worry about it (Score:4, Insightful)
Instead, check for yourself which system (not processor, but system) gives you the most bang for the buck using the most standard compiler you can find. If you use gcc, I believe systems based on AMD's 64-bit chips still win.
And, realistically, 10-20% differences are not worth investing a lot of time or energy in anyway; that corresponds to a few months of progress in processor and systems development.
CPU? Use the GPU! (Score:5, Interesting)
More info here: http://www.gpgpu.org/
Whatever you do, make sure you have a properly tuned ATLAS library:
http://math-atlas.sourceforge.net/
I don't know if anyone has got ATLAS or BLAS to work on GPUs yet.
Baz
No IEEE floats (Score:2, Insightful)
ATLAS (Score:2)
Just in case anybody interested hasn't heard of it, the ATLAS library [sourceforge.net] is a C / Fortran 77 library for linear algebra (which is a significant part of scientific programming). It tunes itself at compile-time, to your particular processor and number of CPUs (and whatever else might be affecting your FP performance) by doing tests.
The author also has some quick n' dirty notes [utk.edu] for floating-point issues.
Exactly what math? (Score:2)
Go see:
SPEC FPU results [spec.org]
Look at the details, you'll see that different processors have different strengths depending on the tasks. To see what tasks they are you can look at:
SPEC FPU [spec.org]
Another thing: the Intel compiler works well for AMD too, but you may have to force it to recognize the CPU to turn on even more optimizations. Apparently you get some boost even on a non-Intel CPU, but if you disable the "Intel-only" detection [google.ca] you can get even more of a boost.
Use the GPU and a suitable language (Score:2)
Re:Use the GPU and a suitable language (Score:2, Informative)
One major disadvantage of the GPU at the moment is that, as far as I know, no standard software (such as LINPACK, FFTW, etc) supports it.
Re:Use the GPU and a suitable language (Score:2)
Some interesting results (Score:2, Insightful)
Portland Compilers (Score:3, Informative)