
CPUs/Compilers for Numerical Simulations? 56

X43B asks: "I'm building a 'luggable' computer for numerical simulation work (very niche, I know). My goal is to have the best single precision floating point performance for under $1000. I have decided on a Shuttle XPC layout. I can build an AMD 3500+ for ~$80 less than a Prescott 3.4GHz. I know the AMD is supposed to be a better 'general purpose' CPU; however, I found this comparison which says the Intels are better for floating point. Additionally, even though the AMD is somewhat cheaper, I have found the free Intel Linux FORTRAN compiler faster than gfortran. So even if the AMD had similar performance for cross compiling, the Intel would be ~10% faster with the free compiler. Does anyone have any recommendations on AMD vs Intel for single precision floating point operations? If you recommend the AMD, what (cheap or free) compiler can be used that is comparable to the Intel's?"
This discussion has been archived. No new comments can be posted.


  • My Beliefs (Score:5, Informative)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Saturday September 25, 2004 @09:41PM (#10352318) Homepage
    OK, here is my impression from years of reading hardware sites.

    The P4 has amazing floating point performance, but you have to use packed SSE2/SSE3 to get it. For general (scalar, non-packed SSE or x87) floating point performance, the Athlon lines are strong.

    If you can get a low-end Athlon 64 (like one of the single-channel versions), that might be great for you. They are the "budget" versions but have great FPUs, more registers if your software can use them, and are true 64-bit.

    As for the Athlon (non-64), I wouldn't personally. I would think you could get a low end Athlon 64 (like I said above) for a reasonable price that would smoke it.

    Last of all, the Intel compiler is designed for Intel chips (duh), but the code can be run by Athlons and Opterons, and even on the AMD chips its code often performs better than GCC code. That said, if you get a P4, using that compiler is probably a must because it is sooooo good at setting up floating point stuff and gets much better performance (but then again, what do you expect?). So give it a try no matter what you buy; it will probably help your performance.

    So those are my theories/impressions. You can get a SFF PC that will hold just about any processor. Too bad money is an object, because that dual-CPU Iwill Opteron SFF coming out later this year would kill anything else in a SFF (assuming you can take advantage of the 2nd CPU with whatever you're doing, which I assume you can).

    • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Saturday September 25, 2004 @10:38PM (#10352642) Homepage
      I have something to add that another post reminded me of. If memory latency is important to you (I don't know much about numerical simulations, so I can't say), then you want an x86-64 chip by AMD. Because of the on-die memory controller, the memory latency is substantially lower than on P4s (especially the high-end P4s with the huge clock speeds).

      The last thing I have to say is that, as another poster pointed out, are you stuck in the Wintel world? Because the G4 and G5s (the latter especially) are supposed to be VERY good at this kind of thing. So you could use an Xserve G5 (pretty small) or just a normal G5 (not as small). They aren't that cheap (you probably couldn't get one in your budget, but maybe used), but they should perform great. They are also true 64-bit. Also, IBM sells G5-based computers, so you're not stuck buying an Apple (you might be able to get a cheaper one that way too). Not sure about the sizes of those, though.

      Just more stuff to think about.

      • the iMac G5 [apple.com] is *almost* in this guy's budget. ;)
        • Ooh. Excellent point. They are G5s (fast, should be good for what he's doing), contain the monitor already (which a SFF PC wouldn't), and are relatively small (since they are only 2" thick). That would probably be about ideal for him. Great point!
          • While I think the iMac G5 is an extremely interesting computer, your use of the term "fast" is somewhat misleading, in that "G5" is an extremely vague term.

            The iMac G5s provide only a 600 MHz FSB, not to mention topping out at a single 1.8 GHz G5.

            The PowerMac G5, on the other hand, *starts* at DUAL 1.8 GHz G5s and has up to a 1.25 GHz front side bus per processor.

            An utterly oversimplified model based on no real data would estimate that reducing the G5 CPU to flat-panel dimensions has cost a factor of 2--4 in
      • I have a big numerical integrator that runs for minutes to weeks (depending on the problem). All calculations are done on doubles, and the memory bandwidth didn't appear to matter much.

        I don't know what caused it, but our units were in P3 cycles. One P3 cycle is worth one AMD (x86-32) cycle, and a P4 cycle is worth 1/2 a P3 cycle. A G5 cycle is worth about one P3 cycle as well (they just ported the program).

        • There is a commercial reason: Intel now wants to sell the Itanium for "workstation" use, so it cut down the number of FPUs in Pentium 4, relying only on SSE for multimedia performance. From "Athlon XP Meets P4" [tomshardware.com]:
          The picture is similar in 3D rendering (OpenGL) - the AMD Athlon XP's three FPU units helped it to outstrip the Pentium 4, with 2 FPU units. Ideally, you can employ the following equation:
          Performance = Clock Speed x Operations/Cycle
          This equation helps explain the theory behind why the AMD Athlo
    • My experience is that, on real-world FPU-intensive code, an AthlonXP or a PowerPC G3 is about twice as fast as a Pentium 4, per MHz.
      Things are different if you can vectorize your code and use Single Instruction Multiple Data - there the Pentium4 is supposed to be faster. But this was not the case for my code.

      The other point is that single-precision FP is not faster than double-precision FP, because the ix86 and PowerPC FPUs always use full precision internally - so it only makes a difference if you do
  • $80? (Score:5, Insightful)

    by jeffy124 ( 453342 ) on Saturday September 25, 2004 @09:49PM (#10352373) Homepage Journal
    Try thinking this through in terms of return on investment. If $80 is all you're going to save on a product that's around $1000, it may not be worth it, especially given what else you know is going into each product. First, it's an 8% savings, hardly a significant bargain. Second, from what it sounds like you'll be doing with it, $80 extra for an Intel chip is a very small sum for the increased performance (10%, from your description above) you'll be getting in return. Third, suppose you do find similar performance from an AMD; it might require paying for a compiler, negating the $80 savings. Finally, ask yourself if searching for a free (as in beer) AMD compiler is worth $80 of your time if you already have everything in place for an Intel.

    I think your best bet is the little extra you spend for the Prescott model.
    • Re:$80? (Score:3, Interesting)

      ...[your] best bet is the little extra you spend for the Prescott model.

      And the little extra you spend on the massive heatsink you buy to keep it cool in an enclosed space.

      A few SFF systems come with their own heatsink, most notably Shuttle XPCs with their proprietary ICE heatsink. I'd go for one of those if using a Prescott.
  • by ghostlibrary ( 450718 ) on Saturday September 25, 2004 @09:49PM (#10352376) Homepage Journal
    Save your CPU money for the compiler (e.g. PGF90, or what have you), or for development tools, and for more RAM. A good compiler will give you a better handle on numerical accuracy than a 'better' CPU stuck with a poor compiler. More RAM will keep you from being stingy with single/double precision allocations.

    Also, factor in your work time. A few percentage difference between CPUs won't count nearly as much as the 2 hours you saved because you had a good debugger, or the 4 hours saved because your editor makes it easier to write and jump around your code.

    Your accuracy will be pretty much the same; you just have to understand how computers represent floats and plan accordingly. Use accurate representations, even if they're slower, to get the numerical accuracy you need, then optimize the slow parts.

    Only optimize the stuff that runs slow. That means profile, don't just guess. You'll often be surprised by where the bottlenecks are.

    Higher accuracy usually means more memory (going with doubles rather than floats) or more work (converting to integers within your desired floating range to control floating point accuracy). CPU won't be a biggie, but having lots of RAM will help.

    If you have a choice between spending 3 months writing and optimizing code, or spending 1 month writing code that isn't optimized, think of what 2 months of runtime will do. If it takes you a while to write the code, just buy your super-accurate machine _after_ coding, when it's time to do your real runs (since chip speeds will have increased in the meantime).

    In fact, if it takes you 3 months to optimize, you'd be better off keeping the slow code, doing another project, and 3 months later just buying a faster PC to run the slow old code :)

    All this off the top of my head, hope it helps.
    • In fact, if it takes you 3 months to optimize, you'd be better off keeping the slow code, doing another project, and 3 months later just buying a faster PC to run the slow old code :)

      Wow, you must have been on the Windows development team, working on KDE and Gnome on the side. Whatever happened to writing efficient code? I miss the days of Linux 1.2.13, when a kernel could still fit on a floppy, and when window managers didn't have 500 features constantly running that you never use.
      • by samael ( 12612 ) <Andrew@Ducker.org.uk> on Sunday September 26, 2004 @04:57AM (#10353814) Homepage
        Writing efficient code is great - if what you want to end up with at the end is some really efficient code.

        If what you want is... results, then efficient code is one of many ways of getting them, and not necessarily the most efficient one.
      • I wrote:
        >"if it takes you 3 months to optimize, you'd be better off keeping the slow code, doing another project, and 3 months later just buying a faster PC to run the slow old code :)"

        innosent asked:
        >What ever happened to writing efficient code?

        There's slow, and there's efficient. Sometimes brute force and ignorance _is_ better. Here's an example.

        We had a routine using an ephemeris. It loaded in the data at 1 minute intervals, since we needed 1 minute resolution accuracy. 3 coordinates for eac
        • Save the taxpayers money by using $2,000 laptops? What about desktops at less than half that price?

          ;-)

          I know, I know... you don't own monitors, or you work underground or near a particle accelerator that destroys CRTs....
  • If so, build your own luggable case from a convenient toolbox, cabinet or whatever and use a multiprocessor outfit of either brand.

    This will be the fastest solution for i86 and should outperform a single 3500 Intel or Opteron by a considerable amount on 32-bit apps -- about a fifty to eighty percent increase can be expected from a 2-processor system. Tyan makes some interesting, relatively inexpensive SMP mobos; check them out: http://www.tyan.com

    If you must use standard off-the-shelf cases and such, the
  • If you are looking for good cheap single precision performance, the Velocity Engine is for you (IBM/Motorola G4s and G5s). The single precision performance is about 4 flops/Hz.
  • On the Intel site, if you try to download the free compiler, it asks where you heard of it. /. is one of the options.
  • by DeadBugs ( 546475 ) on Saturday September 25, 2004 @11:42PM (#10352897) Homepage
    You may want to look into the IWILL Dual Opteron SFF PC [iwill.net] It's in a small form factor design like a Shuttle XPC, but with support for Dual AMD Opterons.

    Even if you don't have the money for both CPUs right now... it's a good start, and you could add the 2nd CPU later. This would be the most powerful small form factor number cruncher.
  • I would get an Opteron148/150 or 244/246 depending on whether the work is threaded or not.
    • But why Opteron, you might ask.
      I would prefer an Opteron over an Athlon64 because of the extra reliability that ECC offers. If I remember correctly, the A64 has the ECC circuitry built in, but many motherboards do not use it. With an Opteron you know that you get true ECC and also registered memory (which is slower but more reliable).
      IMHO reliability is more important than speed in numerical apps. You could lose months of work just because of a single bit error. Opteron is 100% reliable and has been used successfully
  • by Salis ( 52373 ) on Sunday September 26, 2004 @01:50AM (#10353352) Journal
    Intel has a set of optimized mathematical libraries for all sorts of applications (linear algebra, image processing, random number generation, FFTs, etc.). Not only are they optimized for Intel systems, but they save you the time of coding it yourself.

    Intel also provides the VTune Performance Analyzer, which allows you to trace the path through your programs and determine where the bottlenecks are.

    I've used the Intel Linux Fortran compiler and I am very happy with it. Code that runs fine on my Sun workstation (950 MHz, 6 gigs of RAM) at school runs 4-5x faster on my home PC (2.8 GHz, 1 gig of RAM). It's got all the fancy optimization options, but a simple -O3 -ipo will get you 90% there.

    My two bits.
  • ICPC and AMD (Score:3, Interesting)

    by blackcoot ( 124938 ) on Sunday September 26, 2004 @02:23AM (#10353441)
    i have not tried this with intel's fortran compiler, *but*, from what i've heard, the intel c/c++ compiler produces code that performs substantially better than gcc on an amd processor. does the code running on an amd perform comparably to code running on an "equivalent" (take that term to mean what you will) intel box? i have no idea, but *if* i remember correctly, i was seeing a good 15-20% increase over gcc on the stuff i was doing targeting a p4. since amd makes chips which are, in theory, binary compatible with p4s, it may be worth a shot. another thing to recommend the intel compilers: their native support of openmp. if you do go with a dual box, you can give the intel compiler hints about how to parallelize your code to take full advantage of all n processors (don't know if you'll have ht turned on or not). hope this helps at least a little bit.
  • by jeif1k ( 809151 ) on Sunday September 26, 2004 @04:49AM (#10353803)
    I wouldn't start relying on special compilers; once you go down that road, you start putting processor-dependent features into your code, you start battling with compatibility issues, your code becomes less usable by others, and you have less choice in software from others that you can use; it all becomes a huge waste of time.

    Instead, check for yourself which system (not processor, but system) gives you the most bang for the buck using the most standard compiler you can find. If you use gcc, I believe systems based on AMD's 64-bit chips still win.

    And, realistically, 10-20% differences are not worth investing a lot of time or energy in anyway; that corresponds to a few months of progress in processor and systems development.
  • CPU? Use the GPU! (Score:5, Interesting)

    by Bazman ( 4849 ) on Sunday September 26, 2004 @05:25AM (#10353863) Journal
    Maybe you just want a better graphics card? Nowadays you can run numerical calculations on the graphics card's processor - and no, you don't get random noise all over your screen; it's not simple memory-mapped graphics! Plus it gives you the excuse to buy a machine that can play Doom 3.

    More info here: http://www.gpgpu.org/

    Whatever you do, make sure you have a properly tuned ATLAS library:

    http://math-atlas.sourceforge.net/

    I don't know if anyone has got ATLAS or BLAS to work on GPUs yet.

    Baz
  • by Piquan ( 49943 )

    Just in case anybody interested hasn't heard of it, the ATLAS library [sourceforge.net] is a C / Fortran 77 library for linear algebra (which is a significant part of scientific programming). It tunes itself at compile time to your particular processor and number of CPUs (and whatever else might be affecting your FP performance) by running tests.

    The author also has some quick n' dirty notes [utk.edu] for floating-point issues.

  • Exactly what math do you want to do?

    Go see:

    SPEC FPU results [spec.org]

    Look at the details, you'll see that different processors have different strengths depending on the tasks. To see what tasks they are you can look at:

    SPEC FPU [spec.org]

    Another thing: the Intel compiler works well for AMD too. But you may have to force it to recognize the CPU as AMD to turn on even more optimizations. Apparently you get a boost even as a non-Intel CPU, but if you disable the "Intel-only" detection [google.ca] you can get even more of a boost.
  • In terms of sheer numerical processing ability, modern GPUs leave CPUs standing in the dust. Get a top-of-the-line nVidia graphics card. Preferably PCI Express, because the big bottleneck is getting data back out of the card. The hardest part is that the code typically needs to be 'disguised' as a rendering problem, but if you use a programming language like Brook [stanford.edu] you can write in a C-like way and get amazing performance without having to touch a graphics API. One catch is precision - but many iterative algori
  • by rgbe ( 310525 )
    I spent a summer benchmarking a couple of new computers for the University of Otago Physics department. They were looking into buying a cluster for their Bose-Einstein condensate experiments, and it was my job to see where things were going slow. I found the major bottleneck was in the network. But I also made comparisons between a P4 2.4GHz and an AMD Athlon XP 2400+; the results [gamma.net.nz] are interesting.
  • Portland Compilers (Score:3, Informative)

    by XenonOfArcticus ( 53312 ) on Sunday September 26, 2004 @11:46PM (#10359581) Homepage
    Compiler support is critical. Forget GCC; it's not a high-performance compiler. Look into the CodePlay C++ compiler under Windows, or the Portland Group (PGI) products under Windows and Linux.
