Why Doesn't the Itanium Get the Respect It's Due?

happycorp wonders: "In recent years the Itanium has done well, easily beating x86 processors even at its low clock speed (1.4GHz). The supercomputer people are serious about benchmarking (no easily tricked microbenchmarks or reliance on closed-source commercial apps), so the discrepancy between the performance and the perception of this chip is striking. With a single-CPU Itanium 2 system at around $2000, the price is already reasonable, and it would come down further (and software would be ported) if the Itanium ever became a mass-market chip. Having an affordable chip one step above a Xeon or Opteron in floating-point performance would not be such a bad thing for gaming enthusiasts (or 3D artists). So, the recent article on the Top 500 supercomputers list brings up a question I've been meaning to ask: why do we see so many disparaging opinions of the Itanium processor (all those 'Itanic' jokes, etc.)?"
"It seems computing enthusiasts' sentiment is set against this processor, and its likely that it's going to be abandoned sooner or later. We'll be paying for x86 compatibility indefinitely (recall the Xeon has roughly three times the number of transistors of the ppc970 for example; but we hardly get three times the performance).

These are a couple of scores from the top 20, with the total gigaflops divided by the number of processors to obtain a per-processor speed:


rank  processor  GHz   (gflops / #procs)   speed
#5    ppc970     2.2   (27910 / 4800)      5.81
#7    itanium2   1.4   (19940 / 4096)      4.86
#10   opteron    2.0   (15250 / 5000)      3.05
#20   xeon       3.06  (9819 / 2500)       3.92

Given this, consider what a 2 or 3 GHz Itanium could do.

(fine print: I am not affiliated with the Itanium or the top500 list in any way)."
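
For anyone who wants to check the arithmetic, here is a minimal C sketch (purely illustrative, mine rather than anything from the Top 500 site; the figures are copied from the entries above, not re-verified against the list):

    #include <stdio.h>

    /* speed = gflops / #procs for the four entries quoted above.  Printed
     * values can differ from the table in the last digit, since the table
     * appears to truncate rather than round. */
    struct entry { int rank; const char *cpu; double ghz; double gflops; int procs; };

    int main(void)
    {
        const struct entry top[] = {
            {  5, "ppc970",   2.20, 27910.0, 4800 },
            {  7, "itanium2", 1.40, 19940.0, 4096 },
            { 10, "opteron",  2.00, 15250.0, 5000 },
            { 20, "xeon",     3.06,  9819.0, 2500 },
        };
        for (int i = 0; i < (int)(sizeof top / sizeof top[0]); i++)
            printf("#%-3d %-9s %.2f GHz  %.2f gflops/processor\n",
                   top[i].rank, top[i].cpu, top[i].ghz,
                   top[i].gflops / top[i].procs);
        return 0;
    }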
  • by XanC ( 644172 ) on Monday July 11, 2005 @01:16PM (#13034269)
    They should have called it the "Dangerfield".
    • I love Dangerfield. I can imagine the jokes now.

      -I told this good looking Pentium IV that I have instructions that are very wide. She said true, but the pipeline isn't long. I get no respect!

      -The other day I was doing trillions of floating point operations a second. My wife said "Honey, could you mispredict a branch or something? I'm getting sore." I get no respect!

      -My wife finally told me that she's leaving because she's tired of my architecture. I said "Baby, I can change." Then I found out that she's seeing a Transmeta processor. I get no respect!
    • by demachina ( 71715 ) on Monday July 11, 2005 @06:00PM (#13037200)
      But seriously, it gets no respect because it's a complete dog on anything other than vectorizable Fortran codes. It's inherent in the design.

      The compiler has to do a LOT of work to pack instructions into the VLIW (very long instruction word) bundles. To get max performance you need to keep the issue slots in each bundle full (IA-64 packs three instructions per 128-bit bundle, and Itanium 2 can issue two bundles per cycle). You can do that with carefully written vectorizable Fortran, with the help of a talented supercomputing-class code tuner.

      When you get to C and C++ it is nearly impossible. Pointers and pointer aliasing completely frustrate the compiler, and in general most C and C++ code doesn't have the vectorizable nature of the classic vectorized Fortran codes.
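
      Here is a tiny C sketch of the aliasing problem (purely illustrative; the function names are made up and this is not from any Itanium compiler documentation):

      /* Without more information the compiler must assume dst and src can
       * overlap, so it cannot safely reorder or software-pipeline these
       * loads and stores into wide IA-64 issue groups. */
      void scale_aliased(float *dst, const float *src, float k, int n)
      {
          for (int i = 0; i < n; i++)
              dst[i] = k * src[i];
      }

      /* C99 'restrict' promises the arrays do not overlap, which lets the
       * compiler fill the issue slots far more aggressively -- the kind of
       * guarantee that well-written Fortran array code gives for free. */
      void scale_restrict(float *restrict dst, const float *restrict src,
                          float k, int n)
      {
          for (int i = 0; i < n; i++)
              dst[i] = k * src[i];
      }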

      The IA32 emulation is inherently much slower than a Pentium or Athlon at the same clock, and those chips have much higher clocks than the Itanic. So any application whose binary you carry over from an IA32 box is a real dog: it takes advantage of none of the chip's strengths and hits all of its weaknesses.

      IA64 has a place in some supercomputing applications that exploit its strengths. On others I wager x86_64 is cheaper (higher sales volume and easier to manufacture), faster, and easier to develop code for. On any C or C++ code IA32 and x86_64 will win hands down.

      With Itanium, Intel was betting that bumping up the clock on chips would run out of gas sooner than it did. They thought you would have to go to VLIW to keep increasing performance. Unfortunately clocks kept going up enough that the high-end AMD and Pentium parts left it in the dust. AMD also developed x86_64, which gave people a 64-bit address space (needed for some apps) at PC prices and with high clocks.

      IA64 is doomed anywhere other than niche supercomputing apps, and it's struggling there against Power, x86_64, etc.
  • by Anonymous Coward on Monday July 11, 2005 @01:17PM (#13034286)

    The chipmaker has released two new Itaniums for two-processor servers as part of its effort to eliminate price premiums on the chip.

    Intel announced on Monday two new Itanium processors for two-processor servers, another step in the company's efforts to eliminate price as a barrier to Itanium acceptance.

    The 1.4GHz Itanium 2 with 3MB of cache is designed for servers in clusters. The new chip will provide about 25 percent more performance and cost much less than the initial Itanium optimised for clusters, which came out last year, said Jason Waxman, director of multiprocessor platform marketing at Intel.

    The second new chip, a 1.6GHz Itanium 2 with 3MB of cache, is optimised for higher performance in general-use two-processor servers, he said.

    Waxman reiterated that Intel is working on several technologies that will eliminate any price premium on Itanium by 2007 and thereby allow its performance advantages to, hopefully, blossom.

    "The price/performance balance will be heavily in favour of Itanium," Waxman said.

    With the focus on price, the Itanium melodrama is once again reaching a turning point. After several years of delays, the chip family debuted in 2001 to poor reviews and negligible customer acceptance. A second version of the chip that appeared in 2002 dramatically improved performance but failed to spark the market.

    Itanium finally began to gain acceptance in 2003 with Madison, a new version of Itanium 2 that substantially improved performance again and lowered the cost. Intel shipped about 100,000 Itaniums in 2003, compared with only around a few thousand for the first two years. Itanium volume is expected to double this year, chief executive Craig Barrett said in February.

    But in 2004, Intel announced that it would come out with a version of its Xeon chip that runs both 32- and 64-bit code. Xeon and Pentium chips typically run 32-bit code. Itanium runs 64-bit code, which, among other advantages, lets a computer maker pack far more memory into a computer.

    Itanium, however, requires completely different software to work well, a factor that has hindered adoption. Part of the appeal of the Opteron chip is that it can handle larger memory loads in 64-bit mode on essentially the same software base.

    Lowering the cost of Itanium servers won't eliminate the software issue, but it will begin to create an environment in which greater acceptance could occur, which in turn could cause software developers to gravitate to Itanium. Analysts and PC makers have viewed this theory with various doses of scepticism, but the range of opinion is generally substantially less negative than it was 18 months ago.

    Price drops have already had some effect. In 2002, a two-processor Itanium server cost about $18,000 (£9,859). With the new chips, a similarly configured system can sell for less than $8,000, while basic one-processor Itanium servers will go for just more than $2,000.

    Some of these price cuts have come as a result of Moore's Law, which predicts that the number of transistors on a chip will double every 18 months. But Intel has also expanded its product line to better suit the economic realities of two-processor servers. The company also designs and partly manufactures many of the Itanium servers on the market, which cuts independent engineering costs.

    To lower the price further, Intel will begin to create products and add features to Itanium so that Itanium servers can be made out of many of the same components as Xeon servers. In 2005 and 2006, Itanium servers will be able to use the same memory or other components of Xeon servers, Waxman said.

    In 2005, Intel will also come out with two different chipsets for Montecito, the next major version of the chip. One chipset will wring maximum performance out of the chip, Waxman said, while the other will allow server makers to insert Montecito into their Madison-based servers, thereby cutting down independent design efforts.

    By 2007, Intel will
  • compatibility (Score:5, Insightful)

    by Anonymous Coward on Monday July 11, 2005 @01:17PM (#13034290)
    Because Intel tried to force everyone to jump on the 64-bit bandwagon at once, while Windows didn't even support it yet, without backward compatibility with existing 32-bit software. It's a good design, it just doesn't (didn't?) fit well with the mass market at the time of the release.
    • Re:compatibility (Score:4, Informative)

      by hackstraw ( 262471 ) * on Monday July 11, 2005 @02:07PM (#13034833)
      Itaniums do run 32bit applications. At least for Linux.

      However, if you're running a 32-bit OS on a 64-bit machine, something is not right.
      • Re:compatibility (Score:5, Insightful)

        by TykeClone ( 668449 ) * <TykeClone@gmail.com> on Monday July 11, 2005 @02:09PM (#13034857) Homepage Journal
        Sounds like the days of DOS and Windows 3.1 - a 16 bit operating system on a 32 bit platform.
      • Re:compatibility (Score:5, Informative)

        by prisoner-of-enigma ( 535770 ) on Monday July 11, 2005 @04:11PM (#13036174) Homepage
        Sure, an Itanium will run all your existing 32-bit stuff...in compatibility mode, which means you get performance akin to a 300MHz Pentium-II on your $2000 CPU. Remind me again why I'm supposed to buy Itanium?

        But to return to seriousness again for a moment, the Itanium isn't pitched at mainstream anymore, and it's debatable whether it ever was. It's an entirely new ISA -- and a very good one at that -- and software developers just didn't see a good reason to jump on it when cheap x86 CPU's were selling like hotcakes.

        Intel would've loved to force the entire industry to move to IA64 years ago. If it had done so before the Athlon XP ever hit the scene, it's possible the chip giant could have pulled it off. However, with the advent of the Athlon XP (and MPs as well), if Intel abandoned x86, AMD would be there to pick up the pieces, giving customers the option of (a) continued use of their paid-for apps and paid-for OS's on a cheap, fast x86 chip or (b) loss of all practical use of their 32-bit apps and OS's, plus total rewrites and recompilations of all core software bits, all on a $2,000 CPU. It's quite clear why Intel didn't try to do such a stupid thing.

        So, on the one hand, we can thank AMD for giving us cheap, fast CPU's that run pretty much whatever you want these days. On the other hand, we can thank AMD for keeping us stuck on x86 to begin with, for without AMD we'd almost certainly all be on IA64 today. But, since I like competition, I can say I'm extremely glad things turned out the way they did. IA64 would've been the death-knell for AMD and any other kind of competition, and Intel would be milking us for all we're worth today if it could.
      • Re:compatibility (Score:3, Informative)

        by Miguelito ( 13307 )
        Itaniums do run 32bit applications. At least for Linux.

        Yeah, and the speed sucks ass. Even Intel recommends against running any 32bit code on the Itaniums.

        I think the reason it's not doing as well as they hoped, and compared to the benchmarks, is because, in real-world performance, they're not that great. We've got 13 Itanium boxes in house, all of them cost huge $ (we can't get the cheaper low-power ones, we need speed above all else), and their usage levels over the last few months have bottomed out.
    • Re:compatibility (Score:5, Insightful)

      by drgonzo59 ( 747139 ) on Monday July 11, 2005 @02:20PM (#13034977)
      Their hardware might be really good, but the days of every hardware company making its own OS and applications are long gone. Software is just as important. So now hardware companies have to release products that will run the existing software and have room for future improvement. When Intel released the 64-bit Itanium it was still living in the 80s, thinking it controlled the computer market. Also, in the late 80s and early 90s there wasn't as much software around, so a lot of companies could afford to switch to a new platform; today it is much, much harder to do.

      I think AMD has clearly won the market in terms of the consumer 64-bit processor. And I can buy a dual-core AMD today, but I couldn't get a dual-core Intel offering for a good price.

    • Re:compatibility (Score:5, Insightful)

      by man_of_mr_e ( 217855 ) on Monday July 11, 2005 @02:53PM (#13035332)
      It's not just compatibility, though that's also a big issue. The problem is that the compilers for the Itanium just aren't that mature. It's the same reason the PPC sucks so bad on a lot of benchmarks.

      Hand optimized assembly will give you screaming fast results. Unfortunately, you can't build modern applications that way and you end up having to rely on the compiler to optimize for you. On the x86, the compilers are amazingly efficient these days by contrast.

      If you've got a 64 bit database, and a 64 bit OS and a 64 bit middleware, what more do you need? You don't need to run photoshop on it. Compatibility is only marginally an issue on servers.
    • Actually there are several more reasons all related to compatibility.
      1. It was slower than a P3 running 32 bit code.
      2. Required a "brilliant" compiler to get good speed.
      3. Half-hearted Windows support.

      So it became a bit player for those that needed really fast floating point and would write custom software to get it.
  • by ratta ( 760424 ) on Monday July 11, 2005 @01:18PM (#13034294)
    the dead ones were always much better :)
  • Brand issues? (Score:4, Insightful)

    by darth_MALL ( 657218 ) on Monday July 11, 2005 @01:18PM (#13034295)
    I have certainly noticed a general move away from Intel in the past few years. I think they may have had a run of bad press and serious competition from other manufacturers lately.
    They just aren't the juggernaut they used to be. There was a time when they built it and people came. I presume choice is what's keeping the sales down.
  • Follow the herd! (Score:5, Insightful)

    by bwalling ( 195998 ) on Monday July 11, 2005 @01:18PM (#13034296) Homepage
    Why do we see so many disparaging opinions of the Itanium processor (all those 'Itanic' jokes, etc.)?

    Because people repeat what they hear. Many people here only know what has been said on Slashdot about the Itanium. They've never used one. MrDicker64 said it was crap, so it must be!
    • by pdbogen ( 596723 ) <(tricia-slashdot) (at) (cernu.us)> on Monday July 11, 2005 @01:22PM (#13034346)
      Hey, bwalling is right! We shouldn't just take what other people say and assume it's true!

      Wait...

      *brain asplodes*
    • by MrDicker64 ( 898965 ) on Monday July 11, 2005 @01:33PM (#13034481)
      I protest! For the record, I have *never* publicly stated that Itanium was "crap". I reserve such sentiments exclusively for products from Microsoft.
    • Re:Follow the herd! (Score:5, Interesting)

      by chrismcdirty ( 677039 ) on Monday July 11, 2005 @01:36PM (#13034501) Homepage
      I only ever called it the Itanic because one of my professors, who works (or worked) at Intel and researched the architecture very extensively in order to document it, also called it the Itanic. According to him, it was basically what everyone else has been saying so far: great idea, bad execution.
    • Re:Follow the herd! (Score:5, Interesting)

      by jarich ( 733129 ) on Monday July 11, 2005 @01:42PM (#13034556) Homepage Journal
      Many people here only know what has been said on Slashdot about the Itanium. They've never used one.

      I worked at a startup that was building a database ~70 gigs in size. It took 2 months to build said database. Lots and lots of very small lookups and inserts.

      Memory was our bottleneck. More ram equals more speed. So we spent BIG bucks and bought a quad Itanium with 12 or 16 gigs of memory (I forget exactly how much it had).

      The Itanium was slower than a dual X86 with 2 gigs of memory! And not just a little slower. We spent weeks trying to get the database optimized.

      Why does no one respect the Itaniums? Intel made a slow chip. Then they released the sequel. I've already paid my dues on that line once. I'm not playing this round.

      • by operagost ( 62405 ) on Monday July 11, 2005 @02:03PM (#13034792) Homepage Journal
        Itanium I, right? You didn't mention when this was.

        Your anecdote probably has little to do with the current processor.

      • by Hythlodaeus ( 411441 ) on Monday July 11, 2005 @02:49PM (#13035288)
        Memory was our bottleneck. More ram equals more speed.

        Don't blame Itanium that you picked the wrong chip for your needs. A little back-of-the-envelope calculation could have saved you a lot of money. With your 70 GB database and 2 GB of RAM, assuming there wasn't much locality in the lookups, you have about a 2.85% chance that your next lookup is already in memory. Up it to 12 GB and you have 17.14% - still not much, so either way your main bottleneck is going to be the bandwidth of your memory system. It was no secret that the first batch of Itaniums used 133 MHz RAM while DDR RAM for x86 was up to 266 or maybe even 333 MHz by that time. Itanium's niche has always been floating-point intensive applications, which yours was not.
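
        If you want to redo the envelope math yourself, here is a trivial sketch (purely illustrative; it assumes uniformly random lookups with no locality, and the helper name is made up):

        #include <stdio.h>

        /* Chance (in percent) that a random lookup finds its data already
         * cached in RAM, assuming uniform access and no locality at all. */
        static double hit_chance(double ram_gb, double db_gb)
        {
            return 100.0 * ram_gb / db_gb;
        }

        int main(void)
        {
            printf("2 GB of a 70 GB database:  %.2f%%\n", hit_chance(2.0, 70.0));
            printf("12 GB of a 70 GB database: %.2f%%\n", hit_chance(12.0, 70.0));
            return 0; /* roughly 2.9% and 17.1% */
        }
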
        • by Anonymous Coward on Monday July 11, 2005 @04:31PM (#13036379)
          I have to concur. System and memory bandwidth are often overlooked when designing a system.

          "It's got 2 xeons and 4 GB of ram! :P"
          It sounds great to management.

          As my colleague noted above, db operations are IO bound. This means you have to get data from point a to point b very quickly, whether from memory or disk.

          To do that job you need high memory and system bus clock speeds, so there is no vacuum happening at the CPU.

          There is a point of diminishing returns with adding memory. Sometimes adding too much memory can slow things down, considerably.

          The CAS latency increases as does the latency due to memory management overhead.

          With a db you face the same exact issue that professional audio engineers do. Getting lots of data to the cpu, and back out somewhere.

          Anecdote:
          My buddy, a pro DJ, got a dual Xeon 2.4 GHz (I think it was) with 4GB of memory for producing music. At around 47 audio streams: snap, crackle, pop. CPU usage was around 10% or so. It was a $12,000 mistake on his part.

          I walked in with a $700 AMD 64 3200+ (that's 2GHz) with 1/4 of the memory and one CPU, and I loaded up a project with 134 audio streams and it played like butter.

          Both were running windows xp 32, both were running Steinberg Cubase SX.

          Xeon specs:
          FSB 533
          DDR 133 on 4 1GB sticks

          AMD 64 spec:
          FSB 800
          DDR 400 on 1 1GB stick

          Read the specs for VIA K8T800 chipset and compare it to any Xeon chipset. This time period was a year ago Christmas. Read up on how the memory architecture works for both CPU's.

          Database tests were similar. Just about any IO bound process will produce a similar result. Music, video, db, etc.

          It isn't that the xeon sucks, it's a computations per second centric architecture. Unfortunately for intel, they focused on clock speed when they should have been removing architecture bottlenecks which would allow people to take advantage of all those cycles. The G5 has similar issues but not as bad.

          It's all about the memory and bus architecture... The sad uninformed people say (pinch airflow from nose so you sound geeky) "It's a 64 bit processor, you need a 64 bit OS to take advantage of it, period".

          What they fail to realize is the 64 bit memory and bus architecture happen *below the HAL*. the OS doesn't even see it, let alone need to be 64 bit to take advantage of it. I politely let them flap their gums and went out and bought one anyway, then proved them wrong.

          Computationally = intel, tho that gap is narrowing
          IO = AMD64 or opteron.

          I don't care if you are running windows 95, amd64 will be faster for IO bound stuff, than any 32 bit architecture is capable of even getting close to.

          If you are crunching spreadsheets, word processing or videogaming, *generally*, intel is better, tho that gap is getting smaller. It will likely disappear when 64 bit OS's and their apps catch up.

          For DB, working with large files, shunting lots of streams around the mobo, AMD 64 *smokes* intel 32 right now, 32 bit apps notwithstanding. There is simply no contest.

          l8,
          AC
    • by jav1231 ( 539129 )
      Yeah, it's called research. Sorry, I don't have $12K to fork out for a machine that will run it. So yeah, I rely on what I read. You read and read and read and if you get a chance to see one you look for yourself. If not, you go with the preponderance of evidence. So, how does the one YOU bought perform?
    • by pla ( 258480 ) on Monday July 11, 2005 @03:05PM (#13035473) Journal
      Many people here only know what has been said on Slashdot about the Itanium.

      You only need to know three things about the Itanium to pretty much automatically rule it out:

      1) Heat (and the related, power consumption). Not a joke, not a rumor. The Itanium makes the Prescott core look cool and energy-efficient by comparison.

      2) Not designed to run the software in use by 99.5% of the PC market. Great for a custom supercomputer, okay for some servers, complete shit for normal desktop use.

      3) Price. They hope to make it competitive by 2007? How long has it existed now, at 3-10x the price of the highest end x86 CPU? And someone actually needs to ask why it hasn't hit mainstream use yet?


      That about does it for me, anyway. Did I miss something obvious here? I don't see this as a case of the rumor mill damning it, just its own HUGE shortcomings to offset its single good point (namely, good performance for a very limited set of uses).
    • Re:Follow the herd! (Score:3, Interesting)

      by codeguy007 ( 179016 )
      Don't forget that an IBM employee called Itanium a science project when IBM dropped their line of Itaniums.

      The problem is and will remain that you don't get enough performance for what you pay for from Itanium.

      When I can build an Athlon64 X2 cluster for under $500,000 that would place somewhere around 150 on the Top 500 list, it's hard for Itanium to compete pricewise.
  • Itanium2 (Score:5, Interesting)

    by Anonymous Coward on Monday July 11, 2005 @01:18PM (#13034297)
    I had to study the chip in one of my EE classes. The technology in it is really really impressive. I love the memory architecture provisions!
  • by Thornkin ( 93548 ) on Monday July 11, 2005 @01:18PM (#13034304) Homepage
    I think the big problem is that it cannot run x86 software very quickly. Most software that people want to run in the mass market is precompiled, binary x86 software. That stuff just does not run well on the Itanic. That, combined with the fact that the mass market still doesn't really benefit from a 64-bit address space means that the Itanium was a more expensive, slower processor. It's no wonder that it didn't sell.
    Early versions also had problems with heat. Where I work we have some Itanic workstations and in the winter, if we were chilly, we literally turned them on to help warm up our offices.
    • by hackstraw ( 262471 ) * on Monday July 11, 2005 @02:04PM (#13034809)
      I think the big problem is that it cannot run x86 software very quickly.

      Yeah, that is why semi trailers don't get respect like Dodge Neons. They use diesel fuel instead of unleaded!

      My point is that you're buying a fast 64-bit system in order to run your old 32-bit programs slowly. Wrong tool for the job.

      I've got 65 Itanium processors downstairs. They are fast and reliable for high memory bandwidth floating point calculations, which is what we use them for. They may be a disappointment with running IE or Outlook, but for crunching numbers they are great. I have yet to try an Opteron but will in the next couple of weeks. From what I understand those too have become great at high memory bandwidth number crunching, but I'll wait for the numbers vs the marketing speak. Now, Itaniums do suck in the power consumption and heat dissipation department.

      Itaniums get such a bad rep here on Slashdot because it's cool to do so. Itaniums are made by the "big guy", Intel. If they were made by AMD they would not get the same rap as they do.

      The other big thing against the Itaniums is market need. A generic x86 that you can throw in the trash and replace for about $1k if there are any problems is sufficient for 99% of the servers out there, if not even preferred. Now, what other market would want a fast 64-bit architecture with high memory bandwidth -- databases. Sun and Oracle fill this void. Well, except for the fast and high memory bandwidth part, but Oracle+Sun is a proven combination with years of experience. Solaris does not run on Itaniums. Linux does (flawlessly), but even Oracle+Linux is not that widely adopted. I have no clue about Windows' state on an Itanium. I see no real use to run Windows on an Itanium, but someone else might, though I doubt it's very common.

      Intel has some more to go with the low-voltage Itaniums because they are capped at 1.3GHz, but they are working on that. Also, Intel has dropped the price of these guys considerably. This too was an issue with Itaniums, but the price has dropped by about half over the years.

      IMHO, Intel should continue on the power management issues and price and market these chips more for number crunching. Their performance on the top500 site is impressive, but even if all of the top 500 computers used 4,000 Itanium processors each, that would only be 2,000,000 processors total, and a super computer that size is not purchased very frequently.
      • by bani ( 467531 ) on Monday July 11, 2005 @02:54PM (#13035336)
        Itaniums get such a bad rep here on Slashdot because its cool to do so. Itaniums are made by the "big guy", Intel. If they were made by AMD they would not get the same rap as they do.

        bullshit. itaniums get a bad rep on slashdot for any number of reasons, and they cannot all be distilled down to "because it's hip and trendy to bash itanium".

        slashdot would still be bashing itanium if it were from amd.

        few people like paying $1000+ for a cpu alone, for example.

        itanium is a niche processor filling a tiny tiny tiny market. and it is already hitting scaling issues.

        itanium also has yet to deliver on most of its performance promises. just about the only one it's delivered on so far is memory bandwidth :-)

        intel gambled itanium's future on a number of risky and unproven technologies (e.g. VLIW). in order for itanium to succeed, ALL of these technologies had to succeed. instead what happened is that virtually NONE of them did.

        it's quite telling when a lot of the intel engineers and scientists involved with itanium are calling it a huge mistake. the p4 guys aren't impressed either :-)

        itanium is doomed longterm. most of intel's itanium partners have long since bailed on the architecture, most projects for itanium have been killed off (including windows), which guarantees itanium has no longterm future.

        some lessons from itanium may be rolled into other intel mainstream products, but as a product itself itanium's days are numbered. itanium has been a huge black hole sucking billions of r&d from intel while amd has been constantly chipping away at intel's market share with x86_64. itanium has never turned a profit, in over a decade of development on the damned thing. it's only a matter of time before stockholders demand itanium be hauled out to the barn and given both barrels.

        most people who have studied itanium closely conclude itanium is an r&d project that should have remained in the labs as pure r&d, and never turned into a product.
    • It's not popular because previous generations were Hot, Expensive, and Hard to Program. Adoption was slowed mainly by #2, as it was a significant investment for many research groups to even put one on the floor for testing purposes. The Hard to Program meant that you had wonky versions of Linux, or HP-UX with new compilers, as your OS options, which just increased the resistance.

      I ran an Itanium-2 cluster (and had briefly an Itanium-1 loaner) and if the compilers were stable, the first generation easil
    • In addition, real-world performance sucks without tons of cache and memory bandwidth. In fact, the original Itanium's entire bus and cache subsystem were redesigned for the release of Itanium 2, doubling the bus width, increasing the L2 cache size and tweaking the latency on the L3.

      No surprise, the Itanium 2 performs much better than the original Itanium, but its name was already soiled by the mediocre performance of the original.

      In addition, all that cache and high-performance bus architecture means th
  • Two things: (Score:4, Interesting)

    by grahamlee ( 522375 ) <graham@iamlUUUeeg.com minus threevowels> on Monday July 11, 2005 @01:18PM (#13034305) Homepage Journal
    One, it gets no respect because nobody uses it. Where is the kudos for the transputer? Why does nobody love the Apple ///? Second, yes it beats the x86 into the ground. I'm not surprised. Now show me how it compares against a real CPU. We've already seen that the Itanium is competing in a different space (supercomputers), so show me how it compares with the MIPS that SGI have ditched in its favour. I wouldn't be surprised if an n GHz MIPS stuffs an n GHz Itanic into the floor.
    • Re:Two things: (Score:5, Informative)

      by Anonymous Coward on Monday July 11, 2005 @01:35PM (#13034491)
      Now show me how it compares against a real CPU. ... I wouldn't be surprised if an n GHz MIPS stuffs an n GHz Itanic into the floor.

      Guess what? It doesn't. Itanium really does outperform MIPS and if you'd care to look it up yourself, you'd see. Itanium and POWER have been roughly neck and neck in vying for the top performance spot since the Itanium 2 was first released. Each new processor from either vendor bests the other.

      As for your disparaging remarks about X86, consider that it offers the highest performance outside of Itanium and POWER on floating point and overall keeps pace on integer code. Topping X86 is, believe it or not, a real feat. Top of the line AMD64 and Intel chips are engineering marvels as far as processors go. MIPS certainly can't touch them.

      It may be fashionable to dis X86 but if you look at the numbers and the microarchitecture, you'll be hard pressed to find anything significantly better.
  • by Proc6 ( 518858 ) on Monday July 11, 2005 @01:18PM (#13034307)
    Probably because when it mattered a single CPU Itanic was more like $12,000 and not $2,000. After fucking up all their marketing and delivering strategies no one wants one anymore.
    • by RelliK ( 4466 ) on Monday July 11, 2005 @02:24PM (#13035032)
      Out of curiosity, I just checked itanic prices at dell. The cheapest configuration for a single (dual capable) 1.5GHz itanic with 2GB RAM and 36GB SCSI HD is over $17K. For comparison, a similarly configured 3.6GHz Xeon (also dual capable, 2GB RAM) is just over 5K.

      The article poster is simply trolling. Where the fuck can you get an itanic for $2000? The cpu *alone* costs that much! The article that the moron linked to confirms this: "The 1.4GHz Itanium 2 comes out Monday for $1,172 in 1,000-unit quantities. A 1.6GHz version comes out in May for $2,408 in similar quantities." (last paragraph)

      Need I give any more reasons for why it's not popular?
  • by drhamad ( 868567 ) on Monday July 11, 2005 @01:19PM (#13034314)
    Hundreds and hundreds of products have been killed or permanently crippled because their first versions were terrible. Itanium is the same thing. With the public perception of the Itanium still the same as it was for the first (pathetic) iteration of it, how are you going to convince your manager to spend the money to get it? Benchmarks only go so far.
    • It was worse than just a flop; it sank a lot of companies' bankrolls waiting for this technology -- meanwhile AMD delivered a fine-performing chip for way, way less, and let's face it, the big market in desktop PCs is consumer electronics.


      Intel was so late in delivery that all the high performance workstation people abandoned the Itanic.

    • Hundreds and hundreds of products have been killed or permanently crippled because their first versions were terrible.

      There's the answer! Now it's clear what has to be done to make this processor a success!

      The first version was terrible, you say? Well, then simply apply the one and only strategy that always guarantees that an absolutely horrible first version becomes a great market success.

      Put a sticker on it with the name "Microsoft".

      --
  • Here are.. (Score:5, Informative)

    by th1ckasabr1ck ( 752151 ) on Monday July 11, 2005 @01:19PM (#13034320)
    a few reasons [zdnet.co.uk].
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Monday July 11, 2005 @01:19PM (#13034323)
    Comment removed based on user account deletion
    • by team99parody ( 880782 ) on Monday July 11, 2005 @01:34PM (#13034487) Homepage
      From a business point of view, it was quite the success.

      When Itanium started, Intel was absolutely nowhere in 64 bit and high-end computing. Thanks to Itanium, over half Intel's competitors simply walked away from the market with little more than a few press releases from Intel.

      Consider that at the time, you had Alpha (Dec), PA-RISC (HP), MIPS (SGI), and Sparc as leading 64-bit computing platforms.

      HP in its infinite wisdom was suckered the worst - giving up its own leadership position just to be strung along for many years in Intel's PR bluff. And Wall Street so loved the "ooh, intel's story's so aWsUM that even HP is giving up" narrative that SGI spun off MIPS and gave up on the high-end space; and Dec->Compaq->HP undervalued Alpha and it went away.

      This has to be the most successful come-from-zero-to-wipe-out-half-the-market story in the history of computing. How can it be considered a failure?

      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
        • It's a failure because Intel shrank the market and doesn't sell any chips. Reducing competition is only half the battle.

          GP poster was trying to say that, for Intel, shrinking the 64-bit market was a strategic goal. Why?

          1. If you as a purchaser of servers really didn't need 64 bit, and could do with many 32 bit machines, guess who you're buying from now
          2. After some time, when no one is around, Intel comes by and re-invents 64 bit (and everyone adores them for it)... this part of the plan was majorly fux0red
  • 64-bit Gaming (Score:3, Insightful)

    by Anonymous Custard ( 587661 ) on Monday July 11, 2005 @01:20PM (#13034324) Homepage Journal
    http://techworthy.com/PCUpgrade/SeptOct2004/64-Bit -Gaming.htm [techworthy.com]

    Because for Itanium compatibility they'd have to port everything over to the Itanium proprietary instruction set. You can see how eager they've been to do that for Macs, so guess how likely they are to port it for Itanium.
  • Inertia. (Score:3, Insightful)

    by AsbestosRush ( 111196 ) on Monday July 11, 2005 @01:21PM (#13034336) Homepage Journal
    Inertia would be my answer to this question: inertia of the technological kind keeps x86 on the desktop, even with the 64-bit extensions.

    Inertia keeps Microsoft on the desktop, even though it is low-hanging fruit for crackers.

    Inertia can be a good thing... in this case, it's a bummer. I can safely say that my next game rig will be A64 powered, simply because of... inertia.
  • but my understanding, from the rumor mill, says that the Itanium was too little, too late and was partially aborted in an effort to get it out of the lab. It was a joint HP/Intel effort that was supposed to be the "next big thing" in processors, but dragged on so long in the lab (more than 10 years) that, by the time it was released, contemporary competitors already had nearly comparable horsepower and an established mindshare.
  • by Anonymous Coward on Monday July 11, 2005 @01:22PM (#13034349)
    I may be entirely wrong, but I believe the dislike for the Itanium stems from the fact that you can't compile any decently optimized code for it. Apparently, even Intel can't create a good compiler/linker and toolkit for creating machine code that makes good use of EPIC. Even though the processor itself is more efficient and faster, the same thing compiled to machine code running side by side with an Opteron or any other x86-64 chip will see the x86 win. If somebody could come up with a decent compiler/linker that provided full EPIC optimizations, they would be bangin, but they don't have it so we don't use it.
  • by Surt ( 22457 ) on Monday July 11, 2005 @01:23PM (#13034363) Homepage Journal
    The people who work on scientific applications take performance seriously. They put a lot of effort into optimization. The itanium architecture is hard to optimize for, and the compilers just aren't there yet for the general case. So you wind up with a disparity between the performance in scientific applications and general purpose applications.

    Other reasons itanium can't compete:

    1) Compare the performance of itanium with xeon/opteron in running native x86 code.

    2) Compare the costs of building real end user systems.

    3) Compare the availability of windows xp drivers.
  • A few reasons: (Score:5, Informative)

    by NaruVonWilkins ( 844204 ) on Monday July 11, 2005 @01:23PM (#13034366)
    One, market penetration. Windows *kind of* works on Itaniums. Code has to be compiled specifically for the platform - they're not very good at x86 code through WoW.

    The BIOS replacement they use is not functional. It's very difficult to set up disks for use, and if you lose the disk that the BIOS data is kept on, you're screwed. As far as I know, there is no way to make that fault-tolerant short of manually storing the contents of that partition on another drive.

    Support for the Itaniums has been terrible. The HP systems are riddled with hardware problems, and their support personnel (at the enterprise level) can't seem to grasp that these machines don't operate quite like any other workstation.
  • by rtkluttz ( 244325 ) on Monday July 11, 2005 @01:24PM (#13034370) Homepage
    People don't want a processor whose main purpose in life was to artificially refresh Intel's control over much of the intellectual property associated with the processors. AMD is getting too close, so they change everything and hope to charge royalties.
  • by GGardner ( 97375 ) on Monday July 11, 2005 @01:24PM (#13034374)
    While the IA64 has always had great floating point performance, there's an awful lot of us out here who don't need fast FPUs -- e.g. code development, database, web serving, network I/O, etc. Sure, IA64 is a winner for the teraflop-oriented supercomputing community, but for the rest of us, integer performance matters more. And for price/performance, x86 and x86_64 beat IA64.
  • by unsinged int ( 561600 ) on Monday July 11, 2005 @01:24PM (#13034377)
    to compile for Itanium. Speaking as a compiler researcher, Itanium is great for generating research papers because there are all sorts of things that you can do from a compiler perspective. The problem is, outside a research environment, someone has to implement a lot of the ideas in an Itanium compiler to make it useful. Unfortunately, most of the stuff in the Itanium research papers isn't easy to implement and most of what gets put into commercial compilers are the easily implementable ideas.
    • by Jeffrey Baker ( 6191 ) on Monday July 11, 2005 @01:53PM (#13034691)
      Yeah, nice CPU, difficult for software authors. I read a paper [usenix.org] recently wherein the authors managed to reduce L4 microkernel message passing (up to 8 bytes) to 36 clock cycles, which is far faster than any other platform. But this was done by hand, and the compiler blurted out a routine that required 508 cycles. The gulf between what you can really do with an Itanium, and what normal software writers can do with it, remains huge.
  • Feel good factor (Score:4, Insightful)

    by jellomizer ( 103300 ) * on Monday July 11, 2005 @01:25PM (#13034382)
    Why doesn't anything get the respect it is due? Because people don't want to give it respect. The Unix people go, "Well, Sun UltraSparc (or any other of the 64-bit Unix platforms) has been 64-bit for many years before the Itanium." The Apple crowd goes, "Well, the PowerPC is now 64-bit" (although this is changing, and may possibly give Itanium some respect). The Windows users are afraid of Itanium because it may break a lot of compatibility in their legacy apps. The Linux users are afraid of complete Intel dominance and put their development efforts into AMD 64-bit chips. It is a state where you see the old king dying and this is your only opportunity to get a change in government before the king's son takes power. Why doesn't FreeBSD get the respect it deserves, or why doesn't Python get the respect it deserves? The winner is not always the best or even close to the best; the winner is often the one that people feel good about.
  • Why as why (Score:3, Interesting)

    by D3 ( 31029 ) <daviddhenningNO@SPAMgmail.com> on Monday July 11, 2005 @01:25PM (#13034388) Journal
    May as well ask why Linux/Mac/*BSD/etc. doesn't get the "respect it deserves." There is no real answer.

    My personal thought is that price:performance was not in line with other choices available to the end consumer.
  • by SirCrashALot ( 614498 ) <{jason} {at} {compnski.com}> on Monday July 11, 2005 @01:26PM (#13034401)
    My systems professor told us that they chose to create a very complicated assembly language that, while it may be efficient, makes programming unnecessarily difficult. If people don't want to program on your platform, you have a problem.
  • Itanium (Score:5, Informative)

    by myrick ( 893932 ) * <amyrick@@@gmail...com> on Monday July 11, 2005 @01:28PM (#13034418) Journal
    Itanium is definitely a brilliant architecture in many ways, and lessons will have to be learned from it some day. It takes a little history to know why it's called "Itanic," however.

    The Itanium was designed to change the way processors worked. Most processors today are some sort of dynamically scheduled behemoths that are capable of detecting instruction collisions on the fly, and reordering instructions for optimal parallelism and thus performance in the light of those collisions. Itanium takes a completely different approach. It is an extremely wide processor that has absolutely no collision detection or reordering. All of the work in this respect is placed on the compiler's shoulders. In theory, a good compiler could make this chip very, very fast, and in reality, as you see, this can be the case. So why did it fail? Intel hyped the hell out of this processor, and then missed their release date by a full two years. That is microprocessor suicide in the land of Moore's law. So, when Intel delivered a chip too late that failed to perform the way they marketed it to, the chip died. In recent years, Itanium has really come around, but it's hard to escape your past in this industry.

    Other relevant problems for adoption are tied to this need for a good compiler. Making a compiler as smart as it needs to be for Itanium to live up to its potential is not cheap, and Intel is not known for just giving away such technology. I'm sure the fees to license Intel's compiler are nontrivial, and that does not encourage development. Realistically, Itanium will never become a desktop chip just because of the massive adoption effort that would go into such a switch.

    One thing to note, however, is that other chips aren't that far away. You suggest that a 2GHz or 3GHz Itanium would be incredibly fast, and I agree, but I seriously doubt Intel can ramp it that fast. Also, the Opteron specs you show are for 2.0GHz, and I believe Opteron is up around 2.6 or 2.8 GHz nowadays.

    Ultimately, Itanium is a great design, but wrapped in a poorly executed initial implementation. It does teach a good lesson that compilers can really help improve chip performance, and down the road, architectures that take this into account may reign supreme. But I wouldn't look to Itanium to do any more than instruct us for the future. She is not a desktop chip.

  • Many reasons.... (Score:5, Informative)

    by loony ( 37622 ) on Monday July 11, 2005 @01:28PM (#13034422)
    Well, I can talk only for myself but...
    • Windows on itanium is a joke... What software are you going to get running well there? We tried it and 80% of the software we needed to certify a new OS wasn't there.
    • HP-UX is better off but still - if you have any legacy software at all in your system you're screwed.
    • Linux is doing alright - but if you take an Itanium box running Linux and pit it against a new Xeon with the same number of CPUs, the Itanium looks like a dog...
    • Most business apps are integer processing - itanium doesn't look that great in the int benchmarks...
    • I'm frankly just tired of hearing about it... For 7 years we've heard that Itanium is going to be the future and all - it hasn't happened yet and I doubt it ever will at the pace it's moving. Why port to a platform that already feels dead before it even took off?
    • You can't compare a Xeon and an Itanium box by the per-CPU... We already support 5 different platforms - why would I want to add a 6th one if the performance gains are going to be pretty meager...

    Peter.
  • Two words (Score:5, Insightful)

    by overshoot ( 39700 ) on Monday July 11, 2005 @01:29PM (#13034429)
    No applications.

    Microsoft apps are nonexistent, and open-source apps tend to have crappy performance due to the fact that IA-64 depends overwhelmingly on compiler optimization. Developers can use Intel's compiler, but it requires work to use with most Linux systems (the only other platform that supports IA-64 besides MS, AFAIK).

    Net result: no applications => no uptake, QED.

    Egg, chicken, all that.

  • FLOPS isn't enough (Score:5, Insightful)

    by timster ( 32400 ) on Monday July 11, 2005 @01:29PM (#13034431)
    You have floating-point listed there, which is great for science I'm sure, but where are the integer numbers?
  • by stienman ( 51024 ) <adavis@@@ubasics...com> on Monday July 11, 2005 @01:32PM (#13034471) Homepage Journal
    The itanium is an amazing architecture with so many performance boosting upgrades that it would have blown everything out of the water.

    If it came out on time.

    It was so late that by the time it came out it was still better than existing processors, but not by a large enough margin to justify its cost.

    As the clock speed goes up, and as the other processors find their limitations and drop out of the race, the Itanium will look better and better. There is, however, a large investment in time and software that must be made before it becomes truly useful. It is unlikely that MS is going to support more than one architecture simultaneously for the desktop or server as it tried to do for x86/alpha.

    The big marketing push and the number of companies signing on to the good ship Itanic, coupled with the constant pushback of the release date, caused Intel to lose a lot of the press attention they should have received when it did come out.

    It'll be interesting to see what happens over time, especially as Intel wants it to be a server chip.

    Of course, this could all be a big leadup to the announcement that Apple is going with the Itanium.

    -Adam
  • by bADlOGIN ( 133391 ) on Monday July 11, 2005 @01:37PM (#13034512) Homepage
    Intel figured it was big enough to set the trend by making a radical change. It was wrong and paid the price when the market didn't follow. IBM thought it was big enough to set the trend by making a radical change with Micro Channel Architecture (replacement for the ISA Bus). It went nowhere and helped kill IBM's dominance of the x86 PC world it created. The fact that Intel didn't bet the farm and lose everything is either good planning or dumb luck on their part.
    • I think the biggest thing that doomed the Micro Channel Architecture (MCA) was the fact that IBM did not bother to license the technology at very low cost.

      If IBM had done a proper job of licensing MCA at a low cost then not only would MCA have replaced the old ISA bus, but alternative bus connection architectures like EISA, VL-Bus, PCI, AGP and PCI Express would have never happened! This is because we know now that MCA could be easily expanded all the way to 64-bit bus connections and support very fast bus
  • Infanticide (Score:3, Interesting)

    by william_w_bush ( 817571 ) on Monday July 11, 2005 @01:41PM (#13034549)
    Itanium was killed by Intel's megahertz marketing. Why get an expensive 1.4GHz Itanium when you can get two 3.0GHz Xeons for less? The AMD-Intel 1GHz race hit it even harder, since Intel had to totally sell out Itanium's higher IPC for the P3's higher frequency, which meant the P3 could be brute-forced to equal or greater performance than the new, non-mainstream Itanium architecture.

    In my opinion the P4 was the worst thing ever to come out of any microprocessor house in the last 20 years, as it not only compromised microprocessor design for the horrible and short-sighted goal of mainstream marketing, but essentially caused a large part of the current TDP crisis the industry is in now, and reinforced our mentally handicapped reliance on single-threaded programming.

    The humor in the Itanic label has nothing to do with the chips; it has to do with Intel trying to have it both ways: Intel chips are the most powerful by the only metric that matters, frequency, and IPC and design efficiency matter little; but also, "oh yeah, and we have this amazing chip that is so powerful but runs at half the clock speed." It was a blatant contradiction in marketing messages.

    For f*ck's sake, they called their double-clocked ALU "NetBurst"... seriously, why not add an onboard memory controller and call it "SuperBandwidthMaker", which uses its amazing technology to increase the speed of your dial-up connection...

    Yes, if you market to customers by treating them as idiots, expect them to choose the stupid product, and ignore you when you claim to offer another product that "no really this is a good chip, not like that other one which we said was the fastest", which is actually better for you in the long run, because you can set a new foundation for improvement.

    When amd came out with the opteron at 64-bit, and with surprisingly competitive performance while still running legacy apps at faster speeds, how do you compete with that?

    Here's hoping they do manage to resurrect the Alpha lines. IBM even went a little over to the marketing dark side with the G5, trading frequency scaling for TDP, but they usually manage to rebalance the two after a few years of revisions.
  • by vlad_petric ( 94134 ) on Monday July 11, 2005 @01:44PM (#13034572) Homepage
    1. transistor count. You do need more transistors for decoding x86 into micro(mu)-ops, but in the end your L2(3) cache is gonna be >50% of your chip area. Interestingly enough, Itanium chips are overloaded with L3, and in fact, the first chip to break 1 billion transistors is an Itanium 2 chip. The good performance of Itanium comes a lot from its shitload of caches; nothing's preventing Intel from loading the P4 with caches though.

    2. x86 is bad/ugly/dirty/whatever, however Itanium is not exactly clean either. The stacked register file is a good example of that. I personally prefer x86-64, which takes the evolutionary approach: fixes quite a few of the problems of x86, while still retaining the core features.

    3. x86 chips do out-of-order execution; Itanium, OTOH, relies on the compiler to schedule instructions and bundle them together. The main problem here is that doing instruction scheduling statically is much, much harder than doing it dynamically. An average program has a basic block size that is less than 10 instructions. It's very hard to find parallelism within such small basic blocks, so to be efficient at all, you need to do profiling to build traces/hyperblocks. In fact, profiling on the Itanium can give you a performance boost of 30%. However, profiling is hardly desirable from a software developer's perspective.
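
    A contrived little C example of the small-basic-block problem (purely illustrative; the function name is made up and this is not from any Itanium documentation):

    /* Each arm of this branch is only a couple of instructions.  A
     * dynamically scheduled x86 core resolves the direction at run time;
     * a static IA-64 scheduler must either if-convert (predicate) both
     * arms or rely on profile data to pick a hot trace to pack into
     * wide bundles. */
    int classify(const int *v, int n)
    {
        int pos = 0, neg = 0;
        for (int i = 0; i < n; i++) {
            if (v[i] >= 0)
                pos++;   /* hot or cold?  Unknown at compile time. */
            else
                neg++;
        }
        return pos - neg;
    }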

  • Itanium problems. (Score:3, Interesting)

    by stnuke ( 898973 ) on Monday July 11, 2005 @02:02PM (#13034784)
    Well, there are many reasons the Itanic failed. It was a great architecture, a neat idea. Shift all of the intelligence in the chip up to the compiler, execute in-order, optimised code, get rid of deep bypassing, etc. Generally, get rid of the extra 50% of the chip that's dedicated to turning an instruction stream into a series of vectors.

    Note, it *was* a neat architecture.

    Then, everybody got involved. Imagine a roomful of architecture, compiler, and systems PhDs, each with their own pet idea. And this chip had them ALL in it. Anybody remember the i432? In a way, this was the i433.

    BUT. This meant a complete break with the current codebase, and in the final analysis Intel didn't have the guts for it, especially once their hopes for compilers weren't being borne out (at one point, Intel was a HUGE player in the market for compiler PhDs). So the guys at Intel decided to add x86 hardware compatibility to this. Then, since their compiler plans weren't working out, they added out-of-order execution.

    Now, all of these things had crazy interactions. Suddenly, who knew what it was doing? Then the power... all those units, executing all those dead instructions - it ran HOT. Then the fact that x86 compat and o-o-o were a gigantic boat anchor in terms of chip real estate, driving the cost through the roof, pretty much sealed its fate. It became a "server processor". And if you can get 7 or 8 P4s for the price of one Itanium... well, your cluster is better served with those 7 or 8 P4s.

    Pride goeth.
  • by RobKow ( 1787 ) on Monday July 11, 2005 @02:04PM (#13034808)
    The decision to move instruction-level parallelization from runtime (in the CPU, hardware, expensive) to compile-time (software, cheap on a marginal cost basis) ended up being a poor one for general-purpose computing. You save silicon not having all the fancy instruction scheduling, reordering, etc., but you lose the knowledge of the runtime environment the hardware has when you move it into the compiler.

    Sure, there's a lot more processing you can do off-line in the compiler, but you also have a lot less information about how the code is actually going to be executed at compile time.

    Theoretically, JIT compilers (Java and .NET bytecode to native code and Transmeta x86 to native VLIW) can do a better job because they can profile the running code and get a better handle on likely execution paths. These would be a good match to the VLIW Itaniums to compensate for them lacking that "complex" hardware to keep the execution units supplied.

    The Itanium2 makes a good supercomputer chip because you can optimize your code very carefully and you've got a good idea what the data looks like and what branches will be taken, etc. at compile time.
  • by Jhan ( 542783 ) on Monday July 11, 2005 @02:13PM (#13034895) Homepage

    Let me tell y'all a little story.

    Back in '94-'95 I was in the third year of the Computer Science course at the Royal Institute of Technology, which meant I had to choose a specialization. I chose "Computer Systems", i.e. processors, buses, caches and what-not.

    This was a very exciting time to be studying processors since (for a fleeting moment) Intel processors were the absolute worst among the serious combatants.

    Yes, you read that right. The Alpha was (of course) an unstoppable juggernaut, but through a freak of development schedules the new MIPS had managed to outstrip the latest Alpha.

    After MIPS and Alpha we had PA-RISC, SPARC, PPC and then finally the pathetic, lowly Intel x86.

    Alpha had strong plans of totally replacing the x86 by offering Alpha-based x86 emulation that ran x86 code faster than the fastest x86.

    But then, Intel announced the Itanium.

    • It will be 64-bit (all the above architectures were, of course, already 64-bit).
    • It will be multi-processor (all the above architectures had cache coherency logic to allow 8+ processors).
    • But, most of all, it will have THIS!, and I mean <blink>THIS!!!!</blink> much performance! (Intel pulls wildly insane numbers out of an orifice of your choice).
    ...and the monster thing will ship in 1998.

    Apparently, all the CPU makers sat down and discussed this, and agreed that "They may be last right now, but they have piles of cash. They could do this. They really could."

    So, what did the competition do?

    • Alpha tried to stay aggressive, but didn't sell enough, so they tanked. Bought by Compaq, then HP, then sweet nothingness (see HP).
    • SGI and MIPS didn't know what to do. They made some noises about shifting to the Itanium... Maybe. While still developing the MIPS... Just a little. A very little. Now, as Netcraft confirms, SGI is dying. :-)
    • HP promptly shat their pants, threw their PA-RISC processor platform (which was third fastest in the world at the time) out the window and partnered with Intel, making plans to replace all HP/UX PA-RISC machines with Itaniums. ...which is what they have been doing for some time now, and losing customers in droves for it.
      Because of acquisitions, they also happened to be saddled with the best processor ever made, the Alpha.
      Stick with dying Intel... Develop best processor. Hmm...
      Well, you all know where HP is going.
    • Sun, I'm sad to say, didn't ruin the SPARC platform because of Itanium, but just by being its usual ineffectual self.
    • The PPC consortium tried to press on, and did quite well. Motorola was too obsessed with embedded chips, but even now, I personally think IBM's "G5"s are very good, and believe they have it in them to produce several new generations of kick-ass chips.

    And then what happened?

    Intel didn't deliver... and didn't deliver... and didn't deliver some more.

    Year after year passes...

    When the Itanium was finally delivered, it was obvious that every other platform could have kept up, if they had just kept developing their processors!

    But they didn't and now they sleep with the fishes.

    Conclusion: By making their Itanium announcement, Intel slew four out of five serious competitors. It doesn't really matter if the Itanium sucks. In fact, the Itanium would be Intel's greatest success even if they had never delivered it.

  • Wrong question (Score:4, Interesting)

    by redelm ( 54142 ) on Monday July 11, 2005 @02:40PM (#13035194) Homepage
    Itanium gets _exactly_ the respect it is due. People pay as much attention as they want. Your question really should be phrased: "Why doesn't Itanium get the respect I think it's due?"

    That question answers itself: You think differently from most people. Highly specialized, hand optimized massively parallel predictable crunching seems to matter to you. It doesn't to most people. You're in a minority. Get used to it.

    BTW, i860 and Alpha suffered from basically the same problem.

  • by dlapine ( 131282 ) <<lapine> <at> <illinois.edu>> on Monday July 11, 2005 @02:47PM (#13035260) Homepage
    I have a nice cluster with ~1800 Itanium 2s. It's fast, the CPUs are stable, and it runs on Linux. I have a lot of hands-on experience with it.

    A couple of points that seem to have been missed when looking at why the Itanium is less widespread:

    • each CPU module is quite large: the unit's footprint is about 2" x 5", and it's about 2" high
    • That area includes a voltage regulator and the passive cooling fins
    • It doesn't include any of the necessary active cooling
    If you add these physical factors to the points already made about heat, power and the EFI BIOS, it's obvious that Itanium won't run in your mini-ATX desktop or laptop. This isn't a slam on the design, as it was never designed to run in those form factors, but it's hard to see how any CPU today is going to see wide use if it isn't available for dual use in desktops and servers. Once you eliminate the desktop market (and I'm going to lump the workstation market in with the servers), the number of places you can sell these processors drops considerably.

    Once you start adding in the lack of Windows support for Itanium, the strides that the x86-64 architecture has made in capability, and the low numbers of current adopters, it's not looking like Itanium will ever gain widespread acceptance.

  • by Nom du Keyboard ( 633989 ) on Monday July 11, 2005 @02:55PM (#13035348)
    Having an affordable chip one step above a Xeon or Opteron in floating-point performance would not be such a bad thing for gaming enthusiasts (or 3D artists).

    There are two reasons why this chip is not for me:

    1: I'm not buying one before the software is ported to it -- and at a comparable price to its PC equivalent!
    2: It may be a step above an Opteron for floating point, but is it still a step above a dual-processor Opteron, which I can buy today for less money than a single-processor Itanium?

    As for the "Itanic" jokes (all of which are way off-base, since heat output of any H.M.S. Itanic would melt any iceberg long before it could do any damage), blame The Register. I saw them use the term long before anyone else.

  • Simple (Score:5, Informative)

    by bored ( 40072 ) on Monday July 11, 2005 @02:56PM (#13035370)
    You're quoting FP performance. The "integer" (aka general purpose) performance isn't nearly as competitive. This is because it's a static VLIW machine, and it's hard to write a good VLIW compiler. Writing fast FP code is simpler. Then there is the fact that the Itanic is 3x the hardware of the machines you're comparing it to. Bigger caches, and all that. Your take on clock rate is also simplistic. In order to get the Itanic faster they would have to create a longer pipeline, which would more than likely decrease the IPC and keep the processor from scaling linearly.
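
    A rough C sketch of that FP-vs-integer split (illustrative function names, nothing Itanium-specific; assume a compiler that unrolls and software-pipelines where it can):

    #include <stddef.h>

    /* FP kernel: loads from different iterations are independent, so the
     * compiler can unroll and, given permission to reassociate, split the sum
     * across several accumulators to keep the FP units busy. */
    double dot(const double *a, const double *b, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    /* "Integer" code: every iteration waits on the previous load of p->next,
     * so no amount of static scheduling can fill the issue slots. */
    struct node { struct node *next; long key; };

    long count_keys(const struct node *p, long key)
    {
        long hits = 0;
        for (; p != NULL; p = p->next)
            hits += (p->key == key);
        return hits;
    }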

    Basically it was pointless. We don't need yet another processor targeted at the same market that POWER/SPARC64/PA-RISC and now x86-64 etc. are in.
    The whole arch is a mess; in my opinion it's actually probably worse than x86, and this is evident in how long it took to get the thing out the door. For a processor based on the idea that superscalar wasn't easy and wouldn't perform, it's beginning to look like the Itanic is actually in that boat. It's a dead arch: there are orders of magnitude more x86-64 machines out there even though the Itanic had a two-year lead. Why should I use Itanic when there is a larger software base for PPC/POWER and it's multivendor?
    POWER is cheaper, faster and more mature, and it can barely compete with x86 in the desktop area. ARM has pretty much taken over the smaller chores (cellphones, PDAs, MP3 players, etc.) and smaller chips like the 8051 clones sit below that.

    Give it up, it was stupid, Intel was wrong. My opinion is that Itanic was a marketing plan to lock up the processor market. If we had all been forced to run Itaniums back in '96-'98, we would all be buying Intel chips for everything. Instead, Intel had to release the P-Pro to keep ahead of Cyrix/AMD, only they never got far enough ahead to kill AMD, release the pressure, and transition everyone to Itanic, where they hold all kinds of patents and copyrights on the instruction set. Plus they couldn't make the thing work and it slipped for 5 years.

  • by turgid ( 580780 ) on Monday July 11, 2005 @03:02PM (#13035448) Journal
    For about the millionth time:

    itanium (itanic) is a poor design for anything other than numbercrunching. It is a relic of theoretical supercomputer designs that were popular in the late 1970s. itanic shines on floating-point benchmarks, and is mediocre at best on everything else.

    Since the late 1970s, we have had RISC and then superscalar RISC, some now with elements of VLIW. This provides better real-world (general-purpose) performance using substantially less power and fewer transistors than itanic.

    Modern RISC processors (including x86 chips, which are RISC internally) can reschedule execution of instructions dynamically (i.e. at run time). itanic cannot. It relies on the compiler to schedule the code. It is only possible to schedule code well at compile time for very well-defined problem sets, i.e. floating-point-maths-intensive programs like numerical simulations. NASA currently owns 5% of the world's itanic processors (in a single machine).
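
    One concrete reason the well-defined cases are tractable: simple data-dependent branches can be rewritten as straight-line code that a static scheduler handles fine. A hedged C sketch (illustrative only; on ia64 a compiler would typically if-convert the second form into predicated instructions rather than a branch):

    #include <stddef.h>

    /* Branchy form: the data-dependent comparison breaks up the basic block. */
    double clamp_sum_branchy(const double *a, size_t n, double limit)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (a[i] < limit)
                s += a[i];
            else
                s += limit;
        }
        return s;
    }

    /* Branch-free form: both candidates are evaluated and one is selected,
     * which an if-converting/predicating compiler can schedule as one block. */
    double clamp_sum_select(const double *a, size_t n, double limit)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += (a[i] < limit) ? a[i] : limit;
        return s;
    }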

    itanic was intel's attempt to kill the 64-bit RISC market, putting all of its competitors out of business. Like all great megalomaniacal plans, it has failed. It was a marketing-driven processor, and a failure.

    It can't compete with clunky old UltraSPARC IV on server-oriented workloads. Even that market, which isn't big enough to sustain Sun and its processors, is orders of magnitude bigger than the market in which itanic has any relevance.

    For big servers nowadays, you have a choice between Opteron and POWER.

    In science and engineering, you're often better with something like Opteron, POWER or something fancy from Cray, NEC or Fujitsu. itanic runs hot and consumes too much electricity.

    Has anyone ever seen one? I haven't. There was one at a show once on the Red Hat stand, but they wouldn't let me performance test it... and they wouldn't even let me see it because it had over-heated.

    itanic is about the most expensive turkey in computing history.

    • just a little detail from a purist assembler coder:
      in real supercomputing you do not want your processor to 'auto-schedule' or rearrange your code.

      in the end, real special code is still hand-optimized, since neither a compiler nor any built-in rescheduling algorithm can actually know what I really want to achieve.

      Maybe I just want to accept the half-ready value because I don't care about part of it.

      Maybe I want to put one instruction way ahead to prime a set of registers for what is coming.

      A processor which
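
      For illustration, a minimal C sketch of the "put one instruction way ahead" idea, assuming GCC's __builtin_prefetch (presumably an lfetch on ia64); the function name and the 8-iteration prefetch distance are arbitrary:

      #include <stddef.h>

      double sum_strided(const double *a, size_t n, size_t stride)
      {
          double s = 0.0;
          for (size_t i = 0; i < n; i++) {
              /* Ask for a line several iterations early so it is already in
               * cache by the time the real load needs it. */
              if (i + 8 < n)
                  __builtin_prefetch(&a[(i + 8) * stride], 0, 1);
              s += a[i * stride];
          }
          return s;
      }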

  • by Colonel Panic ( 15235 ) on Monday July 11, 2005 @03:43PM (#13035872)
    Given your little performance comparison chart:
    rank processor ghz (gflops / #procs) speed
    #5 ppc970 2.2 (27910 / 4800) 5.81
    #7 itanium2 1.4 (19940 / 4096) 4.86
    #10 opteron 2.0 (15250 / 5000) 3.05
    #20 xeon 3.06 (9819 / 2500) 3.92


    Maybe the question should be: why doesn't the ppc970 get the respect it deserves? I suspect that the ppc970 has a much smaller die than the itanic. Sure, the clock speed of the ppc is 0.8 GHz higher, but who cares if the ppc costs 1/2 to 1/4 as much? Also, it would be interesting to know how much power each of them uses.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...