
What's Next in CPU Land after Itanium?

"I work for a major research organization. Of late a lot of the normal big computer companies have been visiting and preaching the gospel of Itanium. My question to them, and to the assembled masses here at Slashdot is what happens next when Itanium is real? My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)? In other words, has Intel finally done in most of their customers by obliterating all the other CPU choices (except IBM Power4 [& friends G4, et al] and AMD Hammer) and turned the remainder of the marketplace into raw commodity goods? Lest you defend the other CPUs... Sparc is dead, Sun doesn't have the money (more than US$1B we'll guess) to do another round. PA-RISC is done, as HP has given away the architecture group. MIPS lacks funding (and perhaps even the idea people at this point). Alpha is gone too (also because of the heavy investment problem no doubt). Most other CPUs don't have an installed base that makes any difference, especially in the high end computing world. So what's next? I don't like the single track future that Intel has just because it is a single track!"
  • AMD's 64-bit K8 (Score:2, Informative)

    by EricKrout.com ( 559698 ) on Monday February 18, 2002 @05:42PM (#3028579) Homepage
    AMD has a 64-bit K8 chip in the works right now.

    I searched and managed to find an old comparison of the K8 vs. Itanium and a few other chips.

    The article (page 5 of 5 of a review) is here:
    http://www.sysopt.com/articles/k8/index5.html [sysopt.com]

    EricKrout.com :: A Weblog On Crack [erickrout.com]
  • Re:SPARC is dead? (Score:4, Informative)

    by Anonymous Coward on Monday February 18, 2002 @05:46PM (#3028612)
    Actually, I was just transferred to the UltraSPARC 4 project at Sun [sun.com] in Burlington, MA. I don't know of the official release date, though I've heard rumors of early 2003. I'm amazed at the quality of FUD in this "article" and that it actually made it to the front page of Slashdot.
  • by Anonymous Coward on Monday February 18, 2002 @05:47PM (#3028615)
    and it is a 64-bit chip that can also run 32-bit programs for backward compatibility...then I think Itanium will have a run for its money. Especially since IBM released their Power4, which (to my knowledge) is the first to have 2 processor cores on one die...something DEC was planning for the Alpha. It would be nice to see the G5 have something along the lines of the Power4 for a sub-$5,000 price. Of course, now that Intel owns all the DEC stuff, they can glean the good parts of DEC technology and graft them onto their own. I am hoping that the Apple / PPC Linux world will be able to get the rest of the world to move away from x86. But...I also hoped the Alpha would survive. Who knows, maybe even stuff from Starbridge Systems [starbridgesystems.com] might be the next best thing....
  • by Anonymous Coward on Monday February 18, 2002 @05:50PM (#3028632)
    I wish people would stop using the term 'FUD' [tuxedo.org] as a synonym for 'I don't agree with this bullshit'.
  • by wizzy403 ( 303479 ) on Monday February 18, 2002 @06:00PM (#3028697)
    Umm... Given how well Sun is entrenched in the financial world, I think you saying the platform is dead is just plain FUD. Check with the IT department at any major financial company and ask them how many 4500 or better systems they have. (I know, I used to work for one) And yes, a lot of them are upgrading to the new UltraSparc III machines.

    And for those folks doing hard research (or special effects companies with lots o' money) SGI is still king. Despite what nvidia would like us to believe, SGI's not going anywhere anytime soon for big 3d rendering projects.
  • Re:I don't get it. (Score:5, Informative)

    by HiredMan ( 5546 ) on Monday February 18, 2002 @06:02PM (#3028702) Journal
    Sparc is dead - FUD.
    You're right. The new 1.05GHz Cu chip is pretty frickin' fast - and speed has NOT always been Sun's selling point.

    PA-RISC is done - FUD.
    Not true. HP is moving to IA-64 - even their boxes are starting to be wired to ship with either PA-RISC or McKinley.
    McKinley is essentially an HP design... PA has lived longer than expected but that's just because IA-64 is so late.

    MIPS lacks funding - FUD.
    Actually, this is probably true. SGI is not a well company, and they will probably need to move to a new chip architecture soon. There are R14K rumors, G5 rumors - who knows?

    Alpha is gone - FUD.
    Nope - it's gone. Intel bought it and swallowed it whole. No new development, no new generations; it'll only live on in some parts of IA-64.

    This guy works for either IBM or Intel. Probably IBM, as he favors the Power4 and G4. Don't take him seriously!

    I can't say where he works, but he has a point. Maybe you should look at the recent server chip landscape before dismissing this guy's claims.

    =tkk

  • by jayslambast ( 519228 ) <slambast@@@yahoo...com> on Monday February 18, 2002 @06:03PM (#3028716)
    You have a very valid question, but your statement,

    "At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)?"


    Is missing something. HP, Compaq and Dell provide more than the hardware. They provide services that go along with the HW. They use the hardware to suck you into using their services. While small companies can build these systems on their own for cheaper, the larger companies are the ones that need to outsource the kinds of things that HP, Compaq and Dell's services provide.

    Also, it's kind of silly to think that these IA-64 systems will be able to be built for $2k each, given the cost of similarly performing SPARCs and IBMs. Intel is hoping their backwards compatibility and clout will push ISVs into programming for their systems. Once they have those vendors in their camp, the chip and server prices will go up again.

    And finally, most people that would need a 64-bit solution will probably need multiproc systems. OEMs will be able to provide the small systems, but once you go past the 4-8 way space, there really isn't a cheap way of scaling up any higher (and btw, clustering is really only a solution for tasks that don't involve time-sensitive sharing of large amounts of data between processors). Which is where HP, Compaq, Fujitsu, NEC, and IBM will be with their high-end systems. I doubt I will ever see Dell release a system with more than 8 IA-64 processors.

    Of course, only time will tell what will happen next. Oh, one last thing: the guy who posted should be informed that HP did not sell any processor guys; they sold some chipset guys to Intel. I'm surprised that someone in a processor research group would not know this. Check out:
    http://slashdot.org/comments.pl?sid=22319&threshold=0&commentsort=3&tid=118&mode=thread&cid=0
  • Re:I don't get it. (Score:2, Informative)

    by Cyberdeck ( 15901 ) on Monday February 18, 2002 @06:09PM (#3028743)
    Simple. If nobody is making or selling them, then they are dead.

    SPARC: Who buys them anymore? For *every* application in the last year that I have heard about, management has stated that they will buy a commodity PC rather than a Sun workstation because of price/performance ratios.

    PA-RISC: Not enough info to comment on.

    MIPS: I know one hardware guy who is trying to build an embedded MP3 player using a MIPS CPU. That's it. I've never heard of anybody using them commercially.

    Alpha: Compaq stopped making them last year (go and check their old press releases for March), nobody makes systems based on them, nobody buys them that I have ever heard of. In fact, I can't remember ever seeing one.

    Others: Programming the 8-bit CPU that runs the engine in your car can be fun if you like machine code, but it's not really satisfying.

    Anything whose sales are decreasing or zero, or that is not being manufactured anymore, is dead. The Z-80 is dead after the longest run of any CPU out there (26 years!) but it is gone. Alpha is gone. SPARC is going. MIPS is going. The world will be a poorer place for their loss.

    (If you have evidence to the contrary, please post. I'd like to be wrong.)

    -C
  • by javiercero ( 518708 ) on Monday February 18, 2002 @06:17PM (#3028779)
    They already tried that. Guess what? NT was supposed to be multiplatform! And geez do you see any of the non-X86 versions out there? Nope....

    In fact, NT was developed on MIPS. And M$ is in no way interested in having the CLR running on non windows based platforms. CLR is not designed to make code machine-independent, but rather location-independent. M$ still wants you to be using Windows, it just wants to have a tighter grip on you no matter where you go.

    Why anyone would even think about adopting .NET is beyond me.
  • by Anonymous Coward on Monday February 18, 2002 @06:18PM (#3028781)
    Anyone who has actually used an Itanium knows that they are slow. Here are the base integer and floating point SPEC benchmarks so that you can see for yourself how slow it is compared to a high-end PC.

    The benchmarks are from:
    http://www.spec.org/osg/cpu2000/results/cpu2000.html

    Dell Precision 730 (800MHz Itanium)
    CINT2000 314
    CFP2000 645

    Dell Precision 340 (2.2 GHz Pentium 4)
    CINT2000 790
    CFP2000 779

    As you can see the Itanium sucks at integer applications. Check the table and you'll see even a Dell 700MHz Pentium III system beats it!

    In short the current Merced based Itaniums suck and are extremely overpriced. Even Intel and HP have said to wait for McKinley, the next IA-64 chip.
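    For a rough sense of scale, here is a minimal sketch (plain Python) that just computes the ratios from the two rows quoted above; the scores are the only inputs, nothing else is assumed:

      # SPEC CPU2000 base scores copied from the listing above
      scores = {
          "Itanium 800MHz (Precision 730)":   {"CINT2000": 314, "CFP2000": 645},
          "Pentium 4 2.2GHz (Precision 340)": {"CINT2000": 790, "CFP2000": 779},
      }

      itanium = scores["Itanium 800MHz (Precision 730)"]
      for name, s in scores.items():
          # ratio > 1.0 means the system is faster than the Itanium box
          int_ratio = s["CINT2000"] / itanium["CINT2000"]
          fp_ratio  = s["CFP2000"]  / itanium["CFP2000"]
          print(f"{name}: {int_ratio:.2f}x integer, {fp_ratio:.2f}x FP vs. the Itanium")

    Run as-is, that prints roughly 2.5x integer and 1.2x FP for the Pentium 4 box, which matches the "sucks at integer" point above.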
  • by jproudfo ( 311134 ) on Monday February 18, 2002 @06:45PM (#3028916)
    Give me a break.

    Anyone who sees the recent Sun announcements (re: Linux) as the end of SPARC or Solaris, clearly doesn't know anything about the business world or about Sun.

    Yes, Sun has made an announcement to start supporting Linux. This is no big surprise, especially after the Cobalt acquisition.

    This doesn't mean that they are switching to Intel or giving up on the SPARC architecture.

    SPARC is far from dead. All you have to do is talk to anyone within Sun to see the U4 and U5 roadmaps. Sun firmly believes in their architecture and has spent, and will spend, the R&D to continue to develop it.

    Plus, the install base of these technologies is much too large for them to just give up on them.

    Look at HP, for example... Here is a company that is part of the engineering process for Itanium. They've already committed to use Itanium on their higher-end servers, but they aren't completely giving up on their PA series CPUs (yet). All of their new systems take both.

    No company wants to alienate the majority of their install base. :)

  • by Pharmboy ( 216950 ) on Monday February 18, 2002 @06:53PM (#3028969) Journal
    Yes. Yes I do. I still have 4 servers (IBM Personal Server 325) that each have two of the Pentium Pro 200's in them. One is running Win2k, the others are running Linux (RH 6.2). They each have 4 SCSI drives (UFW, 40MB/s, 2.1GB). They each have around 35,000+ hours on them (4 years x 24/7). I have not replaced them because I have only had one go bad (it's now spare parts).

    I kept them because the quality of the PPro is UNREAL. I have not replaced them because the quality of PIIs and PIIIs is, well, OK at best, and Xeons are simply overpriced for what they are.

    Yes, I am just strengthening your point, to make a point. Those of us in the "smaller" world of serving will take durability over speed, reliability over clock ticks. What will get me to switch to AMD or Itaniums is that warm fuzzy feeling you get when you go to sleep, and don't have to worry about driving into town (30 minutes away) at 3am to reboot (or switch over) a server.

    I did just order a Dell dual P3/1000, but it won't replace any of those machines until I have 90 days with it in place. (Average uptime on the Linux IBMs is over 6 months.)
  • by sirwired ( 27582 ) on Monday February 18, 2002 @06:53PM (#3028973)
    No, you can't build something like a Netfinity (oops. er - xSeries eServer) in your garage for $2k. Built into a high-level xSeries is:

    1) Hot-pluggable power supplies, drives, and PCI slots.
    2) Built-in hot-plug SCSI
    3) Integrated service processor for diagnostics (essentially a computer within a computer)
    4) Extremely well-tested box. (Very important to do integration testing on high-end units.)
    5) Very nice, serviceable, rack-mount chassis
    6) Crap-load of PCI slots
    7) Light-path diagnostics. (Lets somebody without training figure out what's broke.)
    8) IBM Director
    9) Well-designed cooling that would be impossible to achieve with a garage box. (Do you know how to do airflow modeling?)
    10) Support.

    The list goes on...

    Yes, they will become a commodity, in that you will be able to get them from multiple major manufacturers, but don't expect to build it yourself in your basement anytime soon.

    SirWired
  • by Anonymous Coward on Monday February 18, 2002 @07:11PM (#3029072)
    Yes the Power4 is a monster processor. But what makes the Power4 so much more of a monster is the way it handles I/O.

    IBM has said at every step of the way that Power4 goes further than pure chip architecture. When they say Power4 they want you to think not just about the chip but the whole I/O scheme.

    In fact the product managers/Power4 designers at IBM have insisted that what makes a P690 special isn't so much the chip but the capacity to move enormous amounts of data around the system in a highly configurable manner.

    Of course only time will tell, but it seems that a P690 can do some pretty fancy dancing with that huge optional external level 3 cache if it's configured the right way.

    Pure clock speed was not the primary factor in Power4 design. Overall system I/O was. They have implemented so much new technology into this design that I'm convinced that it will give anything else out there a serious hiding when the new compilers reach maturity.

  • by HuguesT ( 84078 ) on Monday February 18, 2002 @07:15PM (#3029089)
    Hi,

    The speed of light is 3×10^8 m/s. In a nanosecond (10^-9 s), light travels 30 cm, not 1 cm like you wrote.
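    Spelled out, that is just (a worked restatement of the arithmetic above, in LaTeX):

      d = c \cdot t = (3 \times 10^{8}\ \mathrm{m/s}) \times (10^{-9}\ \mathrm{s}) = 0.3\ \mathrm{m} = 30\ \mathrm{cm}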

  • Re:compilers (Score:0, Informative)

    by Anonymous Coward on Monday February 18, 2002 @08:25PM (#3029399)
    You make some good points, but please, take a physics class or read a physics book. Electrons travel nowhere near the speed of light. Not even close. The real problem is gate speed and idle/on current in the transistors. Multiply that by, what, millions of them? This is why, when Intel et al. find a process to make the channel widths thinner, they cheer and throw parties.
  • by BadlandZ ( 1725 ) on Monday February 18, 2002 @08:34PM (#3029428) Journal
    Sparc is dead, Sun doesn't have the money (more than US $1B we'll guess) to do another round

    Someone remind me to post a link back to this story in a month or two when Sun announces their faster processors with solved ecache solutions...

  • by spinlocked ( 462072 ) on Monday February 18, 2002 @08:34PM (#3029431)
    Fud, fud, fud. I can't speak for the other companies, but Sun can easily afford to fund R&D on the next-generation SPARC chip; they've got $6 billion cash in hand [sun.com], not counting investments, and have had for over 2 years. BTW, the current generation is UltraSPARC III; UltraSPARC IV is just a fabrication improvement. Work is already underway on UltraSPARC V's design. Sun's crown jewels are SPARC/Solaris; when Sun stops working on their own OS/CPU/server platform, it's time to stop investing in them.
  • Re:I don't get it. (Score:2, Informative)

    by javiercero ( 518708 ) on Monday February 18, 2002 @08:36PM (#3029437)
    Yeah, PS1 and PS2 use MIPS. Cisco uses MIPS. MIPS is one of the key players in the embedded arena. Don't worry though, the guy has no idea what he was talking about, like a large portion of Slashdot's posters. They are a bunch of idiots w/o any real engineering experience....
  • by Christopher Thomas ( 11717 ) on Monday February 18, 2002 @08:49PM (#3029496)
    4X AGP is a 32-bit 266 MHz bus. That's more throughput than possible with PCI.

    Unless you buy into Intel's PCI-X, which is 64/133.

    And most graphics cards are not limited by bus bandwidth with *any* flavour of AGP (see the various Tom's Hardware benchmarks). The usual limit is fill rate for new cards, and lack of geometry processing for old cards (assuming you're playing a new game). Textures are stored on-card by any sane game, so the only thing going across the bus is lists of triangles.

    AGP doesn't have contention with other devices on the bus so it doesn't have to do any logic for mastering or controlling and can allocate all its clocks to doing a data transfer.

    While this would be an issue for very short data transfers, graphics cards will likely be transferring large batches of data. This is done in burst mode, which gives one transfer per clock.

    Why would you want PCI? The only advantage PCI gives is that you can hang multiple devices off of it. But while that lets you get multiple monitor support easier, it will really kill your limited bandwidth.

    You have bandwidth to spare; all you'd be doing in a multi-monitor setup is sending the same triangle lists over the bus, not cutting and pasting image data or doing texturing. Have one dominant card and leave the others snooping traffic, and you have zero extra overhead for this.

    The real benefit of having multiple video cards is that it lets you easily do render farming for things like games. Have each card render half the screen, and copy all cards' partial renderings to one card's frame buffer. 32/33 PCI is too slow to be practical for this, but 64/66 has more than enough bandwidth. I studied the feasibility of this at one of my past jobs.
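    For reference, a quick sketch (Python) of the peak theoretical burst bandwidths being compared in this thread; the width/clock figures are the usual published ones (66 MHz approximates 66.66 MHz), and real-world throughput is of course lower:

      # peak bandwidth = width (bytes) * clock (Hz) * transfers per clock
      buses = [
          ("PCI 32/33",     32,  33e6, 1),
          ("PCI 64/66",     64,  66e6, 1),
          ("PCI-X 64/133",  64, 133e6, 1),
          ("AGP 4x",        32,  66e6, 4),  # 66 MHz strobed 4x = "266 MHz" effective
      ]
      for name, bits, clock_hz, xfers in buses:
          mb_per_s = bits / 8 * clock_hz * xfers / 1e6
          print(f"{name:14s} ~{mb_per_s:5.0f} MB/s peak")

    That puts AGP 4x and PCI-X 64/133 both around ~1 GB/s, plain 32/33 PCI at ~133 MB/s, and 64/66 PCI at ~530 MB/s, which is why 32/33 PCI is dismissed above for copying partial renderings around.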
  • by homer_ca ( 144738 ) on Monday February 18, 2002 @08:49PM (#3029498)
    "the ONLY reason the Pentium Pro didn't catch on was because Microsoft released a 16bit OS and told everyone it"

    I wouldn't say ONLY. There was also the slight problem of the double-chip package (separate cache and CPU dies mounted on one substrate) being horrendously expensive to produce. Looks like Itanium will have the same problem [slashdot.org].
  • Re: Wrong again (Score:2, Informative)

    by pdp11e ( 555723 ) on Monday February 18, 2002 @08:55PM (#3029515)
    The speed of signal propagation through a medium is not equal to the speed of the electrons. The actual speed (group velocity) depends on the properties of the transmission line. For the good old coax cable it is about 66% of the speed of light. It is obvious that electrons in the coax cable do not even remotely approach that velocity, but the fact remains that the signal travels ~20 cm during a period of 1 ns. The actual fraction of c for lines on a silicon chip is very similar to the previous example.
    When you discuss physics, it helps if you know the basic facts.
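    As a worked number, using the ~0.66c velocity factor mentioned above (in LaTeX):

      d = v \cdot t \approx 0.66 \times (3 \times 10^{8}\ \mathrm{m/s}) \times (10^{-9}\ \mathrm{s}) \approx 0.2\ \mathrm{m} = 20\ \mathrm{cm}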
  • Re:compilers (Score:3, Informative)

    by maraist ( 68387 ) < ... tsiaram.leahcim>> on Monday February 18, 2002 @09:15PM (#3029614) Homepage
    So basically you're saying that computers are magical radio-wave transceivers? Funny, I thought computers were based on capacitively switched [Bi]CMOS transistors. This means the "logical operation" travels at the speed of the capacitor charge/discharge times. After the ramp-up/ramp-down time (further delayed by the inefficiencies of junctions), the signal travels at the drift velocity of the electrons trapped within the conduction band; significantly slower than a stream of free-flowing electrons, much less a single electron going full tilt.
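    To put a rough, made-up number on that charge/discharge limit (the R and C values below are order-of-magnitude illustrations, not figures for any real process): a driver with effective on-resistance R charging a load capacitance C settles on a timescale of roughly

      t \approx R \cdot C \approx (1\ \mathrm{k\Omega}) \times (10\ \mathrm{fF}) = 10\ \mathrm{ps}

    so the switching delay is set by the RC product, not by how fast any individual carrier moves.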

    In fact, when electrons start going close to the speed of light within silicon, there's typically an avalanche effect (utilized in Zener diodes). Channel breakdown can easily occur under such situations (caused by relatively high voltages).

    To my understanding, the single biggest speedup in the past several years was the introduction of bipolar transistors into the CMOS framework. Bipolar transistors are very fast (non-capacitively switched), have high current and high amplification, but are power hogs and require difficult geometries to manufacture. My understanding of BiCMOS is that FETs are used everywhere, but when a FET needs to be charged quickly (or generally requires high current output), a bipolar device is attached on the output as an amplifier. You get the best of both worlds (with the possible exception of the geometry limitations).

    Wiring obviously was an issue too, because new copper-based CPUs can run cooler and faster.

    I only have an undergraduate understanding of the processes, but the simple point is that there are parasitics all throughout the architecture, and we're discovering efficiencies every day which provide percentage increases in overall performance. Thus it's not the speed, but the sophistication of the design.

    There's lots of work going into light-based computing, but I don't think it will ever win out, because it's plagued with even bigger interconnect problems and thus parasitics.

    -Michael
  • Don't forget Samsung (Score:3, Informative)

    by DABANSHEE ( 154661 ) on Monday February 18, 2002 @11:14PM (#3029827)
    They have the legal right to develop & make Alphas without Intel's blessing
