What's Next in CPU Land after Itanium?
"I work for a major research organization. Of late a lot of the normal big computer companies have been visiting and preaching the gospel of
Itanium. My question to them, and to the assembled masses here at Slashdot is what happens next when Itanium is real? My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)? In other words, has Intel finally done in most of their customers by obliterating all the other CPU choices (except IBM Power4 [& friends G4, et al] and AMD Hammer) and turned the remainder of the marketplace into raw commodity goods? Lest you defend the other CPUs... Sparc is dead,
Sun doesn't have the money (more than US$1B we'll guess) to do another round. PA-RISC is done, as HP has
given away the architecture group. MIPS lacks
funding (and perhaps even the idea people at this point). Alpha is
gone too (also because of the heavy investment problem no doubt). Most other CPUs don't have an installed base that makes any difference, especially in the high end computing world. So what's next? I don't like the single track future that Intel has just because it is a single track!"
AMD's 64-bit K8 (Score:2, Informative)
I searched and managed to find an old comparison of the K8 vs. Itanium and a few other chips.
The article (page 5 of 5 of a review) is here:
http://www.sysopt.com/articles/k8/index5.html [sysopt.com]
EricKrout.com
Sparc dead? And what about SGI? (Score:2, Informative)
And for those folks doing hard research (or special effects companies with lots o' money) SGI is still king. Despite what nvidia would like us to believe, SGI's not going anywhere anytime soon for big 3d rendering projects.
Re:I don't get it. (Score:5, Informative)
You're right. The new 1.05GHz Cu chip is pretty frickin' fast - and speed has NOT always been Sun's selling point.
PA-RISC is done - FUD.
Not true. HP is moving to IA-64 - even their boxes are starting to be wired to ship with either PA-RISC or McKinley.
McKinley is essentially an HP design... PA has lived longer than expected but that's just because IA-64 is so late.
MIPS lacks funding - FUD.
Actually, this is probably true. SGI is not a well company and they will probably need to move to a new chip arch soon. There are R14s and G5s rumored - who knows?
Alpha is gone - FUD.
Nope - it's gone. Intel bought it and swallowed it whole. No new development, no new generations; it'll only live on in some parts of IA-64.
This guy works for either IBM or Intel. Probably IBM, as he favors the Power4 and G4. Don't take him seriously!
I can't say where he works, but he has a point. Maybe you should look at the recent server chip landscape before dismissing this guy's claims.
=tkk
It depends on what you need (Score:2, Informative)
The original question is missing something. HP, Compaq and Dell provide more than the hardware. They provide services that go along with the HW. They use the hardware to suck you into using their services. While small companies can build these systems on their own for cheaper, the larger companies are the ones that need to outsource some of the things that HP, Compaq and Dell's services provide.
Also, it's kind of silly to think that these IA-64 systems will be able to be built for $2k each, given the cost of similarly performing SPARCs and IBMs. Intel is hoping for their backwards compatibility and clout to push ISVs into programming for their systems. Once they have those vendors in their camp, the chip and server prices will go up again.
And finally, most people that would need a 64-bit solution will probably need multiproc systems. OEMs will be able to provide the small systems, but once you go past the 4-8 way space, there really isn't a cheap way of scaling any higher (and, btw, clustering is really only a solution for tasks that don't involve time-sensitive sharing of large amounts of data between processors). That is where HP, Compaq, Fujitsu, NEC, and IBM will be with their high-end systems. I doubt I will ever see Dell release a system with more than 8 IA-64 processors.
Of course, only time will tell what will happen next. Oh, one last thing: the guy who posted should be informed that HP did not sell any processor guys; they sold some chipset guys to Intel. I'm surprised that someone who is in a processor research group would not know this. Check out:
http://slashdot.org/comments.pl?sid=22319&thresho
Re:I don't get it. (Score:2, Informative)
SPARC: Who buys them anymore? For *every* application I have heard about in the last year, management has stated that they will buy a commodity PC rather than a Sun workstation because of price/performance ratios.
PA-RISC: Not enough info to comment on.
MIPS: I know one hardware guy who is trying to build an embedded MP3 player using a MIPS CPU. That's it. I've never heard of anybody using them commercially.
Alpha: Compaq stopped making them last year (go and check their old press releases for March), nobody makes systems based on them, nobody buys them that I have ever heard of. In fact, I can't remember ever seeing one.
Others: Programming the 8-bit CPU that runs the engine in your car can be fun if you like machine code, but it's not really satisfying.
Anything whose sales are decreasing, zero, or that is not being manufactured anymore is dead. The Z-80 is dead after the longest run of any CPU out there (26 years!). Alpha is gone. SPARC is going. MIPS is going. The world will be a poorer place for their loss.
(If you have evidence to the contrary, please post. I'd like to be wrong.)
-C
Re:Intel CPUs will be killed by Microsoft's CLR (Score:2, Informative)
In fact, NT was developed on MIPS. And M$ is in no way interested in having the CLR running on non-Windows-based platforms. The CLR is not designed to make code machine-independent, but rather location-independent. M$ still wants you to be using Windows; it just wants to have a tighter grip on you no matter where you go. Why would anyone even think about adopting it on anything else?
SPEC benchmarks for Itanium (It's slowwwww...) (Score:1, Informative)
The benchmarks are from:
http://www.spec.org/osg/cpu2000/results/cpu2000
Dell Precision 730 (800MHz Itanium)
CINT2000 314
CFP2000 645
Dell Precision 340 (2.2 GHz Pentium 4)
CINT2000 790
CFP2000 779
As you can see the Itanium sucks at integer applications. Check the table and you'll see even a Dell 700MHz Pentium III system beats it!
In short the current Merced based Itaniums suck and are extremely overpriced. Even Intel and HP have said to wait for McKinley, the next IA-64 chip.
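To put a rough number on "sucks": a quick ratio check in Python (just a sketch; the inputs are the SPEC CPU2000 scores quoted above):

    # SPEC CPU2000 scores quoted above
    itanium_800 = {"CINT2000": 314, "CFP2000": 645}  # 800MHz Itanium
    p4_2200     = {"CINT2000": 790, "CFP2000": 779}  # 2.2GHz Pentium 4
    for bench in itanium_800:
        print(bench, round(p4_2200[bench] / itanium_800[bench], 2))
    # CINT2000 2.52 -- the P4 is ~2.5x faster on integer code
    # CFP2000 1.21  -- the floating-point gap is much smaller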
Re:Just wondering, not a troll. SUN IS OUT (Score:2, Informative)
Anyone who sees the recent Sun announcements (re: Linux) as the end of SPARC or Solaris clearly doesn't know anything about the business world or about Sun.
Yes, Sun has made an announcement to start supporting Linux. This is no big surprise, especially after the Cobalt acquisition.
This doesn't mean that they are switching to Intel or giving up on the SPARC architecture.
SPARC is far from dead. All you have to do is talk to anyone within Sun to see the U4 and U5 roadmaps. Sun firmly believes in their architecture and has spent, and will continue to spend, the R&D to develop it.
Plus, the install base of these technologies is much too large for them to just give up on them.
Look at HP, for example... Here is a company that is part of the engineering process for Itanium. They've already committed to use Itanium in their higher-end servers, but they aren't completely giving up on their PA series CPUs (yet). All of their new systems take both.
No company wants to alienate the majority of their install base.
Remember the Pentium Pros (Score:2, Informative)
I kept them because the quality of the PPro is UNREAL. I have not replaced them because the quality of PIIs and PIIIs is, well, OK at best, and Xeons are simply overpriced for what they are.
Yes, I am just strengthening your point, to make a point. Those of us in the "smaller" world of serving will take durability over speed, reliability over clock ticks. What will get me to switch to AMD or Itaniums is that warm fuzzy feeling you get when you go to sleep, and don't have to worry about driving into town (30 minutes away) at 3am to reboot (or switch over) a server.
I did just order a Dell dual P3/1000, but it won't replace any of those machines until I have 90 days with it in place. (Average uptime on the Linux IBMs is over 6 months.)
Build an $8-10k server for $2k - um, no. (Score:5, Informative)
1) Hot-pluggable power supplies, drives, and PCI slots.
2) Built-in hot-plug SCSI
3) Integrated service processor for diagnostics (essentially a computer within a computer)
4) Extremely well-tested box. (Very important to do integration testing on high-end units.)
5) Very nice, serviceable, rack-mount chassis
6) Crap-load of PCI slots
7) Light-path diagnostics. (Lets somebody without training figure out what's broke.)
8) IBM Director
9) Well-designed cooling that would be impossible to achieve with a garage box. (Do you know how to do airflow modeling?)
10) Support.
The list goes on...
Yes, they will become a commodity, in that you will be able to get them from multiple major manufacturers, but don't expect to build it yourself in your basement anytime soon.
SirWired
There's more to this than the processor (Score:1, Informative)
IBM has said at every step of the way that Power4 goes further than pure chip architecture. When they say Power4 they want you to think not just about the chip but the whole I/O scheme.
In fact the product managers/Power4 designers at IBM have insisted that what makes a P690 special isn't so much the chip but the capacity to move enormous amounts of data around the system in a highly configurable manner.
Of course only time will tell, but it seems that a P690 can do some pretty fancy dancing with that huge optional external level-3 cache if it's configured the right way.
Pure clock speed was not the primary factor in Power4 design. Overall system I/O was. They have implemented so much new technology into this design that I'm convinced that it will give anything else out there a serious hiding when the new compilers reach maturity.
Wrong math, was Re:compilers (Score:3, Informative)
The speed of light is 3×10^8 m/s. In a nanosecond (10^-9 s), light travels 30 cm, not 1 cm like you wrote.
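A quick check of the arithmetic, as a trivial Python sketch:

    c = 3e8       # speed of light, m/s
    t = 1e-9      # one nanosecond, s
    print(c * t)  # 0.3 m = 30 cm: the farthest any signal can travel per 1GHz clock cycle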
Bwa Ha ha ha ha ha ... (Score:3, Informative)
Someone remind me to post a link back to this story in a month or two, when Sun announces their faster processors with the e-cache problem solved...
Re:Peripheral communication. (Score:4, Informative)
Unless you buy into Intel's PCI-X, which is 64/133.
And most graphics cards are not limited by bus bandwidth with *any* flavour of AGP (see the various Tom's Hardware benchmarks). The usual limit is fill rate for new cards, and lack of geometry processing for old cards (assuming you're playing a new game). Textures are stored on-card by any sane game, so the only thing going across the bus is lists of triangles.
AGP doesn't have contention with other devices on the bus so it doesn't have to do any logic for mastering or controlling and can allocate all its clocks to doing a data transfer.
While this would be an issue for very short data transfers, graphics cards will likely be transferring large batches of data. This is done in burst mode, which gives one transfer per clock.
Why would you want PCI? The only advantage PCI gives is that you can hang multiple devices off of it. But while that lets you get multiple monitor support easier, it will really kill your limited bandwidth.
You have bandwidth to spare; all you'd be doing in a multi-monitor setup is sending the same triangle lists over the bus, not cutting and pasting image data or doing texturing. Have one dominant card and leave the others snooping traffic, and you have zero extra overhead for this.
The real benefit of having multiple video cards is that it lets you easily do render farming for things like games. Have each card render half the screen, and copy all cards' partial renderings to one card's frame buffer. 32/33 PCI is too slow to be practical for this, but 64/66 has more than enough bandwidth. I studied the feasibility of this at one of my past jobs.
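For context, here are the peak theoretical bandwidths being argued about, as a quick Python sketch (the 1024x768@60Hz figures in the last line are my own illustrative assumptions, not from the parent post):

    # Peak theoretical bus bandwidth in MB/s: width(bits)/8 * clock(MHz) * transfers/clock
    def peak_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
        return width_bits / 8 * clock_mhz * transfers_per_clock

    print(peak_mb_s(32, 33))     # PCI 32/33:    ~132 MB/s
    print(peak_mb_s(64, 66))     # PCI 64/66:    ~528 MB/s
    print(peak_mb_s(64, 133))    # PCI-X 64/133: ~1064 MB/s
    print(peak_mb_s(32, 66, 4))  # AGP 4x:       ~1056 MB/s

    # Render farming: copying half of a 1024x768, 32-bit frame buffer at 60fps
    print(1024 * 768 * 4 * 60 / 2 / 1e6)  # ~94 MB/s: most of PCI 32/33, easy for 64/66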
Re:Itanium vs. Hammer vs. All Others. (Score:3, Informative)
I wouldn't say ONLY. There was also the slight problem of the double-chip package (separate cache and CPU dies mounted on one substrate) being horrendously expensive to produce. Looks like Itanium will have the same problem [slashdot.org].
Re: Wrong again (Score:2, Informative)
When you discuss physics it helps if you know basic facts.
Re:compilers (Score:3, Informative)
In fact, when electrons start going close to the speed of light within silicon, there's typically an avalanching effect (utilized in Zener diodes). Channel breakdown can easily occur under such conditions (caused by relatively high voltages).
To my understanding, the single biggest speedup in the past several years was the introduction of bipolar transistors into the CMOS framework. Bipolars are very fast (non-capacitively switched), have high current and high amplification, but are power-hogs and require difficult geometries to manufacture. My understanding of BiCMOS is that FETs are used everywhere, but when a FET needs to be charged quickly (or generally requires high current output), a bipolar device is attached on the output as an amplifier. You get the best of both worlds (with the possible exception of the geometry limitations).
Wiring obviously was an issue too, because new copper-based CPUs can run cooler and faster.
I only have an undergraduate understanding of the processes, but the simple point is that there are parasitics all throughout the architecture, and we're discovering efficiencies every day which provide percentage increases in overall performance. Thus it's not the speed, but the sophistication of the design.
There's lots of work going into light-based computing, but I don't think this will ever win out, because it's plagued with even bigger interconnect problems and thus parasitics.
-Michael
Don't forget Samsung (Score:3, Informative)