What's Next in CPU Land after Itanium? 589
"I work for a major research organization. Of late a lot of the normal big computer companies have been visiting and preaching the gospel of
Itanium. My question to them, and to the assembled masses here at Slashdot is what happens next when Itanium is real? My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)? In other words, has Intel finally done in most of their customers by obliterating all the other CPU choices (except IBM Power4 [& friends G4, et al] and AMD Hammer) and turned the remainder of the marketplace into raw commodity goods? Lest you defend the other CPUs... Sparc is dead,
Sun doesn't have the money (more than US$1B we'll guess) to do another round. PA-RISC is done, as HP has
given away the architecture group. MIPS lacks
funding (and perhaps even the idea people at this point). Alpha is
gone too (also because of the heavy investment problem no doubt). Most other CPUs don't have an installed base that makes any difference, especially in the high end computing world. So what's next? I don't like the single track future that Intel has just because it is a single track!"
compilers (Score:3, Insightful)
Re:compilers (Score:2)
Re:compilers (Score:3, Insightful)
Re:compilers (Score:3, Insightful)
LIW and VLIW were tried before. They flopped because compilers were dumb then, and compilers stayed dumb until midway through the RISC era. Now RISC and CISC have converged, compilers are reasonably bright, and Intel is trying its own hacky LIW thing. Today's compilers are smart enough for a first-generation LIW design to work, but there's no real indication yet that they'll be smart enough for later generations. And as each successive subarchitecture of IA-64 arrives, either the compiler will need to change or the chip will need to handle previous-generation instruction scheduling. Intel isn't doing true LIW in this regard - you're supposed to be able to run unmodified IA64-1 binaries on IA64-2 chips.
So, some brains remain in the IA-64 chip itself, meaning the compiler won't have to be _as_ smart - but it will still need to be smart, and you'll still need a new compiler for each IA-64 implementation to get maximum performance.
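The scheduling problem the compiler faces can be sketched with a toy example. This is an illustrative greedy packer, not Intel's actual algorithm; the 3-slot bundle width and the instruction mix are assumptions:

```python
# Toy greedy VLIW bundle packer -- an illustration only, not Intel's
# scheduler. Bundle width of 3 and the instruction names are assumptions.

def pack_bundles(instrs, width=3):
    """instrs: (name, reads, writes) tuples in program order.
    Place each instruction in the earliest bundle that has a free slot
    and comes after the bundles producing its inputs.
    (Anti- and output dependences are ignored to keep the toy small.)"""
    bundles = []
    writer = {}  # register -> index of the bundle that last wrote it
    for name, reads, writes in instrs:
        earliest = max([writer[r] + 1 for r in reads if r in writer], default=0)
        i = earliest
        while i < len(bundles) and len(bundles[i]) >= width:
            i += 1
        while len(bundles) <= i:
            bundles.append([])
        bundles[i].append(name)
        for w in writes:
            writer[w] = i
    return bundles

prog = [
    ("ld r1",  [],           ["r1"]),
    ("ld r2",  [],           ["r2"]),
    ("add r3", ["r1", "r2"], ["r3"]),
    ("ld r4",  [],           ["r4"]),
    ("mul r5", ["r3", "r4"], ["r5"]),
]
print(pack_bundles(prog))
# [['ld r1', 'ld r2', 'ld r4'], ['add r3'], ['mul r5']]
# 5 instructions retire in 3 cycles -- but only because the packer could
# see the independence; a naive in-order schedule would take 5.
```

Doing this well across basic blocks, loops, and aliased memory is where the decade of compiler work people keep mentioning actually goes.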
Re:compilers (Score:2)
Re:compilers (Score:3, Insightful)
Re:compilers (Score:3, Funny)
Re:compilers (Score:3, Insightful)
Sure, but the problem is how long before there are good compilers? That's one of the main problems with architectures like Itanium.
Re:compilers (Score:5, Insightful)
You obviously know nothing about Itanium, EPIC, VLIW, or pretty much anything else on this topic.
The issue isn't whether or not there's a compiler available. The issue is how GOOD the compiler is. In the case of a Very Long Instruction Word (VLIW) CPU like the Itanium, the compiler is the bottleneck for system performance. Why? Because the premise of these CPUs is that while they have a low clock speed (750-800 MHz for Itanium), they execute many instructions per cycle - 10 or more. So while "slower", they get more done per cycle, resulting in faster overall execution. It's up to the compiler to properly structure the executable machine code to take maximum advantage of this layout and keep all execution units of the CPU busy at all times, as well as reduce disparate memory accesses and so forth.
The initial compilers that are released with these machines do it, but not as well as they could. In fact, compiler writers are still trying to grasp the issues with pipelining on modern CPUs and their much lower number of execution units, and this is without utilizing special instructions that explicitly do non-conflicting operations at once. We're still years away from writing fully optimized compilers for contemporary CPUs. And while there's been a great deal of work done on VLIW already (prior to Itanium), there's even more yet to be done. A decade for a "good" compiler is probably optimistic.
You may be wondering, what's the point anyway? If VLIW is so damn hard, why bother? Just ramp up that clock speed and get more CPU power! Well, that's nice, but it doesn't work in reality. We're starting to bump up against physical limitations in CPU speeds. Electrons are not magical particles that travel instantaneously. They are limited to slightly under the speed of light, which means roughly 1 cm per nanosecond. This doesn't seem to be a big deal until you realize that a 2.0 GHz CPU means each clock cycle is 0.5 nanoseconds. So if you have to fetch an instruction or data from main memory, and that memory is a mere 5 cm away, under optimal conditions you've just sat around for 10 clock cycles waiting on that memory to be fetched. This is ignoring the fact that there are propagation delays, latch delays, and other things. So go ahead, pump that CPU up to 10 GHz and waste even more clock cycles waiting on data. That, or redesign the entire thing, expect the compiler to do the work and properly feed you data and instructions such that you can do 10x as much in the same amount of time, and all with no wasted CPU instructions.
That's the theory at least.
Reality is that not only does the compiler have to properly organize the machine code, it also has to have some idea of what the code is doing in order to do so. Compile the code w/ profiling, run the code against a "realistic" data set, then recompile it again feeding it the profile data. Many compilers can do this now, but it's rarely done: it's hard to guess a "realistic" data set, it's hard to acquire one, how you expect the code to be used and how it actually is used are rarely the same, and there's more development time involved in all of this. So most companies don't bother. And despite what I said above, 2.0 GHz still hasn't reached the point where the CPU is sitting on its ass more than it's doing work. Until we start approaching that point there's little incentive to put in the R&D time necessary to switch to a new CPU architecture.
And, of course, on top of all of the above is the issue that Joe Sixpack will invariably see 2 GHz as faster than 750 MHz no matter what. Have fun with that one.
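The "more done per cycle" premise above can be put in rough numbers. A quick sketch, where both the number of issue slots kept busy and the sustained IPC are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope "slow but wide" vs. "fast but narrow" comparison.
# All numbers below are assumed for illustration, not benchmarks.
itanium_hz, itanium_ipc = 800e6, 6.0  # 800 MHz, if the compiler keeps ~6 slots busy
p4_hz, p4_ipc = 2000e6, 1.5           # 2 GHz, assuming ~1.5 instructions/cycle sustained

itanium_ops = itanium_hz * itanium_ipc  # 4.8 billion ops/s
p4_ops = p4_hz * p4_ipc                 # 3.0 billion ops/s
print(itanium_ops / p4_ops)             # 1.6 -- the "slower" chip wins, if the slots stay full
```

Which is exactly the catch: the win only materializes when the compiler actually fills those slots.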
Wrong math, was Re:compilers (Score:3, Informative)
Speed of light is 3×10^8 m/s. In a nanosecond (10^-9 s), light travels 30 cm, not 1 cm like you wrote.
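A quick sketch of the corrected arithmetic (vacuum light speed; real on-chip signals propagate noticeably slower, so this is a best case):

```python
# Corrected figure: light in vacuum covers ~30 cm per nanosecond.
c = 3e8                      # speed of light, m/s
cm_per_ns = c * 1e-9 * 100   # distance covered in one nanosecond, in cm
print(round(cm_per_ns, 6))   # 30.0

# Even with the right constant, distance still costs cycles at high clocks:
one_way = 0.05 / c              # seconds for a signal to cross 5 cm
cycles_at_2ghz = one_way / 0.5e-9  # cycles burned at 0.5 ns per cycle
print(round(cycles_at_2ghz, 2))    # 0.33 cycles one way, in vacuum
```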
Re: Wrong math, was Re:compilers (Score:3, Funny)
Speed of ... (Score:3, Insightful)
By the way, the speed of light in matter (glass) is slower than the speed of light in vacuum.
And to answer your question: Yes.
Not to rain on your parade... (Score:3, Insightful)
Actually... Java/.NET and JIT compilers are exactly why "Merced" or "Itanic" isn't well suited for the very things it was supposed to be good at. You see, for a VLIW machine like those, the degree of compiler optimization required to achieve good performance is much greater than for a traditional RISC-ish machine (in which I'm including x86, for reasons I'm not going into). Essentially, getting maximum performance requires a great deal of compilation, profiling, and compiling again. This is all front-end overhead on your process. The whole idea behind JIT is that it's supposed to be fast, and occur when you download new code... But now the opposite is true. At this point, you're just as well off using a traditional-style compiler/profiler that produces traditional binaries.
Sorry. No VM utopia here.
Re:compilers (Score:3, Informative)
In fact, when electrons start moving close to the speed of light within silicon, there's typically an avalanche effect (utilized in Zener diodes). Channel breakdown can easily occur under such conditions (caused by relatively high voltages).
To my understanding, the single biggest speedup in the past several years was the introduction of bipolar transistors into the CMOS framework. Bipolars are very fast (non-capacitively switched), have high current and high amplification, but are power hogs and require difficult geometries to manufacture. My understanding of BiCMOS is that FETs are used everywhere, but when a FET needs to be charged quickly (or generally requires high current output), a bipolar device is attached to the output as an amplifier. You get the best of both worlds (with the possible exception of the geometry limitations).
Wiring is obviously an issue too, given that the new copper-based CPUs run cooler and faster.
I only have an undergraduate understanding of the processes, but the simple point is that there are parasitics all throughout the architecture, and we're discovering efficiencies every day which provide percentage increases in overall performance. Thus it's not the speed, but the sophistication of the design.
There's lots of work going into light-based computing, but I don't think it will ever win out, because it's plagued with even bigger interconnect problems and thus parasitics.
-Michael
What's after Itanium? That's easy (Score:3, Funny)
That's probably only funny to chem majors.
Okay, maybe not even chem majors.
Re:What's after Itanium? That's easy (Score:2)
There, we'll make it funny only to comic book fans
I don't think we need to worry just yet (Score:4, Insightful)
Just because Intel will pave the way for mainstream 64-bit processors using the Itanium doesn't mean it will monopolize the market until it comes out with a 128-bit processor. No matter what, it will probably be years from now before we have to worry.
Re:I don't think we need to worry just yet (Score:3, Interesting)
I think a lot of people are overconfident that Itanium is going to be successful, let alone quickly. It is going to require a lot of changes to software in order to take advantage of it, because it isn't just a 64 bit x86; it is a whole new architecture, one more closely related to HP's PA-RISC than to x86. It also may not do a very good job of running existing 32 bit code, which could slow its acceptance, particularly in desktop systems. The last time Intel made a big push (with the i432) to create a whole new non-x86 processor family, it was less than successful. Although to be fair, the i432 was a radically different proposition, and the Itanium with its more proven PA-RISC roots looks a lot more sound.
AMD's Hammer architecture, on the other hand, is more conservative, being an x86 family processor extended to 64 bit. It should require fewer modifications to existing software to take advantage of it, although an argument could be made that it won't have as much advantage to take, having more legacy issues with the aging x86 architecture. It also may perform a lot better on existing 32 bit code than Itanium. And if AMD's track record holds true, it will probably be significantly less expensive than the Itanium.
A lot of whether it is Intel or AMD that paves the way for 64 bit mainstream CPUs will probably come down to which of them is first to offer an attractively priced product that runs existing 32 bit software well while being marketable as a 64 bit chip. Unfortunately for AMD, the marketable part is, as always, going to be tough. While AMD has been hugely successful in "white box" sales where customers can choose their CPU, they've had a much more difficult time penetrating the big name PC markets, particularly in higher end systems. This despite the fact that in many cases an Athlon or Duron would offer better performance than a PIII or P4 at a better price.
Next? (Score:2, Funny)
Re:Next? (Score:2, Interesting)
There's a lot of information in this thread: http://slashdot.org/article.pl?sid=02/02/10/00162
specifically, this post: http://slashdot.org/comments.pl?sid=27736&cid=298
Re:Next? (Score:2)
Logically, it should be Anadium (Score:2, Funny)
I can't wait until they get to Hassium. They could name their chip Assium!
Itanium vs. Hammer vs. All Others. (Score:4, Interesting)
I'm not too worried about Itaniums, and I don't see them becoming prevalent for quite a while. While the Pentium II, III, and IV moved through the marketplace fairly rapidly, they all offered compatibility at some level. If I recall correctly, 32 bit programs that are not rewritten for 64 bit run SLOWER on the Itanium than they do on the equivalent Pentium line.
In essence, consider this: it's like a brand new operating system attempting to break into the monopoly that Microsoft has. (Parallels drawn out of necessity.) While it may be better, faster, superior in every way, it doesn't have 20+ years of legacy code behind it - and that will end up being what drags it down.
Only time will tell. Remember the Pentium Pros..
Talonius
Re:Itanium vs. Hammer vs. All Others. (Score:3, Insightful)
Until Intel gets the Itanium cost down to the point where they run 32-bit code at equivalent speed to a Pentium at the same cost, Itanium probably isn't ready for the consumer market.
--
Damn the Emperor!
Re:Itanium vs. Hammer vs. All Others. (Score:2, Insightful)
While that's a valid point, it also bears pointing out that Pentium IV is at 2200 MHz whereas Itanium is at 800 MHz -- about 1/3rd the clock speed. That ratio is going to remain for a while too -- McKinley will come out at 1000 MHz, while Pentium IV continues its mad march toward 3000 MHz and beyond. You acknowledge this fact implicitly with your next statement (re: Itanium not viable until approx same speed at approx same cost), but I felt it'd be interesting to point out just how large a gap there is.
These ratios spell doom for hardware-level emulation of the Pentium on the Itanium. Unless Intel has some serious magic, having a 100% cycle-for-cycle perfect emulation of the Pentium III or even Pentium IV on the Itanium die will never run better than 1/3rd the speed of the real thing, since the fundamental clock rate is so far off. The only real way to get close is to do a software-level translation and get a boost from scheduling for the native hardware.
It's interesting to note, BTW, that HP's Dynamo [hp.com] project does a software translation of PA-8000 code targeting (guess what) a PA-8000 CPU, and rather than slowing things down, it actually gets 20% speedups! Ars Technica [arstechnica.com] also did a piece on this. Perhaps that's why HP doesn't have hardware-level translation from PA-RISC to Itanium on the die like Intel does -- they (HP) are in a better position to just translate the PA-RISC code to IA-64 when needed. (Also, in the UNIX world, it's just simply less necessary.)
--JoeRe:Itanium vs. Hammer vs. All Others. (Score:3, Insightful)
Clock speed does not equal performance. This is a fact of life, especially with 20 stage pipelines and the like. AMD and Apple have been trying to teach this to the world, and on the surface most geeks understand, but they don't believe it in their hearts.
Now, I'm not saying that the PIV won't be faster than Itanium for a good while here, and I honestly have no idea if it will be or not. We just need to stop using MHz as our comparison unless we're comparing the same chip.
Re:Itanium vs. Hammer vs. All Others. (Score:2, Interesting)
Actually the break-even point wasn't reached until about 100 MHz or so, though I'm not sure. But I do remember that when the first PPCs came out they were definitely slower than the old 040s. I still don't know how Apple pulled that one off (selling new computers that were essentially slower than previous models).
Re:Itanium vs. Hammer vs. All Others. (Score:2)
The luxury Apple had in this situation was control of the operating system, which Intel doesn't have. Ironically, Apple will also be moving to a 64-bit architecture within the year (conservative rumors say Q3/Q4 2002). The transition is supposed to go very smoothly, as developers are being told to prepare their programs with the 64-bit OS X libs, and 64-bit OS X is being developed concurrently with the 32 bit version. FAT binaries helped immensely in the 68k-PPC transition, and probably will again for the G4-G5 transition.
Though honestly, if Microsoft gets what they want with the entire
Re:Itanium vs. Hammer vs. All Others. (Score:4, Insightful)
The ONLY reason the Pentium Pro didn't catch on was because Microsoft released a 16bit OS and told everyone it was a 32bit one (Windows 95).
SCO Unix, OS/2, and to some degree Windows NT ran quite a bit faster on the 32bit optimized PPro when compared with the same clocked Pentium.
Because of Microsoft's great PR, even Intel was caught off guard and scrambled out a hack called MMX to give the appearance of progress in the CPU market. While the MMX-based Pentiums were getting press/air time, Intel was hacking at the Pentium Pro core to get it to run THE 16bit OS (Windows) faster. That was the Pentium II.
IBM did some speed tests of OS/2 on the PPro and in some cases they saw a 100% speed increase on the 32bit optimized PPro.
This reminds me of the Six Degrees of Kevin Bacon game. It seems that many failures in the computer industry are only about three degrees from Microsoft. And the failure is never due to competition but more likely marketing and market control. IMHO.
The PPro was a darn good CPU. It finally took 32bit-ness seriously, though about 10 years after the 32bit i386 was released. As much as I like the simplicity of RISC, Intel will never get the Titanicium off the ground, and AMD's Hammer will force Intel to follow their lead with an extension of the x86 instruction set into 64bit land.
IMHO.
LoB
Re:Itanium vs. Hammer vs. All Others. (Score:3, Informative)
I wouldn't say ONLY. There was also the slight problem of the double chip package (separate cache and CPU dies mounted on one substrate) being horrendously expensive to produce. Looks like Itanium will have the same problem [slashdot.org].
Recurring problem (Score:4, Interesting)
So here's the question: how do you keep competition alive when the initial investment runs into the billions of dollars? For any company smaller than Intel, a single bad product cycle spells complete doom. That's no kind of market to be in.
Also, wasn't this inevitable? There are a few Beowulf jokes being posted, but that's really what's going on. Increasingly, high performance tasks (Google, render farms, etc.) are using massive arrays of low-power CPUs. It costs a lot of money to develop big iron chips, and if people aren't buying them then there's no point in investing that much money.
What I'm worried about are the isolated markets that still require massively powerful, low processor number architectures. Not everything splits into nice Distributed.net packages.
Parallel machines. (Score:2, Redundant)
The problem is that a massively parallel computer is only useful for certain classes of problem. There are many types of problem where communications load goes up very rapidly with the number of processors, which makes a cluster (with its relatively poor communications bandwidth) impractical. This is what Big Iron is designed to be useful for.
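A toy model makes that trade-off concrete. The cost constants below are invented for illustration; only the shape of the curve matters: per-node compute shrinks as 1/p while all-to-all communication grows with p:

```python
# Toy cluster-speedup model -- illustrative constants, not measurements.
# Compute time divides across p nodes; every node also pays a communication
# cost proportional to the number of other nodes it must exchange data with.
def speedup(p, compute=1.0, comm_per_pair=0.002):
    parallel_time = compute / p + comm_per_pair * (p - 1)
    return compute / parallel_time

for p in (1, 8, 64, 256):
    print(p, round(speedup(p), 1))
# Speedup rises, flattens, then collapses as communication dominates --
# which is why communication-heavy problems stay on Big Iron with its
# far better internal bandwidth.
```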
Re:Recurring problem (Score:3, Interesting)
The whole thing was cased in a shiny metal module. Each chip had its own spring-loaded heat slug that transferred heat to the cooling liquid sent through the module's plumbing. (100 ECL chips == major kilowattage)
They told me each CPU cost about $50,000. On a factory tour, I saw an entire pallet of these sitting on the floor, kind of like gold at Fort Knox.
These things may not perform like today's chips, but they gave meaning to the term "Big Iron".
According to Mr. Pearson... (Score:2, Funny)
Bet on it.
SPARC is dead? (Score:4, Interesting)
Re:SPARC is dead? (Score:4, Informative)
Re:SPARC is dead? (Score:2)
Doubt Itanium will make it. (Score:5, Insightful)
Also, on what kind of clueless basis do you assume that Sun has little left? Here's what's coming in just the next 2-3 years:
http://www.aceshardware.com/read_news.jsp?id=5500
Sun's CPU division is 1300+ strong and they're planning to hire another 100-200 in the next 2 years.
A lot of HP's PA-RISC customers (and Compaq's Alpha customers) are quite unhappy with being forced to change architectures and are jumping ship to Sun and IBM - HP had a 7% drop in Unix sales Q3 to Q4 last year, while Sun had a 10% rise. By 2003 the significant majority of the $100k+ system market will be owned by Sun and IBM. There's very little reason for any of those customers to switch to Itanium, so it'll mostly just eat Xeon sales.
Re:Doubt Itanium will make it. (Score:3, Interesting)
"It's pretty well understood that Itanium will not provide leadership x86 performance. That's Hammer's great hope, in fact. AMD's strategy depends on Intel mistakenly abdicating its x86 throne leaving Hammer and its descendants the heirs apparent to a software kingdom.
Would Intel so cavalierly jeopardize its legacy? Not on your life. To no one's great surprise, Intel is rumored to be developing something that will give future Pentium processors--not IA-64 processors--a performance kick. In a perverse reversal of roles, Intel may actually be following AMD's lead in 64-bit x86 extensions. A "Hammer killer" technology, code-named Yamhill, may appear in chips late next year, about the time Hammer makes its debut. It's suggested that Intel's forthcoming Prescott processor will be based on Pentium 4, but with Yamhill 64-bit extensions that coincidentally mimic Hammer's. (Prescott is also rumored to be built on a 0.09 micron process and implement HyperThreading.)
Naturally, the very existence of Yamhill, if it exists at all, is a diplomatically touchy subject at Intel HQ. The company doesn't want to undermine its outward confidence in Itanium and IA-64, but neither can it afford the possibility of ceding x86 dominance to a competitor. Besides, whether they appear in future Pentium derivatives or not, Intel's 64-bit extensions could appear in future IA-64 processors instead. New IA-64 features plus competitive x86 performance--now that's a compelling product."
From Extremetech. [extremetech.com]
Another article on Yamhill at The Register [theregister.co.uk] and Extremetech. [extremetech.com]
Will SUN make it? (Score:4, Interesting)
Let's compare: Sun is a company that produces operating systems (Solaris), computers, CPUs, motherboards, and a host of peripherals. (Plus it has to invent Java, J2EE, etc.) Its R&D budget was $2.0bn in 2001.
Intel is 95% CPUs. It spent $3.8bn on R&D in 2001.
Intel has the world's most productive fabs. Its capex budget is so huge that it can order the lithography companies and the like to build to order inside its factories. The result: its yields are 25% better at the start, and still 10-12% better after 6-9 months.
It is incredibly difficult for anyone to keep up with the Intel machine. I wish it weren't so; but it is.
*r
Re: (Score:2, Interesting)
AMD's 64-bit K8 (Score:2, Informative)
I searched and managed to find an old comparison of the K8 vs. Itanium and a few other chips.
The article (page 5 of 5 of a review) is here:
http://www.sysopt.com/articles/k8/index5.html [sysopt.com]
EricKrout.com
SPARC's death *greatly exaggerated* (Score:5, Insightful)
As far as money to go another round, remember, Sun doesn't fab CPUs. What Sun does is design them, and they turn it over to Texas Instruments for production. And TI has their own reasons to keep up-to-date with the latest production technologies, so Sun doesn't eat that cost.
BTW: I really wish that I could talk about the SPARC presentation. I liked it a whole lot better than the NDA I attended with HP talking about their Itanic future.
Re:SPARC's death *greatly exaggerated* (Score:2, Funny)
Itanic. That's really funny.
(this post is obviously the set-up. now I just need someone to supply the punchline)
Re:SPARC's death *greatly exaggerated* (Score:2)
Some more SPARC news... (Score:3, Funny)
Re:SPARC's death *greatly exaggerated* (Score:2, Insightful)
When I talk with management about servers, they don't ask me which one has the fastest CPUs. They've got a "short list" of hardware vendors (IBM/Sun, then further down HP/NT).
Re:SPARC's death *greatly exaggerated* (Score:4, Insightful)
Hell, most PCs don't even have enough PCI bandwidth to fully saturate a gigabit ethernet connection unless you have a totally bare PCI bus or a system which provides each PCI slot with its own dedicated bus, as most Sun PCI systems do.
Let's not even compare the stability, scalability, and workmanship of PC and Sun hardware. That would just be unfair to 99% of the "business" PC workstations and servers on the market.
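The arithmetic behind the gigabit claim, using theoretical peak rates (real throughput is lower once protocol overhead and contention from other devices on the shared bus are counted):

```python
# Peak-rate comparison: one gigabit NIC vs. a standard shared PCI bus.
pci_32_33 = 32 // 8 * 33e6   # plain PCI: 4 bytes x 33 MHz = 132 MB/s, shared by ALL devices
gige      = 1e9 / 8          # gigabit ethernet: 125 MB/s for a single NIC
print(pci_32_33 / 1e6, gige / 1e6)   # 132.0 125.0
# One NIC alone nearly saturates the whole bus; add a disk controller and
# it's over, unless each slot gets its own bus as on the Sun systems.
```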
Itanium (Score:4, Insightful)
I have hopes for Intel producing the world's best microprocessors, as that would benefit us all. Simply advocating a move to Itanium for marketing reasons or to meet revenue targets does a disservice to the computer industry.
Then again, they are in business to make $$$....
The newest chip will be called... (Score:4, Funny)
Its release will follow the distribution pattern established by Transmeta.
Itanium will be Hammered (Score:4, Interesting)
The upcoming AMD Hammer series, OTOH, is supposed to be about 30% faster clock-to-clock than the current Athlon XP series (which is considerably faster clock-to-clock than the Intel P4) and start at 2GHz. Sun's recent announcement of Linux x86 platform support, with details to come midyear, suggests that they'll be moving to the Hammer (to ship Q4). Sun would certainly love to take a swipe at Intel, and Sun has made positive comments about AMD's x86-64 Hammer architecture.
Speculation: Intel gets Hammered in the second half of this year.
Just... (Score:2, Funny)
No, no, and no. (Score:4, Insightful)
No, Itanium will not become commodity as soon as you foresee because compilers and software do not exist to make good use of it (some argue nothing can make good use of it [derogatory]).
No, Intel has not killed the competition. AMD is alive and well. The PowerPC family is on the verge of The Next Big Thing (G5). And the reports of Sparc's demise have been greatly exaggerated.
No, other vendors are not irrelevant. Hitachi makes killer chips for big iron, and looks set to increase that trend. If anything, the CPU market is looking less and less like a monopoly than before.
*cough* PowerPC *cough* (Score:4, Interesting)
Re:*cough* PowerPC *cough* (Score:3, Interesting)
Intel CPUs will be killed by Microsoft's CLR (Score:3, Insightful)
You are nuts if you think ... (Score:3, Interesting)
Granted, it's going to be popular for a while. But isn't what's popular *always* sucky?
Re:Intel CPUs will be killed by Microsoft's CLR (Score:2, Informative)
In fact, NT was developed on MIPS. And M$ is in no way interested in having the CLR running on non windows based platforms. CLR is not designed to make code machine-independent, but rather location-independent. M$ still wants you to be using Windows, it just wants to have a tighter grip on you no matter where you go.
Why would anyone even think about adopting
What will drive Itanium price down? (Score:2)
Second, what will drive the price of the Itanium down? Historically, Intel have announced that their latest superchip is "targeted at servers, not desktops" about a week before releasing a flood of them into the desktop marketplace (usually the ones that didn't pass spec at the higher speed level), thus driving down the price of the server chips to where no one else could compete. What will be the driver this time? Businesses aren't buying desktops, and when they do start buying again it will be pure commodity: there is zero appeal for Itanium on a business desktop. And treble for home desktops.
Which leaves high-end servers. I don't think that any datacentre manager worth his pay is going to pull out $100,000 HP N-Class boxes in favor of $2,000 Intel clones. There's a bit more that goes into a server than the CPU.
sPh
Dead? I doubt it. (Score:5, Interesting)
I may be off base on some of the details, but Sun has a unified approach from top to bottom, from tools to silicon for the systems they plan to deliver. I doubt it will just throw in the towel. Ultimately, Sun ships iron, and they lead the market in their segment.
I don't see the basis for your assertion, and where you pulled 1B out of for cost I also don't know.
Alpha is AMD now, as that's where a good chunk of the people went. MIPS is still kicking, with the R14000 so far, but I won't speak to the future of that chip line. There are a lot of chip heads on this site with much better info than I have on many of these lines.
One decent, although dated summary is here [f9.co.uk]
Please tell me there's more information you're basing this on than consumer workstation marketshare....
64bit code (Score:2)
Re:64bit code (Score:2)
You should have pointed them here [linuxia64.org].
It was first after all.
More importantly... (Score:5, Insightful)
Weren't we all supposed to be using high-speed serial connections by now, instead of a cocktail of SCSI (1/2/3, wide, fast, hold the mayo), IDE (ATA-33/66/100), parallel, 8 bit serial, USB, Firewire, PS/2, PCI, ISA (which is finally disappearing), etc.? Heck, I'd be happy if the motherboard ran at even half to a third the speed of the CPU.
Using a 20 year old peripheral port on last week's multi-gig CPU is like sucking a McDonald's shake through a coffee stirrer!
Re:More importantly... (Score:2)
Re:More importantly... (Score:2)
Peripheral communication. (Score:3, Insightful)
Weren't we all supposed to be using high-speed serial connections by now, instead of a cocktail of SCSI (1/2/3, wide, fast, hold the mayo), IDE (ATA-33/66/100), parallel, 8 bit serial, USB, Firewire, PS/2, PCI, ISA (which is finally disappearing), etc.? Heck, I'd be happy if the motherboard ran at even half to a third the speed of the CPU.
The good news is that USB is well on its way to completely replacing serial and parallel ports, and that PCI has been the One True Bus for the past couple of years now. Everything south of the southbridge is slowly fading away.
IMO, if we'd switched to 66 MHz 64-bit PCI years ago, we'd have no further problems on this front. In practice, PCI-X may finally be pushed through by Intel, and that will serve most internal communications needs. Motherboard chipsets are modular enough that it doesn't really matter what flavour of IDE/SCSI/firewire your drive is hanging off of; the drive controller is just another PCI device to the processor. You have enough bandwidth and DMA functionality on PCI bus to handle it.
The only peripherals that are currently bottlenecks are RAM and the video card. RAM is handled by upgrading the memory bus every couple of years. This is easy to do, because peripherals don't care what happens on the other side of the northbridge. The video card was handled adequately by the hack that is AGP (64-bit 66 MHz PCI would have been a much better idea, but that wouldn't have given Intel its nice AGP port to license).
The only peripheral that *might* be a problem in the future will be the network card (when gigabit cards finally come into vogue), and that will probably be what forces motherboard makers to put wider/faster PCI on to midrange boards and not just high-end boards.
In summary, this is less of a problem than it first appears to be.
The only serious bottleneck for performance is RAM latency, and that's not because of legacy peripherals.
Re:Peripheral communication. (Score:4, Informative)
Unless you buy into Intel's PCI-X, which is 64/133.
And most graphics cards are not limited by bus bandwidth with *any* flavour of AGP (see the various Tom's Hardware benchmarks). The usual limit is fill rate for new cards, and lack of geometry processing for old cards (assuming you're playing a new game). Textures are stored on-card by any sane game, so the only thing going across the bus is lists of triangles.
AGP doesn't have contention with other devices on the bus so it doesn't have to do any logic for mastering or controlling and can allocate all its clocks to doing a data transfer.
While this would be an issue for very short data transfers, graphics cards will likely be transferring large batches of data. This is done in burst mode, which gives one transfer per clock.
Why would you want PCI? The only advantage PCI gives is that you can hang multiple devices off of it. But while that makes multiple-monitor support easier, it will really kill your limited bandwidth.
You have bandwidth to spare; all you'd be doing in a multi-monitor setup is sending the same triangle lists over the bus, not cutting and pasting image data or doing texturing. Have one dominant card and leave the others snooping traffic, and you have zero extra overhead for this.
The real benefit of having multiple video cards is that it lets you easily do render farming for things like games. Have each card render half the screen, and copy all cards' partial renderings to one card's frame buffer. 32/33 PCI is too slow to be practical for this, but 64/66 has more than enough bandwidth. I studied the feasibility of this at one of my past jobs.
NVidia, the next player (Score:4, Interesting)
If you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.
Re:NVidia, the next player (Score:3, Interesting)
There's more to [CG]PU complexity than transistor count. Look at the 512Mbit memory cells that run for only a couple dollars a chip.
The trick is inter-related logic complexity. To my understanding, existing GPUs have no backward-compatibility baggage (so much of the x86 overhead is avoided), and the core itself is pipelined and modular, so the complexity is spread out across the whole chip: independent teams can work on their own components with little concern for sister components, whereas the x86s, which have every ounce of performance squeezed out of them, require complete coordination. Further, graphics acceleration is simply the casting of graphical algorithms into silicon. While I'm not quite sure which algorithms are in there, the possibilities are endless. Imagine a fast Fourier transform implemented as a SIMD floating-point instruction: you create an array of floating-point logic units and interconnect them. The floating-point unit is pretty much a common off-the-shelf design, so the only real logic you apply is the interconnectivity.
I'm not saying that GPUs are easy to design, I'm just saying that hardware filters are designed this way all the time, and I wouldn't be surprised if a large percentage of the nVidia chips were stock logic modules.
-Michael
Sure, build your own... (Score:2)
You don't pay $6k or $8k for a server just because there's high markup on the parts. A lot of it is due to tighter tolerances required for high-availability or high-reliability equipment. There's greater consideration for issues of heat, RF, power consumption and stability -- and then there's the built-in redundancy for many components (power supplies, fans, etc).
It's not as simple as you think.
Twoflower
I wouldn't count out everyone else yet. (Score:2, Insightful)
People love to throw around buzzwords like 64-bit vs. 32-bit, but when it comes down to it, what do you need on your desktop? If you are using your PC for basic development or coding, there is not much to be gained from a 64-bit core at all. You don't really need any more precision. If you are talking about scientific applications, then maybe you do need the 64-bit core.
I am not saying that desktop PCs won't eventually go to 64-bit cores. However, even if you were to get a cheap Itanium right now, it would perform no better, and possibly worse, than your high-end AMD and Intel x86 processors, because few of your applications would take advantage of the core.
This question will be better asked when Intel puts a processor on their desktop timeline that utilizes IA-64 technology.
Sparc dead? And what about SGI? (Score:2, Informative)
And for those folks doing hard research (or special effects companies with lots o' money) SGI is still king. Despite what nvidia would like us to believe, SGI's not going anywhere anytime soon for big 3d rendering projects.
Address space requirements (Score:2)
That's why I doubt that we are going to see affordable IA64 systems soon. After all, the transition is quite rough, thanks to Itanium's abysmal IA32 emulation (performance-wise), so there isn't even much market demand.
In the future, Intel may well decide to switch to the IA64 instruction set before it is really time for it, just to make things a bit more complicated for AMD.
Re:Address space requirements (Score:2)
On the other hand, the per-process address space is still limited to 4 GB. I don't think this is a concern for the pro user who wants to show off his RAM size, though.
It depends on what you need (Score:2, Informative)
Is missing something. HP, Compaq and Dell provide more than the hardware. They provide services that go along with the HW. They use the hardware to suck you into using their services. While small companies can build these systems on their own for cheaper, the larger companies are the ones that need to outsource some things that HP, Compaq and Dell's services provide.
Also, it's kind of silly to think that these IA-64 systems will be able to be built for $2k each, given the cost of similarly performing Sparcs and IBMs. Intel is hoping for their backwards compatibility and clout to push ISVs into programming for their systems. Once they have those vendors in their camp, the chip and server prices will go up again.
And finally, most people that would need a 64-bit solution will probably need multiproc systems. OEMs will be able to provide the small systems, but once you go past the 4-8 way space, there really isn't a cheap way of scaling up any higher (and btw, clustering is really only a solution for tasks that don't involve time-sensitive sharing of large amounts of data between processors). Which is where HP, Compaq, Fujitsu, NEC, and IBM will be with their high-end systems. I doubt I will ever see Dell release a system with more than 8 IA-64 processors.
Of course, only time will tell what happens next. Oh, one last thing: the guy who posted should be informed that HP did not sell any processor guys; they sold some chipset guys to Intel. I'm surprised that someone in a processor research group would not know this. Check out:
http://slashdot.org/comments.pl?sid=22319&thresho
My 2c (Score:4, Interesting)
I also think that while AMD has shown that they can provide honest competition in terms of performance, it is going to be stuck following Intel's every move, for the mere reason that Intel is "sleeping with" so many big OEMs (*cough* Dell *cough*), leaving it as the CPU for the hobbyist.
Well, anyways, that's just my 2c...
Itanium? in $2k systems? (Score:2, Insightful)
First of all, Intel has said ever since the Itanium's much-delayed release that it couldn't really compete and was primarily released to get some infrastructure ready for when the McKinley is ready (IIRC, it's scheduled for about 3 months from now...).
Secondly, the die size for the McKinley is HUGE. On today's top-of-the-line
Thirdly, the competition isn't dead yet. Sparc and PA-RISC may be dead, but Sun offers competition, and IBM's Power4 will be a decent competitor. Alpha does indeed look to have disappeared, but I thought I heard something about some Japanese company buying rights to some Alpha stuff, and planning on a big die shrink and integrating a large cache (which is all the Alpha really needs to compete, for the near future).
Fourth of all, the performance of even the McKinley is questionable. Compilers for its IA64 instruction set are still quite poor, with little sign of the anticipated improvements. Its predecessor, the Merced/Itanium, was dog-slow at most tasks (though good at floating point). The most recent benchmarks show the McKinley's 32-bit performance as terrible, though its floating-point performance is supposed to be stellar, and its integer performance decent (when combined with an enormous on-die L3 cache...).
Anyway. Intel just likes the Itanium because the instruction set is sufficiently complex that the prohibitive cost of designing a compatible chip would raise the cost of entry to the market enough to give them a more secure monopoly for the next decade.
64-bit isn't necessary - and Itanium may suck (Score:3, Interesting)
Now don't get me wrong - 64-bit filesystems are great, and necessary - being limited to 2GB or 4GB files is terrible. But no 64-bit CPU is necessary for that kind of thing, the filesystem just has to be written as 64-bit (which is easier said than done, and could easily sacrifice backwards-compatibility with various API's, but I digress...).
That being said - Intel might very well be moving down the wrong path - the Itanium is a huge, expensive, hot, completely new chip. Even Intel is hedging its bets [theregister.co.uk] on whether or not Itanium will take off - and AMD is poised to eat Intel's lunch with their new Hammer design [com.com].
Who knows, perhaps all CPU's from now on will be compatible with x86 IA32, and innovation will be in the various processing units that sit behind the instruction-set decoder. Take a look at AMD or Transmeta for examples of that, already.
Consolidation is a Fact of Life (Score:2, Interesting)
The fact is that even though it looks impossible to overcome Intel at this point, someday someone will.
The killer is custom-made systems... (Score:5, Insightful)
Kjella
you're thinking too far away (Score:2, Insightful)
Maybe 10 years from now, but that's too far off.
1) HP's PA-RISC is as dead as Intel's x86
2) Alpha should regain the speed crown with the EV7 for a while, so they aren't dead yet. They've just announced they'll be dead in a few years
3) IBM's POWER4 is the current speed king and is likely to be around for a long long time.
4) MIPS.. Aren't these the most popular RISC chips in the world due to their embedded use? (N64, PlayStation, networking) At 500Mhz in SGI's machines they are pretty dead, but various MIPS chips are doing quite well in emerging areas. In fact, AMD just bought a MIPS company.
5) Sparc has never been that great CPU-vs-CPU against the other companies, but I expect them to be around for a fairly long time still, just based on their installed base. Their customers never really bought on performance (otherwise Alpha would still be around!), but on service and reliability. As long as they can provide good-enough performance, they'll be around.
The next Itanium is HUGE, making it very expensive to produce (meaning you won't ever build a system for under $2K with one!); it requires a LOT of optimization in software to get acceptable performance (meaning it'll suck unless you run active profiling optimizations, and I doubt most game companies will even do that); it uses a lot of power and creates a lot of heat (it makes the Athlon/P4 look like embedded chips!); and it isn't really compatible with existing software. Nobody is going to run Win98, WinXP, or even GNU/Linux on it on the desktop.
The next Itanium will be more popular than the last, but it won't even register on people's radars as it won't provide the best performance, it won't have a bunch of software written for it, and it'll be expensive. Apple will sell more iBooks than Intel sells Itaniums for the next few years.
Itanium will take YEARS to commoditize... (Score:2, Interesting)
Until there is a breakthrough brought on by computing speed, we will see a stall in computer upgrading, as we have seen in the past.
I expect we will see more things like the iMac (very cool computers) before we see a press for new computers for speed.
There are two things I think will create the next-level breakthrough.
Real-time CGI imaging at Toy Story/Monsters Inc./FF level of quality. We can probably predict precisely WHEN that will be possible by mapping the development speed of 3D hardware, memory, software breakthroughs, and polygon density to date, and where the predictable bottlenecks will appear. (My suspicion is that we are 5-8 years away.)
The other breakthrough which I think would do it (right now it is very difficult to predict when it will happen, but I suspect adoption would be pretty rapid) is real-time voice interaction that is five-nines accurate. This is likely to appear after a certain speed level of computers, plus a breakthrough understanding/algorithm for speech recognition.
However, I suspect the AMD x86-64 solution may be adopted much faster than the Itanium solution. Likely there is an app out there that may have a large enough niche to require 64 bit apps, and the rest of the apps on the computer would be 32 bit. I suspect that the app will be imaging or video related, and that will create an adoption around the AMD solution, before the Itanium moves out of the server market to the desktop market where it will be commoditized.
Mismatch? (Score:2)
While the Power4 will no doubt compete with the Itanium in the server space, since many people are talking about when 64-bit chips will hit the desktop, you should note that its "friend" the G4, which has been out since before the P4, is by no means meant to compete with new Intel offerings; the Goldfish PowerPC 8500 ("G5") is aimed squarely at dominating the desktop space before Intel can get to it with 64-bit chips. Its ability to run 32-bit code at much better speed than the other 64-bit offerings makes it much more appealing to people looking to transition to 64-bit on the desktop, and if they can pull off the .13 SOI, 500MHz RapidIO bus, etc., it should reassert A.I.M.'s competitiveness in high-end desktops. Now, when it will actually ship, how much of this will get implemented, and what frequency it starts at is anyone's guess.
The Answer (Score:2)
Build an $8-10 server for $2k - um, no. (Score:5, Informative)
1) Hot-pluggable power supplies, drives, and PCI slots.
2) Built-in hot-plug SCSI
3) Integrated service processor for diagnostics (essentially a computer within a computer)
4) Extremely well-tested box. (Very important to do integration testing on high-end units.)
5) Very nice, serviceable, rack-mount chassis
6) Crap-load of PCI slots
7) Light-path diagnostics. (Lets somebody without training figure out what's broke.)
8) IBM Director
9) Well-designed cooling that would be impossible to achieve with a garage box. (Do you know how to do airflow modeling?)
10) Support.
The list goes on...
Yes, they will become a commodity, in that you will be able to get them from multiple major manufacturers, but don't expect to build it yourself in your basement anytime soon.
SirWired
Bwa Ha ha ha ha ha ... (Score:3, Informative)
Someone remind me to post a link back to this story in a month or two when Sun announces their faster processors with solved ecache solutions...
Who's paying for this researcher? (Score:4, Informative)
Sounds like a trollish or clueless post. (Score:3, Interesting)
When people start buying Itanium systems in volume, the prices on Itanium systems will drop. The reason they're expensive is not that the chips are hard to come by, but that no one wants to buy them right now.
However, this comment alone makes me wonder about the poster's cluelessness. He obviously hasn't worked in any real production environment. You people should realize that you simply can't build the kind of systems that Dell, HP, etc. sell -today- out of commodity components. Take a look at a typical high-end SMP Dell server: proprietary OEM motherboard, proprietary case, hot-swap hard drives, hot-swap redundant power supplies and cooling, LOM support, etc. All components have been carefully designed to work together to produce a reliable and scalable server system. You will never build the same kind of system on your own, and if you do, it's not going to be cheaper than buying one. Plus, you don't get the vendor support.
The comment about SPARC being dead is completely astonishing at a time when Sun is -THE- Unix market leader. SPARC CPUs were never faster than the competition, but that didn't worry Sun users as long as they were up to par with the competitors. The reason people buy Sun hardware is not the CPUs (a CPU alone is useless) but Solaris, which is THE enterprise-class OS, plus its applications, Sun's excellent support, the massive multiprocessor scalability of Sun systems, massive I/O bandwidth, etc.
Sun's current chip is not bad at all (UltraSPARC III), and Sun is working on UltraSPARC V.
Re:Single track (Score:2, Funny)
Re:I don't get it. (Score:2)
Correction: IBM or AMD.
Misunderstanding of the term FUD (Score:2, Informative)
Re:I don't get it. (Score:5, Informative)
You're right. The new 1.05GHz Cu chip is pretty frickin' fast - and speed has NOT always been Sun's selling point.
PA-RISC is done - FUD.
Not true. HP is moving to IA-64 - even their boxes are starting to be wired to ship with either PA-RISC or McKinley.
McKinley is essentially an HP design... PA has lived longer than expected but that's just because IA-64 is so late.
MIPS lacks funding - FUD.
Actually, this is probably true. SGI is not a well company, and they will probably need to move to a new chip architecture soon. There are R14s and G5s rumored - who knows?
Alpha is gone - FUD.
Nope - it's gone. Intel bought it and swallowed it whole. No new development, no new generations, it'll only live on in some parts in IA-64.
This guy works for either IBM or Intel. Probably IBM, as he favors the Power4 and G4. Don't take him seriously!
I can't say where he works, but he has a point. Maybe you should look at the recent server chip landscape before dismissing this guy's claims.
=tkk
Don't forget Samsung (Score:3, Informative)
Re:I don't get it. (Score:2, Informative)
SPARC: Who buys them anymore? For *every* application in the last year that I have heard about, management has stated that they will buy a commodity PC rather than a Sun workstation because of price/performance ratios.
PA-RISC: Not enough info to comment on.
MIPS: I know one hardware guy who is trying to build an embedded MP3 player using a MIPS CPU. That's it. I've never heard of anybody using them commercially.
Alpha: Compaq stopped making them last year (go and check their old press releases for March), nobody makes systems based on them, nobody buys them that I have ever heard of. In fact, I can't remember ever seeing one.
Others: Programming the 8-bit CPU that runs the engine in your car can be fun if you like machine code, but it's not really satisfying.
Anything whose sales are decreasing, zero, or that is not being manufactured anymore is dead. The Z-80 is dead after the longest run of any CPU out there (26 years!), but it is gone. Alpha is gone. SPARC is going. MIPS is going. The world will be a poorer place for their loss.
(If you have evidence to the contrary, please post. I'd like to be wrong.)
-C
Re:I don't get it. (Score:2)
Re:Innovation in the CPU business (Score:3, Insightful)
Now, try to write a dynamic JIT compiler for Itanium, which is even harder than a static compiler. I haven't seen any Java or CLR performance numbers for IA-64, and I suspect I know the reason why.
Re:Innovation in the CPU business (Score:2)
Virtual machines rely on things like delayed compiling that are fairly antithetical to the whole idea of Itanium, which pushes enormous amounts of work previously handled by the CPU out to the compiler. Personally, I believe that VLIW for general-purpose processors was a really bad idea that was disproven a good decade ago. Intel is in the middle of a giant train wreck, and the market doesn't even know it yet.
Consider the downside of pushing the majority of your branch prediction to the compiler. For example, the compiler doesn't know about multiple processes and how they interact with each other! This means it's likely that Itanium boxes won't even serve transactions very well, which raises the question of what Itanium will be useful for. If it's not for the desktop, and it's not for transaction service, what the heck is it for? High-end scientific computing? Competing for Alpha's market share is a big mistake, in my mind.
C//
Re:I am confused (Score:2)
Sorting out the meaningful comments from the slush is part of good research.
sPh
Re:Just wondering, not a troll. (Score:2, Interesting)
However, I still think that there will be room for others. AMD will probably succeed doing what they do best: outpace Intel in quality and lower the price by ~10%. This has been successful (I hope it continues, I own stock) and will probably continue. And I doubt Sun is out. There may be changes coming, but I figure McNealy would sell his baby before using Intel chips. As for the others, they fell and never recovered. You can't charge super-high premiums when your competition is charging super-low premiums. A lot of corps assumed you could and get away with it, and look what happened.
The future is unwritten, so any prediction is just fantasy for the most part. Step back to '95 and tell me who predicted 2000 or 2001? Reality is far more interesting than any professional opinion from the Gartner Group et al.
Re:Just wondering, not a troll. SUN IS OUT (Score:2, Informative)
Anyone who sees the recent Sun announcements (re: Linux) as the end of SPARC or Solaris, clearly doesn't know anything about the business world or about Sun.
Yes, Sun has made an announcement to start supporting Linux. This is no big surprise, especially after the Cobalt acquisition.
This doesn't mean that they are switching to Intel or giving up on the SPARC architecture.
SPARC is far from dead. All you have to do is talk to anyone within Sun to see the U4 and U5 roadmaps. Sun firmly believes in their architecture and has spent, and will spend, the R&D to continue to develop it.
Plus, the install base of these technologies is much too large for them to just give up on them.
Look at HP, for example... Here is a company that is part of the engineering process for Itanium. They've already committed to using Itanium on their higher-end servers, but they aren't completely giving up on their PA-series CPUs (yet). All of their new systems take both.
No company wants to alienate the majority of their install base.