What's Next in CPU Land after Itanium? 589

Posted by Cliff
from the is-intel-the-only-chip-maker-left dept.
"I work for a major research organization. Of late a lot of the normal big computer companies have been visiting and preaching the gospel of Itanium. My question to them, and to the assembled masses here at Slashdot is what happens next when Itanium is real? My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)? In other words, has Intel finally done in most of their customers by obliterating all the other CPU choices (except IBM Power4 [& friends G4, et al] and AMD Hammer) and turned the remainder of the marketplace into raw commodity goods? Lest you defend the other CPUs... Sparc is dead, Sun doesn't have the money (more than US$1B we'll guess) to do another round. PA-RISC is done, as HP has given away the architecture group. MIPS lacks funding (and perhaps even the idea people at this point). Alpha is gone too (also because of the heavy investment problem no doubt). Most other CPUs don't have an installed base that makes any difference, especially in the high end computing world. So what's next? I don't like the single track future that Intel has just because it is a single track!"
  • by Talonius (97106) on Monday February 18, 2002 @05:39PM (#3028554)
    AMD's newest chip is supposedly fairly remarkable (don't have specifics, see Tom's Hardware's search engine). What about the Crusoe? VIA's purchase of (I believe) the M3? I wouldn't look at companies that are currently in the business only - I would tend to look at companies that might move into the business, either via investment, startup, or outright purchase.

    I'm not too worried about Itaniums, and I don't see them becoming prevalent for quite a while. While the Pentium II, III, and IV moved through the marketplace fairly rapidly, they all offered compatibility at some level. If I recall correctly, 32-bit programs that are not rewritten for 64-bit run SLOWER on the Itanium than on the equivalent Pentium line.

    In essence consider this: it's like a brand new operating system attempting to break into the monopoly that Microsoft has. (Parallels drawn out of necessity.) While it may be better, faster, superior in every way it doesn't have 20+ years of legacy code behind it - and that will end up being what drags it down.

    Only time will tell. Remember the Pentium Pros..

  • Recurring problem (Score:4, Interesting)

    by colmore (56499) on Monday February 18, 2002 @05:39PM (#3028558) Journal
    This seems to be a recurring problem in a number of technology based industries. Once you get to a certain level of high-tech, only the (very) big boys can even compete.

    So here's the question: how do you keep competition alive when the initial investment runs into the billions of dollars? For any company smaller than Intel, a single bad product cycle spells complete doom. That's no kind of market to be in.

    Also, wasn't this inevitable? There are a few Beowulf jokes being posted, but that's really what's going on. Increasingly, high performance tasks (Google, render farms, etc.) are using massive arrays of low-power CPUs. It costs a lot of money to develop big iron chips, and if people aren't buying them then there's no point in investing that much money.

    What I'm worried about are the isolated markets that still require massively powerful, low processor number architectures. Not everything splits into nice packages.
  • SPARC is dead? (Score:4, Interesting)

    by bconway (63464) on Monday February 18, 2002 @05:41PM (#3028570) Homepage
    That's news to me. I could swear a friend of mine just jumped in on the UltraSPARC 4 project.
  • Motorolas? AMDs? (Score:2, Interesting)

    by SanLouBlues (245548) on Monday February 18, 2002 @05:42PM (#3028574) Journal
    Why not G5's? Or x86-256s? Or those wacky 25x's? Who knows? (rhetorical) Slashdot is not a magic eight ball, and the folks who do have a clue are most likely under NDA's. My guess is either a G(some large number here) or an Itanium(some other large number here) that has a 128bit bus. And God willing, whichever wins will run Irix.
  • by Brian Stretch (5304) on Monday February 18, 2002 @05:47PM (#3028614)
    The huge die size of the Itanium and its upcoming successor makes the chip far more expensive than the Pentium series, so I would not expect Itanium machines for $2K. So far, the CPUs alone are several thousand dollars. I also haven't seen where its performance is that impressive. x86 code performance, since it's emulated, is poor. Recompile or else. Intel has sold, what, 500 Itanium CPUs?

    The upcoming AMD Hammer series, OTOH, is supposed to be about 30% faster clock-to-clock than the current Athlon XP series (which is considerably faster clock-to-clock than the Intel P4) and start at 2GHz. Sun's recent announcement of Linux x86 platform support, with details to come midyear, suggests that they'll be moving to the Hammer (to ship Q4). Sun would certainly love to take a swipe at Intel, and Sun has made positive comments about AMD's x86-64 Hammer architecture.

    Speculation: Intel gets Hammered in the second half of this year.
  • by S-prime (550519) on Monday February 18, 2002 @05:50PM (#3028631)
    Now that the G4 has finally gotten past the 1GHz mark, and Apple has a brand spanking new Unix-based OS running on it (and if you don't like it, you can run others), this opens up a whole new choice for the researcher looking for a new platform.
  • by joe_n_bloe (244407) on Monday February 18, 2002 @05:52PM (#3028646) Homepage
    Also featuring stinking fast floating point.
  • Dead? I doubt it. (Score:5, Interesting)

    by BlackStar (106064) on Monday February 18, 2002 @05:52PM (#3028648) Homepage
    SPARC dead? I'm not sure where you came across that idea. Having listened to a few talks down at JavaOne and chatted briefly with Marc Tremblay (head chip dude down there, father of MAJC and one designer of SPARC), they've already got designs down for the next two generations of SPARC: the IV is experimental, and the V is the next production level, as I understand it. MAJC seems to be the experimental platform they are using for smaller implementations and alternative ideas to be tried, based on some of Tremblay's theories.

    I may be off base on some of the details, but Sun has a unified approach from top to bottom, from tools to silicon for the systems they plan to deliver. I doubt it will just throw in the towel. Ultimately, Sun ships iron, and they lead the market in their segment.

    I don't see the basis for your assertion, and I don't know where you pulled the $1B cost figure from either.

    Alpha is AMD now, as that's where a good chunk of the people went. MIPS is still kicking, with the R14000 so far, but I won't speak to the future of that chip line. There are a lot of chip heads on this site with much better info than I have on many of the lines.

    One decent, although dated, summary is here.

    Please tell me there's more information you're basing this on than consumer workstation marketshare....

  • by Animats (122034) on Monday February 18, 2002 @05:53PM (#3028659) Homepage
    My own guess for the desktop is that NVidia will put a CPU core, probably from AMD, in the next generation of their nForce part. That puts CPU, graphics, networking, sound, disk control, and the motherboard logic on a single chip. Their current nForce part already has all of that but the CPU.

    If you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.

  • by putzin (99318) on Monday February 18, 2002 @05:56PM (#3028675) Homepage
    It's different because they haven't signed exclusive deals and used marketing to force other competitors out of the fray. Essentially, they will have priced the competitors out of the building. I'm not saying they aren't a monopoly, but realistically, it's harder to argue they did it illegally or unjustly.

    However, I still think that there will be room for others. AMD will probably succeed doing what they do best: outpace Intel in quality and undercut the price by ~10%. This has been successful (I hope it continues, I own stock) and will probably continue. And I doubt Sun is out. There may be changes coming, but I figure McNealy would sell his baby before using Intel chips. As for the others, they fell and never recovered. You can't charge super-high premiums when your competition is charging super-low premiums. A lot of corps assumed you could get away with it, and look what happened.

    The future is unwritten, so any sort of prediction is just fantasy for the most part. Step back to '95 and tell me who predicted 2000 or 2001. Reality is far more interesting than any professional opinion from the Gartner Group et al.
  • by joe_n_bloe (244407) on Monday February 18, 2002 @05:57PM (#3028676) Homepage
    ... that a runtime environment where "Hello World" will require, let's say, several GB of disk, a few hundred MB of RAM, continuous online updating (also requiring continuous hardware updating), and hundreds of old and newly-arriving security holes and exploits, is going to "take over the world."

    Granted, it's going to be popular for a while. But isn't what's popular *always* sucky?
  • My 2c (Score:4, Interesting)

    by UTPinky (472296) on Monday February 18, 2002 @06:05PM (#3028727) Homepage
    I had a professor last semester who worked at Intel, and several things he told us reminded me of something: it's still a business. In my opinion Intel will not make any huge move until they KNOW that they will profit off of it. This means that they won't make any major move until the consumer market is there. For example, he was telling us that there have been times where they came up with ideas that would in fact increase performance; HOWEVER, due to their wonderful job of brainwashing the entire public into thinking that clock speed is THE measure of performance, they scrapped the ideas because they would cost too much to implement and would result in no frequency increase. (Thanks, Intel)

    I also think that while AMD has shown that they can provide honest competition in terms of performance, it is going to be stuck following Intel's every move, for the mere reason that Intel is "sleeping with" so many big OEMs (*cough* Dell *cough*), leaving AMD as the CPU for the hobbyist.

    Well, anyways, that's just my 2c...
  • by camusatan (554328) on Monday February 18, 2002 @06:12PM (#3028760) Homepage
    The implicit assumption that the author is making here is that 64-bit CPUs such as Itanium will be the 'next big thing'. I'm not sure - 64-bit CPUs are really only necessary for machines that need more than 4 GB of VM space - and with various x86 addressing extensions (PAE's 36-bit physical addressing), some IA32 CPUs can address up to 64 GB, I think.

    Now don't get me wrong - 64-bit filesystems are great, and necessary - being limited to 2GB or 4GB files is terrible. But no 64-bit CPU is necessary for that kind of thing, the filesystem just has to be written as 64-bit (which is easier said than done, and could easily sacrifice backwards-compatibility with various API's, but I digress...).

    That being said - Intel might very well be moving down the wrong path - the Itanium is a huge, expensive, hot, completely new chip. Even Intel is hedging its bets on whether or not Itanium will take off - and AMD is poised to eat Intel's lunch with their new Hammer design.

    Who knows, perhaps all CPU's from now on will be compatible with x86 IA32, and innovation will be in the various processing units that sit behind the instruction-set decoder. Take a look at AMD or Transmeta for examples of that, already.
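For perspective on those addressing limits, the arithmetic is easy to check. A back-of-the-envelope sketch (the 36-bit figure assumes Intel's PAE physical-address extension; signed 32-bit offsets are what give the 2 GB file ceiling mentioned above):

```python
# Ceilings implied by address/offset width (back-of-the-envelope).
GiB = 2 ** 30

flat_32bit = 2 ** 32        # plain 32-bit virtual address space
pae_36bit = 2 ** 36         # PAE-style 36-bit physical addressing
signed_32bit_off = 2 ** 31  # largest file with signed 32-bit offsets

print(flat_32bit // GiB)        # → 4
print(pae_36bit // GiB)         # → 64
print(signed_32bit_off // GiB)  # → 2
```

All in GiB: a flat 32-bit space tops out at 4, PAE stretches physical memory to 64 while each process still sees 4, and a signed 32-bit file offset stops at 2.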

  • by dso (9793) on Monday February 18, 2002 @06:14PM (#3028767) Homepage
    Just look at the auto industry. GM, Ford, and Chrysler built the North American market by consolidating all the smaller auto companies, and they dominated for years. Then along came Honda, Toyota, and Nissan, and now they have made huge gains.

    The fact is that even though it looks impossible to overcome Intel at this point, someday someone will.

  • by Metrollica (552191) <m etrollica AT hotmail D0T com> on Monday February 18, 2002 @06:15PM (#3028769) Homepage Journal
    If Itanium fails you can be sure Intel will release the Yamhill, a chip much like AMD's Hammer.

    "It's pretty well understood that Itanium will not provide leadership x86 performance. That's Hammer's great hope, in fact. AMD's strategy depends on Intel mistakenly abdicating its x86 throne leaving Hammer and its descendants the heirs apparent to a software kingdom.

    Would Intel so cavalierly jeopardize its legacy? Not on your life. To no one's great surprise, Intel is rumored to be developing something that will give future Pentium processors--not IA-64 processors--a performance kick. In a perverse reversal of roles, Intel may actually be following AMD's lead in 64-bit x86 extensions. A "Hammer killer" technology, code-named Yamhill, may appear in chips late next year, about the time Hammer makes its debut. It's suggested that Intel's forthcoming Prescott processor will be based on Pentium 4, but with Yamhill 64-bit extensions that coincidentally mimic Hammer's. (Prescott is also rumored to be built on a 0.09 micron process and implement HyperThreading.)

    Naturally, the very existence of Yamhill, if it exists at all, is a diplomatically touchy subject at Intel HQ. The company doesn't want to undermine its outward confidence in Itanium and IA-64, but neither can it afford the possibility of ceding x86 dominance to a competitor. Besides, whether they appear in future Pentium derivatives or not, Intel's 64-bit extensions could appear in future IA-64 processors instead. New IA-64 features plus competitive x86 performance--now that's a compelling product."

    From ExtremeTech.

    Another article on Yamhill at The Register and ExtremeTech.
  • by SoftwareJanitor (15983) on Monday February 18, 2002 @06:21PM (#3028792)
    The only problem with AMD's 64 bit line is that it isn't going to be compatible with the Itanium. That is both good and bad. Good in that it is an alternative, bad in that it is going to cause a lot of confusion.

    I think a lot of people are too overconfident that Itanium is going to be successful, let alone quickly. It is going to require a lot of changes to software in order to take advantage of it, because it isn't just a 64-bit x86; it is a whole new architecture, one more closely related to HP's PA-RISC than to x86. It also may not do a very good job of running existing 32-bit code, which could slow down its acceptance, particularly in desktop systems. The last time Intel made a big push (with the iAPX 432) to create a whole new non-x86 processor family, it was less than successful. Although to be fair, the iAPX 432 was a radically different proposition, and the Itanium with its more proven PA-RISC roots looks a lot more sound.

    AMD's Hammer architecture, on the other hand, is more conservative, being an x86-family processor extended to 64 bits. It should require fewer modifications to existing software to take advantage of it, although an argument could be made that it won't have as much advantage to take, having more legacy issues with the aging x86 architecture. It also may perform a lot better on existing 32-bit code than Itanium. And if AMD's track record holds true, it will probably be significantly less expensive than the Itanium.

    A lot of whether it is Intel or AMD that paves the way for 64-bit mainstream CPUs will probably come down to which of them is first to offer an attractively priced product that runs existing 32-bit software well while being marketable as a 64-bit chip. Unfortunately for AMD, the marketing part is, as always, going to be tough. While AMD has been hugely successful in "white box" sales, where customers can choose their CPU, they've had a much more difficult time penetrating the big-name PC markets, particularly in higher-end systems. This despite the fact that in many cases an Athlon or Duron would offer better performance than a PIII or P4 at a better price.

  • by barfy (256323) on Monday February 18, 2002 @06:24PM (#3028805)
    There is little compelling need for desktop users (the ones that create the volume for commoditization) to move to 64 bit systems.

    Until there is a breakthrough brought on by computing speed, we will see a stall in computer upgrading, as we have seen in the past.

    I expect we will see more things like the iMac (very cool computers) before we see a press for new computers for speed.

    Two things, I think, will create the next-level breakthrough.

    Real-time CGI imaging at Toy Story / Monsters, Inc. / FF levels of quality. We can probably predict precisely WHEN that will be possible by mapping the development speed of 3D hardware, memory, software breakthroughs, and polygon density to date, and where the predictable bottlenecks will appear. (My suspicion is that we are 5-8 years away.)

    The other breakthrough which I think would do it, and right now it is very difficult to predict when it will happen, but I suspect that adoption would be pretty rapid, is real time voice interaction that is 5 9's accurate. This is likely to appear after a certain speed level of computers, and a breakthrough understanding/algorithm for speech recognition.

    However, I suspect the AMD x86-64 solution may be adopted much faster than the Itanium solution. Likely there is an app out there that may have a large enough niche to require 64 bit apps, and the rest of the apps on the computer would be 32 bit. I suspect that the app will be imaging or video related, and that will create an adoption around the AMD solution, before the Itanium moves out of the server market to the desktop market where it will be commoditized.
  • by Myxorg (528866) <> on Monday February 18, 2002 @06:24PM (#3028808)
    PPC 601 started at 60MHz (approximately the break-even point for the emulation layer)

    Actually, the break-even point wasn't reached until about 100 MHz or so; I'm not sure. But I do remember that when the first PPCs came out they were definitely slower than the old 040's. I still don't know how Apple pulled that one off (selling new computers that were essentially slower than previous models).
  • Re:Recurring problem (Score:3, Interesting)

    by Waffle Iron (339739) on Monday February 18, 2002 @06:24PM (#3028810)
    Speaking of badass mainframe processors, I was an intern at IBM in the mid 80's. The top-of-the-line mainframes used a central processor comprised of about 100 custom ECL chips mounted on a 4-inch-square 100-layer ceramic substrate.

    The whole thing was cased in a shiny metal module. Each chip had its own spring-loaded heat slug that transferred heat to the cooling liquid sent through the module's plumbing. (100 ECL chips == major kilowattage)

    They told me each CPU cost about $50,000. On a factory tour, I saw an entire pallet of these sitting on the floor, kind of like gold at Fort Knox.

    These things may not perform like today's chips, but they gave meaning to the term "Big Iron".

  • Re:Next? (Score:2, Interesting)

    by emmons (94632) on Monday February 18, 2002 @06:38PM (#3028879) Homepage
    Not really. Quantum computers aren't very good at adding, subtracting or a lot of other things that most programmers find come in handy from time to time these days. Boolean logic will still be prevalent for a VERY long time to come. It may not happen on silicon for that long, however.

    There's a lot of information in this thread, and specifically in this post.
  • by MrPerfekt (414248) on Monday February 18, 2002 @06:51PM (#3028956) Homepage Journal
    This is a typical example of someone lacking clue and claiming to be authoritative. I can admit I know nothing of most of the architectures there, but I can tell you about the SPARC.

    The UltraSPARC for workstations has always kind of been a niche market, for the simple reason that you can get an Intel box with far more hardware options and software support for far less money.

    However, the server market (which I doubt the submitter has ever had any experience in) is a different story. For the most part, hardware support is irrelevant if it does what you want it to do, which in most cases is just being some type of Internet server, be it Oracle databases or web servers or whatever. People that run critical servers and need the UltraSPARC's stability and Sun's support (and this can go for some other alternative architecture, like an IBM AS/400) almost always do buy something other than Intel for their mission-critical stuff.

    Anyway, my whole point is, just because you don't use it in your workstations (or your webserv0r on your DSL line) doesn't mean it's dead. Workstations and servers are, and hopefully always will be, very different to actual companies that need a different level of service from their servers. I suspect that because the submitter has a lunix server with a mandrake enterprise kernel, he thought he was an enterprise business.
  • Re:I don't get it. (Score:2, Interesting)

    by Walterk (124748) <<dublet> <at> <>> on Monday February 18, 2002 @06:56PM (#3028990) Homepage Journal
    Well, about buying SPARCs, the hardcore CS dept of our college is quite fond of SPARCs and they recently bought a whole butt load of Blade 100s to replace their SPARCStations 4 and 5, and I doubt they'll ever switch to another manufacturer.
    The management does prefer the Windows/x86 solution, but thankfully they could agree on 2 separate computer networks: a Novell one and a Sun one.

    There's nothing like 64 happy SPARCs humming away, especially in the summer when it gets 40-50 degrees inside :)
  • Re:Nano-technology (Score:2, Interesting)

    by Com2Kid (142006) <> on Monday February 18, 2002 @07:02PM (#3029033) Homepage Journal
    Hmm, full home computers?

    A mid-1980's home computer could EASILY be reduced to fit in my pocket.

    Actually, size in itself is not the problem. The bleeming screen is.

    Until we get some direct to retina or direct to optic nerve display technology, the size of a computer is always going to be limited by what the smallest display the user will stand for is.

    Well that and keyboard sizes.

    So you should also add direct mental input to your list of features that are needed.

    Quite frankly, if you give a modern day computer manufacturing facility the technologies that I have outlined above, and in a decent sized package, a computer could EASILY be made that fits in the palm of your hand.

    Hell, let's see now. Use a modified form of Sony's Memory Stick technology; they have gotten it packed down quite dense nowadays, so you really do not need such a large package if you are just going to store 4 or 8 megs of data.

    The CPU should be no problem. Since this is a business computer we are dealing with here, no FPUs are needed, and 66mhz or so should be enough to run a highly optimized operating system along with some standard business applications.

    Hmm, actually, have you seen those MP3 players that they sell in the stores nowadays? Yah, those ones, the ones that are about the size of two of my thumbs next to each other (or, to put it in references that mean something, heh, about an inch tall by a bit less than an inch wide).

    That is what we can do now days.

    It is just the friggen display technology that is holding us back.

    Everything else until then (new display technologies coming out) is just a stop gap measure designed to keep the technology sector alive.

    Why else do you think that the latest office applications require 500MHz+ to run? Seriously? (This is MS Office of course. Bleh. POS. . . . )

  • Re:Recurring problem (Score:2, Interesting)

    by joib (70841) on Monday February 18, 2002 @07:07PM (#3029052)
    Yes, very true. As I see it, there are currently 3 high-end CPU architectures with a future: SPARC, IBM's POWER, and IA-64. Producing (and designing) these chips gets more expensive all the time, and the upcoming extreme ultraviolet lithography is going to cost even more. So barring any big surprises, these are the 3 companies I'd bet on having the funds to play the game till the end of the current silicon technology. Hopefully whatever comes after silicon will have a cheaper entry to market, so there can be lots of new companies driving innovation and competition forward. And I think that might be very feasible: look at all the nanotechnology startups, and the research on molecular transistors, self-assembling systems and of course quantum computing. Very interesting stuff, albeit it will probably be >10 years before anything like that gets out of the research labs.

    Regarding supercomputing, you are quite correct that there is little incentive to develop new vector supercomputers, taking into account the increasing costs of designing and fabbing the CPUs. Cray's upcoming SV2 seems to be an exception, though. Also keep in mind that most supercomputers, like the Cray T3E and the IBM SP, are just commodity CPUs connected with a bad-ass proprietary interconnect. So essentially, they are just Beowulf clusters with a superfast, low-latency interconnect. They are programmed through MPI, the same thing used for Beowulf.

    While there are a few scientifically interesting applications which require essentially no communication between the nodes, like seti@home and folding@home (which, if you want to do something scientifically useful with your spare cycles, is worth checking out), most get by with a quite modest amount of communication between the nodes.

    What I personally see as interesting in the supercomputing area in the next few years is not the maximum flops of the DOE's newest let's-simulate-nuclear-explosions-big-ass-fastest-supercomputer-in-the-world thingy, but that the increasing affordability of commodity clustering will allow projects with somewhat more down-to-earth budgets to run some really interesting simulations. Commodity clustering is also improving: things like InfiniBand will probably offer higher bandwidth and lower latency than the current Myrinet / Gigabit Ethernet stuff. The improved SMP scalability of Linux 2.4 will perhaps also make it economically feasible to have, say, 4-CPU nodes in the cluster instead of the current practice of 1-CPU nodes. The downside is of course that applications must be able to take advantage of this, either by directly using a combination of MPI and threads/SHM/whatever or through frameworks like PETSc or POOMA.

    As a final note, remember that the tools (i.e. supercomputers) and the way we solve the problems at hand are not developed in a vacuum. The numerical methods being developed today certainly place a greater emphasis on solving the problem by scaling through clustering than those of, say, 20 years ago, when vector supercomputers were in the same price class as the parallel ones.
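The MPI-style scatter/compute/reduce pattern those clusters run can be sketched with nothing but the Python standard library; multiprocessing pipes stand in for the cluster interconnect, and every name below is illustrative rather than MPI's actual API:

```python
from multiprocessing import Pipe, Process

def node(conn):
    # Each "cluster node" receives its slice of the data, computes a
    # partial result locally, and sends it back over the interconnect.
    chunk = conn.recv()
    conn.send(sum(chunk))
    conn.close()

if __name__ == "__main__":
    data = list(range(1000))
    n_nodes = 4
    links, procs = [], []
    for i in range(n_nodes):
        parent_end, child_end = Pipe()
        p = Process(target=node, args=(child_end,))
        p.start()
        parent_end.send(data[i::n_nodes])  # scatter one slice per node
        links.append(parent_end)
        procs.append(p)
    total = sum(link.recv() for link in links)  # reduce the partials
    for p in procs:
        p.join()
    print(total)  # → 499500
```

The communication cost here is one message out and one back per node, which is the "quite modest amount of communication" regime the comment describes; a real MPI job would use MPI_Scatter/MPI_Reduce over a fast interconnect instead of pipes.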
  • by Anonymous Coward on Monday February 18, 2002 @07:08PM (#3029055)
    Give up on the clock - we are wasting tons of potential CPU resources! Clockless chips are inherently more efficient. Intel - wake up!
  • by screwtheNSA (538022) on Monday February 18, 2002 @07:20PM (#3029116) Homepage
    Again, many FORGET that Itanium is NOT a 32 bit processor, so no compilers geared solely for 32 bits will work properly with a CPU MADE for 64 bits! The whole chip architecture is ABOVE the x86 world as we buy it now, dump your thoughts of how clock speeds alone tell the "story" of how good or bad a CPU is.

    Intel and Itanium WILL give the royal BOOT to REAL workstations, no doubt about it! Many have posted this very same "story" on the DIFFERENCES in x86 and 64 bit processors, but NOBODY SEES the truth.

    Stop falling for the spiel about clock cycles and how "badly" the Itanium works with current compilers. Of course it's bad; it's NOT designed for x86 platforms!

    Try shoving a 6-cylinder, 300+ horsepower Lycoming into an ultralight and make it fly level, good luck!

    Itanium has too much "horsepower" over ANY x86 architecture in production now.
  • Will SUN make it? (Score:4, Interesting)

    by rcs1000 (462363) <rcs1000 AT gmail DOT com> on Monday February 18, 2002 @08:23PM (#3029389)
    The problem with discussions of Intel vs every other chip maker is they ignore the extraordinary differences in scale between the players.

    Let's compare: Sun is a company that produces operating systems (Solaris), computers, CPUs, motherboards, and a host of peripherals. (Plus it has to invent Java, J2EE, etc.) Its R&D budget was $2.0bn in 2001.

    Intel is 95% CPUs. It spent $3.8bn on R&D in 2001.

    Intel has the world's most productive fabs. Its capex budget is so huge, it can order the lithography companies and the like to build to order inside its factories. Result: its yields are 25% better at the start, and still 10-12% better after 6-9 months.

    It is incredibly difficult for anyone to keep up with the Intel machine. I wish it weren't so; but it is.

  • by maraist (68387) <michael DOT mara ... DOT n0spam DOT > on Monday February 18, 2002 @08:55PM (#3029518) Homepage
    if you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.

    There's more to [CG]PU complexity than transistor count. Look at the 512Mbit memory cells that run for only a couple dollars a chip.

    The trick is inter-related logic complexity. To my understanding, the existing GPUs have no backward-compatibility issues (so much of the x86 overhead is avoided), and the core itself is pipelined and modular, so the complexity is spread out across the whole chip: independent teams can work on their own components with little concern for sister components, whereas x86 designs, which have every ounce of performance squeezed out of them, require complete coordination. Further, graphics acceleration is simply the application of graphical algorithms in silicon. While I'm not quite sure which algorithms there are, the possibilities are endless. Imagine a fast Fourier transform implemented as a SIMD floating-point instruction: you create an array of floating-point logic units and interconnect them. The floating-point unit is pretty much a common off-the-shelf design, so the only real logic you apply is the interconnectivity.

    I'm not saying that GPUs are easy to design; I'm just saying that hardware filters are designed this way all the time, and I wouldn't be surprised if a large percentage of the nVidia chips were stock logic modules.
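To make the FFT example above concrete: the radix-2 Cooley-Tukey butterfly below is the fixed data-flow pattern such a silicon implementation would wire between its floating-point units. This is a software sketch of that interconnect structure, not any vendor's actual design:

```python
import cmath

def fft(x):
    # Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    # Each recursion level corresponds to one "stage" of butterfly
    # interconnect that a hardware version would lay out in silicon.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle multiply plus an add and a subtract: one butterfly.
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

print(fft([1, 1, 1, 1]))  # → [(4+0j), 0j, 0j, 0j] (DC component only)
```

Note that the per-stage arithmetic is identical at every butterfly, which is exactly why the comment's "array of off-the-shelf FPUs plus interconnect" framing works for this class of algorithm.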

  • Re:I don't get it. (Score:2, Interesting)

    by Peter McC (24534) on Monday February 18, 2002 @09:36PM (#3029702) Homepage
    Well, SGI uses MIPS cpus for all their IRIX machines. This influences the argument, but I'm not sure which way :)

  • What's the problem? (Score:1, Interesting)

    by Anonymous Coward on Monday February 18, 2002 @11:54PM (#3030012)
    I'm just an uninformed idiot, but what's the big problem? Why was the transition from the 286's 16-bit architecture to the 386's 32-bit architecture so seamless, while the transition from 32 to 64 is such a big hassle? Is there no backwards compatibility or something?

  • by guacamole (24270) on Tuesday February 19, 2002 @04:45AM (#3030727)
    My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)?

    When people start buying Itanium systems in volume, then the prices will drop. The reason they're expensive is not that the chips are hard to come by but that no one wants to buy them right now.

    However, this comment alone makes me wonder about the poster's cluelessness. He obviously hasn't worked in any real production environment. You people should realize that you simply can't build the kind of systems that Dell, HP, etc. sell -today- out of commodity components. Take a look at a typical high-end SMP Dell server: proprietary OEM motherboard, proprietary case, hot-swap hard drives, hot-swap redundant power supplies and cooling, LOM support, etc. All components have been carefully designed to work together to produce a reliable and scalable server system. You will never build the same kind of system on your own, and if you do, it's not going to be cheaper than buying one. Plus you don't get the vendor support.

    The comment about SPARC being dead is completely astonishing at a time when Sun is -THE- Unix market leader. SPARC CPUs were never faster than the competition, but that didn't worry Sun users as long as they were up to par with the competitors. The reason people buy Sun hardware is not the CPUs (a CPU alone is useless) but Solaris, which is THE enterprise-class OS, plus its applications, Sun's excellent support, the massive multiprocessor scalability of Sun systems, massive I/O bandwidth, etc.

    Sun's current chip (the UltraSPARC III) is not bad at all, and Sun is working on the UltraSPARC V.
