Why Do We Use x86 CPUs?

bluefoxlucid asks: "With Apple having now switched to x86 CPUs, I've been wondering for a while why we use the x86 architecture at all. The Power architecture was known for its better performance per clock; and still other RISC architectures such as the various ARM models provide very high performance per clock as well as reduced power usage, opening some potential for low-power laptops. Compilers can also deal with optimization in RISC architectures more easily, since the instruction set is smaller and the possible scheduling arrangements are thus reduced greatly. With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work. So really, what do you all think about our choice of primary CPU architecture? Are x86 and x86_64 a good choice; or should we have shot for PPC64 or a 64-bit ARM solution?" The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market?
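To make the dynamic-translation idea concrete: the core of such a system is a translation cache keyed by guest code address. Below is a minimal sketch in C, assuming a hypothetical translate_block() that compiles one basic block of x86 code into native ARM/PPC code; every name here is illustrative rather than taken from any real translator.

    /* Minimal sketch of a dynamic (JIT) binary translator's main loop.
     * translate_block() is assumed to compile one basic block of guest
     * (x86) code into host (ARM/PPC) code and return a callable stub. */
    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t guest_addr_t;
    typedef guest_addr_t (*host_block_fn)(void);   /* returns next guest PC */

    #define CACHE_SLOTS 4096
    struct tcache_entry { guest_addr_t pc; host_block_fn code; };
    static struct tcache_entry tcache[CACHE_SLOTS];

    /* Hypothetical: provided by the translator proper. */
    extern host_block_fn translate_block(guest_addr_t pc);

    static host_block_fn lookup_or_translate(guest_addr_t pc)
    {
        struct tcache_entry *e = &tcache[pc % CACHE_SLOTS];
        if (e->code == NULL || e->pc != pc) {      /* miss: translate once */
            e->pc = pc;
            e->code = translate_block(pc);
        }
        return e->code;                            /* hit: reuse host code */
    }

    void run_guest(guest_addr_t entry_pc)
    {
        guest_addr_t pc = entry_pc;
        for (;;)
            pc = lookup_or_translate(pc)();        /* execute block, follow next PC */
    }

Hot, straight-line code mostly hits the cache and can run at near-native speed; the cost shows up on misses, indirect branches, and self-modifying code, which is where "painlessly" gets optimistic.
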
  • Easy (Score:5, Insightful)

    by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Thursday January 04, 2007 @11:15AM (#17458188) Homepage Journal
    Until someone replaces the PC.

    PC architecture sits in a local minimum, where the fastest route to greater profits lies in improving existing designs rather than developing new approaches.

    The reason "We" use x86 is because "we" use PCs, where x86 technology is dominant and obvious. However, "we" also use PDAs, cell phones, TiVos and even game console systems. As the functions of those devices melt into a new class of unified devices, other architectures will advance.

    The real irony is that, for most of these other devices, the underlying architecture is invisible. Few know that Palm switched processors a few years back. Fewer still know what kind of cpu powers their cell phone.
    • Re:Easy (Score:5, Funny)

      by SheeEttin ( 899897 ) <[moc.liamg] [ta] [nitteeehs]> on Thursday January 04, 2007 @11:27AM (#17458356) Homepage
      Fewer still know what kind of cpu powers their cell phone.
      I think mine is silicon-based.
      • Re: (Score:3, Funny)

        by ajs318 ( 655362 )
        "For external use only" -- ah, so that's why the Essex girl was standing in the pouring rain applying Savlon to a cut!
      • Re:Easy (Score:5, Funny)

        by Pollardito ( 781263 ) on Thursday January 04, 2007 @06:50PM (#17466254)

        Fewer still know what kind of cpu powers their cell phone.
        I think mine is silicon-based.
        trust me, it's real and it's fantastic
    • Re:Easy (Score:5, Interesting)

      by HappySqurriel ( 1010623 ) on Thursday January 04, 2007 @12:01PM (#17458842)
      The reason "We" use x86 is because "we" use PCs, where x86 technology is dominant and obvious. However, "we" also use PDAs, cell phones, TiVos and even game console systems. As the functions of those devices melt into a new class of unified devices, other architectures will advance.

      Honestly, I think it is much simpler than that ...

      The problem has very little to do with the processors that are used and is entirely related to the software that we run. Even in the 80s/90s it would have been completely possible for Microsoft to support a wide range of processors (if their OS was designed correctly) and produce OS-level libraries which abstracted software development from needing to directly access the underlying hardware; on install, the necessary files would be re-compiled and all off-the-shelf software could run on any architecture that Windows/DOS supported. In general, the concept is combining standard C/C++ with libraries (like OpenGL) and recompiling to ensure that no one was tied to a particular hardware architecture.

      Just think of how many different architectures Linux has been ported to, if DOS/Windows was built in a similar way you'd be able to choose between any architecture you wanted and still be able to run any program you wanted.
      • Re: (Score:3, Insightful)

        by Canthros ( 5769 )
        I think you severely overestimate computing power in the 80s and 90s. I last compiled XFree86 around 2000. Took a bit over a day.

        I really wouldn't have wanted to wait that long to install Office or DevStudio.
        • Well ...

          Compiling from source could (potentially) have taken too long for most setups, but that doesn't mean that the general idea was impossible. If you started from an intermediate step in the compile process you could (hypothetically) greatly reduce the compile time, or (after '95 or so, when CD-ROM drives were available on most PCs) you could have included dozens of binaries for the most common setups and the source code in case someone wanted to run the program on hardware that wasn't currently supported.
          • Re: (Score:3, Insightful)

            by Canthros ( 5769 )
            Probably they would have included binaries.

            Even so, there are pretty major difficulties, even now, with the idea of write once, run anywhere. Although the interface to a library may be standard across multiple platforms, it isn't necessarily the case that the implementation is equally robust, let alone identically so. The observation that it was 'completely possible' neglects any consideration of practicality or commercial viability. I'm not saying that it was too impractical to happen, but you gloss over a
      • Re:Easy (Score:5, Insightful)

        by Marillion ( 33728 ) <ericbardes&gmail,com> on Thursday January 04, 2007 @12:21PM (#17459128)

        Windows NT was designed to run on i386, MIPS, PPC and Alpha. Over the years, Microsoft discontinued support for the various platforms - IIRC MIPS and Alpha ended with NT3.51 - PPC ended with NT4. NT5 (aka - Windows 2000) was the first i386 only version of NT.

        • Re: (Score:3, Interesting)

          by Marillion ( 33728 )

          Random note after I clicked submit - The designers of the DEC Alpha chip designed it as a 64-bit big-endian chip. Microsoft convinced DEC to add a feature to the Alpha that switched the CPU to a 32-bit little-endian mode so that Microsoft wouldn't have to recode all their apps that processed binary files with the assumption that integers were four-byte little-endian.

        • Re:Easy (Score:5, Insightful)

          by ergo98 ( 9391 ) on Thursday January 04, 2007 @01:10PM (#17460082) Homepage Journal
          Windows NT was designed to run on i386, MIPS, PPC and Alpha. Over the years, Microsoft discontinued support for the various platforms

          At the time Microsoft was hedging their bets as everyone ranted about the next generation of RISC chips storming the market. Endlessly we'd hear about how the x86 line was dead, and the only improvements would be to move to the next generation (Intel themselves was heavily focusing on RISC as well -- think of the i860).

          We never did because those alternatives never delivered on their promise. Ever since 680x0 processors, the highest performance per dollar crown has been held, with only a couple of very short term or niche exceptions, by the x86 line. When given the option of running software on a variety of platforms, customers always ended up buying the x86.

          Look at Linux -- so many platforms supported, yet on the desktop and up level, overwhelmingly it's run on x86. This has been the case over its history.
          • Re:Easy (Score:5, Insightful)

            by king-manic ( 409855 ) on Thursday January 04, 2007 @02:13PM (#17461414)

            At the time Microsoft was hedging their bets as everyone ranted about the next generation of RISC chips storming the market. Endlessly we'd hear about how the x86 line was dead, and the only improvements would be to move to the next generation (Intel themselves was heavily focusing on RISC as well -- think of the i860).

            We never did because those alternatives never delivered on their promise. Ever since 680x0 processors, the highest performance per dollar crown has been held, with only a couple of very short term or niche exceptions, by the x86 line. When given the option of running software on a variety of platforms, customers always ended up buying the x86.

            Look at Linux -- so many platforms supported, yet on the desktop and up level, overwhelmingly it's run on x86. This has been the case over its history.
            Part of the problem is the consumer thinks "I've spent $X on software. If I move to platform B I would have wasted $X," so one of the primary reasons we're still with x86 is investment in software.

            As for Linux: choice isn't good. People get overwhelmed easily, so anything more than 2 or 3 choices makes them go cross-eyed and buy Windows. I myself operate mostly on Windows, occasionally on Red Hat/Mandrake, and very rarely on Mac OS X.
          • Re: (Score:3, Insightful)

            At the time Microsoft was hedging their bets as everyone ranted about the next generation of RISC chips storming the market. Endlessly we'd hear about how the x86 line was dead, and the only improvements would be to move to the next generation (Intel themselves was heavily focusing on RISC as well -- think of the i860).

            Not to worry, RISC won the day. x86 chips these days are RISC chips with a thin veneer of x86ness.

          • Re:Easy (Score:4, Insightful)

            by epine ( 68316 ) on Thursday January 04, 2007 @08:21PM (#17467464)

            I agree that the alternatives never delivered on their promises. That, more than anything else, is why we still use x86. The problem for the alternatives has always been that the liabilities of the x86 design were never as bad as people made them out to be. Certainly the x86 went sideways with the 286, but it returned to sanity again with the 386 and most of those worthless 286 protection modes are just a bit of wasted die area in the microcode array which gets smaller and smaller with each new shrink.

            The old story "make it easy for the compilers" is vastly overstated. Modern compilers schedule non-orthogonal register sets extremely well. What is hard for the compilers is crazy amounts of loop unrolling involving asynchronous memory access patterns. Out-of-order execution actually makes life a lot *easier* for the compilers than more rigid approaches such as Itanium or Cell, at the cost of extra heat production. Now that we are hitting the thermal wall, maybe it's time for the compilers to finally earn their keep. I've never understood the sentiment of making it easy for the compiler as opposed to, say, writing a very good compiler that implements the best algorithm available.

            The key is to choose an aggressive target for what the compiler can reasonably contribute, and then ensure that the compiler achieves that target in a good way. Some of the VLIW efforts stand out in my mind as shining examples of asking the compiler to do a little more than could reasonably be expected. OTOH, I see no reason to simplify an instruction set to the point where a college student could implement the register scheduling algorithm over a long weekend.

            Yes, a few things could have been done far better from the outset, such as a more uniform instruction length (though not necessarily as regimented as ARM), a register based floating point unit, and a treatment of the flag register with fewer partial updates. But really, how much else of what was conceived in the early eighties doesn't have greater flaws? We still use email, barely.

            The time for debate over instruction sets has long passed. The future debate concerns the integration of heterogeneous cores such as the IBM Cell and the future spawn of the AMD/ATI assimilation. I have no doubt someone is out there right now inventing an interconnect everyone will be wistfully lamenting twenty years from now. If only we had known and done something about it (other than this).

            After twenty years, it's time to face the music: x86 is more ugly than bad.

      • Even in the 80s/90s it would have been completely possible for Microsoft to support a wide range of processors ( if their OS was designed correctly )

        Microsoft did do that: there was a time when NT ran on X86, DEC Alpha, and PowerPC machines. (Okay, that's not a huge range, but the point stands.)

        X86s became cheaper and cheaper, and continuing development of NT on !X86 became financially infeasible due to the rapidly-shrinking market share for non-PC platforms.

      • Re: (Score:3, Insightful)

        by soft_guy ( 534437 )
        By using X86 as the primary architecture for Windows, binary compatibility is maintained. While Linux is ported to many different architectures, it doesn't have binary compatibility between them. This is why most Linux software seems to be distributed as source that the user then has to compile. Obviously for closed source models, this is not acceptable.

        Use of Universal binaries (e.g. OS X) wastes disk space that up until recently would have been too costly for most people.
    • Re:Easy (Score:4, Insightful)

      by samkass ( 174571 ) on Thursday January 04, 2007 @01:10PM (#17460102) Homepage Journal
      The real irony is that, for most of these other devices, the underlying architecture is invisible. Few know that Palm switched processors a few years back. Fewer still know what kind of cpu powers their cell phone.

      And I think this will be increasingly true of the desktop. And since all the best desktop CPUs are x86, it'll probably stay x86.

      From an engineering design point of view, it's possible some of these other architectures are cleaner or easier to design or code to, but from a desktop chip and compiler point of view, x86 is the best. It has the best codegen compilers, despite the fact that it may have taken more effort to get to that state, and the CPUs are damn fast at reasonable (although perhaps not optimal) power consumption levels. Talk of theoretical alternate options is academic. Unless one of those other options can produce a chip and compiler that's so much better than x86 that it would be worth moving to, emulation overhead during transition and all, it's not a practical discussion.
  • easy (Score:5, Funny)

    by exspecto ( 513607 ) * on Thursday January 04, 2007 @11:16AM (#17458204)
    because they don't cost an ARM and a leg and they don't pose as much of a RISC
  • momentum (Score:5, Informative)

    by nuggz ( 69912 ) on Thursday January 04, 2007 @11:17AM (#17458226) Homepage
    Change is expensive.
    So don't change unless there is a compelling reason.

    Hard to optimize? You only have to optimize the compiler once; spread over millions of devices, this cost is small.

    Runtime interpreter/compilers, you lose the speed advantage.

    Volume and competition make x86-series products cheap.
    • Re: (Score:2, Informative)

      by jZnat ( 793348 ) *
      GCC is already architected such that it's trivial to optimise the compiled code for any architecture, new or old. Parent's idea is pretty much wrong.
      • Re:momentum (Score:4, Interesting)

        by Frumious Wombat ( 845680 ) on Thursday January 04, 2007 @11:33AM (#17458430)
        If you're dead-set on using GCC, yes. Alternately, if you use the native compilers which only have to support a fairly narrow architecture, you can get much higher performance. XLF on RS/6K and Macs was one example (capable of halving your run-time in some cases), IFORT/ICC on x86 and up, or FORT/CCC on DEC/Compaq/HP Alpha-Unix and Alpha-Linux were others. Currently GCC is not bad on almost everything, but native-mode compilers will still tend to dust it, especially for numeric work.

        Which brings back the other problem; not only are x86 chips cheap to make, but we have 30 years of practice optimizing for them. Their replacement is going to have to be enough better to overcome those two factors.
        • Re: (Score:3, Informative)

          by UtucXul ( 658400 )

          IFORT/ICC on x86 and up

          Funny thing about IFORT is that while in simple tests it always outperforms g77 (I've since switched to gfortran, but haven't tested it too well yet), for complex things (a few thousand lines of FORTRAN 77 using mpi), it is very unpredictable. I have lots of cases where g77 outperforms ifort in real world cases (as real world as astronomy gets anyway) and cases where ifort wins. It just seems to me that either ifort is not the best compiler, or optimizing for x86 is funnier busin

          • Re: (Score:3, Interesting)

            The time I saw the runtime halved, it was a piece of code that had been rearranged to run well on early-90s Cray/Alliant vector machines. Dusty-deck Fortran that grew up on VAXes or IBM mainframes doesn't do nearly as well, though I still can generally get 15-30% vs. g77. I can also get weird bugs, as much of that code depends on, *ahem*, "features" left over from Fortran-66 which have since gone away and modern compilers (which are really F90/F95 compilers) don't support well.

            Personally, I'm sorry I w
        • Actually, I've got to say that while IFORT/ICC do well on simple tests, it's not always cut and dried in the real world. gcc/g77 can do just as well as IFORT/ICC on more complex programs, and sometimes they do better. Other times, IFORT/ICC do better. The bottom line is that optimizing for the legacy cruft that is the x86 architecture just isn't very straightforward. There's a lot of voodoo involved, as any x86 assembly language programmer worth their salt will tell you.
      • Re: (Score:2, Informative)

        GCC 4.x is designed to enable optimizations that will work across architectures, by providing an intermediate code layer for compiler hackers to work with.

        There are still optimizations possible at the assembly level for each architecture that depend on the quirks and features of those architectures and even their specific implementations.

        The intermediate level optimizations are intended to reduce code duplication by allowing optimizations common across all architectures to be applied to a common intermediate representation.
    • Re:momentum (Score:5, Interesting)

      by Chris Burke ( 6130 ) on Thursday January 04, 2007 @01:31PM (#17460546) Homepage
      You're absolutely right, it's all about momentum.

      Hard to optimize? You only have to optimize the compiler once, over the millions of devices this cost is small.

      This is a red herring anyway. RISC being simpler has nothing to do with it being easier to optimize. If it is easier for a compiler to optimize simple RISC-like instructions, then the compiler can use RISC-like instructions that are present in x86. This has been the situation for years and years. Compilers use a basic subset of x86 that looks a lot like RISC (minus variable instruction lengths), but also with some of the decent syntactic sugar of x86 like push/pop and load-ops (you know: add eax, [esp + 12] to do a load and add in one inst).
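
      A small C fragment makes the load-op point concrete; the assembly in the comments is approximate and depends on compiler and flags:

        /* Illustration of the "load-op" sugar mentioned above; the assembly
         * in the comments is approximate, for illustration only. */
        int add_from_memory(int acc, const int *p)
        {
            /* x86 can fold the load into the arithmetic instruction:
             *     add eax, [edx]          ; load + add in one instruction
             * A load/store RISC needs two:
             *     lw   t0, 0(a1)
             *     addu v0, a0, t0
             */
            return acc + *p;
        }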

      The only real obstacle for compilers optimizing x86 is the dearth of registers. With fast L1 caches and stack-engine tricks like in Core Duo, the performance hit for stack spillover isn't big, and x86-64 basically solves the problem by doubling the register space. Fewer than a RISC machine, but enough for what research has shown is typically needed. Maybe still a little too few, but combined with the first point enough to make this a wash.

      These arguments are as old as RISC itself, but the basis behind them has changed as the technology has changed. All of the performance, efficiency, and other technical arguments have been put to pasture in terms of actual implementations. In the end, it comes down to this:

      The only reason not to use x86 is because it is an ugly mess that makes engineers who like elegance cry at night.
      The only reason to use x86 is because it runs the vast majority of software on commodity chips.

      Which of these factors dominates is not an open question; it has already been decided. It's just that those engineers who like elegance can't accept it, and thus keep bringing it up. Believe me, I don't like it either, but I don't see the point of screaming at reality and demanding that it change to suit my aesthetics.
  • by plambert ( 16507 ) * on Thursday January 04, 2007 @11:19AM (#17458250) Homepage
    The reason given, which people seem to keep forgetting, was pretty simple and believable:

    Performance per watt.

    The PPC architecture was not improving _at all_ in performance per watt. Apple's market was growing fastest in the portable space, but it was becoming impossible to keep temperatures and power consumption down with PPC processors.

    And IBM's future plans for the product line were focusing on the Power series (for high-end servers) and the Core processors (for Xbox 360's) and not on the PowerPCs themselves.

    While I've never had any particular love for the x86 instruction sets, I, for one, enjoy the performance of my Macbook Pro Core 2 Duo, and the fact that it doesn't burn my lap off, like a PowerPC G5-based laptop would.
    • by RAMMS+EIN ( 578166 ) on Thursday January 04, 2007 @11:36AM (#17458468) Homepage Journal
      ``Performance per watt.''

      Not as I remember. As I remember, the PPW of PowerPC CPUs was pretty good, and getting better thanks to Freescale, but the problem was that Freescale's CPUs didn't have enough raw performance, and IBM's didn't have low enough power consumption. Freescale was committed to the mobile platform and thus was only focusing on PPW, whereas IBM was focusing on the server market, and thus favored performance over low power consumption. Seeing that the situation wasn't likely to improve anytime soon, Apple switched to Intel.
      • by Tim Browse ( 9263 ) on Thursday January 04, 2007 @01:31PM (#17460536)

        Not as I remember.

        I'm confused - didn't you just then go on to confirm that Apple switched to Intel because they got better performance per watt, as the original poster said?

        Or do you mean IBM's performance per watt was really good, but had a lower bound on power consumption that was too much for Apple's laptops/iMacs/Mac minis?

        I was certainly under the impression that Apple were very frustrated with not being able to produce a 'state of the art' laptop with the G5 chipset (hence this famous image [revmike.us]), and saw the Pentium M and what else Intel had coming, and decided the world of x86 was good. Performance per watt was definitely the reason Jobs gave when he announced the Intel switch.

    • Re: (Score:3, Insightful)

      by loony ( 37622 )
      > The PPC architecture was not improving _at all_ in performance per watt. Apple's market was growing fastest in the portable space, but it was becoming impossible to keep temperatures and power consumption down with PPC processors.

      Dual core 2GHz PPC below 25W isn't an improvement I guess? Look at PA-Semi...

      Seriously, this has nothing at all to do with it.. What home user really cares if their PC takes 150W or 180W ? Nobody...

      Peter.
      • by Idaho ( 12907 )
        Seriously, this has nothing at all to do with it.. What home user really cares if their PC takes 150W or 180W ? Nobody...

        In addition to the fact that some people *do* actually care about the power savings, even if you don't you should still care because most of that power is transformed into heat, which the processor has to somehow get rid of. So you need larger (heavier) heatsinks, CPU coolers etc. not to mention that high power usage often means that increasing the size of the die and/or the clock frequen
      • by Wdomburg ( 141264 ) on Thursday January 04, 2007 @12:27PM (#17459232)
        Dual core 2GHz PPC below 25W isn't an improvement I guess? Look at PA-Semi...

        You mean a processor from a fabless company, announced six months after Apple announced the switch to Intel, that wasn't expected to sample until nearly a year after the first Intel Macintosh shipped? It's an interesting product, particularly if the performance of the cores is any good (hard to say, since there doesn't seem to be much in the way of benchmarks), but it didn't exist as a product until recently. Even if it had, there's the significant question of whether they could secure the fab capacity to supply a major customer like Apple.

        What home user really cares if their PC takes 150W or 180W ? Nobody...

        Desktops aren't the dealbreaker here. Try asking "who cares if their laptop runs for 5 hours or 3 hours?" or maybe "who cares if their laptop can be used comfortably in their lap?" or perhaps "who cares if they can get reasonable performance for photo-editing, video-editing and what not in a portable form factor?".

        Cast an eye toward the business market and performance per watt on the desktop is important. You may not care about a 30W savings, but a company with 500 seats may well care about 28,800 kWh in savings per year (30 W × 500 seats × 8 hours × 240 working days a year, after factoring out weekends, holidays and vacation).
      • Re: (Score:3, Insightful)

        by soft_guy ( 534437 )

        What home user really cares if their PC takes 150W or 180W ? Nobody...

        Anyone using a laptop?

        Seriously, the reason for Apple to switch to x86 was due to the fact that Intel has focused their business around creating the best possible chips for laptop computers. Laptops are now more than 50% of Macintoshes sold (and probably more than 50% of personal computers in general).

        More money is being spent on R&D for the x86 architecture than anything else and this R&D is focused like a laser on the PC laptop/desktop use case.

        There was a time when the best chip available for a

  • Good question... (Score:5, Informative)

    by kotj.mf ( 645325 ) on Thursday January 04, 2007 @11:19AM (#17458254)
    I just got done reading about the PWRficient [realworldtech.com] (via Ars):
    • Two 64-bit, superscalar, out-of-order PowerPC processor cores with Altivec/VMX
    • Two DDR2 memory controllers (one per core!)
    • 2MB shared L2 cache
    • I/O unit that has support for: eight PCIe controllers, two 10 Gigabit Ethernet controllers, four Gigabit Ethernet controllers
    • 65nm process
    • 5-13 watts typical @ 2GHz, depending on the application

    Now I have to wait for the boner this gave me to go away before I can get up and walk around the office.

    Maybe Apple could have put off the Switch after all...

    • Re: (Score:3, Funny)

      Now I have to wait for the boner this gave me to go away before I can get up and walk around the office.

      It's called priapism [wikipedia.org]; you might want to mosey on over to the emergency room quickly!
  • Why do we ... (Score:5, Insightful)

    by jbeaupre ( 752124 ) on Thursday January 04, 2007 @11:22AM (#17458294)
    Why do we drive on the right side of the road in some places, left in others?
    Why do most screws tighten clockwise?
    Why do we use a 7-day calendar, 60-second minutes, 60-minute hours, and a 24-hour clock like the Sumerians instead of base 10?
    Why do we count in base 10 instead of binary, hex, base 12?
    Why don't we all switch to Esperanto or some other idealized language?
    Or if you're familiar with the story: Why are the Space Shuttle boosters the size they are?

    Because sometimes it's easier to stick with a standard.

    There. Question answered. Next article please.
    • by eln ( 21727 )
      Why is this a troll? The momentum behind the standard is clearly the primary reason x86 architecture is still so dominant in computing these days.
    • by SQLGuru ( 980662 )
      The link for those that are asking about the boosters: http://www.astrodigital.org/space/stshorse.html [astrodigital.org]

      (I didn't know and had to look it up, myself).

      Layne
      • Re: (Score:3, Informative)

        The chariot and horse's ass as a determiner of standard gauge for railroads story, while entertaining, is untrue [discoverlivesteam.com].
    • Re: (Score:3, Informative)

      by terrymr ( 316118 )
      "The railroad line from the factory had to run through a tunnel in the mountains. The SRBs had to fit through that tunnel. The tunnel is slightly wider than the railroad track, and the railroad track is about as wide as two horses' behinds. So, the major design feature of what is arguably the world's most advanced transportation system was determined over two thousand years ago by the width of a Horse's Ass!"

      from snopes.com [snopes.com]
    • Binary, hex, and base 12 aren't "natural" since we have 10 fingers to count with.

      That being said, if our ancestors hadn't used thumbs, or had used them as carry bits or to extend our counting range to 32, we would all be using octal.
  • Chicken and Egg (Score:4, Interesting)

    by RAMMS+EIN ( 578166 ) on Thursday January 04, 2007 @11:24AM (#17458324) Homepage Journal
    I think it's a chicken and egg proposition. We use x86 because we use it. Historically, this is because of the popularity of the PC. A lot of people bought them. A lot of software was written for them. Other architectures did not succeed in displacing the PC, because of the reluctance of people to abandon their software. Now, with years and years of this happening, the PC has actually become the most performant platform in its price class, while simultaneously becoming powerful enough that it could rival Real computers.

    Slowly, other architectures became more like PCs: Alphas got PCI buses, Power Macs got PCI buses, Sun workstations got PCI buses, etc. Eventually, the same happened to the CPUs: the Alpha line was discontinued, Sun started shipping x86-64 systems, and Apple started shipping x86 systems. The reason this happened is that most of the action was in the PC world; other platforms just couldn't keep up, in price and performance.
  • by ivan256 ( 17499 ) on Thursday January 04, 2007 @11:26AM (#17458342)
    The reason is that intel provides better infrastructure and services than any other high performance microprocessor vendor in the industry. When Motorola or IBM tried to make a sale, intel would swoop in and offer to develop the customer's entire board for them. The variety of intel reference designs is unmatched. Intel not only provides every chip you need for a full solution, but they do it for more possible solution sets than you can imagine. Intel will manufacture your entire product including chassis and bezel. Nobody even comes close to intel's infrastructure services. That is why even when other vendors have had superior processors for periods of time over the years, intel has held on to market leadership. There may be other reasons too, but there don't have to be. That one alone is sufficient.

    The other answer, of course, is that we don't always... ARM/xScale has become *very* widely used, but that is still coming from intel. There are also probably more MIPS processors in people's homes than x86 processors since the cores are embedded in everything.
    • Re: (Score:2, Interesting)

      Interesting note: Intel did sell its ARM/xScale off a few months ago.
    • Re: (Score:2, Informative)

      by Cassini2 ( 956052 )
      Intel's support infrastructure also includes some of the best semiconductor fabrication facilities in the business. Intel has consistently held a significant process advantage at its fabs (fabrication facilities) over the life of the x86 architecture. Essentially, no one else can deliver the volume and performance of chips that Intel can. Even AMD is struggling to compete against Intel (90 nm vs 65 nm).

      The process advantage means Intel can get a horrible architecture (x86) to perform acceptably at a dece
  • ``The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market? ''

    These two things have little to do with one another. Intel isn't the only company making x86 CPUs. It's entirely possible for x86 to stick around while Intel is displaced.
  • Internally, it is. And with modern CPUs, the IPC is better than with PowerPC.
  • Where The Money Is (Score:5, Insightful)

    by spoonboy42 ( 146048 ) on Thursday January 04, 2007 @11:36AM (#17458470)
    There's no doubt that x86 is an ugly, hacked-together architecture whose life has been extended far beyond reason by various extensions which were hobbled by having to maintain backwards compatibility. x86 was designed nearly 30 years ago as an entry level processor for the technology of the day. It was originally built as a 16-bit architecture, then extended to 32-bit, and recently 64-bit (compare to PowerPC, designed for 64-bit and, for the earlier models, scaled back to 32-bit with forward-looking design features). Even the major x86 hardware vendors, Intel and AMD, have long since stopped implementing x86 in hardware, choosing instead to design decoders which rapidly translate x86 instructions to the native RISC instruction set used by the cores.

    So why the hell do we use x86? A major reason is inertia. The PC is centered around the x86, and there are mountains and mountains of legacy software in use that depend on it. For those of us in the open-source world, it's not too difficult to recompile and maintain a new binary architecture, but for all of the software out there that's only available in binary form, emulation remains the only option. And although binary emulation of x86 is always improving, it remains much slower than native code, even with translation caches. Emulation is, at this point, fine for applications that aren't computationally intensive, but the overhead is such that the clocks-per-instruction and performance-per-watt advantages of better-designed architectures disappear.

    A side effect of the enormous inertia behind x86 is that a vast volume of sales goes to Intel and AMD, which in turn funds massive engineering projects to improve x86. All things being equal, the same investment of engineer man-hours would bear more performance fruit on MIPS, SPARC, POWER, ARM, Alpha, or any of a number of other more modern architectures, but because of the huge volumes the x86 manufacturers deal in, they can afford to spend the extra effort improving the x86. Nowadays, x86 has gotten fast enough that there are basically only 2 competing architectures left for general-purpose computing (the embedded space is another matter, though): SPARC and POWER. SPARC, in the form of the Niagara, has a very high-throughput multithreaded processor design great for server work, but it's very lackluster for low-latency and floating-point workloads. POWER has some extremely nice designs powering next-generation consoles (Xenon and the even more impressive Cell), but the Cell in particular is so radically different from a standard processor design that it requires changes in coding practice to really take advantage of it. So, even though the Cell can mop the floor with a Core 2 or an Opteron when fully optimized code is used, it's easier (right now at least) to develop code that uses an x86 well than code which fully utilizes the Cell.
    • Parent nails it (Score:3, Insightful)

      by metamatic ( 202216 )
      OK, parent is the first post I've seen that explains the real reason why the x86 has become basically the only instruction set in mainstream computing.

      There's no technical advantage to x86. In fact, IBM picked it specifically because it sucked--they didn't want the PC to compete with their professional workstations. Grafted-on sets of extensions (SSE, MMX, etc.) have just made x86 more baroque over the years, and backward compatibility requirements have prevented cleaning away crap like segmented memory.

      Howev
  • If I remember right, weren't some Cyrix processors based on RISC architecture using a solid-state translation system a while ago? Whoever it was, I remember that already happening but the translation made it about half the speed of competing systems.

    I think with virtualization growing, a future architecture change is almost unavoidable. What we should focus on is developing virtualization to the point where we can switch to RISC or ARM and then run a virtualized system for any x86-based compatibility.
  • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Thursday January 04, 2007 @11:43AM (#17458558) Homepage
    One perspective on the question:

    Non-x86 architectures are certainly not inherently better clock for clock. That's a matter of specific chip designs more than anything else. The P4 was a fairly fast chip, but miserable clock for clock against a G4. An Athlon, however, was much closer to a G4. (Remember kids, not all code takes advantage of SIMD like AltiVec!) And the G4 wasn't very easy to bring to super high clock rates. The whole argument of architectural elegance no longer applies.

    The RISC Revolution started at a time when decoding an ugly architecture like VAX or x86 would require a significant portion of the available chip area. The legacy modes of x86 significantly held back performance because the 8086 and 80286 compatibility areas took up space that could have been used for cache or floating point hardware, or whatever. Then, transistor budgets grew. People stopped manually placing individual transistors, and then they stopped manually fiddling with individual gates for the most part. Chips grew in transistor count to the point where basically, nobody knew what to do with all the extra space. When that happened, x86 instruction decoding became a tiny area of the chip. Removing legacy cruft from x86 really wouldn't have been a significant design win after about P6/K7.

    Instead of being a design win, the fixed instruction length of the RISC architectures no longer meant improved performance through simple decoding. It meant that even simple instructions took as much space as average instructions. Really complex instructions weren't allowed, so they had to be implemented as multiple instructions. Something that was one byte on x86 was always exactly 4 bytes on MIPS. Something that was 12 bytes on x86 might be done as four instructions on MIPS, and thus take 16 bytes. So, effective instruction cache sizes and effective instruction fetch bandwidth grew on x86 compared to purer RISC architectures.
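
    A rough back-of-the-envelope illustration of that density argument, with typical (not exact) encoding sizes in the comments:

      /* Rough code-density comparison; byte counts in the comments are
       * typical encodings, shown for illustration only. */
      void increment_in_memory(int *counter)
      {
          /* x86, variable-length encoding (one load-modify-store instruction):
           *     inc dword ptr [eax]       ; 2 bytes
           * MIPS-style fixed 4-byte encoding (separate load/modify/store):
           *     lw    t0, 0(a0)           ; 4 bytes
           *     addiu t0, t0, 1           ; 4 bytes
           *     sw    t0, 0(a0)           ; 4 bytes
           * Same work, roughly 2 bytes versus 12: denser code means more of
           * a hot loop fits in a given instruction cache and fetch budget. */
          (*counter)++;
      }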

    At the same time, the gap between compute performance and memory bandwidth on all architectures was widening. Instruction fetch bandwidth was irrelevant in the time of the PC XT, because RAM fetches could actually be done in something like a single cycle, less than it takes to get to SRAM on-chip caches today. But as time went on, memory accesses became more and more costly. So, if a MIPS machine was in a super tight loop that ran in L1 cache, it might be okay. But if it was just going balls to the wall through sequential instructions, or a loop that was much larger than cache, then it didn't matter how fast it could compute the instructions if it couldn't fetch them quickly enough to keep the processor fed. But x86's absurdly ugly instruction encoding acted like a sort of compression, meaning that a loop was more likely to fit in a particularly sized cache, and that better use of instruction fetch bandwidth was made.

    Also, people had software that ran on x86, so they bought 9000 bazillion chips to run it all. The money spent on those 9000 bazillion chips got invested in building better chips. If somebody had the sort of financial resources that Intel has to build a better chip, and they shipped it in that sort of volume, we might well see an extremely competitive desktop SPARC or ARM chip.
    • Re: (Score:3, Interesting)

      by dido ( 9125 )

      This reminds me of an old Dr. Dobb's Journal article that I read more than a decade ago entitled "Personal Supercomputing" (I believe it was back in 1992 or thereabouts) where the author found a good use for a 486+i860 (remember that chip?) combo that involved making the i860 a computation engine and the 486 sort of like an I/O processor, and IIRC it was called PORT. The compiler set for this system didn't generate native i860 or x86 code, but instead compiled C or FORTRAN programs into a type of fixed-len

    • Limitations of x86 (Score:4, Interesting)

      by alexhmit01 ( 104757 ) on Thursday January 04, 2007 @02:34PM (#17461798)
      You're right about the advantages of the CISC ISA vs. a RISC ISA, but I wanted to throw a few more points out.

      Originally, going to RAM was cheap (in terms of cycles), going to disk was slow, so we loaded what we could in RAM and processed it. However, RAM was VERY expensive, until very recently, having "enough RAM" was rarely affordable. NT took so long to mass market (Win2K sort of, XP did it, almost 10 years), because when NT 3.1 shipped, it wanted 16 MB of RAM on the x86, and 32MB of RAM on the other systems, but going above 8 MB required specialized RAM because the RAM Chips (you plugged chips into sockets then, not chips on cards with standardized interfaces) were mass produced for 1 MB, 4 MB, and 8 MB, but going to 16 MB required using VERY expensive (relative to normal RAM) chips. So upgrading from 4 to 8 was normally doable (usually, they used the same chips, and you filled half the slots for 4MB, I think, it's been a long time since I had a 486 computer), but going to 16MB would often cost $2000 for the new RAM, when computers sold for $2000.

      In the days of expensive RAM, the tighter ISA of x86 (more instructions per megabyte) gave them a major advantage in the real world. Sure, the ISA was crap, and the chips were crap, but when the most expensive component was RAM, the x86 used on average half the memory as the RISC competitors, which gave Intel a HUGE advantage in the cost-conscious desktop fight. It wasn't until the last 5 years, when Microsoft stagnated in their quest to use up more and more memory with each release (largely by failing to release OS updates), that the continual growth of RAM outpaced the computers. WinXP will run in 256MB, and run decently on 512MB, but 1GB or 2GB of RAM is reasonable for a decent system, and 512MB is not reasonable for a budget system. However, when we were struggling cost wise with 4MB and 8MB, the larger size of RISC programs was a problem.

      Up until this point, it wasn't clear that x86 was the winner; it was the release of Windows 95 on the Pentium chips when Microsoft "won" the market. Up until then, Windows was niche, OS/2 looked promising, Apple was a contender, and everyone just ran DOS/WP5.1 and NetWare 2.0. Up until 1995, it was anybody's game.

      The biggest hit to the x86 was the lack of registers. In the 8088 and 8086 days, going to RAM wasn't too expensive, and the chip couldn't do much in the meantime, so we didn't care so much that it was the most register-starved system. However, as chips got faster, going to RAM got expensive, and we didn't have registers, which is why the x86 GOT SMOKED in tightly run loops, because it couldn't keep enough data in there. The original cache banks (these were high tech, the chips were on a little card you plugged into your motherboard, you could even upgrade them for more) were to run faster than RAM, and created a third tier. Originally, this seemed like a hack because of the lack of registers.

      However, our chips have massively increased in speed in the past 10 years (we were running at ~75-200 MHz in 1996) which meant that flooding the processor with data is the problem. The clock cycles are VERY short (we run ~ 2 Ghz, I remember the excitement at AMD making a 12 MHz 286, the 8088 started at 1 or 2 MHz), which means that carrying the signal over the wires is now an issue, so our motherboards are tighter, we keep cache ON THE CHIP, etc.

      One reason that the x86 always outperformed was that once going to RAM became expensive, the smaller instruction size (and at the time, having 16-bit integers instead of 32 or 64) meant that if Intel provided 128 KB of cache, then the other players needed 256 KB or even 512 KB to have the same caching advantage. This means, all things being equal, RISC was the better architecture, but IN REALITY, x86 could do the same amount of work with half the resources or less. This allowed the computers to price cheaper, AND it meant that Intel could make HUGE profits.

      For example, if RISC Vendor A sold a solution for $2500, assuming $2000 in parts
  • What are you on? (Score:5, Insightful)

    by Chibi Merrow ( 226057 ) <mrmerrow&monkeyinfinity,net> on Thursday January 04, 2007 @11:46AM (#17458612) Homepage Journal
    With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work.

    Do you really believe that? If so, how does one get to this fantasy land you live in? This may be true sometime in the future, but that day is not today.

    I happen to own a PowerBook G4. I like it very much. I love nice little purpose-designed chips based on POWER like the Gekko in the GameCube and its successor in the Wii. But until we're at a point where you can effortlessly and flawlessly run everything from fifteen-year-old accounting packages to the latest thing to come off the shelf WITHOUT some PHB type knowing any funny business is going on behind the scenes, x86 is here to stay.

    Plus, RISC has its own problems. It's not the second coming. It's nice, but not for everyone.
    • by jimicus ( 737525 )
      IBM managed it when they migrated the architecture of their AS/400 (now iSeries) systems from a custom CISC chip to a POWER-based platform.

      But to return to the original topic, I'm given to understand that way back in the mid '90s (back when there were a lot of architectures), Intel announced that they were working on their own "next generation" chip which would replace x86 and ultimately hammer everything else into the ground - the Itanium. Back then the x86 wasn't much, and it was easily beaten by Sp
  • We don't (Score:4, Insightful)

    by BenjyD ( 316700 ) on Thursday January 04, 2007 @11:46AM (#17458616)
    We don't really use x86 CPUs, they're all RISC with an x86->RISC decode stage at the front of the pipeline. As far as I understand it, we use the x86 ISA because there has always been too much x86 specific code around for people to switch easily, which gave Intel huge amounts of money to spend on research and fabs.
  • If you google 'Intel Business Practices' you will find a number of probes into Intel, its monopoly status, using that monopoly status to keep competitors down, dumping chips to depress prices for competitors, locking AMD out by restrictive licensing, etc.

    AMD may be a victim, IBM and the PPC chip may also be a victim in all this. Also, the 'Itanic' may be a huge loser of a chip but it served its purpose: it killed off the Alpha (a damn good chip) and HP's PA-RISC, and created FUD about the viability of other RISC chips.
  • Because all our favorite programs run on x86, and not on whatever other alternative we would choose otherwise. And then we make more programs for x86, ensuring we will continue to use it.
  • Not everyone uses x86 (i386). Some of us use UltraSPARC instead.

    Perhaps more would if Sun supported FreeBSD better.

  • by MartinG ( 52587 ) on Thursday January 04, 2007 @11:54AM (#17458714) Homepage Journal
    tIs' ebacsue ilttel neidna si ebttre.
  • The reason x86 is better is because its more popular and therefore Intel and AMD have more money to pour into R&D than anyone else. Or I would probably even say everyone else combined. So they make better chips, which sell more and the cycle continues. The other reason is that most PC software currently used is non-Free and runs on Windows and windows runs on x86. Obviously here we are only talking about PCs. Embedded is a completely different story where x86 is marginal, but the performance requirement
  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Thursday January 04, 2007 @11:56AM (#17458760) Journal

    The x86 ISA hasn't been bound to Intel for some time now. There are currently at least three manufacturers making processors that implement the ISA, and of course there is a vast number of companies making software that runs on that ISA. Not only that, Intel isn't even the source of all of the changes/enhancements in their own ISA -- see AMD64.

    With all of that momentum, it's hard to see how any other ISA could make as much practical sense.

    And it's not like the ISA actually constrains the processor design much, either. NONE of the current x86 implementations actually execute the x86 instructions directly. x86 is basically a portable bytecode which gets translated by the processor into the RISC-like instruction set that *really* gets executed. You can almost think of x86 as a macro language.

    For very small processors, perhaps the additional overhead of translating the x86 instructions into whatever internal microcode will actually be executed isn't acceptable. But in the desktop and even laptop space, modern CPUs pack so many millions of transistors that the cost of the additional translation is trivial, at least in terms of silicon real estate.

    From the perspective of performance, that same overhead is a long term advantage because it allows generations of processors from different vendors to decouple the internal architecture from the external instruction set. Since it's not feasible, at least in the closed source world, for every processor generation from every vendor to use a different ISA, coupling the ISA to the internal architecture would constrain the performance improvements that CPU designers could make. Taking a 1% performance hit from the translation (and it's probably not that large) enables chipmakers to stay close to the performance improvement curve suggested by Moore's law[*], without requiring software vendors to support a half dozen ISAs.

    In short, x86 may not be the best ISA ever designed from a theoretical standpoint, but it does the job and it provides a well-known standard around which both the software and hardware worlds can build and compete.

    It's not going anywhere anytime soon.


    [*] Yes, I know Moore's law is about transistor counts, not performance.

  • CISC (x86) vs RISC (Score:3, Informative)

    by Spazmania ( 174582 ) on Thursday January 04, 2007 @12:04PM (#17458868) Homepage
    These days there is a limited amount of difference under the hood between a CISC processor like the x86 series and a RISC processor. They're mostly RISC under the hood, but a CPU like the x86 has a layer of microcode embedded in the processor which implements the complex instructions.

    http://www.heyrick.co.uk/assembler/riscvcisc.html [heyrick.co.uk]
  • But we did (Score:5, Insightful)

    by teflaime ( 738532 ) on Thursday January 04, 2007 @12:05PM (#17458882)
    vote with our wallets. The x86 architecture was cheaper than PPC, so that's what consumers chose. It is still consistently cheaper than other architectures. That's ultimately why Apple is moving to it too; they weren't selling enough product (yes, not being able to put their best chip in their laptops hurt, but most people were saying "why am I paying $1000 more for a Mac when I can get almost everything I want from a PC?").
  • Because IBM chose x86 instead of RISC.
  • We use x86 CPUs because they're cheap, versatile, and run all of our old software. All of the little things the OP complains about might matter to a seriously nerdy programmer, but to 99% of the people using computers, those words are just gibberish. Something else to keep in mind about non-x86 CPUs is that yes, they may be faster than x86 at task X or cheaper for task Z, but that's because most of them aren't really designed for general use; if they were used by everybody, the architectures would change to
  • Because back in the days when Hardware dictated Software, Software generally sucked for the end-user and was wholly unapproachable by anyone without geek cred. Think COBOL. Hardware has never dictated Software success in the marketplace, but the reverse is true.

    DOS and Windows MADE the market for the X-86 machines, just as Apple made the market for the Motorola 68000 series. Companies will purchase whatever hardware necessary to run their preferred apps. Almost never will you see an organization purchase part

  • by briancnorton ( 586947 ) on Thursday January 04, 2007 @12:26PM (#17459216) Homepage
    Why don't we all drive Formula 1 Cars? Why not hybrids? Why not motorcycles or electric scooters? The reason is that there is an infrastructure built around supporting a platform that is reliable, robust, and surprisingly extensible. (MMX, SSE, 64bit, etc) Intel asked the same question and came up with the Itanium. It is fast, efficient, and well understood. This is the same big reason that people don't use Linux, it's hard to switch for minimal tangible benefits. (not a flame, just an observation)
  • The reason that we use Intel now for most general-purpose computing is simply that it is cost-effective. The Intel-based architecture may not be the best in the world, but it's good enough. There are 20 years of development behind the processor, so it's well known for what it can do.

    There is no reason to go out and develop a proprietary processor when an Intel-based chip will do off the shelf. The processor wars are over and, sadly or not, Intel won. They have a cheap processor that works for 99.9% of all computing applications.

    Th

  • by Erich ( 151 ) on Thursday January 04, 2007 @12:35PM (#17459396) Homepage Journal
    It's because if you're willing to throw some effort at the problem, the instruction set doesn't matter.

    Intel chips haven't really executed x86 since the original Pentium. They translate the instructions into a more convenient form and execute those. They do analysis in hardware and find all sorts of opportunities to make code run faster.

    As it turns out, you have to do most of the same analysis to get good performance on a RISC too. So you have a bit of extra decoding work, which you might not have to do on a MIPS or something, but you gain some flexibility in what lies underneath. And if you're producing 10x the amount of processors as Freescale, you're going to be able to make up for any marginal increase in cost the extra complexity costs you.

    Also, don't buy into the hype. You can't buy that much from a good ISA on high-end processors. Look at the SPEC numbers for the Core 2 duo vs. anyone else if you don't believe me. IA64 was supposed to be the greatest thing ever because compilers could do all the work at compile time. There's almost every instruction set hook imaginable in IA64. And look how that architecture has turned out.

    We use x86 because instruction translation is pretty easy and very effective... the same reason why Java algorithms perform pretty well, Transmeta had a halfway decent chip, Alpha could execute x86 code pretty well, and Apple can run PPC apps pretty well on x86. It's not bad enough to be completely broken, and we can engineer our way out of the problems in the architecture.

    Of course, if you're counting transistors and joules, some of this breaks down... that's why ARM and DSPs have been effective at the low end.

  • by Ninja Programmer ( 145252 ) on Thursday January 04, 2007 @01:22PM (#17460348) Homepage

    bluefoxlucid asks: "With Apple having now switched to x86 CPUs,
    I've been wondering for a while why we use the x86 architecture
    at all. ..."

    Because it's a better CPU.

    "... The Power architecture was known for its better performance
    per clock; ..."

    Utter nonsense. This is a complete lie. Benchmarks do not bear this out. And this is beside the fact that this qualifier reveals the PowerPC's primary weakness -- it has a far lower clock rate.

    "... and still other RISC architectures such as the various ARM
    models provide very high performance per clock as well as
    reduced power usage, opening some potential for low-power
    laptops. ... "

    ARM is currently made by Intel. It does have a high ops-per-clock performance, but it does so at a severe complexity penalty which dramatically limits clock rate. You can't get "free extra shift" or "free conditional computation" without some compromise to the architecture.

    " ... Compilers can also deal with optimization in RISC
    architectures more easily, since the instruction set is
    smaller and the possible scheduling arrangements are thus
    reduced greatly. ... "

    Nice in theory. Intel's latest generation of compilers puts other compilers to shame. Remember that x86 processors do a lot of scheduling themselves, in hardware. While putting more scheduling pressure on the compiler may have seemed to make sense back in the 90s, no compiler can solve those problems completely -- especially in dynamic situations such as cache and branch misses, which the compiler often can neither detect nor resolve. By letting the CPU handle the problem dynamically, as it occurs, it can be dealt with nearly optimally all the time.

    " ... With Just-in-Time compilation, legacy x86 programs
    could be painlessly run on ARM/PPC by translating them
    dynamically at run time, similar to how CIL and Java
    work. ... "

    Are you smoking pot? The state of the art in x86 emulation was represented by the Itanium and Transmeta CPUs, and both failed precisely because of the pathetic performance of their x86 emulation. The x86 has complicated addressing modes, flags registers, structured sub-registers, unaligned memory access, etc., none of which translate easily to "clean RISC" architectures. (They do, however, translate to straightforward hardware implementations, as AMD and Intel have proven.)
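
    To illustrate just the flags problem (a minimal sketch, nothing like a real binary translator): a JIT targeting a flag-less RISC has to synthesize the EFLAGS side effects around every translated arithmetic instruction, roughly like this:

        /* Sketch: emulating one 32-bit x86 "add" together with the
         * EFLAGS bits it would set (parity and adjust flags omitted). */
        #include <stdint.h>
        #include <stdio.h>

        enum { FLAG_CF = 1 << 0, FLAG_ZF = 1 << 6,
               FLAG_SF = 1 << 7, FLAG_OF = 1 << 11 };

        static uint32_t emulated_add(uint32_t a, uint32_t b, uint32_t *eflags)
        {
            uint32_t r = a + b;
            uint32_t f = 0;

            if (r < a)                             f |= FLAG_CF;  /* unsigned carry  */
            if (r == 0)                            f |= FLAG_ZF;  /* zero            */
            if (r & 0x80000000u)                   f |= FLAG_SF;  /* sign            */
            if (~(a ^ b) & (a ^ r) & 0x80000000u)  f |= FLAG_OF;  /* signed overflow */

            *eflags = f;
            return r;
        }

        int main(void)
        {
            uint32_t flags;
            uint32_t r = emulated_add(0xFFFFFFFFu, 1, &flags);
            printf("result=%u flags=0x%x\n", r, flags);
            return 0;
        }

    A hardware x86 gets all of that for free in one instruction; an emulator either pays for it on every operation or has to track flags lazily, which is a large part of where the performance goes.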

    " ... So really, what do you all think about our choice of
    primary CPU architecture? ... "

    It is the correct and logical choice. If RISC were really the greatest thing since sliced bread, then PowerPC should be running circles around x86. But the truth is that it can't even keep up.

    "... Are x86 and x86_64 a good choice; or should we have shot
    for PPC64 or a 64-bit ARM solution?"

    Why would you want to use a slower and less functional CPU? Yes, I said *LESS FUNCTIONAL*. Look into how the PowerPC performs atomic lock operations. It's pathetic. Atomic locking is just built into the basic x86 architecture, but the PowerPC requires significant external hardware support (via special modes in the memory controller) to do the same thing. x86 simply supports "locked memory addresses", which maps nicely onto the caching modes.

    PowerPC is missing both the right instructions and the relevant memory semantics to support it directly. PowerPC uses separate lock instructions for cache lines, which means that each thread can lock out other threads arbitrarily; if you crash or stall with a held lock, all dependent threads deadlock. It also means you can't put multiple locks in a single cache line and expect them to operate independently.
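
    As a concrete contrast (a minimal sketch using GCC-style inline assembly; real code should normally just use C11 atomics or compiler builtins), here is roughly how an atomic increment maps onto the two ISAs:

        /* Sketch: atomic fetch-and-add on x86 vs. PowerPC.
         * The asm fragments are illustrative; portable code should
         * use <stdatomic.h> or the __atomic_* builtins instead. */
        #include <stdio.h>

        #if defined(__x86_64__) || defined(__i386__)
        /* x86: one locked read-modify-write instruction. */
        static int fetch_add_sketch(int *p, int v)
        {
            __asm__ __volatile__("lock; xaddl %0, %1"
                                 : "+r"(v), "+m"(*p) : : "memory", "cc");
            return v;  /* xadd leaves the old value in the register */
        }
        #elif defined(__powerpc__) || defined(__powerpc64__)
        /* PowerPC: load-reserved / store-conditional loop that retries. */
        static int fetch_add_sketch(int *p, int v)
        {
            int old, tmp;
            __asm__ __volatile__(
                "1: lwarx   %0,0,%3\n"   /* load word and reserve          */
                "   add     %1,%0,%4\n"  /* compute new value              */
                "   stwcx.  %1,0,%3\n"   /* store only if reservation held */
                "   bne-    1b\n"        /* lost the reservation: retry    */
                : "=&r"(old), "=&r"(tmp), "+m"(*p)
                : "r"(p), "r"(v)
                : "cc", "memory");
            return old;
        }
        #else
        /* Portable fallback so the sketch still compiles elsewhere. */
        static int fetch_add_sketch(int *p, int v)
        {
            return __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);
        }
        #endif

        int main(void)
        {
            int counter = 0;
            fetch_add_sketch(&counter, 1);
            printf("counter = %d\n", counter);
            return 0;
        }

    The shape of the code does differ: x86 packs the whole read-modify-write into one locked instruction, while PowerPC exposes a retry loop to software.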

  • As Apple proved (Score:3, Interesting)

    by polyp2000 ( 444682 ) on Thursday January 04, 2007 @01:30PM (#17460512) Homepage Journal
    by keeping their code open (at least internally) and cross-platform, it really doesn't matter what architecture it is running on. The switch to Intel was comparatively quick.

    The reasons for Apple's switch came down to cost and performance -- and undoubtedly because IBM failed to deliver.

    Of course, if something else comes up, I imagine we would see another change.

    N.
  • by CatOne ( 655161 ) on Thursday January 04, 2007 @01:53PM (#17461050)
    For all the academic arguments about which architecture is better (CISC versus RISC, pipelined versus superscalar, etc.), what's difficult to deny is that Intel and AMD (and the x86 architecture in general) have continued to meet or exceed Moore's law. Despite Apple's insistence that PowerPC was a better and faster path forward, the fact is that Intel outran them. This doesn't necessarily mean x86 is a better architecture, but rather that the engineering resources behind it (perhaps spurred by AMD/Intel competition) managed to do more with what they had. Maybe x86 is performing at 95% of its potential, and PPC at 50%. Who knows. Who actually cares, except for an academic discussion.

    SPARC may well be the same way -- the main issue there is that Sun doesn't have the resources to build CPU architectures, computer architectures, and OS and application software, and stay competitive in all of those markets.

    There are other instances of something similar. Take the Porsche 911. A rear-engined setup (with the majority of the weight of the car *behind* the rear axle) is inherently inferior to a mid-engine design. The car should handle like crap. And early versions, for the most part, did (oversteer was a very real and common issue). Take 40 years of incremental advances in body and suspension design, and the 911 is one of the best handling cars there is. The design is not the best -- it never WILL be the best -- but it is a very, very competent performer. Few things are better.
  • by Sebastopol ( 189276 ) on Thursday January 04, 2007 @01:55PM (#17461082) Homepage

    Compilers can also deal with optimization in RISC architectures more easily

    This is a dead giveaway that the author is just stabbing in the dark. Scheduling is no more complex with CISC than with RISC. In fact, some code can be optimized even better by specialized CISC instructions that cover frequently occurring patterns (see the sketch below). This is an ancient debate that ended in a tie.
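
    For instance (a compiler-output sketch; exact code generation depends on compiler, flags, and calling convention), x86's scaled-index addressing folds a shift-and-add computation into a single instruction:

        /* Sketch: a pattern where one CISC addressing mode does the work
         * of several simpler instructions. The assembly in the comments
         * is typical of gcc -O2 output but not guaranteed. */
        int scaled_index(int a, int b)
        {
            return a * 4 + b;
            /* x86-64 (a in %edi, b in %esi), one instruction:
             *     leal (%rsi,%rdi,4), %eax
             * A classic RISC such as PowerPC would typically use two:
             *     slwi r3, r3, 2
             *     add  r3, r3, r4
             */
        }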

    With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work.

    Yeah, and run 20x slower.

    what do you all think about our choice of primary CPU architecture

    The R&D was sunk into x86 by the two most able teams, Intel and AMD. Companies are driven by profit, and higher profit meant honing x86 and leveraging an installed base. That's the only reason we live in an x86 world.

    But claiming a RISC world would be better is an argument that was put to rest decades ago: we'd be having the same argument in reverse if RISC were the dominant architecture.

    Further, there are pedants out there who will argue all x86 is really RISC under the hood, but that's a bit misleading.

  • Who cares? (Score:3, Insightful)

    by Chris Snook ( 872473 ) on Thursday January 04, 2007 @02:09PM (#17461366)
    Modern CPUs have sufficiently complex decoder units at the front of sufficiently deep pipelines that the programmer-visible ISA has very little impact on the performance-critical aspects of the architecture. You need look no further than the vast architectural difference between AMD and Intel, or even Intel and Intel (P4 vs. Core) for the proof.

    So, we'll keep using x86 in this market because it's what we've been using. We need *some* ISA, and now that the ISA itself is largely decoupled from the underlying implementation, there's very little incentive to change it.
  • by localman ( 111171 ) on Thursday January 04, 2007 @02:46PM (#17462032) Homepage
    Isn't it because all of the alleged advantages of other architectures just weren't compelling enough? And because unless you've got a rare sense of aesthetics for processor architecture design, nobody really cares what kind of chip is in the machine?

    To elaborate: I used to have an Amiga with an '040, and I remember the arguments back then about how it was a "better" processor. But when the Commodore died I got a Cyrix Pentium-class PC, and it seemed faster rendering in Lightwave, which was all I really cared about at the time. Years later I switched to the Mac because I liked OS X. I ran it on a G4, and everyone said the G4 was "better" than the x86 of the day. I'm on a MacBook Pro now, with a Core Duo, and it seems faster -- at least as fast as, if not faster than, my wife's G5. Battery life is about as good as my G4's was. Maybe it gets a little hotter, but I don't really notice.

    So in the end, regardless of any aesthetic concerns about the architecture or theories about what is "right", "clean" or "better", x86 seems to have been good enough all along the way to make switching a waste of time. So if it's not pragmatically better, why all the sorrow?

    Cheers.
