Why Do We Use x86 CPUs? 552

bluefoxlucid asks: "With Apple having now switched to x86 CPUs, I've been wondering for a while why we use the x86 architecture at all. The Power architecture was known for its better performance per clock, and still other RISC architectures such as the various ARM models provide very high performance per clock as well as reduced power usage, opening some potential for low-power laptops. Compilers can also deal with optimization in RISC architectures more easily, since the instruction set is smaller and the possible scheduling arrangements are thus reduced greatly. With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work. So really, what do you all think about our choice of primary CPU architecture? Are x86 and x86_64 a good choice, or should we have shot for PPC64 or a 64-bit ARM solution?" The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market?
  • momentum (Score:5, Informative)

    by nuggz ( 69912 ) on Thursday January 04, 2007 @11:17AM (#17458226) Homepage
    Change is expensive.
    So don't change unless there is a compelling reason.

    Hard to optimize? You only have to optimize the compiler once; spread over the millions of devices, this cost is small.

    Runtime interpreters/compilers? You lose the speed advantage.

    Volume and competition make x86-series products cheap.
  • by plambert ( 16507 ) * on Thursday January 04, 2007 @11:19AM (#17458250) Homepage
    The reason given, which people seem to keep forgetting, was pretty simple and believable:

    Performance per watt.

    The PPC architecture was not improving _at all_ in performance per watt. Apple's market was growing fastest in the portable space, but it was becoming impossible to keep temperatures and power consumption down with PPC processors.

    And IBM's future plans for the product line were focusing on the POWER series (for high-end servers) and the custom console processors (for the Xbox 360 and the like), and not on the PowerPCs themselves.

    While I've never had any particular love for the x86 instruction sets, I, for one, enjoy the performance of my Macbook Pro Core 2 Duo, and the fact that it doesn't burn my lap off, like a PowerPC G5-based laptop would.
  • Good question... (Score:5, Informative)

    by kotj.mf ( 645325 ) on Thursday January 04, 2007 @11:19AM (#17458254)
    I just got done reading about the PWRficient [realworldtech.com] (via Ars):
    • Two 64-bit, superscalar, out-of-order PowerPC processor cores with Altivec/VMX
    • Two DDR2 memory controllers (one per core!)
    • 2MB shared L2 cache
    • I/O unit that has support for: eight PCIe controllers, two 10 Gigabit Ethernet controllers, four Gigabit Ethernet controllers
    • 65nm process
    • 5-13 watts typical @ 2GHz, depending on the application

    Now I have to wait for the boner this gave me to go away before I can get up and walk around the office.

    Maybe Apple could have put off the Switch after all...

  • Re:momentum (Score:2, Informative)

    by jZnat ( 793348 ) * on Thursday January 04, 2007 @11:20AM (#17458258) Homepage Journal
    GCC is already architected such that it's trivial to optimise the compiled code for any architecture, new or old. Parent's idea is pretty much wrong.
  • by Anonymous Coward on Thursday January 04, 2007 @11:25AM (#17458330)
    Gotta hand it to Jobs' ability to spread bullshit, but no one honestly believes the damage-control story that Apple ever wanted to land in x86 land.

    After years of chip-order games from an all-around pain-in-the-ass company to work with, IBM, having recently locked up all three major console manufacturers, decided Apple was no longer worth the measly four percent of their chip business for the major hassle it was to deal with them. So IBM decided to dump Apple as a customer and not make a mobile version of the G5.

    Jobs in a panic ran to PA Semi to bail Apple out and was turned away.

    And AMD didn't have the capacity to sell to Apple.

    So Apple was left with only Intel - as their 'first choice'

    Bravo Jobs!

  • Re:momentum (Score:2, Informative)

    by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Thursday January 04, 2007 @11:34AM (#17458448) Homepage Journal
    GCC 4.x is designed to enable optimizations that will work across architectures, by providing an intermediate code layer for compiler hackers to work with.

    There are still optimizations possible at the assembly level for each architecture that depend on the quirks and features of those architectures and even their specific implementations.

    The intermediate level optimizations are intended to reduce code duplication by allowing optimizations common across all architectures to be applied to a common intermediate architecture.
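    To make that concrete, here is a toy C example (my own sketch, not from the parent post): the hoisting of the loop-invariant expression below is done by GCC's target-independent middle end on the tree/GIMPLE IR, so x86, PowerPC and ARM back ends all benefit from the same pass; only instruction selection and scheduling remain per-target.

        /* Loop-invariant motion happens once, at the IR level, for every target. */
        int scale_all(int *a, int n, int x)
        {
            int i;
            for (i = 0; i < n; i++)
                a[i] = a[i] * (x + 1);   /* "(x + 1)" is computed once, outside the loop,
                                            by the target-independent optimizer */
            return n;
        }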

  • Re:momentum (Score:3, Informative)

    by UtucXul ( 658400 ) on Thursday January 04, 2007 @11:40AM (#17458534) Homepage
    IFORT/ICC on x86 and up
    Funny thing about IFORT is that while in simple tests it always outperforms g77 (I've since switched to gfortran, but haven't tested it too well yet), for complex things (a few thousand lines of FORTRAN 77 using mpi), it is very unpredictable. I have lots of cases where g77 outperforms ifort in real world cases (as real world as astronomy gets anyway) and cases where ifort wins. It just seems to me that either ifort is not the best compiler, or optimizing for x86 is funnier business than it seems (or there is some other variable I'm missing which is always possible).
  • by forkazoo ( 138186 ) <wrosecrans@@@gmail...com> on Thursday January 04, 2007 @11:43AM (#17458558) Homepage
    One perspective on the question:

    Non-x86 architectures are certainly not inherently better clock for clock. That's a matter of specific chip designs more than anything else. The P4 was a fairly fast chip, but miserable clock for clock against a G4. An Athlon, however, was much closer to a G4. (Remember kids, not all code takes advantage of SIMD like AltiVec!) And the G4 wasn't very easy to bring to super-high clock rates. The whole argument of architectural elegance no longer applies.

    The RISC Revolution started at a time when decoding an ugly architecture like VAX or x86 would require a significant portion of the available chip area. The legacy modes of x86 significantly held back performance because the 8086 and 80286 compatibility areas took up space that could have been used for cache or floating point hardware, or whatever. Then, transistor budgets grew. People stopped manually placing individual transistors, and then they stopped manually fiddling with individual gates for the most part. Chips grew in transistor count to the point where basically, nobody knew what to do with all the extra space. When that happened, x86 instruction decoding became a tiny area of the chip. Removing legacy cruft from x86 really wouldn't have been a significant design win after about P6/K7.

    Instead of being a design win, the fixed instruction length of the RISC architectures no longer meant improved performance through simple decoding. It meant that even simple instructions took as much space as average instructions. Really complex instructions weren't allowed, so they had to be implemented as multiple instructions. Something that was one byte on x86 was always exactly 4 bytes on MIPS. Something that was 12 bytes on x86 might be done as four instructions on MIPS, and thus take 16 bytes. So effective instruction cache size and effective instruction fetch bandwidth were larger on x86 than on purer RISC architectures.
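    A rough, illustrative sketch of that density argument (my own numbers for common encodings, not the poster's; every MIPS instruction is a fixed 4 bytes):

        /* Typical encoded sizes for a few common operations:
         *
         *   operation                 32-bit x86                       MIPS
         *   increment a register      "inc eax"         = 1 byte       "addiu $t0,$t0,1"      = 4 bytes
         *   store register to [reg]   "mov [ebx], eax"  = 2 bytes      "sw $t0, 0($t1)"       = 4 bytes
         *   add a 32-bit constant     "add eax, imm32"  = 5 bytes      "lui" + "ori" + "addu" = 12 bytes
         *
         * Over a whole loop body the x86 version is often noticeably smaller,
         * so more of it fits in a given I-cache and instruction fetch budget. */
        unsigned int add_big_constant(unsigned int x)
        {
            return x + 0x12345678u;   /* one 5-byte instruction on x86; three 4-byte ones on MIPS */
        }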

    At the same time, the gap between compute performance and memory bandwidth on all architectures was widening. Instruction fetch bandwidth was irrelevant in the time of the PC XT, because RAM fetches could actually be done in something like a single cycle, less than it takes to get to SRAM on-chip caches today. But as time went on, memory accesses became more and more costly. So, if a MIPS machine was in a super-tight loop that ran in L1 cache, it might be okay. But if it was just going balls to the wall through sequential instructions, or a loop that was much larger than cache, then it didn't matter how fast it could compute the instructions if it couldn't fetch them quickly enough to keep the processor fed. x86's absurdly ugly instruction encoding, though, acted like a sort of compression, meaning that a loop was more likely to fit in a given cache size, and that better use of instruction fetch bandwidth was made.

    Also, people had software that ran on x86, so they bought 9000 bazillion chips to run it all. The money spent on those 9000 bazillion chips got invested in building better chips. If somebody had the sort of financial resources that Intel had to build a better chip, and they shipped it in that sort of volume, we might well see an extremely competitive desktop SPARC or ARM chip.
  • by Anonymous Coward on Thursday January 04, 2007 @11:56AM (#17458768)
    Apple switched to x86 because they wanted to; performance per watt (PPW) was a red herring. For years Apple said the PPC, whether it be Motorola's (now Freescale's) implementation or IBM's, was the better-performing machine, and that was the marketing line. Without a reason, people would have started to question the validity of Apple's prior statements. So was the G5 a laptop-ready chip? Heck no, it wasn't designed to be, but there were others available to Apple that they chose not to use, because they wanted to be on x86. When Apple released its PPW numbers it was total FUD: they compared a just-released Intel part against a 2+ year old IBM part. Now why would Apple want Intel?
    #1 Economies of scale. Intel offered a lower cost per chip, so Apple can increase profit while holding the price of the product the same.
    #2 Ability to upgrade more frequently. Intel releases newer, faster versions of processors much more rapidly than can be done with a custom processor.
    #3 It's the software. The Achilles' heel for Apple was always "look at how little software it runs" or the ever-popular "but I want to run games." Granted, there were ways to try and get some programs working, but it was tedious and an undertaking 95%+ of the population probably wouldn't attempt. Now, by being on Intel, they greatly simplify the porting of software.

    It's all about the software now. While certain people may drool over hardware (I'm one of them, I love hardware), it's the software that sells; it's the software that defines a box. Apple has essentially simplified the battlefield, taking away its weakness so it can concentrate more on MSFT: look, we run on the same hardware they do; look, we run the same programs they do; but wait, we do have that pesky MSFT OS.

    Anyway my 2 cents
  • by OzPeter ( 195038 ) on Thursday January 04, 2007 @12:01PM (#17458840)
    Obligatory wiki page [wikipedia.org]

    And while you seem to be holding out, I did see one website that suggested less than 7% of the world's population doesn't use the metric system... and the US is 80% of that 7%.
  • CISC (x86) vs RISC (Score:3, Informative)

    by Spazmania ( 174582 ) on Thursday January 04, 2007 @12:04PM (#17458868) Homepage
    These days there is a limited amount of difference under the hood between a CISC processor like the x86 series and a RISC processor. They're mostly RISC under the hood, but a CPU like the x86 has a layer of microcode embedded in the processor which implements the complex instructions.

    http://www.heyrick.co.uk/assembler/riscvcisc.html [heyrick.co.uk]
  • by Cassini2 ( 956052 ) on Thursday January 04, 2007 @12:22PM (#17459132)
    Intel's support infrastructure also includes some of the best semiconductor fabrication facilities in the business. Intel has consistently held a significant process advantage at its fabs (fabrication facilities) over the life of the x86 architecture. Essentially, no one else can deliver the volume and performance of chips that Intel can. Even AMD is struggling to compete against Intel (90 nm vs 65 nm).

    The process advantage means Intel can get a horrible architecture (x86) to perform acceptably at a decent price/performance point. RISC chips, while faster, require different software. People aren't going to change their software unless a good reason exists. Intel's process advantage means that it can sell good processors at a reasonable price. Given that, why switch? The x86 is even clobbering Intel's own Itanium (Itanic) architecture in terms of sales.

    Other hardware vendors are competitive in market segments that place very high values on particular system metrics. For instance, the ARM processor is very competitive for low power dissipations and 32-bit applications. The 8-bit embedded microcontrollers (PIC, 8051) are really cheap. RISC chips still dominate the high performance computing market.
  • Re:Why do we ... (Score:3, Informative)

    by terrymr ( 316118 ) <terrymr@@@gmail...com> on Thursday January 04, 2007 @12:43PM (#17459516)
    "The railroad line from the factory had to run through a tunnel in the mountains. The SRBs had to fit through that tunnel. The tunnel is slightly wider than the railroad track, and the railroad track is about as wide as two horses' behinds. So, the major design feature of what is arguably the world's most advanced transportation system was determined over two thousand years ago by the width of a Horse's Ass!"

    from snopes.com [snopes.com]
  • Re:Easy (Score:4, Informative)

    by budcub ( 92165 ) on Thursday January 04, 2007 @12:46PM (#17459584) Homepage
    Release Candidate 1 of Windows 2000 still supported Alpha. It was somewhere around RC2 or RC3 (if there ever was an RC3, I don't remember) that Microsoft went to x86 only. Prior to that, with NT4 they supported MIPS, PPC, Alpha, and x86, up until the early service packs. After Service Pack 3 (for NT4, that is) they were only supporting x86 and Alpha. I remember because I worked in a shop that used Compaq/DEC Alpha machines running MS Exchange on NT4 for their mail system. Don't know what their long-term plans were since I got laid off from that place.
  • by BravoFourEcho ( 581460 ) on Thursday January 04, 2007 @01:00PM (#17459848)
    Ask the average Joe in the US how far a meter is, and you'll likely get a blank stare and the response "metric is too hard." Yes, metric is taught in schools. Yes, some states post road signs using kilometers alongside the mile signs. But the only non-engineering, non-scientist segment of the population to use metric is the military, because the land maps are in metric. Everything else is still pounds for weight and gallons for volume.
  • by sczimme ( 603413 ) on Thursday January 04, 2007 @01:05PM (#17459960)

    Even in the 80s/90s it would have been completely possible for Microsoft to support a wide range of processors ( if their OS was designed correctly )

    Microsoft did do that: there was a time when NT ran on X86, DEC Alpha, and PowerPC machines. (Okay, that's not a huge range, but the point stands.)

    X86s became cheaper and cheaper, and continuing development of NT on !X86 became financially infeasible due to the rapidly shrinking market share for non-PC platforms.

  • Re:Why do we ... (Score:3, Informative)

    by Robber Baron ( 112304 ) on Thursday January 04, 2007 @01:10PM (#17460096) Homepage
    The chariot and horse's ass as a determiner of standard gauge for railroads story, while entertaining, is untrue [discoverlivesteam.com].
  • 68k vs. 8086/8088 (Score:4, Informative)

    by nojayuk ( 567177 ) on Thursday January 04, 2007 @02:33PM (#17461774)

    There were a number of factors that made the IBM team go for the Intel solution rather than the superior 68000 chip from Motorola. One factor was, as you said, the 8088 chip could use plentiful and well-understood 8-bit peripheral chips like the 8251, the 8259 etc. Another factor was that the 68000 had a long gestation -- Motorola had problems producing chips in quantity that could actually run at their specced speed of 8MHz. I played with an early dev board which had a working 68000 but it only ran at 4MHz. At that time the 8086/8088 were already available on the market, in quantity and that's what IBM needed.

    The biggest factor though was probably the software. The x86 architecture was structurally similar to the venerable 8080 family's architecture (hence the memory segmentation, 8/16 bit registers etc.), and there was a lot of 8080-family code already out there, including developer toolsets like compilers and assemblers. Intel also released conversion tools for coders to convert existing 8080 code to run on their new 16-bit CPUs. The M68k was radically different structurally from Motorola's 6800 8-bit chips and its instruction set (although a dream to code new stuff for) did not make transitioning older 8-bit code easy.

  • by gnasher719 ( 869701 ) on Thursday January 04, 2007 @03:07PM (#17462442)
    '' Increment/decrement of a register is one of those 1-byte instructions. And don't underestimate those fancy instructions. It's very nice to be able to do things like a nondestructive multiply by 5 without using up the ALU. ''

    The one-byte increment/decrement encodings are gone in 64-bit code, and inc/dec is painfully slow anyway (it seemed a good idea in 1977, but partial condition-code register updates are a pain). Lots of floating point instructions have three or four bytes in prefixes alone, and that's before we even start with the instruction itself. Multiplication by 5 using the LEA instruction on a P4 took five cycles; on an ARM it takes one...
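    For reference, a minimal sketch of the multiply-by-5 idiom both posters are talking about (my own example; the instructions in the comments are what compilers such as GCC typically emit, not something measured in the post):

        /* Compilers usually strength-reduce a multiply by 5 into a single
         * shift-and-add style instruction, so no real multiplier is needed. */
        unsigned int mul5(unsigned int x)
        {
            return x * 5u;   /* x86-64: lea eax, [rdi + rdi*4]    (one LEA)
                                ARM:    add r0, r0, r0, lsl #2    (one ADD) */
        }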

    I had to write a bit of decryption code a few months ago, and a 200 MHz ARM chip was about as fast as an 800 MHz P4.
  • Re:Easy (Score:3, Informative)

    by soft_guy ( 534437 ) on Thursday January 04, 2007 @03:08PM (#17462472)

    If only there were some way the installer could detect your platform and auto-strip on installation. Too bad that's impossible.
    There were installers that did this. They typically asked you whether you wanted to install the app for "this macintosh" or "any macintosh" (because some people had applications on a server.)
  • Re:Easy (Score:3, Informative)

    by TheRaven64 ( 641858 ) on Thursday January 04, 2007 @03:16PM (#17462602) Journal
    Anyone who wants to can build a computer without using an off-the-shelf CPU for a few thousand dollars. You can get chips fab'd using one- or two-generation-old technology for under $1000 each in runs as small as 10. The price drops a lot every time you add a zero to the end of your production run. You can generate the masks using open source tools, and there are even some HDL cores already designed under Free licenses. Of course, you'll lag a good 5 years behind x86 performance if you do...

    Once you've got the CPU, you need a motherboard. This is a little easier to design, and if you're lucky you can use off-the-shelf supporting chips, or make your CPU a monolithic system on chip and have things like memory, USB, PCIe, SATA, etc. controllers integrated. You can even put RAM on the die, but that is going to drive up your production costs a lot (it's much cheaper to buy DRAM modules).

    If you want to build a non-x86 machine and still want a general purpose machine, I'd recommend that you look at ARM or PowerPC chips. You can buy ones intended for high-end embedded systems quite cheaply, and these are in the GHz range, which is fast enough for a lot of things. You can also pick up an evaluation board quite cheaply, so you don't need to build your own motherboard.

    Of course, when I say cheaply, you're still talking about at least twice the price of x86 for equivalent performance. Why? Because you're getting limited-volume stuff. PowerPC and ARM computers are cheap; the odds are that you own at least one or two but don't think of them as computers (what do you think your mobile phone is?). They tend to focus on low power rather than high performance, though. My current mobile phone has about the same processing power, RAM, and storage as my desktop from ten years ago, so it may be enough for a lot of applications. My personal recommendation would be to go for a PowerPC 400 series part.

    With Free Software, the cost of moving to a new processor architecture is cheap, but there still needs to be an incentive. Typically, that incentive is much better performance. An example of this is the Sun T1 chip, which gives much better performance both per watt and per dollar on a lot of web and database server workloads, and much worse performance on a lot of others.

    Unfortunately, it's a chicken and egg problem. Developing a CPU that's competitive in year X has a relatively fixed cost; you need a group of bright people to go from concept to mask and you need to set up some fabrication facilities. These are roughly fixed costs (you can save a bit for smallish runs by using someone else's fab, but it's still a big capital cost for them to do the set-up, and you have to pay it). Once you've paid these costs, then CPUs don't actually cost that much individually to manufacture. The second one you make probably costs under ten dollars. Unfortunately the first one cost several hundreds of thousands of dollars (at least; for something like the Core 2, you are talking hundreds of millions of dollars), so you need to sell enough to make enough to cover this.
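    As a back-of-the-envelope illustration of that fixed-cost argument (every number below is invented purely for illustration):

        #include <stdio.h>

        int main(void)
        {
            double fixed_cost = 300e6;  /* design team + masks + fab setup (assumed) */
            double unit_cost  = 10.0;   /* marginal manufacturing cost per chip (assumed) */
            double unit_price = 60.0;   /* selling price per chip (assumed) */

            /* Chips you must sell just to recover the fixed cost. */
            double breakeven = fixed_cost / (unit_price - unit_cost);
            printf("Break-even volume: %.0f chips\n", breakeven);   /* 6,000,000 */
            return 0;
        }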

    Will we keep using x86? Yes and no. x86 dominance will, I suspect, last until the end of the desktop era. I doubt it will last much longer though, so expect to be using something else in ten years or so. In the CPU market as a whole, x86 is a relatively small player; it's only in the desktop (including laptop) market that it has such a large presence.

  • Re:Why do we ... (Score:2, Informative)

    by dreamlax ( 981973 ) on Thursday January 04, 2007 @03:21PM (#17462692)
    "Why do we drive on the right side of the road in some places, left in others?" Where do we drive on the left in the U.S.?

    According to Wikipedia, the Virgin Islands drive on the left.

    And by the way, there are readers from many different countries on Slashdot, and I (from NZ) drive on the left.

  • by gmezero ( 4448 ) on Thursday January 04, 2007 @03:45PM (#17463200) Homepage
    x86 only refers to a set of interfaces to the CPU architecture. As of the launch of the Pentium, the modern "x86" processor is a RISC-based CPU with an internal x86 translation layer. Start your learning here [wikipedia.org]. x86 is also referred to as x86-32 or IA-32. And with the current generation of processors, we are leaving that behind for "x64", also known as EM64T, IA-32e or IA-64 in its various iterations. Many of the "x64" series generally maintain "x86" interface compatibility in order to allow legacy operation. For instance, you can run Warp Server on a dual Opteron server just fine.
  • by Creechur ( 847130 ) on Thursday January 04, 2007 @04:53PM (#17464422)
    And with the current generation of processors, we are leaving that behind for "x64" also known as EM64T, IA-32e or IA-64 in its various iterations.

    Just to clarify, IA-64 [wikipedia.org] was implemented only in Itanium processors, and is unrelated to x86-64. Intel themselves tried to break away from the x86 line, and the market wasn't very receptive, which is part of what drove the creation of x86-64 instead by AMD.

  • Re:Easy (Score:3, Informative)

    by MobyTurbo ( 537363 ) on Thursday January 04, 2007 @11:08PM (#17469006)
    IBM picked the x86 for the original IBM PC because they wanted something better than an Apple ][ and TRS-80 but not too much better

    They probably picked the 8088, rather than the alternative 16-bit processor at the time, the somewhat better Motorola 68000 used in (expensive) workstations, because of it being very close to the 8-bit 8080/Z-80 chip, which was powering the CP/M boxes that were the business software standard. That's why their first choice in operating system vendor was Digital Research, the makers of CP/M. If you're a whippersnapper, you may not remember CP/M, but it was the original platform of WordStar, dBase II, and a number of other popular business software programs at the time. (Even MS's own MBASIC had its latest version running on CP/M.) Because of the 8088's similarity in instruction set with the 8080 and PC-DOS's similarity to CP/M, these applications were quickly ported (at first complete with the 8080 and Z-80's 8-bit limitations!) to the PC.

    When Digital Research failed to follow through with IBM (Gary Kildall's famous plane flight), IBM then went to the biggest microcomputer software vendor, Microsoft, with a proposition that depended on whether they had an operating system soon available for them. Microsoft lied and said that not only did they have one, it was already finished, to keep others from making offers; then they bought one for less than $20,000 from a local hobbyist hardware maker and sold it to IBM, much in the same way that they secured a monopoly in BASIC for the Altair by claiming they already had BASIC written for it. (Of course, this is far from the worst of their business practices.)

  • Re:Easy (Score:2, Informative)

    by thogard ( 43403 ) on Thursday January 04, 2007 @11:40PM (#17469238) Homepage
    The Kildall theory goes out the window when you consider that IBM already had a license agreement for CP/M and was already making better x86-based machines than the PC (like the Displaywriter, which had a faster CPU, more memory potential, and CP/M), yet instead of using a well-engineered machine they built the PC to take on what they considered "toy computers". Their intent was to put Apple and Tandy out of the computer business and then regain their market with the "real computers".

    I took more than one call from IBM's sales people trying to get me to consider upgrading to a better computer at 10 times the cost. A friend had a job putting a flyer in boxes at the factory that offered to take the brand-new PC as a trade-in for something like a 360 or whatever their current mini was at the time. The temporary workers at the factory (in Boca Raton) were told that the assembly line for the new PC would be shut down with very little notice, and they were offered a very nice severance package. Even IBM's management had odd attitudes about the product until about two years after the XT.
  • by MSTCrow5429 ( 642744 ) on Friday January 05, 2007 @02:51AM (#17470364)
    Actually, the Pentium is CISC, x86 internally. The Pentium Pro is RISC, non-x86 internally. As are the AMD K5 and up. For some reason, Cyrix stuck with x86 internally. See sandpile.org.
