Why Do We Use x86 CPUs?
bluefoxlucid asks: "With Apple having now switched to x86 CPUs, I've been wondering for a while why we use the x86 architecture at all. The Power architecture was known for its better performance per clock; and still other RISC architectures such as the various ARM models provide very high performance per clock as well as reduced power usage, opening some potential for low-power laptops. Compilers can also deal with optimization in RISC architectures more easily, since the instruction set is smaller and the possible scheduling arrangements are thus reduced greatly. With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work. So really, what do you all think about our choice of primary CPU architecture? Are x86 and x86_64 a good choice; or should we have shot for PPC64 or a 64-bit ARM solution?" The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market?
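The dynamic-translation idea in the question can be sketched in miniature. This is a toy sketch under made-up assumptions (a hypothetical stack-machine "guest" instruction set, nothing like a real x86 decoder): decode the guest instruction stream once into host closures, cache them, then execute the cached translation.

```python
# Toy dynamic binary translation: a hypothetical guest instruction set
# (a tiny stack machine) is translated once into host-native closures,
# then the cached translation is executed directly, so the decode cost
# is paid once per instruction rather than once per execution.
def translate(program):
    """Translate a list of (op, arg) guest instructions into host closures."""
    handlers = {
        "push": lambda arg: lambda stack: stack.append(arg),
        "add":  lambda _:   lambda stack: stack.append(stack.pop() + stack.pop()),
        "mul":  lambda _:   lambda stack: stack.append(stack.pop() * stack.pop()),
    }
    return [handlers[op](arg) for op, arg in program]

def run(translated):
    """Execute the cached translation and return the top of the stack."""
    stack = []
    for host_op in translated:
        host_op(stack)
    return stack[-1]
```

For example, `run(translate([("push", 2), ("push", 3), ("add", None)]))` evaluates 2 + 3 through the translated closures; real translators (like Apple's Rosetta did for PPC-on-x86) do the same caching trick at the machine-code level.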
momentum (Score:5, Informative)
So don't change unless there is a compelling reason.
Hard to optimize? You only have to optimize the compiler once; spread over millions of devices, that cost is small.
With runtime interpreters/compilers, you lose the speed advantage.
Volume and competition make x86-series products cheap.
Why Apple moved to x86 (Score:5, Informative)
Performance per watt.
The PPC architecture was not improving _at all_ in performance per watt. Apple's market was growing fastest in the portable space, but it was becoming impossible to keep temperatures and power consumption down with PPC processors.
And IBM's future plans for the product line were focusing on the Power series (for high-end servers) and custom game-console chips (like the Xbox 360's Xenon), and not on the PowerPCs themselves.
While I've never had any particular love for the x86 instruction sets, I, for one, enjoy the performance of my Macbook Pro Core 2 Duo, and the fact that it doesn't burn my lap off, like a PowerPC G5-based laptop would.
Good question... (Score:5, Informative)
Now I have to wait for the boner this gave me to go away before I can get up and walk around the office.
Maybe Apple could have put off the Switch after all...
Re:momentum (Score:2, Informative)
Apple Didn't 'Switch', They Got Dumped By IBM (Score:0, Informative)
After years of chip-order games, and with Apple being an all-around pain in the ass to work with, IBM (having recently locked up all three major console manufacturers) decided Apple was no longer worth the measly four percent of its chip business for the hassle of dealing with them. So IBM dumped Apple as a customer and declined to make a mobile version of the G5.
Jobs, in a panic, ran to PA Semi to bail Apple out and was turned away.
And AMD didn't have the capacity to sell to Apple.
So Apple was left with only Intel - as their 'first choice'
Bravo Jobs!
Re:momentum (Score:2, Informative)
There are still optimizations possible at the assembly level for each architecture that depend on the quirks and features of those architectures and even their specific implementations.
The intermediate level optimizations are intended to reduce code duplication by allowing optimizations common across all architectures to be applied to a common intermediate architecture.
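As a toy illustration of an intermediate-level optimization (the IR shape and opcode names here are made up for this sketch): constant folding done once on the common IR benefits every backend, while quirk-dependent scheduling stays in the per-architecture layer.

```python
# Toy IR: a list of (dest, op, a, b) tuples, where operands are either
# integer constants or names of earlier destinations. Folding constant
# operands happens once, independent of which CPU the backend targets.
def constant_fold(ir):
    folded = []
    for dest, op, a, b in ir:
        if isinstance(a, int) and isinstance(b, int):
            # Both operands known at compile time: replace with a constant.
            value = a + b if op == "add" else a * b
            folded.append((dest, "const", value, None))
        else:
            # Operand depends on runtime values: leave for the backend.
            folded.append((dest, op, a, b))
    return folded
```

So `("t0", "add", 2, 3)` folds to a constant 5 before any x86 or ARM code is ever emitted, while `("t1", "mul", "t0", 4)` would be left for each backend to lower however its instruction set prefers.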
Re:momentum (Score:3, Informative)
The Ugly Architecture Runs Well (Score:5, Informative)
Non-x86 architectures are certainly not inherently better clock for clock. That's a matter of specific chip designs more than anything else. The P4 was a fairly fast chip, but miserable clock for clock against a G4. An Athlon, however, was much closer to a G4. (Remember kids, not all code takes advantage of SIMD like AltiVec!) And the G4 wasn't very easy to bring to super high clock rates. The whole argument of architectural elegance no longer applies.
The RISC Revolution started at a time when decoding an ugly architecture like VAX or x86 would require a significant portion of the available chip area. The legacy modes of x86 significantly held back performance because the 8086 and 80286 compatibility areas took up space that could have been used for cache or floating point hardware, or whatever. Then, transistor budgets grew. People stopped manually placing individual transistors, and then they stopped manually fiddling with individual gates for the most part. Chips grew in transistor count to the point where basically, nobody knew what to do with all the extra space. When that happened, x86 instruction decoding became a tiny area of the chip. Removing legacy cruft from x86 really wouldn't have been a significant design win after about P6/K7.
Instead of being a design win, the fixed instruction length of the RISC architectures no longer meant improved performance through simple decoding. It meant that even simple instructions took as much space as average instructions. Really complex instructions weren't allowed, so they had to be implemented as multiple instructions. Something that was one byte on x86 was always exactly 4 bytes on MIPS. Something that was 12 bytes on x86 might be done as four instructions on MIPS, and thus take 16 bytes. So, effective instruction cache sizes and effective instruction fetch bandwidth grew on x86 compared to purer RISC architectures.
At the same time, the gap between compute performance and memory bandwidth on all architectures was widening. Instruction fetch bandwidth was irrelevant in the time of the PC XT, because RAM fetches could actually be done in something like a single cycle, less than it takes to get to on-chip SRAM caches today. But as time went on, memory accesses became more and more costly. So, if a MIPS machine was in a super tight loop that ran in L1 cache, it might be okay. But if it was just going balls to the wall through sequential instructions, or a loop that was much larger than cache, then it didn't matter how fast it could compute the instructions if it couldn't fetch them quickly enough to keep the processor fed. x86's absurdly ugly instruction encoding, though, acted like a sort of compression, meaning that a loop was more likely to fit in a particular cache size, and that better use of instruction fetch bandwidth was made.
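The code-density point can be made concrete with a toy back-of-the-envelope model. The byte counts below are illustrative assumptions, not measurements from real binaries: the same work encoded with variable-length instructions versus fixed 4-byte instructions.

```python
# Hypothetical operation mix: the second field is the variable-length
# ("CISC-style") encoding size in bytes; the third is how many fixed
# 4-byte instructions the same work might expand into on a load/store
# architecture.
OPS = [
    ("inc reg",        1, 1),  # one byte in 32-bit x86 vs. one 4-byte instruction
    ("load imm32",     5, 2),  # opcode + imm32 vs. two instructions to build a 32-bit constant
    ("add reg, [mem]", 3, 3),  # memory operand vs. separate load + add (+ address calc)
]

def code_bytes(ops):
    """Total encoded bytes under each scheme for the given operation mix."""
    cisc = sum(size for _, size, _ in ops)
    risc = sum(4 * count for _, _, count in ops)
    return cisc, risc
```

Under these made-up numbers the variable-length encoding takes 9 bytes where the fixed encoding takes 24, which is the sense in which an ugly dense encoding behaves like compression for the instruction cache and fetch bandwidth.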
Also, people had software that ran on x86, so they bought 9000 bazillion chips to run it all. The money spent on those 9000 bazillion chips got invested in building better chips. If somebody else had the sort of financial resources Intel had to build a better chip, and shipped it in that sort of volume, we might well see an extremely competitive desktop SPARC or ARM chip.
Re:Why Apple moved to x86 (Score:1, Informative)
#1 Economies of scale: Intel offered a lower cost per chip, so Apple could increase profit while holding the price of the product the same.
#2 Ability to upgrade more frequently: Intel releases newer, faster versions of processors much more rapidly than can be done with a custom processor.
#3 It's the software. The Achilles' heel for Apple was always "look at how little software it runs," or the ever-popular "but I want to run games." Granted, there were ways to try to get some programs working, but it was tedious and an undertaking 95%+ of the population probably wouldn't attempt. Now, by being on Intel, they greatly simplify the porting of software.
It's all about the software now. While certain people may drool over hardware (I'm one of them; I love hardware), it's the software that sells, and it's the software that defines a box. Apple has essentially simplified the battlefield, taking away its weakness to concentrate more on MSFT. Look, we run on the same hardware they do; look, we run the same programs they do; but we don't have that pesky MSFT OS.
Anyway my 2 cents
But you do use the metric system (Score:3, Informative)
And while you seem to be holding out, I did see one website that suggested less than 7% of the world's population doesn't use the metric system.
CISC (x86) vs RISC (Score:3, Informative)
http://www.heyrick.co.uk/assembler/riscvcisc.html [heyrick.co.uk]
Re:Not a technical reason (Score:2, Informative)
The process advantage means Intel can get a horrible architecture (x86) to perform acceptably at a decent price/performance point. RISC chips, while faster, require different software, and people aren't going to change their software unless a good reason exists. Intel's process advantage means it can sell good processors at a reasonable price. Given that, why switch? The x86 is even clobbering Intel's own Itanium (Itanic) architecture in terms of sales.
Other hardware vendors are competitive in market segments that place very high values on particular system metrics. For instance, the ARM processor is very competitive for low power dissipations and 32-bit applications. The 8-bit embedded microcontrollers (PIC, 8051) are really cheap. RISC chips still dominate the high performance computing market.
Re:Why do we ... (Score:3, Informative)
from snopes.com [snopes.com]
Re:Easy (Score:4, Informative)
Re:But you do use the metric system (Score:2, Informative)
Microsoft did do that with NT (Score:3, Informative)
Even in the '80s/'90s it would have been completely possible for Microsoft to support a wide range of processors (if their OS was designed correctly).
Microsoft did do that: there was a time when NT ran on x86, MIPS, DEC Alpha, and PowerPC machines. (Okay, that's not a huge range, but the point stands.)
x86s became cheaper and cheaper, and continuing development of NT on !x86 became financially infeasible due to the rapidly shrinking market share for non-PC platforms.
Re:Why do we ... (Score:3, Informative)
68k vs. 8086/8088 (Score:4, Informative)
There were a number of factors that made the IBM team go for the Intel solution rather than the superior 68000 chip from Motorola. One factor was, as you said, that the 8088 chip could use plentiful and well-understood 8-bit peripheral chips like the 8251, the 8259, etc. Another factor was that the 68000 had a long gestation -- Motorola had problems producing chips in quantity that could actually run at their specced speed of 8MHz. I played with an early dev board which had a working 68000, but it only ran at 4MHz. At that time the 8086/8088 were already available on the market in quantity, and that's what IBM needed.
The biggest factor though was probably the software. The x86 architecture was structurally similar to the venerable 8080 family's architecture (hence the memory segmentation, 8/16 bit registers etc.), and there was a lot of 8080-family code already out there, including developer toolsets like compilers and assemblers. Intel also released conversion tools for coders to convert existing 8080 code to run on their new 16-bit CPUs. The M68k was radically different structurally from Motorola's 6800 8-bit chips and its instruction set (although a dream to code new stuff for) did not make transitioning older 8-bit code easy.
Re:Code Size is the answer (Score:3, Informative)
The one-byte register increment/decrement encodings are gone in 64-bit code (repurposed as REX prefixes), and they were painfully slow anyway (they seemed a good idea in 1977, but partial condition-code register updates are a pain). Lots of floating point instructions have three or four bytes in prefixes alone, before we even start with the instruction itself. Multiplication by 5 using the LEA instruction on a P4 took five cycles; on an ARM it takes one...
I had to write a bit of decryption code a few months ago, and a 200 MHz ARM chip was about as fast as an 800 MHz P4.
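The LEA trick mentioned above is a strength reduction: a compiler can replace a multiply by 5 with a shift and an add, which on x86 typically lowers to a single LEA of the form base + index*4. This is a sketch of the arithmetic identity only, not real code generation:

```python
# Strength reduction for multiplication by 5:
#   x * 5 == x * 4 + x == (x << 2) + x
# On x86 a compiler can encode this as one LEA (reg + reg*4);
# on ARM, as one ADD with a shifted operand.
def mul5(x: int) -> int:
    return (x << 2) + x
```

The same pattern covers multiplies by 3, 5, and 9 (the scale factors LEA supports), which is why small constant multiplies rarely cost a full multiplier latency on either architecture.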
Re:Easy (Score:3, Informative)
Re:Easy (Score:3, Informative)
Once you've got the CPU, you need a motherboard. This is a little easier to design, and if you're lucky you can use off-the-shelf supporting chips, or make your CPU a monolithic system on chip and have things like memory, USB, PCIe, SATA, etc. controllers integrated. You can even put RAM on the die, but that is going to drive up your production costs a lot (it's much cheaper to buy DRAM modules).
If you want to build a non-x86 machine and still want a general purpose machine, I'd recommend that you look at ARM or PowerPC chips. You can buy ones intended for high-end embedded systems quite cheaply, and these are in the GHz range, which is fast enough for a lot of things. You can also pick up an evaluation board quite cheaply, so you don't need to build your own motherboard.
Of course, when I say cheaply, you're still talking about at least twice the price of x86 for equivalent performance. Why? Because you're getting limited-volume stuff. PowerPC and ARM computers are cheap; the odds are that you own at least one or two but don't think of them as computers (what do you think your mobile phone is?); they tend to focus on low power rather than high performance. My current mobile phone has about the same processing power, RAM, and storage as my desktop from ten years ago, though, so it may be enough for a lot of applications. My personal recommendation would be to go for the PowerPC 400 series.
With Free Software, the cost of moving to a new processor architecture is cheap, but there still needs to be an incentive. Typically, that incentive is much better performance. An example of this is the Sun T1 chip, which gives much better performance both per watt and per dollar on a lot of web and database server workloads, and much worse performance on a lot of others.
Unfortunately, it's a chicken and egg problem. Developing a CPU that's competitive in year X has a relatively fixed cost; you need a group of bright people to go from concept to mask and you need to set up some fabrication facilities. These are roughly fixed costs (you can save a bit for smallish runs by using someone else's fab, but it's still a big capital cost for them to do the set-up, and you have to pay it). Once you've paid these costs, then CPUs don't actually cost that much individually to manufacture. The second one you make probably costs under ten dollars. Unfortunately the first one cost several hundreds of thousands of dollars (at least; for something like the Core 2, you are talking hundreds of millions of dollars), so you need to sell enough to make enough to cover this.
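The fixed-cost arithmetic in that paragraph can be sketched with made-up numbers (the dollar figures below are hypothetical, chosen only to echo the Core 2-scale example above):

```python
# Per-chip cost = fixed design/mask cost (NRE) amortized over the
# production run, plus the marginal manufacturing cost per unit.
def unit_cost(nre: float, marginal: float, units: int) -> float:
    return nre / units + marginal

# A hypothetical $500M design with a $10/chip marginal cost:
# at 1 million units, each chip must recoup $510;
# at 100 million units, only $15.
```

This is the chicken-and-egg problem in one line: only an incumbent with x86-scale volume can spread the NRE thin enough to price competitively, and only competitive pricing attracts that volume.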
Will we keep using x86? Yes and no. x86 dominance will, I suspect, last until the end of the desktop era. I doubt it will last much longer than that, so expect to be using something else in ten years or so. In the CPU market as a whole, x86 is a relatively small player; it's only in the desktop (including laptop) market that it has such a large presence.
Re:Why do we ... (Score:2, Informative)
According to Wikipedia, the Virgin Islands drive on the left.
And by the way, there are readers from many different countries on Slashdot, and I (from NZ) drive on the left.
Silly rabits, x86 has been RISC core since PPro (Score:5, Informative)
Re:Silly rabits, x86 has been RISC core since PPro (Score:3, Informative)
Just to clarify, IA-64 [wikipedia.org] was implemented only in Itanium processors, and is unrelated to x86-64. Intel themselves tried to break away from the x86 line, and the market wasn't very receptive, which is part of what drove the creation of x86-64 instead by AMD.
Re:Easy (Score:3, Informative)
They probably picked the 8088, rather than the alternative 16-bit processor at the time (the somewhat better Motorola 68000, used in expensive workstations), because it was very close to the 8-bit 8080/Z-80 chips powering the CP/M boxes that were the business software standard. That's why their first choice of operating system vendor was Digital Research, the makers of CP/M. If you're a whippersnapper, you may not remember CP/M, but it was the original platform of WordStar, dBase II, and a number of other popular business software programs at the time. (Even MS's own MBASIC had its latest version running on CP/M.) Because of the 8088's similarity in instruction set to the 8080, and PC-DOS's similarity to CP/M, these applications were quickly ported (at first complete with the 8080 and Z-80's 8-bit limitations!) to the PC.
When Digital Research failed to follow through with IBM (Gary Kildall's famous plane flight), IBM went to the biggest microcomputer software vendor, Microsoft, with a proposition that depended on their having an operating system soon available. Microsoft lied and said that not only did they have one, it was already finished (to keep others from making offers), then bought one for less than $20,000 from a local hobbyist hardware maker and sold it to IBM; much in the same way that they secured a monopoly in BASIC for the Altair by claiming they already had BASIC written for it. (Of course, this is far from the worst of their business practices.)
Re:Easy (Score:2, Informative)
I took more than one call from IBM's salespeople trying to get me to consider upgrading to a better computer at 10 times the cost. A friend had a job putting a flyer in boxes at the factory that offered to take the brand new PC as a trade-in for something like a 360 or whatever their current mini was at the time. The temporary workers at the factory (in Boca Raton) were told that the assembly line for the new PC could shut down with very little notice, and were offered a very nice severance package. Even IBM's management had odd attitudes about the product until about two years after the XT.
Re:Silly rabits, x86 has been RISC core since PPro (Score:3, Informative)