Why Do We Use x86 CPUs?

bluefoxlucid asks: "With Apple having now switched to x86 CPUs, I've been wondering for a while why we use the x86 architecture at all. The Power architecture was known for its better performance per clock, and other RISC architectures such as the various ARM models provide very high performance per clock as well as reduced power usage, opening some potential for low-power laptops. Compilers can also deal with optimization in RISC architectures more easily, since the instruction set is smaller and the possible scheduling arrangements are thus reduced greatly. With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work. So really, what do you all think about our choice of primary CPU architecture? Are x86 and x86_64 a good choice, or should we have shot for PPC64 or a 64-bit ARM solution?" The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market?
  • Chicken and Egg (Score:4, Interesting)

    by RAMMS+EIN ( 578166 ) on Thursday January 04, 2007 @11:24AM (#17458324) Homepage Journal
    I think it's a chicken-and-egg proposition. We use x86 because we use it. Historically, this is because of the popularity of the PC. A lot of people bought them. A lot of software was written for them. Other architectures did not succeed in displacing the PC, because people were reluctant to abandon their software. Now, after years and years of this, the PC has actually become the most performant platform in its price class, while simultaneously becoming powerful enough to rival Real computers.

    Slowly, other architectures became more like PCs: Alphas got PCI buses, Power Macs got PCI buses, Sun workstations got PCI buses, etc. Eventually, the same happened to the CPUs: the Alpha line was discontinued, Sun started shipping x86-64 systems, and Apple started shipping x86 systems. The reason this happened is that most of the action was in the PC world; other platforms just couldn't keep up in price and performance.
  • by ivan256 ( 17499 ) on Thursday January 04, 2007 @11:26AM (#17458342)
    The reason is that Intel provides better infrastructure and services than any other high-performance microprocessor vendor in the industry. When Motorola or IBM tried to make a sale, Intel would swoop in and offer to develop the customer's entire board for them. The variety of Intel reference designs is unmatched. Intel not only provides every chip you need for a full solution, but does so for more possible solution sets than you can imagine. Intel will manufacture your entire product, including chassis and bezel. Nobody even comes close to Intel's infrastructure services. That is why, even when other vendors have had superior processors for periods of time over the years, Intel has held on to market leadership. There may be other reasons too, but there don't have to be. That one alone is sufficient.

    The other answer, of course, is that we don't always... ARM/XScale has become *very* widely used, but that still comes from Intel. There are also probably more MIPS processors in people's homes than x86 processors, since the cores are embedded in everything.
  • Re:momentum (Score:4, Interesting)

    by Frumious Wombat ( 845680 ) on Thursday January 04, 2007 @11:33AM (#17458430)
    If you're dead-set on using GCC, yes. Alternatively, if you use the native compilers, which only have to support a fairly narrow range of architectures, you can get much higher performance. XLF on RS/6000s and Macs was one example (capable of halving your run time in some cases); IFORT/ICC on x86, or FORT/CCC on DEC/Compaq/HP Alpha Unix and Alpha Linux, were others. Currently GCC is not bad on almost everything, but native-mode compilers will still tend to dust it, especially for numeric work.

    Which brings up the other problem: not only are x86 chips cheap to make, but we have 30 years of practice optimizing for them. Their replacement is going to have to be enough better to overcome those two factors.
  • by RAMMS+EIN ( 578166 ) on Thursday January 04, 2007 @11:36AM (#17458468) Homepage Journal
    ``Performance per watt.''

    Not as I remember. As I remember, the PPW of PowerPC CPUs was pretty good, and getting better thanks to Freescale, but the problem was that Freescale's CPUs didn't have enough raw performance, and IBM's didn't have low enough power consumption. Freescale was committed to the mobile platform and thus was only focusing on PPW, whereas IBM was focusing on the server market, and thus favored performance over low power consumption. Seeing that the situation wasn't likely to improve anytime soon, Apple switched to Intel.
  • by Anonymous Coward on Thursday January 04, 2007 @11:40AM (#17458528)
    Dynamic translation through JIT optimisation isn't really that efficient for the general case, at least from an x86 source*.

    To see a good example of this, look at Transmeta Crusoe, which appeared to be an x86-compatible device, but was actually a 2-issue VLIW core running a software x86 emulator with a JIT compiler. Crusoe was really efficient at benchmarks, but its performance for "real world" applications was not so good. The simplest methods of optimisation - cache, branch prediction table, superscalar issue unit - seem to be more effective than complex optimisations involving recompilation.

    In Crusoe, the processor was specifically designed to operate by dynamic translation. It had hardware support for some things, like undoing speculatively executed instructions. If you pick a random ARM or PPC processor as your target, you don't get this, so performance will be even worse.

    If your source language isn't x86 code, you can clearly do more. If, for example, your source is written in C, you can do as much as an ordinary compiler. If your source is an intermediate register-transfer language, you can do almost as much. But x86 code doesn't provide much information to facilitate recompilation.

    * disclaimer: my PhD is in this subject area but I've not finished it yet.
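
    For anyone who hasn't seen one, the basic shape of a dynamic translator is roughly the following. This is only a toy sketch in C: the "guest ISA" is a two-opcode invention and "translation" is modelled as pre-decoding into a cached block instead of emitting real host code, so every name here is illustrative rather than anything Crusoe or another real translator actually does.

    /* Toy translate-and-cache loop: the core structure of a dynamic
     * binary translator, with all the hard parts left out. */
    #include <stdio.h>
    #include <stdint.h>

    enum { G_ADDI, G_HALT };                        /* invented guest opcodes  */
    typedef struct { uint8_t op; int32_t imm; } GuestInsn;
    typedef struct { int valid; GuestInsn decoded; } CacheEntry;

    static GuestInsn guest[] = { {G_ADDI, 5}, {G_ADDI, 7}, {G_HALT, 0} };
    static CacheEntry code_cache[16];               /* keyed by guest PC       */

    static int32_t execute(const GuestInsn *i, int32_t acc)
    {
        return i->op == G_ADDI ? acc + i->imm : acc;
    }

    int main(void)
    {
        int32_t acc = 0;
        for (int pc = 0; guest[pc].op != G_HALT; pc++) {
            CacheEntry *e = &code_cache[pc];
            if (!e->valid) {                        /* miss: "translate" once  */
                e->decoded = guest[pc];             /* a real DBT would emit   */
                e->valid = 1;                       /* host instructions here  */
            }
            acc = execute(&e->decoded, acc);        /* hit: run the cached block */
        }
        printf("acc = %d\n", acc);                  /* prints acc = 12         */
        return 0;
    }

    The real costs discussed above live in the "translate once" branch and in everything the cached code has to preserve about x86 semantics; the loop itself is the easy part.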
  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Thursday January 04, 2007 @11:56AM (#17458760) Journal

    The x86 ISA hasn't been bound to Intel for some time now. There are currently at least three manufacturers making processors that implement the ISA, and of course there is a vast number of companies making software that runs on that ISA. Not only that, Intel isn't even the source of all of the changes/enhancements in their own ISA -- see AMD64.

    With all of that momentum, it's hard to see how any other ISA could make as much practical sense.

    And it's not like the ISA actually constrains the processor design much, either. NONE of the current x86 implementations actually execute the x86 instructions directly. x86 is basically a portable bytecode which gets translated by the processor into the RISC-like instruction set that *really* gets executed. You can almost think of x86 as a macro language.
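
    As a concrete (and heavily simplified) illustration of that translation, here is a sketch in C of the "cracking" step: one x86-style instruction with a memory operand becomes a load micro-op plus an add micro-op. The structures, field names and register numbering are invented for the example; real decoders and micro-op formats are proprietary and far more involved.

    #include <stdio.h>

    typedef enum { UOP_LOAD, UOP_ADD } UopKind;
    typedef struct { UopKind kind; int dst, src1, src2, disp; } Uop;

    /* Crack "add dst, [base + disp]" into two internal RISC-like ops that
     * communicate through a temporary physical register (99 here). */
    static int crack_add_mem(int dst, int base, int disp, Uop out[2])
    {
        out[0] = (Uop){ UOP_LOAD, 99, base, -1, disp };   /* tmp <- mem[base+disp] */
        out[1] = (Uop){ UOP_ADD,  dst, dst, 99, 0 };      /* dst <- dst + tmp      */
        return 2;
    }

    int main(void)
    {
        Uop uops[2];
        int n = crack_add_mem(0 /* eax */, 4 /* esp */, 12, uops); /* add eax, [esp+12] */
        for (int i = 0; i < n; i++)
            printf("uop %d: kind=%d dst=%d src1=%d src2=%d disp=%d\n",
                   i, (int)uops[i].kind, uops[i].dst, uops[i].src1,
                   uops[i].src2, uops[i].disp);
        return 0;
    }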

    For very small processors, perhaps the additional overhead of translating the x86 instructions into whatever internal microcode will actually be executed isn't acceptable. But in the desktop and even laptop space, modern CPUs pack so many millions of transistors that the cost of the additional translation is trivial, at least in terms of silicon real estate.

    From the perspective of performance, that same overhead is a long term advantage because it allows generations of processors from different vendors to decouple the internal architecture from the external instruction set. Since it's not feasible, at least in the closed source world, for every processor generation from every vendor to use a different ISA, coupling the ISA to the internal architecture would constrain the performance improvements that CPU designers could make. Taking a 1% performance hit from the translation (and it's probably not that large) enables chipmakers to stay close to the performance improvement curve suggested by Moore's law[*], without requiring software vendors to support a half dozen ISAs.

    In short, x86 may not be the best ISA ever designed from a theoretical standpoint, but it does the job and it provides a well-known standard around which both the software and hardware worlds can build and compete.

    It's not going anywhere anytime soon.


    [*] Yes, I know Moore's law is about transistor counts, not performance.

  • Re:Easy (Score:5, Interesting)

    by HappySqurriel ( 1010623 ) on Thursday January 04, 2007 @12:01PM (#17458842)
    The reason "We" use x86 is because "we" use PCs, where x86 technology is dominant and obvious. However, "we" also use PDAs, cell phones, TiVos and even game console systems. As the functions of those devices melt into a new class of unified devices, other architectures will advance.

    Honestly, I think it is much simpler than that ...

    The problem has very little to do with the processors that are used and is entirely related to the software that we run. Even in the 80s/90s it would have been completely possible for Microsoft to support a wide range of processors (if their OS was designed correctly) and produce OS-related libraries which abstracted software development from needing to directly access the underlying hardware; on install, the necessary files would be re-compiled, and all off-the-shelf software could run on any architecture that Windows/DOS supported. In general, the concept is combining standard C/C++ with libraries (like OpenGL) and recompiling, so that no one is tied to a particular hardware architecture (a sketch of the idea follows below).

    Just think of how many different architectures Linux has been ported to; if DOS/Windows had been built in a similar way, you'd be able to choose any architecture you wanted and still run any program you wanted.
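
    Here is a small sketch of the kind of architecture-neutral C being described, assuming nothing about the host CPU: fixed-width types and an explicit byte order instead of dumping the in-memory representation, so the same source rebuilds unchanged on x86, PowerPC, ARM, or anything else with a C compiler.

    #include <stdio.h>
    #include <stdint.h>

    /* Serialize a 32-bit value in a defined (little-endian) byte order,
     * rather than memcpy()ing the in-memory representation, so the file
     * format does not depend on the host CPU's endianness. */
    static void put_u32_le(uint8_t out[4], uint32_t v)
    {
        out[0] = (uint8_t)(v);
        out[1] = (uint8_t)(v >> 8);
        out[2] = (uint8_t)(v >> 16);
        out[3] = (uint8_t)(v >> 24);
    }

    int main(void)
    {
        uint8_t buf[4];
        put_u32_le(buf, 0x12345678u);
        printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
        return 0;   /* prints 78 56 34 12 on every architecture */
    }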
  • by Lord of Hyphens ( 975895 ) <lordofhyphens.gmail@com> on Thursday January 04, 2007 @12:04PM (#17458876) Homepage
    Interesting note: Intel did sell its ARM/xScale off a few months ago.
  • by AndrewHowe ( 60826 ) on Thursday January 04, 2007 @12:28PM (#17459254)
    I can see where you're going with this... But... Well, not so much.

    RISC CPUs with 4-byte instructions that don't do very much require lots of memory bandwidth to execute.

    Well, I'm currently working on ARM, and stuff almost always ends up smaller than x86 code. Those 4-byte instructions actually do quite a lot. Oh, and that's with straight ARM code, not Thumb or Thumb-2.

    The x86 instruction set has lots of 1-byte instructions

    Not so many actually, and the ones it does have are mostly totally useless these days!

    and multi-byte instructions that do a lot.

    Well, you get to do fancy addressing modes on the rare occasions that you need them... But not too fancy, no pre/post increment/decrement etc.

    In other words, x86 is really just a compression scheme for instruction sets.

    Sort of, except that it was never designed to be one, and it's not very good at it at all.
    Well, you could say that it was an OK (but not great) encoding for 8086, but it's totally unsuited to encoding the instructions that modern software actually uses.
  • Re:Easy (Score:3, Interesting)

    by Marillion ( 33728 ) <ericbardes@gm[ ].com ['ail' in gap]> on Thursday January 04, 2007 @12:43PM (#17459512)

    Random note after I clicked submit - the designers of the DEC Alpha chip designed it as a 64-bit big-endian chip. Microsoft convinced DEC to add a feature to the Alpha that switched the CPU to a 32-bit little-endian mode so that Microsoft wouldn't have to recode all their apps that processed binary files with the assumption that integers were four-byte little-endian.

  • by Anonymous Coward on Thursday January 04, 2007 @01:08PM (#17460012)
    It also worked quite well with IBM's DAISY project, although it should be said that "quite well" is hard to define. For making better processors, this type of technology is currently competing with simpler, cheaper strategies like "add more cache", which are still viable.

    However, RISC machine code is a better input for a recompiler than x86 code, as the register space is larger. The recompiler has to reconstruct parts of the original code before compilation. Ideally, all of the original code would be recovered, allowing it to be completely optimised for the new target architecture, but unfortunately machine code generation is a very lossy process.

    Register assignment is just one part of the compilation process where information about the program is lost. A larger register space means that less information is lost, so RISC code makes a better input for a recompiler than x86. It is still not ideal though. Recompilers have to make conservative assumptions about code behaviour.
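
    A tiny example of the information loss, with hypothetical register assignments in the comments (not real compiler output): the two temporaries below are independent in the source, but on a register-starved target they can end up time-sharing one register, and a recompiler that sees only the machine code can no longer prove they were unrelated.

    #include <stdio.h>

    static int f(int a, int b, int c, int d)
    {
        int t1 = a + b;   /* register-starved x86: t1 lands in eax           */
        int r1 = t1 * 3;  /* t1 is dead after this line, so...               */
        int t2 = c + d;   /* ...eax is reused for t2: one register now holds */
        int r2 = t2 * 5;  /*    two unrelated source-level values in turn    */
        return r1 + r2;   /* a 32-register RISC would keep t1 and t2 apart,  */
    }                     /* leaving their independence visible afterwards   */

    int main(void)
    {
        printf("%d\n", f(1, 2, 3, 4));   /* prints 44 */
        return 0;
    }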
  • by Ninja Programmer ( 145252 ) on Thursday January 04, 2007 @01:22PM (#17460348) Homepage

    bluefoxlucid asks: "With Apple having now switched to x86 CPUs,
    I've been wondering for a while why we use the x86 architecture
    at all. ..."

    Because it's a better CPU.

    "... The Power architecture was known for its better performance
    per clock; ..."

    Utter nonsense. This is a complete lie. Benchmarks do not bear this out. And besides, that very qualifier reveals the PowerPC's primary weakness -- it has a far lower clock rate.

    "... and still other RISC architectures such as the various ARM
    models provide very high performance per clock as well as
    reduced power usage, opening some potential for low-power
    laptops. ... "

    ARM is currently made by Intel. It does have high per-clock performance, but it achieves that at a severe complexity penalty which dramatically limits clock rate. You can't get a "free extra shift" or "free conditional execution" without some compromise to the architecture.

    " ... Compilers can also deal with optimization in RISC
    architectures more easily, since the instruction set is
    smaller and the possible scheduling arrangements are thus
    reduced greatly. ... "

    Nice in theory. Intel's latest-generation compilers put other compilers to shame. Remember that x86s perform a lot of scheduling themselves, automatically. While putting more scheduling pressure on the compiler seemed to make sense back in the 90s, no compiler can solve these problems completely. This is especially critical in dynamic situations such as cache and branch misses (which the compiler can often neither detect nor solve). By letting the CPU resolve these problems dynamically as they occur, they can be handled nearly optimally all the time.

    " ... With Just-in-Time compilation, legacy x86 programs
    could be painlessly run on ARM/PPC by translating them
    dynamically at run time, similar to how CIL and Java
    work. ... "

    Are you smoking pot? The state of the art in x86 CPU emulation is represented by the Itanium and Transmeta CPUs. Both failed precisely because of the pathetic performance of their x86 emulators. The x86 has complicated addressing modes, flag registers, structured sub-registers, unaligned memory access, etc., which do not translate easily to "clean RISC" architectures. (However, they do translate to straightforward hardware implementations, as AMD and Intel have proven.)
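
    To make the flags/sub-register point concrete, here is roughly what a software emulator has to do for a single 8-bit add. It's a sketch only; the names are invented, and real emulators use tricks like lazy flag evaluation precisely to cut this kind of per-instruction bookkeeping down.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t eax;
        uint8_t  cf, zf, sf, of;   /* a few of the EFLAGS bits, tracked separately */
    } CpuState;

    /* Emulate "add al, imm8": AL is the low byte of EAX, and the flags
     * must be recomputed after every arithmetic instruction. */
    static void emulate_add_al_imm8(CpuState *cpu, uint8_t imm)
    {
        uint8_t  al  = (uint8_t)(cpu->eax & 0xff);
        uint16_t sum = (uint16_t)((uint16_t)al + imm);
        uint8_t  res = (uint8_t)sum;

        cpu->eax = (cpu->eax & 0xffffff00u) | res;        /* merge back into EAX */
        cpu->cf  = (uint8_t)((sum >> 8) & 1);             /* carry out of bit 7  */
        cpu->zf  = (uint8_t)(res == 0);
        cpu->sf  = (uint8_t)((res >> 7) & 1);
        cpu->of  = (uint8_t)(((~(al ^ imm) & (al ^ res)) & 0x80) != 0);
    }

    int main(void)
    {
        CpuState cpu = { 0x000000ff, 0, 0, 0, 0 };
        emulate_add_al_imm8(&cpu, 1);      /* 0xff + 1 wraps to 0x00: CF=1, ZF=1 */
        printf("eax=%08x cf=%d zf=%d sf=%d of=%d\n",
               (unsigned)cpu.eax, cpu.cf, cpu.zf, cpu.sf, cpu.of);
        return 0;
    }

    Every guest instruction pays some version of this tax in software, which is why a straight hardware implementation of the messy semantics beats emulating them.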

    " ... So really, what do you all think about our choice of
    primary CPU architecture? ... "

    It is the correct and logical choice. If RISC were really the greatest thing since sliced bread, then PowerPC should be running circles around x86. But the truth is that it can't even keep up.

    "... Are x86 and x86_64 a good choice; or should we have shot
    for PPC64 or a 64-bit ARM solution?"

    Why would you want to use a slower and less functional CPU? Yes, I said *LESS FUNCTIONAL*. Look at how the PowerPC performs atomic lock operations. It's pathetic. It's just built into the basic x86 architecture, but the PowerPC requires significant external hardware support (via special modes in the memory controller) to do the same thing. x86 simply supports "locked memory addresses", which maps nicely onto the caching modes.

    PowerPC is missing both the right instructions and the relevant memory semantics to support it directly. PowerPC uses separate lock instructions for cache lines, which means that each thread can lock out other threads arbitrarily; if you crash or stall with a held lock, all dependent threads deadlock. It also means you can't put multiple locks in a single cache line and expect them to operate independently.
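
    For what it's worth, the difference in primitives is easy to see from C. The snippet below uses GCC's __sync builtins; the same source compiles on both architectures, but on x86 the increment becomes a single lock-prefixed read-modify-write, while on PowerPC the compiler has to emit a lwarx/stwcx. retry loop with the appropriate barriers. (This is an illustration of the mechanism, not a benchmark.)

    #include <stdio.h>

    static volatile long counter = 0;

    static void hit(void)
    {
        /* x86: one "lock add"/"lock xadd" instruction does the atomic
         * read-modify-write directly.  PowerPC: a loop of lwarx
         * (load-and-reserve) and stwcx. (store-conditional), retried
         * until the store succeeds, plus memory barriers. */
        __sync_fetch_and_add(&counter, 1);
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++)
            hit();
        printf("%ld\n", (long)counter);   /* prints 1000 */
        return 0;
    }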

  • As apple proved (Score:3, Interesting)

    by polyp2000 ( 444682 ) on Thursday January 04, 2007 @01:30PM (#17460512) Homepage Journal
    By keeping their code open (at least internally) and cross-platform, it really doesn't matter what architecture it is running on. The switch to Intel was comparatively quick.

    The reasons for Apple's switch came down to cost and performance - and undoubtedly because IBM failed to deliver.

    Of course if something else comes up - I imagine we would see another change.

    N.
  • Re:momentum (Score:5, Interesting)

    by Chris Burke ( 6130 ) on Thursday January 04, 2007 @01:31PM (#17460546) Homepage
    You're absolutely right, it's all about momentum.

    Hard to optimize? You only have to optimize the compiler once; spread over millions of devices, that cost is small.

    This is a red herring anyway. RISC being simpler has nothing to do with it being easier to optimize. If it is easier for a compiler to optimize simple RISC-like instructions, then the compiler can use RISC-like instructions that are present in x86. This has been the situation for years and years. Compilers use a basic subset of x86 that looks a lot like RISC (minus variable instruction lengths), but also with some of the decent syntactic sugar of x86 like push/pop and load-ops (you know: add eax, [esp + 12] to do a load and add in one inst).

    The only real obstacle for compilers optimizing x86 is the dearth of registers. With fast L1 caches and stack-engine tricks like those in Core Duo, the performance hit for stack spillover isn't big, and x86-64 basically solves the problem by doubling the register count. Fewer than a RISC machine, but enough for what research has shown is typically needed. Maybe still a little too few, but combined with the first point, enough to make this a wash.

    These arguments are as old as RISC itself, but the basis behind them has changed as the technology has changed. All of the performance, efficiency, and other technical arguments have been put to pasture in terms of actual implementations. In the end, it comes down to this:

    The only reason not to use x86 is because it is an ugly mess that makes engineers who like elegance cry at night.
    The only reason to use x86 is because it runs the vast majority of software on commodity chips.

    Which of these factors dominates is not an open question; it has already been decided. It's just that those engineers who like elegance can't accept it, and thus keep bringing it up. Believe me, I don't like it either, but I don't see the point in screaming at reality and demanding that it change to suit my aesthetics.
  • by Sebastopol ( 189276 ) on Thursday January 04, 2007 @01:55PM (#17461082) Homepage

    Compilers can also deal with optimization in RISC architectures more easily

    This is a dead giveaway that the author is just stabbing in the dark. Scheduling is no more complex with CISC than with RISC. In fact, some code can be optimized even better by specialized CISC instructions for operations that occur frequently. This is an ancient debate that ends in a tie.
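
    One example of such a specialized instruction: the plain byte copy below is, semantically, exactly what a single x86 "rep movsb" performs (ESI=source, EDI=destination, ECX=count), while a load/store RISC has to spell the loop out instruction by instruction. Whether a given compiler actually emits the string instruction for this code depends on the compiler and its flags, so treat it as an illustration rather than a codegen guarantee.

    #include <stdio.h>
    #include <stddef.h>

    /* The C equivalent of one "rep movsb". */
    static void byte_copy(unsigned char *dst, const unsigned char *src, size_t n)
    {
        while (n--)
            *dst++ = *src++;
    }

    int main(void)
    {
        const unsigned char src[] = "specialized";
        unsigned char dst[sizeof src];
        byte_copy(dst, src, sizeof src);
        printf("%s\n", (const char *)dst);   /* prints "specialized" */
        return 0;
    }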

    With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work.

    Yeah, and run 20x slower.

    what do you all think about our choice of primary CPU architecture

    The R&D was sunk into x86 by the two most capable teams, Intel and AMD. Companies are driven by profit, and higher profit meant honing x86 and leveraging an installed base. That's the only reason we live in an x86 world.

    But claiming a RISC world would be better is an argument that was put to rest decades ago: we'd be having the same argument in reverse if RISC were the dominant architecture.

    Further, there are pedants out there who will argue all x86 is really RISC under the hood, but that's a bit misleading.

  • by dido ( 9125 ) <dido AT imperium DOT ph> on Thursday January 04, 2007 @02:04PM (#17461270)

    This reminds me of an old Dr. Dobb's Journal article I read more than a decade ago entitled "Personal Supercomputing" (I believe it was back in 1992 or thereabouts). The author found a good use for a 486+i860 combo (remember that chip?) by making the i860 a computation engine and the 486 something like an I/O processor; IIRC the system was called PORT. The compiler set for this system didn't generate native i860 or x86 code, but instead compiled C or FORTRAN programs into a type of fixed-length instruction set tailored to the source language. The i860 would interpret this instruction set using a very efficient hand-optimized interpreter that could fit almost entirely within the on-chip cache, the reasoning being that the frequent cache misses that come from executing RISC code directly are much more expensive than the interpretation overhead; that observation seems to have been correct, at least in this case. Essentially, instead of using the i860 as a native RISC processor, the author used it as what could be considered a CISC processor with programmable microcode inside its on-chip cache! The author even went so far as to say that this is the way RISC processors should really be used.

    I wonder why no one has tried the same approach with more modern RISC architectures. I can see that it doesn't lend itself well to multitasking (the article was concerned with building supercomputers, to which multitasking is the very antithesis), but it is similar in principle to how VMs such as Java's and the .NET CLR work. It should also be noted that the instruction sets the PORT compilers generated were memory-transfer instruction sets, designed so that most individual statements in C or FORTRAN would compile to at most one instruction, rather than stack-based instruction sets like those used by Java and .NET.
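
    For anyone curious what that looks like in practice, here is a toy sketch of the idea: a compact, fixed-width instruction word interpreted by a dispatch loop small enough to stay resident in the instruction cache. The encoding and opcodes are invented for the illustration and have nothing to do with the actual PORT design.

    #include <stdio.h>
    #include <stdint.h>

    /* 32-bit instruction word: 8-bit opcode and three 8-bit register fields. */
    #define INSN(op, a, b, c) \
        (((uint32_t)(op) << 24) | ((uint32_t)(a) << 16) | ((uint32_t)(b) << 8) | (uint32_t)(c))

    enum { OP_LOADI, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    /* The whole interpreter is this one small loop, so it (and the hottest
     * registers) can live in the on-chip cache. */
    static void run(const uint32_t *code)
    {
        int32_t reg[256] = { 0 };
        for (;;) {
            uint32_t w  = *code++;
            uint8_t  op = (uint8_t)(w >> 24), a = (uint8_t)(w >> 16),
                     b  = (uint8_t)(w >> 8),  c = (uint8_t)w;
            switch (op) {
            case OP_LOADI: reg[a] = (int8_t)b;        break;
            case OP_ADD:   reg[a] = reg[b] + reg[c];  break;
            case OP_MUL:   reg[a] = reg[b] * reg[c];  break;
            case OP_PRINT: printf("%d\n", reg[a]);    break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        const uint32_t prog[] = {            /* roughly one insn per C statement */
            INSN(OP_LOADI, 0, 6, 0),         /* r0 = 6        */
            INSN(OP_LOADI, 1, 7, 0),         /* r1 = 7        */
            INSN(OP_MUL,   2, 0, 1),         /* r2 = r0 * r1  */
            INSN(OP_PRINT, 2, 0, 0),         /* print r2 (42) */
            INSN(OP_HALT,  0, 0, 0),
        };
        run(prog);
        return 0;
    }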

  • MS CPU (Score:2, Interesting)

    by codecore ( 395864 ) on Thursday January 04, 2007 @02:05PM (#17461292)
    Actually, the writing is on the wall. There will be a new architecture in the 5-10 year time-frame. Microsoft has opened a design center in Silicon Valley, and I suspect that they are developing the IP for an MSIL core. They will then license this IP to any vendor (a la ARM). But how do you introduce a new architecture without an installed base of software (a la 29K, 88K, T800, Clipper, etc.)? Well, any software that has targeted the CLR will run on the new cores natively, or with an efficient JIT translation (a la Jazelle). This should open the CPU market to many other players. A good thing. What about legacy code? For source code, there will be a tool for translating C to C#. C++ may be translated to Managed C++, or C#. Java also maps to C#. For binaries, there will be a load-time translator (HDD image is x86, memory image is MSIL), and perhaps an install-time translator (DVD-ROM image is x86, HDD image is MSIL). This is all just my speculation, but the text on that wall looks pretty clear to me.
  • Re:Another reason (Score:3, Interesting)

    by vought ( 160908 ) on Thursday January 04, 2007 @02:17PM (#17461498)
    We know what happened to the PPC port (it was finished by Apple); we know what happened to the x86 port (it was secretly maintained by Apple).

    Having worked there during the mid-to-late 90s, I would characterize the effort more as "overhauling NeXTSTEP into Mac OS X and maintaining an x86 build".

    It was a matter of weeks after the acquisition was finalized before NeXT engineers had a PPC build running - minus a lot of support for Apple's ASICs. At the time, Apple's machines were pretty heavily specialized, with custom ASICs for video (PowerBooks), I/O, and specialized interfaces (ADB); other than supporting these chips and their bugs, there wasn't much to finish. ;-|

    I remember trying to NetBoot a Workgroup Server 700 (2x200MHz 604) with Rhapsody in late 1997. At that point, it had been up and running and demonstrated on campus several times throughout the summer. I never did get that to work because the Rhapsody team did not support the C&T video chip the NWS hardware used. Too bad, because those machines were extremely capable hardware.

    The x86 builds were maintained on control hardware as a matter of course throughout the Rhapsody, Mac OS X Server and Mac OS X days. Most Mac OS X development was done on PCs pre-2000, IIRC. I don't see the point in keeping it going on SPARC, but given the rumors about Apple and Sun during this time, I wouldn't have been surprised if it was working on those boxes too.
  • Re:Easy (Score:2, Interesting)

    by Ngarrang ( 1023425 ) on Thursday January 04, 2007 @02:26PM (#17461666) Journal

    *cough* Games? And don't give me that lame "well, you get an Xbox/Playstation/Wii/whatever for those!" answer to that question.

    There is no better controller combo in FPS games than WASD + mouse. Period.

    I would partially agree with this. Game makers are now optimizing their games for new graphics cards, whose GPUs are not x86 parts at all. These GPUs are custom and fast. Game makers target the broadest market to garner the most money, which currently means M$ Windows, which means an x86 CPU. I firmly believe that if M$ ported Windows to the Cell processor and maintained a way to run the older apps, the Cell processor would become the new worldwide standard.

    Maybe the perfect computer is one powered by a Cell processor with 16 GB of RAM running several virtual machines. For that matter, with virtualization, anyone could create their own virtual architecture and run it as if it really existed in hardware.

  • Limitations of x86 (Score:4, Interesting)

    by alexhmit01 ( 104757 ) on Thursday January 04, 2007 @02:34PM (#17461798)
    You're right about the advantages of the CISC ISA vs. a RISC ISA, but I wanted to throw a few more points out.

    Originally, going to RAM was cheap (in terms of cycles) and going to disk was slow, so we loaded what we could into RAM and processed it there. However, RAM was VERY expensive; until very recently, having "enough RAM" was rarely affordable. NT took almost 10 years to reach the mass market (Win2K sort of did it, XP finally did), because when NT 3.1 shipped it wanted 16 MB of RAM on x86 and 32 MB on the other systems, and going above 8 MB required specialized RAM. The RAM chips (you plugged chips into sockets then, not cards with standardized interfaces) were mass-produced for 1 MB, 4 MB, and 8 MB configurations, but going to 16 MB required VERY expensive (relative to normal RAM) parts. So upgrading from 4 to 8 MB was normally doable (usually they used the same chips, and you filled half the slots for 4 MB, I think - it's been a long time since I had a 486), but going to 16 MB would often cost $2000 for the new RAM, when whole computers sold for $2000.

    In the days of expensive RAM, the tighter ISA of x86 (more instructions per megabyte) gave it a major advantage in the real world. Sure, the ISA was crap, and the chips were crap, but when the most expensive component was RAM, the x86 used on average half the memory of its RISC competitors, which gave Intel a HUGE advantage in the cost-conscious desktop fight. It wasn't until the last 5 years, when Microsoft stagnated in its quest to use up more and more memory with each release (largely by failing to release OS updates), that the continual growth of RAM outpaced the software. WinXP will run in 256 MB and run decently in 512 MB, but 1 GB or 2 GB of RAM is reasonable for a decent system, and 512 MB is not reasonable for a budget system. Back when we were struggling, cost-wise, with 4 MB and 8 MB, the larger size of RISC programs was a problem.

    Up until this point, it wasn't clear that x86 was the winner; it was the release of Windows 95 on the Pentium chips that let Microsoft "win" the market. Until then, Windows was niche, OS/2 looked promising, Apple was a contender, and everyone just ran DOS/WP5.1 and NetWare 2.0. Up until 1995, it was anybody's game.

    The biggest hit to the x86 was the lack of registers. In the 8088 and 8086 days, going to RAM wasn't too expensive, and the chip couldn't do much in the meantime, so we didn't care so much that it was the most register-starved system around. However, as chips got faster, going to RAM got expensive, and we still didn't have registers, which is why the x86 GOT SMOKED in tight loops: it couldn't keep enough data on-chip. The original cache banks (these were high-tech; the chips were on a little card you plugged into your motherboard, and you could even upgrade them for more) were there to run faster than RAM, and created a third tier. Originally, this seemed like a hack to compensate for the lack of registers.

    However, our chips have massively increased in speed in the past 10 years (we were running at ~75-200 MHz in 1996), which means that feeding the processor with data is now the problem. The clock cycles are VERY short (we run at ~2 GHz; I remember the excitement at AMD making a 12 MHz 286, and the 8088 started at 1 or 2 MHz), which means that carrying the signal over the wires is now an issue, so our motherboards are tighter, we keep cache ON THE CHIP, etc.

    One reason the x86 always outperformed was that once going to RAM became expensive, the smaller instruction size (and, at the time, having 16-bit integers instead of 32 or 64) meant that if Intel provided a 128 KB cache, the other players needed 256 KB or even 512 KB to have the same caching advantage. This means that, all things being equal, RISC was the better architecture, but IN REALITY x86 could do the same amount of work with half the resources or less. This allowed the computers to be priced cheaper, AND it meant that Intel could make HUGE profits.

    For example, if RISC Vendor A sold a solution for $2500, assuming $2000 in parts
  • by Reziac ( 43301 ) * on Thursday January 04, 2007 @02:52PM (#17462144) Homepage Journal
    Recently I saw a stat that about 40% of the power consumption in California goes to feed server farms. Given that, even a 1% savings is millions, possibly billions of dollars worth.

  • Re:momentum (Score:3, Interesting)

    by Frumious Wombat ( 845680 ) on Thursday January 04, 2007 @04:24PM (#17463944)
    The time I saw the runtime halved, it was a piece of code that had been rearranged to run well on early-90s Cray/Alliant vector machines. Dusty-deck Fortran that grew up on VAXes or IBM mainframes doesn't do nearly as well, though I can still generally get 15-30% vs. g77. I can also get weird bugs, as much of that code depends on, *ahem*, "features" left over from Fortran 66 which have since gone away and which modern compilers (which are really F90/F95 compilers) don't support well.

    Personally, I'm sorry I won't be getting XServes with Power6 processors and 64-bit XLF, but price/flop for x86 isn't all that bad.

    Btw, my informal testing so far is showing that on PPC GFortran is about 12% slower than XLF for 32-bit code, which makes it significantly faster than the commercial competitors. A group that provides one of the packages I use (3/4 million lines of F77) recommends GFortran for building the 64-bit AMD version, for reasons of both speed and stability. So, it's getting better, but since I don't use C I don't know how much improvement GCC is showing for numerics.
  • Re:68k vs. 8086/8088 (Score:3, Interesting)

    by scdeimos ( 632778 ) on Thursday January 04, 2007 @11:45PM (#17469280)
    The 68000 was packaged in an enormous 64-pin package (I've heard it called "the aircraft carrier") and IIRC it required three voltages.

    Ah, no, the 68000 was definitely a single-supply 5-volt chip. The supply did come in via two pins, though (14 and 49), to spread the current load across the die.

    Perhaps you're confusing the power requirements of the chip with the power outputs of the Macintosh power supply which were +5v (main logic), +12v (drive and video power) and -5v (serial).
