Hardware

Benchmarks for Embedded Processors?

shill asks: "I am interested in working on an embedded Linux application, but I'm having difficulty choosing a processor. Is there somewhere I can find benchmarks or technical comparisons of various embedded processors, like the Transmeta Crusoe 5400, Intel StrongARM 1110, and National Geode GX1? I am looking for performance information as well as power requirements, etc."
  • by jpt.d ( 444929 ) <abfall&rogers,com> on Thursday December 13, 2001 @10:52PM (#2702612)
    http://www.eembc.org/

    Specific:
    http://www.eembc.org/Benchmark/score/ScoreFindStep1.asp?BenchmarkType=PRO

    To Slashdot: can we please keep out questions that can be answered by a Google keyword search within the first three results?

    Regards,
    jptd
    • I did a report on benchmarking in an embedded systems class I took last year. The great thing about EEMBC is that they try to eliminate some of the problems typically associated with benchmarking. Their members range from chipmakers like NEC, Intel, and Hitachi to software makers like Red Hat and Wind River. They have five categories of tests, corresponding to different processor applications (industrial, networking, office apps, and so on). They also offer varying levels to which chip makers can tweak the code to run on their hardware, from out-of-the-box scores that reflect a stock compiler's output all the way to hand-tuned assembly (see the illustrative kernel sketch at the end of the thread). This is great because it shows you how much improvement you can get if you're really willing to dig into the algorithms' implementation. EEMBC should be able to put an end to misleading benchmark scores.
  • The PowerPC "G4" (Score:2, Informative)

    by TRoLLaXoR ( 181585 )

    At the heart of today's high-end Macs, routers, and switches is the PowerPC G4, which Apple and Motorola claim as their "fourth generation" CPU. It is the product of the three-way AIM alliance, which has been designing and fabbing chips in the various PowerPC families since 1991.

    I contend that the "G4" is a blatant misnomer by Apple and Motorola to spur sales and compete with Intel's Pentium 4 product and nomenclature. Below I'll give some historical background, technical information, and plain facts that support my claim that the PowerPC G4 is really a second-generation processor, and the broader notion that the PowerPC family has not evolved significantly since 1995-- something Apple and Motorola propaganda has repeatedly accused the competition of in recent years. But first, the background...

    By 1991, the AIM alliance (Apple, IBM, and Motorola) had begun working on a single-chip implementation of IBM's RIOS chipset. This was IBM R&D's attempt to hack the POWER architecture into one chip instead of several. Imagine having to use a 64cm PowerPC *board* instead of a 64mm PowerPC chip. That's unacceptable to the desktop market.

    Motorola brought bussing technology to the table, which had previously been intended for the "Ripfire" 88k RISC series (displaced by the PowerPC), and Apple brought years of motherboard knowledge and operating systems (A/UX, Mac OS 7, and the new, mysterious Copland project). Between these three giants, the PowerPC 601 was realized. It ranged from 50 to 125MHz but was soon replaced by a quartet of newer, second-generation (G2) parts-- the 602, 603, 604, and 620.

    The 602 was an embedded chip, used for satellite descramblers, stadium scoreboards, and the Nintendo64. It lacked an FPU. The 604 was a workstation-class chip that was an absolute monster; performance was above the Pentium Pro's. The 620 was a 64-bit godhead beast that trounced all known microprocessors of the day-- but was mysteriously canned after it had been included in only a handful of beta motherboards by the Bull Group. The 603 was designed to draw little power and be cheap to manufacture, but AIM had hobbled it a bit too much-- beta testing sent it back to the lab to add L1 caches and the ability to access L2 cache. Performance afterwards was dismal, but acceptable for cheap consumer devices for the time being.

    It was this enhanced PowerPC 603 that would become the basis of its own savior. Apple and Mot only admitted that the 603 had been subpar over its whole production run once they had a replacement ready. By taking the L2 caching of the 620 and adding it to the 603, they created the PowerPC 750L. And to Apple and Mot, this small change justified dubbing it a whole new generation of processor. Say hello to the G3.

    Fast forward a few years. By 2001, Motorola was shipping 800MHz PowerPC 7450s, a "G4" series part. The "G4" stands for "Generation 4," which is totally misleading. Look at it this way: the entire 74xx / G4 family is based on the "G3" family, its prime "advances" over the G3 being an FPU ripped from a PowerPC 604, and AltiVec, a questionable technology meant to operate on multiple pieces of data at once (MMX, anyone? See the AltiVec sketch at the end of the thread). To get a better look at the crawl from 603 to 7450, let's look at a chart.

    [censored by Slashdot Lameness Filter]

    As you can see, the "G4" is really just an evolution of the 603. The more "features" Mot adds to the creaky, second-generation 603 core, the slower the chip goes. Don't believe me? Visit SPEC's site [spec.org] and read the numbers: a 500MHz PowerPC 7400 is just as fast as an 800MHz PowerPC 7450. And why are IBM *and* Mot still continuing PowerPC 750 development!? Mot can no longer expect to push this aging family on to 1GHz. It's clear that for PowerPC to survive, something drastic must be done. To this end I suggest two possible courses of action.

    First, since its initial run with the PowerPC 604, Motorola has introduced 3 new fabrication processes. I suggest applying these latest fabrication processes, as well as Silicon-on-Insulator and Copper wiring, to the 604e. It's highly probable that such a part could reach GHz speed. Seeing that the "G3" began at 200MHz and will top 1GHz soon, the 604e could do much better-- it started at 100MHz and made it all the way to 400MHz (not in any Mac, but in an MCG motherboard).

    The other, more expensive option is to resurrect the PowerPC 620 and include all of today's latest enhancements. Give it AltiVec, a copper process, Silicon-on-Insulator, on-chip L2 cache up to 4 megs in size, the ability to address up to 8 megs of L3 cache, SpeedStep technology, etc., and you'd have a chip that nothing from Intel or AMD could touch. The MHz myth would be null and void, the MHz war would be over-- and the problem of using dodgy G2 technology to drive Macs and networks the world over would finally be solved.

  • by pagercam2 ( 533686 ) on Thursday December 13, 2001 @11:26PM (#2702735)
    It all depends on what you want to do. Benchmarks are generally pretty useless, and the power estimates even more so: what is the processor supposed to be running when the power estimate is taken? Small applications running entirely out of cache will use less power than those that must use external memory, and then how do you decide whether the power drawn by the memory counts as part of the processor or not? The processor is driving the address and data lines when writing, but how do you isolate the power contribution between the two? Chips like the Transmeta require support chips, which adds a second number to the processor's power contribution. As far as performance goes, if you are doing floating point, a processor with a floating-point unit will be better even if an integer-only processor has a higher clock speed; conversely, a simple processor that does only fixed-point math is generally smaller and more power efficient (see the fixed-point sketch at the end of the thread). These processors also differ in their peripherals: the StrongARM has a lot, the Transmeta relies on support chips, the Geode has some. Benchmarks are only helpful in very general relative terms; you really need to understand your application and match it to the processor to make a valid decision.
  • by boopus ( 100890 ) on Friday December 14, 2001 @01:57AM (#2703093) Journal
    I've had very little experience with embedded processors, but generally embedded processors are expected to get their work done at whatever rate is required. If the processor can do the task it's being embedded for, any extra speed is wasted. So define your problem, and find the cheapest way to get it accomplished.

    One example of this is a parking garage system I've had the bad fortune to work on a couple of times. It consists of multiple 486s running Linux (these things have uptimes measured in years). Each machine spits out tickets, calculates times/rates, or reads monthly-pass cards, and none of them needs anything more than a 486, even the one with the wireless link to the accounting system, since cutting the microseconds they take to do their task down to nanoseconds wouldn't help anyone...
  • Wrong question (Score:4, Informative)

    by bluGill ( 862 ) on Friday December 14, 2001 @10:59AM (#2704134)

    Moore's law applies, so if you choose a processor that turns out not to be fast enough, you can install one twice as fast when you ship; and if that isn't fast enough, then none of today's processors are fast enough either. Of course, if you depend heavily on one feature, you should choose a processor that has it, but normally this isn't an issue.

    Where I work we use the StrongARM (SA110), a nice chip overall. However, the diagnostics people estimate that we lost a year of development because there is (or was?) no in-circuit emulator. On the other hand, the StrongARM has some nice supporting hardware, so it took us less time to design the hardware.

    x86 is an ugly instruction set. You should reject all x86 thoughts just based on that. Any assembly programmer can learn whatever you choose, and most work is done in something else (C/C++, normally), so those who are not assembly experts should have a nice binary to look at for the rare times they do have to read disassembled output. RISC is really nice for that reason.

    It is all a trade-off; speed, however, isn't the important part. How nice it is to design the hardware, and how nice it is to program (and debug!) your application, are what's important. Don't forget power consumption and cooling requirements.

    I have no doubt that there are some things I didn't mention because they aren't a problem for the things I work with, but you should look.

    • You're right! I myself have been in the situation where a guy had found the "perfect" chip (low power, low cost), but unfortunately it required a huge number of external parts, and we ended up drawing four times the current listed in the specs.
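
To give a rough sense of what the EEMBC tuning levels mentioned above mean in practice, here is a minimal, purely illustrative C kernel of the kind such suites time; it is not actual EEMBC code, and the names are made up. The out-of-the-box score is what a stock compiler makes of the plain loop, while the optimized levels let a vendor unroll it, use saturating multiply-accumulate instructions, or rewrite it in assembly for the target core.

    /* Illustrative only -- not EEMBC source. A small FIR-filter-style kernel
     * of the sort embedded benchmark suites measure. x must hold n + taps - 1
     * Q15 samples; h holds the filter coefficients. */
    void fir_q15(const short *x, const short *h, short *y, int n, int taps)
    {
        int i, j;
        for (i = 0; i < n; i++) {
            long acc = 0;
            for (j = 0; j < taps; j++)
                acc += (long)x[i + j] * h[j];   /* widen before multiplying */
            y[i] = (short)(acc >> 15);          /* scale back to Q15 */
        }
    }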
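
To make the AltiVec point above concrete, here is a minimal scalar-versus-SIMD sketch, assuming GCC with -maltivec, 16-byte-aligned buffers, and a length that is a multiple of four; the function and variable names are made up for illustration.

    #include <altivec.h>

    /* Scalar: one multiply-add per loop iteration. */
    void scale_add_scalar(const float *a, const float *b, float *out, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            out[i] = a[i] * 2.0f + b[i];
    }

    /* AltiVec: four floats per iteration via vec_madd.
     * Assumes n % 4 == 0 and 16-byte-aligned pointers. */
    void scale_add_altivec(const float *a, const float *b, float *out, int n)
    {
        static const float two_f[4] __attribute__((aligned(16))) =
            { 2.0f, 2.0f, 2.0f, 2.0f };
        vector float two = vec_ld(0, two_f);
        int i;
        for (i = 0; i < n; i += 4) {
            vector float va = vec_ld(0, a + i);
            vector float vb = vec_ld(0, b + i);
            vec_st(vec_madd(va, two, vb), 0, out + i);
        }
    }

This is the same "operate on multiple pieces of data at once" idea that MMX applies to packed integers, here applied to 128-bit vectors of four floats.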
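
And to put something behind the fixed-point remark above, here is a minimal sketch of integer-only Q15 arithmetic; the type and function names are illustrative, not from any particular library. A part without an FPU, like the PowerPC 602 mentioned earlier or many small embedded cores, runs this with ordinary integer instructions.

    /* Q15 fixed point: 16-bit values with 15 fractional bits, range [-1, 1). */
    typedef short q15_t;

    #define FLOAT_TO_Q15(x) ((q15_t)((x) * 32768.0f))

    /* Multiply two Q15 values: widen to 32 bits, then shift back down. */
    static q15_t q15_mul(q15_t a, q15_t b)
    {
        return (q15_t)(((long)a * (long)b) >> 15);
    }

    /* Apply a constant gain to a sample buffer -- no floating point anywhere. */
    void apply_gain_q15(q15_t *buf, int n, q15_t gain)
    {
        int i;
        for (i = 0; i < n; i++)
            buf[i] = q15_mul(buf[i], gain);
    }

The trade-off is exactly as described: the integer version is small and cheap to run, but you give up dynamic range and have to manage scaling yourself, whereas a chip with an FPU lets you simply write the float code.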
