Hardware Hacking

Quick, Standard Measurement for CPU Power? 31

captnitro asks: "A particular research project I'm developing right now needs to compare 'potential' (idle/none) and 'load' for various hardware capabilities, and quickly -- maybe up to several times every minute. For disk space and RAM it's relatively easy -- find what's used and what's not and report the ratio. For CPU, I have plenty of time to test 'potential' when the app starts. But for testing CPU load, I need a standard 'ruler' that can compare across varying platforms and processors (e.g., x86, PowerPC, embedded, single- and multi-processor) -- so, for example, idle percentage won't work. At the same time, I don't have the ability to time 'openssl speed' every 25 seconds without bringing the system to a halt. I'm willing to sacrifice precision of the measurement for generality of the unit -- that is, the operations this test is for would be primarily mathematical and not, say, text sorts -- but I'd prefer a generic, quick test of the current processor load rather than an average of 25 different tests. Regardless of hardware, the OS distribution is mostly *nix-based -- NetBSD, Linux, and even Mac OS X. Wild ideas are perfectly acceptable -- any thoughts?"
This discussion has been archived. No new comments can be posted.

Quick, Standard Measurement for CPU Power?

Comments Filter:
  • Bogomips (Score:2, Funny)

    by Anonymous Coward
    everyone understands bogomips, right?
  • by MerlynEmrys67 ( 583469 ) on Friday May 06, 2005 @03:34PM (#12455517)
    Well, first off - what you want is the CPU available above a certain priority. I may be running a process at 100% CPU usage but at a very low priority - so there is no "left over" CPU to utilize, yet a new application can still come in and get 100% of the CPU.

    I am not even sure there is a way of measuring what you want. There are so many variables (Disk I/O, memory bandwidth, etc.) that you can't get a reasonable measurement of how much CPU is left over to do useful work for your process.

    The other thing is that many things can take down a CPU as well - a huge burst of network traffic can take an "idle" CPU and peg it at 100% in kernel usage.

    Your best bet would probably be to hook into the various schedulers and get deep knowledge of what is really going on - a pain in the butt, but doable.

    • CPUs aren't comparable, so you need to get a % and normalize it.

      The easiest way to get a % is from something like top - but you can definitely do better, especially by taking priority into account. You definitely want to find a way to _access_ information the scheduler already has, not make little minibenchmarks.

      But you still have the problem of 100% of a PII vs 100% of a G5. This is a principally insoluble problem that varies based on your application's use of int, flop, registers, L1, L2,
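      A minimal sketch of pulling that information from the scheduler's own counters, assuming Linux's /proc/stat is available (NetBSD and OS X would need sysctl()/host_statistics() instead):

```python
# Rough sketch: read the counters the Linux scheduler already keeps in
# /proc/stat instead of running a mini-benchmark. Linux-specific -- on
# NetBSD or OS X you'd go through sysctl()/host_statistics() instead.
import time

def read_cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)  # idle + iowait
    return idle, sum(fields)

def busy_fraction(before, after):
    """Fraction of non-idle time between two (idle, total) samples."""
    idle = after[0] - before[0]
    total = after[1] - before[1]
    return 1.0 - idle / total if total else 0.0

# Usage: sample twice, a second apart --
#   a = read_cpu_times(); time.sleep(1); b = read_cpu_times()
#   print("busy: %.1f%%" % (100 * busy_fraction(a, b)))
```

      Sampling a delta between two reads (rather than one snapshot) is what makes this a load measure rather than an uptime average.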
  • How about (Score:3, Insightful)

    by CounterZer0 ( 199086 ) on Friday May 06, 2005 @03:49PM (#12455809) Homepage
    Mhz? :)
    Or!
    Watts of thermal power on the die surface!
  • SPEC (Score:4, Informative)

    by machinecraig ( 657304 ) on Friday May 06, 2005 @04:07PM (#12456087)
    It sounds like you are looking for a way to normalize and compare system (and application?) performance across different hardware platforms.
    Say hello to SPEC [spec.org]. This is exactly what the organization was formed for. Take a look at their CPU benchmarks. I know you're looking for more of a snapshot, and less of a benchmark - but I would think SPEC is a good place to start.
    • Not enough information was given in the question, but it is possible that the application in mind is measuring the performance of webservers in real time. So the experimenter doesn't (or does?) want tests that take as long as a good SPEC run would. I was somewhat confused by the article text.

      Then again, it's possible that the application could be used to measure the speed of web clients. In that case, a flash or javascript loop with whirling icons could do the trick, but it would take
  • Perhaps I am misunderstanding the question, but could you measure current flow to the CPU to gauge its operating level? You know the idle current draw, and you know the expected peak draw, so by measuring the present current flow through the processor you should be able to tell how loaded it is.
  • You seem to hint that openssl speed is the type of metric you want... so why not run openssl speed once and use one of its tests as your power rating? For example, say you choose blowfish cbc 8192 as the power rating. The total power for a group of servers would be the sum of all their power ratings, and the available processing power would be each server's CPU idle % * its power rating. With a few simple scripts you could have a rough approximation of the available processing power across all platforms. You could also design
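    That back-of-the-envelope arithmetic is easy to script. A sketch (the per-host ratings are placeholders here -- in practice you'd parse one line of `openssl speed` output at startup, and the figures and units below are assumptions, not measurements):

```python
# Sketch of the "rate once, then scale by idle %" idea above.
# rating = a fixed per-host number from a one-time benchmark run;
# idle_fraction = the current idle % sampled cheaply at runtime.

def available_power(rating, idle_fraction):
    """Estimated spare capacity: fixed per-host rating * current idle fraction."""
    return rating * idle_fraction

def cluster_power(hosts):
    """Sum spare capacity over (rating, idle_fraction) pairs."""
    return sum(available_power(r, i) for r, i in hosts)

# e.g. two hosts rated 500 and 300 'units', at 40% and 90% idle:
#   cluster_power([(500, 0.40), (300, 0.90)])  # ~470 units
```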
    • Yeah, that was more or less what I was thinking too, except I'd be tempted to use CPU temperature instead of power consumption, just because most hardware platforms already incorporate heat sensors for CPUs, and I don't know of any commodity hardware which will tell you how many watts your CPU is drawing.

      It would be imperfect, because other things can influence CPU temperature -- some CPUs have variable-speed fans (especially on laptops), and other factors can cause temperatures to rise inside the case
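      If you do try the temperature route, here is a sketch of reading the sensors Linux exposes under /sys/class/thermal (paths and units vary by platform; the millidegrees-Celsius format assumed here is common but not universal, and other OSes need different interfaces entirely):

```python
# Read CPU/board temperatures from Linux's /sys/class/thermal.
# Assumes the common layout: thermal_zone*/temp holding millidegrees C.
from pathlib import Path

def cpu_temps_c(base="/sys/class/thermal"):
    """Return [(zone_name, degrees C), ...] for each readable thermal zone."""
    temps = []
    for zone in sorted(Path(base).glob("thermal_zone*")):
        try:
            milli = int((zone / "temp").read_text())
        except (OSError, ValueError):
            continue  # zone missing, unreadable, or non-numeric
        temps.append((zone.name, milli / 1000.0))
    return temps
```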

  • One Mississippi.
    Two Mississippi.
    Three Mississippi.
  • by russellh ( 547685 ) on Friday May 06, 2005 @06:00PM (#12457583) Homepage
    Whatever you do, give energy consumption some weight in the comparison. In this day and age, low energy requirements are a virtue. In other words, of two CPUs that are otherwise equivalent, the one which consumes less energy ought to win.
  • by Anonymous Coward on Friday May 06, 2005 @06:09PM (#12457673)
    Quick
    Fast
    Freakin Fast
    Holy Shit
    Dude you r0x0r
    OMFG

    Oh, just so you know, you should always buy at the Holy Shit level. If you buy OMFG you pay too much, and you will soon regret buying anything less than Freakin Fast. But buy Holy Shit and you will be happy for two to three years without paying too much.
  • Alas, not quick... (Score:4, Informative)

    by davecb ( 6526 ) * <davecb@spamcop.net> on Friday May 06, 2005 @06:31PM (#12457909) Homepage Journal
    ... but standard, sufficiently so that it can be "gamed" (:-))

    The paper you're looking for is Wong, Brian, "Comparing MVS and UNIX Workload Characteristics" Int. CMG Conference, 1998, in which (if memory serves) he looks at the comparability of MVS and Unix programs, and derives a series of comparisons which very approximately correlate with TP performance in a benchmark like TPC.

    A paper that's available to CMG [cmg.org] members is his Developing a General-Purpose OLTP Sizing Tool [cmg.org], which builds on the subject.

    If you have a good, representative test that loads a whole system, then in my opinion you have a good predictor of average performance. (Tautology, eh?) If you have such a test, though, people will learn how to get the best possible numbers from it. Average together all the various X MHz Pentium III TPC-C results, and don't trust any single one (:-))

    --dave

  • by vic5 ( 472734 )
    Set up a low-end PC with a NIC and the OS of your choice. Set up a share on the server to be tested. On your low-end PC, write a script or "code" to move a small file to the server and then move it back. Log the rates, times and delta comparisons on the low-end PC. Be sure to give the file a unique name on each move; something like a time stamp should work. You could, with the proper coding, derive a good bit of data without adversely affecting the test subject.
  • A long time ago, back when engineers built computers instead of marketers, everyone talked about measuring things in flops. Flops was apparently the well-rounded, accurate and objective measuring system. I believe I only see flops in regard to supercomputers these days, where it is applied to measure their computational power.

    Consider:
    http://en.wikipedia.org/wiki/Flops [wikipedia.org]
    http://en.wikipedia.org/wiki/Million_instructions_per_second [wikipedia.org]
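    For illustration only, a toy flops-style index -- pure Python mostly measures interpreter overhead, so treat the result as a relative number between runs of this same script, not real hardware FLOPS:

```python
# Toy flops-style micro-benchmark. Counts two floating-point operations
# per loop iteration (one multiply, one add) and divides by wall time.
# In an interpreted language this is dominated by interpreter overhead,
# so it is only a rough relative index, not a hardware FLOPS figure.
import time

def flops_index(n=1_000_000):
    """Time n multiply-adds and return (very rough) operations per second."""
    x = 1.0000001
    acc = 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # one multiply + one add
    elapsed = time.perf_counter() - t0
    return (2 * n) / elapsed  # 2 ops per iteration
```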

  • Random idea.. (Score:1, Interesting)

    by Anonymous Coward
    1) do some research on figuring out the relationship between Amps (current draw) and units of computation per time (like flops maybe). I.e., is it a linear relationship? Come up with a relationship.

    2) Using #1, calculate the current draw of CPU (perhaps use a separate dedicated power supply for the CPU and/or mobo) and therefore the current CPU utilization.

    You might need to address temperature in there too. Or *only* use temperature rather than current (for instance temperature difference between the outs
  • I've started a small project to measure cpu performance across platforms using john the ripper. You can see the results here:

    http://the1.no-ip.com/~the1/johnbench.txt [no-ip.com]

    You can download the john the ripper source code here:

    http://www.openwall.com//john/ [openwall.com]

    and then build it and run your own tests.
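    To fold numbers like those into a monitoring script, here is one way to pull the "real" crypts-per-second figure out of john's benchmark lines (the output format varies between versions, so the pattern below is an assumption, not a guarantee):

```python
# Parse crypts/sec from john-the-ripper benchmark output lines such as
#     Many salts:     1234K c/s real, 1250K c/s virtual
# The exact wording/suffixes vary by john version; this regex is a guess
# at the common "<number><K|M|G> c/s real" shape.
import re

_CS = re.compile(r"([\d.]+)([KMG]?)\s*c/s\s*real")
_SCALE = {"": 1, "K": 1e3, "M": 1e6, "G": 1e9}

def parse_cs(line):
    """Return real crypts/sec from one benchmark line, or None if absent."""
    m = _CS.search(line)
    if not m:
        return None
    return float(m.group(1)) * _SCALE[m.group(2)]

# e.g. parse_cs("Many salts: 1234K c/s real, 1250K c/s virtual")
#      -> 1234000.0
```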

  • Watts?
