Software | The Internet

The Cost of Distributed Client Computing?

ialbert asks: "I only recently decided to install SETI@home on my mostly idle home computer. It got me thinking though, are those free processor cycles truly free? Has anyone had experience with processors dying prematurely due to a constant, heavy load, or is usage pretty inconsequential? What about other components, like harddrives? And how much does a 100% processor load increase your power bill versus a 1-2% idle load over the course of a year? It's easy to think of idle computers as an untapped computational resource, but what are the costs to the computer owners?"
  • full speed ahead (Score:1, Informative)

    by DNS-and-BIND ( 461968 ) on Wednesday October 15, 2003 @11:51AM (#7220272) Homepage
    Processors always run at full speed; it's just that they're executing NOPs when they're "idle".

    (Excludes weirdo laptop setups.)

  • the math (Score:5, Informative)

    by proj_2501 ( 78149 ) <mkb@ele.uri.edu> on Wednesday October 15, 2003 @11:51AM (#7220277) Journal
    Somebody worked this out when I started the e2 distributed.net team.

    The figures [everything2.com]
  • Power (Score:5, Informative)

    by DaHat ( 247651 ) on Wednesday October 15, 2003 @11:52AM (#7220284)
    I've found that on my laptop, running seti@home cuts my battery life in half, so when I care about power I make sure to leave it off. Whenever it's plugged in, though, it's chugging away like the rest of my boxes. When it comes to power costs, I don't really care at the moment since I don't pay for my electricity; it's included with my rent, and believe you me, I make good use of that.

    As for premature death of the CPU, being under heavy load should not hurt it; powering on and off frequently does far more 'wear and tear'.
  • by eaglebtc ( 303754 ) * on Wednesday October 15, 2003 @11:54AM (#7220316)
    I have a Pentium 4 @ 2.6GHz, overclocked to 3.2GHz. My power strip is plugged into a great little device: the Kill-A-Watt [ccrane.com] wattmeter. I can track my electricity usage over time in volts, amps, watts, and VA, and it keeps a log of the kWh consumed by a particular device.

    When Folding@Home is turned off, my power consumption for the entire system is 140W. When I activate Folding@Home, the Wattmeter reading jumps to about 190-195W.

    So if you're concerned about electricity usage in your house, then yes, distributed computing sucks more power. (A rough cost estimate from these numbers is sketched below.)
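
    A minimal sketch of what that measured difference works out to in dollars. The $0.10/kWh electricity rate is an assumed example figure, not from the post above; plug in your own utility rate:

        /* Rough monthly cost of the ~50 W Folding@Home delta measured above. */
        #include <stdio.h>

        int main(void)
        {
            double idle_watts = 140.0;   /* system draw with Folding@Home off  */
            double load_watts = 192.0;   /* midpoint of the 190-195 W reading  */
            double rate_per_kwh = 0.10;  /* assumed electricity price, USD/kWh */

            double delta_kw = (load_watts - idle_watts) / 1000.0;
            double cost_per_month = delta_kw * 24.0 * 30.0 * rate_per_kwh;

            printf("Extra cost of crunching 24/7: about $%.2f per month\n",
                   cost_per_month);
            return 0;
        }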

  • by greymond ( 539980 ) on Wednesday October 15, 2003 @11:57AM (#7220366) Homepage Journal
    I've been using http://www.distributed.net/ [distributed.net] on and off for a few years now and I've never had a problem with any of my processors. However, I usually upgrade my CPU/motherboard every 3-4 years, so if you keep your systems longer I'd imagine any burnouts would be down to "just an old CPU" and not the constant use. Then again, I don't plan or expect my hardware to last forever.

    As far as the power bill goes, I currently have a desktop, laptop, wireless router/hub, and Zaurus going the majority of the day - at least, the systems are always on, since I am too lazy to turn them off and have no need to. I also live with my girlfriend, who runs the hairdryer every morning and must have every light on in the house to check her makeup. At the end of the month our power bill is $45-50, which in my opinion is not a lot. We're also in California, for the record.
  • Some Measurements. (Score:3, Informative)

    by taliver ( 174409 ) on Wednesday October 15, 2003 @11:58AM (#7220383)
    I'm kinda in a position to answer at least one part of this question.

    CPUs, when idle, can use as little as 2-5W; when fully utilized, up to 40-50W (depending on the make/model/etc). So let's assume you have a middle-of-the-road processor with a difference of 25W between active and idle. (This is consistent with measurements on a PIII 800MHz, a little lower than middle of the road.)

    Now, 25W * 24 hrs/day * 365 days * (1 kW / 1000 W) * $0.10/kWh ≈ $22/year, or roughly $1/year per watt of additional power. (The same arithmetic is sketched in code below.)

    As far as breaking components goes, as long as the system is cooled properly, I wouldn't think it would be a problem.
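
    A short sketch of the arithmetic in the comment above; the 25 W delta and $0.10/kWh rate are the poster's assumed figures, and the program just annualizes them:

        /* Annual cost of an extra 25 W of CPU draw, per the estimate above. */
        #include <stdio.h>

        int main(void)
        {
            double delta_watts = 25.0;        /* extra draw under full load */
            double rate_per_kwh = 0.10;       /* assumed price, USD/kWh */
            double hours_per_year = 24.0 * 365.0;

            double kwh_per_year = delta_watts * hours_per_year / 1000.0;  /* 219 kWh */
            double cost_per_year = kwh_per_year * rate_per_kwh;

            printf("Extra energy: %.0f kWh/year\n", kwh_per_year);
            printf("Extra cost:   $%.2f/year, about $%.2f per watt-year\n",
                   cost_per_year, cost_per_year / delta_watts);
            return 0;
        }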
  • Energy costs (Score:5, Informative)

    by p7 ( 245321 ) on Wednesday October 15, 2003 @12:03PM (#7220446)
    Check this website for a breakdown of the energy costs.

    http://www.dslreports.com/faq/2404
  • by Anonymous Coward on Wednesday October 15, 2003 @12:03PM (#7220455)
    The last time I measured my PCs' power consumption, there was about a 10-15 watt difference between sitting in an idle state and actively crunching on something (seti@home, folding@home, etc.). That's assuming the OS is smart enough to issue HALT instructions when idle; Win98 in its busy loop (without Rain or Waterfall installed) will draw the "full load" wattage no matter what.

    This was measured at the AC line input, using a "Kill-a-Watt" meter, for several Pentium III, AMD K6, and Via C3 machines. Consumption would rise from ~45 watts at idle to 55-60 under load. Multiprocessor boxes probably show this delta for each CPU, although I haven't had the chance to measure one.

    Assuming a 12-watt difference and 24/7 operation, this amounts to 8.64 kilowatt-hours per month... not very significant, unless you're off-grid running on solar panels or somesuch.

    Average US energy cost is around 8 cents per kWh, so that adds up to about USD $0.70 per month.

  • Re:Wear Out (Score:5, Informative)

    by randyest ( 589159 ) on Wednesday October 15, 2003 @12:06PM (#7220505) Homepage
    Right, and the standard in the ASIC industry is a 40-year lifetime minimum before electromigration will lead to failure in normal use (which means you keep the chip in the allowed operating temperature range, regardless of whether it's overclocked or not). That's 40 years. What hardware were you using 40 years ago?

    Point is, even running chips hot, to a degree (pun not intended), doesn't reduce their lifetime enough to worry about.

    Some of the other points, such as increased power use and accelerated failure of mechanical components such as hard drives, are valid. But chip wear-out is a non-issue -- you'd have to heat your chip past the point of system stability to get the electromigration lifetime down low enough to care about it.
  • Re:full speed ahead (Score:3, Informative)

    by Merlin42 ( 148225 ) * on Wednesday October 15, 2003 @12:17PM (#7220642)
    Uh .... NO.

    That may have been true back in the bad old days of DOS, but today we have real operating systems. When there is nothing to do, the OS executes a HLT instruction, which puts the CPU in a lower-power state. There are numerous other ways to get to even lower power states, required by ACPI, which M$ has more or less REQUIRED all new computers to support for the past several years.

    Also, even when the CPU is busy, different code will heat it up by different amounts. The P4 has a rather large differential between its maximum heat dissipation and its 'typical' dissipation, whereas the AMD Athlons are more consistent about their dissipation.

    <speculation>
    I would assume what is happening is that the CPU 'powers down' parts of the core that are not being used, i.e. integer-only code does not need the FPU/MMX/SSE etc. units running, so theoretically the CPU could block the clock from entering those units (since transistors more or less only generate heat when changing state).
    P.S. I am a 'software guy', not an EE.
    </speculation>
  • Re:full speed ahead (Score:2, Informative)

    by greed ( 112493 ) on Wednesday October 15, 2003 @12:28PM (#7220738)
    Most modern processors, 68000-era and later, have a 'HALT' instruction that stops most of the internal 'ticking' of the CPU until an interrupt is received. On a CMOS CPU, power use can then drop to approximately zero.

    Check the boot messages on Linux; see the one where it says "Checking 'hlt' instruction"? That's what that is. Without hlt, the kernel has to run a no-op loop when there's nothing to do (the two idle styles are sketched below).

    I believe all Windows NT versions (3.0 through 5.1, oops, I mean XP) use hlt; there was some fuss about the DOS-based Windows line not using it, but I don't care enough to look it up.
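
    For illustration only, a sketch of the two idle-loop styles described in the last few comments. This is not code from any real kernel, and HLT is a privileged instruction, so it only makes sense in ring 0; a normal user process executing it would fault:

        /* Contrast of the "no-op loop" idle with a HLT-based idle. */

        void idle_busy_wait(void)           /* pre-HLT style: CPU at full draw */
        {
            for (;;)
                ;                           /* spin until the scheduler has work */
        }

        void idle_with_hlt(void)            /* low-power style */
        {
            for (;;)
                __asm__ volatile ("hlt");   /* sleep until the next interrupt */
        }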
  • by default luser ( 529332 ) on Wednesday October 15, 2003 @02:01PM (#7221650) Journal
    Yes, the grandparent post is incorrect.

    Pentium 4 CPUs have an internal temperature diode, just like every Intel chip since the Pentium II Deschutes core (excluding early Celerons).

    Unlike all chips before it, the Pentium 4 will do more than just crash when overheating: it will dynamically reduce its own clock speed to cut power consumption. But this feature only comes into play when the cooling solution can't keep up with the processor (e.g. a dead fan or an extremely hot room), and it will not affect performance under normal conditions.

    What the parent was referring to is the HLT instruction, which will cause the processor to do nothing and reduce power use. Most modern processors support it, and most modern operating systems (including NT and Linux) execute these instructions in an idle thread.

    This is basically the crux of this discussion: will your computer run hotter under load than when it's idling on HLT instructions?

    The answer is yes. What this means to you in terms of silicon lifetime is probably beyond the expertise of anyone here on Slashdot, so take every "insight" with a bag of salt.
  • by Botty ( 715495 ) on Wednesday October 15, 2003 @02:39PM (#7222126)
    Lol. At first I hated this guy because he was such a troll, but now when I read his posts I laugh. This guy posts the *exact* same text to ZDNet TalkBack too. It's quite creative and would fool many normal users into thinking it had a shred of credibility. Oh yeah, if I remember correctly, this guy is registered as Marvin Marvinski on ZDNet. He claims to have a consulting company under the same name.
  • by Leomania ( 137289 ) on Wednesday October 15, 2003 @02:47PM (#7222206) Homepage
    While I won't bore everyone with the differences between MTBF, FIT rate and what those numbers actually mean in an integrated circuit, let me assure you that 40 years is NOT the lifetime of a CPU. A CPU is NOT an ASIC and it never will be treated like one.

    Design rules and electrical checks are supposed to give a level of assurance that there won't be reliability problems down the road but they are not perfect. Every chip has a flaw that will render it inoperable at some point; worst-case, a PN junction will start looking like a resistor and that will be it for that chip. That is WAY down the road though, so likely another flaw will be a chip's downfall.

    Random flaws are the most common. Some of these cause very early failure (known, unfortunately, as "infant mortality" failures) but some take much longer to cause devices to fail. Not just the metal lines, although that is one mechanism; void migration, defects in the thin oxide of the transistors, contamination... the list is long. And each wafer lot coming out of the fab may have a different set of defects; newer technologies like 0.13u and 0.09u (aka 90nm) are not yielding well because the process isn't fully worked out. The chips that do make it out are likely not as good as the ones that will come later as a result.

    Now I'm not saying that current CPUs are going to start popping like popcorn due to heavy usage; just that there is going to be a wide distribution of the times at which they fail. A 30+ watt CPU running full-tilt on a setiathome application is just not going to last 40 years (ignoring the usual issue of other components of the system dying before then). High junction temperatures have a huge impact on chip lifetimes, and CPUs have the highest junction temps (a rough temperature-acceleration sketch follows below). They are not rated for 40 years at 100% activity -- don't you think there's a reason for a one- to three-year warranty?

    - Leo
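
    To put "high junction temperatures have a huge impact" into rough numbers, here is a back-of-the-envelope Arrhenius acceleration calculation. The 0.7 eV activation energy and the 45 C / 75 C junction temperatures are assumed example values, not figures for any particular CPU:

        /* Arrhenius acceleration factor between two junction temperatures. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            const double k  = 8.617e-5;       /* Boltzmann constant, eV/K */
            const double Ea = 0.7;            /* assumed activation energy, eV */
            double t_idle = 45.0 + 273.15;    /* assumed junction temp at idle, K */
            double t_load = 75.0 + 273.15;    /* assumed junction temp under load, K */

            /* How much faster thermally driven wear-out mechanisms proceed at
             * the higher temperature (roughly 9x for these example numbers). */
            double af = exp((Ea / k) * (1.0 / t_idle - 1.0 / t_load));

            printf("Acceleration factor, %.0f C vs %.0f C: %.1fx\n",
                   t_load - 273.15, t_idle - 273.15, af);
            return 0;
        }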
  • by RhettLivingston ( 544140 ) on Wednesday October 15, 2003 @03:08PM (#7222445) Journal

    Many have pointed out that chips essentially don't wear out, but that's only true in a world where every motherboard has a perfect design. In reality, any given motherboard will have some weak spots in its design, and its lifetime may indeed be affected by how much it is stressed, especially on boards with a design error in heat dissipation, though underspecced drivers can be a big issue too. Also, many use capacitors whose values change after a few years as chemicals cook out of them, which is why many of the cheaper motherboards on the market just stop working or become unreliable after about 3 years. If those motherboards are run hotter for a larger percentage of the time, there will certainly be some reduction in lifetime.

    Even so, the cost amortized over time is still minor. If a motherboard goes bad after 2 years instead of 3, then you've "spent" 1/3 of the lifetime of a $100 or so component on the task. So, maybe about 34ish bucks split over 2 years or 17ish bucks a year. Not free, but not much money either.

  • Re:Power (Score:1, Informative)

    by Anonymous Coward on Wednesday October 15, 2003 @06:46PM (#7224555)
    Something I've found handy for powering down my system when it doesn't need to be up is the Shuriken [sourceforge.net] uptime manager for Linux. It lets you set up an uptab file that ensures your computer is powered on when it needs to be (like when recording programs with a TV card), and then shuts the computer down when it's been idle, no one's logged in, and nothing else needs it to be on.
