The Cost of Distributed Client Computing? 527
ialbert asks: "I only recently decided to install SETI@home on my mostly idle home computer. It got me thinking, though: are those free processor cycles truly free? Has anyone had experience with processors dying prematurely due to a constant, heavy load, or is usage pretty inconsequential? What about other components, like hard drives? And how much does a 100% processor load increase your power bill versus a 1-2% idle load over the course of a year? It's easy to think of idle computers as an untapped computational resource, but what are the costs to the computer owners?"
full speed ahead (Score:1, Informative)
Excludes weirdo laptop setups
the math (Score:5, Informative)
the figures [everything2.com]
Power (Score:5, Informative)
As for premature death of CPU, being under heavy load should not hurt it, powering on and off often does far more 'wear and tear'.
50 Watts increase at 100% CPU Load (Score:5, Informative)
When Folding@Home is turned off, my power consumption for the entire system is 140W. When I activate Folding@Home, the Wattmeter reading jumps to about 190-195W.
So if you're concerned about electricity usage in your house, then yes, distributed computing sucks more power.
Never had a problem with that... (Score:3, Informative)
As far as the power bill goes: I currently have a desktop, laptop, wireless router/hub, and Zaurus going the majority of the day - the systems are always on, since I am too lazy to turn them off and have no need to. I also live with my girlfriend, who runs the hairdryer every morning and must have every light on in the house to check her makeup. At the end of the month we get our power bill of $45-50 - which in my opinion is not a lot. We're also in California, for the record.
Some Measurements. (Score:3, Informative)
CPUs, when idle, can use as little as 2-5 W. When fully utilized, up to 40-50 W (depending on the make/model/etc.). So let's assume you have a middle-of-the-road processor with a difference of 25 W between active and idle. (This is consistent with measurements on a PIII 800 MHz, a little lower than middle of the road.)
Now, 25 W * 24 hr/day * 365 days * 1 kW/1000 W * $0.10/kWh = about $22/year. Roughly $1/year per watt of additional power.
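The arithmetic above generalizes to any delta wattage and electricity rate; a minimal sketch (the function name and constants are mine, only the 25 W / $0.10 figures come from the post):

```python
# Hypothetical helper: annual cost of extra continuous power draw,
# assuming 24/7 operation and a flat electricity rate.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(delta_watts, dollars_per_kwh):
    """Cost per year of an extra `delta_watts` of continuous draw."""
    kwh_per_year = delta_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * dollars_per_kwh

# 25 W delta at $0.10/kWh, as in the post above: roughly $21.90/year.
print(round(annual_cost(25, 0.10), 2))
```

Plugging in your own wattmeter delta and local rate gives the yearly cost directly.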
As far as breaking components goes, as long as the system is cooled properly, I wouldn't think it would be a problem.
Energy costs (Score:5, Informative)
http://www.dslreports.com/faq/2404
Additional power consumption (Score:1, Informative)
This was measured at the AC line input, using a "Kill-a-Watt" meter, for several Pentium III, AMD K6, and Via C3 machines. Consumption would rise from ~45 watts at idle to 55-60 under load. Multiprocessor boxes probably show this delta for each CPU, although I haven't had the chance to measure one.
Assuming a 12-watt difference and 24/7 operation, this amounts to 8.64 kilowatt-hours per month... not very significant, unless you're off-grid running on solar panels or somesuch.
Average US energy cost is around 8 cents/kW-hour, so that adds up to USD $0.70 per month.
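The monthly figures above check out; a quick sketch using the post's numbers (the 30-day month is an assumption):

```python
# Monthly cost of a ~12 W idle-to-load delta at 8 cents/kWh,
# assuming a 30-day month of 24/7 operation.
delta_watts = 12
hours_per_month = 24 * 30
kwh_per_month = delta_watts * hours_per_month / 1000  # 8.64 kWh
cost = kwh_per_month * 0.08                           # about $0.69/month
print(kwh_per_month, round(cost, 2))
```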
Re:Wear Out (Score:5, Informative)
Point is, even running chips hot, to a degree (pun not intended), doesn't reduce their lifetime enough to worry about.
Some of the other points, such as increased power use and accelerated failure of mechanical components such as hard drives, are valid. But chip wear-out is a non-issue -- you'd have to heat your chip past the point of system stability to get the electromigration lifetime down low enough to care about it.
Re:full speed ahead (Score:3, Informative)
That may have been true back in the bad old days of DOS, but today we have real operating systems. When there is nothing to do, the OS executes a HLT instruction, which puts the CPU in a lower power state. There are numerous other ways to get to even lower power states, as required by ACPI, which M$ has more or less REQUIRED all new computers to support in the past several years.
Also, even when the CPU is busy, running different code will heat it up by different amounts. The P4 has a rather large differential between its maximum heat dissipation and its 'typical' dissipation, whereas the AMD Athlons are more consistent about their dissipation.
<speculation>
I would assume what is happening is that the CPU 'powers down' parts of the core that are not being used, i.e. integer-only code does not need the FPU/MMX/SSE etc. units running, so theoretically the CPU could block the clock from entering those units (since transistors more or less only generate heat when changing state).
PS: I am a 'software guy', not an EE.
</speculation>
Re:full speed ahead (Score:2, Informative)
Check the boot messages on Linux; see the one where it says "Checking 'hlt' instruction"? That's what that is. Without hlt, the kernel has to do a no-op loop when there's nothing to run.
I believe all Windows NT versions (3.1 through 5.1, oops, I mean XP) use hlt; there was some fuss about the DOS-based Windows versions not using it, but I don't care enough to look it up.
Re:Processors dying... (Score:4, Informative)
Pentium IV CPUs have an internal temperature diode, just like every Intel chip since the Pentium II Deschutes core (excluding early Celerons).
As opposed to all chips before it, the Pentium IV will do more than just crash when overheating: it will dynamically reduce its own clock speed to reduce power consumption. But this feature only comes into play when the cooling solution is unable to keep up with the processor (e.g. a dead fan or an extremely hot room), and will not affect performance under normal conditions.
What the parent was referring to is the HLT instruction, which will cause the processor to do nothing and reduce power use. Most modern processors support it, and most modern operating systems ( including NT and Linux ) execute these instructions in an idle thread.
This is basically the crux of the discussion: will your computer run hotter under load than when idling in HLT?
The answer is yes. What this means to you in terms of silicon lifetime is probably beyond the expertise of anyone here on Slashdot, so take every "insight" with a bag of salt.
Re:The cost of Linux? (Score:2, Informative)
40 years is misleading (Score:2, Informative)
Design rules and electrical checks are supposed to give a level of assurance that there won't be reliability problems down the road but they are not perfect. Every chip has a flaw that will render it inoperable at some point; worst-case, a PN junction will start looking like a resistor and that will be it for that chip. That is WAY down the road though, so likely another flaw will be a chip's downfall.
Random flaws are the most common. Some of these cause very early failure (known, unfortunately, as "infant mortality" failures) but some take much longer to cause devices to fail. Not just the metal lines, although that is one mechanism; void migration, defects in the thin oxide of the transistors, contamination... the list is long. And each wafer lot coming out of the fab may have a different set of defects; newer technologies like 0.13u and 0.09u (aka 90nm) are not yielding well due to the process not being fully worked out. As a result, the chips that do make it out are likely not as good as the ones that will come later.
Now, I'm not saying that current CPUs are going to start popping like popcorn due to heavy usage; just that there is going to be a wide distribution in their time to failure. A 30+ watt CPU running full-tilt on a setiathome application is just not going to last 40 years (ignoring the usual issues of other components of the system dying before then). High junction temperatures have a huge impact on chip lifetimes, and CPUs have the highest junction temps. They are not rated for 40 years at 100% activity -- don't you think there's a reason for a one- to three-year warranty?
- Leo
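The temperature dependence described above is usually modeled with an Arrhenius acceleration factor. A hedged sketch (the 0.7 eV activation energy and the example temperatures are assumed, typical textbook values, not figures from the post):

```python
import math

# Arrhenius acceleration factor for a temperature-driven failure
# mechanism (e.g. electromigration). Ea is an assumed activation
# energy; real values vary by mechanism and process.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_cool_c, t_hot_c, ea_ev=0.7):
    """How much faster wear-out proceeds at t_hot_c vs t_cool_c (Celsius)."""
    t1 = t_cool_c + 273.15  # convert to kelvin
    t2 = t_hot_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t1 - 1.0 / t2))

# A junction held at 85 C instead of 55 C ages several times faster:
print(round(acceleration_factor(55, 85), 1))
```

This is why a sustained 100%-load junction temperature matters far more than average utilization: the lifetime penalty is exponential in temperature, not linear.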
There is a possible issue with solid state stuff (Score:3, Informative)
Many have pointed out that chips essentially don't wear out, but that's only true in a world where every motherboard has a perfect design. In reality, any given motherboard will have some weak parts in its design, and its lifetime may indeed be affected by how much it is stressed -- especially boards with a design error in heat dissipation, though underspecced drivers can be a big issue too. Also, many boards use capacitors whose values change after a few years as chemicals cook out of them. This is why many of the cheaper motherboards on the market will just stop working or become unreliable after about 3 years. If those motherboards are run hotter for a larger percentage of the time, there will certainly be some reduction in life.
Even so, the cost amortized over time is still minor. If a motherboard goes bad after 2 years instead of 3, then you've "spent" 1/3 of the lifetime of a $100 or so component on the task. So, maybe $33 or so split over 2 years, or about $17 a year. Not free, but not much money either.
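The amortization argument above, as a quick sketch (the $100 price and 3-to-2-year lifetimes are the post's own figures):

```python
# Value lost when a $100 motherboard dies after 2 years instead of 3,
# spread over the 2 years you actually got out of it.
board_cost = 100.0
normal_life_years = 3
shortened_life_years = 2

lost_fraction = (normal_life_years - shortened_life_years) / normal_life_years
lost_value = board_cost * lost_fraction        # ~$33
per_year = lost_value / shortened_life_years   # ~$17/year
print(round(lost_value), round(per_year))
```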
Re:Power (Score:1, Informative)