Building an Energy Efficient Datacenter? 138
asc4 asks: "The company I work for is a webhosting and colocation company. As our power utilization grows, we have begun searching for ways to make our datacenter more efficient. The biggest hit from the utility company comes in the peak usage charge, which penalizes (rather severely) for the highest sustained burst of usage during a billing period. Due to the nature of the colocation business, we can't control how much or when client devices use power, so I'm wondering: is there something we can do at the datacenter level to help smooth out our power consumption over the course of a billing period?"
"In these days of hybrid cars, Energy Star devices, and in general more eco-friendly power consumption, it seems like there must be some products out there that can help make datacenters more efficient, as well. Could fuel cell technology be something to look into? Would flywheels or capacitors help? How about using more efficient AC units than what are available from the big names? What are others doing to reduce peak power consumption in high-drain datacenter environments?"
Alternative energy? (Score:3, Informative)
Re:Alternative energy? (Score:2)
Re:Alternative energy? (Score:3, Informative)
A modern solar panel is 2 feet wide, 4 feet long and at noon on a cloudless summer day produces 125-175 watts.
You do the math.
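Okay, the math, back-of-envelope (the 150 W figure is the midpoint of the parent's range; the 100 kW facility load is a made-up example, not anyone's real datacenter):

```python
# Rough sizing for the parent's 2 ft x 4 ft, ~150 W panel at solar noon.
panel_watts = 150            # midpoint of the quoted 125-175 W peak output
panel_sqft = 2 * 4           # 8 square feet per panel
facility_load_watts = 100_000  # hypothetical 100 kW datacenter

panels_needed = facility_load_watts / panel_watts
roof_sqft = panels_needed * panel_sqft
print(f"{panels_needed:.0f} panels, ~{roof_sqft:.0f} sq ft of roof, "
      "and that's only at solar noon on a cloudless day")
```

And that's before capacity factor: average output over a day is a small fraction of the noon peak.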
Re:Alternative energy? (Score:5, Informative)
Take a fuel such as natural gas, propane or even heating oil/biodiesel. Run a generator to supplement your electrical needs.
Take the heat generated by the generator and run it through an absorption chiller [wikipedia.org] to provide some "free" cooling for your datacenter. If you have any extra heat left you might be able to use it for domestic hot water/space heating as well.
If using natural gas, then a fuel cell may be a viable option - they certainly run hot enough!
The only other kit you would need is a smallish cooling tower (help cool water prior to entering the chiller), some pumps and a chilled water coil + fan inside somewhere. This would probably be expensive to set up though. You would have to do some analysis to see if you would recover such an investment in a reasonable amount of time, if at all.
A bonus would be that, if properly designed, you'd have complete independence from the grid and wouldn't be affected by blackouts!
Another approach would be to look into ground-sink heat pumps to reduce cooling costs. Special enclosures for your equipment may also help keep the cooling right where you need it.
Power conditioners on the incoming electrical circuits may also help improve efficiency of the power supplies, which would save on electricity and also a little on cooling costs (more efficient = less heat)
In the end, anything you do to reduce costs will likely be a Good Thing(tm) for both the environment and your bank account.
=Smidge=
Re:Alternative energy? (Score:1, Funny)
Great! I bet my boss is gonna love hearing how the server is down because it is cloudy or there just isn't enough wind.
Re:Alternative energy? (Score:1)
pump water uphill through generators (Score:5, Funny)
Heat the water with the waste heat from the cooling units.
Sell access for swimming - nice warm water (well, here in Canada we like it warm)
During peak hours, drain the pool back via generators to make electricity. (make sure you tell people first)
Use warm water to cool more - generate steam.
Run steam through turbines to generate electricity.
Use electricity to pump more water to pool on roof
continue as needed
Wow... (Score:3, Funny)
Dirty Diapers (Score:3, Funny)
-Rick
Taum Sauk (Score:2)
Some pics from google cache - limited time only [216.239.51.104].
Re:pump water uphill through generators (Score:5, Informative)
A thought that crossed my mind: Power Factor
The power you actually benefit from is not what you are charged for. If real power (say 550 kW) is one side of a right triangle and reactive power (measured in VAR - think of it as the imaginary component of complex power) is the other, you are charged based on the hypotenuse: sqrt(Real^2 + Reactive^2) = billed (apparent) power. The angle between the hypotenuse and real power is set by the amount of reactance in your system; its cosine is called your power factor. To bring this toward unity, industry uses capacitor banks or over-excited synchronous motors to counteract the inductive effect of normal motors (and PSUs, and fans, and wires, etc.). Your power company should be able to tell you all about it, including whether it is worth it for you to do. Just ask about power factor correction.
The correction gear DOES use some real power, but it helps eliminate reactive power. Power companies typically charge a lot for an overabundance of reactive power consumption (i.e. too much inductance) because it seriously wears on their generators.
Another thing: make sure you have good switching power supplies. Cheapass supplies are both noisy and inefficient. Anything quoted as having Active PFC (A-PFC) already does power factor correction, so the above can be ignored for it.
Wiki-links: Reactive Power [wikipedia.org], Power Factor [wikipedia.org], Power Factor Correction [wikipedia.org]. The last one is what you will want to do.
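For the curious, the right-triangle arithmetic above, sketched out - the 550 kW / 300 kVAR figures and the 0.95 target are illustrative, not from anyone's actual bill:

```python
import math

# Apparent power (what you're billed on) vs real power, per the
# right-triangle picture above. All numbers are made-up examples.
real_kw = 550.0        # real power doing useful work
reactive_kvar = 300.0  # reactive power from motors, fans, PSUs

apparent_kva = math.hypot(real_kw, reactive_kvar)  # the hypotenuse
power_factor = real_kw / apparent_kva

# Capacitive correction cancels inductive kVAR; to reach a target
# power factor of 0.95 you need to absorb this much reactive power:
target_pf = 0.95
target_kvar = real_kw * math.tan(math.acos(target_pf))
correction_kvar = reactive_kvar - target_kvar

print(f"PF = {power_factor:.2f}, billed on {apparent_kva:.0f} kVA")
print(f"capacitor bank needed: ~{correction_kvar:.0f} kVAR")
```

A real correction study is something your utility or an electrical engineer does with actual metering data, but this is the shape of the calculation.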
Solar Panels (Score:5, Informative)
If that's not an option, server consolidation and virtualization, for the clients where they're appropriate, are the only other options I can come up with...
Re:Solar Panels (Score:2, Interesting)
Another option would be to get a natural gas line into the building, purchase your own generator, and when you aren't using the excess capacity of your generator, sell it back to the electric grid, if your u
Re:Solar Panels (Score:2)
Re:Solar Panels (Score:3, Interesting)
Solar power costs something like 18-22 cents/kWh once you amortize the cost of the panels over their entire lifetime, etc. Commercial power is generally less than this, maxing out around 17 cents/kWh in the Pacific Northwest. In the Midwest, commercial power costs around 7 cents/kWh.
Solar power is currently *extremely* expensive compared to other energy sources. Its main penetration currently i
Re:Solar Panels (Score:2)
-solarelectricalsystems.com [solarelect...ystems.com]
-solarelectricpower.org [solarelectricpower.org]
-solar4power.com [solar4power.com]
-borregosolar.com [borregosolar.com]
-Akeena.net [akeena.net]
Those were just the first few relevant hits on G
Re:Solar Panels (Score:2)
Unfortunately the article is not online.
Re:Solar Panels (Score:2)
For a personal example, there is a 5 KW solar array on the high school I used to work in that paid for itself in the 2 years since it was installed, while I worked there, in the Pacific NW, a region not known for its sunny weather... Granted, a 5 KW panel barely makes a dent in
Load Balancing (Score:5, Informative)
based on the peak of one phase or the average of all. If you aren't balanced on your phase input into your building, you may be able to rebalance and see some benefit there. If you have one or two large UPS systems that are pulling equally across all three phases, make sure that the output of the UPS system is also balanced, that could end up bringing your input usage down.
This of course wouldn't help with your peak usage, but something to consider anyways.
Short of that, you would be looking for something that could store power and charge it at a steady rate. But then you could come up short on the output side if the available power in that 'system' runs out at peak times.
I am going to guess your best bet is to look at phase and load balancing through your power distribution network, and make sure you have distributed your clients evenly. If I were in a similar situation, I would set up a collection of load coils across each hot lead in the power distribution network, graph the values on a tight schedule (in order to catch peaks), and determine what is responsible for your peaks.
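Something like this, in outline - read_phase_amps() is a placeholder for whatever your transducers/load coils actually report, and the threshold is made up:

```python
# Sketch of the suggested logging: sample per-phase current on a tight
# schedule and flag what is responsible for peaks and imbalance.
import random

def read_phase_amps(phase):
    # stand-in for a real load-coil/transducer reading; here, simulated
    return 180 + random.uniform(-30, 60)

def sample(phases=("A", "B", "C"), threshold_amps=220):
    readings = {p: read_phase_amps(p) for p in phases}
    # phase imbalance: spread between the hottest and coolest leg
    imbalance = max(readings.values()) - min(readings.values())
    over = [p for p, amps in readings.items() if amps > threshold_amps]
    return readings, imbalance, over

readings, imbalance, over = sample()
print(readings, f"imbalance={imbalance:.1f}A", f"over-threshold={over}")
```

Run it every few seconds, log to disk, and graph; the billing peak will stick out.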
Don't know if any of this would help, but it is discussion, mod accordingly.
Re:Load Balancing (Score:4, Informative)
Re:Load Balancing (Score:5, Interesting)
Once you get a feel for how the datacenter is 'breathing' (i.e. watch the usage graphs and become familiar with the pulse of workload, etc) you should be able to come up with good solutions to your problems (like starting your monthly billing processes 2 days early, so you can only run the batch processes at night when the power is cheaper).
Also, never underestimate the cost of lighting and A/C. Maybe you can get by with only turning on every 3rd fluorescent light. Maybe you can use exhaust fans instead of A/C in a colder climate.
The point is you'll never know what problems you need to address unless you monitor your DC.
It WILL reduce peak usage (Score:4, Informative)
Disks (and other mechanical parts) will consume a lot of energy, but you don't need to replicate every single physical disk - if the data is under two gigabytes, RAM disks should be fine. In the event of a hard drive failure, backing up off RAM disk is no different from backing up from physical disk, so what's the difference? A single SAN-based disk pack, copied into RAM on the servers, would be the least power-consuming design - especially if you powered the hard drive off except when syncing up.
It costs power to task swap, so the more active tasks there are, the more swapping (if the tasks are all being given fair time) and therefore the more CPU time is taken by kernel activity, therefore the more power is being used up on housekeeping. You should be able to reduce the power consumed by heavy kernel activity by load-balancing.
If you're going to load-balance, you don't need high-power server-rated or desktop-rated CPUs. Mobile CPUs will take less power, you'd just need a larger cluster to load-balance over. If using Linux, also look at CPUs other than Intel - many MIPS and MIPS64 implementations are pretty low-power.
Networks take power to run. There's no escaping that. Don't run more wire/fibre than you have to (that also includes not running longer cables than you need), and don't use more intermediate network devices than will get the job done properly. Oh, and don't overspec the network for a given technology. CAT6 is good stuff, but if your machines never exceed 10 Mb/s on the network, you're going to lose efficiency. The "for a given technology" matters, as different technologies will consume different amounts of power for a given spec. Shop around.
Cooling systems are another mechanical system and so are necessarily power-hungry. You can't put those in RAM, however. Again, shop around. You want the best cooling power per unit of energy. This may turn out, for your system to involve having several fans on a single component. It might equally well work out that you can link ducting together such that a single fan can directly cool many components. Since the energy efficiency is what is important, go for the most energy efficient solution for your system.
Depending on the system, it MAY (this is not guaranteed) improve the efficiency to have a variable-speed fan, with the speed controllable by the CPU, and where all components cooled by this system have thermal sensors readable by the CPU. You can then vary the cooling as a function of both temperature and predicted load levels. (Varying according to temperature alone is useless, as the loads on the components will change faster than the sensor readings - but could change in either direction. Since the OS knows what tasks it is currently doing, it should be capable of predicting the likely loads for a much more reasonable timebase.)
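A minimal sketch of that control law - the blend weights, temperature range, and hook points are arbitrary assumptions, not tuned values from any real system:

```python
# Fan speed as a function of measured temperature plus predicted load,
# per the idea above. Sensor and scheduler hooks are hypothetical.
def fan_speed(temp_c, predicted_load, t_min=30.0, t_max=70.0):
    """Return a fan duty cycle 0.0-1.0 from temperature and a 0-1 load forecast."""
    thermal = (temp_c - t_min) / (t_max - t_min)
    thermal = min(max(thermal, 0.0), 1.0)   # clamp to [0, 1]
    # Blend: react to the current temperature, but pre-spin the fan for
    # load the OS already knows is coming (sensors lag the load).
    return min(1.0, 0.6 * thermal + 0.4 * predicted_load)

print(fan_speed(45.0, 0.2))   # warm, light load: modest speed
print(fan_speed(45.0, 0.9))   # batch job about to start: pre-spin
```

The point of the second term is exactly the parent's: temperature alone lags, but the scheduler's forecast doesn't.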
Connectors are notorious for high resistance and therefore power loss. If there is something that you're unlikely to change for the productive lifetime of the computer, power lost through unnecessary connectors (which are generally made from poor conductors anyway, just adding to the problem) is power you can conserve simply by improving the connection. If you insist on using connectors, make sure the wires that go to the connectors are soldered and not just held in place by pressure. Also, clean the connectors thoroughly, as buildups of oxide and dirt will increase the resistance. You WILL be better off by removing the connectors entirely and soldering anything that's not going to change in place.
Finally, the data center's power grid. You want very high voltage, very low current. (Resistive loss in the wiring is I²R, so for the same delivered power, doubling the voltage halves the current and cuts wiring losses to a quarter.) The industrial powe
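To put numbers on the I²R point - the 50 kW load and the feeder resistance are made-up examples:

```python
# Why high voltage, low current: resistive wiring loss is I^2 * R.
# 50 kW delivered over feeders with 0.05 ohm total resistance (assumed).
def wiring_loss_watts(delivered_watts, volts, resistance_ohms=0.05):
    amps = delivered_watts / volts
    return amps ** 2 * resistance_ohms

for volts in (120, 240, 480):
    loss = wiring_loss_watts(50_000, volts)
    print(f"{volts:>3} V: {loss:,.0f} W lost in the wiring")
```

Each doubling of voltage cuts the wiring loss by a factor of four for the same delivered power.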
Cables and connectors, seriously? (Score:3, Insightful)
You *may* have an argument on very long fiber links. If you can get away with a short-reach transceivers instead of long-hau
Re:Load Balancing (Score:2)
Peak load reduction (Score:3, Interesting)
Re:Peak load reduction (Score:3, Interesting)
A good example - a Dual CPU Pentium4 Xeon Intel OEM system in a standard Intel OEM chassis eats nearly 400W when idle with no power management. With standard ACPI power management when idle it eats 350-380W. With CPU frequency scaling using the ondemand CPU governor it will eat less than 100W when idle. The numbers for Opteron based systems are not much different. A usual datacenter is designed to cope with full
Re:Peak load reduction (Score:3)
Peak load reduction can net you significant cost savings, for the cost of shutting down non-essential equipment during a brownout...
Re:Peak load reduction (Score:2)
Why only peak load reduction?
Because peak is when they fire up the gas powered generators that cost more to run.
Lower Peak Demand (Score:5, Informative)
Re:Lower Peak Demand (Score:1, Redundant)
Re:Lower Peak Demand (Score:2)
Yeah, a few hundred gallons of cold water in his data center sounds like a great idea. Not to mention the weight, space usage, condensation, and mold issues.
Re:Lower Peak Demand (Score:2)
Being open has nothing to do with condensation. Take a cold can of your favorite beverage out of the refrigerator and set it out on the counter for a bit and notice the moisture that forms on the outside of the can (unless you live someplace really dry).
Run the aircon predictively (Score:5, Informative)
Re:Run the aircon predictively (Score:2)
How much do you value reliability? (Score:5, Interesting)
I've watched an entire datacenter go out on what was supposed to be a controlled switchover -- power company needed to do some work, pulled the plug (with the datacenter's consent), the backup generators start... and then die. The UPSes kicked in, but could only supply 15-20 minutes of power. Everything failed over to a backup datacenter, whose link then decided to go out to lunch.
Total cost of the outage was measured in tens of millions of dollars.
Just keep this in mind when doing the business justification calculation (cost savings from lower energy bills, minus upfront cost of equipment, minus risk of additional downtime times cost of downtime, minus cost of maintaining the equipment). Unless energy prices go *way* up -- like oil hitting $250/barrel -- I'd be surprised if this would pay for itself.
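That justification calculation, as a back-of-envelope function - every input below is a hypothetical example, not a real quote:

```python
# The business-justification arithmetic described above: energy savings,
# minus maintenance, minus (risk of added downtime x cost of downtime).
def payback_years(annual_energy_savings, upfront_cost, annual_maintenance,
                  downtime_risk_per_year, downtime_cost):
    net_annual = (annual_energy_savings - annual_maintenance
                  - downtime_risk_per_year * downtime_cost)
    if net_annual <= 0:
        return float("inf")   # never pays for itself
    return upfront_cost / net_annual

# e.g. $40k/yr energy savings, $200k of gear, $10k/yr maintenance,
# and a 1%/yr chance of an outage costing $1M:
print(payback_years(40_000, 200_000, 10_000, 0.01, 1_000_000))
```

Notice how fast a plausible downtime term eats the savings - which is the parent's point.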
Re:How much do you value reliability? (Score:2, Informative)
A good AGM SLA might cycle 500 times if you are lucky and you don't cycle them very deeply (keep it less than 80% discharged). You'll be replacing a lot of them after a year or two vs 4 or 5 years if you weren't cycling them every day.
And a good AGM SLA isn't cheap. An 85 amp hour runs well over $100, and that's small by datacenter standards, a 5000VA UPS might take 4 of those.
PFC (Score:1)
Make sure there's no charge for kVAR-hours instead of kilowatt-hours, and no surcharge for power factor. If there is, it would benefit you to get a consultant in to install power factor correction.
No (Score:4, Informative)
No. Fuel cells are a way of transporting energy, not creating it. This is such an important concept to grasp that it cannot be overstated.
We are in deep trouble, energy wise. There is no immediate solution (within the next 30 years) that can help us. We need to get used to that concept, fast. Doing "your bit" for the environment is simply not enough.
Welcome, too, China and India. Welcome to the powerdown.
Re:No (Score:2, Insightful)
Bull. But Congress needs to get off their ass. The potential gains of energy efficiency are enormous with minimal cost.
The average mileage of US vehicles peaked in the late 1980s. Increasing mileage by only 1 MPG (technically feasible at minimal cost) would result in enormous savings, but the MPG standards haven't changed in decades. What's worse, as SUVs became popular, the standards for SUVs are
Re:No (Score:1)
The answer is still "no", however. He could crack water into H2 and O2 during off peak and run it through fuel cells during peak times, but there's a pretty large efficiency loss in electrolysis, and another one with the fuel cells.
Not to mention an extremely large up-front cost - $5000 per kilowatt or so. For comparison, a house typically has 24 kilowatt s
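The round-trip arithmetic, using typical textbook efficiency ranges (assumed, not measured on any real installation):

```python
# Round-trip loss of the electrolysis-then-fuel-cell scheme above.
electrolysis_eff = 0.70   # grid kWh -> H2 (assumed, optimistic)
fuel_cell_eff = 0.50      # H2 -> kWh (assumed)

round_trip = electrolysis_eff * fuel_cell_eff
kwh_in = 100.0
print(f"{kwh_in:.0f} kWh off-peak -> {kwh_in * round_trip:.0f} kWh at peak "
      f"({round_trip:.0%} round trip)")
```

So you'd need the peak/off-peak price spread to be roughly 3:1 before the energy losses alone break even, never mind the capital cost.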
Re:No (Score:2)
On your server farm (Score:4, Funny)
Intel sells a lot of crap, so take some of it and use a methane generator to produce power.
How about looking for energy efficient devices... (Score:5, Insightful)
I have one of these (1.2GHz) and with 1 large HDD, encoder card, network, DVD etc - it idles at less than 20W and maxes at about 60 (encoding, playback, DVD all going, CPU 100%). Burst power when switched on seems to be about 72. This is less than the processor alone on a high spec box.
This will only work with non-CPU intensive operations. However IO seems to be pretty good on these boxes, so an IO bound server would probably not suffer too greatly using a VIA mobo.
Re:How about looking for energy efficient devices. (Score:2)
If the machine is doing 100% cpu utilization, just replace it with a weaker CPU.
Maybe he can save more energy by running hundreds of VMware virtual machines on a Geode GX.
Re:How about looking for energy efficient devices. (Score:2)
What I meant was "How many of them are actually running at 100% CPU? Those that aren't can be replaced...".
If you re-read my comment, lower down you will see:
Re:How about looking for energy efficient devices. (Score:2)
They are nice, but they have their limitations. On the positive side:
On the negative side:
Re:How about looking for energy efficient devices. (Score:2)
Re:How about looking for energy efficient devices. (Score:3, Insightful)
Stirling Engine .... (Score:2)
Converts heat to electricity, which you can then re-use or resell.
In addition, a huge vapo/chill could also be used, with your heat exhaust as a cooling source....
You now have the problem of keeping your exhaust as hot as possible... which is much easier than keeping it cool all the time.
Best of Luck.
Re:How about looking for energy efficient devices. (Score:2)
since a good data center should have back up generators anyways, you could run the cooling subsystem from those backup generators
Self-cooling (Score:1, Funny)
Re:Self-cooling (Score:1)
Actually, you can only retrieve ~35% (IIRC) of energy dissipated as heat (remember entropy?). Efficiency is determined by the range of operating temperatures.
Although now that I think about it, Xeons might run hot enough to make a heat engine worthwhile :D
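The entropy limit being gestured at here is the Carnot bound, eta = 1 - Tc/Th with temperatures in kelvin. Plugging in a hot Xeon against room-temperature exhaust (illustrative numbers):

```python
# Carnot upper bound on recovering work from waste heat.
def carnot_efficiency(hot_c, cold_c):
    """Maximum heat-engine efficiency between two temperatures in Celsius."""
    return 1.0 - (cold_c + 273.15) / (hot_c + 273.15)

print(f"{carnot_efficiency(70, 25):.1%}")   # 70C heatsink vs 25C room
```

Even at the theoretical maximum you recover barely an eighth of the heat as work - and real engines get nowhere near Carnot - which is why nobody bolts heat engines onto CPUs.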
AMD CPU (Score:1, Interesting)
Hoe much money you got? (Score:4, Insightful)
First, pre-cool the room. There was a good article on
Second, install a solar power system. Kinda pricy, but if you have a large roof you can generate some solid power. And don't think that being in the north excludes you from solar power. Uni-Solar has a great sun index map showing what level of solar output and electrical output you can expect in any given area.
Third, going with solar, a battery array or some other type of power storage. By using the solar panels to juice up the batteries, you can pull power from the batteries at peak time, but charge them all day.
Fourth, subterranean cooling. Once you get a little ways under the surface of the ground, the temperature becomes a pretty consistent mid/high 50s (F). Using sunken water tanks you can run 60 degree water through a radiator in your HVAC system. I know there are companies that can install these systems but I can't recall any names off the top of my head.
Fifth, solid state storage. If you can swing paying $50/GB as opposed to $1/GB for storage space, you can dramatically cut down on both your cooling bill and your electric bill. But at $50,000 per TB vs $1,000 per TB, it's going to take a while to recoup the costs.
Sixth, custom server cases/cabinets. Traditional closets are great for cramming a lot of servers into a small area, but they about suck for heat management. You could fund a research project at any number of engineering schools to create a better storage solution.
-Rick
Solar, check. Batteries, wha? (Score:2)
Re:Solar, check. Batteries, wha? (Score:3, Interesting)
Re:Solar, check. Batteries, wha? (Score:2)
Forget the battery array, I want to know what kind of monster inverter you are going to need to run an entire data center full of equipment.
Re:Solar, check. Batteries, wha? (Score:2)
Sure it does, and I suppose all those plugs going into 120V AC outlets are just for show? Or are you suggesting he try to distribute DC directly to his various server components?
Re:Solar, check. Batteries, wha? (Score:1)
Photovoltaics because you're already DC-wired. (Score:3, Interesting)
Plus, in the event of a grid failure, your generator doesn't have to work quite as hard, which translates to slightly longer runtimes on the same fuel tank.
The available solar resource depends largely on latitude and weat
Re:Hoe much money you got? (Score:2, Interesting)
Actually, I think you'll find that the deep temperature of the earth is the average between the highs and lows on a yearly basis. In other words, if you live in a hot climate with temps of 120 in summer, 60 in winter, the deep earth temp would be 90. In the frozen arctic, the deep earth temp is below freezing ('permafrost'). Granted, for a lot of the continent
Re:Hoe much money you got? (Score:2)
This is correct. I used to do a bit of caving and mean ambient temperature of a cave in Central America is about 20 degrees higher (70F) than one in the Northern USA.
A few more suggestions (Score:3, Interesting)
2. During the day raise the thermostat so the AC does not kick in too soon.
3. If you have windows use the blinds on the sunny side. Thermal load is a royal pain. Where I work it hit 27c inside even though it was -14c outside. The north side was running at about 21c.
4. Put all non-essential equipment on powerbars and turn off the bars. Most monitors and other electronics still draw a bit of current for 'instant on'. That takes hydro and dumps more heat for the AC to handle.
Re:A few more suggestions (Score:2)
Re:A few more suggestions (Score:2)
Or put the datacenter in the basement. Not only do you avoid the thermal load, but the walls will naturally stay in the mid-50s (F) year 'round.
Sun's coolthread servers (Score:2)
I believe the servers are too new for anyone to have a solid opinion about, but I know Sun has been actively moving in this direction for a while.
Alternative Sources of Power Generation (Score:2)
You will probably never be able to generate enough power to completely power your data center, but even if you generate 1%, 5%, 10%
Buildings in Chicago are strongly thinking about this (for obvious reasons).
Other areas could probably benefit too.
Chicago Wind (Score:2)
The other side of the lake has more wind [nrel.gov].
Re:Chicago Wind (Score:3, Informative)
They locate wind farms in mountain passes or other natural high-wind locations; I wonder if turbines located in certain spots of major metropolitan areas would be super-efficient. The plaza south of the IBM building on the river in downtown Chicago has to be one of the windiest places on
Re:Chicago Wind (Score:1)
Consult the experts... (Score:5, Insightful)
Older installations used to use giant flywheels, but not to limit peaks. They were used for power conditioning and limited power backup.
I'd do an extensive survey before trying anything else. Buy or rent a power meter that does logging and graphing. Check everything out for a month - each phase and the current draw on each phase, and current draw on each rack (each computer if possible).
Proper sequencing of cooling can drastically affect your power consumption. Never start your cooling motors when you're drawing a lot of power - motor startup is a huge peak. After doing a survey of your power needs you may be able to identify times when you can avoid turning the cooling system on, which will lower your peak. For instance, before the daily peak, cool the data center down a few degrees more than usual. Then shut off one or more cooling systems until after the daily peak. This can be tricky to correctly manage and implement, especially since it has to be automatic and failsafe.
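In outline, the pre-cool-then-coast logic looks like this - the peak window, setpoints, and lead time are illustrative, and a real controller has to be automatic and failsafe as noted above:

```python
# Setpoint scheduler: chill harder before the utility's peak window,
# then coast through it on the banked cooling. All values are examples.
from datetime import time

PEAK_START, PEAK_END = time(13, 0), time(17, 0)   # assumed peak window
NORMAL_SETPOINT_C = 22.0
PRECOOL_SETPOINT_C = 19.0    # bank extra cooling before the peak
PRECOOL_HOURS = 2

def setpoint_for(now):
    hour = now.hour + now.minute / 60
    if PEAK_START.hour - PRECOOL_HOURS <= hour < PEAK_START.hour:
        return PRECOOL_SETPOINT_C          # pre-cool window: chill harder
    if PEAK_START <= now < PEAK_END:
        return NORMAL_SETPOINT_C + 2.0     # coast: let it drift up a bit
    return NORMAL_SETPOINT_C

print(setpoint_for(time(11, 30)))   # pre-cooling
print(setpoint_for(time(14, 0)))    # coasting through the peak
```

The failsafe part - reverting to normal cooling on any sensor fault or over-temperature - is the hard 90% that this sketch leaves out.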
Alternately, shop around for your power. Check with a few competitive companies and see if they offer a better deal.
-Adam
Re:Consult the experts... (Score:3, Insightful)
Not an electrician. An electrical engineer.
how about running of the generator (Score:3, Informative)
Would it be cheaper/feasible during these peak times to "test" the generator... i.e. turn the mains power off and run on diesel?
relocate it (Score:4, Interesting)
Turn off the lights... (Score:3, Funny)
On the other hand, that might be a dumb idea.
Air Condition with Natural Gas (Score:3, Insightful)
Re:Air Condition with Natural Gas (Score:2)
When the power-down notice kicks in, they start diesel-powered generators, which is a good choice considering the price of natural gas.
Check to see if you'd get a big enough discount to justify the purchase of this kind of equipment.
Re:Air Condition with Natural Gas (Score:2)
http://www.google.com/search?hl=en&q=natural+gas+air+conditioners&btnG=Google+Search [google.com]
http://www.socalgas.com/business/useful_innovations/gasac_overview.shtml [socalgas.com]
http://www.energysolutionscenter.org/consortia/cooling.asp [energysolu...center.org]
http://www.gasairconditioning.org/robur_advantages
Re:Air Condition with Natural Gas (Score:2)
I had no idea this cycle was still in use.
Just use less (Score:3, Informative)
Yeah: use the UltraSPARC T1 CPUs, use lower power SCSI disks including CompactFlash disks for boot and OS, keep all lights out when you don't need 'em, add heavy wall insulation unless you're living far north, add lots of RAM in all machines so the disks can be powered down, etc.
Oh, the irony... (Score:2)
Monitoring is one way to start... (Score:2, Interesting)
Bigass flywheel (no, really!) (Score:1)
Well, you could also have an automated cutoff for nonessential load (like 3 of 4 fluorescent lights or something). Or, you could use a battery UPS instead. But the flywheel is cooler...
random link [ecmweb.com]
Any one solution won't help (Score:3, Interesting)
- If your server room's ceiling is just plain false-ceiling tiles, make sure they are at least insulated very well. The more A/C escapes, the harder it has to work.
- Make sure there is enough air flow through your server racks (best placements and setups vary from one person to another); best not to have the rear right up against a wall. The middle of the room, or offset (5 feet or so from the wall), allows for good ventilation.
- Keep server room lights off unless needed, with the exception of low-heat emergency lighting.
- If you have raised flooring and the A/C comes through the bottom, place the racks behind vent openings (so the air is rising to the front of the rack, getting sucked in by the fans in the front) instead of having the rack on the vent itself.
- Upgrade older servers if possible. Older servers (especially the old HP NetServer series) are a lot less efficient than newer servers. Not just per component (CPU, HD) but also in overall engineering.
- Turn off monitors when not in use. LCDs are not as bad, but better safe than sorry. If you do not need it running, just leave it off.
- Do not allow people to keep the server room door open. It may sound simple, but you wouldn't believe how many times I've seen this. If the doors don't close automatically, get automatic closers for them!
- Make sure the doors are weatherstripped.
- Multiple air conditioners! I have a small server room that runs on three air conditioners. Two always run, one does not; this rotates weekly. Also great for redundancy.
I'm sure there are many more things you can do. Hiring outside consultants who have worked with issues like this is always beneficial. Be sure to get second/third opinions.
Wow, spelling really sucks when you haven't slept for 72hrs. (I really, really hate Exchange. Especially when clustered.)
Re:Any one solution won't help (Score:2)
Never run a server with its cover off (Score:2)
2) If a server runs cooler with its cover off than it does with it on.... THROW IT OUT, GET A NEW ONE. That indicates shit-poor design, and I'd replace it if you value the data stored therein, let alone the power savings of going with a newer, well engineered system.
However I do second the box fan notion. It's lowtech but it can be the perfect solution for a less than perfec
Considered Sun's coolthreads servers? (Score:2)
http://www.sun.com/servers/coolthreads/overview/i
Use laptops? (Score:2)
They have to be engineered for power efficiency to extend their battery lives, and they have a built-in UPS/load balancer. Of course you'd have to engineer your own DC system instead of those wall warts, but...
Re:Use laptops? (Score:2)
1. Use Linux
2. Bother to turn it on
Just look at the kernel documentation for cpufreq and the on_demand governor. Alternatively you can use cpufreqd which allows even finer tuning.
Turning it on for a dual CPU Xeon will drop the power consumption from 400W+ to under 100W when idle. In fact this feature works on any Pentium4 class CPU.
Numbers for Opterons are similar, but most dual and quad Opteron motherboards lack proper support for this feat
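For reference, the sysfs interface the cpufreq docs describe can be driven with something like this (needs root and a cpufreq-capable kernel/CPU; the pattern is parameterized so you can point it elsewhere):

```python
# Flip every CPU to the ondemand governor via the cpufreq sysfs files -
# the same effect as `echo ondemand > .../scaling_governor` per CPU.
import glob

SYSFS_PATTERN = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"

def set_governor(governor="ondemand", pattern=SYSFS_PATTERN):
    """Write `governor` into each matching scaling_governor file."""
    paths = glob.glob(pattern)
    for path in paths:
        with open(path, "w") as f:
            f.write(governor)
    return paths   # list of CPUs updated

# Usage (as root): set_governor() -> list of sysfs files updated
```

cpufreqd gives you finer policy control on top of this, but for the 400W-to-100W idle drop described above, just enabling ondemand is the whole trick.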
A few options (Score:2, Interesting)
I'm able to play building load for the laptops/desktops off against data center consumption, and also able to relocate equipment to other sites to juggle the load. I have the option of passing the cost on to the customers because most of what I do is cost-plus contra
Efficient airco (Score:2)
Make two rows of racks face each other. Place a roof on the lane between the racks and doors on either end. This is the cool lane. On the opposite sides of the racks, place no roofs. This is the warm side.
Only let cool air enter the room in the cold lanes and suck the hot air from the warm lanes. Use racks with perforated doors and use shields to completely cover unused space in the racks.
Now the cold air from the enclosed cool la
Make your own electricity. (Score:2)
Not only are the fuels cheaper, kWh-for-kWh, than mains electricity, but you get to use the waste heat from the generator to heat the building at the same time. Doing both at once gives you huge savings.
Typically people tend to use I/C engines for the generators --- gas turbines would be more efficient, but I/C engines are cheap and reliable and will scale down far more effectiv
Re:Make your own electricity. (Score:2)
On what planet exactly? Come on mods this one is +1 Funny.
AC and Power (Score:2)
Ducting the return air to the outside when the outside (or basement
Save money (Score:2, Interesting)
No office space to cool or heat. No coffee machine or water cooler. No overhead. Just house the machines and a small maintenance staff.
Backup Generator (Score:2)
As for more
Ditch Xeons, buy Opteron HE dualcores (Score:2)
Power supply efficiency is important too. I switched to Seasonic high-efficiency power supplies for my desktops years ago. I'm not sure what you'd do about rackmount servers. There's
Re:Ditch Xeons, buy Opteron HE dualcores (Score:2)
While I am an old AMD fanboy and, given a choice, will always choose AMD, Intel must be given credit where credit is due. If you use CPU frequency scaling on Intel, which is supported on both Pentium 4 and Xeon, you can drop its consumption into sub-25W territory when idle and ramp it up in a number of increments (usually 8) to full as the load arrives. While in theory AMD powernow-k8 should give you similar features, in practice I have yet to see an SMP motherboard that does not have it disa
Read the experts here (Score:2, Interesting)
Use fewer machines. (Score:2)
Stick to AMD for now. They put out a lot less heat and burn fewer watts per MIPS than the current Intel machines.
Look at the Sun T1 line.
Fewer hard drives. Use a storage server with RAID instead of an HD per machine if you can.
Low heat blades?
There are lots of options to reduce power consumption; the problem is whether you will save more than you spend. I would ban P4s right now. No reason to run a server with an Intel ch