Power

Building an Energy Efficient Datacenter? 138

asc4 asks: "The company I work for is a webhosting and colocation company. As our power utilization grows, we have begun searching for ways to make our datacenter more efficient. The biggest hit from the utility company comes in the peak usage charge, which penalizes (rather severely) the highest sustained burst of usage during a billing period. Due to the nature of the colocation business, we can't control how much or when client devices use power, so I'm wondering: is there something we can do at the datacenter level to help smooth out our power consumption over the course of a given period of time?"
"In these days of hybrid cars, Energy Star devices, and in general more eco-friendly power consumption, it seems like there must be some products out there that can help make datacenters more efficient, as well. Could fuel cell technology be something to look into? Would flywheels or capacitors help? How about using more efficient AC units than what are available from the big names? What are others doing to reduce peak power consumption in high-drain datacenter environments?"
This discussion has been archived. No new comments can be posted.

  • Alternative energy? (Score:3, Informative)

    by jimboisbored ( 871959 ) on Friday January 27, 2006 @10:00PM (#14585508)
    It's not necessarily cutting power consumption, but will reduce monthly bills and is eco-friendly. I'm thinking like solar or wind assist (depending on your geographical location)
    • Alternative energy won't always cut the monthly bill. What part of the world you are in has a HUGE impact on which alternative energy supplies will actually cut costs. If you work in Seattle, for instance, you're probably not going to save much using solar panels. Wind energy can prove ineffective depending on how many trees, buildings and other obstructions are in the area.
    • Equipment in a modern data center consumes 3000-6000 watts in a cabinet 7 feet tall, 2 feet wide and 3 feet deep. Air conditioning for that same cabinet consumes another 1000-2000 watts.

      A modern solar panel is 2 feet wide, 4 feet long and at noon on a cloudless summer day produces 125-175 watts.

      You do the math.

    • by Smidge204 ( 605297 ) on Saturday January 28, 2006 @12:28AM (#14586303) Journal
      While not necessarily "alternative energy", you might try some form of cogeneration.

      Take a fuel such as natural gas, propane or even heating oil/biodiesel. Run a generator to supplement your electrical needs.

      Take the heat generated by the generator and run it through an absorption chiller [wikipedia.org] to provide some "free" cooling for your datacenter. If you have any extra heat left you might be able to use it for domestic hot water/space heating as well.

      If using natural gas, then a fuel cell may be a viable option - they certainly run hot enough!

      The only other kit you would need is a smallish cooling tower (help cool water prior to entering the chiller), some pumps and a chilled water coil + fan inside somewhere. This would probably be expensive to set up though. You would have to do some analysis to see if you would recover such an investment in a reasonable amount of time, if at all.

      A bonus would be that, if properly designed, you'd have complete independence from the grid and won't be affected by blackouts!

      Another approach would be to look into ground-sink heat pumps to reduce cooling costs. Special enclosures for your equipment may also help keep the cooling right where you need it.

      Power conditioners on the incoming electrical circuits may also help improve efficiency of the power supplies, which would save on electricity and also a little on cooling costs (more efficient = less heat)

      In the end, anything you do to reduce costs will likely be a Good Thing(tm) for both the environment and your bank account.
      =Smidge=
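Whether cogeneration like this pays off comes down to arithmetic. Here's a rough sketch with made-up prices (gas at $1/therm, a 30%-efficient genset, a small per-kWh credit for recovered heat displacing cooling); any real decision needs quotes and your actual tariff:

```python
def chp_cost_per_kwh(fuel_price_per_therm, gen_efficiency, heat_recovery_credit):
    """Rough cost of self-generated electricity: fuel cost per electrical kWh,
    minus a credit for recovered heat displacing cooling/heating load.
    All inputs are assumptions for illustration."""
    kwh_per_therm = 29.3  # 1 therm of natural gas is about 29.3 kWh of fuel energy
    fuel_cost_per_kwh_e = fuel_price_per_therm / (kwh_per_therm * gen_efficiency)
    return fuel_cost_per_kwh_e - heat_recovery_credit

# Illustrative: $1.00/therm gas, 30%-efficient genset, 2 cents/kWh heat credit
cost = chp_cost_per_kwh(1.00, 0.30, 0.02)  # roughly $0.09/kWh
```

If that lands below your utility's peak-period rate, the idea is at least worth a deeper look; if it lands above, it only makes sense as a peak-shaver or backup.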
    • by Anonymous Coward
      I'm thinking like solar or wind assist

      Great! I bet my boss is gonna love hearing how the server is down because it is cloudy or there just isn't enough wind.
    • I know it's not viable in all areas. While a datacenter is in a small area, if you have a large office building you have a lot of space for solar panels (though they can be costly). Again, if you're in the right area, say a lesser metro area with lots of open fields and wind, wind power may be viable. This is of course not the only power source; this would be in addition to standard power lines. No wind or cloudy day? You pull everything off the grid. Grid down? You make a small amount, the rest of which is
  • by rcpitt ( 711863 ) on Friday January 27, 2006 @10:05PM (#14585544) Homepage Journal
    During off-peak time, pump water uphill to a holding reservoir - a big swimming pool on the roof might do.

    Heat the water with the waste heat from the cooling units.

    Sell access for swimming - nice warm water (well, here in Canada we like it warm :)

    During peak hours, drain the pool back via generators to make electricity. (make sure you tell people first)

    Use warm water to cool more - generate steam.

    Run steam through turbines to generate electricity.

    Use electricity to pump more water to pool on roof

    continue as needed
    • Wow... (Score:3, Funny)

      by MachDelta ( 704883 )
      I smell a Nobel Prize in Physics here.
    • by Kitsuneymg ( 815431 ) on Friday January 27, 2006 @11:49PM (#14586116)
      This is exactly how the TVA stores extra generated power: they pump water back uphill into a reservoir above a dam. While hard for a small business to do, it is one of the most efficient power-storage mechanisms used by the power industry.
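For scale, the recoverable energy from pumped water is just m*g*h times a round-trip efficiency. A quick sketch (volume, head, and the 75% efficiency are all illustrative) shows why a rooftop pool is in a different league from a utility reservoir:

```python
def stored_kwh(volume_m3, height_m, efficiency=0.75):
    """Recoverable energy from pumped-water storage: m*g*h, derated by an
    assumed round-trip (pump + generate) efficiency."""
    mass_kg = volume_m3 * 1000          # water: 1000 kg per cubic meter
    joules = mass_kg * 9.81 * height_m * efficiency
    return joules / 3.6e6               # joules -> kWh

# A big rooftop pool: 200 m^3 of water lifted 20 m stores only ~8 kWh,
# i.e. a couple of minutes of a mid-sized datacenter's load.
e = stored_kwh(200, 20)
```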

      A thought that crossed my mind: Power Factor

      The power you actually benefit from is not what you are charged for. If the magnitude of real power (550 kW, say) is one side of a right triangle and reactive power (measured in VARs) is the other, you are charged based on the hypotenuse: sqrt(Real^2 + Reactive^2) = billed power. The angle between the hypotenuse and the real-power side is set by the amount of reactance in your system, and its cosine is called your power factor. To bring this toward unity, industry uses capacitor banks and synchronous condensers to counteract the inductive effect of normal motors (and PSUs, and fans, and wires, etc.). Your power company should be able to tell you all about it, including whether it is worth it for you to do. Just ask about power factor correction.

      The motor DOES use real power, but it helps eliminate reactive power. Power companies typically charge a lot for an overabundance of reactive power consumption (i.e., too much inductance) because this can seriously wear on generators.

      Another thing: make sure you have good switching power supplies. Cheapass supplies are both noisy and inefficient. Anything quoted as having Active PFC (A-PFC) already does power factor correction, and the above can be ignored for it.

      Wiki-links: Reactive Power [wikipedia.org], Power Factor [wikipedia.org], Power Factor Correction [wikipedia.org]. The last one is what you will want to do.
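To make the power triangle concrete, here's a quick sketch with made-up numbers (real sizing should come from your utility's metering, not this):

```python
import math

def apparent_power_kva(real_kw, reactive_kvar):
    """Apparent (billed) power is the hypotenuse of the real/reactive triangle."""
    return math.hypot(real_kw, reactive_kvar)

def power_factor(real_kw, reactive_kvar):
    """Power factor = real / apparent; 1.0 means a purely resistive load."""
    return real_kw / apparent_power_kva(real_kw, reactive_kvar)

# Illustrative load: 550 kW real, 250 kVAR reactive
s = apparent_power_kva(550, 250)   # ~604 kVA apparent
pf = power_factor(550, 250)        # ~0.91
```

Correcting that 0.91 back toward 1.0 shrinks the hypotenuse, which is exactly the number many commercial tariffs bill on.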
  • Solar Panels (Score:5, Informative)

    by deque_alpha ( 257777 ) <qhartman&gmail,com> on Friday January 27, 2006 @10:07PM (#14585570) Journal
    If you can, install solar panels on your roof. It will smooth the peak a little, and also reduce your overall expenditure. If you are in a sunny location, the investment can often be recouped after only a couple years. Most utilities will even subsidize such ventures.

    If that's not an option, server consolidation and virtualization for the people for whom it is appropriate are the only other options I can come up with...
    • Re:Solar Panels (Score:2, Interesting)

      by redphive ( 175243 )
      There are solar systems that allow you to only 'buy' from the electric co. when your batteries are low. This isn't a bad option, but on the scale of a datacenter, the sheer volume of batteries required, plus the losses converting the DC solar output to AC, might not make it worthwhile.

      Another option would be to get a natural gas line into the building, purchase your own generator, and when you aren't using the excess capacity of your generator, sell it back to the electric grid, if your u
      • Note: It is possible to run a datacenter almost entirely on DC power. DC-input power supplies are fairly inexpensive, though if you've already invested in AC supplies the switch is obviously expensive.
    • Re:Solar Panels (Score:3, Interesting)

      by figment ( 22844 )
      Except in specific circumstances, it's rather doubtful that solar panels will actually lower your overall bill.

      Solar power costs something like 18-22 cents/kWh once you amortize the cost of the panels over their entire lifetime, etc. Commercial power is generally less than this, maxing out around 17 cents/kWh in the Pacific Northwest. In the Midwest, commercial power costs something like 7 cents/kWh.

      Solar power is currently extremely expensive compared to other energy sources. Its main penetration currently i
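The amortization argument above is easy to sanity-check. The figures below (system cost, capacity factor, lifetime) are illustrative assumptions, not quotes:

```python
def amortized_cost_per_kwh(install_cost, rated_kw, capacity_factor, lifetime_years):
    """Levelized cost: total install cost divided by lifetime energy output.
    capacity_factor is the average output as a fraction of the panel rating."""
    hours = lifetime_years * 365 * 24
    lifetime_kwh = rated_kw * capacity_factor * hours
    return install_cost / lifetime_kwh

# Illustrative: $40,000 system, 5 kW rated, 15% capacity factor, 25-year life
cost = amortized_cost_per_kwh(40_000, 5, 0.15, 25)  # ~$0.24/kWh
```

Subsidies, better panels, or a sunnier site change the inputs, not the arithmetic, which is why this argument is so sensitive to location.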
      • You don't provide any sources for your information, so I have to assume it's just out of date. PV's have become significantly more efficient in recent years, and there are tax incentives and utility subsidies for commercial ventures installing solar, not just residentials. See the following for more optimistic (and current) information on solar in commercial settings:

        -solarelectricalsystems.com [solarelect...ystems.com]
        -solarelectricpower.org [solarelectricpower.org]
        -solar4power.com [solar4power.com]
        -borregosolar.com [borregosolar.com]
        -Akeena.net [akeena.net]

        Those were just the first few relevant hits on G
        • December 9th 2005 issue of The Economist goes into this very issue. It states specifically why solar power is currently not a viable resource, and uses these exact price comparisons to mention why solar power is only useful in residential contexts.

          Unfortunately the article is not online.
          • "Not a viable resource" means what exactly? Viable as a sole source of power? Almost definitely true. As an augmentation to other sources of power? I doubt that they made that assertion, and if they did, they ought to check their numbers.

            For a personal example, there is a 5 kW solar array on the high school I used to work in that has paid for itself in the 2 years since it was installed, in the Pacific NW, a region not known for its sunny weather... Granted, a 5 kW panel barely makes a dent in
  • Load Balancing (Score:5, Informative)

    by redphive ( 175243 ) on Friday January 27, 2006 @10:08PM (#14585577) Homepage
    I am going to guess you have 3-phase power, perhaps through more than one primary link. Do they charge based on the peak of one phase or the average of all? If you aren't balanced on your phase input into your building, you may be able to rebalance and see some benefit there. If you have one or two large UPS systems that are pulling equally across all three phases, make sure that the output of the UPS system is also balanced; that could end up bringing your input usage down.

    This of course wouldn't help with your peak usage, but something to consider anyways.

    Short of that, you would be looking for something that could store power and charge that at a regular rate. But then you could end up possibly shorting your demand on the output side based on the available power in that 'system' at peak times.

    I am going to guess your best bet is to look at phase and load balancing through your power distribution network and make sure you have placed your clients to keep the phases balanced. If I were in a similar situation, I would set up a collection of load coils across each hot lead in your power distribution network, graph the values on a tight schedule (in order to catch peaks), and determine what is responsible for your peaks.

    Don't know if any of this would help, but it is discussion; mod accordingly.
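The phase-balance check described above amounts to comparing each leg against the three-phase average. A sketch (function name and amp readings invented for illustration):

```python
def phase_imbalance_pct(a, b, c):
    """Percent deviation of the worst phase from the three-phase average load.
    Inputs are per-phase current or power readings in the same units."""
    avg = (a + b + c) / 3
    worst = max(abs(p - avg) for p in (a, b, c))
    return 100 * worst / avg

# Illustrative per-phase currents in amps; ~20% imbalance here would be
# worth chasing by moving circuits between phases.
imbalance = phase_imbalance_pct(120, 95, 140)
```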
    • Re:Load Balancing (Score:4, Informative)

      by redphive ( 175243 ) on Friday January 27, 2006 @10:11PM (#14585598) Homepage
      Oh, one other thing: if you don't know how they calculate your peak, you aren't going to get very far, as your results could differ from your bill. Make sure you are fully versed in the way they quantify your demand.
    • Re:Load Balancing (Score:5, Interesting)

      by TinyManCan ( 580322 ) on Friday January 27, 2006 @10:18PM (#14585632) Homepage
      This is absolutely correct. The very first thing you should do is get some monitoring gear in place so that you can tell what is going on in real time, and find the sources and causes of those high peaks.

      Once you get a feel for how the datacenter is 'breathing' (i.e., watch the usage graphs and become familiar with the pulse of workload) you should be able to come up with good solutions to your problems (like starting your monthly billing processes 2 days early, so you can run the batch jobs only at night, when power is cheaper).

      Also, never underestimate the cost of lighting and A/C. Maybe you can get by with only turning on every 3rd fluorescent light. Maybe you can use exhaust fans instead of A/C in a colder climate.

      The point is you'll never know what problems you need to address unless you monitor your DC.
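Since most utilities bill demand as the highest average over a fixed window (often 15 minutes), the monitoring above can be sketched as a rolling-window peak. Window length and readings below are made up:

```python
from collections import deque

def rolling_peak_demand(samples_kw, window):
    """Return the highest average over any `window` consecutive samples --
    a rough model of a utility's sustained-demand charge."""
    buf = deque(maxlen=window)
    peak = 0.0
    for kw in samples_kw:
        buf.append(kw)
        if len(buf) == window:
            peak = max(peak, sum(buf) / window)
    return peak

# One sample per minute, a 3-minute billing window for illustration
readings = [400, 420, 800, 810, 790, 430, 410]
peak = rolling_peak_demand(readings, 3)  # 800.0 -- the sustained burst, not the spike
```

Graphing this alongside the raw samples shows which loads actually set the billed peak, which is the first question to answer before buying any hardware.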

    • by jd ( 1658 ) <imipak@@@yahoo...com> on Saturday January 28, 2006 @01:50AM (#14586592) Homepage Journal
      Power supply units don't scale well. Double the power out will require FAR more than double the power in. Two computers with two high-efficiency PSUs will take LESS power than one computer with a single PSU that is less efficient.

      Disks (and other mechanical parts) will consume a lot of energy, but you don't need to replicate every single physical disk - if the data is under two gigabytes, RAM disks should be fine. In the event of a hard drive failure, backing up off RAM disk is no different from backing up from physical disk, so what's the difference? A single SAN-based disk pack, copied into RAM on the servers, would be the least power-consuming design - especially if you powered the hard drive off except when syncing up.

      It costs power to task swap, so the more active tasks there are, the more swapping (if the tasks are all being given fair time) and therefore the more CPU time is taken by kernel activity, therefore the more power is being used up on housekeeping. You should be able to reduce the power consumed by heavy kernel activity by load-balancing.

      If you're going to load-balance, you don't need high-power server-rated or desktop-rated CPUs. Mobile CPUs will take less power, you'd just need a larger cluster to load-balance over. If using Linux, also look at CPUs other than Intel - many MIPS and MIPS64 implementations are pretty low-power.

      Networks take power to run. There's no escaping that. Don't run more wire/fibre than you have to (that also includes not running longer cables than you need), and don't use more intermediate network devices than will get the job done properly. Oh, and don't overspec the network for a given technology. CAT6 is good stuff, but if your machines never exceed 10 Mb/s on the network, you're going to lose efficiency. The "for a given technology" matters, as different technologies will consume different amounts of power for a given spec. Shop around.

      Cooling systems are another mechanical system and so are necessarily power-hungry. You can't put those in RAM, however. Again, shop around. You want the best cooling power per unit of energy. This may turn out, for your system, to involve having several fans on a single component. It might equally well work out that you can link ducting together such that a single fan can directly cool many components. Since the energy efficiency is what is important, go for the most energy efficient solution for your system.

      Depending on the system, it MAY (this is not guaranteed) improve the efficiency to have a variable-speed fan, with the speed controllable by the CPU, and where all components cooled by this system have thermal sensors readable by the CPU. You can then vary the cooling as a function of both temperature and predicted load levels. (Varying according to temperature alone is useless, as the loads on the components will change faster than the sensor readings - but could change in either direction. Since the OS knows what tasks it is currently doing, it should be capable of predicting the likely loads for a much more reasonable timebase.)

      Connectors are notorious for high resistance and therefore power loss. If there is something that you're unlikely to change for the productive lifetime of the computer, all power loss through all unnecessary connectors (which are generally made from poor conductors anyway, just adding to the problem) is power you can conserve simply by improving the connection. If you insist on using connectors, make sure the wires that go to the connectors are soldered and not just held in place by pressure. Also, clean the connectors thoroughly, as buildups of oxide and dirt will increase the resistance. You WILL be better off by removing the connectors entirely and soldering anything that's not going to change in place.

      Finally, the data center's power grid. You want very high voltage, very low current. (Resistive loss in the wiring is I²R: for the same delivered power, higher voltage means lower current, and the loss falls with the square of the current.) The industrial powe
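The high-voltage/low-current point can be made concrete with the I²R loss formula. The load, feeder resistance, and the two distribution voltages below are illustrative:

```python
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    """Resistive loss in a feeder: I = P/V, loss = I^2 * R.
    Delivering the same power at a higher voltage cuts the current,
    and the loss falls with the square of that current."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

# Same 48 kW load over the same 0.05-ohm run, two distribution voltages
loss_208 = line_loss_watts(48_000, 208, 0.05)  # ~2663 W wasted in the wiring
loss_480 = line_loss_watts(48_000, 480, 0.05)  # 500 W wasted
```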

      • Can you honestly suggest that a shorter Ethernet link will consume less power than a longer one? Sure, there's a tiny difference in cable resistance. But the transmitting end is putting the same amount of energy into it either way, since it doesn't know the difference. Any that doesn't turn into heat in the cable will turn into heat in the receiving chipset. Hence, the same power draw.

        You *may* have an argument on very long fiber links. If you can get away with a short-reach transceiver instead of long-hau
    • Also look at your power factor. If you don't know what that is you need to hire a consultant who does.
  • Peak load reduction (Score:3, Interesting)

    by duffbeer703 ( 177751 ) on Friday January 27, 2006 @10:18PM (#14585634)
    In many states, you can save substantial amounts of money by agreeing to scale back energy utilization during critical times. In New York, NYSERDA (www.nyserda.org) is the agency that administers the peak load reduction program.
    • by arivanov ( 12034 )
      Why only peak load reduction? Reducing overall electrical load is the place to start.

      A good example: a dual-CPU Pentium 4 Xeon Intel OEM system in a standard Intel OEM chassis eats nearly 400 W when idle with no power management. With standard ACPI power management it eats 350-380 W when idle. With CPU frequency scaling using the ondemand governor it will eat less than 100 W when idle. The numbers for Opteron-based systems are not much different. A usual datacenter is designed to cope with full
      • The problem is, it isn't necessarily practical or cost-effective to rip out all of your servers and replace them with Opteron-based machines.

        Peak load reduction can net you significant cost savings, for the cost of shutting down non-essential equipment during a brownout...
      • Why only peak load reduction?

        Because peak is when they fire up the gas powered generators that cost more to run.

  • Lower Peak Demand (Score:5, Informative)

    by RelaxedTension ( 914174 ) on Friday January 27, 2006 @10:19PM (#14585639)
    I used to work for a company where I was in charge of building automation and peak demand limiting. We used several strategies for this.

    1. Use thermal storage where possible. The only real load you can control is the cooling/heating for the building, and you want to build up as much of what you need during low-usage periods, like the middle of the night. If you're in a cold climate, store heat; if you're in a hot climate, store cold. Use large water tanks to achieve this. They will cost you to install initially, but they pay for themselves in a surprisingly short period of time.

    2. Monitor the usage and trim where you can when you're hitting peak demand. Turn off lights, cooling units, etc., for the short time that it's required. Pre-chill or pre-heat the building ahead of time.

    3. Run your backup generator to supplement existing power if you have seasons where usage is much greater than at other times of the year. If you have to run it every day of the year it won't help, due to maintenance and fuel costs. But if you only need it for short periods to chop the peak, it's well worth it. Again, it will more than pay for itself. The power company may even pay you to supplement them with it.

    4. Look for alternative methods to heat or cool, or even generate power. You'd be surprised at what's available now.
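The trimming strategy (shedding loads when demand approaches the peak limit) can be sketched as a greedy shed list. Load names and sizes are invented for illustration:

```python
def shed_plan(demand_kw, limit_kw, sheddable):
    """Greedy demand limiting: given named sheddable loads (name -> kW),
    pick which to switch off to bring demand back under the peak limit.
    Largest loads are shed first."""
    to_shed = []
    excess = demand_kw - limit_kw
    for name, kw in sorted(sheddable.items(), key=lambda item: -item[1]):
        if excess <= 0:
            break
        to_shed.append(name)
        excess -= kw
    return to_shed

# Illustrative: 520 kW of demand against a 500 kW billing target
plan = shed_plan(520, 500, {"lobby_lights": 5, "spare_crac": 30, "signage": 2})
```

A real building-automation system would also honor minimum-off timers and priorities, but the core decision is this simple.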
    • Second everything said here. I'd boost it with a mod point but it's already at 5. It's fairly easy to store cold in insulated water tanks, chill them at night when the thermal environment is in your favor, and then turn the AC off in the middle of the day / peak electrical period. The other methods all work, too.
  • by HotNeedleOfInquiry ( 598897 ) on Friday January 27, 2006 @10:21PM (#14585650)
    If you're right on the edge of getting nailed for peak load, you could run the aircon aggressively before the peak load period and try to coast through it with the unit off. Chill the place to 60°F, shut down the aircon a few minutes before peak load, and see how long you can go before turning it back on. Economizers can work well at reducing your aircon load. We pull in cold outside air at 5 AM and cool the building down to 65°F. This saves us about 2 hours of aircon running during summer days.
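How long pre-chilling lets you coast is just the heat the space can absorb divided by the heat load. The thermal-mass and load figures below are pure assumptions; note how little coasting the room air alone buys in a high-density room:

```python
def coast_minutes(thermal_mass_kj_per_c, heat_load_kw, start_c, max_c):
    """Minutes a pre-chilled space can coast with cooling off: energy the
    space can absorb over the allowed temperature rise, divided by the
    heat load (1 kW = 1 kJ/s)."""
    absorbable_kj = thermal_mass_kj_per_c * (max_c - start_c)
    return absorbable_kj / heat_load_kw / 60

# Illustrative: a 600 m^3 room's air alone holds ~720 kJ/°C; against a
# 100 kW IT load, an 8°C rise buys under a minute of coasting.
minutes = coast_minutes(720, 100, 16, 24)
```

Walls, floor slab, and chilled-water volume add real thermal mass, which is why the thread's thermal-storage suggestions center on water tanks rather than cold air.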
    • This will work for an office. It will not work for a datacenter, which is the original question. The heat density in a datacenter is often in the 16 kW-per-rack range at peak consumption. Pre-cooling will buy you at most 5 minutes. No point in bothering.
  • by Anonymous Coward on Friday January 27, 2006 @10:21PM (#14585652)
    For smoothing out power usage, there are a number of different options -- aside from alternative energy, you could do rolling brownouts throughout your datacenter and rely on UPSes or generators to keep things going -- but you *will* take a hit in reliability. Every switchover -- one mains circuit to another, mains to battery, etc. -- carries some risk.

    I've watched an entire datacenter go out on what was supposed to be a controlled switchover -- power company needed to do some work, pulled the plug (with the datacenter's consent), the backup generators start... and then die. The UPSes kicked in, but could only supply 15-20 minutes of power. Everything failed over to a backup datacenter, whose link then decided to go out to lunch.

    Total cost of the outage was measured in tens of millions of dollars.

    Just keep this in mind when doing the business justification calculation (cost savings from lower energy bills, minus upfront cost of equipment, minus risk of additional downtime times cost of downtime, minus cost of maintaining the equipment). Unless energy prices go *way* up -- like oil hitting $250/barrel -- I'd be surprised if this would pay for itself.
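The business-justification calculation described above, as a sketch (every dollar figure here is invented for illustration):

```python
def annual_net_savings(energy_savings, maintenance, downtime_risk_pct, downtime_cost):
    """Yearly benefit after maintenance and the *expected* cost of added
    downtime (probability times impact)."""
    expected_downtime = downtime_risk_pct / 100 * downtime_cost
    return energy_savings - maintenance - expected_downtime

def payback_years(upfront, net_per_year):
    """Simple undiscounted payback period; None if it never pays back."""
    return upfront / net_per_year if net_per_year > 0 else None

# Illustrative: $80k/yr energy savings, $15k/yr maintenance,
# 1% yearly chance of a $2M outage, $300k of equipment up front
net = annual_net_savings(80_000, 15_000, 1, 2_000_000)  # 45,000
years = payback_years(300_000, net)                      # ~6.7 years
```

The expected-downtime term is the one people forget, and as the parent notes, it can swamp the energy savings entirely.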
    • Beyond that downside, there's the cost of wearing out all those lead-acid batteries faster.

      A good AGM SLA might cycle 500 times if you are lucky and you don't cycle it very deeply (keep it less than 80% discharged). You'll be replacing a lot of them after a year or two, versus 4 or 5 years if you weren't cycling them every day.

      And a good AGM SLA isn't cheap. An 85 amp-hour unit runs well over $100, and that's small by datacenter standards; a 5000 VA UPS might take 4 of those.
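Putting those cycle-life numbers together gives a cost per kWh actually cycled through the battery, which is the fair comparison against peak-rate grid power. Figures below are the rough ones from this thread:

```python
def cost_per_kwh_cycled(battery_price, capacity_kwh, cycles, depth_of_discharge):
    """Amortized cost of each kWh drawn from a battery over its cycle life."""
    lifetime_kwh = capacity_kwh * depth_of_discharge * cycles
    return battery_price / lifetime_kwh

# Illustrative 12 V, 85 Ah AGM: ~1.02 kWh, $120, 500 cycles at 80% DoD
cost = cost_per_kwh_cycled(120, 1.02, 500, 0.8)  # ~$0.29 per kWh cycled
```

At roughly 29 cents per cycled kWh (before charger and inverter losses), daily battery cycling costs more than almost any utility's peak rate, which is the parent's point.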
  • by GigsVT ( 208848 )
    Your utility might charge you based on KVAR hours or apparent power. If you have a bunch of computers and UPS then your power factor may be bad.

    Make sure there's no charge there for kVAR-hours instead of kilowatt-hours, and no surcharge for power factor. If there is one, it would benefit you to get a consultant in to install power factor correction.
  • No (Score:4, Informative)

    by cca93014 ( 466820 ) on Friday January 27, 2006 @10:26PM (#14585672) Homepage
    "Could fuel cell technology be something to look into?"
    No. Fuel cells are a way of transporting energy, not creating it. This is such an important concept to grasp that it cannot be overstated.

    We are in deep trouble, energy wise. There is no immediate solution (within the next 30 years) that can help us. We need to get used to that concept, fast. Doing "your bit" for the environment is simply not enough.

    Welcome, too, China and India. Welcome to the powerdown.
    • Re:No (Score:2, Insightful)

      by Anonymous Coward
      We are in deep trouble, energy wise. There is no immediate solution (within the next 30 years) that can help us.

      Bull. But Congress needs to get off their ass. The potential gains of energy efficiency are enormous with minimal cost.

      The average mileage of US vehicles peaked in the late 1980s. Increasing mileage by only 1 MPG (technically feasible at minimal cost) would result in enormous savings, but the MPG standards haven't changed in decades. What's worse, as SUVs became popular, the standards for SUVs are
    • He wants to transport energy. From off-peak to peak times. His problem is one of energy storage and it's not wrong for him to ask about fuel cells.

      The answer is still "no", however. He could crack water into H2 and O2 during off peak and run it through fuel cells during peak times, but there's a pretty large efficiency loss in electrolysis, and another one with the fuel cells.

      Not to mention an extremely large up-front cost: $5000 per kilowatt or so. For comparison, a house typically has 24 kilowatt s
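The two efficiency losses multiply, which is the heart of the objection to hydrogen as peak-shifting storage. Using round illustrative figures (70% electrolyzer, 50% fuel cell):

```python
def round_trip_efficiency(electrolysis_eff, fuel_cell_eff):
    """Fraction of off-peak energy recovered after storing it as hydrogen:
    the electrolysis and fuel-cell losses compound."""
    return electrolysis_eff * fuel_cell_eff

# Illustrative: 70% electrolyzer x 50% fuel cell = 35% round trip,
# i.e. you'd need off-peak power to cost under ~1/3 of peak power
# just to break even on energy, before any capital cost.
rt = round_trip_efficiency(0.70, 0.50)
```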
    • In fact, there was an interesting article where some of the big wigs (Soros and a few others) think that we will see $120 or even up to $262/barrel in the next year. Right now, we are getting the bulk of our oil from USA- (and EU-) unfriendly countries; Iran and Venezuela are just 2. Both are angling to take on America while GWB has us mired in his war. If Iran and Venezuela decide to embargo us, we would instantly be screwed. In addition, Al Qaeda and others are trying to take down the house of Saud (the Saudi
  • by buck-yar ( 164658 ) on Friday January 27, 2006 @10:28PM (#14585684)
    I take it you have quite a server farm.

    Intel sells a lot of crap, so take some of it and use a methane generator to produce power.
  • by spagetti_code ( 773137 ) on Friday January 27, 2006 @10:33PM (#14585711)
    How many of your servers are running at 100% CPU? How about moving them to VIA [epiacenter.com] low power processors - up to 1.3GHz.

    I have one of these (1.2GHz) and with 1 large HDD, encoder card, network, DVD etc - it idles at less than 20W and maxes at about 60 (encoding, playback, DVD all going, CPU 100%). Burst power when switched on seems to be about 72. This is less than the processor alone on a high spec box.

    This will only work with non-CPU intensive operations. However IO seems to be pretty good on these boxes, so an IO bound server would probably not suffer too greatly using a VIA mobo.

    • You're funny.

      If the machine is doing 100% cpu utilization, just replace it with a weaker CPU.

      Maybe he can save more energy by running hundreds of VMware virtual machines on a Geode GX.
    • I actually use them for servers in my day job and in a number of my own projects. So my 0.02 eu

      They are nice, but they have their limitations. On the positive side:

      • Very low power consumption.
      • Very high IO speed. In fact, considerably higher than expected. I have been getting 2+ times faster IO from them than from an Intel Xeon.
      • AES acceleration on the higher end models, high quality hardware RNG and RSA acceleration on the models coming up this year

      On the negative side:

      • Very small cache. Much smal
    • How about something a bit more realistic: Pentium M-based blade servers. Soon you will be able to get Core Duo-based blade servers, which should pack enough density for anyone's taste. They are designed for mobile applications, but they beat the snot out of the P4 in most server tasks.
  • by Anonymous Coward
    The trick is to use the heat from computers to drive a turbine that generates power. Then use that power to run the air conditioners.
    • Actually, you can only retrieve ~35% (IIRC) of energy dissipated as heat (remember entropy?). Efficiency is determined by the range of operating temperatures.

      Although now that I think about it, Xeons might run hot enough to make a heat engine worthwhile :D
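The ceiling being discussed is the Carnot limit, and for the gap between a hot CPU and room air it sits far below the ~35% a power plant achieves. A quick check (temperatures illustrative):

```python
def carnot_efficiency(hot_c, cold_c):
    """Thermodynamic upper bound on heat-engine efficiency,
    1 - T_cold/T_hot, with temperatures converted to kelvin."""
    hot_k, cold_k = hot_c + 273.15, cold_c + 273.15
    return 1 - cold_k / hot_k

# Illustrative: 70°C CPU exhaust rejecting to 20°C room air
eta = carnot_efficiency(70, 20)  # ~0.146, before any real-machine losses
```

A real engine would recover only a fraction of even that, which is why harvesting server waste heat as electricity is a non-starter; using the heat directly (as in the cogeneration posts above) is the practical route.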

  • AMD CPU (Score:1, Interesting)

    by Anonymous Coward
    You may want to look into AMD based systems instead of Intel. We have reduced our power load considerably and gotten a boost in performance by using HP DL385 2 way servers with Dual-Core processors rather than Dell 4 way Intel servers. Don't underestimate how much this can impact power utilization when you have 100 servers.
  • by RingDev ( 879105 ) on Friday January 27, 2006 @10:50PM (#14585797) Homepage Journal
    Some things are easier to do in the design phase, but some things can be done now.

    First, pre-cool the room. There was a good article on /. earlier this week; keeping the building cooler in the morning and warmer in the afternoon can drop your peak-time costs.

    Second, install a solar power system. Kinda pricy, but if you have a large roof you can generate some solid power. And don't think that being in the north excludes you from solar power. Uni-Solar has a great sun index map showing what level of solar output and electrical output you can expect in any given area.

    Third, going with solar, add a battery array or some other type of power storage. By using the solar panels to juice up the batteries, you can pull power from the batteries at peak time, but charge them all day.

    Fourth, subterranean cooling. Once you get a little ways under the surface of the ground, the temperature becomes a pretty consistent mid-to-high 50s (°F). Using sunken water tanks you can run 60-degree water through a radiator in your HVAC system. I know there are companies that can install these systems but I can't recall any names off the top of my head.

    Fifth, solid-state storage. If you can swing paying $50/gig as opposed to $1/gig for storage space, you can dramatically cut down on both your cooling bill and your electric bill. But at $50,000 per terabyte vs $1,000 per terabyte, it's going to take a while to recoup the costs.

    Sixth, custom server cases/cabinets. Traditional closets are great for cramming a lot of servers into a small area, but they about suck for heat management. You could fund a research project at any number of engineering schools to create a better storage solution.

    -Rick
    • Second, install a solar power system. Kinda pricey, but if you have a large roof you can generate some solid power. And don't think that being in the north excludes you from solar power. Uni-Solar has a great sun index map showing what level of solar output and electrical output you can expect in any given area.

      Third, going with solar, a battery array or some other type of power storage. By using the solar panels to juice up the batteries, you can pull power from the batteries at peak time, but charge them

      • Solar panels kick out small voltage throughout the day. True, they will likely peak at the same time as your peak electricity. But the amount of power they put out at peak is not going to be much compared to your total consumption. So instead of using the power as it comes throughout the day, where in the morning you may save 10kWh for, say, 8 cents per kWh, you can instead store that juice in a battery for peak time and save 9kWh (due to loss) at 20 cents per kWh. Yes, it would cost extra for a battery ar
        • Yes, it would cost extra for a battery array, which is why I listed it separately, but with it, you could replace more of your most expensive power with the cheapest, instead of replacing a smaller amount of power throughout the day.

          Forget the battery array, I want to know what kind of monster inverter you are going to need to run an entire data center full of equipment.
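The tradeoff being argued above can be put into a few lines of arithmetic. This is just a sketch of the commenter's hypothetical numbers (10 kWh of solar output, 8-cent off-peak and 20-cent peak rates, ~10% battery round-trip loss), not real tariff data:

```python
# Back-of-the-envelope comparison: consume solar output immediately at the
# off-peak rate, or store it (losing ~10% round-trip) and offset peak-rate
# consumption instead. All rates and losses are the hypothetical figures
# from the comment above, not measured values.

def direct_use_savings(kwh, off_peak_rate):
    """Savings from consuming solar output as it is generated."""
    return kwh * off_peak_rate

def stored_use_savings(kwh, peak_rate, round_trip_efficiency=0.9):
    """Savings from battery-shifting the same output into the peak window."""
    return kwh * round_trip_efficiency * peak_rate

solar_kwh = 10.0
direct = direct_use_savings(solar_kwh, off_peak_rate=0.08)   # $0.80
shifted = stored_use_savings(solar_kwh, peak_rate=0.20)      # $1.80

print(f"direct use: ${direct:.2f}, battery-shifted: ${shifted:.2f}")
```

With these made-up rates the shifted energy is worth over twice as much, which is the parent's point; whether that covers the cost of the battery array (and the inverter) is a separate calculation.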

      • Most med-large datacenters have a bank of batteries, either in a UPS or as a DC power source, that they just charge all the time. If you can use an alternate source to charge them, then you just saved some on your bill, and if the peak is at the time when the solar cells are at their peak, then that's a load off your peak charge.
        • I've been saying this for years, any outfit that already has a DC infrastructure should be installing photovoltaics on the roof. In a traditional PV installation, inverters and output wiring are a big part of the expense, but if that work is already done, the payoff period is a lot shorter.

          Plus, in the event of a grid failure, your generator doesn't have to work quite as hard, which translates to slightly longer runtimes on the same fuel tank.

          The available solar resource depends largely on latitude and weat
    • Fourth, subterranean cooling. Once you get a little ways under the surface of the ground, the temperature becomes a pretty consistent mid/high 50s.

      Actually, I think you'll find that the deep temperature of the earth is the average between the highs and lows on a yearly basis. In other words, if you live in a hot climate with temps of 120 in summer, 60 in winter, the deep earth temp would be 90. In the frozen arctic, the deep earth temp is below freezing ('permafrost'). Granted, for a lot of the continent
      • the deep temperature of the earth is the average between the highs and lows on a yearly basis

        This is correct. I used to do a bit of caving and mean ambient temperature of a cave in Central America is about 20 degrees higher (70F) than one in the Northern USA.
  • by TomTraynor ( 82129 ) <thomas.traynor@gmail.com> on Friday January 27, 2006 @10:50PM (#14585801)
    1. Cool down the centre during the night when hydro is at its cheapest.
    2. During the day raise the thermostat so the AC does not kick in too soon.
    3. If you have windows, use the blinds on the sunny side. Thermal load is a royal pain. Where I work it hit 27C inside even though it was -14C outside. The north side was running at about 21C.
    4. Put all non-essential equipment on power bars and turn off the bars. Most monitors and other electronics still draw a bit of current for 'instant on'. That takes hydro and dumps more heat for the AC to handle.
    • Number 4 is a good idea. I just redesigned my home power cabling, so I have a remote control that can turn off all non-essential devices (monitors, speakers/amplifiers, chargers). Easy to use, just turn off whenever you leave the room and at night.
    • If you have windows, use the blinds on the sunny side. Thermal load is a royal pain. Where I work it hit 27C inside even though it was -14C outside. The north side was running at about 21C.

      Or put the datacenter in the basement. Not only do you avoid the solar thermal load, but the walls will naturally stay at a steady cool temperature year 'round.
  • CPUs with six or eight cores, with four threads per core. Sun says their new CoolThreads Servers [sun.com] offer significant power, cooling, and space savings.

    I believe the servers are too new for anyone to have a solid opinion about, but I know Sun has been actively moving in this direction for a while.
  • Have you thought about installing wind turbines on the roof of your building to generate electricity which you can then feed back into the grid?

    You will probably never be able to generate enough power to completely power your data center, but even if you generate 1%, 5%, 10% ... that's savings on your electric bill. Your ROI could be substantial over time.

    Buildings in Chicago are strongly thinking about this (for obvious reasons).

    Other areas could probably benefit too.
    • Actually, Chicago was named the Windy City for its politics.

      The other side of the lake has more wind [nrel.gov].

      • Re:Chicago Wind (Score:3, Informative)

        Southwest Michigan may have more per square mile, but my 10th floor balcony on Lake Shore Drive has no shortage of wind. I see 30-50 mph gusts almost every day of the year due to the layout of the other high-rises around me.

        They locate wind farms in mountain passes or other natural high-wind locations; I wonder if turbines located in certain spots of major metropolitan areas would be super-efficient. The plaza south of the IBM building on the river in downtown Chicago has to be one of the windiest places on
  • by stienman ( 51024 ) <adavis@ubas[ ].com ['ics' in gap]> on Friday January 27, 2006 @10:58PM (#14585854) Homepage Journal
    Get a professional electrician in that knows about peak charges.

    Older installations used to use giant flywheels, but not to limit peaks. They were used for power conditioning and limited power backup.

    I'd do an extensive survey before trying anything else. Buy or rent a power meter that does logging and graphing. Check everything out for a month - each phase and the current draw on each phase, and current draw on each rack (each computer if possible).

    Proper sequencing of cooling can drastically affect your power consumption. Never start your cooling motors when you're drawing a lot of power - motor startup is a huge peak. After doing a survey of your power needs, you may be able to identify times when you can avoid turning the cooling system on, which will lower your peak. For instance, before the daily peak, cool the data center down a few degrees more than usual. Then shut off one or more cooling systems until after the daily peak. This can be tricky to correctly manage and implement, especially since it has to be automatic and failsafe.

    Alternatively, shop around for your power. Check with a few competitive companies and see if they offer a better deal.

    -Adam
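The pre-cool-then-coast logic described in that comment can be sketched as a simple setpoint schedule. This is a minimal illustration, not a real controller: the peak window, setpoints, and failsafe limit are all made-up example values, and a production system would need real failsafes and sensor redundancy:

```python
# Minimal sketch of a pre-cool/coast thermostat schedule. All hours,
# setpoints, and limits below are assumed example values.

PEAK_START, PEAK_END = 13, 18      # assumed utility peak window (hours)
NORMAL_SETPOINT = 72               # degrees F
PRECOOL_SETPOINT = 66              # cool extra in the hours before the peak
COAST_SETPOINT = 76                # let the room drift up during the peak
FAILSAFE_LIMIT = 80                # never let the room exceed this

def setpoint(hour, room_temp):
    """Return the cooling setpoint for a given hour and room temperature."""
    if room_temp >= FAILSAFE_LIMIT:
        return NORMAL_SETPOINT     # failsafe: resume normal cooling
    if PEAK_START - 3 <= hour < PEAK_START:
        return PRECOOL_SETPOINT    # pre-cool window
    if PEAK_START <= hour < PEAK_END:
        return COAST_SETPOINT      # coast through the peak on thermal mass
    return NORMAL_SETPOINT

print(setpoint(11, 70))   # pre-cool window
print(setpoint(15, 74))   # peak window, coasting
print(setpoint(15, 81))   # peak window, but failsafe tripped
```

The failsafe branch matters most: shaving the demand charge is worthless if the room overheats, which is why the parent stresses "automatic and failsafe."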
  • by boxie ( 199960 ) on Friday January 27, 2006 @10:59PM (#14585860) Homepage
    Being a datacenter you would undoubtedly have a generator backup to your UPS solution.

    Would it be cheaper/feasible during these peak times to "test" the generator - i.e., turn the mains power off and run on diesel?
  • relocate it (Score:4, Interesting)

    by TheSHAD0W ( 258774 ) on Friday January 27, 2006 @11:08PM (#14585905) Homepage
    If you've got a datacenter large enough that energy efficiency is a problem, I recommend you move the whole shebang to a location where energy is more plentiful. Upstate NY, which has plenty of hydroelectric power, would be a good choice. Nowadays, thanks to the internet, you don't have to keep your datacenter next to part of your operation.
  • by bergeron76 ( 176351 ) on Friday January 27, 2006 @11:19PM (#14585968) Homepage
    Seriously. Try killing the fluorescents and not allowing "lighted" maintenance during certain peak times.

    On the other hand, that might be a dumb idea.

  • by Spazmania ( 174582 ) on Friday January 27, 2006 @11:19PM (#14585970) Homepage
    Switch to natural gas to run the air conditioners. Your peak electricity hit is in the middle of the day when the air conditioners work hardest, but the peak natural gas hit is in the middle of the night when the exterior temperature is coldest. Price wise that works to your advantage.
  • Just use less (Score:3, Informative)

    by mnmn ( 145599 ) on Friday January 27, 2006 @11:39PM (#14586065) Homepage
    "is there's something we can do at the datacenter level"

    Yeah, use the UltraSPARC T1 CPUs, use lower-power SCSI disks (including CompactFlash disks for boot and OS), keep all lights out when you don't need 'em, add heavy wall insulation unless you're living far north, add lots of RAM in all machines so the disks can be powered down, etc.
  • The company I work for uses a webhosting and colocation company. As our bandwidth utilization grows, we have begun searching for ways to make our network more efficient. The biggest hit from the colocation company comes in the peak usage charge, which penalizes (rather severely) for the highest sustained burst of network usage during a billing period. Due to the nature of the web business, we can't control how much or when visitors use bandwidth, so I'm wondering: is there's something we can do at the datac

  • I built a data center power monitoring system about a year and a half ago for exactly this purpose (I installed it in my house and posted a writeup to slashdot... the article is now here [70.141.216.161]). This system monitors every branch circuit in the data center and allows you to assign circuits to customers so you can track usage by customer. The first data center it was installed in was a colocation facility and their intention was to start billing for power like they do bandwidth. That is, you purchase power in 5a
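The circuit-to-customer rollup that comment describes is easy to picture in a few lines. Everything here is illustrative (the circuit names, customers, and per-customer amp allotments are invented); the actual system linked above may work quite differently:

```python
# Sketch: given per-branch-circuit current readings, sum amps per customer
# and flag anyone exceeding their purchased allotment. All names and
# numbers are hypothetical.
from collections import defaultdict

circuit_owner = {"panel1-ckt04": "acme", "panel1-ckt05": "acme",
                 "panel2-ckt11": "globex"}
purchased_amps = {"acme": 10, "globex": 5}   # e.g. sold in 5 A increments

def usage_by_customer(readings):
    """readings: dict of circuit name -> measured amps."""
    totals = defaultdict(float)
    for circuit, amps in readings.items():
        totals[circuit_owner[circuit]] += amps
    return dict(totals)

readings = {"panel1-ckt04": 6.2, "panel1-ckt05": 5.1, "panel2-ckt11": 3.8}
totals = usage_by_customer(readings)
over = {c: a for c, a in totals.items() if a > purchased_amps[c]}
print(totals, over)
```

The hard part in practice is the measurement layer (current transformers and logging on every branch circuit), not the rollup.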
  • There are high tech flywheels you can buy for UPS service; they may also be useful for load control. You'd buy more UPS capacity than you need for emergency power outage, then use the additional margin to cut off demand peaks.

    Well, you could also have an automated cutoff for nonessential load (like 3 of 4 fluorescent lights or something). Or, you could use a battery UPS instead. But the flywheel is cooler...

    random link [ecmweb.com]

  • by slakdrgn ( 531347 ) on Saturday January 28, 2006 @12:53AM (#14586424) Homepage
    Chances are there are many things that are eating away at your energy costs. Your best bet would be to hire a few consultants who work in the electrical, A/C, and datacenter management fields. Any one solution probably won't help (say, replacing all your servers with low-power Sun servers) or would be too costly. A few things to consider (you may have already implemented these or are considering them; you didn't say, so I'm going on the assumption that you have not):


    - If your server room's ceiling is just plain false-ceiling tiles, make sure they are at least insulated very well. The more A/C escapes, the harder it has to work.

    - Make sure there is enough airflow through your server racks (best placements and setups vary from one person to another); best not to have the rear right up against a wall. Middle of the room or offset (5 feet or so from the wall) allows for good ventilation.

    - Keep server room lights off unless needed with the exception of a low-heat emergency lighting.

    - If you have raised flooring and the a/c comes through the bottom, place the racks behind vent openings (so the air is rising to the front of the rack, getting sucked in by the fans in the front) instead of having the rack on the vent itself.

    - Upgrade older servers if possible. Older servers (especially the old HP NetServer series) are a lot less efficient than newer servers. Not just individual components (CPU, HD) but also the overall engineering.

    - Turn off monitors when not in use. LCDs are not as bad, but better safe than sorry. If you do not need it running, just leave it off.

    - Do not allow people to keep the server room door open. May sound simple, but you wouldn't believe how many times I've seen this. If the doors don't close automatically, get automatic closers for them!

    - Make sure the doors are weatherstripped.

    - Multiple air conditioners! I have a small server room that runs on three air conditioners. Two always run, one does not; this rotates weekly. Also great for redundancy.



    I'm sure there are many more things you can do. Hiring outside consultants who have worked with issues such as this is always beneficial. Be sure to get second/third opinions.



    Wow, spelling really sucks when you haven't slept for 72hrs. (I really, really hate Exchange. Especially when clustered.)

    • Something else I forgot to add: check how other large datacenters are handling their energy efficiency. Do a lot of research. Someone also mentioned monitoring individual racks or even servers. Get to know where your energy is going from a systems standpoint. That will help fill the gap with where the rest of the energy for the datacenter is going.
  • Slow low power processors with multiple cores handling multiple processes:

    http://www.sun.com/servers/coolthreads/overview/in dex.jsp [sun.com]


  • They have to be engineered for power efficiency to extend their battery lives, and they have a built-in UPS/load balancer. Of course you'd have to engineer your own DC system instead of those wall warts, but...

    • You can use most of these features on a desktop or a server if you:

      1. Use Linux
      2. Bother to turn it on

      Just look at the kernel documentation for cpufreq and the ondemand governor. Alternatively you can use cpufreqd, which allows even finer tuning.

      Turning it on for a dual CPU Xeon will drop the power consumption from 400W+ to under 100W when idle. In fact this feature works on any Pentium4 class CPU.

      Numbers for Opterons are similar, but most dual and quad Opteron motherboards lack proper support for this feat
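One way to check whether frequency scaling is actually doing anything is the kernel's cpufreq-stats interface, which exposes a per-CPU time_in_state file of "<frequency in kHz> <time in 10 ms units>" lines. This helper computes the time-weighted average clock from such a file; the sample data below is made up:

```python
# Compute the time-weighted average frequency from a cpufreq-stats
# time_in_state file (format: "<freq_kHz> <ticks>" per line). On a live
# Linux box you would read it from
# /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state.

def average_freq_khz(time_in_state_text):
    total_ticks = 0
    weighted = 0
    for line in time_in_state_text.strip().splitlines():
        freq, ticks = map(int, line.split())
        total_ticks += ticks
        weighted += freq * ticks
    return weighted / total_ticks if total_ticks else 0

sample = """\
2800000 120
2100000 340
1400000 9540
"""
print(f"avg: {average_freq_khz(sample) / 1e6:.2f} GHz")  # mostly idle box
```

If the average sits near the lowest P-state on a lightly loaded server, the governor is earning its keep; if it sits near the top, scaling is probably disabled in the BIOS or the governor isn't loaded.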
  • A few options (Score:2, Interesting)

    by Khyron42 ( 519298 )
    As a data center manager myself, I can understand your pain. Unfortunately, I'm in charge of a corporate data center rather than a pure hosting arrangement; many of the tricks I've used to manage power consumption wouldn't work for you, but...

    I'm able to play building load for the laptops/desktops off against data center consumption, and also able to relocate equipment to other sites to juggle the load. I have the option of passing the cost on to the customers because most of what I do is cost-plus contra
  • Most datacenters cool the entire machine room. There's no need for that.

    Make two rows of racks face each other. Place a roof on the lane between the racks and doors on either end. This is the cool lane. On the opposite sides of the racks, place no roofs. This is the warm side.

    Only let cool air enter the room in the cold lanes and suck the hot air from the warm lanes. Use racks with perforated doors and use shields to completely cover unused space in the racks.

    Now the cold air from the enclosed cool la

  • One of the easiest and most effective ways to reduce your energy bills is to generate your own electricity from gas or fuel oil.

    Not only are the fuels cheaper, kWh-for-kWh, than mains electricity, but you get to use the waste heat from the generator to heat the building at the same time. Doing both at once gives you huge savings.

    Typically people tend to use I/C engines for the generators --- gas turbines would be more efficient, but I/C engines are cheap and reliable and will scale down far more effectiv

    • One of the easiest and most effective ways to reduce your energy bills is to generate your own electricity from gas or fuel oil.

      On what planet exactly? Come on mods this one is +1 Funny.
  • OK, some simple things, like running glycol AC systems that do not have to run a compressor when it's cold out (circulation pumps and fans are a lot cheaper).

    Ducting the return air to the outside when the outside (or basement :) air is cooler and does not need significant humidity adjustments. You will go through a lot more air filters, but it's cheaper. Depending on the building, the basement is actually a pretty massive heat sink to the ground; this works great if it's mostly open bulk storage, etc. It also has the adde
  • Save money (Score:2, Interesting)

    by catahoula10 ( 944094 )
    Have all IT people work from home. :-)
    No office space to cool or heat. No coffee machine or water cooler. No overhead. Just house the machines and a small maintenance staff.
  • If you're large enough, I'm going to assume that you have a decently sized backup diesel generator, say one about 500kW. Talk to the power company about running it during peak hours and selling the electricity to them. I know one TV station that does this with their backup generator. When load gets high enough on the grid, they call up the transmitter site and they turn on the generator. The station gets paid enough from the power company to pay for maintenance and fuel for the generator.

    As for more
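Whether the generator idea pays off depends entirely on the tariff. Here is a rough break-even sketch under a deliberately simple assumed tariff (a flat monthly demand charge per kW, plus a per-kWh fuel cost for the generator); every number below is a placeholder, not real pricing:

```python
# Rough break-even check for peak-shaving with a backup generator.
# Assumes a simple tariff: monthly demand charge per kW of peak, and a
# blended fuel+maintenance cost per kWh generated. All placeholder values.

def generator_peak_shaving_net(shaved_kw, demand_charge_per_kw,
                               hours_run, fuel_cost_per_kwh):
    avoided = shaved_kw * demand_charge_per_kw        # demand-charge savings
    fuel = shaved_kw * hours_run * fuel_cost_per_kwh  # cost of running it
    return avoided - fuel

# e.g. shave 400 kW off a $12/kW demand charge, running 60 h/month
net = generator_peak_shaving_net(400, 12.0, 60, 0.18)
print(f"net monthly benefit: ${net:,.0f}")
```

With these made-up numbers the margin is thin, which matches reality: the approach tends to win only where demand charges are steep, peak windows are short, or (as in the TV-station example above) the utility itself pays you to run.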
  • Biggest bang-for-the-buck: remove all your Intel Xeon and P4 machines and replace them with AMD Opteron dualcores, preferably the HE (High Efficiency 55W max) series. Each core will do more work than a Xeon and burn a fraction of the power doing it. Sun's new multicore CPU is interesting if you don't need x86 compatibility.

    Power supply efficiency is important too. I switched to Seasonic high-efficiency power supplies for my desktops years ago. I'm not sure what you'd do about rackmount servers. There's
    • Not quite correct.

      While I am an old AMD fanboy, and given a choice I will always choose AMD, Intel must be given credit where credit is due. If you use CPU frequency scaling on Intel, which is supported on both Pentium 4 and Xeon, you can drop its consumption into sub-25W territory when idle and ramp it up in a lot of increments (usually 8) to full as the load arrives. While in theory AMD's powernow-k8 should give you similar features, in practice I have yet to see an SMP motherboard that does not have it disa
  • by boggis ( 907030 )
    No, I'm not an expert, but I refer you to the Rocky Mountain Institute [rmi.org]. They are a not-for-profit environmental think tank who work with corporations and governments (Ford and the US military, for example) to increase profits or reduce costs through more efficient environmental practices. They ran a Design Charrette [rmi.org] around this specific question. This is where they take their staff members with general energy efficiency expertise and a whole bunch of industry types (data centre types, power company types
    If you are using Linux, look at Xen so you can run multiple VMs on one machine. For Windows, VMware.
    Stick with AMD for now. They put out a lot less heat and burn fewer watts per MIPS than the current Intel machines.
    Look at the Sun T1 line.
    Fewer hard drives. Use a storage server with RAID instead of an HD per machine if you can.
    Low-heat blades?
    There are lots of options to reduce power consumption; the problem is, will you save more than you spend? I would ban P4s right now. No reason to run a server with an Intel ch
