When Does Powering Down Servers Make Sense?

snydeq writes "Powering down servers to conserve energy is a controversial practice that, if undertaken wisely, could greatly benefit IT in its quest to rein in energy costs in the datacenter. Though power cycling's long-term effects on server hardware may be mythical, its effects on IT and business operations are certainly real and often detrimental. Yet, development, staging, batch processing, failover — several server environments seem like prime candidates for routine power cycling to reduce datacenter energy consumption. Under what conditions and in what environments does powering down servers seem to make the most economic and operational sense, and what tips do folks have to offer to those considering making use of the practice?"
  • by Yvan256 ( 722131 ) on Thursday October 30, 2008 @03:37PM (#25574105) Homepage Journal

    Like when someone posts your domain name on slashdot!

    You can't take down a server that's already off-line.

  • by TheNecromancer ( 179644 ) on Thursday October 30, 2008 @03:40PM (#25574139)

    When you see the Windows logo appear? (sorry, couldn't resist)

  • Simple (Score:5, Funny)

    by eln ( 21727 ) on Thursday October 30, 2008 @03:40PM (#25574153)

    The best time to shut down the servers is right before you quit your job. Password-protecting the BIOS first adds value too.

  • WOL (Score:5, Insightful)

    by Anonymous Coward on Thursday October 30, 2008 @03:41PM (#25574157)
    Put redundant/failover servers into a sleep state and enable WOL.
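A WOL "magic packet" is just six 0xFF bytes followed by the target MAC address repeated 16 times, sent as a UDP broadcast (port 9 by convention). A minimal Python sketch; the MAC address shown in the comment is a placeholder:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; the target NIC must have WOL enabled."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("00:11:22:33:44:55")  # hypothetical MAC of the sleeping failover box
```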
  • by Kamokazi ( 1080091 ) on Thursday October 30, 2008 @03:41PM (#25574161)

    It's pretty much up to your business: what must run 24/7, what systems are likely to get accessed in off hours, how likely is that, and how critical are they? With redundant systems, can there be any downtime while they are powered up, or must failover be immediate? If you use virtualization, the redundancy should be easier to manage; in many cases you may be able to immediately offload to running systems, power up the backup systems, and then bring the VMs up there.

    It's hard to get very specific without knowing your business and what you are running and what the needs are.

    • Re: (Score:3, Interesting)

      by LWATCDR ( 28044 )

      You're correct.
      A lot of places I know shut completely down at night but leave the servers up and running. Often it is so they can run end-of-night jobs, or just so they can get up and running quickly in the morning.
      A lot of it is just waste and a lot of it is just habit.
      Now for people that run 24/7? That is totally up to you.

      • Seems a lot of places don't really worry about their power consumption. Look at how many places leave the lights on all night.
    • by vwjeff ( 709903 ) on Thursday October 30, 2008 @04:26PM (#25574761)
      Example Setup: The organization I work for has well-known usage patterns that we use to make decisions like this. 95% or more of our traffic occurs during business hours, which we define as 7:00 AM - 7:00 PM. During business hours we have dedicated servers for various functions. We also have a cluster of servers running virtual server instances that duplicate the dedicated servers. During off hours the dedicated servers are powered down and the virtual server instances take over. It works for us, and we have seen a significant decrease in power usage with no impact on our users.
  • by Anonymous Coward on Thursday October 30, 2008 @03:41PM (#25574163)

    I'm glad this was posted to "Ask Slashdot" where your audience is highly seasoned professionals that can give you wise, insightful answers...
    In the data center that I manage, I use a few simple rules to determine when I power them down.
    1) If the server is on fire
    2) If there are no users using the server
    3) If the power company is saying that I haven't paid my bill and they are sending "Hank" over to cut me off
    4) Civil unrest, tornado, earthquake, zombies, etc.

    • by Anonymous Coward on Thursday October 30, 2008 @04:19PM (#25574661)

      I'm glad this was posted to "Ask Slashdot" where your audience is highly seasoned professionals that can give you wise, insightful answers...
      In the data center that I manage, I use a few simple rules to determine when I power them down.
      1) If the server is on fire
      2) If there are no users using the server
      3) If the power company is saying that I haven't paid my bill and they are sending "Hank" over to cut me off
      4) Civil unrest, tornado, earthquake, zombies, etc.

      Zombies aren't a good reason for shutting down the servers; that's why our IT guy keeps a shotgun leaned up nearby... at least he says it's for zombies.

  • by Anonymous Coward on Thursday October 30, 2008 @03:43PM (#25574187)

    If you virtualized your servers, you could create a managed power-down/power-up scenario. In the morning, your servers would turn on, your virtualized instances would move around (so they have more power for the day's activities), and then at night they'd retreat to a smaller group of servers. The unused servers could shut down for the night. You could even rotate which servers stay on overnight keeping the virtual servers running to spread the wear around if there is some.

    • by Amarok.Org ( 514102 ) on Thursday October 30, 2008 @04:00PM (#25574417)

      There are a number of tools and products out there to assist this.

      Consider a large (65k+ employees) company that has a several hundred server implementation that they use to process payroll every two weeks. They use a management tool to power them up on Friday, process payroll over the weekend, and shut them down on Monday. The power and cooling cost impact of these several hundred servers *not* running most of the month (6 or so days a month instead of 31) is huge.

      Another (and also in use by the same company) strategy is to virtualize the OS instances, spin those up and down as necessary, and then use something like VMWare's VMotion to maximize usage of the physical boxes - and again use another tool to power down unneeded compute capacity.

      Welcome to the virtual world...

      Lots of prerequisites, but when it works, it's pretty freakin' sweet...

      • by agallagh42 ( 301559 ) on Thursday October 30, 2008 @04:18PM (#25574635) Homepage

        ...and again use another tool to power down unneeded compute capacity.

        And that other tool is ... VMware! DPM (Distributed Power Management) is built right in, and does exactly what you describe.

        Welcome to the virtual world...

        Yup, the game is officially changed.

        • by Amarok.Org ( 514102 ) on Thursday October 30, 2008 @04:30PM (#25574849)

          Actually, the other tool in this case is Cisco's VFrame Data Center. The problem with DPM (and other VMware tools) is that they won't let you move a physical box between ESX clusters. If you have multiple ESX clusters, the physical machine stays with it - powered up or not. With VFrame, the system can be powered down, removed from the cluster, and added to another if/when necessary... including any necessary network configuration (VLAN memberships, etc) and SAN configuration (zoning changes, LUN masking).

          Not that I'm complaining about VMware's solution to this problem; they're actually quite complementary.

          • by afidel ( 530433 )
            Other than a separate DR cluster, which is going to be physically separate servers anyway, why would you want to use more than one ESX cluster?
      • by nabsltd ( 1313397 ) on Thursday October 30, 2008 @04:57PM (#25575223)

        First, let's assume "several hundred" equals 200, and we have exactly 65000 employees. Let's also assume that these extra servers are on for exactly 48 hours. Let's also assume perfect load balancing and distribution of the process over the servers.

        That means that each server processes payroll for 325 employees in 48 hours, or about 7 employees per hour. So, each of these servers is basically the equivalent of a Commodore 64 in computing power. I suggest that the best way to save money at this task is to replace the 200 servers with a single Pentium 4 quad core running at 3GHz.

        The other explanation—that the software is so unbelievably bad that it really does take 8½ minutes for it to run a single employee—is possible, but would going out and buying "QuickBooks" really cost more than the 200 servers to run this awful beast of a payroll program?
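Under the parent's stated assumptions (200 servers, 65,000 employees, 48 hours, perfect balancing), the arithmetic checks out:

```python
employees = 65_000
servers = 200
hours = 48

per_server = employees / servers  # 325.0 employees per server
per_hour = per_server / hours     # ~6.77, i.e. "about 7 per hour"
print(per_server, round(per_hour, 1))  # 325.0 6.8
```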

        • by Silentknyght ( 1042778 ) on Thursday October 30, 2008 @05:35PM (#25575789)
          If you have a consulting or legal business, where your employees bill time by the tenth of an hour, then yes, this could be a much longer process than you estimate. You have to tabulate all the hours for each employee for the month, and then allocate each hour spent on each day to each client, each client's job, each phase of said job, and each task under that job. Spread that across 1000 active clients with 1-2 jobs each, many with multiple phases, and all with multiple task codes. None of that has to do with processing a paycheck for me. The billing cycle isn't about getting a check from your employer, but getting a check from your client. The above may seem overly complex, but they ask for it and they pay the bills.
        • by billcopc ( 196330 ) <> on Thursday October 30, 2008 @07:45PM (#25577337) Homepage

          First of all, I'm with you, I also don't understand what it is about these mythical accounting processes that takes so damn long to process.

          I guess it's like everything else in the software industry:

          - software built by programmers for programmers runs quickly
          - software built by programmers for non-programmers is incoherent
          - software built by non-programmers for non-programmers is slow as molasses
          - software built by non-programmers for programmers is never executed!

      • Re: (Score:3, Interesting)

        by afidel ( 530433 )
        How much spare capacity do they have in that payroll cluster to deal with failed boxes? Is that more than they had before virtualizing? If so, what is the cost of the additional hardware and maintenance vs. the cost of running the previous boxes at idle for 25 days x 12 months x number_of_years in the replacement cycle? For me, electricity (even including AC units, UPSes, etc.) is such a small part of a box's operating cost (less than 10% over 3 years) that it's not worth it to shut them down.
      • by will592 ( 551704 ) on Thursday October 30, 2008 @05:39PM (#25575853)

        But what happens to all of the servers that fail to start up in time to process payroll? It's late? You pay overtime through the nose for the SysAdmins who have to come in and work 24-hour days to bring the machines online? Seriously, I'm not saying it's a bad idea, but I would say that this scenario is probably more like 15 days on, 15 days off. You have to build in time on the front end to make sure the machines are up and running in a stable configuration, and probably time on the back end to apply patches and collect metrics on the machines to make sure they are running properly for next month. I'm not sure that this would save anyone any money in the long run because of the load on their staff during spin-up.

    • Re: (Score:3, Funny)

      by cerberusss ( 660701 )

      your virtualized instances [...] retreat to a smaller group of servers. The unused servers could shut down for the night.

      This is NOT a good idea. We tried this but had the greatest trouble each morning convincing the virtualized instances to come out of their smaller, warmer group of servers into the cold, barely booted-up bigger servers.

      You see, virtualized instances are like kittens.

  • Does it make much of a difference if all your servers plug into a rack mount UPS that draws the same amount of power regardless of the devices running?

  • Like a car... (Score:4, Interesting)

    by fiftysixquarters ( 1078091 ) on Thursday October 30, 2008 @03:45PM (#25574217)
    Seriously, this analogy makes sense. When a car is cruising on the highway, it's able to maintain speed using 4 of its 8 cylinders. Servers could be cycled in a similar fashion. Do you really need 20 web servers running at 3 AM on a Sunday?
    • Re: (Score:3, Insightful)

      by Yvan256 ( 722131 )

      It's not 03:00 everywhere on the planet, nor is it Sunday.

      • Re: (Score:3, Insightful)

        by hesiod ( 111176 )

        You're assuming the web servers are for an international service.

      • Re:Like a car... (Score:4, Insightful)

        by compro01 ( 777531 ) on Thursday October 30, 2008 @04:00PM (#25574413)

        Excepting google and such, I doubt that the vast majority of servers would have such a geographically balanced workload.

        • Re: (Score:3, Insightful)

          by mmkkbb ( 816035 )

          Google likely shunts load to different datacenters based on location.

          • Re: (Score:3, Insightful)

            by Bandman ( 86149 )

            Must be nice to be able to load balance by datacenter as opposed to physical (or virtual) machine.

      • Re: (Score:3, Insightful)

        It's not 03:00 everywhere on the planet nor is it sunday either.

        It's quite likely that, even if your server is serving the public over the internet (which is certainly not the case for all servers), the userbase isn't spread uniformly across all available timezones.

        • Re: (Score:3, Informative)

          by afidel ( 530433 )
          Perhaps not but I don't think my company is atypical, we have people on both the East and West coast with people starting as early as 5am EST and people working as late as 7-8PM PST and by then our partner in India has their early people starting. Sure we have a reduction in usage on the weekends, but that's when we do weekly backups, patching and other maintenance, etc.
      • No, but lots of services (not just Web services) have peak times... the problem is, with traditional architecture, you have to size/plan for peak load - and much of that capacity sits idle waiting for the peak.

        With various solutions out there (discussed in other posts already), you can power down unneeded capacity (or repurpose it) during the down times, and bring it online when necessary.

        No, you're not going to do this with your big ERP application or whatever... but for web farms, compute clusters, app clusters and the like, it can work.

    • by Etrias ( 1121031 )
      Oh thank God. I hadn't seen a car analogy in the last couple of articles and was beginning to wonder if I was on the right site.

      Slashdot? You're soaking in it!
    • ...just how much pr0n gets done at 3a.m. on Sunday? Really, man!

    • by genner ( 694963 )

      Seriously, this analogy makes sense. When a car is cruising on the high way it's able to maintain speed using 4/8 cylinders. Servers could be cycled in a similar fashion. Do you really need 20 web servers running at 3 am on a Sunday?

      So it's like putting too much air in a balloon and something bad happens....?

  • When.. (Score:2, Insightful)

    .. your business doesn't depend on it.

    Seriously... powering down failover boxes or anything like that is not a wise thing to do.

    Imagine some fucked-up situation where your main systems go down... and you can't boot the failover servers for some reason... a long fsck, or whatever.

    You can power off the servers that aren't critical. Why ask slashdot about that?

    Logic, anyone?

  • Use tech so the system can drop to a low-power mode.

    Also, get rid of the AC-to-DC, back-to-AC, then back-to-DC conversion chain and have only one AC-to-DC step.

  • Simple Answer (Score:5, Insightful)

    by jcnnghm ( 538570 ) on Thursday October 30, 2008 @03:46PM (#25574237)

    When you're sure you don't need it to come back up.

    • Re: (Score:3, Funny)

      by Tim Doran ( 910 )

      Hey, if Jurassic Park taught me anything, it's that all you need is a wide-eyed little girl to say "I know this... this is a UNIX system!".

      By then some of your users may have been eaten by velociraptors, but your server will come back online eventually and you'll have saved yourself some power!

  • Not often (Score:5, Insightful)

    by nine-times ( 778537 ) <> on Thursday October 30, 2008 @03:46PM (#25574239) Homepage

    How many of us have servers that don't need to be live? Yeah, I guess there might be a development server, but that assumes that you're not developing. There could be a failover server that does nothing when the primary hasn't failed, but in that case you'd want to be damn sure that the failover will come online without difficulty when it needs to.

    It seems to me like it would be a pretty rare case where this is applicable. I'd sooner ask: can they build servers that selectively power down subsystems that aren't currently in use, without serious harm? For example, I'd consider putting some of my fileservers' hard drives to sleep overnight, but I'd still want the server to be available and the drives to spin back up if I log in from home and need access.

    Mostly, I'd say that if you have servers that you don't need to be live, you might not be using your servers efficiently. It may be worth looking into setting up some kind of VM server with various images that can be brought up on command. But hey, if you do have a server that you can turn off without causing problems, go for it.

    • Re:Not often (Score:4, Interesting)

      by CFTM ( 513264 ) on Thursday October 30, 2008 @03:51PM (#25574305)

      Uh, I am the administrator of a server that archives all email for our company. We no longer use this solution for our email archiving, but according to federal regulations the email needs to be accessible for at least another 26 months. The only people who use the server anymore are the various alphabet soups of regulators who come in twice a year. Maybe I'm the exception rather than the rule, but I can't see a reason to keep the server on...

      • Um...

        You're using an entire server, complete with associated on-line magnetic storage, as a glorified floppy disk?


        Back that stuff up to tape or permanent optical media and decommission that junk.

        I suppose someone proposed that, and got shot down as being too effort-intensive (compared to just letting the server sit).

        Seems kinda sad to me.

        • by CFTM ( 513264 )

          It's on permanent optical media; the databases that make all the data intelligible are stored on the servers. Otherwise you have seven years of email in text files, organized chronologically with absolutely no auditing information attached. There are much more efficient systems to do this today, but it was implemented back around '99-'00 and I've inherited it from multiple other admins... good times!

    • by Splab ( 574204 )

      Actually, when you've got a hot failover, why not use it for some load balancing?

      And the solution is to use virtual servers; some of them support packing down a server and moving it to another physical server. That way you can power down half or more of your physical hardware but still keep all the "servers" online. (Provided the systems aren't doing much at night.)

  • by Errtu76 ( 776778 ) on Thursday October 30, 2008 @03:46PM (#25574241) Journal

    Why have 16 terminal servers (sorry, couldn't think of anything else) running when no more than 10-20 users are on it after working hours? Then in the morning, power them back on again using WakeOnLan.

    And that backup server with a whole lot of disks? Why not only have it running during the night when stuff is being backed up?

  • Er... yeah, let's power down our backup servers that are there as a safety net. What could possibly go wrong?

    I guess these guys don't care about little things like uptime, then?

  • Power Management (Score:3, Interesting)

    by Super_Z ( 756391 ) on Thursday October 30, 2008 @03:50PM (#25574275)
    Powering down your servers tends to introduce response issues. :-)
    Some servers, like the HP ProLiant line, have power management features. Try experimenting with features like these first.
  • by Ngarrang ( 1023425 ) on Thursday October 30, 2008 @03:50PM (#25574289) Journal

    "Servers that sit in idle state for long periods of time are the top candidates for powering down between uses."

    Then virtualize it or combine its function with another server. I see this part of the article as a bad example. It starts by saying that virtualization has helped, and then uses an example that virtualization would solve, NOT power-cycling.

    Maybe it's just me, but when I think of a server, I think of something important that is running and needs to be accessible, on something other than a glorified desktop. If it is important, then it cannot be turned off.

  • XenServer (Score:4, Interesting)

    by Obsession12 ( 554132 ) on Thursday October 30, 2008 @03:55PM (#25574347) Homepage
    Full disclosure, I work for Citrix. Check out XenServer, which can remotely provision server workloads to virtual and bare metal machines - based on load, you can remotely power up resources as needed. I have seen the future, and it is awesome. And green.
    • No amount of software can 'fix' a server in which the power supply refuses to turn on.

      That's a hardware problem. Seems it would indicate that 12v rails would be the way to go in the datacenter.

  • Wrong way round (Score:5, Insightful)

    by symes ( 835608 ) on Thursday October 30, 2008 @03:58PM (#25574399) Journal
    My guess is that managing energy consumption by powering down servers is the wrong way round; there seems to be a fair bit of interest in developing hardware that manages its own energy consumption without loss, either in additional power to bring it back up to speed or in processing lag, etc. Of course, this doesn't address the poster's immediate concerns, to which I have little to add other than that it's probably good to cost in the heightened risk of hardware failure, and therefore the cost of unscheduled downtime.
  • If you're looking to save power, try using cpufreq on Linux or the power settings in Windows Server instead. If you'd rather shut everything down by policy, have cron or the scheduler sync the file systems, shut down at the chosen hour, then power the machines up ten minutes before the start of day (unless you have backups, reports, etc. to run).

    If the power cost doesn't make any difference, power them down twice per quarter to blow the dust and crap out of them. Then keep them on if you're already green.
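On Linux, the cpufreq governor is exposed through sysfs; a hedged sketch of flipping every CPU to the powersave governor (the paths below are the standard kernel layout, but verify on your distribution, and writing them needs root):

```python
from pathlib import Path

def governor_files(base: str = "/sys/devices/system/cpu"):
    """Locate each CPU's scaling_governor file in the standard sysfs layout."""
    return sorted(Path(base).glob("cpu[0-9]*/cpufreq/scaling_governor"))

def set_governor(governor: str = "powersave",
                 base: str = "/sys/devices/system/cpu") -> int:
    """Write the requested governor for every CPU; returns how many were set."""
    files = governor_files(base)
    for f in files:
        f.write_text(governor)
    return len(files)
```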

  • by Anonymous Coward
    Right in the middle of a user having completed the entire form and being about to hit the submit button. Boy, I'd like to see the face of that user!
  • While there may be machines which do not need to be running, they should not be referred to as servers in the traditional sense.

    More hardware failures occur between powering down and finishing booting than you can shake a stick at.

  • Some criteria (Score:4, Informative)

    by Colin Smith ( 2679 ) on Thursday October 30, 2008 @04:03PM (#25574457)

    1: Can your service be load balanced across several identical servers?
    2: Does your service experience predictable but varying load?
    3: Can the state used by your service be rapidly replicated (10 minutes) across newly booted systems?

    Not all server systems make good candidates for shutdown. Web farms do tend to because they fit the criteria above.
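As a rough illustration of criterion 2, the number of identical servers to keep powered on can be sized from the predicted load; the capacity, headroom, and redundancy-floor figures here are invented for the example:

```python
import math

def servers_needed(predicted_rps: float, per_server_rps: float,
                   headroom: float = 0.25, minimum: int = 2) -> int:
    """Servers to keep powered for a predicted load, with spare headroom
    and a floor of `minimum` for redundancy; the rest can be shut down."""
    raw = predicted_rps * (1 + headroom) / per_server_rps
    return max(minimum, math.ceil(raw))

print(servers_needed(1000, 100))  # 13 servers at peak load
print(servers_needed(50, 100))    # 2 at 3 AM (redundancy floor)
```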


  • Try virtualizing your *nix boxen on z/VM, on a z10 mainframe - especially if your business/organization already has a mainframe. z/Linux is just Linux, after all... Apache and Mono are already ported, among many, many other things; what's not ported can be ported in the usual way. The advantage is that you can run virtual servers on the same hardware as your mainframe "legacy" apps, without drastically increasing power consumption.
  • PSU failures (Score:5, Informative)

    by blind biker ( 1066130 ) on Thursday October 30, 2008 @04:05PM (#25574477) Journal

    The problem is the PSU, which fails most often during power-up. Leaving the servers always on has the advantage of avoiding that particular failure mode. Also, other components in the server are prone to failure during power-up, way more often than at steady state. So, powering up your computers is overall a risky moment.

    • by houghi ( 78078 )

      I have often heard this. Does anybody have numbers on it, or is it just a gut feeling that things went wrong when you rebooted, when the reason for the reboot was that the system was already giving problems?

  • The real problem (Score:2, Interesting)

    by Anonymous Coward

    The real problem with powering down servers is that you won't know there's a problem until you power them up again. The result is that problems always occur when you need the servers (otherwise you wouldn't be turning them on), instead of mostly occurring when the servers are not in use, or at least not on all servers at the same time.

    If you power up 1000 servers in approximately 15 minutes (once per day) and 10 don't power up, then you have 10 problems to solve asap. If you don't power up 1

  • by CPE1704TKS ( 995414 ) on Thursday October 30, 2008 @04:10PM (#25574541)

    VMWare has some cool functionality such that if you virtualize all your machines, at night time when the loads are lower, you can consolidate all your VMs onto a smaller number of physical machines, and automatically turn off the physical machines. Then, in the morning, as the loads increase, you can automatically power on the physical machines and move the VMs back onto these physical servers to handle the load. Not sure what it's called but when I heard about it, I thought it was really cool.
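The consolidation the parent describes (VMware ships it as Distributed Power Management) boils down to bin-packing VM loads onto as few hosts as possible; a naive first-fit-decreasing sketch, with made-up load units:

```python
def consolidate(vm_loads, host_capacity):
    """First-fit-decreasing bin packing: place VM loads on as few hosts as
    possible; any host not in the result could be powered down."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no room anywhere: keep another host on
    return hosts

print(consolidate([60, 60, 30, 50], 100))  # [[60, 30], [60], [50]]
```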

  • colo (Score:4, Insightful)

    by donnyspi ( 701349 ) <> on Thursday October 30, 2008 @04:15PM (#25574601) Homepage
    As long as power use is built into the fixed price I pay for the cabinet I rent at the colo, I'll never turn off my servers if I don't need to. Why would I?
    • Re: (Score:3, Insightful)

      by dkf ( 304284 )

      As long as power use is built into the fixed price I pay for the cabinet I rent at the colo, I'll never turn off my servers if I don't need to. Why would I?

      That's why power (or at least power over a certain basic level) shouldn't be part of the fixed price. This is a good thing from a colo operator PoV because their costs are dominated mostly by power: getting the power into the datacenter, and shipping the heat produced by it back out. (Yes, that power almost all becomes heat.) If your colo provider moved to a non-fixed price power regime and you cut your consumption sensibly, you wouldn't be paying so much for that colo.

      In short: if you're getting that power

  • Once servers are virtualized it becomes trivial to run only the number of virtual servers necessary to handle the load. Cloud computing, in essence. A truly distributed model of computing would work just as well, but my guess is that will only arrive sometime after most servers are already virtualized.

    The only impact shutdown and startup should have is on hard disks; all other electronics should take millions of power cycles without any problems as long as the power supplies are gentle. Hard disks for vi

  • Powering down non-critical servers makes sense:

    If a large and potentially long-lasting hydro power outage occurs, your data centre UPS has switched over to the generator, and you want to conserve diesel and stretch how long the generator can keep the DC up.


  • This question is a bit of a non-starter. If you can power it down and no one screams, then it really isn't a 'server' at all. A server serves things and brings with it a promise of availability. If you're not providing the availability, then you're really only talking about some other kind of computer, not a 'server' at all.

    • Umm... what about servers that only need to serve at various times? Payroll servers (see my other posts in this thread), reporting servers that collect data and generate reports on a scheduled frequency (weekly? monthly?), servers that support users/functions that only happen at certain times (inventory server in a retail store?), the list goes on and on.

      You have a fairly narrow view of what a server is... 24x7 availability is not a requirement to be a server.

  • We recently had to power down our main data centre three times in fairly quick succession, due to major power work that had to be carried out (no building UPS or generator, unfortunately... boo!). Doing it was all well and good, but so many things are inter-connected that we found we almost had circular dependencies, so we had to be very careful shutting down and bringing back services. The end result was that something different every time wouldn't shut down, and something different every time wouldn't
  • Perhaps we could invent a way to power down servers in a manner that would not cause sudden temperature changes? What about cooling the server while it's on, warming it while it shuts down, letting it cool gradually again, and then warming it before we go to switch it on again, only switching on once it is already warmed? Maybe we could think of a way to keep every chip and every component at a stable temperature and only allow very gradual temp changes. Then temperature change stress would be
  • I know of a server in a local restaurant. He often takes a "power nap" just after the lunch-time rush is over. Having conserved some energy, he wakes up refreshed and can get back into high-power mode for the evening meals.
  • so powering down servers might not make sense for every IT department.

    If a company only has operations from 9 to 5, it would make sense to power down the servers after everyone else has left: the Admin is the last out and powers down the server, then comes in early (or another Admin does) and powers it back on before 9 AM.

    For a 24-hour IT department it makes sense to use low power settings to shut off hardware when not in use, use power-saving screen savers, and sacrifice some CPU

  • by MrSteve007 ( 1000823 ) on Thursday October 30, 2008 @05:33PM (#25575763)
    No really. On certain times of the year (ie. not summer), I reduce/eliminate the cooling of the server room AC and redirect the server waste heat to warm the rest of the office. Ambient air from the office space is ducted/filtered in near the floor, and a 300 CFM fan takes in the heat at the ceiling above the server tower. I estimate I capture between 9,000 & 12,000 BTU of heat an hour because of this; greatly reducing the HVAC needs of the building during the night.

    This was a large part of why the EPA gave my company one of their annual ENERGY STAR energy conservation awards.

    It never made sense to me to run an AC unit when it's snowing outside.
  • by aarggh ( 806617 ) on Thursday October 30, 2008 @05:36PM (#25575805)

    I would think that if any servers can be regularly powered off, they probably fall into one of two categories: they aren't really needed, or they are only loaded at certain times. In both cases there's a good reason for consolidation, whether physical or virtual; obviously virtual gives the best bang for the buck. I run several ESX clusters and, despite departments not trusting virtual servers, they all come around in the end. I think virtuals really are the only way to go to really save money, space, power, and the all-important UPS load. Don't forget, the myth about hard disks dying after they have been running for a long time and are then allowed to cool after a power failure... well, that myth isn't a myth. It happens, and it happens a lot.

    I cannot think of a situation where powering off a server that is needed provides any benefit whatsoever. You might save a few dollars, but as we all know, when your IT dept struggles to get budget for anything, the risk of failure and the HUGE costs associated with that far outweigh ANY power savings you might achieve.

    I don't give a rat's arse if powering off a few minor servers saves $100 a month, when if the disk dies after it cools down, I then have to go into repair mode to find out what peculiar apps were installed on the server, somehow scrounge another system with no budget, and then rebuild the whole damn thing, only to find out the developers have changed so many configurations that it takes weeks or months before everything is really working as it should. Even more joyful: that usually occurs at night if power is lost, as we have nothing better to do anyway. NO THANKS!

    For companies who have massive budgets for IT and routinely swap out hard disks, etc., maybe that would work. Most companies I know and have worked for over the years, though, tend to view IT as a parasitic loss centre run by people who spend their days watching the blinkety lights and having fun: "it's just a bunch of servers, how hard could it be?" or "how much money could they need? I can go to the PC shop and get a whole quad core rig for $600!"

    But I will say, nothing gets Capital Exp forms signed faster than a major downtime!

  • by 3seas ( 184403 ) on Thursday October 30, 2008 @05:44PM (#25575939) Homepage Journal

    ... is with electronic voting servers, to force paper ballots and more accurate counting.

  • by BanjoBob ( 686644 ) on Friday October 31, 2008 @12:38AM (#25579745) Homepage Journal

    I worked for a client that used a farm of web servers tied to their multiple Oracle systems. The web servers were all Sun Ultras balanced using Resonate as the balancing agent.

    During prime time we needed over 60 servers running but, between 6 PM and 6 AM, we only required about 15 to handle the load. By taking 75% offline every night (not always the same 75%), we reduced power consumption a great deal. Shutting down 4 of the 6 Sun 6500s reduced the data center's power draw further.

    In a year's time, we conserved over $80,000 in power alone and had plenty of opportunity to perform off-hour upgrades and maintenance.

    Failure rate due to power cycling was immeasurable.

  • Never ... (Score:3, Insightful)

    by daveime ( 1253762 ) on Friday October 31, 2008 @01:18AM (#25579957)

    It never makes sense to power down a server that supplies web pages, if you are in e-commerce.

    The moment your potential customer sees a 404 page, you've lost him.

    Indeed, we design all our pages to display in less than 7 seconds, as our research showed that anyone waiting longer than that for, say, search results would be likely to go elsewhere rather than wait.
