Ask Slashdot: Unattended Maintenance Windows? 265

grahamsaa writes: Like many others in IT, I sometimes have to do server maintenance at unfortunate times. 6AM is the norm for us, but in some cases we're expected to do it as early as 2AM, which isn't exactly optimal. I understand that critical services can't be taken down during business hours, and most of our products are used 24 hours a day, but for some things it seems like it would be possible to automate maintenance (and downtime).

I have a maintenance window at about 5AM tomorrow. It's fairly simple — upgrade CentOS, remove a package, install a package, reboot. Downtime shouldn't be more than 5 minutes. While I don't think it would be wise to automate this window, I think with sufficient testing we might be able to automate future maintenance windows so I or someone else can sleep in. Aside from the benefit of getting a bit more sleep, automating this kind of thing means that it can be written, reviewed, and tested well in advance. Of course, if something goes horribly wrong, having a live body keeping watch is probably helpful. That said, we do have people on call 24/7 and they could probably respond capably in an emergency. Have any of you tried to do something like this? What's your experience been like?
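The window described above (upgrade, remove a package, install a package, reboot) is small enough to express as a short script. A minimal sketch, with the package names as placeholders rather than anything from the submission:

```python
#!/usr/bin/env python3
"""Minimal sketch of the window described above: update, swap one package,
reboot. Package names are placeholders, not taken from the submission."""
import subprocess
import sys

OLD_PKG = "legacy-agent"   # hypothetical package being removed
NEW_PKG = "new-agent"      # hypothetical package being installed

def run(cmd):
    """Run one step; stop the whole window on the first failure."""
    print("+ " + " ".join(cmd))
    rc = subprocess.call(cmd)
    if rc != 0:
        sys.exit("FAILED (rc=%d): %s -- aborting before reboot" % (rc, " ".join(cmd)))

run(["yum", "-y", "update"])            # upgrade CentOS
run(["yum", "-y", "remove", OLD_PKG])   # remove a package
run(["yum", "-y", "install", NEW_PKG])  # install a package
run(["shutdown", "-r", "+1"])           # reboot with a one-minute grace period
```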
This discussion has been archived. No new comments can be posted.

  • Puppet. (Score:4, Informative)

    by Anonymous Coward on Friday July 11, 2014 @12:25PM (#47432007)

    Learn and use Puppet.

    • Re:Puppet. (Score:4, Interesting)

      by bwhaley ( 410361 ) <bwhaley@[ ]il.com ['gma' in gap]> on Friday July 11, 2014 @01:47PM (#47432723)

      Puppet is a great tool for automation but does not solve problems like patching and rebooting systems without downtime.

      • by Lumpy ( 12016 )

        Just having a proper IT infrastructure works even better.

        Patch and reboot the secondary server at 11am. Everything checks out, put it online and promote it to primary. All done. Now migrate the changes to the backup, pack up the laptop and head home at 5pm... not a problem. Our SQL setup has three servers: we upgrade one and promote it, then upgrade #2, while #3 stays at the previous revision until 5 days have passed so we have a rollback. Yes, data is synced across all three; worst case, if TWO servers were to expl

  • by Anonymous Coward on Friday July 11, 2014 @12:27PM (#47432021)

    Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

    • by Anonymous Coward

      Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

      This is the correct answer. I promise you that at some point, something will fail, and you will have failed by not being there to fix it immediately.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

        This is the correct answer. I promise you that at some point, something will fail, and you will have failed by not being there to fix it immediately.

        Use of monitoring and alerting can alleviate this; access to the system through a VPN can provide a near-immediate response. It also helps if critical services can be made not to be single points of failure.

      • by 0123456 ( 636235 )

        I promise you that at some point, something will fail, and you will have failed by not being there to fix it immediately.

        Yeah, but this way, you won't be the one who has to fix it :).

        Of course, you might have to start looking at job ads the next day...

    • Exactly, and when it comes to maintenance windows one should never forget Murphy. If something can go wrong it will, and being there with a console cable and a laptop or tablet to get into a problem device is a good thing.

    • Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

      He might just need a better boss--it sounds like this one expects the guy to stay up all night for maintenance, then come in at 9am sharp, as if he didn't just do a full day's work in the middle of the night.

      Rather than automating, he should be lobbying for the right to sleep on maintenance days by shifting his work schedule so that his "maintenance time" IS his workday. "Off-hour work" doesn't mean "Work all day Monday, all night Monday night Tuesday morning, and all day Tuesday." Or, at least, it shouldn'

  • Murphy says no. (Score:5, Insightful)

    by wbr1 ( 2538558 ) on Friday July 11, 2014 @12:28PM (#47432033)
    You should always have a competent tech on hand for maintenance tasks. Period. If you do not, Murphy will bite you, and then, instead of having it back up by peak hours, you are scrambling and looking dumb. In your current scenario, say the patch unexpectedly breaks another critical function of the server. It happens; if you have been in IT for any length of time, you have seen it happen. Bite the bullet and have a tech on hand to roll back the patch. Give them time off at another point, or pay them extra for night hours, but them's the breaks when dealing with critical services.
    • Re:Murphy says no. (Score:5, Interesting)

      by bwhaley ( 410361 ) <bwhaley@[ ]il.com ['gma' in gap]> on Friday July 11, 2014 @12:50PM (#47432267)

      The right answer to this is to have redundant systems so you can do the work during the day without impacting business operations.

    • Re:Murphy says no. (Score:5, Informative)

      by David_Hart ( 1184661 ) on Friday July 11, 2014 @01:00PM (#47432357)

      Here is what I have done in the past with network gear:

      1. Make sure that you have a test environment that is as close to your production environment as possible. In the case of network gear, I test on the exact same switches with the exact same firmware and configuration. For servers, VMware is your friend...

      2. Build your script, test, and document the process as many times as necessary to ensure that there are no gotchas. This is easier for network gear as there are fewer prompts and options.

      3. Build a backup job into your script, schedule a backup with enough time to complete before your script runs, or make your script dependent on the backup job completing successfully. A good backup is your friend. Make a local backup if you have the space.

      4. Schedule your job.

      5. Get up and check that the job completed successfully, either when it is scheduled to finish or before the first user is expected to start using the system. Leave enough time to perform a restore, if necessary.

      As you can probably tell, doing this in an automated fashion would take more time and effort than babysitting the process yourself. However, it is worth it if you can apply the same process to a bunch of systems (e.g., you have a bunch of UNIX boxes on the same version and you want to upgrade them all). In our environment we have a large number of switches, etc. that are all on the same version. Automation is pretty much the only option given our scope.
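A rough sketch of how steps 3-5 above might hang together for one host; the backup-marker path, change script, and service names are assumptions for illustration, not details from the comment:

```python
#!/usr/bin/env python3
"""Sketch of steps 3-5 above: require a fresh backup, run the change,
then verify. Paths, scripts and service names are illustrative only."""
import os
import subprocess
import sys
import time

BACKUP_MARKER = "/var/backups/last-good"            # hypothetical: touched by the backup job
CHANGE_SCRIPT = "/usr/local/sbin/apply-change.sh"   # hypothetical maintenance script
SERVICES = ["httpd", "postgresql"]                  # hypothetical services to verify
MAX_BACKUP_AGE = 6 * 3600                           # backup must be newer than six hours

def backup_is_fresh():
    try:
        return time.time() - os.path.getmtime(BACKUP_MARKER) < MAX_BACKUP_AGE
    except OSError:
        return False

def main():
    # Step 3: do not proceed without a recent, successful backup.
    if not backup_is_fresh():
        sys.exit("No fresh backup found -- aborting the maintenance window")

    # Step 4: the scheduled change itself.
    if subprocess.call([CHANGE_SCRIPT]) != 0:
        sys.exit("Change script failed -- leave time to perform a restore")

    # Step 5: verify before the first user shows up.
    for svc in SERVICES:
        if subprocess.call(["systemctl", "is-active", "--quiet", svc]) != 0:
            sys.exit("Service %s is not running after maintenance" % svc)
    print("Maintenance completed and verified")

if __name__ == "__main__":
    main()
```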

      • Yes.
        Also, this is one of those scenarios where virtualization pays. You can simply spin up a new set of boxes (ideally via Puppet, Chef, whatever) and cut over to it once the new cluster has been thoroughly tested and tested some more. A human eye watching/managing the cutover is still recommended, if not required.


    • say the patch unexpectedly breaks another critical function of the server. It happens, if you have been in IT any time you have seen it happen

      Yes, this happens all the time. And really it's a case for doing the upgrade when people are actually using the system. If the patch happens at 2am (chosen because nobody is using it at 2am), nobody is going to notice it until the morning. The morning, when the guy who put in the patch is still trying to recover from having to work at 2am. At the very leas

    • This. No matter what you do, this maintenance and downtime is hundreds of times more likely to go wrong than normal running time. What is the point of even employing IT if they are not around for this window?
    • by smash ( 1351 )

      Yup. Although, that said, if you have a proper test environment, like say, a snap-clone of your live environment and an isolated test VLAN, you can do significant testing on copies of live systems and be pretty confident it will work. You can figure out your back-out plan, which may be as simple as rolling back to a snapshot (or possibly not).

      Way too many environments have no test environment, but these days with the mass deployment of FAS/SAN and virtualization, you owe it to your team to get that shi

    • say the patch unexpectedly breaks another critical function of the server.

      When this happens, it usually takes a lot longer to fix than it takes to drive in to work, because the way it breaks is unexpected. The proper method is to have an identical server get upgraded with this automatic maintenance window method the day before while you're at work or at least hours before the primary system so that you can halt the automatic method remotely before it screws up the primary system. If the service isn't important enough, let your monitoring software wake you up if there's a failur

  • by grasshoppa ( 657393 ) on Friday July 11, 2014 @12:29PM (#47432041) Homepage

    ...and while I'm reasonably sure I could execute automated maintenance windows with little to no impact on business operations, I'm not certain. So I don't do it.

    If there were more at stake, if the risk-versus-benefit balance tipped more in my company's favor, I might test and implement it. But just to catch an extra hour or two of sleep? Not worth it; I want a warm body watching the process in case it goes sideways. 9 times out of 10, that warm body is me.

    • by mlts ( 1038732 ) on Friday July 11, 2014 @01:56PM (#47432809)

      Even on fairly simple things (yum updates from mirrors, AIX PTFs, Solaris patches, or Windows patches released from WSUS), I like babysitting the job.

      There is a lot that can happen. A backup can fail, then the update can fail. Something relatively simple can go ka-boom. A kernel update doesn't "take" and the box falls back to the wrong kernel.

      Even something as stupid as having a bootable CD in the drive and the server deciding it wants to run the OS from that rather than from the FCA or onboard drives. Being physically there so one can rectify that mistake is a lot easier when planned, as opposed to having to get up and drive to work at a moment's notice... and by that time, someone else has likely discovered it and is sending scathing e-mails to you, CC'd to five tiers of management.

    • I always test in advance, have a rollback plan, only automate low-risk maintenance, test the results remotely, and have a warm body on backup should the need arise. Saves a little sleep since I don't babysit the entire process, just the result. I don't have physical access to most of the equipment since it's scattered across multiple data centers, so I do most of my work remotely anyway.

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday July 11, 2014 @12:29PM (#47432047)
    Comment removed based on user account deletion
  • Attended automation (Score:3, Interesting)

    by Anonymous Coward on Friday July 11, 2014 @12:30PM (#47432053)

    Attended automation is the way to go. You gain all the advantages of documentation, testing, etc. If the automation goes smoothly, you only have to watch it for 5 minutes. If it doesn't, then you can fix it immediately.

  • You just need to schedule some of your days as offset days. Work from 4pm to midnight some days so that you can get some work done when others aren't around. Some days require you to be around people; some days demand that you be alone.

    Or you can just work 16-hour days like the rest of us and wear it as a badge of honor.

    If you are your own boss and do this, you can earn enough money to take random weeks off from work with little to no notice so that you can travel the world, and do some recruiting while doing

    • by DarkOx ( 621550 )

      Pretty much this. If your company is big enough, or derives enough revenue from IT systems that require routine off-hours maintenance, it should staff for that.

      That is not to say they need to if it's just Patch Tuesdays, or the occasional rare major internal code deployment that happens a couple of times a year or so. For that, you as the admin should suck it up and roll out of bed early once in a while. Hopefully your bosses are nice and let you have some flextime for it. Knock off at 3pm on Fridays thos

    • Or you can just work 16-hour days like the rest of us and wear it as a badge of honor.

      IMO, there is no honor in working more hours than you're actually being paid to work. Not only are you hurting yourself, you're keeping someone else from being able to take that job.

      If you've got 80 hours worth of work to do at your company, and one guy with a 40-hour-a-week contract, you need to hire another person, not convince the existing guy that he should be proud to be enslaved. Morally speaking.

      • by rikkards ( 98006 )

        Not only that, but a company that lets someone do that is shooting itself in the foot. Sooner or later the 80-hour-a-week guy is going to leave, and good luck getting someone who is
        A: willing to do it coming in, and
        B: not just taking the job until something better comes along.

        It's not a badge of honor, just an example of rationalization for a crappy job.

    • Or you can just work 16-hour days like the rest of us and wear it as a badge of sucker.

      FTFY

  • Offshore (Score:5, Insightful)

    by pr0nbot ( 313417 ) on Friday July 11, 2014 @12:34PM (#47432091)

    Offshore your maintenance jobs to someone in the correct timezone!

  • by gstoddart ( 321705 ) on Friday July 11, 2014 @12:34PM (#47432095) Homepage

    You don't monitor maintenance windows for when everything goes well and is all boring. You monitor them for when things go all to hell and someone needs to correct it.

    In any organization I've worked in, if you suggested that, you'd be more or less told "too damned bad, this is what we do".

    I'm sure your business users would love to know that you're leaving it to run unattended and hoping it works. No, wait, I'm pretty sure they wouldn't.

    I know lots of people who work off-hours shifts to cover maintenance windows. My advice to you: suck it up, princess, that's part of the job.

    This just sounds like risk taking in the name of being lazy.

  • by arse maker ( 1058608 ) on Friday July 11, 2014 @12:35PM (#47432101)

    Load balanced or mirrored systems. You can upgrade part of it any time, validate it, then swap it over to the live system when you are happy.

    Having someone with little or no sleep doing critical updates is not really the best strategy.
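One way to spell out that "upgrade part of it, validate it, then swap it over" loop, assuming a pool you can drain one node at a time; the lb-ctl drain/enable commands, host names, and health URL are invented for illustration:

```python
#!/usr/bin/env python3
"""Sketch of a rolling upgrade across a load-balanced pool: drain one node,
patch it, health-check it, re-enable it, then move on. The pool members,
health endpoint, and lb-ctl commands are placeholders for whatever the
actual load balancer provides."""
import subprocess
import sys
import time
import urllib.request

NODES = ["web1.example.com", "web2.example.com", "web3.example.com"]
HEALTH_URL = "http://{}:8080/health"     # hypothetical health-check endpoint

def healthy(node, tries=30, wait=10):
    for _ in range(tries):
        try:
            if urllib.request.urlopen(HEALTH_URL.format(node), timeout=5).getcode() == 200:
                return True
        except OSError:
            pass
        time.sleep(wait)
    return False

for node in NODES:
    subprocess.call(["lb-ctl", "drain", node])         # placeholder drain command
    if subprocess.call(["ssh", node, "yum -y update"]) != 0:
        sys.exit("Patching failed on %s; the rest of the pool is untouched" % node)
    subprocess.call(["ssh", node, "reboot"])           # connection drops here; rc not meaningful
    if not healthy(node):
        sys.exit("%s never came back healthy; stopping the rollout" % node)
    subprocess.call(["lb-ctl", "enable", node])        # placeholder re-enable command
    print("%s upgraded and back in the pool" % node)
```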

    • by Shoten ( 260439 ) on Friday July 11, 2014 @12:45PM (#47432207)

      Load balanced or mirrored systems. You can upgrade part of it any time, validate it, then swap it over to the live system when you are happy.

      Having someone with little or no sleep doing critical updates is not really the best strategy.

      First off, you can't mirror everything. Lots of infrastructure and applications are either prohibitively expensive to do in a High Availability (HA) configuration or don't support one. Go around a data center and look at all the Oracle database instances that are single-instance...that's because Oracle rapes you on licensing, and sometimes it's not worth the cost to have a failover just to reach a shorter RTO target that isn't needed by the business in the first place. As for load balancing, it normally doesn't do what you think it does...with virtual machine farms, sure, you can have N+X configurations and take machines offline for maintenance. But for most load balancing, the machines operate as a single entity...maintenance on one requires taking them all down because that's how the balancing logic works and/or because load has grown to require all of the systems online to prevent an outage. So HA is the only thing that actually supports the kind of maintenance activity you propose.

      Second, doing this adds a lot of work. Failing from primary to secondary on a high availability system is simple for some things (especially embedded devices like firewalls, switches and routers) but very complicated for others. It's cheaper and more effective to bump the pay rate a bit and do what everyone does, for good reason...hold maintenance windows in the middle of the night.

      Third, guess what happens when you spend the excess money to make everything HA, go through all the trouble of doing failovers as part of your maintenance...and then something goes wrong during that maintenance? You've just gone from HA to single-instance, during business hours. And if that application or device is one that warrants being in an HA configuration in the first place, you're now in a bit of danger. Roll the dice like that one too many times, and someday there will be an outage...of that application/device, followed immediately by an outage of your job. It does happen, it has happened, I've seen it happen, and nobody experienced who runs a data center will let it happen to them.

      • In my experience, if your load-balancing solution requires all your nodes to be available, and you can't remove one or more nodes without affecting the remainder, it's a piss-poor load balancing solution. Good load balancing solutions are fault tolerant up to, and including, absent or non-responsive nodes and any load balanced system that suffers an outage due to removing a single node is seriously under-resourced.
      • by mlts ( 1038732 )

        There is also the fact that some failure modes will take both sides down. I've seen disk controllers overwrite shared LUNs, hosing both sides of the HA cluster (which is why I try to at least quiesce the DB or application so RTO/RPO in case of that failure mode is acceptable.)

        HA can also be located on different points on the stack. For example, an Oracle DB server. It can be clustered on the Oracle application level (active/active or active/passive), or it can be sitting in a VMWare instance, clustered u

    • Load balanced or mirrored systems. You can upgrade part of it any time, validate it, then swap it over to the live system when you are happy.

      Having someone with little or no sleep doing critical updates is not really the best strategy.

      Oh my $deity, this!

      I've worked in environments with test-to-live setups, and ones without, and the former is always, always a smoother running system than the latter.

  • by skydude_20 ( 307538 ) on Friday July 11, 2014 @12:35PM (#47432105) Journal
    If these services are as critical as you say, I would assume you have some sort of redundancy, at least a second server somewhere. If so, treat each as "throw away": build out what you need on the alternative server, swing DNS, and be done. Rinse and repeat for the next 'upgrade'. Then do your work in the middle of the day. See Immutable Servers: http://martinfowler.com/bliki/... [martinfowler.com]
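The "swing DNS" step of that immutable-server cutover can be a single dynamic-update transaction; a sketch using nsupdate, with the zone, names, TTL, and addresses invented and authentication (e.g. a TSIG key) left out:

```python
#!/usr/bin/env python3
"""Sketch of the DNS swing for an immutable-server cutover: point the service
name at the freshly built replacement. Zone, names, TTL and addresses are
invented for illustration; authentication is omitted."""
import subprocess
import sys

SERVICE_NAME = "app.example.com."
NEW_ADDRESS = "192.0.2.20"       # the replacement server, already built and tested
DNS_SERVER = "ns1.example.com"
TTL = 60                         # keep it short so a rollback swing is quick

commands = "\n".join([
    "server %s" % DNS_SERVER,
    "zone example.com",
    "update delete %s A" % SERVICE_NAME,
    "update add %s %d A %s" % (SERVICE_NAME, TTL, NEW_ADDRESS),
    "send",
]) + "\n"

# nsupdate reads its small update language on stdin.
result = subprocess.run(["nsupdate"], input=commands, text=True)
if result.returncode != 0:
    sys.exit("DNS update failed; the old server is still answering")
print("DNS now points %s at %s" % (SERVICE_NAME, NEW_ADDRESS))
```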
  • Why would you want to automate someone or yourself out of a job? I realized years ago that Microsoft was working hard to automate me out of my contracts. It's almost done, why accelerate the inevitable?
    • by smash ( 1351 )

      This is why you move the fuck on and adapt. If your job is relying on stuff that can be done by a shell script, you need to up-skill and find another job. Because if you don't do it, someone like myself will.

      And we'll be getting paid more due to being able to work at scale (same shit for 10 machines or 10,000 machines), doing less work and being much happier doing it.

  • by terbeaux ( 2579575 ) on Friday July 11, 2014 @12:39PM (#47432147)

    Everyone here is going to tell you that a human needs to be there because that is their livelihood. Any task can be automated at a cost. I am guessing that it is not currently your job to automate maintenance tasks; otherwise you wouldn't be asking. Somewhere up your chain they decided that, for the uptime / quality of service, it is more cost effective to have a human do it. That does not mean that you cannot present a case showing otherwise. I highly suggest that you win approval and backing before taking time to try to automate anything.

    Out of curiosity, are they VMs?

    • Everyone here is going to tell you that a human needs to be there because that is their livelihood.

      No, many of us will tell you a human needs to be there because we've been in the IT industry long enough to have seen stuff go horribly wrong, and have learned to plan for the worst because it makes good sense.

      I had the misfortune of working with a guy once who would make major changes to live systems in the middle of the day because he was a lazy idiot. He once took several servers offline for a few days bec

    • by smash ( 1351 )

      Alternatively, perhaps somewhere up the chain they have no idea what can be done (this IT shit isn't their area of expertise), and are not being told by their IT department how to actually fix the problem properly. Rather, they are just applying band-aid after band-aid for breakage that happens.

      It is my experience that if you outline the risks, the costs and the possible mitigation strategies to eliminate the risk, most sensible businesses are all ears. At the very least, if they don't agree on the spo

    • OP here. Yes, they are VMs in most cases. The only machines we don't virtualize are database servers.
  • ...service bounces that are happening all the time. When one occurs, and/or if there are any other issues, I can send myself an email. My BlackBerry has filters that allow an alarm to go off and wake me during the night. That would seem to meet your needs.
  • Although I do feel this is the nature of the beast when working in a true IT position where businesses rely on their systems nearly 100% of the time, there are some smart ways to go about it. I'm not exactly sure what type of environment you're using, but if you use something like VMware's vSphere product, or Microsoft's Hyper-V, both allow for "live migrations". Why not virtualize all of your servers first of all, make a snapshot, perform the maintenance, and live migrate the VMs? You could do it right in
  • by jwthompson2 ( 749521 ) on Friday July 11, 2014 @12:43PM (#47432183) Homepage

    If you really want to automate this sort of thing you should have redundant systems with working and routinely tested automatic fail-over and fallback behavior. With that in place you can more safely setup scheduled maintenance windows for routine stuff and/or pre-written maintenance scripts. But, if you are dealing with individual servers that aren't part of a redundancy plan then you should babysit your maintenance. Now, I say babysit because you should test and automate the actual maintenance with a script to prevent typos and other human errors when you are doing the maintenance on production machines. The human is just there in case something goes haywire with your well-tested script.

    Fully automating these sorts of things is out of reach for many small to medium-sized firms because they don't want to, or can't, invest in the added hardware to build out redundant setups that can continue operating when one participant is offline for maintenance. So the size of your operation, and how much your company is willing to invest to "do it the right way", is the limiting factor in how much of this sort of task you are going to be able to automate effectively.

  • by psergiu ( 67614 ) on Friday July 11, 2014 @12:43PM (#47432187)

    A friend of mine lost his job over a similar "automation" task on Windows.

    The upgrade script was tested on a lab environment that was supposed to be exactly like production (but it turned out it wasn't: someone had tested something earlier without telling anyone and did not revert it). The upgrade script was scheduled to run on production during the night.

    Result: the \windows\system32 directory was deleted from all the "upgraded" machines. Hundreds of them.

    On the Linux side, I personally had Red Hat make some "small" changes on the storage side, and PowerPath got disabled at the next boot after patching. An unfortunate event, since all Volume Groups were using /dev/emcpower devices. Or Red Hat making some "small" changes in the clustering software from one month to the next. No budget for test clusters. Production clusters refusing to mount shared filesystems after patching. Thankfully, in both cases the admins were up and online at 1AM when the patching started and we were able to fix everything in time.

    Then you can have glitchy hardware/software deciding not to come back up after a reboot. RHEL GFS clusters are known to randomly hang/crash at reboot. HP blades sometimes have to be physically removed and reinserted to boot.

    Get the business side to tell you how much the downtime is going to cost the company until:
    - Monitoring software detects that something is wrong;
    - Alert reaches sleeping admin;
    - Admin wakes up and is able to reach the servers.
    Then see if you can risk it.
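Putting rough numbers on that checklist is a one-screen calculation; every figure below is made up and only shows the shape of the estimate:

```python
#!/usr/bin/env python3
"""Back-of-envelope for the checklist above. Every number here is made up;
plug in your own detection, paging, wake-up and repair times."""
detect_min   = 5      # monitoring notices something is wrong
page_min     = 5      # alert reaches the sleeping admin
respond_min  = 20     # admin wakes up, gets online, reaches the servers
repair_min   = 30     # actual fix, assuming the rollback works on the first try
cost_per_min = 250.0  # what the business says an outage minute costs

outage_min = detect_min + page_min + respond_min + repair_min
print("Unattended worst case: %d minutes, ~$%.0f" % (outage_min, outage_min * cost_per_min))
# Compare that against the cost of paying someone to watch a 30-minute window.
```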

  • Can't you make some kind of setup that triggers if the update fails and alerts you / wakes you up with noise from your smartphone, etc.?

    Or, like the other poster who beat me to it suggested, off-load your work to someone in a country where your 5AM is their midday.
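That kind of trigger can be as small as wrapping the update and mailing an address your phone treats as a loud alert; a sketch, with the command, relay, and addresses as placeholders:

```python
#!/usr/bin/env python3
"""Run the maintenance command and page the on-call address if it fails.
The SMTP relay and addresses are placeholders."""
import smtplib
import subprocess
from email.message import EmailMessage

UPDATE_CMD = ["yum", "-y", "update"]   # the unattended maintenance step
ONCALL = "oncall@example.com"          # address your phone treats as a loud alert
SMTP_RELAY = "smtp.example.com"

proc = subprocess.run(UPDATE_CMD, capture_output=True, text=True)
if proc.returncode != 0:
    msg = EmailMessage()
    msg["Subject"] = "Unattended update FAILED (rc=%d)" % proc.returncode
    msg["From"] = "maintenance@example.com"
    msg["To"] = ONCALL
    msg.set_content(proc.stdout[-2000:] + "\n" + proc.stderr[-2000:])
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)
```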

  • By proving that your job can be largely automated, you are eroding the reasons to keep you employed.

    Sure, we all know it's a bad idea to set things on autopilot because eventually something will break badly. But do your managers know that?
    • by smash ( 1351 )

      Automating shit that can be automated, so that you can actually do things that benefit the business instead of simply maintaining the status quo, is not a bad thing. Doing automatable drudge work when it could be automated is just stupid. Muppets who can click next through a Windows installer or run apt-get, etc. are a dime a dozen. IT staff who can get rid of that shit so they can actually help people get their own jobs done better are way more valuable.

      The job of IT is to enable the business to conti

  • by mythosaz ( 572040 ) on Friday July 11, 2014 @01:00PM (#47432353)

    Do you plan on automating the end-user testing and validation as well?

    Countless system administrators have confirmed the system was operational after a change without throwing it to real live testers, only to find that, well, it wasn't.

  • by ledow ( 319597 )

    Every second you save automating the task will be taken out of your backside when it goes wrong (see the recent article where a university SCCM server formatted itself and EVERY OTHER MACHINE on campus) and you're not around to stop it or fix it.

    Honestly? It's not worth it.

    Work out of normal hours, or schedule downtime windows in the middle of the day.

    • by smash ( 1351 )
      That example was due to incompetence, not due to automation. Whilst recovering from that would be a pain in the ass, if you are unable to recover at all, you have a major DR oversight.
  • by thecombatwombat ( 571826 ) on Friday July 11, 2014 @01:01PM (#47432369)

    First: I do something like this all the time, and it's great. Generally, I _never_ log into production systems. Automation tools developed in pre-prod do _everything_. However, it's not just a matter of automating what a person would do manually.

    The problem is that your maintenance for simple things like updating a package requires downtime. If you have better redundancy, you can do 99% of normal boring maintenance with zero downtime. I say if you're in this situation you need to think about two questions:

    1) Why do my systems require downtime for this kind of thing? I should have better redundancy.
    2) How good are my dry runs in pre-prod environments? If you use a system like Puppet for *everything* you can easily run through your puppet code as you like in non-production, then in a maintenance window you merge your Puppet code, and simply watch it propagate to your servers. I think you'll find reliability goes way up. A person should still be around, but unexpected problems will virtually vanish.

    Address those questions, and I bet you'll find your business is happy to let you do "maintenance" at more agreeable times. It may not make sense to do it in the middle of the business day, but deploying Puppet code at 7 PM and monitoring is a lot more agreeable to me than signing on at 5 AM to run patches. I've embraced this pattern professionally for a few years now. I don't think I'd still be doing this kind of work if I hadn't.
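A sketch of the "dry run in pre-prod, then merge and watch" gate described above; the staging hostname is a placeholder, and the exit codes follow Puppet's documented --detailed-exitcodes convention (0 = no changes, 2 = changes would be applied, 4 or 6 = failures):

```python
#!/usr/bin/env python3
"""Sketch of a pre-merge gate: run the Puppet agent in noop mode on a staging
host and only call the change safe to merge if nothing would fail."""
import subprocess
import sys

STAGING_HOST = "staging01.example.com"   # hypothetical pre-prod box

rc = subprocess.call([
    "ssh", STAGING_HOST,
    "puppet agent --test --noop --detailed-exitcodes",
])
if rc in (0, 2):
    print("Noop run clean on %s; safe to merge and watch it propagate" % STAGING_HOST)
else:
    sys.exit("Noop run reported failures (rc=%d); do not merge" % rc)
```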

    • by pnutjam ( 523990 )
      Sounds awesome. I embraced solutions like that before I ended up in the over segmented large company. Now, not so much. I have to open a ticket to scratch my ass.
    • by 0123456 ( 636235 )

      1) Why do my systems require downtime for this kind of thing? I should have better redundancy.

      True. Last year we upgraded all our servers to a new OS with a wipe and reinstall, and the only people who noticed were the ones who could see the server monitoring screens. The standby servers took over and handled all customer traffic while we upgraded the others.

  • You're trading caution for convenience.

    I have automated some things, such as overnight patch installation, only to wake up to a broken server, despite the patches being heavily tested and known to work in 100% of cases before, only for them to fail when nobody was watching.

    I urge you to only consider unattended automation overnight when it's for a system that can reasonably incur unexpected downtime without jeopardizing your job and/or the organization. If it's critical -- DO NOT AUTOMATE.

    You've been w

  • Just write a simple Perl script to handle it; it would take about an hour to develop and test and you'd be good to go.
  • Are you talking about servers/services? If so, every service should have some sort of failover strategy to other hardware. That way anything you need to work on can be failed over during business hours and brought back.
  • I get paid for cleaning up after things that don't work right the first time.
  • That way, when things go south, you have time to right the ship before the early birds start logging in at 5:30.
  • by dave562 ( 969951 ) on Friday July 11, 2014 @01:55PM (#47432799) Journal

    If you want to progress in your IT career, you need to figure out how to automate basic system operations like maintenance and patching. Having to actually be awake at 2:00am to apply patches is rookie status. Sometimes it is unavoidable, but it should not be the default stance.

    My environment is virtual, so our workflow is basically snapshot VM, patch, test. If the test fails, roll back the snapshot and try again (if time is available) or delay until later. If the test is successful, we hold onto the snapshot for three days just in case users find something that we missed. If everything is good after three days, we delete the snapshot.

    We have a dev environment that mirrors production that we can use for patch testing, upgrade testing, etc. Due to testing, we rarely have problems with production changes. If we do, the junior guys escalate to someone who can sort it out. Our SLAs are defined to give us plenty of time to resolve issues that occur within the allocated window. (Typically ~4 hours)

    In the grand scheme of things, my environment is pretty small. We have ~1500 VMs. We manage it with three people and a lot of automation.
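A sketch of that snapshot, patch, test, rollback loop, using libvirt's virsh as a stand-in hypervisor CLI (the commenter's platform isn't specified, and vSphere or Hyper-V would use their own tooling); the VM and guest names are placeholders:

```python
#!/usr/bin/env python3
"""Sketch of the snapshot -> patch -> test -> rollback loop described above,
using libvirt's virsh as a stand-in hypervisor CLI. Names are placeholders."""
import subprocess
import sys

VM = "app01"                     # hypervisor's name for the guest
GUEST = "app01.example.com"      # address used to ssh into the guest
SNAP = "pre-patch"

def run(cmd):
    return subprocess.call(cmd)

# 1. Snapshot first so there is something to roll back to.
if run(["virsh", "snapshot-create-as", VM, SNAP]) != 0:
    sys.exit("Could not snapshot %s; not patching" % VM)

# 2. Patch inside the guest.
patched = run(["ssh", GUEST, "yum -y update"]) == 0

# 3. Test: a placeholder health check over ssh.
healthy = patched and run(["ssh", GUEST, "systemctl is-active --quiet httpd"]) == 0

if not healthy:
    # 4a. Roll back and try again later.
    run(["virsh", "snapshot-revert", VM, SNAP])
    sys.exit("Patch or test failed on %s; reverted to snapshot %s" % (VM, SNAP))

# 4b. Keep the snapshot for a few days before deleting it, as described above:
#     virsh snapshot-delete app01 pre-patch
print("%s patched and healthy; snapshot %s retained for rollback" % (VM, SNAP))
```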

  • by Taelron ( 1046946 ) on Friday July 11, 2014 @02:29PM (#47433019)
    Unless you are updating the kernel, there are few times you need to reboot a CentOS box. Unless your app has a memory leak.

    The better way to go about it has already been pointed out above. Have several systems, load balance them in a pool, take one node out of the pool, work on it, return it to the pool, then repeat for each remaining system. No outage time, and users are none the wiser to the update.
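The "only reboot for a kernel update" point above is easy to check mechanically by comparing the running kernel with the newest installed kernel package; a sketch for a CentOS box (the lexical sort is a rough stand-in for proper RPM version comparison):

```python
#!/usr/bin/env python3
"""Sketch of the 'do I actually need a reboot?' check implied above: compare
the running kernel with the newest installed kernel package on a CentOS box."""
import subprocess

def newest_installed_kernel():
    # rpm can print just the version-release-arch of each installed kernel package.
    out = subprocess.check_output(
        ["rpm", "-q", "kernel", "--qf", "%{VERSION}-%{RELEASE}.%{ARCH}\n"],
        text=True)
    return sorted(out.split())[-1]   # crude lexical sort; rpmdev-vercmp is stricter

running = subprocess.check_output(["uname", "-r"], text=True).strip()
newest = newest_installed_kernel()

if running == newest:
    print("Running kernel %s is current; no reboot needed" % running)
else:
    print("Running %s but %s is installed; schedule a reboot" % (running, newest))
```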
  • by ilsaloving ( 1534307 ) on Friday July 11, 2014 @04:45PM (#47434185)

    The OP is missing the point. Of *course* you can automate updates. You don't even need an automation system. It can be as simple as writing a bash script.

    The point is... what happens when something goes wrong? If all goes well, then there's no problem. But if something does go wrong, you no longer have anyone able to respond because nobody's paying attention. So you come in the next morning with a down server and a clusterf__k on your hands.

  • by grahamsaa ( 1287732 ) on Friday July 11, 2014 @06:48PM (#47435115)
    Thanks for all of the feedback -- it's useful.

    A couple clarifications: we do have redundant systems, on multiple physical machines with redundant power and network connections. If a VM (or even an entire hypervisor) dies, we're generally OK. Unfortunately, some things are very hard to make HA. If a primary database server needs to be rebooted, generally downtime is required. We do have a pretty good monitoring setup, and we also have support staff that work all shifts, so there's always someone around who could be tasked with 'call me if this breaks'. We also have a senior engineer on call at all times. Lately it's been pretty quiet because stuff mostly just works.

    Basically, up to this point we haven't automated anything that will / could be done during a maintenance window that causes downtime on a public facing service, and I can understand the reasoning behind that, but we also have lab and QA environments that are getting closer to what we have in production. They're not quite there yet, but when we get there, automating something like this could be an interesting way to go. We're already starting to use Ansible, but that's not completely baked in yet and will probably take several months.

    My interest in doing this is partly that sleep is nice, but really, if I'm doing maintenance at 5:30 AM for a window that has to be announced weeks ahead of time, I'm a single point of failure, and I don't really like that. Plus, considering the number of systems we have, the benefits of automating this particular scenario are significant. Proper testing is required, but proper testing (which can also be automated) can be used to ensure that our lab environments do actually match production (unit tests can be baked in). Initially it will take more time, but in the long run anything that can eliminate human error is good, particularly at odd hours.

    Somewhat related: about a year ago, my cat redeployed a service. I was up for an early morning window and pre-staged a few commands chained with &&'s, went downstairs to make coffee, and came back to find that the work had been done. Too early. My cat was hanging out on the desk. The first key he hit was "enter", followed by a bunch of garbage, so my commands were faithfully executed. It didn't cause any serious trouble, but it could have under different circumstances. Anyway, thanks for the useful feedback :)
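For what it's worth, one cat-proof (and fat-finger-proof) way to pre-stage a window like that is to require a typed token before anything runs; a tiny sketch, with the staged commands as placeholders:

```python
#!/usr/bin/env python3
"""Tiny guard for pre-staged maintenance commands: nothing runs until a human
(not a cat) types the confirmation token. The staged commands are placeholders."""
import subprocess
import sys

STAGED = [
    "yum -y update",
    "systemctl restart httpd",
]
TOKEN = "run-window"   # something a stray paw cannot produce by accident

answer = input("Type '%s' to execute %d staged commands: " % (TOKEN, len(STAGED)))
if answer.strip() != TOKEN:
    sys.exit("Confirmation not received; nothing was executed")

for cmd in STAGED:
    print("+ " + cmd)
    if subprocess.call(cmd, shell=True) != 0:
        sys.exit("Command failed: %s" % cmd)
```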
