Ask Slashdot: Unattended Maintenance Windows?

grahamsaa writes: Like many others in IT, I sometimes have to do server maintenance at unfortunate times. 6AM is the norm for us, but in some cases we're expected to do it as early as 2AM, which isn't exactly optimal. I understand that critical services can't be taken down during business hours, and most of our products are used 24 hours a day, but for some things it seems like it would be possible to automate maintenance (and downtime).

I have a maintenance window at about 5AM tomorrow. It's fairly simple — upgrade CentOS, remove a package, install a package, reboot. Downtime shouldn't be more than 5 minutes. While I don't think it would be wise to automate this window, I think with sufficient testing we might be able to automate future maintenance windows so I or someone else can sleep in. Aside from the benefit of getting a bit more sleep, automating this kind of thing means that it can be written, reviewed and tested well in advance. Of course, if something goes horribly wrong having a live body keeping watch is probably helpful. That said, we do have people on call 24/7 and they could probably respond capably in an emergency. Have any of you tried to do something like this? What's your experience been like?
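
For a window this simple, the whole sequence could in principle be scripted end to end. A minimal sketch in Python (assuming a yum-based CentOS host; the package names and log path are placeholders, not the submitter's actual procedure):

    #!/usr/bin/env python3
    # Hypothetical unattended maintenance window for a CentOS host.
    # Package names and the log path are placeholders.
    import logging
    import subprocess
    import sys

    logging.basicConfig(filename="/var/log/maint-window.log", level=logging.INFO)

    def run(cmd):
        """Run one step, log its output, and abort the window on any failure."""
        logging.info("running: %s", " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True)
        logging.info(result.stdout)
        if result.returncode != 0:
            logging.error(result.stderr)
            sys.exit("aborting maintenance window: %r failed" % cmd)
        return result

    run(["yum", "-y", "update"])                  # upgrade CentOS
    run(["yum", "-y", "remove", "old-package"])   # placeholder package
    run(["yum", "-y", "install", "new-package"])  # placeholder package
    run(["shutdown", "-r", "+1"])                 # reboot one minute later

As the comments below make clear, the script itself is the easy part; the hard part is what happens when one of those steps fails at 5AM with nobody watching.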
  • by Anonymous Coward on Friday July 11, 2014 @12:27PM (#47432021)

    Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

  • Murphy says no. (Score:5, Insightful)

    by wbr1 ( 2538558 ) on Friday July 11, 2014 @12:28PM (#47432033)
    You should always have a competent tech on hand for maintenance tasks. Period. If you do not, Murphy will bite you, and then, instead of having it back up by peak hours, you are scrambling and looking dumb. In your current scenario, say the patch unexpectedly breaks another critical function of the server. It happens; if you have been in IT for any length of time, you have seen it happen. Bite the bullet and have a tech on hand to roll back the patch. Give them time off at another point, or pay them extra for night hours, but them's the breaks when dealing with critical services.
  • Offshore (Score:5, Insightful)

    by pr0nbot ( 313417 ) on Friday July 11, 2014 @12:34PM (#47432091)

    Offshore your maintenance jobs to someone in the correct timezone!

  • by gstoddart ( 321705 ) on Friday July 11, 2014 @12:34PM (#47432095) Homepage

    You don't monitor maintenance windows for when everything goes well and is all boring. You monitor them for when things go all to hell and someone needs to correct it.

    In any organization I've worked in, if you suggested that, you'd be more or less told "too damned bad, this is what we do".

    I'm sure your business users would love to know that you're leaving it to run unattended and hoping it works. No, wait, I'm pretty sure they wouldn't.

    I know lots of people who work off-hours shifts to cover maintenance windows. My advice to you: suck it up, princess, that's part of the job.

    This just sounds like risk taking in the name of being lazy.

  • by Anonymous Coward on Friday July 11, 2014 @12:34PM (#47432097)

    Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

    This is the correct answer. I promise you that at some point, something will fail, and you will have failed by not being there to fix it immediately.

  • by arse maker ( 1058608 ) on Friday July 11, 2014 @12:35PM (#47432101)

    Load balanced or mirrored systems. You can upgrade part of it any time, validate it, then swap it over to the live system when you are happy.

    Having someone with little or no sleep doing critical updates is not really the best strategy.

  • by psergiu ( 67614 ) on Friday July 11, 2014 @12:43PM (#47432187)

    A friend of mine lost his job over a similar "automation" task on Windows.

    The upgrade script was tested in a lab environment that was supposed to be exactly like production (it turned out it wasn't: someone had tested something there earlier without telling anyone and never reverted it). The upgrade script was scheduled to run on production during the night.

    The result: the \windows\system32 directory was deleted from all of the "upgraded" machines. Hundreds of them.

    On the Linux side, I personally had Red Hat make some "small" changes on the storage side, and PowerPath got disabled at the next boot after patching. An unfortunate event, since all of the volume groups were using /dev/emcpower devices. Or Red Hat making some "small" changes in the clustering software from one month to the next. No budget for test clusters. Production clusters refusing to mount shared filesystems after patching. Thankfully, in both cases the admins were up and online at 1AM when the patching started, and we were able to fix everything in time.

    Then you can have glitchy hardware or software deciding not to come back up after a reboot. RHEL GFS clusters are known to randomly hang or crash at reboot. HP blades sometimes have to be physically removed and reinserted before they will boot.

    Get the business side to tell you how much the downtime is going to cost the company until:
    - Monitoring software detects that something is wrong;
    - Alert reaches sleeping admin;
    - Admin wakes up and is able to reach the servers.
    Then see if you can risk it.
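
    A back-of-the-envelope version of that calculation (every number here is made up for illustration):

      # Hypothetical downtime-cost estimate for an unattended change that
      # goes bad. Every number is a placeholder; plug in your own.
      detect_minutes  = 10    # monitoring notices something is wrong
      alert_minutes   = 5     # the page reaches the sleeping admin
      respond_minutes = 20    # admin wakes up and gets to the servers
      repair_minutes  = 30    # actual time to roll back or repair

      outage_minutes = detect_minutes + alert_minutes + respond_minutes + repair_minutes
      cost_per_minute = 500   # whatever the business says downtime costs

      print("worst-case unattended outage: %d minutes, about $%d"
            % (outage_minutes, outage_minutes * cost_per_minute))
      # -> worst-case unattended outage: 65 minutes, about $32500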

  • by Shoten ( 260439 ) on Friday July 11, 2014 @12:45PM (#47432207)

    Load balanced or mirrored systems. You can upgrade part of it any time, validate it, then swap it over to the live system when you are happy.

    Having someone with little or no sleep doing critical updates is not really the best strategy.

    First off, you can't mirror everything. Lots of infrastructure and applications are either prohibitively expensive to do in a High Availability (HA) configuration or don't support one. Go around a data center and look at all the Oracle database instances that are single-instance...that's because Oracle's licensing is punishingly expensive, and sometimes it's not worth the cost to have a failover just to reach a shorter RTO target that isn't needed by the business in the first place. As for load balancing, it normally doesn't do what you think it does...with virtual machine farms, sure, you can have N+X configurations and take machines offline for maintenance. But for most load balancing, the machines operate as a single entity...maintenance on one requires taking them all down because that's how the balancing logic works and/or because load has grown to require all of the systems online to prevent an outage. So HA is the only thing that actually supports the kind of maintenance activity you propose.

    Second, doing this adds a lot of work. Failing from primary to secondary on a high availability system is simple for some things (especially embedded devices like firewalls, switches and routers) but very complicated for others. It's cheaper and more effective to bump the pay rate a bit and do what everyone does, for good reason...hold maintenance windows in the middle of the night.

    Third, guess what happens when you spend the excess money to make everything HA, go through all the trouble of doing failovers as part of your maintenance...and then something goes wrong during that maintenance? You've just gone from HA to single-instance, during business hours. And if that application or device is one that warrants being in an HA configuration in the first place, you're now in a bit of danger. Roll the dice like that one too many times, and someday there will be an outage...of that application/device, followed immediately after by an outage of your job. It does happen, it has happened, I've seen it happen, and nobody experienced at running a data center will let it happen to them.

  • by CanHasDIY ( 1672858 ) on Friday July 11, 2014 @12:45PM (#47432209) Homepage Journal

    This guy probably is the tech but is wanting to spend more time with his family or something.

    Probably settled down too fast and can't get a better job now. My advice: don't settle down and quit using your wife and children as excuses for your career failures because they'll grow to hate you for it.

    OR, if you want to have a family life, don't take a job that requires you to do stuff that's not family-life-oriented.

    That's the route I've taken - no on-call phone, no midnight maintenance, no work-80-hours-get-paid-for-40 bullshit. Pay doesn't seem that great, until you factor in the wage dilution of those guys working more hours than they get paid for. Turns out, hour-for-hour I make just as much as a lot of the managers around here, and don't have to deal with half the crap they do.

    The rivers sure have been nice this year... and the barbecues, the lazy evenings relaxing on the porch, the weekends to myself... yea. I dig it.

  • by smash ( 1351 ) on Friday July 11, 2014 @01:30PM (#47432595) Homepage Journal
    This is why you build a test environment. VLANs, virtualization, SAN snapshots. There's no real excuse. Articulate the risks that the lack of a test environment entails to the business, and ask them if they want you doing shit without being able to test whether it breaks things. Do some actual calculations on the cost of system failure, and explain to them ways in which it can be mitigated. Putting your head in the sand and just breaking shit in live... well, that's one way to do it, but I fucking guarantee you: it WILL bite you in the ass, hard, one day, whether it is automated or not. If you have a test environment, you can automate the shit out of your process, TEST it, and TEST a backout plan before going live.
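
    A rough sketch of that structure in Python (the host groups and helper functions are hypothetical placeholders, not anything from the parent post):

      # Hypothetical skeleton: run the same automated change against a test
      # group first, verify it, and only then touch production, with a
      # tested backout path for both. Hosts and helpers are placeholders.
      TEST_HOSTS = ["test-app01", "test-app02"]
      PROD_HOSTS = ["app01", "app02", "app03"]

      def apply_change(host):
          """Placeholder: the actual package/config work for one host."""

      def backout(host):
          """Placeholder: snapshot revert or package downgrade."""

      def healthy(host):
          """Placeholder: smoke tests (services up, app responding)."""
          return True

      def run_wave(hosts):
          for host in hosts:
              apply_change(host)
              if not healthy(host):
                  backout(host)
                  raise RuntimeError("change failed on %s, stopping wave" % host)

      run_wave(TEST_HOSTS)   # break things in the lab first
      run_wave(PROD_HOSTS)   # only reached if the test wave passed

    The point is the same one the parent makes: the backout path gets written and tested in the lab, not improvised at 5AM in production.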
  • by mlts ( 1038732 ) on Friday July 11, 2014 @01:56PM (#47432809)

    Even on fairly simple things (yum updates from mirrors, AIX PTFs, Solaris patches, or Windows patches released from WSUS), I like babysitting the job.

    There is a lot that can happen. A backup can fail, then the update can fail. Something relatively simple can go ka-boom. A kernel update doesn't "take" and the box falls back to the wrong kernel.

    Even something as stupid as having a bootable CD in the drive and the server deciding it wants to boot the OS from that rather than from the FCA or onboard drives. Being physically there so one can rectify that mistake is a lot easier when it's planned, as opposed to having to get up and drive to work at a moment's notice... and by that time, someone else has likely discovered it and is sending scathing e-mails to you, CC'd to 5 tiers of management.
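
    The kernel case above is at least easy to check for automatically once the box is back up. A minimal sketch (the expected version string is a placeholder):

      # Hypothetical post-reboot sanity check: did the box actually come
      # back up on the kernel the update was supposed to install?
      import platform
      import sys

      EXPECTED_KERNEL = "2.6.32-431.20.3.el6.x86_64"   # placeholder version

      running = platform.release()                     # same as `uname -r`
      if running != EXPECTED_KERNEL:
          # Exit non-zero so cron/monitoring flags it, instead of a human
          # discovering it from the scathing e-mails the next morning.
          sys.exit("wrong kernel after reboot: running %s, expected %s"
                   % (running, EXPECTED_KERNEL))
      print("kernel check ok: %s" % running)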

  • by Anonymous Coward on Friday July 11, 2014 @02:05PM (#47432855)

    Support for off-hour work is part of the job. Don't like it? Find another job where you don't have to do that. Can't find another job? Improve yourself so you can.

    This is the correct answer. I promise you that at some point, something will fail, and you will have failed by not being there to fix it immediately.

    Monitoring and alerting can alleviate this; VPN access to the systems means you can start responding almost immediately. It also helps if critical services are built so they aren't single points of failure.
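
    Even a watchdog as small as this, run from a machine that isn't part of the window, covers the "did it come back" half of that (the URL and the alerting step are placeholders):

      # Hypothetical watchdog: poll the service after the window and raise
      # an alert if it does not come back within the allowed downtime.
      import time
      import urllib.request

      SERVICE_URL = "https://app.example.com/healthz"   # placeholder endpoint
      DEADLINE = time.time() + 10 * 60                  # 10-minute budget

      def service_up():
          try:
              return urllib.request.urlopen(SERVICE_URL, timeout=5).status == 200
          except Exception:
              return False

      while time.time() < DEADLINE:
          if service_up():
              print("service is back; everyone stays asleep")
              break
          time.sleep(30)
      else:
          # Hook this into whatever pager/alerting system you already have.
          print("ALERT: service still down after the maintenance window")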

  • Re:Murphy says no. (Score:3, Insightful)

    by NotSanguine ( 1917456 ) on Friday July 11, 2014 @02:13PM (#47432921) Journal

    ...Better yet, use Amazon EC2 for your infrastructure so you can spool up as many redundant systems as necessary.

    Exactly. Because if Amazon screws up, they won't blame you. That fantasy and a couple bucks will get you a Starbucks latte.

    Using someone else's servers is always a bad idea for critical systems. Virtualization is definitely the way to go, but use your own hardware. Yes, that means you have to maintain that hardware, but that's a small price to pay (or not so small, in a large environment, but still worth it), because Murphy was an optimist.

  • by Taelron ( 1046946 ) on Friday July 11, 2014 @02:29PM (#47433019)
    Unless you are updating the kernel, there are few times you need to reboot a CentOS box (unless your app has a memory leak).

    The better way to go about it has already been pointed out above: have several systems, load balance them in a pool, take one node out of the pool, work on it, return it to the pool, then repeat for each remaining system. No outage, and users are none the wiser about the update.
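
    The rotation itself is trivial to script; the work is in the load-balancer hooks and health checks, which are hypothetical placeholders in this sketch:

      # Hypothetical rolling update across a load-balanced pool: drain a
      # node, patch it, verify it, put it back, repeat. The lb_* helpers
      # stand in for whatever API or CLI your balancer actually provides.
      POOL = ["web01", "web02", "web03", "web04"]   # placeholder node names

      def lb_drain(node): ...          # stop sending new traffic to the node
      def lb_enable(node): ...         # put the node back into rotation
      def patch(node): ...             # the yum update / reboot for this window
      def verify(node): return True    # placeholder health checks

      for node in POOL:
          lb_drain(node)
          patch(node)
          if not verify(node):
              # Leave the bad node out of the pool: capacity drops, but users
              # never hit the broken node, and the rollout stops here.
              raise RuntimeError("%s failed verification, halting rollout" % node)
          lb_enable(node)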
  • Re:Murphy says no. (Score:4, Insightful)

    by Zenin ( 266666 ) on Friday July 11, 2014 @04:57PM (#47434227) Homepage

    In general, don't do anything that isn't your core business. Or, to put it another way: Do What Only You Can Do.

    If you are an insurance company, is building and maintaining hardware your business? No, not in the slightest. You have no more business maintaining computer hardware than you have maintaining printing presses to print your own claims forms.

    Maintaining hardware and the rest of the infrastructure stack, however, is the business of Amazon AWS, Windows Azure, etc. The "fantasy" you're referring to is the crazy idea that you, as some kind of God SysAdmin, can out-perform the world's top infrastructure providers at maintaining infrastructure. Even if you were the best SysAdmin on the planet, you can't scale very far.

    Sure, any of those providers can (and do, frequently) fail. Still, they are better than you can ever hope to be, especially once you scale past a handful of servers. If you are concerned that they still fail, that's good, but taking the hardware in house is about the worst way to address it. A much better solution is to make your deployments cloud-vendor agnostic: be able to run on AWS or Azure (or both, and maybe a few other friends too), either all the time by default or at the flip of a (frequently tested) switch.

    Even building in multi-cloud redundancy is far easier, cheaper, and more reliable than you could ever hope to build from scratch on your own. That's just the reality of modern computing.

    There are reasons to build on premises still, but they are few and far between. Especially now that cloud providers are becoming PCI, SOX, and even HIPAA capable and certified.
