
Ask Slashdot: System Administrator Vs Change Advisory Board

Posted by samzenpus
from the get-along dept.
thundergeek (808819) writes "I am the sole sysadmin for nearly 50 servers (win/linux) across several contracts. Now a Change Advisory Board (CAB) wants to manage every patch that will be installed on the OS and approve/disapprove it for testing on the development network. Once tested and verified, all changes will then need to be approved for production. Windows servers aren't always the best at informing the admin exactly what is being 'patched' on the OS, and the frequency of updates will make my efficiency take a nose dive. Now I'll have to track each KB, RHSA, directive and any other 3rd-party update, submit a lengthy report outlining each patch being applied, and then sit back and wait for approval. What should I use/do to track what I will be installing? Is there already a product out there that will make my life a little less stressful on the admin side? Does anyone else have to go toe-to-toe with a CAB? How do you handle your patch approval process?"
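One low-effort way to keep that tracking sane is to record every pending patch in a single structure and generate the CAB submission from it. The sketch below is illustrative only: the record fields, KB/RHSA IDs and hostnames are made up, and in practice the pending list would be populated from WSUS exports or `yum updateinfo` output rather than hard-coded.

```python
import csv
import datetime
import io

# Hypothetical pending patches; in a real setup these would be pulled
# from WSUS (Windows) or 'yum updateinfo' (RHEL), not typed in.
PENDING = [
    {"id": "KB0000001", "host": "win-app-01", "severity": "Critical",
     "summary": "Security update for the OS kernel"},
    {"id": "RHSA-2014:0000", "host": "rhel-db-02", "severity": "Important",
     "summary": "openssl security update"},
]

def cab_report(patches, requested_by):
    """Render one change-request report the CAB can approve in a single pass."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "host", "severity", "summary", "requested_by", "date"])
    today = datetime.date.today().isoformat()
    for p in sorted(patches, key=lambda p: p["severity"]):
        writer.writerow([p["id"], p["host"], p["severity"],
                         p["summary"], requested_by, today])
    return buf.getvalue()

print(cab_report(PENDING, "thundergeek"))
```

The point is that the report becomes a byproduct of the tracking data rather than a separate document to write per patch.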
  • Nonsense (Score:5, Insightful)

    by ruir (2709173) on Thursday April 17, 2014 @05:34AM (#46777417)
    They want bureaucracy, so let them make the paperwork. Tell them to track the Windows and distro security pages; the changes are all there. I would be toast with that kind of red tape: I updated my servers in a pinch immediately after the first news of Heartbleed, at 3 in the morning. 0300, right. How about dusting off your resume and changing jobs? Let them play the report-shuffling game alone.
    • Re:Nonsense (Score:5, Interesting)

      by N1AK (864906) on Thursday April 17, 2014 @06:08AM (#46777533) Homepage
      Any remotely well organised IT department will have processes for handling both emergency deployments and retrospective approval. I'm not going to be cheerleader for the concept of CAB but if you're going to make a case against it then at least make a reasonable one because hiding behind obvious nonsense like this will just make you look stupid and change averse to your employer.
      • by mikelieman (35628)

        Pretty much this. Change management is a process. I wonder if they even have any systems in place to manage it. Tracking migrations really benefits from a good system behind it. Maximo, however, is not that system.

      • > Any remotely well organised IT department will have processes for handling both emergency deployments and retrospective approval

        Not when the architect is offline and is needed for every significant change. If there is going to _be_ a policy, a manager needs to be ready to enforce it, or it's going to be everyone making up their own undocumented and impossible to synchronize policies.

      • Re:Nonsense (Score:4, Insightful)

        by mysidia (191772) on Thursday April 17, 2014 @07:50AM (#46777871)

        like this will just make you look stupid and change averse to your employer.

        No... it's obviously just aversion to excessive, unnecessary and crippling micromanagement. It's obviously some idiots in suits who are change averse themselves and feel they need to justify their existence by "approving" or "disapproving" of each and every required security update, patch or sysadmin action.

        Which involves real costs. With this kind of bullshit, they need to hire additional sysadmins just to absorb the reduced efficiency and the waste caused by the bureaucracy before the systems can even approach proper management.

    • Re:Nonsense (Score:4, Interesting)

      by Anonymous Coward on Thursday April 17, 2014 @06:21AM (#46777583)
      Somehow this reminds me of that joke where initially there's just one worker, then layers and layers of staff are added to manage that worker, and finally the worker is fired for underperforming.

      Can't find it on Google or Bing though for some reason.
      • Re:Nonsense (Score:5, Insightful)

        by timepilot (116247) on Thursday April 17, 2014 @06:45AM (#46777671)

        Dr. Seuss: “Oh, the jobs people work at! Out west, near Hawtch-Hawtch, there's a Hawtch-Hawtcher bee-watcher. His job is to watch... is to keep both his eyes on the lazy town bee. A bee that is watched will work harder, you see. Well... he watched and he watched. But, in spite of his watch, that bee didn't work any harder. Not mawtch. So then somebody said, "Our old bee-watching man just isn't bee-watching as hard as he can. He ought to be watched by another Hawtch-Hawtcher! The thing that we need is a Bee-Watcher-Watcher!" Well... the Bee-Watcher-Watcher watched the Bee-Watcher. He didn't watch well. So another Hawtch-Hawtcher had to come in as a Watch-Watcher-Watcher! And today all the Hawtchers who live in Hawtch-Hawtch are watching on Watch-Watcher-Watchering-Watch, Watch-Watching the Watcher who's watching that bee. You're not a Hawtch-Watcher. You're lucky, you see!”

      • by fatp (1171151)
        In the version I heard, the worker was fired because the department was over-staffed.
    • Re:Nonsense (Score:5, Funny)

      by sg_oneill (159032) on Thursday April 17, 2014 @06:28AM (#46777603)

      Back when I worked as a web administrator at my local university in the early 2000s, the admin make-work types decided to bash out a web policy, mostly to keep standards up and guard against legal liability. (Admittedly, before that we had students setting up websites on chemistry lab PCs turned webservers, with novel meth recipes and all sorts of shenanigans.) All good and fine; I asked to be on the committee as an advisor, and so I was.

      Then the whole thing went off the rails: every page needed to be approved by a department head, 10,000+ pages of previously existing data had to be retrofitted with full Dublin Core metadata descriptions, and so on and so on, for about 400 pages of rules and policy that, despite my best efforts, I could not stop. These people had no fucking idea.

      The crowning touch was an insane rule that every new hyperlink had to be approved not just by a department head but by the vice chancellor himself.

      And so that's what I did, and I made sure it was done good and proper. I wrote a Perl script that took all new pages on the webserver network (about 50-100 new pages a day) and, whenever a hyperlink appeared, spat out a one-page approval document *per link*, requiring the vice chancellor and a lawyer to co-sign. All with witnesses. All in all, about 400 pages a day of paperwork for the vice chancellor and a lawyer.
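The original was a Perl script; as a rough reconstruction of the idea (with invented page contents and a deliberately naive regex for link extraction), it could look something like this:

```python
import re

# Toy stand-ins for the 50-100 new pages a day in the story.
NEW_PAGES = {
    "chem/index.html": '<a href="http://example.edu/lab">Lab</a> '
                       'and <a href="http://example.edu/safety">Safety</a>',
    "news/today.html": '<a href="http://example.edu/events">Events</a>',
}

# Naive href matcher; good enough for a paperwork generator.
HREF = re.compile(r'<a\s+href="([^"]+)"', re.IGNORECASE)

def approval_documents(pages):
    """One sign-off sheet per hyperlink, exactly as the policy demanded."""
    docs = []
    for page, html in sorted(pages.items()):
        for url in HREF.findall(html):
            docs.append(
                f"APPROVAL REQUIRED\nPage: {page}\nLink: {url}\n"
                "Signatures: Vice-Chancellor ______  Legal ______  Witness ______\n"
            )
    return docs

docs = approval_documents(NEW_PAGES)
print(f"{len(docs)} approval documents generated today")
```

Malicious compliance at its finest: the script does nothing but faithfully execute the policy as written.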

      The policy lasted 3 days before I was dragged into the admin building and ordered to stop producing the reports. I went in with my union rep. I said, "Sorry, no, that's the official policy as passed by the university senate, and the website will need to be shut down if this isn't done." Since the next senate meeting was two weeks away, I made sure every god damn day that stack of paperwork was done by the vice chancellor, for a glorious fortnight, before the senate could revoke the whole damn policy.

      It was a magical and golden time to be a union-protected government employee (universities in Australia are mostly run by the state).

      For some reason later that year I was passed over for a promotion though. I wonder why, lol.

      • by JosKarith (757063)
        You should have discussed being passed over for promotion with your union rep - you'd have had a pretty strong case.
      • by rjune (123157)

        You should have asked them to put that in writing. In fact, you should have made a written request for a written directive to stop producing the reports; without it, failing to generate the required reports would have given them grounds to fire you. What a glorious fortnight of rubbing their noses in their "Official Policy"!

      • by GlennC (96879)

        +1....well played, sir!

      • by plopez (54068)

        I wish I had a union rep.

      • Re:Nonsense (Score:5, Insightful)

        by Ash Vince (602485) * on Thursday April 17, 2014 @09:19AM (#46778435) Journal

        The crowning touch was an insane rule that every new hyperlink had to be approved not just by a department head but by the vice chancellor himself.

        At that point you should have just emailed everyone on the committee, copying in the vice-chancellor, with some stats on exactly how many approvals this would generate on a daily basis. Include the actual statistics for the previous 7 days, so that if this was generating hundreds of pages per day you had clear numbers to back it up while still in the planning stage. That was clearly why you were put on the committee: to stop a bunch of know-nothings from coming up with a stupid policy. You failed.

        The way to succeed as a techie is no longer about being technically brilliant; it is about whether you can talk people round to your way of thinking and use evidence to back up your point of view.

    • Re:Nonsense (Score:5, Informative)

      by RabidReindeer (2625839) on Thursday April 17, 2014 @07:47AM (#46777853)

      They want bureaucracy, so let them make the paperwork. Tell them to track the Windows and distro security pages; the changes are all there. I would be toast with that kind of red tape: I updated my servers in a pinch immediately after the first news of Heartbleed, at 3 in the morning. 0300, right. How about dusting off your resume and changing jobs? Let them play the report-shuffling game alone.

      I've served on a change control board. Every application and system update was supposed to be bundled to make the sysadmin's job easier, and to include a document outlining the nature of the change and why it was needed, instructions on how to apply it, and instructions on how to recover if it didn't work.

      The change committee met once a week and approved/scheduled, deferred, or rejected changes. In case of emergency, the CIO or a designated proxy could approve an out-of-band change request.

      We didn't attempt to micro-manage changes, just understand the business risks and rewards. Obviously, the more details you could capture the better prepared you were to understand the consequences and the ways you could recover. But when Microsoft hands you a CAB that includes patches for SSL, IE, 6 GDI bugs and Windows notepad, that's their problem, not yours.

      The one thing that we didn't do (obviously!) was allow automated Windows updates. Then again, considering the damage that some Windows updates have done to desktop machines, I didn't even allow that on my own desktop.

    • by rnturn (11092)

      Having patches approved by a CAB should not be a big deal. A brief write-up of the patches to be applied -- or an attachment listing the patches, the reasons for applying them, etc. -- was all that was ever required. Every CAB I've worked with has had a procedure for an emergency like applying a patch for something like Heartbleed; all it usually took was a phone call to certain people and a verbal authorization. (You filled out the standard change request forms after the fact.) Working with a CAB is no big deal.

  • Patching.... (Score:5, Informative)

    by Anonymous Coward on Thursday April 17, 2014 @05:34AM (#46777419)

    What we normally do is get a blanket approval if it's coming from the OS provider, with an understanding that patching will be done on a specific schedule.

    I.e., if all the patches come from Red Hat, there is no approval step; keeping them up to date is necessary for security purposes. The same is true for patches pushed out by Microsoft.

    Then you're only dealing with 3rd-party applications. Even there, we get the more common ones (e.g. Adobe) added to the blanket approval. This way you are only telling them you are bringing systems into line with the latest set of patches provided by the vendor, without having to list every package being updated. Then they only have to ask you whether a program has or does not have a certain bug.

    • Re:Patching.... (Score:5, Insightful)

      by rioki (1328185) on Thursday April 17, 2014 @05:54AM (#46777483) Homepage

      I totally agree with the above. This change-review rigmarole is usually done in the name of security and operational stability. That is a laudable goal, but the added red tape often makes the entire system more vulnerable once the board wants to decide which security fixes get applied. You need to hammer home that for every second between the time a security fix is published and the time it is applied, the systems are vulnerable, because once the fix is published every hacker knows about the issue too. If you have something worthwhile to protect, which is probably the reason a change review board was established, you do not want to add more time to that window. If they need red tape, get a blanket agreement that you apply security fixes from vendors for critical software (OS, databases, etc.) ASAP and that they get a notification of when and what patch was installed.

      • Re:Patching.... (Score:5, Insightful)

        by N1AK (864906) on Thursday April 17, 2014 @06:13AM (#46777553) Homepage

        If you have something worthwhile to protect, which is probably the reason a change review board was established, you do not want to add more time to that window.

        No, CABs often get implemented because someone is worried about the damage a borked patch/update could do and doesn't have confidence that it would be reliably fixed quickly. Most of the 'admin' in a change request is things like a process plan (which surely you already have if you're deploying an update to a critical live system) and a rollback process (which, again, surely you should be considering before risking fubaring the system).

        What I will say is that you should ensure that the CAB members are aware of the need to be able to handle emergency requests (meet, agree and deploy in hours) and should have some process to handle retrospective requests if a business-critical update comes out and you can't wait for CAB approval. Normally the requirement for retrospective requests is that it's genuinely critical and that you send a completed request before the update. It might sound odd, but the idea is that they can use it to see whether you had properly thought through the process and not just gone Rambo on it.

        • by GoChickenFat (743372) on Thursday April 17, 2014 @09:15AM (#46778413)
          I've been an admin for a very long time. What I see is a lot of admins who think the OS is the most important thing and fail to understand why the server even exists in the first place. If you patch simply because a patch was made available, and you don't test or know what the application the server is hosting does at all, are you really doing what is best? Yes, patches break things, and often a patch "fixes" something that was low or no risk inside the corporate network to begin with. Too many admins fail to balance the risks against application uptime... and that's why you end up with a CAB: to keep everyone informed, to balance risk, and to account for audit controls. These usually pop up after too many system outages or too little information sharing. Admins have a bad habit of being too smart and too busy to keep others informed. I have worked with a lot of CABs in many companies, and the best way to work with them is to keep them informed proactively and to build a trust relationship in advance.
      • by ixl (811473)
        In addition to the above two comments: if the policy changes the CAB is instituting impair sysadmin efficiency (and it sounds like they do), then the CAB should be held accountable for the effects of those changes. That means they should have to find funding for additional sysadmins for these servers.
    • Re:Patching.... (Score:4, Insightful)

      by Zocalo (252965) on Thursday April 17, 2014 @06:31AM (#46777613) Homepage
      Blanket approvals and template documents that you can cut and paste notifications into are the way to go, especially when patching is on a schedule, as with MS, Adobe & Oracle. If they push back, suggest a documented process (this is ITIL, right? You can avoid the need for a CAB if it's an approved and documented procedure) where you push the patches to a few test systems on Tuesday (in the case of MS), then deploy to the rest later in the week if there are no issues - whatever they are happy with. Depending on your timezone, Tuesday PM or Wednesday AM are good slots for weekly CABs to pick this up: push to the test servers on the day, then to the rest at the end of the week. For *nix, I've done updates this way for anything that didn't require a reboot, so only stuff like kernel updates and major low-level libraries needed approval via a CAB.

      For everything else, it's your call. Either the patch waits for the next regular CAB or you play the game and keep calling emergency CABs when there are justifiably critical updates, such as Heartbleed, or for the inevitable critical updates from MS every second Tuesday that impact your systems. The best tactic is to embrace ITIL and make it work for you, not allow them to make you jump through hoops and spend your time crafting unique documents for every patch. It also serves as a useful procedure check to make sure you don't mess up and have a contingency plan for when you do, and ultimately, if you get it right, you still get to dictate the schedule and make them do things in ways that you are happy to work with.
  • by ProfessionalCookie (673314) on Thursday April 17, 2014 @05:35AM (#46777423) Journal
    ethanol.
    • "...Is there already a product out there that will make my life a little less stressful on the admin side?..."

      I was thinking along the lines of meth; it's not going to change anything, but so what.
    • by OzPeter (195038)

      ethanol.

      Yeah... but methanol is a better solution - especially if it's not you drinking it.

  • perhaps (Score:4, Insightful)

    by dimko (1166489) on Thursday April 17, 2014 @05:40AM (#46777433)
    The new product your company requires is called: a junior admin? Expensive stuff, but it does the job.
  You know that stress reduces your life expectancy? The most stress comes from dumb supervisors/bosses. Go and quit. That also has the effect of ultimately making your position on the matter clear.
  • I do this (Score:5, Interesting)

    by beezly (197427) <beezlyNO@SPAMbeezly.org.uk> on Thursday April 17, 2014 @05:41AM (#46777439) Homepage

    I have to do this and it's no problem at all, although our change management process doesn't sound quite as onerous as yours (I suspect yours will adapt over time -- the CAB will soon get bored if they have to approve every single OS patch).

    I have to do a risk analysis for each change that gets made to a system (not just patches). Sometimes this risk analysis is fairly informal, for example if the change is to add more RAM to a VM, it's very unlikely to have a significant adverse impact and is easily reversible, so low risk. Other times the risk analysis (and processes that come out of that) may take a long time and require significant co-ordination with other parts of the organisation I work in.
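The informal triage described above - pre-approving trivially reversible, low-impact changes and escalating the rest - can be sketched in a few lines. The categories and thresholds below are invented for illustration; a real CAB would define its own criteria.

```python
def triage(change):
    """Route a change to the lightest process that fits its risk.

    'change' is a dict with an 'impact' level and a 'reversible' flag;
    the routing rules here are illustrative, not any particular CAB's.
    """
    if change["reversible"] and change["impact"] == "low":
        return "pre-approved: apply and record"
    if change["impact"] in ("low", "medium"):
        return "standard: weekly CAB review"
    return "full: risk analysis plus cross-team coordination"

# Adding RAM to a VM: easily reversible, unlikely to hurt anything.
print(triage({"impact": "low", "reversible": True}))

# A look-and-feel change to a service: needs comms, helpdesk,
# training and documentation teams coordinated first.
print(triage({"impact": "high", "reversible": False}))
```

Even a toy rule set like this captures the key idea: the risk analysis decides how much process a change gets, not whether it happens.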

    A good example is if we make a change to a service that impacts the look and feel of that service. It will require co-ordinating with our communications, helpdesk, training and documentation teams as well as other parts of the technical group I work in and the CAB really acts as a check to make sure all of that has happened properly.

    There are still a few people in our organisation who see the CAB as a barrier to getting work done, but for me it is really a check to make sure we're delivering changes in a proper way.

    I can recommend you take a look at The Phoenix Project by Gene Kim, Kevin Behr and George Spafford. http://itrevolution.com/books/... [itrevolution.com] - I had quite a few "this is where I work" moments whilst reading it :)

    • by OzPeter (195038)

      I have to do a risk analysis for each change that gets made to a system (not just patches)

      Which sounds like it's straight out of the OSHA playbook: considering the health and safety aspects of a physical job before performing it. While it's a PITA sometimes, when the shit does hit the fan you're glad you have all the correct responses ready to roll.

      • by beezly (197427)

        Indeed. When we introduced our change management process I realised that I was informally doing this risk analysis anyway. The change management process and CAB just formalise it.

        Risk analysis can be as simple as thinking "is this low impact" for a second and then deciding it is and continuing. Most of these types of changes are pre-approved by CAB and we just have to record the change. If we started creating outages from these types of changes then that pre-approval would probably be reviewed.


  • Setup a WSUS server (Score:5, Informative)

    by will_die (586523) on Thursday April 17, 2014 @05:46AM (#46777451) Homepage
    Set up a WSUS server; you probably already have the licenses. From there you can pull the patches and then push them to the servers that need them, as approved.
    There are commercial products that can also do this in a nicer manner, but they cost money.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      Also, get the CAB to set a policy pre-approving any patches marked "security" or "critical". This can be configured within WSUS and will cut the workload/paperwork down significantly.

  • by flinkflonk (573023) on Thursday April 17, 2014 @05:49AM (#46777461) Homepage

    This is known as the change process in ITIL, and it does have a remedy. The remedy is pre-approved changes (standard changes), which should include patching the OS with patches approved by the vendor. It's meant for exactly this situation, and if your change process doesn't have them it's just a paper wall.
    The ITIL change process is all about reducing risk. If there is a risk with patching your OS (there is, especially since you mention Windows, it's not that unheard of that a Windows patch makes your whole network inoperative) you have to weigh it against the risk of not patching it (meaning you leave known security holes in).
    So, my advice is to get OS patches for your OSes pre-approved by the CAB; that is, when a vendor releases a set of patches, you are allowed to patch your systems in the way and order laid out in that pre-approved change. Of course it's paper-pushing, but use it to your advantage and push some paper yourself. If a server gets compromised and you have the papers (changelog) to prove that you followed procedure, the blame will be placed somewhere else. And things will be done differently from there on, since it has been proven that the procedure didn't work, and everybody wins.
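The changelog-as-proof idea is cheap to automate: record every patch against the pre-approved standard change that authorised it, with a timestamp. A minimal sketch (the field names and change IDs are hypothetical):

```python
import datetime
import json

def record_change(log, change_id, description, standard_ref):
    """Append a timestamped, auditable entry to the changelog.

    'standard_ref' names the pre-approved (standard) change that
    authorises the action, so the log itself proves procedure was
    followed. All identifiers here are made up for the sketch.
    """
    entry = {
        "change_id": change_id,
        "description": description,
        "standard_change": standard_ref,
        "applied_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log.append(json.dumps(entry))  # one JSON object per line: easy to grep
    return entry

changelog = []
record_change(changelog, "CHG-0001",
              "Apply April vendor OS security patches", "STD-PATCH-01")
print(len(changelog), "change(s) on record")
```

An append-only line-per-entry log like this is trivial to hand to an auditor, which is exactly the paper trail the comment recommends keeping.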
    Or you could go find another job (like some other posters recommended) where you are the sole *cowboy*-admin and nothing gets done properly. Your choice really.

    • This raised turnaround at my last place from 2 days to 47 days for changes of 1 to 400 lines, and from 3 months to 6-9 months (or never) for larger changes. Once the cost was recognized, a lot of small changes simply stopped being done because their benefit no longer justified the cost.

      However, it lowered our critical errors affecting production from about 6 unscheduled downtimes per year to about 6 unscheduled downtimes per year, so it was worth it.


      • Oh, and the worst-case scenario: the CAB meeting was a fixed length, and when the number of changes took too long, everything not yet approved was pushed back to the next CAB meeting, unless you got a senior director to hold a special meeting for your project. In one really bad stretch, a lot of critical projects slid over 90 days because of this.

        But they were very serious about it. The CEO's or president's ass was on the line if a change went in that wasn't approved or recorded, so it was a firing offense.

        • by sjames (1099)

          What you needed was a CAB CAB to maintain the change procedure process document. And then, of course the CAB CAB CAB to maintain the change procedure document change procedure process document.

          They might need to lay off their production people to afford another layer of CAB or two, but that's OK; with the constant change in the change procedure change procedure, none of them knew what they were supposed to be doing anymore anyway.

    • by Pikewake (217555)
      I've been involved in setting up ITIL processes for several organizations and agree 100% with the above. The main benefit of a change process and a CAB is that you get an overall picture of all incoming changes, can compare it to the available resources, and can prioritize based on fact instead of on who screams the loudest. Hope you have a competent change manager who can keep that focus and avoid greasing the squeaky wheels.
      If the CAB starts micromanaging, it will self-destruct.
  • by Anonymous Coward on Thursday April 17, 2014 @05:50AM (#46777463)
    I bet your CEO or upper-level boss is the typical dimwit/jerk: knows nothing about the business, micromanager type of guy, plays stupid power games, calls you on purpose once his secretary tells him you are out the door. Small guy, stupid-looking, maybe a goatee, cheap-looking suit. Tell him to sod off and change jobs...
  • Run away! (Score:5, Insightful)

    by arcade (16638) on Thursday April 17, 2014 @05:50AM (#46777467) Homepage

    Given your description, you're the sole sysadmin. That means you're the person who should be taking these decisions - nobody else. If the company disagrees, then either you've done a poor job previously, or they don't trust you to do your job for some strange reason.

    Now, if it is you that has fscked up on previous occasions, then it's understandable that they want the red tape.

    If you haven't, then it's time to put your foot down and say "Nope, that's my job". If they disagree with that, LinkedIn should be a relatively short distance away, and after you find yourself a new job, simply hand in your resignation, pointing out that you have no interest in having babysitters.

    • I think the real moral of the story is not that it isn't his job, but that it's a job for a whole team. There should be an engineering team (testing updates, finding issues and improvements, etc.) and a change/ops team that does the legwork when deploying these kinds of processes. One guy responsible for a big-ass pile of servers, who is also responsible for all of this other stuff, is holding at least two full-time jobs.

  • They have a point (Score:3, Insightful)

    by distilate (1037896) on Thursday April 17, 2014 @05:57AM (#46777495)
    As a software developer, I have multiple times had a development box screwed over by an IT department pushing unneeded drivers and patches that cause problems. I say prove they are good or needed before you waste other people's time. If you just want to push any random patch that comes along, then you should be forced to resolve all the resulting issues without the traditional reinstall of the machine.
    • Re:They have a point (Score:4, Interesting)

      by Anonymous Coward on Thursday April 17, 2014 @06:27AM (#46777595)

      As a software developer I have multiple times had a development box screwed over by an IT department pushing unneeded drivers and patches that cause problems.

      I say prove they are good or needed before you waste other people's time.
      If you just want to push any random patch that comes along, then you should be forced to resolve all the resulting issues without the traditional reinstall of the machine.

      Er, waste people's time?

      As a software developer, you have no fucking idea how difficult it is to pick and choose patches and driver updates to push out to machines while also trying to maintain any sort of consistency in patch levels across the enterprise. But apparently this is something you want me to waste my time on, in order to ensure you never lose a spare second on the rare and random occasion that you experience a problem with 1 out of 200 patches (my patching track record over 15+ years shows the frequency to be quite a bit lower than that).

      And if you're doing patching correctly, you're mainly concerned with patches deemed "critical", so again, you're not really afforded the luxury of picking and choosing here without risk.

      As a seasoned sysadmin, I have a fix for you. It's called VMs. Play to your heart's content and press the rewind (snapshot) button when you find the environment screwed up. (Shockingly, it's not always the IT department that screws computers up... yes, I know this is breaking news.)

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        And you sir, are why most people hate IT.

        In short, yes, I do expect you to waste your time picking and choosing patches so I don't lose that spare second. After all, it's YOUR JOB to keep the computers running well. If you can't be bothered to do it, then what's the point of you being employed? My job as a developer is to develop products, not to battle with my machine. By not doing your job properly and approving a patch that takes out my machine, you leave me unable to do mine.

  • by hsa (598343) on Thursday April 17, 2014 @06:09AM (#46777539)

    In the voice of Nelson from the Simpsons: Ha-ha!

    They want to make your work more transparent. Apparently they think you have too much spare time, too. Or you're getting fired/outsourced, and this is a gentle reminder to document your work.

    Since all the reports are similar, I would just create a script to handle the documentation needs. I would also do the extra work of creating a report on how much this affects the efficiency of patch/hotfix distribution and how much time all these process changes take (and maybe inflate that number a bit, just a bit).

    This would also be a great time to ask for an assistant to ease the workload.

  • by Pete (big-pete) (253496) * <peter_endean@hotmail.com> on Thursday April 17, 2014 @06:19AM (#46777571)

    I work in Change Management for a major telco, I chair the IT CAB, and I oversee server and client patching (amongst many other changes!). When we patch clients, we are patching up to around 30,000 real and virtual desktops - when we patch servers, they also number in the thousands.

    There is no way we would allow a sysadmin to patch anything at any time without some level of oversight. An individual admin has no visibility into the other patches, hardware interventions, application releases, network upgrades, business campaigns, etc. that may be happening in our environment at any given moment (it isn't their job to keep track of all of that). For server and client patching the process is as light as possible, but we still maintain close oversight.

    On the Wednesday following the second Tuesday of each month (for example), I sit down with the Windows server guys and the Windows client guys, and we review their proposals to patch. Usually we can meet a fairly rapid timescale for deploying the patches (including pilot testing, etc. to catch any issues before everyone's desktop is broken!); sometimes other major interventions overlap, and then we need to decide which has priority. We have made similar agreements with the Linux teams, who have a special process to patch, and we keep close oversight of Unix patches, as upgrading those servers with a reboot can be a very big deal.

    The last thing you want is an application version release of a critical ordering application happening at the same time as a system software patch, and then to have an issue afterwards. Is it the application version, is it the system patch, or was there some conflict between the activities being performed at the same time? Troubleshooting gets more difficult, teams point fingers at each other, and the whole time the business is screaming blue murder.

    Of course in an Incident situation there is more flexibility to get things fixed fast, and with security issues I am keen to break open the S-CAB process to expedite a rapid approval flow to ensure that security holes are fixed as fast as possible - of course most changes are encouraged to follow the rules though, the change calendar is published, and everyone knows when the "standard" slots for deployment are, and if most people manage to schedule their changes within those windows, then it minimises potential conflict for everyone.

    Change management are not your enemy, they are your friend - once you register your change with them, they have your back: they will guard against other interventions clashing with yours, stop you from inadvertently upsetting the business, and decrease change-related Incidents. However, with great power comes great responsibility, and Change Management need to find the right process for the right type of change - we cannot have a full in-depth investigation into every configuration change, every patch, every bug-fix, every new server to be provisioned. A good Change Management team will guide changes to the appropriate flow, and grease the wheels for certain types of interventions - it seems that the CAB mentioned in the summary are still finding their feet a little, and I am sure they will evolve over time as they start to understand which changes are high risk, and which can be allowed to pass with a lighter touch.

    -- Pete.

    • by Anonymous Coward on Thursday April 17, 2014 @06:46AM (#46777677)

      OP is managing 50 servers; you are managing tens of thousands of systems - the situations are hardly comparable.

      Like the OP, I am the sole admin for our company's IT (60+ on-prem servers, a mix of Wintel and Linux, plus 10 Azure-hosted servers) and I am in charge of patch management.

      If a committee in the organisation came and told me they were taking responsibility for patching away from me I would either tell them to sod off OR I would hand over all the admin accounts and wish them luck.

      • by Anonymous Coward on Thursday April 17, 2014 @08:23AM (#46778039)

        Exactly. While I wouldn't have a problem if it was just a matter of tracking when changes were applied for audit purposes, having to document each patch is completely unreasonable. In my case, we have various regulations that require us to patch our systems within a certain period of time, which we've translated into patching once a month. If I get asked for justification for installing a particular patch, it's usually "because if I don't the auditors will ding us for not installing it." If something needs to be done off-cycle or in an emergency (e.g., OpenSSL updates for Heartbleed), those get documented and rubber-stamped by our change control process, but beyond that it's assumed that regular patches are approved.

        At one point it was suggested that for compliance I was going to have to not only document and justify all patches applied, but pre-document exactly which patches were being applied on each system AND show proof that those patches and ONLY those patches were applied on each system. Given the way that RHEL releases patches (not to mention Debian and Ubuntu), that would have turned a monthly patch cycle that takes at most a day or two into a full-time job. I pushed back, pointing out that it would do nothing other than create more paperwork and take time away from my ability to do anything else, and they eventually agreed. Yes, there can be a happy medium somewhere, and it's not the same for every organization, but the fewer technical people there are doing the actual work, the less this type of bureaucracy will produce functional results...
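        For what it's worth, the monthly justification itself can be generated rather than hand-written. A minimal sketch, assuming the RHEL 6-era "yum updateinfo list security" output format (the three-column advisory/severity/package layout is an assumption; adjust the parsing for your distro):

```python
# Condense "yum updateinfo list security" output into a summary a CAB can
# rubber-stamp, instead of documenting each erratum by hand.
# Assumed line shape (verify against your yum version):
#   RHSA-2014:0376 Important/Sec. openssl-1.0.1e-16.el6_5.7.x86_64
import subprocess
from collections import Counter

def pending_security_updates(raw=None):
    """Return (count_by_severity, advisories) for pending security errata."""
    if raw is None:
        # Pull the live list from yum when no text is supplied.
        raw = subprocess.run(
            ["yum", "updateinfo", "list", "security", "-q"],
            capture_output=True, text=True, check=True).stdout
    severities = Counter()
    advisories = []
    for line in raw.splitlines():
        parts = line.split()
        # Keep only advisory lines; skip headers and noise.
        if len(parts) == 3 and parts[0].startswith(("RHSA", "RHBA", "RHEA")):
            advisory, severity, package = parts
            severities[severity.split("/")[0]] += 1
            advisories.append((advisory, package))
    return severities, advisories
```

        Mail the severity counts as the report and attach the advisory list; each RHSA ID already links to Red Hat's own description, so nobody has to re-type what the vendor has published.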

  • by wonkey_monkey (2592601) on Thursday April 17, 2014 @06:22AM (#46777589) Homepage

    System Administrator Vs Change Advisory Board

    50 quatloos on the newcomer!

  • by Madman (84403) on Thursday April 17, 2014 @06:32AM (#46777619) Homepage

    There is genuine value in a well-run change management program. Organizations need to know what is going on in their infrastructure, and plan things properly. In many industries there is a growing regulatory requirement to have change management, and auditors are looking for these things more often too. Many smaller shops are bringing in change control, so rather than handing in your badge my advice would be to deal with it and learn the lessons.
    One lesson is rather than fight it, use it to your advantage. Yes, there's paperwork; however, if you follow the system correctly they cannot blame you if things go wrong. What you thought of as freedom was also a risk to your own position, as you had sole responsibility - change control means less freedom, but you are covered. Also, you can get budget for better management systems which will make your life easier. Put together a realistic list of what you need and get involved with setting up the change control process. If you stay silent or fight it you won't get a say.

  • I'm not a fan of CAB (Score:3, Interesting)

    by Anonymous Coward on Thursday April 17, 2014 @06:37AM (#46777641)

    I used to work for a Fortune 100 company. I'm not sure how CAB works at other companies but I get the impression that their implementation was flawed. 1) You could easily go around the process. 2) I'm certain nobody reviews the code - They just kind of discussed it. In my opinion this is a half-baked solution to prevent things from getting pushed to production which could cause problems (errors, leak sensitive info, etc). I am 100% confident that I could have gotten CAB approval for nearly anything. I understand the idea behind CAB but in my experience it isn't effective.

    I actually quit that job partially due to things like CAB. Increasingly, control was taken away from people in the IT department and handed to things like CAB or to 3rd parties who managed our systems, databases, etc. The jobs of myself and others on the IT staff were being reduced from "actually doing the work" to "submitting tickets and following up on tickets." Nothing like being on hold when calling the 3rd party for a critical issue you yourself know how to fix in 5 minutes. It was also a blast when I had to tell the support guy what commands to run because he wasn't familiar with them.

    And no we didn't fuck up anything to deserve this treatment. It was dictated to us from upper management.

  • Do exactly what they say to the letter. After the second "patch Tues" where they pound the ever lovin fuck out of Windows Server with updates and the CAB has a pile of paperwork big enough to roast a wild boar they'll suddenly regain a measure of common sense.

  • Buy something like Tenable Nessus or Rapid7. They make reporting very easy and work across Windows, Linux, Cisco, etc. If you get SecurityCenter, it will track changes over time so you can see patching trends.
  • by Kahn_au (1349259) on Thursday April 17, 2014 @06:47AM (#46777679)

    Where I come from CAB stands for "Change Acceptance Board", they don't get to make dumb decisions...

  • Pre-approvals (Score:4, Interesting)

    by AndyCanfield (700565) <andycanfield@@@yandex...com> on Thursday April 17, 2014 @06:57AM (#46777707) Homepage
    Seems to me that you need to establish a list of pre-approved changes. For example, if you're running Windows and IIS, make sure there's a clause saying that anything that comes down the pipeline via Windows Update does not need formal approval. That way you can offload the responsibility, and the work, onto Microsoft, and keep your core software up to date. Same thing for third-party software from established vendors. Student projects and your own shell scripts might need more examination; not a bad idea actually. But if there's a new version of Firefox, why in the world would a Change Advisory Board think it knows more than Mozilla?
  • by Anonymous Coward

    fuck this site and popups

    BYE

  • I'm the z/OS Systems Programmer at a Fortune 500 company. When we do system maintenance cycles our CRB just wants to know when the system environment is changing, not what's changing.

    If anyone ever does want to know, I do have detailed logs and a before-and-after image of the maintenance management database (SMP/E Consolidated Software Inventory) for them to peruse. They never do, since they don't understand z/OS Systems Programming, and they shouldn't have to. It's their job to manage system availability and

  • by erroneus (253617) on Thursday April 17, 2014 @07:25AM (#46777783) Homepage

    Yes, I know how they are thinking and the pain you are feeling. To implement this change management process you will need a lot of people working for you. Use this to your advantage. Quickly study up on the subject so that, despite your experience with the systems, you don't end up with a dog pile of new bosses telling you how to do your job. Instead, insist that you need to hire more people to manage the overhead.

    In the end that probably won't work and you'll be kept "at the bottom" where you are now.

    These changes are going to be enormously expensive and despite all you have done, it will be perceived that you created this mess by not having a change management system in place to begin with. Of course, they will also see that you don't know about change management and will prefer to hire someone who already knows about it.

    Now I'm not going to down change management processes. They can prevent problems and identify people who would otherwise deflect blame and hide in the shadows. But from what I have seen, you're just getting the beginning of the tsunami of changes.

    Push for testing systems and additional hardware to support it. Of course it will also require more space and other resources. Try to get ahead of this beast.

  • We got our CAB to agree to a certain class of routine changes that require minimum review. They don't need anymore detail than, Test servers updated on Tuesday, Production one week later per maintenance windows.

  • by MrNemesis (587188) on Thursday April 17, 2014 @07:39AM (#46777831) Homepage Journal

    ...and necessary* but that doesn't stop some change management boards being needlessly obstructive.

    Years back, I was working at a company where all of our servers got patched at build and then never patched again "in case it broke something". Myself and the rest of the ops team begged and pleaded with the business to allow us maintenance windows, to reboot the OS outside of business hours, to install patches... all to no avail.

    Until the company lost a bid on a contract because they had no maintenance or patch management policy in place, and the business came running at us screaming about why we didn't patch our servers (they would listen to their potential clients about computer security and whatnot, but not to their own staff). Cue us showing them the dozen or so draft maintenance policies that we'd submitted over the years, all of which had been rejected by the directors. Red faces all round in that meeting :)

    So the latest draft gets pushed into force by a wheelbarrow full of cash and we go out and buy Shavlik, a really rather nice patch management solution... and then our change management board goes nuts when they see our report. Lots of w2k and w2k3 boxes had literally hundreds of service packs and patches outstanding, and, like in the OP's case, the board wanted an individual change raised for each patch going on each server. We then set up an email direct to the change board that gave them Shavlik's automated PDF thingy, which lists all the patches outstanding on a server along with a hyperlink to the MS KB or similar... but that wasn't good enough. They wanted a report on what each patch did, which files it altered, all the usual stuff. Now as another poster has pointed out, under ITIL this should all have been a "standard change" without needing so much paperwork (seriously, they should be at least aware of ITIL even if they're not going to follow it to the letter), but we could sympathise with them that, even with our planned dependency-based staggered rollout over a 4-week period, this was both a radical shift in company culture and posed a significant opportunity for breakage... but still. Filing about 20,000 change requests it was to be.

    So obviously, since we were dealing with obstructive officials, we did exactly that. Did a few dozen hacky shell scripts that took the PDFs that Shavlik made, CURLed down the contents of the link to the KB page and then posted it off into the change management system - one request per patch per machine. After about twenty minutes of this we'd submitted about 400 requests and the change management system (an in-house pile o' shite that wasn't so much written as congealed out of various bits of sharepoint and was universally hated) had slowed to a crawl enough that it took 10mins to open the page. It used funky whizz-bang ajax to load *all* of the pending change requests in the background ("who needs a LIMIT on this SQL parameter?! We're never going to have more than fifty open change requests!" The developer in question also seemed to think that using a LIMIT statement was akin to taking the go-fasta stripes off your car. Wonder if he's doing webscale development now). After some brief arguing where they actually suggested we should open a change request to submit changes - at which point we cackled at the prospect of submitting another 20,000 pre-change-request changes - and after finding their ITIL manual down the back of the sofa they finally agreed that yes, actually, they didn't need quite such a detailed report, and were prepared to accept our risk assessment report as a single change for the first weekend's rollout.

    So about 20,000 patches/service packs were staged and installed over the next two months, and luckily we didn't have a single failure due to the patches (yes, I also thought this was miraculous considering the crufty applications). From then on, every patch cycle needed just four changes, one for each week. That's how it should be done.

    * Yes, necessary! I've done more than my fair share of JFDI but that just do

  • by NapalmV (1934294) on Thursday April 17, 2014 @07:53AM (#46777887)
    This makes no sense unless you also have a QA department where all these patches would be tested. The CAB would then need a list of each patch's description, justification, and impact on existing enterprise applications. Based on this list they could select what gets applied immediately, bundled into a weekly/monthly release, scrapped, or postponed until a remediation plan is completed. Without QA results the CAB is useless.
  • by bwcbwc (601780) on Thursday April 17, 2014 @07:55AM (#46777893)

    In my experience a CAB usually gets introduced in a small organization if something really got screwed up under the old process. There are exceptions - you could get a CTO who is gung-ho for ITIL, or you may have a new, important customer who insists on "process". But a CAB is an attempt to manage change and prevent problems in the working environment. So unless you have a better solution that will prevent negative impacts from your change process, go do the paperwork, with special attention to any risks or issues associated with the change (extended maintenance window, complex install or backout process, partial or incomplete fixes that still leave issues open). You can probably half-ass the CAB and get your work done almost like the old days, but when the next failed change occurs and they find out you hid risks or didn't do proper research, your ass could be out the door.

    OTOH, if you really hate bureaucracy that much, hauling your ass out the door could be your best option - as long as you have a different career in mind besides sysadmin.

  • CM is there for both preventing fuck ups and dealing with them when they occur. First things first: do you have a test environment? If not, build one. Do you have documented processes? If not, document them.

    Proper change management ensures that: 1. people in the group know what is going on. 2. you have a second/third set of eyes to ensure that you have a plan, a backout plan (or a plan B in case it can't be backed out), and a test methodology to confirm that a change hasn't broken things. 3. to

  • In a previous life, we passed around virtual machines rather than doing paperwork. Paperwork is to be sure you have a plan to solve the explosion-and-revert problem. Managing machines instead of paper allowed us to include a process for doing an immediate revert on explosion (;-))

    The VMs we passed around were Solaris zones, so they were very lightweight. If I wanted to apply an emergency patch to production, I first applied it to an image, put an instance on pre-prod, a physical machine, and varied it int

  • We are a 100,000+ user operation. Our patch tracking and approval process is a giant paperwork nightmare that does nothing useful. I would get the Microsoft Baseline Security Analyzer, run the report after Patch Tuesday, and send it to the management types. Say, look at this nice list of required patches. If there are no objections, we will roll them out :D
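    The "no objections by date X" routine is easy to put on a calendar, too. A small sketch for computing Patch Tuesday and a proposed rollout date (the one-week review window here is an assumption, not anyone's actual policy):

```python
# Silence-is-approval patch calendar: mail the report on Patch Tuesday
# (the second Tuesday of the month) and deploy review_days later unless
# someone objects in between.
import calendar
from datetime import date, timedelta

def patch_tuesday(year, month):
    """Return the date of the second Tuesday of the given month."""
    cal = calendar.Calendar()
    # itermonthdates includes adjacent-month spillover, so filter by month.
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.month == month and d.weekday() == calendar.TUESDAY]
    return tuesdays[1]

def rollout_date(year, month, review_days=7):
    """Proposed deployment date: Patch Tuesday plus the review window."""
    return patch_tuesday(year, month) + timedelta(days=review_days)
```

    For April 2014 this gives a mail-out on the 8th and a rollout on the 15th; everything else is just attaching the report.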
  • A whole bunch of OS patches = One change
    Replacing a server = One change
    Reconfiguring some shared folders = One change
    Replacing a whole bunch of printers = One change

    There are a couple of advantages with a change process like this.. the first one is collective responsibility, so the poor sysadmin can pass at least some of the blame back to the CAB if it goes wrong. And then also there's the point that other people might have a legitimate input into the process, especially if there are things happening in

  • So you told them it won't work and they didn't listen. Now show them it won't work. Script something to send them a request for each update for each server. When they get flooded with 100+ perfectly valid requests each day they will beg for mercy. Then file one request for 'ongoing ad-hoc security updates for systems' and watch how fast they approve that one.
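    A sketch of that flood generator, hedged accordingly: the change-request fields and the ticketing endpoint are hypothetical stand-ins for whatever your change system actually accepts.

```python
# One change request per pending update per server - exactly what the CAB
# asked for. Field names and the endpoint below are made up; map them to
# your own ticketing system's API.
import json
import urllib.request

def change_requests(servers, updates):
    """Cross every server with every pending update -> one CR record each."""
    return [
        {"title": f"Apply {update} to {server}",
         "category": "security-patch",
         "risk": "low",
         "backout": "uninstall patch / restore snapshot"}
        for server in servers
        for update in updates
    ]

def submit(cr, endpoint="https://change.example.com/api/requests"):
    """POST one change request to a (hypothetical) ticketing endpoint."""
    req = urllib.request.Request(
        endpoint, data=json.dumps(cr).encode(),
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)
```

    Fifty servers times a normal Patch Tuesday is a few hundred requests in one afternoon; the arithmetic makes the argument for you.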

  • Things like this always annoy me. Someone has decided either that you don't know your job or that they need more layers of bureaucracy. In my experience it is usually because they think you don't know your job as a system admin. Do I really need a 'paper trail' or make-work for things I'm already tasked with managing the risk on? And why would a group of business people (generally) think they are somehow better at mitigating IT risks than the IT person?

    Part of what they are supposed to be paying me for is to k
