Disempowering the Singular Sysadmin?

An anonymous reader writes "Practically every computer system appears to be at the mercy of at least one individual who holds root (or whatever other superuser identity can destroy or subvert that system). However, making a system require multiple individuals for any root operation (think of the classic two-key process to launch a nuke) has shortcomings: simple operations sometimes require root, and would be enormously cumbersome if they needed a consensus of administrators to execute. There is the idea of a Distributed Administration Network, which is like a cluster of independently administered servers, but this is a limited case for deployment of certain applications. And besides, DAN appears still to be vaporware. Are there more sweeping yet practical solutions out there for avoiding the weakness of a singular empowered superuser?"
This discussion has been archived. No new comments can be posted.

  • We just have an account in Active Directory called "Support" with administrative rights across the domain, and the sysadmin holds the root and administrator passwords, so he effectively has total control if need be. He can change the Support password and lock all of us out if he wishes, or give us the info to let us back in. But pretty much anything that needs to be done, we can do with that account, including adding a PC to the domain itself.

    Or am I going to be laughed at for posting the Microsoft answer?

    • Perhaps a stupid question, but if you're in the Administrators group, can't you change the Administrator password anyway?

      Presumably he's changed the security permissions on that account; I'm just curious!

    • by jimicus ( 737525 )

      Or you could add the people who need admin rights to the appropriate group, which is a lot more secure because you can then audit what individuals do. A general-use user account with that level of privilege is generally considered a Very Bad Idea unless you really can't avoid it.

      • I would generally tend to agree. In the growing pains of the company, they hired more lab people than they had computer techs to handle, so a few labs have generic lab accounts, and whenever one of them gets a virus or deletes something important by accident, there's no one you can point the finger at because there's no log beyond the generic account.

        And the same thing is open with our support section, anonymity with admin rights, very dangerous should someone screw up or should someone w

    • No, but you'll be laughed at for having a single shared account, which means that whoever logged in to perform some "support activity" (maybe after a few drinks, or just a general brainfart) cannot be determined after the event. This can be a good thing, depending on how bad your admins are (good for them, that is). :)

  • In other news... (Score:5, Insightful)

    by Anonymous Coward on Monday January 10, 2011 @11:45AM (#34823662)

    Rule by a benevolent dictator has certain advantages, and rule by committee has certain opposite advantages. It was ever thus.

  • We have a few databases where selected developers can do anything they want, since they do most of the work there and there is no SOX requirement for those databases. Every week mysterious things happen: column schemas are changed, stored procedures are updated, etc., with no notification to anyone except when trouble tickets come in because some other application broke.

  • by arivanov ( 12034 ) on Monday January 10, 2011 @11:46AM (#34823690) Homepage

    It is called: "Change Control" and usually goes along with "Revision Control" on configs.

    If you change without recording the reason for the change, and without checking in the result so that the two versions can be compared and analysed, you get a pink slip. Voila. Problem solved.
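A minimal sketch of the idea, using Git in a scratch directory to keep configs under revision control with the reason recorded at check-in (etckeeper does this for real against /etc; all paths and the CR number here are illustrative):

```shell
# put a config tree under revision control (a scratch copy here; on a real
# box you'd do this in /etc itself -- etckeeper automates exactly that)
rm -rf /tmp/etc-demo
mkdir -p /tmp/etc-demo && cd /tmp/etc-demo
git init -q .
echo "PermitRootLogin no" > sshd_config
git add sshd_config
# the commit message is where the reason for the change gets recorded
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "CR-1234: disable direct root ssh per security review"
# later, any uncommitted drift is visible, so the two versions can be compared
echo "PermitRootLogin yes" > sshd_config
git diff --stat sshd_config
```

The same pattern works with any VCS; the point is that the diff and the stated reason travel together.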

    • by Anon-Admin ( 443764 ) on Monday January 10, 2011 @11:52AM (#34823792) Journal
      What an Amazing Idea; now tell me, who does this? I have worked for 4 Fortune 10 companies and 1 financial institution. Not a single one has used revision control, and only one has used change control. That is, if you consider a meeting of 20 non-technical managers who can nix a change without explaining why to be change control.
      • We do this - there's a spreadsheet (well, you've got to have something, right?) that gets updated whenever anyone dials in to a customer site, even if you did nothing. If you do something when dialled in, that gets logged too.

        I don't know if failing to update it is a pink-slippable offence, but you will get a severe b*ll*cking if you fail to do it twice.

        BTW, our customers are police, fire and ambulance control centres. Maybe that makes us different to the usual, but it's simple and works well. If we could get rid of

      • by CompMD ( 522020 )

        We do it, at a major publicly traded international company with thousands of employees. It's not hard: make /etc a repository (we like Mercurial), have Puppet manage your servers, and revision-control the server config files on the puppetmaster (again, Mercurial helps here).

    • by daid303 ( 843777 )

      Doesn't stop the careless admin from logging in to my server and entering "reboot", which is more of a problem in my case than configuration files (they won't even touch those with a ten-foot pole).
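Debian's molly-guard package addresses exactly this by intercepting reboot and demanding the hostname before proceeding; a minimal sketch of the idea (the function name and messages are illustrative, and the actual reboot is stubbed out):

```shell
# a guard in the spirit of Debian's molly-guard: refuse to reboot unless the
# operator types this machine's hostname (catches "wrong ssh window" mistakes)
guarded_reboot() {
    printf 'Type the hostname of the machine you intend to reboot: '
    read -r answer
    if [ "$answer" != "$(hostname)" ]; then
        echo "Hostname mismatch; not rebooting." >&2
        return 1
    fi
    # exec /sbin/reboot "$@"   # the real thing; stubbed out in this sketch
    echo "would reboot $(hostname) now"
}
```

On a real system the wrapper lives earlier in root's PATH than /sbin/reboot, so the habit-driven "reboot" gets one last sanity check.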

    • by alen ( 225700 )

      and how many people are always changing minor things without change control because they feel this is their baby and they can do anything?

      • by vlm ( 69642 )

        and how many people are always changing minor things without change control because they feel this is their baby and they can do anything?

        That's because the development server IS the production server, for whatever reason. It's not a maintenance-procedure problem, but a design problem way upstream of scheduled maintenance.

        The other scenario is when you're breaking individual (or world-wide) new ground. It works when a huge team can spend months debating the route and design for some new railroad tracks; however, an operating engineer needs full and instant discretion over how and when to work the throttle and brake levers.

        There's some things t

    • by vlm ( 69642 ) on Monday January 10, 2011 @12:02PM (#34823910)

      Works, although excruciatingly slowly, for planned work.
      The collision of excruciatingly slow proactive planned work and reactive trouble tickets is always a source of utter hilarity. Usually the end result is that you only do the planned proactive paper shuffling for meaningless stuff ("let's change the background color to be 0.001% darker") and ram real development through as part of a trouble ticket with no oversight at all ("well, to make our big customer happy, we've decided to completely redo our database schema and stored procedures this afternoon as part of the ticket").

      Another example: if it takes a month and endless meetings to replace a failing drive during scheduled maintenance, and half an hour to replace a failed drive at any time, this simply eliminates all proactive maintenance. It's much easier / cheaper to let the power supply burn out, have a nice long outage, and then replace the whole device than to get permission to blow the dust out of the air filter.

      The end result is usually much worse than it was at the beginning.

      • by JamesP ( 688957 )

        Another example, if it takes a month and endless meetings to replace a failing drive during scheduled maint, and a half hour to replace a failed drive at any time, this simply eliminates all proactive maintenance. Much easier / cheaper to burn the power supply out, have a nice long outage, and then replace the whole device, than to get permission to blow dust out of the air filter.

        The end result is usually much worse than it was at the beginning.

        Of course the first one follows all ITIL processes

        How long until the corporate world sees that CMMI and ITIL are the (very expensive) equivalent of 'power crystals' and astrology?! Oh yeah, that's right: NEVER.

      • Another example, if it takes a month and endless meetings to replace a failing drive during scheduled maintenance, and a half hour to replace a failed drive at any time, ...

            Sadly enough, I've had a simple drive replacement tied up in meetings and other office politics for months. Write up a proposal for change, sit in meetings where various department heads without a clue discuss the potential hazards, write up the rollback process (for changing a drive?). Your plans are torn apart and put back together. Departmental announcements, customer notifications, etc, etc. Accounting wants numbers, and proposals from 3 sources for the cost of a replacement drive (which you have 5 of in the datacenter, and a regular supplier). You're sitting there with the mind numbing noise flowing past. All you can think is "the array was set up with no hot spare. It's running in a degraded mode. Change the damned drive." Of course, complaints of slow drive performance are scattered throughout the meeting.

            Two months and more meetings than you can remember later, they slate it for an arbitrary window: Saturday at 3am. Not only do you change it, but you are required to stay while it rebuilds, "just in case...". Just in case? You have me working 8 to 7 Monday through Friday, weekends on demand (which is every weekend), AND you want me to blow off Saturday night to do the change? Ah, who cares, I don't need sleep.

            Then, Thursday afternoon, before the scheduled change is done, a second drive in the array fails, and the whole thing is down. All the same people who were in on the meetings start screaming, "How could you let this happen?!"

            Thursday afternoon becomes Thursday night, and by Friday morning you have the array back up and working, through some dumb luck. (crossing fingers, praying to whatever gods may be listening, and tapping the drive with a screwdriver at boot time to make it spin up). The only planning that helped is that you keep a change of clothes and a toothbrush in the car, since you don't have time to go home once you're done. In doing the work, you notice the same thing happening to a neighboring machine. Damned aging hardware. So you just change it without the mess that accompanied the first change. Not only are you bitched out for not fixing the first array in time, but you get it twice as bad for fixing the other one before it became a problem. How could you have independent thought? How could you make a change without proper authorization?

            The only thoughts still in your head are "I hate this job" and "my car keys are in my pocket; I could just leave." Is this the day you quit? Maybe, just maybe. Just one more thing, and that'll be it. I don't need this shit.

            Friday afternoon, not having slept since Wednesday night, you are told, "Do [some other task] after hours tonight." No, you won't get paid any overtime, since you're on salary. The task will take at least 8 hours, and they need it done before Saturday morning. Do you scratch out a resignation with a Sharpie on the CEO's wall at 2am, or do you just walk out?

            I really hated that job.

        • by sglewis100 ( 916818 ) on Monday January 10, 2011 @01:05PM (#34824692)

          Sadly enough, I've had a simple drive replacement tied up in meetings and other office politics for months. Write up a proposal for change, sit in meetings where various department heads without a clue discuss the potential hazards, write up the rollback process (for changing a drive?).

          Not that I don't agree that some companies make change management more than it needs to be (mine does it OKAY), but I bet the guy I knew years ago who changed a drive on a RAID-5 array had thought about testing and rollback. You see, he received the replacement drive late in the day, ran into the data center, popped out a drive, popped in the new drive, and went home. Sadly, he had pulled the wrong drive.

          • hehe.

            Sorry, I had to laugh.

            That has more to do with checking your work than it does with the prolonged control processes that businesses put in place. I've seen the control processes made by committee fail miserably. Sure, they want all this stuff done. The 400-point checklist frequently misses some essential piece, like "is it the right drive?" and "verify it's rebuilding properly". That last step would have screamed "you're doing it wrong".

            If there's no clear

          • All this reminds me of something I was fortunate enough to see happen without being involved in, or responsible for, all the shit that resulted from it.

            One day, we had a change to do, and this change needed to be coordinated with a mainframe change. We were there to test the procedure in the test environment, with the test mainframe partition. In the operators' room, there were three tiers of desks, each tier able to see what was going on at the others. It was very like the Star Trek Enterprise command room or someth

  • /etc/sudoers will handle a majority of those "simple operations" that require root.
    • And the top programs run through sudo? "sudo su" and "sudo sh" :)

      The article wasn't suggesting controls for a single admin to accomplish a task. It was talking about requiring at least 3 admins to do the same thing in three identical environments to accomplish one task.

      "Ok, we need to reboot server X, all of you on my mark type 'shutdown -r now' ... 3 ... 2 ... 1 ... mark"

      "Dammit Mark, you didn't hit enter in time. Lets try again."

  • Look at programs where there is a lot of technical activity and communication activity for time-sensitive work.

    You can't have a nuclear missile system where one guy can set the bombs off. At the same time, the system has to be quick and responsive.

    So you need to engineer administrative systems where not fewer people are involved but MORE: you can't do this function or that function without also involving this guy over there turning a key, etc. All admin functions invoke more than one person. That's the best way to have a system where power can't be abused. It's about redundancy and layers of admins, not fewer admins.

    And if people are pursuing this question because they don't want to pay an admin, or can't trust someone else with their system, then such idiots get the system they deserve: a broken one, and no one willing to fix it at the money you want to pay.

  • Well, someone has to be in charge. We aren't looking to get rid of the CEO, despite their abuses.
  • by Rogerborg ( 306625 ) on Monday January 10, 2011 @11:49AM (#34823738) Homepage

    Oh, the jobs people work at! Out west, near Hawtch-Hawtch, there's a Hawtch-Hawtcher Bee-Watcher. His job is to watch... is to keep both his eyes on the lazy town bee. A bee that is watched will work harder, you see.

    Well...he watched and he watched. But, in spite of his watch, that bee didn't work any harder. Not mawtch.

    So then somebody said, 'Our old bee-watching man just isn't bee-watching as hard as he can. He ought to be watched by another Hawtch-Hawtcher! The thing that we need is a Bee-Watcher -Watcher!'

    Well... The Bee-Watcher-Watcher watched the Bee-Watcher. He didn't watch well. So another Hawtch-Hawtcher had to come in as a Watch Watcher-Watcher!

    And today all the Hawtchers who live in Hawtch-Hawtch are watching on Watch-Watcher-Watchering-Watch, Watch-Watching the Watcher who's watching that bee.

    You're not a Hawtch-Watcher. You're lucky, you see.

  • Reinventing history (Score:5, Interesting)

    by vlm ( 69642 ) on Monday January 10, 2011 @11:51AM (#34823776)

    would be enormously cumbersome if they needed a consensus of administrators to execute.

    That's why you leave changes to the 24x7 onsite operations team, not one lone admin doin' his thing in the cube. They're the ones monitoring the systems, so it seems most sensible if they "push the buttons" on the things they watch. Ideally you have one team that does nothing but watch and one team that does nothing but do, and theoretically they cooperate.

    And besides, DAN appears still to be vaporware.

    DAN appears to be a poor reinvention of flight-control software for aerospace from the 70s/80s. Those who don't know their history are doomed to poorly repeat it.

    Next up, we'll reinvent the concept of the security office from AS/400, or maybe the idea of hard realtime control.

    Maybe someone out there could reinvent the concept of the watchdog timer so the "DAN" cluster doesn't go into deadlock? Naah, we'll let them "discover" it themselves, the hard way.

  • There's a reason.. (Score:4, Insightful)

    by malkavian ( 9512 ) on Monday January 10, 2011 @11:52AM (#34823794)

    That you have one person doing it. It's effective, and versatile.
    If you have multiple people empowered to do exactly the same thing, you end up at the mercy of the one who decides to shut everyone else out.
    If you then have a security admin who's the only one able to alter the login info, then you're at their mercy.
    With the "dual key" type of approach, what's to stop someone installing a back door along with a normal software upgrade? Does everyone have the same knowledge as your prime sysop? Can you afford to have one person who completely mirrors another, instead of distributing the skills across a team (with duplication covered across the team)?
    What if both the key holders are in cahoots?

    Interestingly, who is stopping your CEO from making those really bad decisions, or your FD from siphoning the cash, or a whole host of other areas where you trust one person to do a job?

    Value the person, and make sure you treat them well enough that it's not worth their while to play you up. Then you'll have no problem.
    Screw them over at every opportunity, and you'll really have to trust their ethical views (you're still usually safe, but it's no guarantee then).

    • But ... make sure you have a backup in case the person gets hit by a bus.

    • by gmuslera ( 3436 )
      I would separate the problem in two: one thing is having someone with close-to-god privileges who can't be trusted (so having multiple of them probably multiplies the problem), and another is putting some sort of safety belt in place: the trust is there, but you as admin restrain yourself to non-critical operations or collaborative/role administration. Sysadmins are not excluded from Hanlon's razor.
    • by Peeteriz ( 821290 ) on Monday January 10, 2011 @12:23PM (#34824162)

      who is stopping your CEO from making those really bad decisions

      The board, other executive officers, and limitations requiring that a class of big decisions get a vote of shareholders (especially in non-public companies);

      or your FD from siphoning the cash,

      Periodic independent audit, as well as a requirement of extra authorisation for amounts above X; in any well-managed company the FD can't siphon all the cash without other officers getting dirty as well;

      or a whole host of other areas where you trust one person to do a job?

      There are no other areas where high-risk issues are trusted to one person without serious oversight. In most companies, IT management and auditing is either solved as well, or is the only remaining weak point with this problem; that's why the article is here.

      Valuing persons and treating them well is in no way a solution - compare 'security by obscurity' vs. 'security by goodwill' vs. 'security by prayer' and you'll find some similarities.

      The four-eyes principle stops a lot of potential malice, as the likelihood of both keyholders being ethically faulty and not betraying each other is much, much lower than the simple chance of one person being ethically faulty.

      Installation of back doors along with a normal software upgrade is a prime reason why someone other than 'your prime sysop' needs to periodically verify stuff; if you don't mirror, then you ask for outside audit of stuff; have secure write-only logging of 'root' tasks to a system which is completely controlled by someone else, etc.

      Of course, it depends on the risks: if the worst your sysadmin can do is shut down an informative website that you have, then it's no big deal. If it's a payment system that can fund a life-long vacation in the Bahamas for an opportunistic administrator, then we're talking about all such measures.

    • by jedidiah ( 1196 )

      > Interestingly, who is stopping your CEO from making those
      > really bad decisions, or your FD from siphoning the cash,
      > or a whole host of other areas where you trust one person to do a job?

      Nothing. Sometimes it is the CxO that is making some clueless change without telling anyone that subsequently breaks everything.

  • Really though if they have physical access to something they can do whatever they like. Auditing and logs can go pretty far but at some point you have to trust the people that run things.

  • by BooRadley ( 3956 ) on Monday January 10, 2011 @11:55AM (#34823828)

    Mostly, except in very small organizations, there are several implicit safeguards to keep any one person from doing evil with the systems. They are subtle, but effective.

    Peer review: Most sysadmins are hired by other sysadmins, or at the very least by a technical manager. This means that you are hired based on your skills, reputation, track record, and demonstrated attitude. Ideally, you wouldn't even *think* about intentionally subverting a system, because that would mean breaking or compromising it in some way, and most professional SAs are simply too OCD to allow it.

    Business continuity: Most organizations have several layers of continuity in place, such as disaster recovery scenarios, system snapshots, monitoring, and auditing. This means that unless you are VERY subtle, or work for an entirely incompetent team, you WILL get caught, and the damage will be minimized as you are being put into a police car, never to work in IT again.

    There are no "indispensable people:" If you are a sysadmin, and you are the only one who knows your systems, you have not done your job. Every system and app should be documented, and there should be accountability for every change and decision.

    No technical solution will ever replace good management and planning, and a design that eliminates a system's vulnerability to rogue sysadmins will also eliminate its flexibility. It's just a lot cheaper and easier to try to run a good shop.

  • by DigitalSorceress ( 156609 ) on Monday January 10, 2011 @11:59AM (#34823868)

    Hire admins who know their stuff and make sure you have at least two of them with the root password. Make sure they've got some kind of change control in place, and make sure you have them document what they're doing.

    I've been the sole sysadmin before, and I always worried that my legacy, should I be fired or quit or get hit by a bus, would be "she didn't do a great job, because everything fell apart after she left/was fired/was bussassinated". So I always tried to document things, and made sure the boss had the "keys to the kingdom" (a document with the root password and the locations of my documentation, to give to my successor).

    • Re: (Score:2, Funny)

      by Anonymous Coward

      bussassinated

      I have a new word of the day! Thank you. :D

  • by plover ( 150551 ) * on Monday January 10, 2011 @11:59AM (#34823880) Homepage Journal

    First, understand that Slashdot is only going to provide a hint of what you will be doing. Security is complex and easy to get wrong, and there's a whole lot of evidence of that in the news. If security is important to your company, you should invest in a CISSP to really help you get things set up in a fashion that the industry considers to be best practices. Until then, consider these few generic suggestions.

    Multiple layers of security help ensure that nothing goes astray, or if it does that it's detected before too much damage is done. And separation of duties helps make sure that one rogue actor can't do it all by himself.

    Separate the admin of the box from the admin of the data. The guy who holds the root PW doesn't have to be the same guy who holds the private key for the database.

    Add off-the-box auditing of the actions of root. As soon as someone signs on as root, a notification with the originating IP is sent to a different box and timestamped. Don't let your application sysadmin be the sysadmin of the audit box! And the auditor should carefully investigate any situations that are out of the ordinary. (This box fell off the network just before root logged on? That's an odd coincidence.)
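A common way to get that kind of off-the-box trail is to forward auth events to a log host the application sysadmin cannot touch; a minimal rsyslog sketch, with the hostname as a placeholder:

```text
# /etc/rsyslog.d/50-remote-auth.conf -- forward auth events off the box
# (loghost.example.com is a placeholder for your dedicated audit host)
auth,authpriv.*    @@loghost.example.com:514    # @@ = TCP, single @ = UDP
```

The notification/timestamping logic then runs on the audit host, where root on the application box has no reach.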

    Define expected behavior with policies. If you want to run a trustworthy ship, clearly stating who has access to do what with which systems eliminates confusion, and helps avoid one sysadmin creeping over into other systems.

    Ultimately, you've placed trust in your admin to do a job, and you need to trust him or her to do that job. Somebody's got to be root. But they also have to know they'll be held accountable for what they do.

  • In our IT department, root admins are required to sign a notarized document telling their deepest, darkest secrets in order to be given the password. This keeps them in line for eternity, and seems to be working quite well...
  • by Doc Hopper ( 59070 ) on Monday January 10, 2011 @12:01PM (#34823900) Homepage Journal

    We have several solutions which work together to minimize the risk of root at my company:

    1. Powerbroker. It's in use on every single UNIX system administered by our Global IT teams. Every user has a role (or several roles), and that allows them to execute a variety of commands with elevated privileges. Once Powerbroker is invoked, however, every single keystroke is logged and can be played back. These logs are stored indefinitely; access is very restricted.

    2. Automated, centralized root password management. One of the steps to setting up a UNIX machine here is ensuring the root password and remote console admin passwords match that dictated by our automated provisioning system. Then every 30-90 days (depending on policy for this type of system) the root password is changed to a very long, apparently very random string. I can look this password up if my role allows it, but the lookup is also logged.

    3. A good Change Request (CR) process. Every system that exists in a data center should have a record in our systems database. Once a system has passed through the phases of deployment (Warehouse -> Data Center Install -> Sysadm Configure -> Deployed) any change made to the system must be requested and approved by the owners of the system. This approval is logged, and the date/time of the work is also logged. Sysadms must close service requests within the time window specified by the CR, or apply for an extension or reschedule if they're unable to complete it within the allotted time.
        The downside to this is that you lose quite a bit of system administrator work hours filing and managing change requests. However, this loss of efficiency -- IMHO -- is better than the mayhem that ensues without an organized change process.

    4. Automated forensic tools to monitor changes. Information overload is a real risk with any Tripwire-style system, though. We're still working out some of the kinks on this part of the system. Once we ensure that all normal changes due to operation of the system and scheduled maintenance get excluded, this will be the fourth leg to reduce the risk of super-user privileges.

    At any company, IT must find a balance between controlling user actions and monitoring those actions. In most cases, the easiest approach is to prohibit by policy only those things that might typically result in lawsuits, but monitor everything else to the best of your ability. Combining a Powerbroker-like product with automated root password management -- both with fascistic logging -- is a reasonable approach that works well for many large companies. Combine this with a change management system, and a forensic tool to automatically monitor and notify of unauthorized changes, and super-user isn't really all that big of a concern.
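Item 4 above can be sketched with stock coreutils in place of a full Tripwire deployment: take a checksum baseline of the config tree, then re-verify and report drift. The paths here are a scratch directory for illustration; a real deployment would baseline /etc and keep the manifest off-box, for the reasons discussed elsewhere in this thread.

```shell
# build a demo config tree and take a checksum baseline
rm -rf /tmp/cfg-demo
mkdir -p /tmp/cfg-demo && cd /tmp/cfg-demo
echo "PermitRootLogin no" > sshd_config
find . -type f ! -name baseline.sha -exec sha256sum {} + > baseline.sha

# ...later, an unauthorized edit happens
echo "PermitRootLogin yes" > sshd_config

# re-verify: any mismatch is a candidate for an automatic ticket
if ! sha256sum --quiet -c baseline.sha; then
    echo "unauthorized change detected"
fi
```

The information-overload problem the parent mentions shows up here too: every legitimate change has to re-baseline, or the report drowns in expected noise.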

    • These logs are stored indefinitely; access is very restricted.

      To whom? What you have to keep in mind is that computers operate as single-minded entities, and when you approach a machine like that, security is an afterthought. This tells me that there is somebody who holds access above the other users, which is basically missing the point here.

      I can look this password up if my role allows it, but the lookup is also logged

      Again, that means that there's somebody administering the logging system, and I can almost assure you that even if their logins are listed somewhere, they have full access to remove those entries and make it look like it never happened.

      • by Doc Hopper ( 59070 ) on Monday January 10, 2011 @01:30PM (#34825020) Homepage Journal

        You've tossed out a few red herrings and a couple of valid points. I'll try to address them in order.

        this tells me that there is somebody that holds access above the other users, basically missing the point here.

        No, I haven't missed the point at all. The point is to distribute the responsibility with sufficient checks in order to ensure that misbehavior will be caught and dealt with in a timely fashion. Is it possible someone could scheme up a way to slide abuses past the admins? Of course it is. But between good backups, fascistic logging, role-based access control, and routine audits by the change control committee, the risk is minimized.

        There's no one person who holds the "keys to the kingdom". No critical data is stored on the machines themselves; it's all stored on centralized storage. The folks who admin the automated root password changes don't have any access to storage; the storage folks typically don't have any access to the systems.

        Again, that means that there's somebody administering the logging system. and I almost assure you that even if their logins are listed somewhere: they have full access to remove those entries and make it look like it never happened.

        Incorrect. I didn't cover this in my original post, but logs are (and should be) stored on write-once media. You can designate volumes on modern storage systems so that, once written, they can never be altered without destroying the entire volume. We use this extensively.

        say I have a machine that stores credit card numbers on a DSS approved network that's locked down in the ways you describe above. at the admin level, it would take me minutes to provision a machine to replicate the target. I don't mean replicate as in contents, I mean replicate to the network view.

        Once again, distributed access can prevent this. The network team and the sysadm team aren't the same teams. Every port on your switch is disabled until it's enabled by the network team. Even once enabled, that port must be on the same VLAN as the hypothetical credit-card storage system.

        That's once again where fascistic logging and automated reporting come into play. If a port is disconnected, unless a host has been blacked out with an appropriate change control ticket filed, the port disconnection generates an immediate Priority 1 service request to investigate.

        If a drive is removed from centralized storage, that also generates an immediate P1 ticket. The sysadm's access would have been logged the moment he swiped his badge, and cameras throughout the data center capture the switch-over.

        A corrupt admin can do a lot of damage, I admit. There's no getting around it. But with sufficient logging -- and yes, I include physical surveillance as "logging" too -- they're not going to get away with it.

        The replicated machine can be tunneled into place and act as if it were the machine in question.

        Now this is the red herring. If you've ever done ANYTHING major with credit cards in a data center, you are aware that you're subject to yearly audits of your infrastructure by Payment Services. They do a deep-dive of your systems to enforce a huge number of requirements. I can't go into it here. It literally fills a large book, and they go over it line-by-line with all the admins involved, every single year. I've been through several of these, and each year it gets broadened to cover more potential issues.

        Chief among these requirements? A separate admin/management network from the front-end/back-end network. You can't "tunnel in" to that network and make it "act like" another system. The network is an unroutable private VLAN or fibre-channel connection.

        At this point, I can reverse-firewall the unit, preventing it from calling for help or reporting the changes I make. I can snapshot the drive and move it offsite

        Ye

  • by petes_PoV ( 912422 ) on Monday January 10, 2011 @12:04PM (#34823934)
    Trying to get two sysadmins to cooperate would be like insisting every car have two drivers (and not in the way a plane has a copilot). There are at least four possible outcomes: one sysadmin becomes dominant and you're back where you started, but paying two salaries; they continually bicker about the best way to do things and nothing ever gets done (or worse, they sabotage each other's efforts); one just slacks off and becomes a decision-making bottleneck; or they spend so long reaching consensus that even the most trivial task takes a week of deciding, timetabling, agreeing, and finally doing.

    The only solution I can think of that would stand a chance is to require:
    a) everything gets documented (you'll know this is the correct way, as all the techies will hate it)
    b) every week/month all the roles change; if an admin coming into a role finds that things aren't as they were documented, someone gets yelled at
    This also has the advantage that you're no longer completely screwed if someone leaves, goes sick, or gets promoted. It also makes it clear to the people in question that the company can get along quite nicely without them.

  • How about (Score:4, Insightful)

    by 0racle ( 667029 ) on Monday January 10, 2011 @12:05PM (#34823952)
    Everyone treats everyone else like adults and everyone acts like an adult? Honestly, if you don't trust your admins, why are they your admins?

    Also, simple change management alleviates most of these problems. Even if it's just a log for what happened so that the next shift or your colleague tomorrow knows what you did today. Then again, I guess that is really back to acting like adults.
  • by McMuffin Man ( 21896 ) on Monday January 10, 2011 @12:05PM (#34823958)

    This is an old problem in high assurance systems. As other posters have pointed out, at some point you have to trust someone. But you can still "trust but verify".

    The standard solution is "division of privilege". Over time folks have learned that the key is a system which audits everything the admin does, and the one thing the admin can't do is modify or delete the audit trail. A separate person or team has the role of auditor.

    This is one of the requirements of a B2 level system in the old Orange Book model, and you'll see it as a requirement if you need to provide systems for most countries' military or intelligence organizations. It's rarely used elsewhere because more or less no one else is willing to pay the staffing costs. The solution there is to trust someone, and be ready to fire, sue, and/or prosecute if they violate that trust.
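    One common approximation of "the admin can't touch the audit trail" is to ship logs off-box, immediately, to a machine the sysadmins have no login on and the auditor team owns. A sketch, assuming rsyslog and a hypothetical log host:

    ```
    # /etc/rsyslog.conf fragment on each managed host (sketch;
    # loghost.example.com is a placeholder for your auditors' box):
    # forward every message as it arrives, over TCP (@@ = TCP, @ = UDP)
    *.*  @@loghost.example.com:514
    ```

    The admin can still stop the forwarding, of course, but the gap in the remote stream is itself the evidence.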

    • by Animats ( 122034 ) on Monday January 10, 2011 @12:37PM (#34824346) Homepage

      This is one of the requirements of a B2 level system in the old Orange Book model, and you'll see it as a requirement if you need to provide systems for most countries' military or intelligence organizations. It's rarely used elsewhere because more or less no one else is willing to pay the staffing costs.

      Right. I developed an OS for that model many years ago.

      The key to this is a mandatory security/integrity model. At a given privilege level, you can only run programs trusted at that privilege level. So, if you're running as some kind of administrator, you can only run trusted administrator tools. You can't use a text editor on the password file, for example.

      Then you have compartments, and some tools are accessible only in some compartments. For example, the person or program that makes backups needs the ability to read almost everything, but to write almost nothing. (Restoring from backups, which is done less often, requires different privileges.) The security officer can add and delete users, but can't install programs. All this is enforced by the OS, looking at privileges associated with files, users, and programs, not by the applications themselves. A few applications are trusted, and they have to go through an elaborate approval process, which means they're usually rather dumb apps.
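      A toy sketch of that rule, with invented names (this is not the actual B2 mechanism, just the shape of it): a privilege must be vested in the *program*, and the subject must hold the *compartment*, and the check lives in the OS rather than in the application.

      ```python
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Program:
          name: str
          trusted_privileges: frozenset  # e.g. {"read_all"} for the backup tool

      @dataclass(frozen=True)
      class Subject:
          name: str
          compartments: frozenset

      def may_run(subject: Subject, program: Program,
                  privilege: str, required_compartment: str) -> bool:
          """OS-level check: the privilege must be vested in the program,
          AND the subject must be in the right compartment. The application
          itself is never consulted."""
          return (privilege in program.trusted_privileges
                  and required_compartment in subject.compartments)

      backup_tool = Program("backup", frozenset({"read_all"}))
      editor = Program("vi", frozenset())  # untrusted: never gets privileges
      operator = Subject("backup_operator", frozenset({"backup"}))

      print(may_run(operator, backup_tool, "read_all", "backup"))  # True
      print(may_run(operator, editor, "read_all", "backup"))       # False: no text editor on the password file
      ```
      
      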

      The "control panels" used by hosting services are a step in this direction. Users can do some things, and first-line tech support people can do others.

      Currently, the big hole is program installation. Installers typically demand far more privileges than they should. In a mandatory security model, installation of an ordinary "application" should mean that the installer has write permission for the vendor's compartment and nothing else.

  • Smack * (Score:4, Insightful)

    by onyxruby ( 118189 ) <onyxruby&comcast,net> on Monday January 10, 2011 @12:11PM (#34824004)

    Peer Review, Change Control, Auditing, Maintenance Windows, Testing all changes in a lab before production, source and version control / maintenance. These are all best practices, work regardless of operating system and don't require any special software.

    Why oh why do you want to use software to take the place of established best practices? Best practices are there for good reasons, and those reasons usually have multi-million dollar lessons attached to them. You don't need special software, just a heavy that says yes you /must/ do it this way and raises hell when you try otherwise...

  • There's a relatively simple way to handle this. First, you set up a call center with operators on the phone who each have access to the servers under management. They each have sudo rights to a single command: to create an xterm as root. Their workstations are locked down and do not have an X server installed. (You can take this further and restrict X from reaching them via firewall policies.)

    Second, the admins who need root access do have an X server installed. When they need root access to a system, they
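    The single-command sudo grant described above might look roughly like this in /etc/sudoers (a sketch; the group name and alias are hypothetical, and DISPLAY must survive sudo's environment scrubbing for the xterm to reach the admin's X server):

    ```
    # /etc/sudoers fragment (sketch)
    Cmnd_Alias ROOTTERM = /usr/bin/xterm
    Defaults!ROOTTERM env_keep += "DISPLAY"
    %operators ALL = (root) ROOTTERM
    ```

    The effect is that no single person can open a root shell: the operator can launch the xterm but has nowhere to display it, and the admin has a display but no right to launch it.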

    No doubt many sites are "lazy" or "old-fashioned". Solaris has had "Role Based Access Control" for many years. Different tasks can be farmed out/delegated to different people.

    Auditing, etc. all provided.
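    For the curious, the Solaris side might look roughly like this (command forms from memory; check roleadd(1M) and usermod(1M) before relying on them):

    ```
    # create a role limited to a rights profile, then grant it to a user
    roleadd -m -d /export/home/netadm -P "Network Management" netadm
    passwd netadm
    usermod -R netadm alice
    # alice then does:  su netadm   (and works in a profile shell, audited)
    ```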

  • In my household I have been implementing SAP (Spousal Approval Protocol) for years.
  • Yes, you do, even if that someONE is YOU.

    For all you quivering sysadmins out there, and you lusers that question their authority and trustworthiness:

    Do you trust your CIO to not shackle you to poor choices just for their kickbacks?

    Do you trust the Board to not limit your opportunities by failing to act on corporate goals?

    Do you trust your CEO to not collude with your accountants and cook the books?

    Trust.

  • Are there more sweeping yet practical solutions out there for avoiding the weakness of a singular empowered superuser?

    Give the responsibility back to the users.

    By removing the responsibility from users, one keeps them oblivious to the infrastructure problems. That perpetuates arrogance and the "not my responsibility" mentality.

    By moving the responsibility to admins, to people who do not use (for their primary purpose) the services and infrastructure they are responsible for, one makes them oblivious to the actual user needs. And, self-servingly, they often make the infrastructure more complex than it really needs to be - fe

  • Use the wikipedia-approach:

    Don't restrict, but instead log exactly who does what change when and why, and make it trivial to undo any change.

    For example, for /etc use revision control, and require that all changes be committed.

    That way, yes, the one doing something may screw up, but you can easily undo it. And when the customer calls and goes "why doesn't that work, it worked last Thursday!" you can trivially get a list of all changes since then.

    As an added bonus looking at logs and commit-diffs give a new a
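    The /etc-under-revision-control idea can be sketched with plain git (etckeeper packages this up for real systems; the sketch below uses a scratch directory and an invented sshd_config change, since on a real box you'd run it in /etc as root):

    ```shell
    # track a config directory in git
    cd "$(mktemp -d)"
    git init -q
    git config user.email "root@localhost" && git config user.name "root"
    echo "PermitRootLogin no" > sshd_config
    git add -A && git commit -qm "baseline"
    # every change gets committed, with the why in the message:
    echo "PermitRootLogin yes" > sshd_config
    git add -A && git commit -qm "allow root login during migration, per change ticket"
    # "it worked last Thursday!" -- list everything changed since then:
    git log --stat --since="last thursday"
    # and trivially undo the bad change:
    git revert --no-edit HEAD
    grep PermitRootLogin sshd_config   # back to "PermitRootLogin no"
    ```
    
    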
