Linux in a Business - Got Root?

greenBeard asks: "I work for a government contractor, and have recently convinced them to purchase a Beowulf cluster, and start moving their numeric modelers from Sun to Linux. Like most historically UNIX shops, they don't allow users even low-level SUDO access to do simple things like change file permissions or ownerships in a tracked environment. I am an ex-*NIX admin myself, so I understand their perspective and wish to keep control over the environment, but as a user, I'm frustrated by having to frequently call the help-desk just to get a file ownership changed or a specific package installed. If you're an admin, do you allow your users basic SUDO rights like chmod, cp, mv, etc (assuming all SUDO commands are logged to a remote system)? If no, why don't you? If you allow root access to your knowledgeable users (i.e., developers with Linux experience), what do you do to keep them 'in line'?"
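(For readers who haven't used the mechanism being asked about, a minimal sketch of command-limited sudo with remote logging - a sudoers fragment plus a syslog forwarding rule. The group name "modelers" and the host "loghost" are invented:)

    # /etc/sudoers fragment
    Cmnd_Alias FILEOPS = /bin/chown, /bin/chmod, /bin/chgrp
    %modelers ALL = FILEOPS          # members of group modelers may run only these as root
    Defaults syslog=authpriv         # sudo logs every command via syslog

    # /etc/syslog.conf fragment: ship the sudo log to a remote box
    authpriv.*    @loghost.example.com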
  • SUDO Commands (Score:2, Offtopic)

    by Traxton1 ( 154182 )
    You forgot AIII-YAAA!

    • Re:SUDO Commands (Score:5, Informative)

      by mrbooze ( 49713 ) on Friday December 30, 2005 @01:34AM (#14362889)
      I don't really get the original premise. Nobody needs to be root to run chmod, cp, mv, etc on their own files. The only command mentioned one might need root for is chown. Which would make me ask the question, why do you need to change file ownerships so often?

      It would take a hard-core serious business case to convince me to grant someone root access, even sudo-limited root access to a production system. The fact that I might have a "log" of whatever broken thing they did to take a business critical machine down is fairly irrelevant to me. My job is to make sure that doesn't happen in the first place.
      • Re:SUDO Commands (Score:5, Insightful)

        by dsoltesz ( 563978 ) * <deborah.soltesz@gmail.com> on Friday December 30, 2005 @04:11AM (#14363378) Homepage Journal
        I used to be both sysadm and web admin of my web server. When I moved to a different division where there is a sysadm group, I thought I'd die without root. Not only have I discovered I don't miss it (given I have some sudo privs), but I quickly learned I didn't miss sysadmining my own server - I get the great joy of focusing on being a web developer and leaving most of the fuss to someone else.

        As both the primary web developer and web admin, I probably qualify as a "special case" because I'm both end user and quasi-sysadm. The sysadms take care of the O/S, standard software, primary user accounts, etc., and I handle server software configs, user support on the development system, etc. I do have sudo privs for chmod, chgrp, chown, and so forth to give users ownership of their stuff, as well as the necessary sudo privs to manage certain daemons. However, end users do not touch my production box, and have zero special privs on my development box. My "regular user" log-in has no sudo privs - I'm a Jane Schmuck just like the rest of them.
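        (For the curious, the daemon-management half of that arrangement comes down to a few sudoers lines - a rough sketch assuming Red Hat's service wrapper; the user name is invented:)

            webadm ALL = /sbin/service httpd start, /sbin/service httpd stop, /sbin/service httpd restart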

        • Re:SUDO Commands (Score:3, Insightful)

          by TallMatthew ( 919136 )
          I used to be both sysadm and web admin of my web server. When I moved to a different division where there is a sysadm group, I thought I'd die without root.

          A decent number of people start out with *nix at home and do everything as root so they don't have to figure out permissions. When they leave that environment for an org's production environment, they think they need root. I did that too, until I blew away a bunch of mail by accident at my first sysadmin job. Doh.

          The only time I've ever consider

      • by msobkow ( 48369 ) on Friday December 30, 2005 @09:37AM (#14364116) Homepage Journal

        On a production box, the admins have access to sudo, and root itself is locked down except for scheduled maintenance/upgrades or emergencies. No paperwork, no root.

        As a developer with over 15 years *nix experience, I have never had root access to a box unless I was doing an install, except for my own desktop workstation. In the case of my desktop, the only reason developers had root was so we could kill rogue services during debug sessions gone bad.

        Under no circumstances do I agree with any user installing additional software on a box. If it's needed, it gets approved and installed for everyone who needs the functionality, not by rogue users.

  • I usually write kernel modules that nerf certain permissions.

    This way, users can do what they like, but they can't fsck anything up.

    Failing that, I reckon a big man with a large knife could probably go a long way to keeping them in line.
  • Users != Root. (Score:5, Informative)

    by paitre ( 32242 ) on Friday December 30, 2005 @12:39AM (#14362668) Journal
    Much as I hate to break it to you - this is SoP.
    End users Do. Not. Get. Root. Even allowing SUDO access to change file permissions, copy, or even move files is just asking for trouble.
    Installing software or libraries? Hell no. Not on a live system.

    If they have a development-type machine at their desk, that's one thing (just don't call for support if you break the damned thing). Even then, my preference is that they have limited access.

    On large, shared systems, users get as much, and as little, access to do their jobs as necessary, and absolutely no more than that. I have to keep the system up for other users, I can't have power-user #1 screwing things up by changing permissions on something they really shouldn't be touching (let's take the compiler for example...)

    A little knowledge makes one dangerous, and I'd just as soon no one other than those paid to admin the machines had access.
    • Re:Users != Root. (Score:2, Interesting)

      by Anonymous Coward
      All well and good, but I have worked in places where I definitely had more skills than the admin and had to wait long periods of time for the inept admin to fumble their way through something I could do in 5 seconds.
    • One variation that I've seen work well is to allow people full access to their own workstations, but not on servers (or clusters in your case).

      This limits the types of access people can have on the heavy server machines, but lets them install apps or do whatever they need locally, including installing a different OS or OS variation based on need or preference.

      Of course, the more people 'adjust' their workstations the less support you can afford to give them. You also have to be very careful about how much s
      • by Halfbaked Plan ( 769830 ) on Friday December 30, 2005 @01:56AM (#14362954)
        There are some interesting 'privilege escalation' things that can happen on machines 'owned' by the user on a big network, though.

        The one I experienced firsthand was a Windows NT machine, my desktop, that I ('naturally') had full admin access to. This was a machine on a large, very diverse corporate network (there were Solaris, OS/2 Warp, Netware, and Windows NT servers on it). I discovered, quite by accident, that if I ran the POSIX Interix (now SFU) shell on my NT workstation (something the company had bought for me and I had installed myself), I could create any account I wanted on my local machine, and it would let me, using that account name, access shares on the network and do whatever I wanted to files that username 'owned'. This was the network where a company that makes implantable medical devices kept its work. I suspect the 'defect' had something to do with NIS and 'travelling profiles' in Solaris, and with the security system not being equipped to deal with other, unsecured Unix-like hosts on the network. Incidentally, I didn't discover the problem by 'poking around where I wasn't supposed to be'; I simply noticed I was suddenly able to do things to files without entering my UNIX password, as had been required in the past. Something clicked in my head, so I created a local account on the NT box that matched an important person's UID on the Unix system... yep, I had all his permissions.

        Delete test account. Never touch again. Too scared to mention it to anybody. It's been enough years now that I can even mention it in public. I hope they've secured things a bit better now, because these days there are unsecured Unixy systems all over the place.

        • Take a lesson from this guy - he's smarter than he looks. When you find a security hole like this, do NOT report it unless you can do it anonymously. If you can't report it anonymously, then just sit on the knowledge until the end of time. This is your job, your life, and your paycheck. We've all read the stories about how the person who reports a security hole gets criminally prosecuted for "hacking". You might be a smart person, but everyone around you is a blathering moron. That is a FACT. That blathering moron isn't going to say "thanks for pointing out this embarrassing security hole that my ass was hanging out of." The blathering moron is going to try to cover his ass by blaming somebody else, and the easiest somebody is YOU. That way he takes care of the problem and gets brownie points for uncovering a dangerous "hacker" within the company.

          Next thing you know you're getting arrested by a nice FBI agent named Bob, and then getting cornholed for days in the local jail waiting for a judge to set bail. It's not worth it.

        • funny thing, that security stuff.

          Years ago I ran a network to transfer messages, much like Email. Then, it was typical to relay some 10,000 messages per week. I worked in a regional office. We're talking DOS 3.2 and Banyan VINES days, 286 systems and token ring networks, when 19200 bps was considered "fast" and the best modems available were 9600 bps.

          Well, being curious and all, I read the manuals, starting with MS-DOS. Got a working knowledge, used batch files to tweak certain, commonly run commands, and
        • by nathanh ( 1214 ) on Friday December 30, 2005 @06:13AM (#14363643) Homepage
          I could create any account I wanted on my local machine, and it would let me, using that account name, access shares on the network and do whatever I wanted to files that username 'owned'. This was the network where a company that makes implantable medical devices kept its work. I suspect the 'defect' had something to do with NIS and 'travelling profiles' in Solaris

          Nah, that's just a standard limitation of NFS. There is no security in NFS; the unofficial expansion of the acronym is No Fucking Security. The server trusts the client is providing a valid userid. You spoofed the userid and NFS has no way to detect that because the server assumes the client always tells the truth.

          Some environments implement netgroups to limit the opportunity for attack. The server checks incoming client connections against this list; clients on the list are assumed to be properly secured so nobody using the machine can spoof a userid. This is not very effective either because spoofing a client IP address is almost as trivial as spoofing a userid.

          What you found was simply standard practice for NFS, as frightening as that might be.
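          (To make the trust model concrete: a minimal /etc/exports sketch, assuming Linux nfs-utils syntax and an invented domain:)

              /export/home    *.corp.example.com(rw,root_squash)
              # root_squash remaps only uid 0. Root on any client can still su to an
              # arbitrary local account and be trusted as that uid, because AUTH_SYS
              # sends uids unauthenticated. Kerberized NFSv4 (sec=krb5) is the real fix.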

    • Re:Users != Root. (Score:5, Insightful)

      by Frumious Wombat ( 845680 ) on Friday December 30, 2005 @01:23AM (#14362850)
      Good for you. I don't give myself privs on the system (I have a separate account for root-access), and I'm certainly not giving people who aren't familiar with all of the ins and outs of a production system access. I am most certainly not giving developers, who have a tendency to muck with libraries and paths to solve problems, access, even if it's logged. Being able to yell at the specific miscreant later really is poor compensation for having to take a production system down, repair their handiwork, and deal with the rest of the angry users.

      I used to deal with this on our production cluster all of the time. I implemented a pretty rigid policy early on, which was no root for users on anything that was (1) in production, or (2) had access to the various network servers. This policy came about after a few 'experienced' users demonstrated their skills. Accidentally changing access privs and ownership of the /Users directory tends to raise sysadmin blood pressure, as does nuking /etc, thinking it was ~/etc, or updating a system library to fix your program, which then breaks production codes that people are actually using.

      The problem always seems to be that people who've admined their own, solitary, system, think that experience automatically translates into full privs on a much larger, integrated, environment. This is where I miss VMS, and its fine-grained privs. I'm not sure I'd hand those out either, but at least it's better than all or nothing. The next best solution is giving developers access to a box that you can nuke and reimage back to a standard state, and letting them hack with that.

      • Re:Users != Root. (Score:5, Insightful)

        by dbIII ( 701233 ) on Friday December 30, 2005 @03:03AM (#14363200)
        The problem always seems to be that people who've admined their own, solitary, system, think that experience automatically translates into full privs on a much larger, integrated, environment.
        You get stuff like the guy who has the root password for the purpose of redundancy doing experiments on a couple of subnets to learn about routing, and preventing 40 users from doing anything other than newspaper crossword puzzles until the problem is found. I had to wait nearly an hour after that one before I was sure I could talk to the guy in a civil fashion - and he still thought I was an arsehole for asking him not to disrupt things, because he didn't know enough to realize it was wrong, and since he hadn't lost connectivity himself, I must have been making things up.

        People who take a selfish view instead of a system view shouldn't be allowed to muck about with multi-user environments without strict guidelines that consider the system view. Unfortunately this can often come over as a power trip and office politics even when you can bring up valid reasons. Ultimately, if you aren't going to be in the office after hours to fix a production system then you are not responsible enough to have the root password for it - a microcosm of "with power comes responsibility".

        • Re:Users != Root. (Score:3, Insightful)

          by tomjen ( 839882 )
          You get stuff like the guy who has the root password for the purpose of redundancy
          This is a good policy, but I read a book that suggested giving it to the boss in a sealed envelope. That way, if you die, the system is not locked forever, but the boss can't mess with it - and if somebody does mess with it, the admin can prove it wasn't him.
    • I have an idea (Score:3, Interesting)

      Howsabout this. A second, sudo-like command. Let's call it "root". You use it like sudo, but rather than actually doing anything, it just logs the user, working dir, and command line in a to-do list. Admin staff can browse the log via a web UI, edit each command line in a text box, check each item as "do it" or "don't", and press "go". So if I do "root chown bsmith:staff myfile; root cp myfile ~bsmith", the admin staff will see lines like:

      jmorrison | /home/jmorrison/ | [chown bsmith:staff myfile ] Do[_]
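      (A minimal shell sketch of such a wrapper - the tool is hypothetical and the spool path invented; assume the admins pre-create the directory mode 1733 so users can append but not read each other's queues:)

          #!/bin/sh
          # "root": queue a privileged command for admin review instead of running it
          queue=/var/spool/root-requests/$(id -un).log
          printf '%s | %s | [%s]\n' "$(id -un)" "$PWD" "$*" >> "$queue"
          echo "queued for admin review: $*"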

    • Amen (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Friday December 30, 2005 @02:12AM (#14363011)
      Hell, at times I think I shouldn't even give the users a keyboard and monitor. It's not a question of if the users will screw something up, but when. They are ALWAYS doing things they shouldn't. Thus the less they can do, the better.

      The worst are the "I'm a sysadmin" types. For every one I meet that actually has the experience to make them a competent sysadmin, there are 50 that know just enough to be dangerous, but think they know it all.

      For example, some time ago I decided to roll Firefox out to the educational labs and make it the default browser. All other considerations aside, its minority status in the browser market makes it far less of a target. Well, a couple days later I get some guy in who's bitching about Firefox being installed in "his domain" and he wants it removed. Upon further questioning, it becomes clear he believes that programs are installed in user accounts. I cannot seem to convince him that the program is a local installation on every system and no, I'm not removing it.

      Now for Windows systems, the damage someone can do is somewhat limited since all software installs are on the local system. However the UNIX systems all run off a central server. Like hell we are giving anyone anything but read access to that. All the time people want things installed or modified for their particular project. Quite often, they have no idea what they are asking, and what they want done would completely break the app, or worse.

      I agree that access should be as limited as still lets you get the job done. Now, in some cases that needs to be total access. Fine: you get a system that's separate, and you assume responsibility for it. If you are doing something such that you need system access, you'd better have the knowledge to fix what you break. In other cases, come to us; that's what we are paid for.

      We even operate that way internally in our group. I don't just go and change shit in DNS. Not because I don't know how, not because I don't have the root password, but because it's not my area. Better I should ask the guy who is supposed to do it. That way, there's less chance of something getting broken.

      I think the problem is that some users have a really inflated sense of self-importance and entitlement. They think that their project is real, real important - more important than everyone else's. Thus they don't have the time to wait for the admins to do things; they want to just be able to do them themselves. If it messes something up, well, then the admins can fix it. Of course, people like that are also the most likely to do something that will break things for others.

      The more shared the resource, the more you have to be strict with the access. Even on user desktops, limited access needs to be the rule. Support can't spend hours and hours fixing problems caused by users that don't know what they are doing. It's just not cost effective.

      If you truly have the need and knowledge to run your own system, then fine, take it up with management. However part of that understanding has to be you can't bother the support team if you hose things. If you aren't good enough to admin the thing yourself, you probably ought not have admin permissions.
    • Re:Users != Root. (Score:5, Insightful)

      by HardCase ( 14757 ) on Friday December 30, 2005 @02:15AM (#14363025)
      I have to agree. I'm neither an administrator nor a developer. I use Solaris and Linux platforms for electrical simulations. Neither I nor my fellow engineers have, want, or need root access. Our admins handle all of the software installations and other system maintenance, as they should. The admins have created groups for the various functions that we perform - the appropriate user is a member of the appropriate group(s). That way the only files that we can mess up are the files that we own. And, fortunately, our admins have implemented an effective backup plan so that when we do make a mistake (and, believe me, we do make mistakes), it can be fixed with minimal headaches for all concerned.

      In our case, there's really no way to allow root access to local machines - everything is on the network via NFS. Software installations are tightly controlled and it's virtually impossible for a hardware casualty to cause any significant loss of data.

      This is in an organization with roughly 5000 engineers using the *NIX network and an IT budget in the tens of millions of dollars. Believe me, the *NIX side of the house works a hell of a lot better than the Windows side.

      Oh, and on my SunBlade at home, I almost cringe every time I run a command as root...

      -h-
    • Re:Users != Root. (Score:5, Interesting)

      by SmallFurryCreature ( 593017 ) on Friday December 30, 2005 @05:16AM (#14363525) Journal

      On the rare occasions I am not overruled by marketing, I tend to prefer the following development setup.

      The developer's personal machine: full access, do what you wish. If you cannot trust your people with that, then do not employ them. No developer worth his salt will ruin a desktop machine. Someone who either on purpose or by accident compromises his machine should be fired. Almost every developer has his own preferences and methods of working; I see no reason to restrict them on their own machine. Want to test a replacement for apache? Go right ahead.

      The test server is the step where the locally developed code, setups, etc. are tested. Access to this machine is limited by protocol. Basically, any change has to be documented and justified: a change must be accompanied by who did it, why they did it, on whose authority, and exactly what was done. The test server is NOT a development environment; it is a proving ground for new developments. This is sometimes very hard to explain. So in caps: YOU DO NOT DEVELOP ON THE TEST SERVER, YOU TEST. If the test server works as expected with the new change, then the change is ported to the live server. Friday afternoon is a bad time for this, but somehow always seems to be the time desired by the guys who sign the paychecks.

      But in principle I see no reason to deny any developer root access to any of these machines. What needs to be in place is a proper protocol to make sure that people know how to deal with changes (no point in documenting all the changes if people don't read the documentation) and that you have good people who do not mess with machines.

      I have been the victim of bad restrictions too often to have any faith in the people who create them. I had to personally subvert a production webserver to handle IM traffic because the office network blocked it and our sales support staff needed it (a case of an outside department being absorbed into the larger organisation). It took over 3 months for them to finally get official permission to update the firewall rules. I myself was denied SSH access to the outside webservers for a full week, until I told them I would simply work from home permanently until it was fixed.

      If you have good people, they can be trusted with root access. If you do not have such people, then they cannot be trusted with being let into the building. My first IT job had a guy who installed a keylogger. He didn't have root access; he simply had a limited account on a Windows machine and downloaded some exploit kit.

      But in the same job I was outsourced to a very large Dutch company and had root access on their AIX production machines. I, then a complete newbie, had to do my development on the production machines, since my desktop was too restricted to install the software needed and in any case couldn't handle the file sizes involved (good luck opening a 2 GB database dump in either Word or Notepad on NT 4.0). One morning I was in early (so I could leave early and miss the endless meetings) and was asked by the director of the company to start a database. I was the only person there who could do it; if I had not started that database, the entire national company, with a hundred offices, could not have started the working day. (It was a temp agency.)

      A stable production environment does not come from limiting your employees; it comes from not letting your unix admin quit in disgust, and from having proper training in place so your critical servers do not depend on a hired developer who is still reading his Unix for Dummies manual.

      If the above sounds fanciful, be glad I did not tell the complete story. It was the most insane environment I have ever been in. It was so bad that when the company was bought by a rival and they learned the true state of the accounts, it even made a national newspaper. While the article focused mostly on financial issues, it also reported that they found the IT department to be a total mess. Not bad for your first assignment, eh?

  • sudo chmod == pwnt (Score:5, Insightful)

    by qweqazfoo ( 765286 ) on Friday December 30, 2005 @12:40AM (#14362671)
    Ever heard of setuid root?
  • by Billly Gates ( 198444 ) on Friday December 30, 2005 @12:40AM (#14362674) Journal
    ACLs are quite nice, and so are different levels of security.
  • Sarbanes-Oxley? (Score:3, Informative)

    by pbrammer ( 526214 ) on Friday December 30, 2005 @12:42AM (#14362685)
    Do you fall under the scope of the Sarbanes-Oxley act? By not allowing sudo or plain ol' root access, accountability goes way up: you have to call the help-desk to perform whatever action you need to take, which effectively limits the set of people who can make changes to the system, and presumably the changes that are made are logged.
  • Two names... (Score:5, Insightful)

    by toupsie ( 88295 ) on Friday December 30, 2005 @12:44AM (#14362689) Homepage
    Sarbanes and Oxley [sarbanes-oxley.com]. I don't know you, you don't need that access, we have a process in place and I am not signing off on you. Follow the procedure or go somewhere else to work.
    • Re:Two names... (Score:3, Insightful)

      by Anonymous Coward
      Follow the procedure or go somewhere else to work.

      That's fine. You just remember that the point of a work computer isn't for that computer to be secured; it isn't there so that access logs can be made for it, and it isn't there so that you have a system to hover over and say, "I keep this system secure."

      That computer system is there for people to use to do work, and your job is to move Heaven and Earth to make sure that the people using that computer system can do their jobs. Your job has no point
  • by Jamori ( 725303 ) on Friday December 30, 2005 @12:44AM (#14362690)
    Allowing root access on a knowledgeable user's local machine is one thing, but multiple arbitrary people with root on your main cluster is entirely another matter. There are simply far too many chances of one of them "accidentally" doing something they didn't mean to and borking the system. That's definitely not an issue you want to deal with.

    And even allowing chmod, mv, etc via sudo can be dangerous. Someone accidentally issuing a "sudo chmod 777 -R / ", having meant to type "./" for everything below their current directory, isn't going to be good for your system health and is going to be somewhat of a pain to recover from, even if you do know who screwed things up.

    • If I have sudo chmod, all I need to find is one root-owned file. Let's say /tmp/owned. Then I can do the following commands:
      sudo chmod o+w /tmp/owned
      cp /bin/bash /tmp/owned
      sudo chmod u+rs,o+rx /tmp/owned
      /tmp/owned -p
      (the -p flag keeps bash from dropping the effective uid) and I have a root shell. That is why I don't give sudo access to any command unless I'm prepared to give sudo bash.
  • the way I do it... (Score:5, Interesting)

    by Heem ( 448667 ) on Friday December 30, 2005 @12:46AM (#14362698) Homepage Journal
    You are going to get a bunch of responses, most of them from people who will say something like "NO. NOBODY GETS ROOT, PERIOD."

    Well, in an ideal world, it would be that way. We would set up systems for people to use and they could just use them without root privilege. Unfortunately, we know that isn't possible if you want your users to actually be productive and get things done.

    I work for a large software company. Trust me, you'd know the name of it if I could tell you. We use Linux on the desktop, as well as on the servers. We also have some Microsoft servers that are either for legacy purposes (haven't been updated yet) or for testing applications against MS environments. Anyway...

    All my users have laptops with Linux on them. They all have the root password to their individual laptops. Many of them also have a server at their desk for their own testing purposes. They have root on that.

    However, the "real" servers that are accessed by someone that isn't themselves, the users do not get the root password, ever.

    I look at it this way. If you bomb your laptop or your test server, you can fix it, or you can call me and I'll walk you through fixing it, fix it myself, or just give you a new clean configuration.

    If you bomb my server, I'm going to make sure you never have access to anything, ever.

    • I think your shop is pretty typical. For a software developer, there are a lot of reasons to give them root access to a non-shared machine they use for development and/or testing. Giving root access to a developer on a shared, production machine, no matter how competent an administrator they are, is just bad policy.

      Sudo on shared test machines can be a bit more liberal though. Much of the time developers need to start and stop services multiple times a day, if not an hour. It's impractical and e
      • And as good a developer or QA engineer as they may be, they are not the ones accountable for the systems. At the end of the day, I, and only I, am accountable for building and maintaining good systems.

      • As a developer... (Score:3, Insightful)

        by Belial6 ( 794905 )
          I have to agree. Prior to my current gig (going on 6 years now), every environment I worked in allowed changes in production. The trouble never ended. People would make changes and bring everybody down. My current job has complete and separate DEV/TEST/PROD servers. This has saved us a great deal of trouble. There is one other developer who works with me (in Domino), and we have complete control to do anything we want to Dev. Test and Prod are off limits for any changes. Even with this I will som
  • Have them share a group? They can always share files by allowing complete group permissions, can they not? If that is all they want to do.
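    (A minimal sketch of the group approach - one-time admin setup with invented names; after this, no sudo is involved at all:)

        groupadd proj && usermod -aG proj alice && usermod -aG proj bob
        mkdir /data/proj && chgrp proj /data/proj
        chmod 2770 /data/proj   # setgid bit: new files inherit the proj group
        # users then just need a cooperative umask, e.g. umask 002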
  • Nope. (Score:3, Insightful)

    by Onan ( 25162 ) on Friday December 30, 2005 @12:48AM (#14362706)
    The next time that server blows up and needs to be replaced, or we simply decide we need to add another one, building it is my problem, not those developers'. And it's a whole lot more problematic if I don't know all of what was done to get it into its current state.

    Of course, that really just goes back to the fact that you should never do anything adminnish directly on a single server, ever. Your configuration management tool should do it for you, so it will also know to do it to the next one.
  • not root, but ..... (Score:2, Informative)

    by skelley ( 526008 )
    .... we run an SOA environment with about 50 different apps on many machines. We run each app under a separate uid. App developers are in a group named for the app, and members of that group are given full sudo permissions for the app uid. Creative use of /tmp and cp has eliminated most of the chown requests. The only issue is for those few developers who need to work on more apps than the 32-group limit allows; they have to suffer with the newgrp command.
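    (A rough sudoers sketch of that per-app arrangement; the app name "billing" is invented:)

        %billing ALL = (billing) ALL    # group members may run any command as the billing uid
        # usage: sudo -u billing /opt/billing/bin/restart.sh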
  • by Uhh_Duh ( 125375 ) on Friday December 30, 2005 @12:50AM (#14362717) Homepage
    Developers with Linux experience are a LOT more dangerous than developers without linux experience. My experience has been (100% of the time) when I give "experienced developers" access to commands like 'chmod', I find all kinds of files mode 777 (among a list of about 10,000 random, stupid things developers do) because, well, I've heard pretty much every excuse you can imagine.

    The problem is that as soon as people outside of the core sysadmin team have access to critical system commands (cp, chown, chmod) the integrity of the box is left to chance. There's always the possibility someone is going to do something outside the policy. Sysadmins make it their job to know and understand the impact of every change to a box. Developers tend to make changes in order to get their stuff to work, regardless of the consequence (hey, each group is just trying to do their job, which is "make it work!!" -- I'm not defending either side).

    My rule of thumb:

    - Developers get root in their dev environments.
    - Sysadmins get root in the production environments (developers shouldn't even have user-level logins to these machines.) If your company is releasing software (even for internal use) properly, the IT group will be managing the code as a product, using developers as a help desk rather than letting them manage the applications directly.

    Stick to this and everyone will be happy.
    • My problem comes from when I see operations not have root on production servers. They need to call another group to do server specific stuff even though their app is the only thing running on the server. Does that make sense?
      • Yes it does.

        It's usually the other group's responsibility to keep the box and app running. That means changes are well communicated and planned, not ad-hoc.

        Also, it's to make sure the single app is the only thing running on the box.

        Also, it's to make sure that there is a known level of security on the box.
  • To suggest that you install a rootkit on the computer you need to use.

    As for keeping them 'in line' once they have root access... I recommend a pointy stick.

    Umm... or training. Even knowledgeable users can accidentally forget to reset permissions, and since you're a gov't contractor, you have to be extra careful about data security. Right?
  • by unixpro ( 464350 ) on Friday December 30, 2005 @12:53AM (#14362725)
    I come from the other side of the fence. I am a developer of multiplayer servers. For my part, I couldn't do my job without root access. I need to do things like set the date and time on the machines, install to /bin, upgrade compilers, etc. If I had to ask the helpdesk every time I needed root, they'd just set up right outside my cube.

    On my Windoze machine, OTOH, I have no need for system level permissions, and I don't ask for them. I can install software, but so can all the other developers (and, I think, anyone in the company). All I use that machine for is e-mail and testing client connectivity to my servers, when I'm not using my Linux test client.

    Some people need root and some don't. Don't make blanket policies unless you're prepared to make exceptions. Oh, and, for everyone's sake, if you do restrict access, please, please make sure that at least one person who can change things is available 24/7. I can guarantee you that Peterson up in Accounting is going to have a system crash that requires help when trying to get the year-end reports out at 2:30 A.M. before the big board meeting at 9:00.
    • >I am a developer of multiplayer servers. For my part, I couldn't do my job without root access.

      If you are a developer, you don't need root access. All the examples you've given are the system administrator's job.

      >Don't make blanket policies unless you're prepared to make exceptions.

      It's not really a blanket policy if there are exceptions.
    • Sigh... (Score:4, Insightful)

      by ivan256 ( 17499 ) * on Friday December 30, 2005 @01:56AM (#14362951)
      I'll give you that you need root to set the date and time, but your system should do that for you with NTP, so it's not that you don't need root for that, but that you shouldn't have to do it...
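      (For reference, keeping the clock set without root comes down to a two-line ntp.conf - a sketch assuming the stock ntpd:)

          server pool.ntp.org iburst
          restrict default nomodify notrap nopeer noquery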

      But for your other examples... You do not need root to upgrade your compilers. You do not need to install things in /bin unless you're the system administrator. If you develop software that can only be run out of a particular directory, please post your name and address in a response to this comment so that those of us who have been forced to use such software in the past can come beat the living snot out of you.

      If you're a developer, you should probably have your own machine, which you would have root access to, but save for setting the time, you don't need root for any of the tasks you've described. Your license to call yourself knowledgeable is hereby revoked.

      Even if you do have root, unless you expect the users of your software to have root access as well, you shouldn't be using your root access, or you'll end up wondering why your users have problems that you don't see on your development system.
    • by Karma Farmer ( 595141 ) on Friday December 30, 2005 @02:28AM (#14363078)
      I come from the other side of the fence. I am a developer of multiplayer servers. For my part, I couldn't do my job without root access.

      I come from the other side of the fence. I am a developer of complex client-server applications. For my part, I don't even have login permissions on production.

      I have root on my local development machine and shared development. If there are problems during testing, I get a temporary logon on stage, with an admin sitting over my shoulder watching me type. But I've never had a logon on production, and I can't imagine why I would ever need one.

      I develop the app. I write deployment, testing, and rollback documentation. That way, I never need to touch the production server. This is how every real shop I've ever been in works.
  • Dear slashdot... (Score:5, Insightful)

    by GoofyBoy ( 44399 ) on Friday December 30, 2005 @12:55AM (#14362737) Journal
    ... I'm special and rules don't apply to me.

    How can I convince others of this?
  • It is best to have one and only one person with root access, even if your users are knowledgeable and honest. This eliminates the chance that two or more users could make changes to the system that, together, compromise its security or stability. Also, if something goes wrong, it's a lot easier to know who did it (!) and what needs to be done to fix it.
  • Users are stoopid? (Score:3, Insightful)

    by Gilmoure ( 18428 ) on Friday December 30, 2005 @01:00AM (#14362757) Journal
    Even the smart ones. Sure, give the users some stand-alone development machines with root access, but don't let them fuck up the cluster/servers. A lot of users are focused on their job, but they don't always see how their actions on shared equipment will impact the company or entity at large. /tech support at large lab, full of brilliant idiots.
  • Need more info (Score:5, Informative)

    by Frohboy ( 78614 ) on Friday December 30, 2005 @01:05AM (#14362770)
    This sounds to me kind of like the situation in a university Unix network. I'm not entirely sure I understand what you necessarily need that wouldn't be available (though I would like to know, to get a better understanding of the question). Certainly, at the university I attended [uwaterloo.ca], we didn't have sudo access, but we were able to develop some rather powerful applications.

    I can see an adjustment period of a couple of months, where applications you regularly use aren't available, so you ask for them to be installed. After that, assuming they don't see the general need for an application (or they don't want to have to officially support it), you could theoretically install applications under your home directory. (I was thrilled when I became a grad student and got 100MB of disk quota, so I could compile and run Blackbox as my window manager instead of the crappy twm we were generally stuck with. In fact, I made it globally executable, so my friends could use it as their window manager too. I even received a phone call once from one of the admins, asking me what this spinning "blackbox" process was running on one of the undergrad servers, since I was the only grad student or professor (and therefore in the phone directory) who also ran it.)

    These days, as part of my regular job, I am one of the unofficial sysadmins of a Beowulf cluster (largely because I'm one of the only people who have developed MPI applications that run on it). I get the odd request from other users who want me to hook them up with some library or such. I compile and install it under /usr/local/whatever, tell them how to set up their LD_LIBRARY_PATH to link against it, and they're good to go.
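    (The setup on the users' end amounts to a couple of lines in their shell profile - the path is illustrative:)

        export LD_LIBRARY_PATH=/usr/local/whatever/lib:$LD_LIBRARY_PATH
        export PATH=/usr/local/whatever/bin:$PATH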

    Again, I have to ask what you need that requires root or sudo access, that can't be solved by the rare admin call or installing under $HOME. (I really don't mean this in an insulting way. I do want to know. The story post is a little brief.)
  • Root Access? (Score:3, Informative)

    by DA-MAN ( 17442 ) on Friday December 30, 2005 @01:07AM (#14362778) Homepage
    I work for a government contractor, and have recently convinced them to purchase a Beowulf cluster, and start moving their numeric modelers from Sun to Linux.

    Congrats!

    Like most historically UNIX shops, they don't allow users even low-level SUDO access to do simple things like change file permissions or ownerships in a tracked environment. I am an ex-*NIX admin myself, so I understand their perspective and wish to keep control over the environment, but as a user, I'm frustrated by having to frequently call the help-desk just to get a file ownership changed or a specific package installed.

    Good, Good!

    If you're an admin, do you allow your users basic SUDO rights like chmod, cp, mv, etc (assuming all SUDO commands are logged to a remote system)?

    Hell No!

    If no, why don't you?

    Because it is my responsibility, and my responsibility alone, to keep those machines running. If you screw up the system, then I will have to work later and possibly come in on the weekend to fix it. This is not something I am willing to risk.

    In addition, as an ex-*NIX Administrator, you should know that the best way to keep a secure system is to give the least privs possible.
  • Makes no sense... (Score:5, Insightful)

    by boner ( 27505 ) on Friday December 30, 2005 @01:07AM (#14362779)
    (Disclosure: I work for Sun and have worked with Linux since 1994.)

    Why would you move the modelers to Linux from Solaris? There is no real advantage....
    Sure a Beowulf cluster is a nice piece of hardware, but hardware can only compensate a bit for programmer productivity... If their code is written using MPI or OpenMP or some other standard clustering environment then there shouldn't be a need to move the developers, should there? Just recompile and go.
    It is really much more efficient to shove faster hardware under a programmer than to force the programmer to adapt to a different programming environment. Programming for a cluster is hard enough without having to take into account the details of the operating system. Forcing them from Solaris to Linux might improve the execution part (on a side note, have you considered Sun's clustering tools?), but it *will* set them back in productivity while they move to different compilers and adapt the execution of the program to the Beowulf environment.

    In my opinion you have forced your customer to make a move on questionable grounds.

    Now to the matter of security. As you are aware, Solaris has the highest-level rating for security. Secure Solaris is the de facto operating system at a number of government agencies. Linux cannot hold a candle to the multiple access levels of the Secure Solaris operating system. You state that you are frustrated at needing the helpdesk for file permission changes. What is your point? Are you using the fact that YOU don't like the limitations to change a customer from Solaris to Linux? Or are you complaining that the customer's environment did not deploy Secure Solaris with its multiple access layers? In Secure Solaris there is no need to muck with sudo. Each file can be managed properly from a security point of view (come to think of it, much of that can be done with Linux too).

    Before I answer your question, let me state that I understand your point of view. When I joined the navy as a UNIX project manager, the admins gave me absolutely no rights whatsoever on the production systems. Their reasoning: '.. he can do things I don't understand, can't control or prevent.' There will always be a tension between the lockdown desired by the admins to keep their environment safe and secure and the users who want total freedom....

    In my mind there is NO good reason to give ANY user root access in a secure environment. Period. If you have been frustrated in the past by having to interface with the helpdesk, then the helpdesk needs to be improved. At the same time, I assume, any user has full access to their own files.
    You mention that you have convinced modelers to move to a Beowulf environment, so why the issue anyway? If they run cluster code, then they run as a user. All they need are basic user access rights, nothing more...

    Maybe I don't understand your point....

    • I am glad you did point out at the beginning that you work for Sun, since it does explain your point of view on clustering :) Otherwise it would look entirely crazy.

      Beowulf can mean cheap hardware; Sun doesn't. Government doesn't always need secure. He doesn't say what part of the government... it might be the Department of the Interior doing climate modelling or something. Trusted/Secure Solaris adds a huge amount of overhead to installing, configuring, and using a system that they just might not need. Sur
      • Re:Makes no sense... (Score:4, Interesting)

        by Antique Geekmeister ( 740220 ) on Friday December 30, 2005 @01:48AM (#14362930)
        Suns also lack, as a default configuration, literally dozens of extremely useful tools that power users expect. That starts with a decent compiler built into the OS (although I understand they're finally including gcc) and standard X and development libraries (Sun's environment has always been way off the beaten path and needed hundreds of man-hours to beat into shape with each new release, at least up through Solaris 2.8). Suns tend to be very short on RAM for the same amount of money, have a version of "tar" that inconsiderately makes it incredibly painful to do a "ssh root@hostname tar cf - /etc/passwd | tar xf - -C /tmp" without having it overwrite your local /etc/passwd (because it doesn't strip the leading slashes), have one of the strangest versions of LDAP I've ever seen, and their pkg package management system needs to be....

        Well, I'm going to avoid using that kind of language on a bulletin board, but it's not good. But the amount of work necessary to weld together a cluster of 100 Suns into a large and flexible working unit with whatever software the users need to do their jobs is easily enough to pay for another 50 servers. The money saved by buying something other than Suns will buy the backup and cooling systems needed for the whole setup.
    • by ZG-Rules ( 661531 ) on Friday December 30, 2005 @03:09AM (#14363224) Homepage

      I have a somewhat balanced view of this, as I work for a University [imperial.ac.uk] and have a variety of different interactions with Solaris and Linux. What follows are a few notes on Linux vs. Solaris and access rights across different categories of system.

      Firstly, our Production MIS Systems:

      Almost without exception, these run on Solaris on Sparc. Why is this? Simply because it is very very very reliable and the support contract is excellent. Ours runs on SunFire and midrange stuff like 1280s and 890s for the backend DB with a variety of frontends from Netras to 490s.

      Show me a Linux machine (apart from an HP Superdome possibly, but that's Itanic) that you can partition into multiple physical systems, has 6 power supplies, has the possibility of over 100 CPU cores in a physical partition, can have hardware swapped in and out live, and so on, and I bet you it will have a price tag like a SunFire. I am aware that a cluster of Linux machines could do the same job for less money, but for this stuff it's much more effective to have one very large, highly resilient and available server.

      We do use sudo; the production DBAs can sudo as environment users, and the admins can sudo to root (there is more than one admin because, unlike some poster I just read, I think a single key to the kingdom is a very bad idea - but then our team has already had one auto-accident death this year). This is purely from a tracking point of view - we could have passwords for the root and application users and let people su, but it's harder to manage. They probably could use some shenanigans to get themselves a root shell if they really tried, but we'd see them, because we have good (live) log monitoring, and we trust them not to jeopardise their own jobs.

      Secondly, our Development MIS Systems:

      Some of these run on Linux (RedHat Enterprise on HP hardware if you must know), Some of them run on Solaris. Typically the ones that are developing for things that talk to existing Solaris stuff stay Solaris, new stuff goes to Linux.

      The reasons for this are manifold - but they mainly hinge on the fact that dev systems need not be highly resilient, so the bang:buck ratio for Linux on HP is better than Solaris on Sun.

      Sudo gets more relaxed here - our full-time (as opposed to contract) DBAs are allowed to sudo to root, and we watch what they are doing a little less carefully. The rationale we have as sysadmins is that we don't care what they are doing on our dev system (we can rebuild the OS in minutes; if they've fscked Oracle, that's their problem). Provided they can rebuild the code in the Test and DR environments consistently and documentably as part of the project deliverables, we will release it into Production.

      Thirdly, Academic Development Systems:

      Note that I am distinguishing between MIS and Academic systems... screwups in the former cost us money, the latter may cost us Grant money in the long run but at least Payroll still goes through. Think of Academic Development as the systems people write real code on (as opposed to tinkering with Databases or SQL).

      These systems mostly (if they need *nix at all) run on Linux. Flavor depends on the moment and the supplier, but there are only two research groups out of all the departments in College still using Suns and Solaris, and that's only because their big-money code won't yet run on Linux.

      Our access rights policy is something along the lines of: sysadmins and grant owner get to do what they like. Unfortunately I as a sysadmin don't get the right to tell Professor X that he can't have full access to his £Ymillion system, so he gets the same kind of access we do with appropriate disclaimers about how we'll charge

    • There are two issues you're arguing here, one having little to do with the article. First let's look into the administrative issue.

      I agree that there's little reason to give any user root access. Note I didn't say "in a secure environment"; a cluster may very well not even be on the network. In general, a user should not have root access even in such an environment unless they are the person responsible if the system goes down. Limited privileges may be given through sudo, but any program with the ability to
  • by originalhack ( 142366 ) on Friday December 30, 2005 @01:08AM (#14362786)
    I have always had this ability (in several of the largest companies in the US), but I have always started the conversation with an acknowledgment that the sysadmins are ultimately responsible for the network. Then we focus on what functions I may need to perform, how I avoid causing a problem beyond my own work, and how we can establish a regimen where I report what I have touched and where they are able to monitor to ensure that I have done only that.

    This has never been a problem. Then again, they already know, prior to that, that they would be in bigger trouble if I were not trustworthy. I offer them more controls than they would have insisted on, and this gets me more latitude than they normally would have offered.

  • by Anonymous Coward on Friday December 30, 2005 @01:09AM (#14362788)
    I also work for a defense contractor and adhere to strict security rules. We have a fairly simple means of controlling our developers who need root access. We buy their systems using their bosses' overhead or project charge numbers, place them on a monitored, isolated subnet and, when they hose the system, all time expenditures are billed to one of the previously mentioned charge numbers. At my billing rate, it doesn't take too many incidents for them to feel serious heat or be canned. Either way, they do not touch production machines or cause problems that cannot be quickly isolated by disconnecting the subnet.
    It works for us.
  • by NitsujTPU ( 19263 ) on Friday December 30, 2005 @01:09AM (#14362792)
    If you're at a defense contractor, they're probably following the DoD guidelines of least privilege, logging, stuff of that nature.

    What you're asking is, essentially, to establish yourself as a certain class of user under whatever scheme you're using, or for some kind of "well, Slashdot agrees" circumvention of guidelines.

    It reminds me of a time when I was working on such a machine, and I sat in a conference room where people were trying to bargain with me as if I represented the STIG. The simple fact of the matter is, the STIG is a set of guidelines, and nobody's opinion will change the contents of the document.

    Stop trying to negotiate it.
  • Re: Got Root? (Score:2, Interesting)

    by hedrick ( 701605 )
    I work at a University, so we may not be close enough to your environment to matter. But I would distinguish between production and research systems. I wouldn't give anyone but professional sysadmins anything close to root on systems like mail servers or multiuser systems. But unless you're working with sensitive data, on research systems things are more flexible. A lot depends upon the sophistication of the people involved as well as the actual environment. I'd be more inclined to give out root on a machine with
  • If you use a filesystem and kernel that supports ACLs, users can do everything they should need in almost all circumstances.

    There are only a couple of situations I've run into where I've needed more, such as applications that need to bind to a privileged port, or a cron job that needed more than 1024 file descriptors and so had to be made setuid root. (Raising a user's file-descriptor limit via /etc/security/limits.conf doesn't apply to that user's cron jobs.)
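    (A minimal POSIX-ACL sketch, assuming a filesystem mounted with the acl option; the names are invented:)

        setfacl -m u:alice:rwx,g:modelers:rx /data/shared
        setfacl -d -m g:modelers:rwx /data/shared   # default ACL: new files inherit the entry
        getfacl /data/shared                        # inspect the result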

  • And only give them the tools they really need to develop, test and debug their programs.
  • What's wrong with keeping Sun and running Linux on that hardware?

    Another question: What's wrong with Solaris?
    • Re:No more Sun? (Score:5, Interesting)

      by Karma Farmer ( 595141 ) on Friday December 30, 2005 @03:49AM (#14363320)
      Another question: What's wrong with Solaris?

      Read a few "Ask Slashdot" questions, and you'll understand. Ask Slashdot can almost always be summed up:
      "I've been running linux on the computer in my bedroom for a long time, so obviously I'm incredibly 144t. I just got hired as a summer intern doing tape backups at the local brewery, and I built a beowulf cluster out of old 486sx's I found in the dumpster. How do I convince my boss that we should restructure the company's entire infrastructure around open source software? Also, does anyone know an open source program for running a brewery?"
      I'm old and cynical, and no longer surprised by clueless fanboys with an exaggerated opinion of their own experience, intelligence, and skill.

      I will never understand, however, how any of them manage to find jobs.
  • by alee ( 64786 ) on Friday December 30, 2005 @01:18AM (#14362826)
    I've been both a developer and an administrator.

    As a general policy, if a developer needs root access, they need to prove to me as an administrator that they actually do need root access. I'm not going to give root access (sudo, su -, or access to privileged accounts), even on a development box, to someone that needs occasional chmod privileges. More often than not, the people who are begging for root access are those that have been so spoiled by coding on their own Linux boxes that they lose sight of all the best practices that contribute to good code. They want foolish things like directories with 777 privileges so they can drop temp files when there are 30 better ways to do it. root is not a cure all... just because you're used to it on your own machine doesn't mean it's appropriate for coding in a multi-user environment developing customer-facing applications.

    In the end, there are very few specialized applications that actually require root access to work. I will concede that sometimes root access is necessary but it needs to be treated on a case-by-case basis. I'm of the belief that a properly written application should be written such that it can be run with the least amount of privileges, and can be installed anywhere... not just /usr. root access as we know it is a luxury that should be reserved for true administrative duties, unless absolutely positively necessary.
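
    To pick on the temp-file example: a script can create its own private scratch file with mktemp instead of relying on a world-writable drop directory (a sketch):

    $ TMPFILE=`mktemp /tmp/myapp.XXXXXX`   # private, unpredictably-named file owned by the caller
    $ trap 'rm -f "$TMPFILE"' EXIT         # clean up when the script exits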
  • by Yonder Way ( 603108 ) on Friday December 30, 2005 @01:18AM (#14362828)
    Another sysadmin here who is going to tell you that I don't give out root or sudo access to users. Most users who think they know enough, or even DO know enough, really only know enough to make big problems. They invariably never check with me before making a change, or tell me that they made a change, or even admit to having made one when they inevitably screw something up.

    I make them come to me for everything. But not directly. That's what the ticketing system is for. The ticketing system justifies my existence, keeps any requests from slipping through the cracks, and helps to keep track of ad-hoc changes made to any given system.

    Many times end users think they need root for something when they don't. For example, there might be some niche tool that they need installed on a system. Or do they? If the one user is the only one that is going to use it, I advise him to do something like "./configure --prefix=~" to build apps to install in his home directory. You don't need root to install apps anymore. Besides, if you want an app installed for everyone to have access to, sysadmin should be doing that anyway.
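
    The recipe is short (a sketch; the prefix is whatever the user prefers):

    $ ./configure --prefix=$HOME/local && make && make install
    $ export PATH=$HOME/local/bin:$PATH    # pick up the user-local binaries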

    It might be a pain in the ass to make you go to the sysadmin for everything, but in the long run it will keep things running smoothly and perhaps force you to be a little more disciplined in your work.
  • by ChaosDiscord ( 4913 ) * on Friday December 30, 2005 @01:28AM (#14362864) Homepage Journal

    Why in the world do your users need root access? On Windows it makes sense; all too many poorly written programs refuse to install or run unless they can run roughshod over the entire system. But this is Unix. It's a rare piece of software that can't be installed and run as a user. Most can even be installed as a user but made available to others. Yeah, it's a bit more frustrating that you can't just install the latest RPM, but if you're skilled enough to install an RPM, you can probably manage "./configure --prefix=~/mybin && make && make install". Changing file ownership? Again, why do you want that? If you're sharing files with other users, get a group set up and chgrp the files appropriately. If you have lots of complex sharing needs, set up one of the Access Control List options.

    Ultimately, users shouldn't need root. Professionally, I develop clustering software for Linux and other Unix systems. I regularly install new applications I'd like to use in my home directory. Our administrators set us up with a good ACL system (courtesy of AFS). I do the vast majority of my work as my own account. The only time I need root is to test root-specific aspects of our software (if launched as root, it runs jobs as the user who submitted them). I can't remember the last time I switched to root; probably a month or so ago.

    Unless you've got a damn good reason, your administrators are right to withhold root access from you. Your desires aren't good enough.

  • Most just say no. (Score:3, Insightful)

    by Liam Slider ( 908600 ) on Friday December 30, 2005 @01:28AM (#14362865)
    I say...Hell no. Not on the main system. That's just asking for way too many security problems. These kinds of things are done for a damn good reason. Now...their own desktops, laptops, some isolated and limited test computer, whatever...that's much less of a problem. But letting users have root access, even limited, to the main system is just asking for trouble.
  • I'm a developer... (Score:5, Insightful)

    by stevens ( 84346 ) on Friday December 30, 2005 @01:35AM (#14362892) Homepage
    ...and I do NOT WANT ROOT.

    I have root on my workstation (cold dead hands and all), but not on a single server--not even a dev server.

    sudo on things like mv and chmod gets you a root shell on the box, fer chrissakes; why not just put the root password on a sticky note on the rack?

    When something goes wrong, I don't want to hear, "Maybe the dev did it." I didn't do it--no access. When we go to prod on something, I don't want to hear the admins complaining they don't know how to promote the app because some ass developer did it manually in dev instead of creating a proper install.

    If you need root to chmod something, then your admin hasn't set up the box properly. Either he doesn't know what he's doing, or you haven't told him properly what sort of environment you need. Either get a better admin, or write up a clear description of all the functionality you require. Either way, you don't need root.

    Of course, the smaller the business, the more likely an admin is a dev and vice versa. In that case, all bets are off.
    • by ivan256 ( 17499 ) * on Friday December 30, 2005 @02:14AM (#14363017)
      Man, I wish I hadn't posted in this thread, so I could moderate your comment.

      You are the only poster so far who seems to have any understanding... Or at least the only one that doesn't let their understanding get clouded by their childish desire to "have root" even if they don't really need it.

      With root access comes responsibility... and I don't mean that the way they use it in a Spider-Man comic book. It's not that you need to exercise caution, ethics, and good judgement lest you become evil: if you have root and something goes wrong, you are responsible. Even if you weren't the one who broke it. Root is a blame magnet. Period. End of story. Unless they're paying you the sysadmin's salary too, you should not want root access on any shared system.

      Also, people who can't grasp the concept that sudo access to chmod is exactly the same thing as complete root access should have their *nix geek license revoked.

      Unless you need to set the clock, signal a process you don't own, or listen on a well-known port below 1024 (if it's not a well-known port, you don't need a low number. I don't care how much you insist. You don't have a good reason. I'm not listening anymore...), you do not need to be root. Yes, you can do every single other thing you need to do as a user without root. It's not even inconvenient. One must wonder how these people would have survived before PCs...
  • by Fallen Kell ( 165468 ) on Friday December 30, 2005 @01:39AM (#14362907)
    This is what they were designed to do in the first place. Group level permissions allow people who work in the same "work group" to also have a permission level to all their "group" product files.

    Basically, you need your entire filesystem layout set up properly, with "project" areas where each "project" has its own directory tree, with setgid for the project's group on the main directory and all subdirectories. Each major "project" would have a group set up for it. Then all file permissions would be covered by anyone in the group, or possibly by a "project lead" who keeps track of all the groups and knows what permissions should be set on different areas (i.e. for data sharing between projects, etc.).

    Once the infrastructure is in place, the worst thing that happens is that a person is not a member of the "group" and just needs a helpdesk call/form to gain group access ("ok'ed" by a lead member of the "group"); basically something that can happen in 5-10 minutes if implemented properly. With the setgid, all new files created in the areas will always be owned by the proper group, which has full access to chmod/chown those files (assuming someone doesn't do "chmod 700"), but even then, cron jobs can be set up to run every hour or so that do a "chmod -R 770" on any/all project areas (with the cron job removed if you need to lock an area down to no access).

    This is how it should be done; no sudo needed. All the work is in the preparation, with true processes needing to be set up and implemented: a form for creating a "new group" (which includes group ownership, plus a box to transfer "ownership" to another person), another form for requesting new data areas (stating which group owns the area), and finally a form for adding/removing members to/from the group, signed off by the current group owner. Optionally, another form for "locking" a data area to keep all access out. Then it simply goes to the IT staff, who read down the "process" document, verify the data on the form, and either create a new directory (setting the setgid bit and the proper group ownership), add/remove a user to/from a group, create a new group, or move a user to the first name in the group file (for easy tracking of the group owner, or update a separate document with this information).
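
    A minimal sketch of creating one such project area (all names are illustrative; the admin runs these as root):

    # groupadd modelers
    # mkdir -p /projects/modelers
    # chgrp modelers /projects/modelers
    # chmod 2770 /projects/modelers    # the leading 2 is the setgid bit: new files inherit the group
    # usermod -aG modelers alice       # add a user once the group owner signs off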

    • by Khopesh ( 112447 ) on Friday December 30, 2005 @10:54AM (#14364515) Homepage Journal
      I agree with the parent post; groups eliminate most of the need for root. A cron script to change permissions should do 'chmod g+w' ('chmod g+sw' for directories) instead of 'chmod 770', which makes blind assumptions.

      To address the article's question, groups solve more than just file permissions; consider an environment in which users in the admin group have the ability to do things (via sudo) as the admin user, who owns /usr/local and all of its children. This lets privileged users install things, but prevents them from accidentally messing with them (the admin group should not have write access to /usr/local, so sudo is required).

      A more restricted implementation would chown /usr/local/stow to the admin user and grant the admin group sudo access as the admin user plus sudo access to the stow command (or perhaps a shell script that ensures items are stowed to /usr/local).
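
      Roughly, that amounts to a sudoers fragment like this (hypothetical; the admin user/group and package name are made up):

      # in /etc/sudoers (edit with visudo): members of group admin may run stow as user admin
      %admin ALL = (admin) /usr/bin/stow

      $ sudo -u admin stow -d /usr/local/stow -t /usr/local mypackage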

      Of course, /usr/local is only one potential target. Perhaps your environment is better suited for /arch/beta or /opt. Also note that this idea is easily abstracted and applicable to other tasks.

  • by dindi ( 78034 ) on Friday December 30, 2005 @01:39AM (#14362908)
    Well, a possible solution could be to make a dedicated user-writable dir (/usr/whatever) where these advanced users could install stuff.

    Sudo for mv/cp and so on is a no-no for me, on the other hand, but you can ask for scripts to edit certain files and to change stuff in a dir.

    You might optionally enable a bin dir in the user's home dir, so a "bin" user group could install stuff there (similar to option 1).

    But again, your setup might vary by Unix (even by Linux distribution). It is doable; just pester the admins enough...

    In Unix, EVERYTHING is possible, IMHO (yes, yes, now all the Windows admins can flame me).
     
  • by kbielefe ( 606566 ) <karl.bielefeldt@gma[ ]com ['il.' in gap]> on Friday December 30, 2005 @02:21AM (#14363055)
    I've installed numerous versions of Ada and C++ compilers, window managers, gtk, qt, lyx, and more in ~/bin under Solaris without root. Almost all file permission problems are solvable by contacting the file owner directly without involving an admin, and that's usually the courteous thing to do anyway. Things that do actually require root are few and far between in my book. Even if you really need to do something regularly like restart a web server, it can be easily arranged with a one time change to a sudoers file or something. However, that is the exception rather than the rule.
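
    That one-time change is a single sudoers line (hypothetical; the user name and apachectl path are illustrative):

    # in /etc/sudoers (via visudo): karl may restart the web server, and nothing else
    karl ALL = (root) /usr/sbin/apachectl restart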
  • MIT Athena (Score:3, Informative)

    by Anonymous Coward on Friday December 30, 2005 @02:22AM (#14363061)
    At MIT, all Athena [mit.edu] workstations have the same root password, which is freely distributed. Any user can get the root password by typing in tellme root at a console. There are thousands of these machines scattered all around campus, with many different hardware configurations. This has been policy for many years now. Sometimes you get stupid kids doing stuff like this [mit.edu], but other than that, it seems to have been working...
  • Are You Kidding? (Score:3, Informative)

    by nathanh ( 1214 ) on Friday December 30, 2005 @02:43AM (#14363124) Homepage
    If you're an admin, do you allow your users basic SUDO rights like chmod, cp, mv, etc (assuming all SUDO commands are logged to a remote system)?

    No.

    If no, why don't you?

    Because they will break the system and then they will blame the IT department. Logging lets you know who did it but the blame is still entirely assigned to the IT department.

    If you allow root access to your knowledgeable users (ie developers with Linux experience), what do you do to keep them 'in line'?

    Developers are even worse because they think they know it all but 9 times out of 10 they know next to nothing about system administration. I would be more willing to give sudo rights to a normal user who follows a documented procedure than I would to a gung-ho know-it-all "hey I run Linux at home gimme full root access" developer. I've seen developers chmod 777 their files because they don't understand permissions. Do you think I'm going to trust them with root access to mv or cp? No chance.

    I've seen developers ask for sudo access to run patchadd, to run pkgadd, to run pkgrm, to run vi (how I laughed at that one). They are rejected every single time. If they have a process that needs to run regularly as root then it can go into a script in /usr/local, permissions will be locked down so only root can modify the script, and a limited number of users will be granted access to run that script. That's as good as it will get without divine intervention (aka the CEO).

    If the application developers are making changes that require frequent superuser access - especially to commands like chmod and mv and cp - then perhaps they need to rethink what they're doing. It sounds to me like they're doing something wrong.

  • NSA SELinux (Score:3, Insightful)

    by Dark Coder ( 66759 ) on Friday December 30, 2005 @02:51AM (#14363154)
    Install Gentoo with NSA SELinux and whip up a policy to cover these pesky guys.

    After a month or two of feedback, the system should stabilize to the point of giving the researchers what they want, in an extremely restrictive manner.

    The time invested results in a secured system that behaves exactly as your policy dictates AND still lets you give out 'root' liberally.
  • by MadDog Bob-2 ( 139526 ) on Friday December 30, 2005 @03:03AM (#14363198)

    Just logging the sudo commands isn't going to give you nearly the auditing ability I suspect you're looking for, and giving them any kind of root-level access to the filesystem is game over.

    $ ln -s /bin/sh ~/my/seemingly/innocuous/path     # symlink to a root-owned shell
    $ sudo chmod u+s ~/my/seemingly/innocuous/path    # chmod follows the link: /bin/sh is now setuid root

    Figure that any chmod u+s is suspicious and will get caught?

    $ ln -s /etc/sudoers ~/my/seemingly/innocuous/path
    $ sudo chown `whoami` ~/my/seemingly/innocuous/path    # chown follows the link: I now own sudoers
    $ vi /etc/sudoers                                      # grant myself whatever I like
    $ ln -s /etc/sudoers ~/my/other/seemingly/unrelated/path
    $ sudo chown root ~/my/other/seemingly/unrelated/path  # hand it back so nothing looks amiss

    Figure you'd notice their subsequent use of whatever new sudo permissions they just gave themselves?

    $ ln -s /etc/passwd ~/my/seemingly/innocuous/path
    $ sudo chown `whoami` ~/my/seemingly/innocuous/path    # the same trick against the password file
    $ vi /etc/passwd                                       # change my UID to 0
    $ ln -s /etc/passwd ~/my/other/seemingly/unrelated/path
    $ sudo chown root ~/my/other/seemingly/unrelated/path

    And, look at that, suddenly their UID is 0.

    The list goes on...

  • by brunes69 ( 86786 ) <`gro.daetsriek' `ta' `todhsals'> on Friday December 30, 2005 @08:10AM (#14363880)
    I have worked both as an admin and a developer, and I can tell you the real answer to this problem, and it does not involve giving out root.

    The whole reason people get pissed at admins and want to do it themselves is that they always feel the time crunch. They have project X due on Friday, they submit a request to have something done to the server Wednesday morning, and it still is not done by Wednesday afternoon. This is not always the admin's fault; admins have priorities too, and sometimes it is hard to juggle all the requests without knowing what the real priorities are in terms of the company as a whole.

    The solution is to implement a trouble ticket system for all admin requests, and give managers access to it as well, allowing them (and only them) to adjust priorities of requests. That way, managers can set the priorities of the requests to the admin as they see fit. As well, because the managers all know that the developer *did* make the request, and there is a record of it, the developer feels less worried about delays coming from the admin department (passes the buck), and less pissed off at the admins.

    The beauty of it is it also takes some responsibilities off the admin, and gives it to the managers, where it should be anyway.

  • by vinsci ( 537958 ) on Friday December 30, 2005 @08:53AM (#14363979) Journal
    ...at ftp.funet.fi to allow tens of people to move files around, without su access:

    ftp://ftp.funet.fi/pub/local/src/omi-file.tar.gz [funet.fi]

    It allows users to grab ownership[1] of files in certain per-user configured paths, whenever they need to (sample config file included). This allowed us to manage the incoming ftp directories without going insane.

    It was written some 15 years ago by Matti Aarnio.

    [1] Ownership is "omistus" in Finnish, hence the name of the tool

  • You're wrong. (Score:3, Interesting)

    by FellowConspirator ( 882908 ) on Friday December 30, 2005 @09:55AM (#14364198)
    There should be absolutely no reason for a user to need sudo to change permissions, copy and move files, etc. That's something that should be done explicitly by the systems administrator.

    We're currently setting up a Beowulf cluster, and my job is to manage the queues, set up the resource management, and tune the scheduler to optimize the performance.

    I've never seen a situation where anyone has needed to change ownership of a file except where someone departs. De rigueur, you put in a request to the admin to chmod all files under that user ID to g+r and directories to g+rx. That's it. Anyone in the person's department can then copy out whatever they need.
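
    In other words, the admin runs something like this once (a sketch; the path is illustrative):

    # find /home/departed -type f -exec chmod g+r {} \;
    # find /home/departed -type d -exec chmod g+rx {} \;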

    Install software? We simply provide the software with instructions, and a log of the installation on another machine -- or a binary RPM -- to the admin with a request to install it. It's not like we install applications every day. This is doubly important in a Beowulf cluster, since you need to sync the software amongst the compute nodes.

    No, if you find yourself wanting root access for such things, then you are doing something seriously wrong.
  • by Y2 ( 733949 ) on Friday December 30, 2005 @10:19AM (#14364330)
    In the old days when VMS mattered (to me), there were 35 or so different privileges a user could have. Most of them were functionally equivalent to "ALL," in that a user with such a privilege could perform a series of actions that would lead to actually having all privileges.

    Similarly, giving a Unix user the ability to execute mv or chmod (or quite a variety of other single commands) as root is functionally equivalent to giving that user full root access.

    Even if all the authorized users can be trusted not to abuse the power, can anyone be sure they will protect their password (or other access token) so well that no intruder will ever use their account? I think not.

  • by Syberghost ( 10557 ) <syberghost@@@syberghost...com> on Friday December 30, 2005 @11:03AM (#14364574)
    If you can chmod or cp as root, you can do anything else as root seconds later.

    We only give somebody root ability to do something if it's essential to their job, and a team reviews any new application of that to ensure it doesn't facilitate unwanted privilege escalation.

    Their basic access to a system is reviewed quarterly by their manager, and if he doesn't take action to change the default answer to "yes, they still need this access", the account gets deleted.

    Show me a publicly-traded company that's not acting like that, and I'll show you the next Enron.
  • by DoofusOfDeath ( 636671 ) on Friday December 30, 2005 @12:11PM (#14365022)
    I wonder, especially as Debian and Ubuntu become more popular: How much would users' desire for sudo access go away if apt / dpkg had the ability to install software for just the current user (and thus didn't require root)?

    I know that in a home environment (such as if I'm setting up my parents' computer), I'd be a lot more comfortable having them use a version of Synaptic that installed software just for the current user. That would basically eliminate the need for them to have root access at all. Maybe a similar thing holds true even for most developers.

    Granted people can usually install software for themselves by compiling the source code, but to require that is to basically ignore all of the benefits that apt / Synaptic offer.
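
    There is a crude middle ground: unpack the .deb into your home directory yourself (a sketch; the package name is made up, and you give up dependency handling entirely):

    $ dpkg-deb -x somepackage_1.0_i386.deb ~/local   # extract without registering with dpkg
    $ export PATH=$HOME/local/usr/bin:$PATH          # binaries land under ~/local/usr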

    (If you're a Gentoo user, I think the same point can be made by doing a find/replace on the terms apt/dpkg/synaptic.)
