Linux in a Business - Got Root? 464
greenBeard asks: "I work for a government contractor, and have recently convinced them to purchase a Beowulf cluster and start moving their numeric modelers from Sun to Linux. Like most historically UNIX shops, they don't allow users even low-level SUDO access to do silly things like change file permissions or ownerships in a tracked environment. I am an ex-*NIX admin myself, so I understand their perspective and wish to keep control over the environment, but as a user, I'm frustrated by having to frequently call the help desk just to get a file ownership changed or a specific package installed. If you're an admin, do you allow your users basic SUDO rights like chmod, cp, mv, etc. (assuming all SUDO commands are logged to a remote system)? If not, why not? If you allow root access to your knowledgeable users (i.e., developers with Linux experience), what do you do to keep them 'in line'?"
SUDO Commands (Score:2, Offtopic)
Re:SUDO Commands (Score:5, Informative)
It would take a hard-core serious business case to convince me to grant someone root access, even sudo-limited root access to a production system. The fact that I might have a "log" of whatever broken thing they did to take a business critical machine down is fairly irrelevant to me. My job is to make sure that doesn't happen in the first place.
Re:SUDO Commands (Score:5, Insightful)
As both the primary web developer and web admin, I probably qualify as a "special case" because I'm both end user and quasi-sysadm. The sysadms take care of the O/S, standard software, primary user accounts, etc., and I handle server software configs, user support on the development system, etc. I do have sudo privs for chmod, chgrp, chown, and so forth to give users ownership of their stuff, as well as the necessary sudo privs to manage certain daemons. However, end users do not touch my production box, and have zero special privs on my development box. My "regular user" log-in has no sudo privs - I'm a Jane Schmuck just like the rest of them.
Re:SUDO Commands (Score:3, Insightful)
A decent number of people start out with *nix at home and do everything as root so they don't have to figure out permissions. When they leave that environment for an org's production environment, they think they need root. I did that too, until I blew away a bunch of mail by accident at my first sysadmin job. Doh.
The only time I've ever consider
Re:SUDO Commands (Score:3, Informative)
Never on production (Score:5, Insightful)
On a production box, the admins have access to sudo, and root itself is locked down except for scheduled maintenance/upgrades or emergencies. No paperwork, no root.
As a developer with over 15 years *nix experience, I have never had root access to a box unless I was doing an install, except for my own desktop workstation. In the case of my desktop, the only reason developers had root was so we could kill rogue services during debug sessions gone bad.
Under no circumstances do I agree with any user installing additional software on a box. If it's needed, it gets approved and installed for everyone who needs the functionality, not by rogue users.
I'm not sure I agree. (Score:3, Interesting)
Our Solaris sysadmins have a better (IMO) policy on our development servers:
(1) They will install software packages on request if a few users have a legitimate need or if it's otherwise considered to be really important for a product/project.
(2) If you, as a user, want to install something nonstandard, go ahead and figure out how to do it, but you have to support it yourself. I've done this with tools like mc, DDD, vim, an
Don't give them full control (Score:2, Insightful)
This way, users can do what they like, but they can't fsck anything up.
Failing that, I reckon a big man with a large knife could probably go a long way to keeping them in line.
Users != Root. (Score:5, Informative)
End users Do. Not. Get. Root. Even allowing SUDO access to change file permissions, copy, or even move files is just asking for trouble.
Installing software or libraries? Hell no. Not on a live system.
If they have a development-type machine at their desk, that's one thing (just don't call for support if you break the damned thing). Even then, my preference is that they have limited access.
On large, shared systems, users get as much, and as little, access to do their jobs as necessary, and absolutely no more than that. I have to keep the system up for other users, I can't have power-user #1 screwing things up by changing permissions on something they really shouldn't be touching (let's take the compiler for example...)
A little knowledge makes one dangerous, and I'd just as soon have no one other than those paid to admin the machines have access.
Re:Users != Root. (Score:2, Interesting)
Users != Root on servers, not workstations (Score:3, Interesting)
This limits the types of access people can have on the heavy server machines, but lets them install apps or do whatever they need locally, including installing a different OS or OS variation based on need or preference.
Of course, the more people 'adjust' their workstations the less support you can afford to give them. You also have to be very careful about how much s
Re:Users != Root on servers, not workstations (Score:5, Interesting)
The one I experienced firsthand was a Windows NT machine that was my desktop, which I ('naturally') had full admin access to. This was a machine on a large corporate network that was very diverse (there were Solaris, OS/2 Warp, Netware, and Windows NT servers on the network). I discovered, quite by accident actually, that if I ran the POSIX Interix (now SFU) shell on my NT workstation (something the company had bought for me, and I had installed myself), I could create any account I wanted on my local machine, and it would allow me, using that account name, to access shares on the network, doing whatever I wanted to files that username 'owned'. I am talking about the network on which a company that makes implantable medical devices kept their work. I suspect the 'defect' had something to do with NIS and 'travelling profiles' in Solaris, and the security system not being equipped to deal with other Unix-like hosts on the network that weren't secured. Incidentally, I didn't discover the problem by 'poking around where I wasn't supposed to be'; I simply noticed I was suddenly able to do things to files I normally had access to, without entering my UNIX password as required in the past. Something clicked in my head, so I created a local account on the NT box that matched an important person's UID on the Unix system... yep, I had all his permissions.
Delete test account. Never touch again. Too scared to mention it to anybody. It's been enough years now that I can even mention it in public. I hope they've secured things a bit better now, because these days there are unsecured Unixy systems all over the place.
Re:Users != Root on servers, not workstations (Score:5, Funny)
Next thing you know you're getting arrested by a nice FBI agent named Bob, and then getting cornholed for days in the local jail waiting for a judge to set bail. It's not worth it.
Re:Users != Root on servers, not workstations (Score:3, Interesting)
Years ago I ran a network to transfer messages, much like Email. Then, it was typical to relay some 10,000 messages per week. I worked in a regional office. We're talking DOS 3.2 and Banyan VINES days, 286 systems and token ring networks, when 19200 bps was considered "fast", and the best modems available were 9600 bps.
Well, being curious and all, I read the manuals, starting with MS-DOS. Got a working knowledge, used batch files to tweak certain, commonly run commands, and
Re:Users != Root on servers, not workstations (Score:5, Informative)
Nah, that's just a standard limitation of NFS. There is no security in NFS; the unofficial expansion of the acronym is No Fucking Security. The server trusts the client is providing a valid userid. You spoofed the userid and NFS has no way to detect that because the server assumes the client always tells the truth.
Some environments implement netgroups to limit the opportunity for attack. The server checks incoming client connections against this list; clients on the list are assumed to be properly secured so nobody using the machine can spoof a userid. This is not very effective either because spoofing a client IP address is almost as trivial as spoofing a userid.
What you found was simply standard practice for NFS, as frightening as that might be.
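For what it's worth, NFS deployments can narrow this hole. A sketch of a less-trusting /etc/exports on a Linux NFS server (hostnames and paths are illustrative):

```
# Only named clients may mount; client root is mapped to an
# unprivileged uid (root_squash); and sec=krb5 makes the server
# demand Kerberos-authenticated users instead of trusting the
# client-supplied uid (needs NFSv4 and a working KDC).
/export/home   client1.example.com(rw,root_squash,sec=krb5)
/export/tools  *.example.com(ro,root_squash)
```

With the default AUTH_SYS security, the server still trusts whatever uid the client sends; only the Kerberos flavors (sec=krb5, krb5i, krb5p) actually close the spoofing hole described above.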
Re:Users != Root. (Score:5, Insightful)
I used to deal with this on our production cluster all of the time. I implemented a pretty rigid policy early on, which was no root for users on anything that was (1) in production, or (2) had access to the various network servers. This policy came about after a few 'experienced' users demonstrated their skills. Accidentally changing access privs and ownership of the
The problem always seems to be that people who've admined their own, solitary, system, think that experience automatically translates into full privs on a much larger, integrated, environment. This is where I miss VMS, and its fine-grained privs. I'm not sure I'd hand those out either, but at least it's better than all or nothing. The next best solution is giving developers access to a box that you can nuke and reimage back to a standard state, and letting them hack with that.
Re:Users != Root. (Score:5, Insightful)
People who take a selfish view instead of a system view shouldn't be allowed to muck about with multi-user environments without strict guidelines that consider the system view. Unfortunately, this can often come across as a power trip and office politics, even when you can bring up valid reasons. Ultimately, if you aren't going to be in the office after hours to fix a production system, then you are not responsible enough to have the root password for it: a microcosm of "with power comes responsibility."
Re:Users != Root. (Score:3, Insightful)
This is a good policy, but I read a book that suggested giving the root password to the boss in a sealed envelope. That way, if you die, the system is not locked forever, but the boss can't mess with it; and if somebody does, he can prove he did not.
I have an idea (Score:3, Interesting)
Amen (Score:5, Insightful)
The worst are the "I'm a sysadmin" types. For every one I meet that actually has the experience to make them a competent sysadmin, there are 50 that know just enough to be dangerous, but think they know it all.
For example, some time ago I decided to roll Firefox out to the educational labs and make it the default browser. All other considerations aside, its minority status in the browser market makes it far less of a target. Well, a couple of days later I get some guy in who's bitching about Firefox being installed in "his domain" and he wants it removed. Upon further questioning, it becomes clear he believes that programs are installed in user accounts. I cannot seem to convince him that the program is a local installation on every system and no, I'm not removing it.
Now for Windows systems, the damage someone can do is somewhat limited since all software installs are on the local system. However the UNIX systems all run off a central server. Like hell we are giving anyone anything but read access to that. All the time people want things installed or modified for their particular project. Quite often, they have no idea what they are asking, and what they want done would completely break the app, or worse.
I agree that access should be as limited as still allows you to get the job done. Now, in some cases that needs to be total access. Fine; you get a system that's separate, and you assume responsibility for it. If you are doing something such that you need system access, you'd better have the knowledge to fix what you break. In other cases, come to us; that's what we are paid for.
We even operate that way internal to our group. I don't just go and change shit in DNS. Not because I don't know how to, not because I don't have the root password, but because it's not my area. Better I should ask the guy who is supposed to do it. That way, there's less chance something gets broken.
I think the problem is that some users have a really inflated sense of self-importance and entitlement. They think that their project is real, real important, more important than everyone else's. Thus they don't have the time to wait for the admins to do things; they want to be able to do them themselves. If it messes something up, well, then the admins can fix it. Of course, people like that are also the most likely to do something that will break things for others.
The more shared the resource, the more you have to be strict with the access. Even on user desktops, limited access needs to be the rule. Support can't spend hours and hours fixing problems caused by users that don't know what they are doing. It's just not cost effective.
If you truly have the need and knowledge to run your own system, then fine, take it up with management. However part of that understanding has to be you can't bother the support team if you hose things. If you aren't good enough to admin the thing yourself, you probably ought not have admin permissions.
Re:Users != Root. (Score:5, Insightful)
In our case, there's really no way to allow root access to local machines - everything is on the network via NFS. Software installations are tightly controlled and it's virtually impossible for a hardware casualty to cause any significant loss of data.
This is in an organization with roughly 5000 engineers using the *NIX network and an IT budget in the tens of millions of dollars. Believe me, the *NIX side of the house works a hell of a lot better than the Windows side.
Oh, and on my SunBlade at home, I almost cringe every time I run a command as root...
-h-
Re:Users != Root. (Score:5, Interesting)
On the rare occasions I am not overruled by marketing, I tend to prefer the following development setup.
The developer's personal machine: full access, do what you wish. If you cannot trust your people with that, then do not employ them. No developer worth his salt will ruin a desktop machine. Someone who, either on purpose or by accident, compromises his machine should be fired. Almost every developer has his own preferences and methods of working; I see no reason to restrict them on their own machine. Want to test a replacement for Apache? Go right ahead.
The test server is the step where the locally developed code, setups, etc. are tested. Access to this machine is limited by protocol. Basically, any change has to be documented and justified. So a change would have to be accompanied by who did it, why they did it, on whose authority, and exactly what was done. The test server is NOT a development environment; it is a proving ground for new developments. This is sometimes very hard to explain. So in caps: "YOU DO NOT DEVELOP ON THE TEST SERVER, YOU TEST." If the test server works as expected with the new change, then it will be ported to the live server. Friday afternoon is a bad time for this, but somehow it always seems to be the time desired by the guys who sign the paychecks.
But in principle I see no reason to deny any developer root access to any of these machines. What needs to be in place is a proper protocol to make sure that people know how to deal with changes (no point in documenting all the changes if people don't read them) and that you have good people who do not mess with machines.
I have been the victim of bad restrictions too often to have any faith in the people who create them. I had to personally subvert a production webserver to handle IM traffic because the office network blocked it and our sales support staff needed it. (Case of an outside department being absorbed into the larger organisation.) It took over 3 months for them to finally get official permission to upgrade the firewall rules. I myself was denied SSH access to the outside webservers for a full week, until I told them I would simply work from home permanently until it was fixed.
If you have good people they can be trusted with root access. If you do not have such people then they cannot be trusted with being let into the building. My first IT job had a guy who installed a keylogger. He didn't have root access, he simply had a limited account on a windows machine and downloaded some exploit kit.
But in the same job I was outsourced to a very large Dutch company and had root access on their AIX production machines. I, a then-new newbie, had to do my development on the production machines, since my desktop was too restricted to install the software needed and in any case couldn't handle the file sizes involved (good luck opening a 2 GB database dump in either Word or Notepad on NT 4.0). One morning I was in early (so I could leave early and miss the endless meetings) and was asked by the director of the company to start a database. I was the only person who could do that; if I had not started that database, the entire national company, with a hundred offices, could not have started the working day. (It was a temp agency.)
A stable production environment does not come from limiting your employees; it comes from not letting your Unix admin quit in disgust, and from having proper training in place so your critical servers do not depend on a hired developer who is still reading his Unix for Dummies manual.
If the above sounds fanciful, then be glad I did not tell the complete story. It was the most insane environment I have ever been in. It was so bad that when the company was bought by a rival and they learned the true state of the accounts, it even made the one national newspaper. While the coverage focused mostly on financial issues, it also reported that the IT department was found to be a total mess. Not bad for your first assignment, eh?
Re:Users != Root. (Score:3, Interesting)
You're not a developer if you can't even maintain your own machine.
To be a decent developer, you have to understand not just how to write code (or, in too many cases, "move pretty icons on a screen") - you have to understand the environment it runs in, including file permissions.
Didn't your mother teach you to clean up after yourself?
So-called "god-like Software Devel
Re:Users != Root. (Score:5, Insightful)
Re:Users != Root. (Score:3, Insightful)
But they're right. Developers, including myself, have a tendency to spend time learning admin skills,
Re:Users != Root. (Score:3, Interesting)
Re:Users != Root. (Score:3, Insightful)
Re:Users != Root. (Score:3, Funny)
You've never had to do a work-around for a buggy environment (*cough* IE *cough* Windows *cough* the first 3.x version of gcc *hork*)?
It helps to have an understanding of what's actually going on under the hood. It can give you a clue on how to test for edge cases so you can do a work-around that actually works, rather than just seeming to work.
As for the coffee in the water cooler - definitely send you for a mop and bucket. Better you "waste" 5 minutes of your time, than 5 minutes of yours plus 5 minu
Re:Users != Root. (Score:3, Funny)
No problem ...
<clickety-click><clickety-click>
There - tons of free space.
Oh, what's that - your home directory is empty? Of course it is - you SAID you wanted more free space. Now you've got more free space than anyone. Just take your backup files and ...
Oh, you don't HAVE backups? You left them in your home directory? Gee, its a good thing we did this exercise today, and not a year from now, when a hard disk failure would have cost you another year's work. Here, let me fix you up ...
Re:Users != Root. (Score:3, Insightful)
The real reason that "IT Lackey" has a job is because someone has to know how to admin the system, and the developers certainly don't. They keep the system running, and that means not letting people, like developers
sudo chmod == pwnt (Score:5, Insightful)
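The headline's point can be made concrete. A sketch of why unrestricted sudo chmod (plus chown) is equivalent to handing out root; the sudo lines are illustration only, never run them on a shared system:

```shell
# Escalation sketch -- commented out on purpose:
#
#   cp /bin/sh /tmp/rootsh        # user-owned copy of a shell
#   sudo chown root /tmp/rootsh   # make root the owner
#   sudo chmod 4755 /tmp/rootsh   # 4755 = setuid bit + rwxr-xr-x
#   /tmp/rootsh -p                # -p keeps the effective uid: a root shell
#
# The harmless part below just shows what mode 4755 looks like on an
# ordinary user-owned file:
touch /tmp/demo_setuid
chmod 4755 /tmp/demo_setuid
stat -c '%a' /tmp/demo_setuid
rm -f /tmp/demo_setuid
```

The same trick works through any sudo-able command that can change file modes or ownership, which is why "just chmod and chown" is never a small grant.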
THis is where I miss VMS (Score:5, Insightful)
Sarbanes-Oxley? (Score:3, Informative)
Two names... (Score:5, Insightful)
Re:Two names... (Score:3, Insightful)
That's fine. You just remember that the point of a work computer isn't for that computer to be secured; it isn't there so that access logs can be made for it, and it isn't there so that you have a system to hover over and say, "I keep this system secure."
That computer system is there for people to use to do work, and your job is to move Heaven and Earth to make sure that the people using that computer system can do their jobs. Your job has no point
It's just not safe... (Score:5, Informative)
And even allowing chmod, mv, etc. via sudo can be dangerous. Someone accidentally issuing "sudo chmod 777 -R /", having meant to type "./" to cover everything below their current directory, isn't going to be good for your system's health, and it's going to be somewhat of a pain to recover from, even if you do know who screwed things up.
Re:It's just not safe... (Score:3, Insightful)
the way I do it... (Score:5, Interesting)
Well, in an ideal world, it would be that way. We would set up systems for people to use, and they could just use them without root privilege. Unfortunately, we know that isn't possible if you want your users to actually be productive and get things done.
I work for a large software company. Trust me, you'd know the name if I could tell you. We use Linux on the desktop, as well as on the servers. We also have some Microsoft servers that are either for legacy purposes (haven't been updated yet) or for testing applications against MS environments. Anyway...
All my users have laptops with Linux on them. They all have the root password to their individual laptops. Many of them also have a server at their desk for their own testing purposes. They have root on that.
However, for the "real" servers, which are accessed by people other than themselves, the users do not get the root password, ever.
I look at it this way. If you bomb your laptop or your test server, either you can fix it yourself, or you can call me and I'll walk you through fixing it, fix it myself, or just give you a clean new configuration.
If you bomb my server, I'm going to make sure you never have access to anything, ever.
Re:the way I do it... (Score:2)
Sudo on shared test machines can be a bit more liberal though. Much of the time developers need to start and stop services multiple times a day, if not an hour. It's impractical and e
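For that start/stop case, a narrowly scoped sudoers entry usually suffices; a sketch (the group name and daemon path are illustrative, and the apachectl path varies by distribution):

```
# /etc/sudoers.d/devs -- let the developer group bounce one daemon only
%developers ALL = (root) NOPASSWD: /usr/sbin/apachectl start, \
                                   /usr/sbin/apachectl stop,  \
                                   /usr/sbin/apachectl restart
```

Because the command list names full paths and arguments, the developers get exactly the restart capability and nothing that generalizes to root.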
Re:the way I do it... (Score:2)
As a developer... (Score:3, Insightful)
Re:the way I do it... (Score:3, Informative)
Re:the way I do it... (Score:5, Insightful)
OK, I'll bite.
My developers need to do things like these on their dev boxes:
* test new mod_alias rules for complex redirects we do
* create new accounts to experiment with privilege separation between the various processes that live behind our site
* Open ports in the default-deny iptables policy I have everywhere, so that other dev boxes can connect to the services they're developing.
* Change the display settings on their box when they haul it into a conference room to use with a projector
Giving them root lets them do these things easily. Conceivably I could write some crazy sudo scripts to accomplish these things, but I think it'd be a complete waste of time.
Mind you, this is for developer test boxes and their personal desktops. When I give them root (actually, just wide-open sudo), I give them an SLA: you get root, and I get to ditch any responsibility for what you do to your box, other than reimaging it if you blow it up. I'd estimate that they screw one up about once per ten sudo-enabled machine-years; i.e., if I have 100 boxes, I'll get to reimage ten of them per year. So, my choices for adminning 100 boxes are: a) spend a long time writing some narrowly scoped sudo scripts to do these tasks, explain to each person how to use them, and keep doing it every time they want to do something new; or b) less than once a month on average, log into the admin console (dev servers) or walk over to their desk (developer desktops), power cycle the box, type a one-line command at the PXE boot prompt to reimage it, and walk away in less than 60 seconds.
I'd rather give people root on boxes than have them try to cheat the system. They have physical access to their desktops. If they *really* want root to do something bad, they can get it anyway. I prefer to give it to them, and have them just ask me to reimage the box, rather than try to lie to me and pretend like they don't know why it suddenly doesn't boot, and leave me wondering why.
To be clear, this is for people's own dev boxes. I have an entirely different set of policies for my internal servers (e.g., my mail servers, DNS servers, LDAP servers: users don't get login accounts, let alone root; only IT can log in), and for the production servers that run our site (they have a complex management scheme that's beyond the scope of this post).
So, giving people root on their own boxes has been very successful for me. You say my way is the wrong way, but I don't see a "right" way to set up their environment that wouldn't waste tons of both my time and the users' time, and even still I don't see what the benefit would be. Can you elaborate on what you think the "right" way is?
Re:the way I do it... (Score:3, Interesting)
To manage something like Apache with sudo, the effort required is really small:
e.g. in sudoers conf:
>User_Alias WEBMASTERS = bob, jane, sue
>Cmnd_Alias APACHECTL = /usr/sbin/apachectl
>WEBMASTERS ALL = (root) APACHECTL
Now bob, jane, and sue can run apachectl as root, and nothing else. (The apachectl path varies by distribution.)
At the risk of saying obvious (Score:2, Interesting)
Nope. (Score:3, Insightful)
Of course, that really just goes back to the fact that you should never do anything adminnish directly on a single server, ever. Your configuration management tool should do it for you, so it will also know to do it to the next one.
not root, but ..... (Score:2, Informative)
Root in dev environments only. (Score:5, Informative)
The problem is that as soon as people outside of the core sysadmin team have access to critical system commands (cp, chown, chmod) the integrity of the box is left to chance. There's always the possibility someone is going to do something outside the policy. Sysadmins make it their job to know and understand the impact of every change to a box. Developers tend to make changes in order to get their stuff to work, regardless of the consequence (hey, each group is just trying to do their job, which is "make it work!!" -- I'm not defending either side).
My rule of thumb:
- Developers get root in their dev environments.
- Sysadmins get root in the production environments (developers shouldn't even have user-level logins to these machines.) If your company is releasing software (even for internal use) properly, the IT group will be managing the code as a product, using developers as a help desk rather than letting them manage the applications directly.
Stick to this and everyone will be happy.
Re:Root in dev environments only. (Score:2)
Re:Root in dev environments only. (Score:3, Interesting)
It's usually the other group's responsibility to keep the box and the app running. That means changes are well communicated and planned, not ad hoc.
Also, it's to make sure that the single app is the only thing running on the box.
Also, it's to make sure that there is a known level of security on the box.
Re:Root in dev environments only. (Score:2)
Let me be the first... (Score:2)
As for keeping them 'in line' once they have root access... I recommend a pointy stick.
Umm... or training. Even knowledgeable users can accidentally forget to reset permissions, and since you're a gov't contractor, you have to be more careful about data security. Right?
I couldn't do my job without root access (Score:5, Insightful)
On my Windoze machine, OTOH, I have no need for system level permissions, and I don't ask for them. I can install software, but so can all the other developers (and, I think, anyone in the company). All I use that machine for is e-mail and testing client connectivity to my servers, when I'm not using my Linux test client.
Some people need root and some don't. Don't make blanket policies unless you're prepared to make exceptions. Oh, and, for everyone's sake, if you do restrict access, please, please make sure that at least one person who can change things is available 24/7. I can guarantee you that Peterson up in Accounting is going to have a system crash that requires help when trying to get the year-end reports out at 2:30 A.M. before the big board meeting at 9:00.
Re:I couldn't do my job without root access (Score:2, Troll)
If you are a developer, you don't need root access. All the examples you've given are system administration, the sysadmin's job.
>Don't make blanket policies unless you're prepared to make exceptions.
It's not really a blanket policy if there are exceptions.
Sigh... (Score:4, Insightful)
But for your other examples... You do not need root to upgrade your compilers. You do not need to install things in
If you're a developer, you should probably have your own machine, which you would have root access to, but save for setting the time, you don't need root for any of the tasks you've described. Your license to call yourself knowledgeable is hereby revoked.
Even if you do have root, unless you expect the users of your software to have root access as well, you shouldn't be using your root access, or you'll end up wondering why your users have problems that you don't see on your development system.
Re:I couldn't do my job without root access (Score:5, Insightful)
I come from the other side of the fence. I am a developer of complex client-server applications. For my part, I don't even have login permissions on production.
I have root on my local development machine and shared development. If there are problems during testing, I get a temporary logon on stage, with an admin sitting over my shoulder watching me type. But I've never had a logon on production, and I can't imagine why I would ever need one.
I develop the app. I write deployment, testing, and rollback documentation. That way, I never need to touch the production server. This is how every real shop I've ever been in works.
Dear slashdot... (Score:5, Insightful)
How can I convince others of this?
The keys to the kingdom (Score:2)
Users are stoopid? (Score:3, Insightful)
Need more info (Score:5, Informative)
I can see an adjustment period of a couple of months, where applications you regularly use aren't available, so you ask for them to be installed. After that, assuming they don't see the general need for an application (or they don't want to have to officially support it), you could theoretically install applications under your home directory. (I was thrilled when I became a grad student and got 100MB of disk quota, so I could compile and run Blackbox as my window manager instead of the crappy twm we were generally stuck with. In fact, I made it globally executable, so my friends could use it as their window manager too. I even received a phone call once from one of the admins, asking me what this spinning "blackbox" process was running on one of the undergrad servers, since I was the only grad student or professor (and therefore in the phone directory) who also ran it.)
These days, as part of my regular job, I am one of the unofficial sysadmins of a Beowulf cluster (largely because I'm one of the only ones who have developed MPI applications that run on it). I get the odd request from other users who want me to hook them up with some library or such. I compile and install it under
Again, I have to ask what you need that requires root or sudo access, that can't be solved by the rare admin call or installing under $HOME. (I really don't mean this in an insulting way. I do want to know. The story post is a little brief.)
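For reference, the install-under-$HOME approach mentioned above is usually just a prefix change; a sketch (the package name is made up):

```shell
# Build and install a tool into a private prefix instead of asking
# for root. Autoconf-style builds accept --prefix:
#
#   tar xzf sometool-1.0.tar.gz && cd sometool-1.0
#   ./configure --prefix="$HOME/local"
#   make && make install
#
# Then put the private prefix on PATH (e.g. in ~/.profile):
mkdir -p "$HOME/local/bin"
export PATH="$HOME/local/bin:$PATH"
echo "$PATH" | grep -q "$HOME/local/bin" && echo "prefix on PATH"
```

No special privileges are needed at any step, which is exactly why this works inside a strict no-sudo policy.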
Root Access? (Score:3, Informative)
Congrats!
Like most historically UNIX shops, they don't allow users even low-level SUDO access, to do silly things like change file permissions or ownerships, in a tracked environment. I am an ex-*NIX admin myself
Good, Good!
If you're an admin, do you allow your users basic SUDO rights like chmod, cp, mv, etc (assuming all SUDO commands are logged to a remote system)?
Hell No!
If no, why don't you?
Because it is my responsibility, and my responsibility alone, to keep those machines running. If you screw up the system, then I will have to work later and possibly come in on the weekend to fix it. This is not something I am willing to risk.
In addition, as an ex-*NIX Administrator, you should know that the best way to keep a secure system is to give the least privs possible.
Makes no sense... (Score:5, Insightful)
Why would you move the modelers to Linux from Solaris? There is no real advantage....
Sure, a Beowulf cluster is a nice piece of hardware, but hardware can only partly compensate for lost programmer productivity... If their code is written using MPI or OpenMP or some other standard clustering environment, then there shouldn't be a need to move the developers, should there? Just recompile and go.
It is really much more efficient to shove faster hardware under a programmer than to force the programmer to adapt to a different programming environment. Programming for a cluster is hard enough without having to take into account the details of the operating system. Forcing them from Solaris to Linux might improve the execution part (on a side note, have you considered Sun's clustering tools?), but it *will* set them back in productivity while they move to different compilers and adapt the execution of the program to the Beowulf environment.
In my opinion you have forced your customer to make a move on questionable grounds.
Now to the matter of security. As you are aware, Solaris has the highest-level security rating. Secure Solaris is the de facto operating system at a number of government agencies. Linux cannot hold a candle to the multiple access levels of the Secure Solaris operating system. You state that you are frustrated at needing the helpdesk for file permission changes. What is your point? Are you using the fact that YOU don't like the limitations to change a customer from Solaris to Linux? Or are you complaining that the customer's environment did not deploy Secure Solaris with its multiple access layers? In Secure Solaris there is no need to muck with sudo. Each file can be managed properly from a security point of view (come to think of it, much of that can be done with Linux too).
Before I answer your question, let me state that I understand your point of view. When I joined the navy as a UNIX project manager, the admins gave me absolutely no rights whatsoever on the production systems. Their reasoning: '.. he can do things I don't understand, can't control or prevent.' There will always be a tension between the lockdown desired by the admins to keep their environment safe and secure and the users who want total freedom....
In my mind there is NO good reason to give ANY user root access in a secure environment. Period. If you have been frustrated in the past by having to interface with the helpdesk, then the helpdesk needs to be improved. At the same time, I assume, any user has full access to their own files.
You mention that you have convinced the modelers to move to a Beowulf environment, so why the issue anyway? If they run cluster code, they run as users. All they need are basic user access rights, nothing more...
Maybe I don't understand your point....
Re:Makes no sense... (Score:3, Insightful)
Beowulf can mean cheap hardware; Sun doesn't. Government doesn't always need secure. He doesn't say what part of the government... it might be the Department of the Interior doing climate modelling or something. Trusted/Secure Solaris adds a huge amount of overhead to installing, configuring, and using a system that they just might not need.
Re:Makes no sense... (Score:4, Interesting)
Well, I'm going to avoid using that kind of language on a bulletin board, but it's not good. But the amount of work necessary to weld together a cluster of 100 Suns into a large and flexible working unit with whatever software the users need to do their jobs is easily enough to pay for another 50 servers. The money saved by buying something other than Suns will buy the backup and cooling systems needed for the whole setup.
Sparc/Solaris vs. Linux + Access in the Real World (Score:4, Interesting)
I have a somewhat balanced view of this, as I work for a University [imperial.ac.uk] and have a variety of different interactions with Solaris and Linux. What follows are a few notes on Linux vs. Solaris and access rights across different categories of system.
Firstly, our Production MIS Systems:
Secondly, our Development MIS Systems
Thirdly, Academic Development Systems
Makes more sense than you might realize... (Score:3, Interesting)
I agree that there's little reason to give any user root access. Note I didn't say "in a secure environment"; a cluster may very well not even be on the network. In general, a user should not have root access even in such an environment unless they are the person responsible if the system goes down. Limited privileges may be given through sudo, but any program with the ability to
Permitted with a clear duty to log actions (Score:3, Interesting)
This has never been a problem. Then again, they already know that they would be in bigger trouble if I were not trustworthy. I offer them more controls than they would have insisted on, and this gets me more latitude than they normally would have offered.
Give them their own systems (Score:3, Insightful)
It works for us.
Doesn't that defeat the point? (Score:3, Informative)
What you're asking is, essentially, to establish yourself as a certain class of user under whatever scheme you're using, or for some kind of "well, Slashdot agrees" circumvention of guidelines.
It reminds me of a time that I was working on such a machine, and I sat in a conference room where people were trying to bargain with me as if I represented the STIG. The simple fact of the matter is, the STIG is a set of guidelines, and nobody's opinion will change the contents of the document.
Stop trying to negotiate it.
Re: Got Root? (Score:2, Interesting)
ACLs? (Score:2)
There are only a couple of situations I've run into where I've needed more, such as applications that need to bind to a privileged port, or where I had to run a cron job that needed more than 1024 file descriptors and so had to make it setuid root. (Setting the number of file descriptors for a user via /etc/security/limits.conf doesn't apply to that user's cron jobs.)
Chroot them... (Score:2)
No more Sun? (Score:2)
Another question: What's wrong with Solaris?
Re:No more Sun? (Score:5, Interesting)
Read a few "Ask Slashdot" questions, and you'll understand. Ask Slashdot can almost always be summed up: I'm old and cynical, and no longer surprised by clueless fanboys with an exaggerated opinion of their own experience, intelligence, and skill.
I will never understand, however, how any of them manage to find jobs.
Developers need more discipline (Score:5, Insightful)
As a general policy, if a developer needs root access, they need to prove to me as an administrator that they actually do need root access. I'm not going to give root access (sudo, su -, or access to privileged accounts), even on a development box, to someone that needs occasional chmod privileges. More often than not, the people who are begging for root access are those that have been so spoiled by coding on their own Linux boxes that they lose sight of all the best practices that contribute to good code. They want foolish things like directories with 777 privileges so they can drop temp files when there are 30 better ways to do it. root is not a cure all... just because you're used to it on your own machine doesn't mean it's appropriate for coding in a multi-user environment developing customer-facing applications.
In the end, there are very few specialized applications that actually require root access to work. I will concede that sometimes root access is necessary but it needs to be treated on a case-by-case basis. I'm of the belief that a properly written application should be written such that it can be run with the least amount of privileges, and can be installed anywhere... not just
accountability and change control (Score:5, Informative)
I make them come to me for everything. But not directly. That's what the ticketing system is for. The ticketing system justifies my existence, keeps any requests from slipping through the cracks, and helps to keep track of ad-hoc changes made to any given system.
Many times end users think they need root for something when they don't. For example, there might be some niche tool that they need installed on a system. Or do they? If one user is the only one who is going to use it, I advise him to do something like "./configure --prefix=$HOME" to build apps that install in his home directory. You don't need root to install apps anymore. Besides, if you want an app installed for everyone to have access to, the sysadmin should be doing that anyway.
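The --prefix trick can be sketched like this (the package name is made up and the build lines are shown commented, since any real autotools package would work the same way):

```shell
# Install a typical autotools package under $HOME instead of /usr/local.
# "sometool" is a hypothetical package name; substitute the real one.
PREFIX="$HOME/local"
mkdir -p "$PREFIX/bin"
# tar xzf sometool-1.0.tar.gz && cd sometool-1.0
# ./configure --prefix="$PREFIX"
# make && make install    # no root needed: everything lands under $PREFIX
# Make the result visible to your shell:
export PATH="$PREFIX/bin:$PATH"
```

Putting the export in ~/.profile makes it permanent, and the admins never hear from you.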
It might be a pain in the ass to make you go to the sysadmin for everything, but in the long run it will keep things running smoothly and perhaps force you to be a little more disciplined in your work.
Why do they need root? (Score:4, Insightful)
Why in the world do your users need root access? On Windows it makes sense; all too many poorly written programs refuse to install or run unless they can run roughshod over the entire system. But this is Unix. It's a rare piece of software that can't be installed and run as a user. Most can even be installed as a user but made available to others. Yeah, it's a bit more frustrating that you can't just install the latest RPM, but if you're skilled enough to install an RPM, you can probably manage "./configure --prefix=$HOME/mybin && make && make install". Changing file ownership? Again, why do you want that? If you're sharing files with other users, get a group set up and chgrp the files appropriately. If you have lots of complex sharing needs, set up one of the Access Control List options.
Ultimately users shouldn't need root. Professionally I develop clustering software for Linux and other Unix systems. I regularly install new applications I'd like to use in my home directory. Our administrators set us up with a good ACL system (courtesy of AFS). I do the vast majority of my work as my own account. The only time I need root is to test root-specific aspects of our software (if launched as root, it runs jobs as the user who submitted it). I can't remember the last time I switched to root; probably a month or so ago.
Unless you've got a damn good reason, your administrators are right to withhold root access from you. Your desires aren't good enough.
Most just say no. (Score:3, Insightful)
I'm a developer... (Score:5, Insightful)
I have root on my workstation (cold dead hands and all), but not on a single server--not even a dev server.
sudo on things like mv and chmod gets you a root shell on the box fer chrissakes, why not just put the root password on a sticky on the rack?
When something goes wrong, I don't want to hear, "Maybe the dev did it." I didn't do it--no access. When we go to prod on something, I don't want to hear the admins complaining they don't know how to promote the app because some ass developer did it manually in dev instead of creating a proper install.
If you need root to chmod something, then your admin hasn't set up the box properly. Either he doesn't know what he's doing, or you haven't told him properly what sort of environment you need. Either get a better admin, or write up a clear description of all the functionality you require. Either way, you don't need root.
Of course, the smaller the business, the more likely an admin is a dev and vice versa. In that case, all bets are off.
Re:I'm a developer... (Score:5, Interesting)
You are the only poster so far who seems to have any understanding... Or at least the only one that doesn't let their understanding get clouded by their childish desire to "have root" even if they don't really need it.
With root access comes responsibility... and I don't mean that like the way they use it in a Spider Man comic book. It's not that you need to exercise caution, ethics, and good judgement lest you become evil; If you have root and something goes wrong, you are responsible. Even if you weren't the one that broke it. Root is a blame magnet. Period. End of story. Unless they're paying you the sysadmin's salary too, you should not want to have root access on any shared system.
Also, people who can't grasp the concept that sudo access to chmod is exactly the same thing as complete root access should have their *nix geek license revoked.
Unless you need to set the clock, signal a process you don't own, or listen on a well-known port below 1024 (if it's not a well-known port, you don't need to use a low number. I don't care how much you insist. You don't have a good reason. I'm not listening anymore...), you do not need to be root. Yes, you can do every single other thing you need to do as a user without root. It's not even inconvenient. One must wonder how these people would have survived before PCs...
Ever hear of groups? (Score:5, Insightful)
Basically you need to have your entire filesystem layout set up properly, with "project" areas where each "project" has its own directory tree, with setgid for the project's group on the main directory and all subdirectories. Each major "project" would have a group set up for it. Then all file permissions would be covered by anyone in the group, or possibly a "project lead" who keeps track of all the groups and knows what permissions should be set on different areas (i.e., for data sharing between projects, etc.).
Once the infrastructure is in place, the worst thing that happens is that a person is not a member of the "group" and just needs a helpdesk call/form to gain group access ("ok'ed" by a lead member of the "group"). Basically something that can happen in 5-10 minutes if implemented properly. With the setgid, all new files created in the areas will always be owned by the proper group, which has full access to chmod/chown those files (assuming someone doesn't do "chmod 700"), but even then, cron jobs can be set up to run every hour or so that do a "chmod -R 770" on any/all project areas (with the cron job removed if you need to lock an area down to no access).
This is how it should be done; no sudo needed. All the work is in the preparation, with true processes needing to be set up and implemented: a form for creation of a new group (which includes group ownership as well as a box to transfer ownership to another person), another form for requesting new data areas (with which group owns the area), and finally a form for adding/removing members to/from the group, signed off by the current group owner. Optionally, another form for "locking" a data area to keep all access out. Then it simply goes to the IT staff, who read down the process document, verify the data on the form, and either create a new directory (setting the setgid bit and proper group ownership), add/remove a user to/from a group, create a new group, or move a user to the first name in the group file (for easy tracking of the group owner, or update a separate document with this information).
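The setgid mechanism described above can be demonstrated in a throwaway directory; in real life the tree would live somewhere like /projects and the admin would chgrp it to the project's group, a step sketched here only as a comment:

```shell
# Create a shared "project" area whose setgid bit makes new files
# inherit the directory's group rather than the creator's.
proj="$(mktemp -d)/modeling"      # stand-in for a real project path
mkdir -p "$proj"
# In production (admin-only step): chgrp modeling-group "$proj"
chmod 2770 "$proj"                # leading 2 = setgid bit; 770 = rwx owner+group
touch "$proj/run01.dat"           # new files pick up the directory's group
ls -ld "$proj"                    # mode reads drwxrws--- (note the "s")
```

Once this is in place, group members manage their shared data with ordinary user permissions and no sudo at all.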
groups + sudo can allow installation rights (Score:5, Informative)
To address the article's question, groups solve more than just file permissions; consider an environment in which users in the admin group have the ability to do things (via sudo) as the admin user, who owns /usr/local and all of its children. This lets privileged users install things, but prevents them from accidentally messing with them (the admin group should not have write access to /usr/local, so sudo is required).
A more restricted implementation would chown /usr/local/stow to the admin user and grant the admin group sudo access as the admin user plus sudo access to the stow command (or perhaps a shell script that ensures items are stowed to /usr/local).
Of course, /usr/local is only one potential target. Perhaps your environment is better suited for /arch/beta or /opt. Also note that this idea is easily abstracted and applicable to other tasks.
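A sudoers sketch of this scheme might look like the following; the group and user names are illustrative, not from the original post, and sudoers should always be edited with visudo:

```
# Members of group "localsw" may run anything as the unprivileged "admin"
# user, who owns /usr/local -- install rights without handing out root.
%localsw  ALL = (admin) ALL

# The more restricted variant: only stow, still run as admin.
%localsw  ALL = (admin) /usr/bin/stow
```

The key point is the (admin) run-as field: nothing in this grant ever executes as root.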
Dedicated dir to users (Score:3, Insightful)
On the other hand, sudo for things like mv/cp is a no-no for me, but you can ask for scripts to edit certain files and to change stuff in a dir.
You might optionally enable a bin dir in the user's dir, so a "bin" user group could install stuff there (similar to option 1),
but again, your setup might vary by Unix (even by Linux distribution). It is doable; just pester the admins enough.
In Unix EVERYTHING is possible, IMHO (yes, yes, now all Windows admins just flame me).
configure --prefix anyone? (Score:3, Interesting)
MIT Athena (Score:3, Informative)
Are You Kidding? (Score:3, Informative)
No.
Because they will break the system and then they will blame the IT department. Logging lets you know who did it but the blame is still entirely assigned to the IT department.
Developers are even worse because they think they know it all but 9 times out of 10 they know next to nothing about system administration. I would be more willing to give sudo rights to a normal user who follows a documented procedure than I would to a gung-ho know-it-all "hey I run Linux at home gimme full root access" developer. I've seen developers chmod 777 their files because they don't understand permissions. Do you think I'm going to trust them with root access to mv or cp? No chance.
I've seen developers ask for sudo access to run patchadd, to run pkgadd, to run pkgrm, to run vi (how I laughed at that one). They are rejected every single time. If they have a process that needs to run regularly as root then it can go into a script in /usr/local, permissions will be locked down so only root can modify the script, and a limited number of users will be granted access to run that script. That's as good as it will get without divine intervention (aka the CEO).
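That locked-down-script arrangement might look like this in sudoers (the script path and group name are hypothetical; edit with visudo):

```
# Only root can modify the script (root-owned, mode 755 in /usr/local/bin);
# only members of "appops" may run it, and they may run nothing else.
Cmnd_Alias APPMAINT = /usr/local/bin/rotate-app-logs.sh
%appops  ALL = (root) NOPASSWD: APPMAINT
```

Because users cannot write to the script, the grant stays exactly as narrow as the script's contents.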
If the application developers are making changes that require frequent superuser access - especially to commands like chmod and mv and cp - then perhaps they need to rethink what they're doing. It sounds to me like they're doing something wrong.
NSA SELinux (Score:3, Insightful)
After a month or two's worth of feedback, the system should stabilize to the point of giving the researchers what they want in an extremely restrictive manner.
The time invested results in a secured system that behaves exactly as your policy dictates while still giving out 'root' liberally.
You're kidding, right? (Score:3, Insightful)
Just logging the sudo commands isn't going to give you nearly the auditing ability I suspect you're looking for, and giving them any kind of root-level access to the filesystem is game over.
Figure that any chmod u+s is suspicious and will get caught?
Figure you'd notice their subsequent use of whatever new sudo permissions they just gave themselves?
And, look at that, suddenly their UID is 0.
The list goes on...
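For the skeptical, the first step in that chain can be demonstrated harmlessly on a private copy of a shell; all paths here are throwaway, whereas a real attacker would aim "sudo chmod" at a root-owned binary:

```shell
# Harmless demo of the mechanism behind "sudo chmod = root":
# the setuid bit makes a binary run as its OWNER, not its caller.
demo="$(mktemp)"
cp /bin/sh "$demo"
chmod 4755 "$demo"      # done via "sudo chmod u+s" on a root-owned shell,
                        # this is an instant, persistent root backdoor
ls -l "$demo"           # mode now reads -rwsr-xr-x (note the "s")
```

Run against a root-owned /bin/sh copy, that one chmod hands out a shell with EUID 0 to anyone, which is why logging the command afterward is cold comfort.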
Get a trouble ticket system for admin changes (Score:3, Insightful)
The whole reason people get pissed at admins and want to do it themselves is that they always feel the time crunch. They have project X that is due on Friday, they submit a request to have something done to the server Wednesday morning, and it still is not done by Wednesday afternoon. This is not always the admin's fault; admins have priorities too, and sometimes it is hard to juggle all the requests because they don't really know what the real priorities are in terms of the company as a whole.
The solution is to implement a trouble ticket system for all admin requests, and give managers access to it as well, allowing them (and only them) to adjust priorities of requests. That way, managers can set the priorities of the requests to the admin as they see fit. As well, because the managers all know that the developer *did* make the request, and there is a record of it, the developer feels less worried about delays coming from the admin department (passes the buck), and less pissed off at the admins.
The beauty of it is it also takes some responsibilities off the admin, and gives it to the managers, where it should be anyway.
What the admins did... (Score:3, Informative)
ftp://ftp.funet.fi/pub/local/src/omi-file.tar.gz [funet.fi]
It allows users to grab ownership[1] of files in certain per-user configured paths whenever they need to (sample config file included). This allowed us to manage the incoming ftp directories without going insane.
It was written some 15 years ago by Matti Aarnio.
[1] Ownership is "omistus" in Finnish, hence the name of the tool
You're wrong. (Score:3, Interesting)
We're currently setting up a Beowulf cluster, and my job is to manage the queues, set up the resource management, and tune the scheduler to optimize the performance.
I've never seen a situation where anyone has needed to change ownership of a file except when someone departs. De rigueur, you put in a request to the admin to chmod all files under that user ID to g+r and directories to g+rx. That's it. Anyone in the person's department can then copy out whatever they need.
Install software? We simply provide the software with instructions and a log of installation on another machine -- or a binary RPM -- to the admin with a request to install it. It's not like we install applications every day. This is doubly important in a Beowulf cluster, since you need to sync the software amongst the compute nodes.
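That departure procedure is a one-shot find per request; here it is sketched against a throwaway tree standing in for the departed user's home (real paths and groups would differ):

```shell
# Give the group read access to everything a departed user left behind:
# files get g+r, directories get g+rx, nothing else changes.
home="$(mktemp -d)"                    # stand-in for /home/departed_user
mkdir -p "$home/runs"
touch "$home/runs/model.out"
chmod 700 "$home/runs"
chmod 600 "$home/runs/model.out"
# The admin's one-shot change:
find "$home" -type f -exec chmod g+r  {} +
find "$home" -type d -exec chmod g+rx {} +
```

Colleagues in the same group can then copy out what they need, with no ownership changes and no sudo granted to anyone.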
No, if you find yourself wanting root access for such things, then you are doing something seriously wrong.
Think it through ... (Score:3, Insightful)
Similarly, giving a Unix user the ability to execute mv or chmod (or quite a variety of other single commands) as root is functionally equivalent to giving that user full root access.
Even if all the authorized users can be trusted not to abuse the power, can anyone be sure they will protect their password (or other access token) so well that no intruder will ever use their account? I think not.
chmod and cp as root? (Score:4, Insightful)
We only give somebody root ability to do something if it's essential to their job, and a team reviews any new application of that to ensure it doesn't facilitate unwanted privilege escalation.
Their basic access to a system at all is reviewed quarterly by their manager, and if he doesn't take action to change the default answer to "yes, they still need this access", they get deleted.
Show me a publicly-traded company that's not acting like that, and I'll show you the next Enron.
How much would user-local apt / dpkg help? (Score:3, Interesting)
I know that in a home environment (such as if I'm setting up my parents' computer), I'd be a lot more comfortable having them use a version of Synaptic that installed software just for the current user. That would basically eliminate the need for them to have root access at all. Maybe a similar thing holds true even for most developers.
Granted people can usually install software for themselves by compiling the source code, but to require that is to basically ignore all of the benefits that apt / Synaptic offer.
(If you're a Gentoo user, I think the same point can be made by doing a find/replace on the terms apt/dpkg/synaptic.)
Re:Tie them up! (Score:2)
But worse yet, think of the residue the duck tape will leave on the chairs!
Re:Tie them up! (Score:3, Insightful)
Of course, this being Slashdot, the more likely scenario is flames coming at you from both sides. Good effort, though.
Re:Don't forget userlimits! (Score:2)
Re:Don't install the software as root. (Score:3, Interesting)
Re:Hell no (Score:3, Funny)
Or they give away files to avoid their quota limitations...
Re:Hell no (Score:3, Funny)
What do you mean, "the system has left the building"?