How Hard is it to Manage Different Unices?
vrmlguy asks: "Where I work, we have several Unix-based servers, all running the same vendor's OS. We are getting ready to buy another big server, and management wants to get bids from other vendors. However, our staff is only familiar with our current vendor's OS. Yes, I know that any two flavors of Unix are more alike than not, and yes, I know about the Rosetta Stone for Unix that makes it easy to transfer skills. I want to know about the downside: what's the difference in the cost of operations between a mono-culture and a shop running two or more vendors' OSs?"
I see it like this... (Score:4, Insightful)
Re:I see it like this... (Score:3, Insightful)
Re:I see it like this... (Score:5, Insightful)
Hasn't the general consensus on Slashdot been that a monoculture is a bad thing?
Microsoft uses the same reasoning (higher TOC) as a reason to move from whatever blend people use now to 100% Windows...
Re:I see it like this... (Score:3, Insightful)
As long as two different platforms have different strengths, you can reduce TCO by taking advantage of their strengths.
If you're using win95 for your servers and Red Hat 3 for your desktops, you'd be better off with a monoculture. (And may whatever god you believe in have mercy on your soul!)
In fact, I'd go so far as to say that effective deployment of technology will never increase costs.
Re:I see it like this... (Score:3, Funny)
I think you mean TCO (Total Cost of Ownership)
Unless you mean table of contents, I guess two vendors would have two different manuals, which would theoretically double the Table Of Contents you have to deal with. And don't even get me started on indices...
Re:I see it like this... (Score:3, Interesting)
The price advantage you see is likely to depend very much on the type of computing you do and the volume. If you only buy 5 machines a year, I doubt that the price break you get by going to a multivendor environment is going to be worth losing binary compatibility for, let alone the admin hassle. If, on the other hand, you buy 100 machines a year, you should definitely get a second vendor in place.
The other issue is that the price gap from Sun to Intel is huge. Comparing machines of like performance, Intel boxes can be as little as a third of the cost. Unless you have a definite need for the features of a non-Intel platform (and I can't think of many offhand), the cost saving of Linux or BSD can be great.
I can't think offhand of any reason to have six varieties of Linux around unless you are a masochist of some sort.
Re:Then again. . . (Score:2)
Having a mixed shop might actually allow you to quickly determine who the wannabes are versus your real talent.
Allowing the wannabes to plod along happily day to day may cost you less in the short run. However, it will cost you more as soon as you encounter problems of any significant complexity.
Hybrid environments (Score:3, Insightful)
Re:Hybrid environments (Score:2, Insightful)
There's something to be said for Unix's adherence to standards, unlike a certain closed source vendor...
Re:Hybrid environments (Score:2, Insightful)
Also, install your shell of choice, gnu utils (if you like them or know them) and it makes life a lot simpler.
Don't forget books like Essential System Administration, that list different flavors for every command/procedure.
That said, it is ALWAYS easier to take care of a bunch of things when they are all the same.
Re:Hybrid environments (Score:5, Insightful)
Think patches. Now you have to track multiple vendor advisories, handle patch management, what does the patch break, depend on, etc.
Next comes config changes. No longer can you write a simple script that makes the change on all boxes; you have to support multiple scripts, or write scripts to handle the idiosyncrasies.
Third comes binary compatibility. What I generally do is build a local set of binaries for all the specialized stuff. They can either be blasted across all systems or mounted via NFS. With multiple versions / flavors of the OS, the work gets doubled, tripled, etc. What used to be a 2-hour task turns into a day-long task.
Can't forget about security. What you do for one system you have to do differently for another. Different tools, binaries, rc scripts, etc.
The bottom line is that if you needed 4 people to support your current single OS environment, you may need 5 or 6 or even more when you go multi-platform.
I ran a shop where we supported Win98, NT4, Win2000, Solaris 7 & 8, Red Hat Linux, FreeBSD, MacOS 9, MacOS X, and a smattering of other OSes for 500 users. This gets non-trivial very fast.
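The "multiple scripts" problem above can be sketched as a single dispatch on `uname -s`. This is a minimal sketch; the per-flavor actions are placeholder strings, not real procedures for those systems.

```shell
#!/bin/sh
# One wrapper per logical change, dispatching on OS flavor.
# The echo lines stand in for whatever the real per-flavor steps are.
apply_change() {
    flavor=${1:-$(uname -s)}
    case "$flavor" in
        SunOS) echo "edit Solaris network config" ;;
        HP-UX) echo "edit HP-UX network config" ;;
        AIX)   echo "edit AIX network config" ;;
        Linux) echo "edit Linux network config" ;;
        *)     echo "no recipe for $flavor" >&2; return 1 ;;
    esac
}

apply_change Linux
```

The cost shows up in that case statement: every new flavor adds a branch to every such wrapper you maintain.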
Been no fun for me.... (Score:2)
Easy (Score:4, Funny)
$32,593.12
Now can we stop with these stupid, inane questions? I would rather read Jon Katz than these awful Ask Slashdot questions of the past 3 months or so.
Licensing (Score:3, Insightful)
Unless Vendor B is offering a competitive upgrade from Vendor A's software, it would be much cheaper to negotiate an additional Server and Client license pack from your existing vendor than to enter into a new business relationship with some new vendor. Unless, of course, the new "vendor" is (sigh) Linux.
It will affect your admins more than users (Score:2, Insightful)
All the good Sysadmins are retired or dead (Score:4, Funny)
The quality of Unix sysadmins has declined so much over the past decade that what passes for a sysadmin right now is what I used to call "an operator".
We have 5 unix sysadmins (major transportation company). Not one of them could write a shell script if their life depended on it.
They insist on doing everything by hand and then complain there are no automated tools available to them. Their definition of an automated tool really means "graphical front end to those grubby text commands".
They have no appreciation for the modularity of unix, and they look longingly at Windows servers.
Meanwhile, they're all getting paid twice what they're worth, because as dumb as the Unix sysadmins are, the NT ones are apparently on a different evolutionary scale where "rock" is considered the most intelligent life form.
So my point is that getting these sysadmins to switch won't happen. They'll piss, bitch and moan about the opportunity to learn something to enhance their skills, then complain the application is screwing up "their" servers.
If only ASPs would take off, my life would be much better, because sysadmin skills suck so bad that black holes pale in comparison to the event horizon of these so-called admins.
Re:All the good Sysadmins are retired or dead (Score:2)
That's because times have moved on. Sysadmin is no longer considered a cool job. Cool dudes, though, still have cool jobs. Somewhere else.
Re:All the good Sysadmins are retired or dead (Score:2, Insightful)
Re:All the good Sysadmins are retired or dead (Score:2)
Currently I have to deal with 4 Unixes... as long as they keep to themselves, they're fine (and that's the case here). It's when they have to interact with each other that the complexity shows up. Or when they break... nothing like getting 2 hardware vendors and one big software vendor in a conference call and having them accuse each other as being the cause of the major malfunction that is interrupting business.
Re:All the good Sysadmins are retired or dead (Score:3, Funny)
*BEEP* *BEEP*
"Damn! Sorry boss, I'm afraid you'll have to figure out how to change the color theme by yourself, the file server just went down."
*receding sound of footsteps*
Re:All the good Sysadmins are retired or dead (Score:2)
Humble apologies. I had a peek at your web page, so I'm guessing you weren't joking.
Re:All the good Sysadmins are retired or dead (Score:2, Insightful)
all the good sysadmins are probably working and not monitoring these posts
Dead wrong... All the good sysadmins have automated their jobs and have all day to surf.
Re:All the good Sysadmins are retired or dead (Score:5, Insightful)
Hire better admins. They are out there, and a lot of them are unemployed right now. Any problem in an organization that persists past a few days or at most a few weeks is a management problem.
Re:All the good Sysadmins are retired or dead (Score:2)
Re:All the good Sysadmins are retired or dead (Score:5, Funny)
Seeing as how I'm a senior admin (who *can* script), in a team of 5, for a major transportation company, I wonder if you're my boss? *grin*
Re:All the good Sysadmins are retired or dead (Score:3, Funny)
Re:All the good Sysadmins are retired or dead (Score:2, Insightful)
Re:All the good Sysadmins are retired or dead (Score:2)
Re:All the good Sysadmins are retired or dead (Score:2)
Re:All the good Sysadmins are retired or dead (Score:2, Insightful)
Good old seat-of-the-pants generalists are often overlooked in favor of the latest whiz-bang rookies straight out of the memorize-for-the-certification test-prep school.
I was a trainer doing sysadmin training for one of the big iron multiprocessor Unix boxes -- and in '93 you could see the beginning of the end as folks who were basically operators became sysadmins.
Bill Pechter
Rosetta Stone (Score:3, Funny)
Re:Rosetta Stone (Score:4, Funny)
Knowing multiple unixes/unices is Good For You (Score:4, Insightful)
Sure, you might know how to do x,y,and z on your Solaris box, but once you understand how to do it also on RedHat and AIX, you'll understand much better how it works conceptually. Then when you get an HP box, it'll be pretty easy.
Of course, don't run killall on HP.
Re:Knowing multiple unixes/unices is Good For You (Score:5, Insightful)
How about knowing multiple OSes is good for you? Same logic applies. I "speak" Windows, *nix, MacOS, IOS and even some VMS. Now, I'm not afraid of any computer - I know I can figure out what to do with minimal info available.
If everyone didn't care so much about what the OS was because they were afraid of something new and just chose the right tool for the job, "vendor lock-in" might go away. A whole lot more understanding would come about in the IT field, in any event.
Soko
Re:Knowing multiple unixes/unices is Good For You (Score:2)
Re:Knowing multiple unixes/unices is Good For You (Score:2)
Go to http://www.crimeagainstamerica.com for more info on how H-1B visas are destroying more than just the IT industry
Re:Knowing multiple unixes/unices is Good For You (Score:2)
Re:Knowing multiple unixes/unices is Good For You (Score:3, Insightful)
Fairly easy and good for the resume (Score:2, Insightful)
Plus, it is great for the resume. When you get tired of this job, get fired, laid off, or transferred, you will find it much easier to find another job.
Some of the differences between different versions of UNIX include:
BSD or AT&T based
Disk tools
Administrative interfaces and GUIs (SAM, SMIT, etc.)
Startup / Shutdown scripts (rc.d vs init.d)
User management
Included tools ("top" is a big one)
Backup and recovery (hp includes fbackup / frestore)
X-Windows (CDE, VUE, etc.)
Some of the similarities include:
userland tools (ps, ls, find, etc.)
Directory structures are slowly becoming the same
Caveats (Score:5, Insightful)
The problems can arise with:
1. vendor-centric admins who aren't willing to learn
2. different service contracts creating differing expectations of uptime between systems
3. added costs from maintaining multiple service contracts and training on multiple platforms
4. finger-pointing, if the systems interact
5. rewriting in-house tools which are needed on the new platform, but were not written generically before
6. 3rd party licensing costs may differ (if you are licensing the same product on both OSs)
7. dilution of expertise, since your admins will have to be more generalists (this is often overbalanced by the expansion of perspective in problem-solving that comes from broader experience)
Other than that, I can't think of anything off the top of my head which would make this hard. Generally, it is not a problem to do.
In practice it depends... (Score:2)
Things that help include creating branches in your login scripts (.profile or
There is a good O'Reilly book called "Unix for Oracle DBAs" that is a really good cross-Unix reference that you should consider picking up.
Not enough info (Score:2)
1) What sort of applications do you have running on these servers and how interoperable are they? Does it matter how interoperable they are?
2) And further, are those apps dependent on that vendor's Unix?
3) How much resistance is there from the staff to learn something new?
Assuming the above aren't a problem, then it shouldn't really matter. Go open-source and save a buck or two.
Re:Not enough info (Score:2)
Value here? (Score:3, Informative)
Could the Slashdot folks be a little more discriminating in their choice of questions, please? The most entertaining/thought-provoking parts of this story seem to be the idiot troll posts. This is hardly a thought-provoking or difficult question to answer/figure out with the most minuscule of job skills.
To answer this silly question:
The difference is: a lot, due to training/familiarization, support contracts, possible hardware differences, etc. DUH.
UNIX Differences (Score:2)
You learn one flavor of UNIX, get to know it inside & out. And because of the shop, the job market, whatever, you start working with another flavor. And it will look weird because it's different.
Sometimes the differences are due to developers' choices, sometimes they fix problems existent in the first flavor you've encountered, sometimes they cause problems you didn't have in the first flavor. And sometimes what's weird about this new flavor is because the guy who set the computer up botched things.
Also, the longer you know one flavor of UNIX, the more likely you are to call any new flavor you encounter "braindead".
Except when it comes to SCO. Trust me on that one.
Geoff
Re: SCO is Evil (Score:2)
Exactly my point.
What else can you say about a UNIX flavor developed by Microsoft? It takes all of the user unfriendliness of UNIX & combines it with the bad programming habits of MS. And SCO (before they were bought out by Caldera) failed horribly at maintaining the resulting code.
At least Caldera did the sensible thing: let SCO die, & offered all of the customers still using it a way to migrate to a better OS.
Geoff
It depends... (Score:5, Funny)
How much of a raise are you asking for?
Heterogeneity has its place, too. (Score:3, Insightful)
Re:Heterogeneity has its place, too. (Score:3, Insightful)
And the solution will solve the problem on EVERY SINGLE MACHINE you have. Therefore, if a problem IS to develop, homogeneity is preferred, because you're going to have to solve it anyway.
Re:Heterogeneity has its place, too. (Score:2)
One thing to consider (Score:2, Informative)
Software costs (Score:3, Interesting)
Your admins, if they're any good, should be able to adapt to a different UNIX easily. Yes, there are differences, but not ones that should trouble an experienced admin any longer than it takes him to read a couple of man pages.
Don't change if you're happy... (Score:2)
Explain that to management, and I'd be very surprised if they didn't continue with the original vendor.
Training and Security (Score:5, Insightful)
Two factors come to mind:
No matter how close the systems are, you will still "lose" time to training (either formal or OJT) requirements for the new system. This may actually be a benefit for your staff (wider perspective, more to put on the resume).
Depending on how much focus is placed on security, you may end up doubling the time required to track vulnerabilities and install patches. Again, this may be an advantage as well since a single-os shop tends to have equal vulnerabilities on all systems. In a multi-os shop an attacker will have to work harder to get control of everything.
Re:Training and Security (Score:2, Insightful)
Re:Training and Security (Score:2)
Re:Training and Security (Score:2, Troll)
Yes, making their employees more attractive to other companies is always an organization's first priority :)
For one server it's hard, for many it's easy (Score:5, Insightful)
It is easiest to manage servers from only one vendor. Unix makes it easy to transfer skills, but here is the contradiction: it is easier to manage servers from many different vendors and versions than to manage just one server that is different from the rest.
When all of your OSes are the same, it is easy because everything automatically transfers. When all are different, it is harder because you always have to remember the correct incarnation of each procedure, but because they are all different you get in the habit of looking it up each time. When all are the same except for one machine, you forget that everything is different on that one machine and you apply the wrong incarnation (often with disastrous results; see the discussions of killall on Linux vs. HP-UX on comp.risks). Because of this, the one different machine will often get [invalid] complaints due to these differences.
If you can't stick with one vendor, then you should go with many, so you are in the habit of checking the differences. At the very least, get some Linux (Debian, Red Hat, SuSE) and BSD (Free, Open, Net) machines in house now, and use them for production. You need to make sure that your admins are used to subtle differences. The other alternative is to just stick with one vendor, but not only do you pay more, your admins become lower quality as they learn only one system. (Think of it as a resume builder; you want different systems on your resume!)
killall differences (Score:2)
Guess the folks at SGI never heard of the "path of least astonishment", such as printing a usage message if there are no arguments. Then again, you could argue that it's not very astonishing if typing 'killall' really does kill *all*.
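A defensive habit for mixed shops is to wrap the dangerous names. This sketch only prints what it would do (it never runs the real killall), and the flavor handling is illustrative; the underlying fact is the one discussed above: Linux killall(1) kills by name, while the System V killall found on HP-UX and Solaris kills essentially every process.

```shell
#!/bin/sh
# Refuse to run a bare "killall" anywhere its semantics are "kill
# everything". Dry-run only: this wrapper prints instead of executing.
safe_killall() {
    name=$1
    os=${2:-$(uname -s)}
    case "$os" in
        Linux)
            echo "would run: killall $name" ;;
        *)
            echo "refusing: killall on $os kills everything; use pkill $name" >&2
            return 1 ;;
    esac
}

safe_killall httpd Linux
```

The real lesson is less the wrapper than the habit: in a mixed shop, any command whose meaning varies by flavor deserves a guard.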
It all hinges on scripts... (Score:4, Insightful)
Now, if your sysadmins employ a lot of scripts, figure you'll have to spend twice the time maintaining them if you have two different platforms that are not fully compatible. You can minimize this if you stick to POSIXly correct scripting, but you'll never completely eliminate it.
The same goes for custom programming. For instance, if you're running everything on BSD and you want to take on a Sun machine running Solaris, there may be some issues with the occasional socket call that Sun implements differently from the rest of the world.
So, the more custom scripting/custom apps you have, the more time your sysadmins will have to spend maintaining/porting/testing the stuff.
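A few portable idioms of the kind the parent means, as a sketch. These are POSIX sh constructs; the caveat is that some pre-POSIX vendor Bourne shells of that era may not accept all of them.

```shell
#!/bin/sh
# POSIX-portable idioms that behave the same across conforming shells.

# Parameter expansion instead of forking basename(1)/sed:
f=/opt/apps/foo.tar.gz
name=${f##*/}          # strip directory part -> foo.tar.gz
stem=${name%.tar.gz}   # strip a known suffix -> foo

# Single = in test, not the bash-ism ==:
if [ "$stem" = "foo" ]; then
    echo "portable string test ok"
fi

# POSIX arithmetic expansion rather than let or (( )):
i=$((2 + 3))
echo "$name $stem $i"
```

Sticking to constructs like these is what keeps one script runnable on all the boxes instead of one script per platform.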
Re:It all hinges on scripts... (Score:5, Insightful)
I think if you were to ensure that all of your systems had the same shell installed, or the same version of Perl and selected modules, you'd save a lot of time.
Backup and recovery (Score:3)
My Solaris backup and recovery strategy is not as elegant in that I make backup tapes via
I do not expect both to work identically, but there are some significant differences between the two.
Third party products? (Score:3, Insightful)
Makes your backup/recovery fairly consistent across different products, plus everything can then be managed from a central console.
Re:Third party products? (Score:3, Informative)
We have looked at both TSM and NetBackup. Both are an improvement over my current Solaris backup and restore process, but neither are as nice as what AIX does out of the box.
It was an interesting experience talking to the vendors: the Veritas guys claimed their product was better than TSM, and gave us four or five reasons why. The IBM people claimed that their product was better than Veritas for the exact same reasons. Both products are pretty good, and each has strengths and weaknesses.
AIX has its faults, but +5 Insightful to IBM for mksysb.
It will be CHEAPER in the long run!!!! (Score:3, Insightful)
The short-term costs to retrain staff for the new system will be higher, but the long-term benefits will definitely outweigh them. Once you build a multi-OS capable IT department, the cost to add new hardware later on becomes significantly less. By not being locked into one OS (and one vendor) you have the freedom and flexibility to choose the best solutions for future problems (as well as hunt for the lowest cost). The smart thing to do is diversify your IT shop as much as possible so that you can ensure you always have the right tool for the right job. No single vendor or OS can provide all the answers, regardless of what IBM/Sun/Microsoft may try to tell you.
Re:It will be CHEAPER in the long run!!!! (Score:3, Insightful)
One small problem with this arises if the person actually making the decisions wants to open up the possible vendor pool for political/economic reasons rather than technical ones. For this reason, you can end up with multiple *nix boxen from multiple vendors, none of which were selected because they are technically more suitable for their individual tasks, but rather for politico-economic reasons.
Re:It will be CHEAPER in the long run!!!! (NOT) (Score:2)
A TRIVIAL example would be changing your IP address space. Each flavor maintains its config in a different way. It doesn't matter that you know how each one differs; you won't be able to write one simple script that just makes the change. (It would be a complex script if you even chose to do it via script, or you would write a script for each platform. You would probably end up doing it by hand.)
Another trivial example is the initial system load. With Solaris, for example, you set up (and maintain) a JumpStart server. When you get a new machine, an hour later you have a complete environment with all your customizations, up-to-date patches, etc., while hardly lifting a finger. Now add IRIX, Red Hat, Debian, FreeBSD, AIX, HP-UX, and OS X into the mix. See the problem?
The list of examples of all the additional costs associated with maintaining multiple flavors is virtually endless.
You basically have it backwards. It's cheaper in the SHORT run. You can shop based on price. Initial setup isn't that bad. It's the LONG term maintenance costs that get you. It's ALWAYS easier / faster / cheaper to only have one platform to maintain.
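To make the IP-address example above concrete: even the file you edit differs per flavor. The paths below are the conventional locations of that era, quoted from memory, so treat them as an assumption and verify on your own systems.

```shell
#!/bin/sh
# Where the primary interface's address conventionally lives, per flavor.
# Paths are era-typical conventions, not guaranteed; interface names
# (hme0, eth0) are examples.
ip_config_file() {
    case "$1" in
        SunOS) echo "/etc/hostname.hme0 (+ /etc/inet/hosts)" ;;
        HP-UX) echo "/etc/rc.config.d/netconf" ;;
        Linux) echo "/etc/sysconfig/network-scripts/ifcfg-eth0" ;;
        *)     echo "unknown" ;;
    esac
}

for os in SunOS HP-UX Linux; do
    echo "$os: $(ip_config_file "$os")"
done
```

One logical change, three (or more) file formats: that is exactly why the per-platform script multiplication happens.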
It isn't too bad. (Score:3, Insightful)
I do this at home to juggle Solaris, OpenBSD, and Linux and it works well. If I forget how to setup DNS under Solaris, I just go to <base_dir>/Solaris/8/DNS_Setup.txt, for example.
Also, install all of the on-line documentation you can and have it network-accessible. For example, when the man pages aren't detailed enough, on-line Solaris Answerbooks can save the day.
Also, keep well-organized bookmark lists for useful websites, such as http://docs.sun.com or http://sunsolve.sun.com, that cover your particular UNIX.
Having any number of UNIX implementations really isn't unmanageable (unless they have broken network protocol implementations). The key to success is documentation and more documentation (and unambiguously sharp sysadmins). On that note, if you don't have faith in your system and network administrators, you should just give up and stick with one OS, since no amount of documentation helps a truly stupid human.
One downside is cost differences (Score:5, Insightful)
Money.
You obviously already know that managing support contracts from multiple vendors is going to suck. I would also recommend taking a long hard look at ongoing support charges.
For example, we have both HP/UX and Sun platforms where I work. We have both servers and workstations. For the workstation support contracts on similarly sized machines, there was a world of difference in cost.
The annual fee for an HP C240 workstation was somewhere between $2500 and $3000. The same annual charge for a Sun Ultra of equal speed was between $1000 and $1500. Multiply that by the number of workstations you have to maintain, and it can add up very quickly.
The up front cost typically isn't where they get you. It's on the back end. I would research the back end on all of the platforms you are considering very carefully before making any final decisions.
Hope that helps a little.
Coptic is the Unix of choice. (Score:4, Informative)
For example, on Solaris (without Veritas Volume Manager), you have to "carve out" your disk filesystem by filesystem, and work with devices in /dev/dsk/cAtBdCsD format. On AIX, the concept is totally different with Logical Volume Manager, wherein filesystems can be created on the fly. But HP-UX uses both in an odd fashion, forsaking slices and using a "castrated" form of LVM. This is just one example, as you will find other things in HP-UX such as the useradd command being identical to Linux and Solaris, and the SAM tool being very close to AIX's SMIT utility.
In the end, as you will find, there is no uber-Unix that will carry over to all of the other flavors. IMHO, HP-UX is as close as you will get. But, my personal preference of all Unices is AIX due to its ease of use (an IBM tool easy to use? I know it sounds like an oxymoron) and robust capabilities, combined with Linux integration in the most recent versions. Flame as you will, I'm interested in hearing anybody else's insight.
Rosetta Stone (Score:2, Funny)
OS hardware knowledge (Score:2, Interesting)
Esperantix? (Score:2)
It's definitely not a problem (Score:5, Funny)
"What kind of unixes do you run?"
"Oh, we have both kinds: RedHat and Debian."
Complexity is the answer (Score:3, Interesting)
Simply put, if every server and workstation is identical, interoperability is not an issue, and the work associated with tracking, testing, and applying changes to that one, homogenous OS image is minimal.
The moment you branch out into different configurations of the same OS version, different OS versions, or different OS platforms, you've increased the complexity of the system, and thus increased your workload. Suddenly, interoperability is a factor in every decision, and issues with multiple versions and/or vendors must be tracked.
I've been meaning to write a short paper on this for some time, and attempt to relate it to Christopher Langton's Lambda parameter for the measurement of complexity (in the 3rd Annual Proceedings of the Artificial Life Conference). I've studied the identification of single points of failure for some time, as well as the question of "how many sysadmins do I need?". Both answers are directly related to the complexity of the system being managed (here, defining "system" as the collection of applications, OSes, hardware, and networks that comprise the scope of a sysadmin's responsibility). There are indeed identifiable factors that define the heterogeneity of an environment, and the ways in which these dimensions impact such things as the number of SAs required to manage them can be defined.
Google cache (Score:2, Informative)
It's harder than you think (Score:2, Insightful)
The biggest problem with a mixed environment is keeping it up to date with patches etc. Keeping track of that stuff is a complete PITA; I can't imagine how much more difficult it is in Linux, where the patches aren't on the vendor site (are they?).
Besides that, the big thing that you'll need to do is make sure everything is sort of in the same directory structure. For apps that you install, put them in the -exact- same directory. For example, all my unix boxes have the same layout:
/opt/apps
/opt/servers
/opt/data
/opt/src
/usr/local ->
That way, it doesn't matter as much which box I'm on, and I don't have to remember exceptional cases. It also makes maintenance easier, because all the exciting stuff (non base operating system) is in a known structure. That means you can write scripts, etc to monitor everything and you don't have to change them on a per-host basis. It also means you can just copy the config.status from box to box (or directory to directory) and build without reconfiguring everything.
'Luck!
Mirror's Of Rosetta Stone (Score:2, Informative)
the cost is in hardware (Score:5, Informative)
1) You can install the same shell on just about all UNIXes. Most people where I am prefer tcsh, as it has some nice features.
2) You can standardize on scripts, either use csh (blah) or sh. We prefer sh as it is found on just about EVERY unix (Sun, HP, AIX, BSD's, Linux).
3) Avoid vendor extensions to the basic shell. HP has done some awful things there in its Bourne shell, and they are not compatible with Sun and in some cases Linux either. E.g., always use `cat foo` and not $(cat foo) in sh scripts. There are other things like that.
There are problems in supporting more than one UNIX, but there are also workarounds if you do it right.
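A sketch of the backticks-vs-$() point. Both spellings behave identically in a POSIX shell; the portability claim above (that some old vendor Bourne shells only accept backticks) is the parent's, and backticks are the lowest common denominator if it holds.

```shell
#!/bin/sh
# Backticks vs. $(...) for command substitution.
f=$(mktemp)
printf 'hello\n' > "$f"

old_style=`cat "$f"`    # accepted everywhere, including pre-POSIX sh
new_style=$(cat "$f")   # POSIX; nests cleanly, but reportedly absent
                        # from some old vendor /bin/sh implementations

echo "$old_style $new_style"
rm -f "$f"
```

The trade-off: backticks maximize reach on ancient shells, while $( ) nests without escaping gymnastics, so shops often standardize on one and document the choice.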
what about devices & drivers? Low level stuff? (Score:2)
Or do you write your own real-time communication software? Writing device drivers across platforms can range from sticky (if you are writing a high-level device driver utilizing CDLI or DLPI) to downright icky (if you go down to the metal).
For us, switching platforms has a higher cost than the money spent on the boxen.
Useless use of cat award! (Score:2)
go here [helsinki.fi]...
And of course, if you've been following along for a week or two, you know that this (BING!) is a Useless Use of Cat!
Remember, nearly all cases where you have:
cat file | some_command and its args
you can rewrite it as:
<file some_command and its args
and in some cases, such as this one, you can move the filename to the arglist, as in:
some_command and its args file
Just another Useless Use of cat.
Dangerous Backticks
A special idiom to pay attention to, because it's basically always wrong, is this:
for f in `cat file`; do
done
Apart from the classical Useless Use of Cat here, the backticks are outright dangerous, unless you know the result of the backticks is going to be less than or equal to how long a command line your shell can accept. (Actually, this is a kernel limitation. The constant ARG_MAX in your limits.h should tell you how much your own system can take. POSIX requires ARG_MAX to be at least 4,096 bytes.)
Incidentally, this is also one of the Very Ancient Recurring Threads in comp.unix.shell so don't make the mistake of posting anything that resembles this.
The right way to do this is
while read f; do
done < file
/tmp/moo
/etc/passwd
and xargs would see two file names here. Changing the record separator to ASCII 0 means it's now valid for a file name to span multiple lines, so this becomes a non-issue.
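The newline-in-filename hazard described above can be demonstrated directly. GNU or BSD find/xargs are assumed for -print0; counting NUL bytes stands in for counting names.

```shell
#!/bin/sh
# One of these two files has a newline embedded in its name.
d=$(mktemp -d)
touch "$d/plain" "$d/has
newline"

# Newline-delimited output: the embedded newline splits one name in two,
# so we count 3 "lines" for 2 files.
lines=$(find "$d" -type f -print | wc -l)

# NUL-delimited output: one NUL terminator per name, so counting NUL
# bytes gives the true file count of 2.
names=$(find "$d" -type f -print0 | tr -cd '\000' | wc -c)

echo "newline-delimited saw $lines names; NUL-delimited saw $names"
rm -rf "$d"
```

This is exactly why `find ... -print0 | xargs -0` is safe where `for f in \`cat file\`` is not: ASCII 0 cannot appear in a file name, so it is the only unambiguous separator.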
Answer from 1994: (Score:2)
I don't know why, but when I read your post, I immediately thought of this [foad.org] thing.
Although it looks like a complete joke, there is a lot of truth in there.
Definitely harder the more different UNIXes (Score:2)
I've been sysadmin at various points for a small cluster which has had up to 6 different UNIXes:
Digital UNIX
HP-UX
Linux
SunOS 4.1.x
Ultrix
Irix
Now, I was able to manage each of these pretty OK; Unixes *are* alike. However, getting patches and whatnot differs for each arch. So crudely, I would say:
SysadminWork = A * (number of UNIXes) + sum_i(B_i * number of machines_i)
where A is a very big constant, i is the index of each UNIX, and B_i is a small constant: the marginal extra effort to maintain one more machine of type i.
What I mean is this: for each UNIX, you have to do a fairly large amount of research and effort to learn and acquire materials and knowledge for things like upgrades. Having done that, it's easy for you to maintain another UNIX box of that type: the cost of each extra machine is low, and you can do things efficiently via scripts.
So the least wasteful way to use your sysadmin is to have one arch/OS. In my case, my life became progressively easier as I got rid of UNIXes and concentrated on running Linux only.
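Plugging toy numbers into that formula shows why the per-UNIX term dominates. The constants are purely illustrative, not measured values.

```shell
#!/bin/sh
# SysadminWork = A * (number of UNIXes) + sum_i(B_i * machines_i)
# Illustrative constants: A (per-flavor overhead: patches, docs, quirks)
# dwarfs B (marginal cost per extra box of a known flavor).
A=200   # hours/year to stay current on one flavor
B=5     # hours/year per additional machine of that flavor

one_flavor=$((A * 1 + B * 30))    # 30 machines, one UNIX
six_flavors=$((A * 6 + B * 30))   # the same 30 machines over six UNIXes

echo "one flavor: $one_flavor hours/year"
echo "six flavors: $six_flavors hours/year"
```

Same hardware count, roughly four times the work under these assumptions: the machine term is identical in both lines, and only the flavor term moves.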
There is only one problem (Score:2, Informative)
Overall costs of diversifying (Score:2)
been there, some advice . . . (Score:2, Interesting)
We didn't have the luxury of 2, no . . . we had somewhere between 10 and 20 different versions of operating systems, that is if you include different revisions, etc.
We had everything from SCO to Solaris to NT 4 and 2000; it was nasty.
The company had bought out a bunch of little ISPs and just threw all their boxes in the racks and made us try and get 'em all on the network.
The admins of those ISPs were fired in the buyouts, so of course half of 'em had no passwords, and a bunch had all kinds of nasty little quirks.
I would say stick with no more than 2 versions at a time, maybe 3.
Different distros have their strong points and weak points, so balance it that way. There is not much of a learning curve unless you have like Solaris and Redhat and BSD in the same building.
Then you start forgetting which system commands work where, because you log into 'em so frequently to do different things.
It's really not an issue of learning curve; it's more an issue of annoyance.
The best recommendation I have is to write scripts for even the simplest tasks; that made things so much easier for us in our situation.
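As a flavour of the "script the simplest tasks" advice, here is a sketch of a portable disk-usage check; the 90% threshold is an arbitrary example, and `df -P` plus awk should behave the same on all the Unices named above:

```shell
# Flag filesystems above a usage threshold, portably.
over_threshold() {
  # reads `df -P` output on stdin, prints mount points above $1 percent full
  awk -v t="$1" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > t) print $6 }'
}
df -P | over_threshold 90
```

Once it lives in one script, the same check runs unchanged on every box regardless of vendor.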
Admin attitude saves or sinks you (Score:2)
Easily the number one cost is admin time to learn all the tricks necessary to get the job done.
Possible savings. (Score:2)
There are potential savings as well:
1. Better-rounded employees will be better able to assess the benefits of subtle differences in technologies for different applications.
2. Ability to attract good people with established skills on one platform who want the chance to transfer those skills to another platform.
3. The *ability* to scare vendors, when necessary, into giving you a better deal because you already have in-house expertise on competing systems. This can be very valuable when negotiating upgrades, new systems and renewed maintenance contracts. Just be careful when and how you wield it: if you are too heavy-handed, they may decide they are going to lose you anyway and try to milk all they can from you during the transition. It is probably best as a subtle threat, wielded when trying to do a deal at the end of a tight quarter, when their sales team is driven more by short-term revenue targets than by strategic pricing concerns.
Re: (Score:2)
UNIX geeks perspective (Score:4, Interesting)
Knowing that (as an example) AIX has a pretty self tuning kernel, that Solaris has a modular kernel, and that UnixWare needs a recompile (relink) for any minor changes forces the admin to think about the operating system instead of just drooling on the keyboard.
The biggest differences are still SysV vs BSD. Understanding those is vital in a mixed OS environment. Beyond that, there are usually differences in disk layout (and filesystems), but they just add to the rich diversity that is my favourite OS.
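One way to live with the SysV-vs-BSD split is to hide it behind a small dispatch function; a sketch (the command strings are the classic examples, so verify them against your own boxes, and note SunOS 4 was actually BSD-based while SunOS 5/Solaris is SysV):

```shell
# Return the "show all processes" invocation for a given (or the local) flavour.
ps_all() {
  case "${1:-$(uname -s)}" in
    Linux|FreeBSD|NetBSD|OpenBSD|Darwin) echo "ps aux" ;;  # BSD style
    SunOS|HP-UX|AIX|IRIX)                echo "ps -ef" ;;  # SysV style (SunOS 5+)
    *)                                   echo "ps -ef" ;;  # default guess
  esac
}
ps_all SunOS
ps_all FreeBSD
```

The same pattern extends to printing (lp vs lpr), package queries, and the other per-flavour differences.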
At my work, we are big users of Solaris, but because we develop software for multiple platforms, I've also had exposure to AIX, UnixWare, Sequent Dynix/PTS, HP-UX and DRS/NX. These days we've dropped Dynix/PTS (EOL anyway I think), HP-UX (too expensive for our customers), and DRS/NX (dead?) but we're looking to port to OpenUnix 8 and Red Hat Linux, so things are still pretty mixed. I just think it's a shame that I don't get to work with HP-UX and that Unixware is dying (yes - I like it!).
We also port to NT/2000, which is good to compare - it's a nightmare to work with when used to UNIX.
On running a homogenous network (Score:3, Informative)
At Carrier1 we had FreeBSD, Red Hat & Debian Linux, two releases of Solaris, HP-UX, even GNU/Hurd and Mac OS X (well, on *my* system).
The only problem I've ever had is the fairly trivial (?!) one of getting the command flags right: stuff like the 'ps', 'route', 'ipchains', 'ipfw' and 'ifconfig' commands' syntax being different, the different flags for package management tools, that sort of thing.
I quickly came to realise that it's not possible to remember all the flags for all programs, or the best way to do something on a particular system, if you are busy all the time; things just seem to seep out. This happens if you are spending lots of time programming, in meetings or working on large projects, in which case you might not touch one type of system for months (until there is a problem with it), at which point you find yourself quickly reading man pages and referring to Google a lot. All you need to do is remember what's important, especially the things you'll need for troubleshooting, and not worry about the rest. It's enough to know about tools like Solaris's 'ndd' and Linux's 'mknod' and what they do; if you need to remember exactly how to use them in a given instance, you can refer to man pages, O'Reilly books or Google (which I often find the fastest).
Staying current, reading Freshmeat every day, installing and configuring new Unixes and new & unfamiliar packages regularly, being on mailing lists and reading Slashdot are good ways to stay up to date; the more you know, the less likely you are to run into something completely unexpected. If you're resourceful (which you should be as a Systems Engineer), the only real problems arise when you don't even know where to start; everything else is a piece of cake.
Basically, if you really know unix (and are not just a Red Hat Linux or Solaris flunky who has convinced themselves they are Gurus while they still run Windows 2000 day to day) then you won't have any problems.
Oh, and making lame excuses like 'well I need Windows for work stuff' and 'they won't let me run Unix on my desktop' DO NOT wash - they are just that - excuses for lameness.
I have been to job interviews and been introduced to guys who called themselves (literally!) 'Unix Gods', yet they had only ever used Solaris; if you have any of those, you are in deep shit right now. [Needless to say I ran a mile!]
Most people fall somewhere in the middle of those two extremes. You'll probably only have one or two decent guys, if you're lucky, though if you need to ask, you are very possibly in trouble already!
YMMV.
Re:Only 2 Versions Of Unix (Score:2, Interesting)
Re:Only 2 Versions Of Unix (Score:2)
Re:Unix Flavors (Score:2)
Open Source is a philosophy of software distribution, not a standard for setup and maintenance.
Try switching from Mandrake to SuSE without pulling out a few hairs relearning where all the init scripts are kept and how the system is configured and maintained. Then jump to a BSD for shits and giggles.
Re:Unix Flavors (Score:3, Informative)
Going from SuSE to LFS wasn't as bad as you might think. The main difference that I can recall is that the scripts that control various services live in /sbin/init.d on a SuSE box, but /etc/init.d on an LFS box.
The biggest difficulty is dealing with the automated config software that most distros use. I can usually set up most things on a SuSE box through YaST, but I haven't figured out whatever config utilities are used by the one Redh*t box at work that I haven't nuked yet. (Then again, I ran SuSE at home for a couple of years. I ran Slackware before that, and SLS before that. I've never installed Redh*t or had to deal with it prior to my current job.) I'd still rather tweak the different config files manually for the few apps that need adjustment, though; it's usually easier to dial in the exact setup you want that way. That's why most of the Linux machines I control run LFS now (the only exceptions are the aforementioned Redh*t box and an ancient 486 print server that was set up with Slackware because I didn't want to wait for that slug to build LFS).
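A tiny probe for the init-script location differences mentioned above; the directory list is illustrative, not exhaustive (SuSE historically used /sbin/init.d, most others /etc/init.d or /etc/rc.d/init.d):

```shell
# Print the first rc-script directory that exists, so the same
# admin script can find its service hooks on any of these distros.
find_initdir() {
  for d in "$@"; do
    [ -d "$d" ] && { echo "$d"; return 0; }
  done
  return 1
}
find_initdir /etc/init.d /sbin/init.d /etc/rc.d/init.d
```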
Re:Unix Flavors (Score:2)
Re:Value Of A Good Admin (Score:3)
You have a skewed view of the UNIX admin world. In my workplace (small investment firm), we have 6 UNIX admins who jump at the chance to learn something new. They'll dive into Linux, then realize OpenBSD is better for their task because of its security and inhale the documentation, all while keeping a fleet of Solaris servers running for production work.
If one of them does not know how to program, he picks up a book and starts writing python in a couple of days.
It sounds like you and I are at the extreme ends of the UNIX admin experience, because my situation sounds so opposite to yours.
To the original poster: what kind of workplace is yours? If your UNIX people jump at new stuff, they'll soon figure out if the new system can be successfully integrated and how long it will take.
Give me a CLI, or give me death! (Score:2)
I have a medium-sized Tivoli installation, and it has not really reduced the number of SysAdmins we have. Tivoli still has some bugs in it, and some of the modules (i.e. software distribution) do not work consistently. Furthermore, Tivoli has problems running large scripts on remote servers.
In fairness, it does a very good job of monitoring systems, and it takes care of the more mind-numbingly dull and repetitive tasks for us, but we still need to have skilled UNIX admins around.
Re:Forget the command line (Score:2)
Using what you ask? Korn Shell code.
This includes automated, unattended reinstalls, backups, printer selection and setup, software installs, X configs. Everything, and we were bored.
So Tivoli, or Unicenter? yeah, it's doable with them, but it's also doable with shell scripts. And a hell of a lot cheaper.
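For flavour, here is the sort of thing the shell-script approach looks like for one item on that list, a dated backup with simple retention; the 7-copy limit is a made-up example, and note `xargs -r` is a GNU/BSD extension:

```shell
# Tar a directory into $2 with a date stamp, keeping only the newest 7 copies.
backup_dir() {
  src=$1 dest=$2
  mkdir -p "$dest"
  tar cf "$dest/$(basename "$src")-$(date +%Y%m%d).tar" \
      -C "$(dirname "$src")" "$(basename "$src")"
  ls -t "$dest"/*.tar | tail -n +8 | xargs -r rm -f   # prune old copies
}

# demo against a scratch directory rather than the real /etc
work=$(mktemp -d)
mkdir -p "$work/etc" && echo "hello" > "$work/etc/motd"
backup_dir "$work/etc" "$work/backups"
ls "$work/backups"
```

Drop a call like this in cron on each box and you have rebuilt one small corner of Tivoli for free.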
Re:it depends on the architecture (Score:2)
According to this [microsoft.com], "The POSIX subsystem included with Windows NT and Windows 2000 is not included with Windows XP Professional." It is a separate ($$$) product called Windows Services for UNIX [microsoft.com]. Don't know if this applies to the "server" flavor of XP though. MS can be so inconsistent for a monopoly... I like how they call it "Windows Services for UNIX" when it's really "UNIX compatibility for Windows." Ahh, MS marketing...
Besides the "optional" POSIX API layer (and optional generic utilities), there is NOTHING UNIX-like in Windows.