
How Hard is it to Manage Different Unices? 374

vrmlguy asks: "My workplace has several Unix-based servers, all running the same vendor's OS. We are getting ready to buy another big server, and management wants to get bids from other vendors. However, our staff is only familiar with our current vendor's OS. Yes, I know that any two flavors of Unix are more alike than not, and yes, I know about the Rosetta Stone for Unix that makes it easy to transfer skills. I want to know about the down-side: what's the difference in the cost of operations between a mono-culture and a shop running two or more vendors' OSs?"
This discussion has been archived. No new comments can be posted.

  • by Smelly Jeffrey ( 583520 ) on Friday June 07, 2002 @03:17PM (#3661348) Homepage
    You have a team of mechanics, and for the last 20 years all they have serviced, and driven themselves, are Ford automobiles. Now your boss tells them to jump right in and service Chevrolet autos too. How easy will this change be? Depends on the mechanics and how they've been trained, I suppose.
    • That's a great analogy, but it doesn't fully answer the question. The TOC is going to be higher. You'll have to test programs across the other platform, and possibly find an alternative if said program doesn't work both ways. Some extra training will be needed. It's always more efficient to stay with a single, homogeneous environment. The question you should pitch to your managers is whether the higher TOC will be more or less cost-effective than making the larger initial investment.
      • by Microlith ( 54737 ) on Friday June 07, 2002 @04:11PM (#3661787)
        It's always more efficient to stay with a single, homogeneous environment.

        Hasn't the general consensus on slashdot been that a monoculture is a bad thing?

        Microsoft uses the same reasoning (higher TOC) as a reason to move from whatever blend people use now to 100% Windows...
      • I disagree. Platforms have different strengths. If we say that platform foo is excellent for servers but lousy for desktops, and platform bar is excellent for desktops but lousy for servers, then your TCO will be lower with both than with either alone.

        As long as two different platforms have different strengths, you can reduce TCO by taking advantage of their strengths.

        If you're using win95 for your servers and Red Hat 3 for your desktops, you'd be better off with a monoculture. (And may whatever god you believe in have mercy on your soul!)

        In fact, I'd go so far as to say that effective deployment of technology will never increase costs.
      • TOC? Like a table of contents?

        I think you mean TCO (Total Cost of Ownership)

        Unless you mean table of contents, I guess two vendors would have two different manuals, which would theoretically double the Table Of Contents you have to deal with. And don't even get me started on indices... ;)
    • My experience is that in the 1990s when we ran an all DEC shop we got our DEC prices on hardware down by 10% when we bought a sun box. At another institute we saw a similar effect with SGI.

      The price advantage you see is likely to depend very much on the type of computing you do and the volume. If you only buy 5 machines a year, I doubt that the price break you get by going to a multivendor environment is going to be worth losing binary compatibility for, let alone the admin hassle. If on the other hand you buy 100 machines a year, you should definitely get a second vendor in place.

      The other issue is that the price gap from Sun to Intel is huge. Comparing machines of like performance Intel boxes can be up to a third of the cost. Unless you have a real definite need for the features of a non-Intel platform (and I can't think of many offhand) the cost saving of Linux or BSD can be great.

      I can't think offhand of any reason to have six varieties of Linux around unless you are a masochist of some sort.

  • by 2names ( 531755 ) on Friday June 07, 2002 @03:19PM (#3661359)
    It has always been my opinion that if you have people who understand the concepts and underpinnings of how *nix systems work, the flavor of the OS doesn't matter. People who have a good understanding from an abstract point of view will easily pick up the differences in syntax, location, etc.
    • Right-- all you'd have to worry about in this case would be updates. You're not likely to have the problems you would with an Apple or MS network, where machines can't talk to one another.

      There's something to be said for Unix's adherence to standards, unlike a certain closed source vendor...
    • Also, install your shell of choice, gnu utils (if you like them or know them) and it makes life a lot simpler.

      Don't forget books like Essential System Administration, that list different flavors for every command/procedure.

      That said, it is ALWAYS easier to take care of a bunch of things when they are all the same.

    • by walt-sjc ( 145127 ) on Friday June 07, 2002 @04:18PM (#3661824)
      Yeah, basic management is similar. HOWEVER: maintaining multiple flavors of Unix is quite expensive in terms of admin time and effort.

      Think patches. Now you have to track multiple vendor advisories, handle patch management, what does the patch break, depend on, etc.

      Next comes config changes. No longer can you write a simple script that makes the change on all boxes; you have to support multiple scripts, or write scripts to handle the idiosyncrasies.

      Third comes binary compatibility. What I generally do is build a local set of binaries for all the specialized stuff. They can either be blasted across all systems or mounted via NFS. With multiple versions / flavors of the OS, the work gets doubled, tripled, etc. What used to be a 2-hour task turns into a day-long task.

      Can't forget about security. What you do for one system you have to do differently for another. Different tools, binaries, rc scripts, etc.

      The bottom line is that if you needed 4 people to support your current single OS environment, you may need 5 or 6 or even more when you go multi-platform.

      I ran a shop where we supported Win98, NT4, Win2000, Solaris 7 & 8, Red Hat Linux, FreeBSD, MacOS 9, MacOS X, and a smattering of other OSes for 500 users. This gets non-trivial very fast.
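
The "local set of binaries ... mounted via NFS" approach above can be sketched as a small profile fragment that picks the right tree per host. The /net/tools automount path and its layout are hypothetical, not anything from the post:

```shell
# Build one binary tree per OS/release/architecture, then let each
# host select its own tree at login. /net/tools is a made-up
# automounted NFS path; adjust to your environment.
OSDIR="$(uname -s)-$(uname -r)-$(uname -m)"   # e.g. SunOS-5.8-sun4u
PATH="/net/tools/$OSDIR/bin:$PATH"
export PATH
```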
  • I have been doing SysAdmin for a mere 3.5 years. At work we have a few that I need to deal with every so often: HP-UX, Solaris, AIX, SuSE, and Redhat. Seems like all the techie SysAdmins (myself included) have taken to using the OS they know for a project and then expecting other folks to pick it up ("Oh, it is close enough..."). I wish our company had the foresight to pick a standard - 1 or 2! I would LOVE for the versions to follow the same file placement conventions, command conventions, and system management tools. Maybe, someday....
  • Easy (Score:4, Funny)

    by Anonymous Coward on Friday June 07, 2002 @03:21PM (#3661378)
    What's the difference in the cost of operations between a mono-culture and a shop running two or more vendors' OSs?

    $32,593.12

    Now can we stop with these stupid, inane questions? I would rather read Jon Katz than these awful Ask Slashdot questions of the past 3 months or so.
  • Licensing (Score:3, Insightful)

    by Computer! ( 412422 ) on Friday June 07, 2002 @03:22PM (#3661387) Homepage Journal
    Your biggest expense is going to be training, but your company will probably choose to take it out of your clients' and employees' pockets by doing it "on-the-job". Next up would be licensing fees.

    Unless Vendor B is offering a competitive upgrade from Vendor A's software, it would be much cheaper to negotiate an additional Server and Client license pack from your existing vendor than to enter into a new business relationship with some new vendor. Unless, of course, the new "vendor" is (sigh) Linux.

  • For the users, all of the familiar commands should work fine. But maintaining the boxen will have a cost. For instance, I know how to create a disk partition under both Linux and AIX and can say that the process is totally different. Also, you'll have to keep two different platforms up to date with the latest patches. And don't forget your apps, which probably won't have binary compatibility. You'll have to make sure that all of the apps that you wish to run are ported to your new Unix flavor of choice.
  • by Anonymous Coward on Friday June 07, 2002 @03:23PM (#3661396)
    Not a flippant post.

    The quality of Unix sysadmins has declined so much over the past decade that what passes for a sysadmin right now is what I used to call "an operator".

    We have 5 unix sysadmins (major transportation company). Not one of them could write a shell script if their life depended on it.

    They insist on doing everything by hand and then complain there are no automated tools available to them. Their definition of an automated tool really means "graphical front end to those grubby text commands".

    They have no appreciation for the modularity of unix, and they look longingly at Windows servers.

    Meanwhile, they're all getting paid twice what they're worth because apparently as dumb as the Unix sysadmins are, the NT ones are apparently on a different evolutionary scale where "rock" is considered the most intelligent life form.

    So my point is that getting these sysadmins to switch won't happen. They'll piss, bitch and moan about the opportunity to learn something to enhance their skills, then complain the application is screwing up "their" servers.

    If only ASPs would take off, my life would be much better, because sysadmin skills suck so bad that black holes pale in comparison to the event horizon of these so-called admins.

    • The quality of Unix sysadmins has declined so much over the past decade that what passes for a sysadmin right now is what I used to call "an operator".

      That's because times have moved on. Sysadmin is no longer considered a cool job. Cool dudes, though, still have cool jobs. Somewhere else.
    • by medcalf ( 68293 ) on Friday June 07, 2002 @04:04PM (#3661739) Homepage
      We have 5 unix sysadmins (major transportation company). Not one of them could write a shell script if their life depended on it.

      Hire better admins. They are out there, and a lot of them are unemployed right now. Any problem in an organization that persists past a few days or at most a few weeks is a management problem.

      • Easier said than done, my friend. I consider myself a DAMN good UNIX admin. For the past 6 months I've had these monkeys test my knowledge in interviews, I get every one of their stupid questions right, and then they decide not to hire me. And when I meet the admins they have, and they interview me, I'm appalled. Wait... YOU are interviewing ME?? When they find out I've been admining UNIX practically as long as they've been alive, they get scared. They don't WANT a killer admin. It stirs up the pot. I'm going to change my resume, say I have 1 year of Solaris, and see how things change.
    • by Amarok.Org ( 514102 ) on Friday June 07, 2002 @04:13PM (#3661800)
      We have 5 unix sysadmins (major transportation company). Not one of them could write a shell script if their life depended on it


      Seeing as how I'm a senior admin (who *can* script), in a team of 5, for a major transportation company, I wonder if you're my boss? *grin*

    • Not all of us are dead. Some, like me, are unemployed and looking for our next job. Given your description, I'm sure I could do better than two or three of your existing admins. Where are you located?
    • Amen brother... you had the balls to say it. I have interviews now, and they ask me, "Do you know shell scripts?" I'm like, "DUH! Of COURSE!" Then I realize that most admins aren't admins. After being in this industry for 15 years, and seeing ex-MCSE-cum-Sr.-Unix-Admins who are 20 with NO concept of UNIX as a thought process, it really chaps my ass.
    • Well, there's a few of us old dinosaurs out there... but the companies look to hire "certified" admins who work cheap.

      Good old seat-of-the-pants generalists often are overlooked in favor of the latest whiz-bang rookies straight out of the memorize-for-the-certification-test prep school.

      I was a trainer doing sysadmin training for one of the big iron multiprocessor Unix boxes -- and in '93 you could see the beginning of the end as folks who were basically operators became sysadmins.

      Bill Pechter
  • by PsychoSpunk ( 11534 ) on Friday June 07, 2002 @03:23PM (#3661397)
    So, is it just me or does it bother you that the "Rosetta Stone" states "This custom drawing feature requires IE 5"?
  • by bill_mcgonigle ( 4333 ) on Friday June 07, 2002 @03:23PM (#3661399) Homepage Journal
    The same principle applies to natural and computer languages - the more you know, the better you understand the fundamentals.

    Sure, you might know how to do x,y,and z on your Solaris box, but once you understand how to do it also on RedHat and AIX, you'll understand much better how it works conceptually. Then when you get an HP box, it'll be pretty easy.

    Of course, don't run killall on HP. :)
    • by Soko ( 17987 ) on Friday June 07, 2002 @03:46PM (#3661612) Homepage
      The same principle applies to natural and computer languages - the more you know, the better you understand the fundamentals.

      How about knowing multiple OSes is good for you? Same logic applies. I "speak" Windows, *nix, MacOS, IOS and even some VMS. Now, I'm not afraid of any computer - I know I can figure out what to do with minimal info available.

      If everyone didn't care so much about what the OS was because they were afraid of something new and just chose the right tool for the job, "vendor lock-in" might go away. A whole lot more understanding would come about in the IT field, in any event.

      Soko
        • Many companies I've come in contact with won't believe you if you say you know more than 2 UNICES. They can't comprehend it. If they are a Sun shop, and their admins know nothing but Solaris, they think you're lying. It's beyond their comprehension. I think having multiple UNICES on my resume actually HURT me more than it helped. I've been a Sr. Admin, NOC Director, etc. I'm applying for Jr. admin jobs now. I can't even get those.
  • It is fairly easy to transfer skills from one version of UNIX to another.

    Plus, it is great for the resume. When you get tired of this job, or get fired, laid off, or transferred, you will find it much easier to find another job.

    Some of the differences between different versions of UNIX include:

    BSD or AT&T based
    Disk tools
    Administrative interfaces and GUIs (SAM, SMIT, etc)
    Startup / Shutdown scripts (rc.d vs init.d)
    User management
    Included tools ("top" is a big one)
    Backup and recovery (hp includes fbackup / frestore)
    X-Windows (CDE, VUE, etc.)

    Some of the similarities include:

    user land tools (ps, ls, find, etc)
    Directory structures are slowly becoming the same
  • Caveats (Score:5, Insightful)

    by medcalf ( 68293 ) on Friday June 07, 2002 @03:24PM (#3661411) Homepage
    Generally, this is not difficult to do, as long as your admins understand the basics of UNIX. (Vendor-centric admins sometimes don't, as they get dependent on their vendor's tools.)

    The problems can arise with:

    1. vendor-centric admins who aren't willing to learn
    2. different service contracts creating differing expectations of uptime between systems
    3. added costs from maintaining multiple service contracts and training on multiple platforms
    4. finger-pointing, if the systems interact
    5. rewriting in-house tools which are needed on the new platform, but were not written generically before
    6. 3rd party licensing costs may differ (if you are licensing the same product on both OSs)
    7. dilution of expertise, since your admins will have to be more generalists (this is often overbalanced by the expansion of perspective in problem-solving that comes from broader experience)

    Other than that, I can't think of anything off the top of my head which would make this hard. Generally, it is not a problem to do.
  • It depends on how you are using each platform. The biggest problems I have seen deal with proprietary features in the different Unices. For instance, I worked as a Solaris admin using NIS+, and while it supported authenticating other Unices that could only use NIS, it didn't work well. But that was years ago.

    Things that help include creating branches in your login scripts (.profile or .cshrc) that set your preferences on the different boxes using uname.

    There is a good O'Reilly book called "Unix for Oracle DBAs" that is a really useful cross-Unix reference; you should consider picking it up.
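
The uname-branching trick reads roughly like this in a shared .profile. This is a sketch; the per-OS settings below are made-up examples, not recommendations:

```shell
# Branch on the OS name so one login script works on every box.
# The settings in each branch are hypothetical examples.
case "$(uname -s)" in
  SunOS)
    PATH=/usr/xpg4/bin:$PATH      # prefer the POSIX tool set on Solaris
    ;;
  AIX)
    PATH=/usr/local/bin:$PATH     # locally built GNU tools, if installed
    ;;
  Linux)
    alias ls='ls --color=auto'    # GNU ls understands --color
    ;;
esac
export PATH
```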

  • There are a lot of variables you haven't talked about, e.g.:

    1) What sort of applications do you have running on these servers and how interoperable are they? Does it matter how interoperable they are?

    2) And further, are those apps dependent on that vendor's Unix?

    3) How much resistance is there from the staff to learn something new?

    Assuming the above aren't a problem, then it shouldn't really matter. Go open-source and save a buck or two.
    • MANY companies are requiring experience with things like EMC, Veritas, Oracle, WebLogic, WebSphere, and a plethora of other crap. If you are a good unix admin, you WILL have worked with at least ONE of these tools. The funny thing is, an interview I had yesterday wanted me to know ALL those tools mentioned above. I have used them all, but not at the SAME TIME! I honestly think the job requirements are written by the present admin, knowing NOBODY will have them, thus ensuring he keeps his job.
  • Value here? (Score:3, Informative)

    by greygent ( 523713 ) on Friday June 07, 2002 @03:25PM (#3661425) Homepage
    I'm going to try not to sound like a troll here, but this Ask Slashdot question seems like complete rubbish.

    Could the Slashdot folks be a little more discriminating in their choice of questions, please? The most entertaining/thought-provoking parts of this story seem to be the idiot troll posts. This is hardly a thought-provoking or difficult question to answer/figure out with even the most minuscule of job skills.

    To answer this silly question:

    The difference is: a lot, due to training/familiarization, support contracts, possible hardware differences, etc. DUH.
  • It works like this . . .

    You learn one flavor of UNIX, get to know it inside & out. And because of the shop, the job market, whatever, you start working with another flavor. And it will look weird because it's different.

    Sometimes the differences are due to developers' choices, sometimes they fix problems that existed in the first flavor you encountered, sometimes they cause problems you didn't have in the first flavor. And sometimes what's weird about this new flavor is because the guy who set the computer up botched things.

    Also, the longer you know one flavor of UNIX, the more likely you are to call any new flavor you encounter ``braindead".

    Except when it comes to SCO. Trust me on that one.

    Geoff
  • by swagr ( 244747 ) on Friday June 07, 2002 @03:26PM (#3661437) Homepage
    What's the difference in the cost of operations between a mono-culture and a shop running two or more vendors' OSs?

    How much of a raise are you asking for?
  • by jimhill ( 7277 ) on Friday June 07, 2002 @03:26PM (#3661438) Homepage
    Sure, an environment with only one vendor's OS deployed is easier for the admins to handle. However, if a problem develops, that problem will affect EVERY SINGLE MACHINE you have. Don't lose sight of that in your zeal to minimize the admins' workloads.
    • if a problem develops, that problem will affect EVERY SINGLE MACHINE you have.

      And the solution will solve the problem on EVERY SINGLE MACHINE you have. Therefore, if a problem IS to develop, homogeneity is preferred, because you're going to have to solve it anyway.
  • One major consideration will be the service contract with the vendors. With 2 vendors, you'll buy 2 service contracts. You can have all the best sys admins you want, but I'm sure you'll need at least a minimal service contract for a commercial Unix. Can adding a second contract add a lot to overall cost? Hard to say without any details of the company, but I'd guess adding a basic service contract from the new vendor will significantly add to TCO.
  • Software costs (Score:3, Interesting)

    by Krieger ( 7750 ) on Friday June 07, 2002 @03:27PM (#3661465) Homepage
    Are what's going to kill you. Having to support software and software interoperability between different platforms can be a serious pain. A mono-culture is easier when dealing with software. However if you are presented with a significant enough savings from another platform, consider it.

    Your admins, if they're any good, should be able to adapt to a different UNIX easily. Yes, there are differences, but not ones that should trouble an experienced admin any longer than it takes him to read a couple of man pages.
  • If you are happy with the vendor you are with, and everyone likes working on their product, it makes NO sense to switch vendors to save a couple of bucks. The technicians will spend (read by management as: waste) their time learning the ins and outs, dos and don'ts of the new OS. Not to mention possible incompatibilities, and more wasted time futzing with network integration, plus warranty and support calls (Sun: It's your IBM box at fault! IBM: It's your Sun box at fault!). If you are happy with the platform you are on, stay with it!

    Explain that to management, and I'd be very surprised if they didn't continue with the original vendor.

  • by The Fat Guy ( 12582 ) on Friday June 07, 2002 @03:30PM (#3661484)
    Statement of Bias: I "administer" several UNIX OS versions (Solaris, IRIX, Linux, occasional HP-UX), but in an isolated network with no outside connections (so very little emphasis on security).

    Two factors come to mind:

    No matter how close the systems are, you will still "lose" time to training (either formal or OJT) requirements for the new system. This may actually be a benefit for your staff (wider perspective, more to put on Resume).

    Depending on how much focus is placed on security, you may end up doubling the time required to track vulnerabilities and install patches. Again, this may be an advantage as well since a single-os shop tends to have equal vulnerabilities on all systems. In a multi-os shop an attacker will have to work harder to get control of everything.
    • This is true, but the old adage goes "don't put all your eggs in one basket". This can be applied to system security too. Multiple different OSs means it is less likely they will all be hit with the same security hole at the same time. That may not be the case every time, but what the heck, nothing's perfect; just look at M$ Windows ;)

      • But when it comes to key services, a single vulnerability is often found to affect multiple vendors' products. A bind or sendmail vulnerability, for example, is going to hit everyone.
    • more to put on Resume

      Yes, making their employees more attractive to other companies is always an organization's first priority :)
  • by bluGill ( 862 ) on Friday June 07, 2002 @03:30PM (#3661490)

    It is easiest to manage servers from only one vendor. Unix makes it easy to transfer skills, but here is the contradiction: it is easier to manage servers from many different vendors and versions than to manage just one server that is different from the rest.

    When you have all OSes the same it is easy because everything automatically transfers. When all are different it is harder because you always have to remember the correct incarnation of each procedure, but because they are all different you get in the habit of looking it up each time. When all are the same except for one machine, you forget that on that one machine everything is different, and you apply the wrong incarnation (often with disastrous results; see discussions of killall Linux vs. HP-UX on comp.risks). Because of this, the one different machine will often get [invalid] complaints due to these differences.

    If you can't stick with one vendor, then you should go with many so you are in the habit of checking the differences. At the very least get some Linux (Debian, Red Hat, SuSE) and BSD (Free, Open, Net) machines in house now, and use them for production. You need to make sure that your admins are used to subtle differences. The other alternative is to just stick with one vendor, but not only do you pay more, your admins become lower quality as they learn only one system. (Think of it as a resume builder; you want different systems on your resume!)

    • see discussions of killall linux vs hpux on comp.risks
      I recall the admin at my college coming into the advanced lab (SGI Indys, "advanced" when they were new) one evening, and calling up the man page for 'killall' on my terminal. "Read the second paragraph there" he says. It read that typing killall without arguments will attempt to kill all processes not in the current group. "Guess what I just fat-fingered on Elvis" (our main server)

      Guess folks at SGI never heard of the "path of least astonishment", such as printing a usage message if there's no arguments. Then again you could argue that it's not very astonishing if typing 'killall' really does kill *all*.
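
Given the killall trap described above (a process-killer on Linux, a kill-everything shutdown helper on HP-UX, Solaris, and apparently IRIX), one habit that stays safe everywhere is to never type killall at all and resolve PIDs explicitly first. A sketch; the httpd process name is just an example:

```shell
# Portable alternative to 'killall httpd': list processes, match the
# command name in the last column of 'ps -e' output, then kill only
# those PIDs. Does nothing if no such process is running.
pids=$(ps -e 2>/dev/null | awk '$NF == "httpd" { print $1 }')
for pid in $pids; do
    kill "$pid"
done
```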
  • by drenehtsral ( 29789 ) on Friday June 07, 2002 @03:33PM (#3661510) Homepage
    It seems to me that the biggest cost is in sysadmin time. I figure it this way: at work I use a few UNIX systems. We have one machine running IRIX, a couple running BSD, and one running Linux. Now, when I write a script on one of the BSD machines, it works on all of them, but it may or may not work on the Linux machine, and certainly won't work on the System V-esque IRIX machine.

    Now, if your sysadmins employ a lot of scripts, figure you'll have to spend twice the time maintaining them if you have two different platforms that are not fully compatible. You can minimize this if you stick to POSIXly correct scripting, but you'll never completely eliminate it.

    The same goes for custom programming. For instance, if you're running everything on BSD, and you want to take on a Sun machine running Solaris, there may be some issues with the occasional socket call that Sun implements differently from the rest of the world.

    So, the more custom scripting/custom apps you have, the more time your sysadmins will have to spend maintaining/porting/testing the stuff.
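
Two concrete instances of the portability issues described above. This is a sketch of "POSIXly correct" habits, not an exhaustive list:

```shell
# 1. 'echo -n' is not portable: some sh implementations print the
#    '-n' literally. printf behaves the same everywhere.
printf '%s' "no trailing newline"

# 2. '[[ ... ]]' is a bash/ksh extension; the plain '[' (test)
#    utility is POSIX and works under every vendor's /bin/sh.
f=/etc/hosts
if [ -r "$f" ]; then
    echo "$f is readable"
fi
```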
    • by Neil Watson ( 60859 ) on Friday June 07, 2002 @03:51PM (#3661642) Homepage
      I think the key here is to try and have common tools for all your systems. Shells vary from system to system. Even tools vary: GNU grep is different from Solaris grep, as is tar. Remembering the various differences can be time consuming.

      I think if you were to ensure that all of your systems had the same shell installed, or the same version of perl and selected modules, you'd save a lot of time.
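
One way to pin down "the same tools everywhere", as the parent suggests, is to detect which grep a script actually got at startup and then write the rest of the script against that one behavior. A sketch; the fallback install path is an assumption:

```shell
# GNU grep answers --version; most vendor greps do not. Pick one
# known grep up front instead of remembering per-platform quirks.
if grep --version >/dev/null 2>&1; then
    GREP=grep                     # GNU grep is the system grep
else
    GREP=/usr/local/bin/grep      # hypothetical locally installed GNU grep
fi
"$GREP" -c '' /etc/passwd         # e.g. count lines; works with either grep
```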

  • by why-is-it ( 318134 ) on Friday June 07, 2002 @03:38PM (#3661548) Homepage Journal
    From personal experience, I found that backup and recovery can be quite different depending on the flavours involved. For example, I back up my AIX systems with /usr/bin/mksysb, ftp the file to a system that is connected to a tape library and copy the image to a 4mm tape. I can do a bare-metal recovery from that tape to any equivalent or better RS/6000 in about an hour or so and have an exact clone of the original system as of the date the backup was taken. In this regard, AIX rocks.

    My Solaris backup and recovery strategy is not as elegant in that I make backup tapes via /usr/sbin/ufsdump, but restoring a system from tape is more involved, and I cannot restore that tape on a different class of Sun hardware.

    I do not expect both to work identically, but there are some significant differences between the two.
    • by sheldon ( 2322 )
      Most companies I have worked at or know people at go with a third party backup solution such as the ones from Tivoli or Veritas.

      Makes your backup/recovery fairly consistent across different products, plus everything can then be managed from a central console.

      • Most companies I have worked at or know people at go with a third party backup solution such as the ones from Tivoli or Veritas.

        We have looked at both TSM and NetBackup. Both are an improvement over my current Solaris backup and restore process, but neither are as nice as what AIX does out of the box.

        It was an interesting experience talking to the vendors: the Veritas guys claimed their product was better than TSM, and gave us four or five reasons why. The IBM people claimed that their product was better than Veritas for the exact same reasons. Both products are pretty good, and each has strengths and weaknesses.

        AIX has its faults, but +5 Insightful to IBM for mksysb.
  • by Myrv ( 305480 ) on Friday June 07, 2002 @03:39PM (#3661563)

    The short term costs to retrain staff for the new system will be higher, but the long term benefits will definitely outweigh them. Once you build a multi-OS capable IT department, the cost to add new hardware later on becomes significantly less. By not being locked into one OS (and one vendor) you have the freedom and flexibility to choose the best solutions for future problems (as well as hunt for the lowest cost). The smart thing to do is diversify your IT shop as much as possible so that you can ensure you always have the right tool for the right job. No single vendor or OS can provide all the answers, regardless of what IBM/Sun/Microsoft may try to tell you.

    • > you have the freedom and flexibility to choose the best solutions for future

      One small problem with this is if the person actually making the decisions wants to open up the possible vendor pool for political/economic reasons rather than technical ones. For this reason, you can end up with multiple *nix boxen from multiple vendors, none of which were selected because they are technically more suitable for their individual tasks, but rather for politico-economic reasons.
    • It's NEVER cheaper to be multi-platform. More flexible, yes. Right platform for the app, yes again. Cheaper, no. The overhead of managing multiple flavors is large. It matters not who you have for sysadmins, or how capable they are. Patch management, change control, binary compatibility, backups, security management, OS upgrades, service contracts, hardware compatibility, etc. are all issues that cost you more. You end up having to do the same work over and over for each flavor. Been there, done that.

      A TRIVIAL example would be changing your IP address space. Each flavor maintains its config in a different way. It doesn't matter that you know how each one differs. You won't be able to write one simple script that just makes the changes. (It would be a complex script if you even chose to do it via script, or you would write a script for each platform. You would probably end up doing it by hand.)

      Another trivial example would be initial system load. With Solaris, for example, you set up (and maintain) a JumpStart server. When you get a new machine, an hour later you have a complete environment set up with all your customizations, up-to-date patches, etc., while hardly lifting a finger. Now add IRIX, Red Hat, Debian, FreeBSD, AIX, HP, OSX into the mix. See the problem?

      The list of examples of all the additional costs associated with maintaining multiple flavors is virtually endless.

      You basically have it backwards. It's cheaper in the SHORT run. You can shop based on price. Initial setup isn't that bad. It's the LONG term maintenance costs that get you. It's ALWAYS easier / faster / cheaper to only have one platform to maintain.

  • It isn't too bad. (Score:3, Insightful)

    by pmz ( 462998 ) on Friday June 07, 2002 @03:46PM (#3661608) Homepage
    Just keep text-file logbooks as you learn new things about the different UNIX implementations. Keep them in a hierarchical directory tree on an NFS file system or web server somewhere, and name the directories and files consistently by OS and topic (topics such as DNS, network booting, firmware, SCSI naming conventions, package management, etc.).

    I do this at home to juggle Solaris, OpenBSD, and Linux, and it works well. If I forget how to set up DNS under Solaris, I just go to <base_dir>/Solaris/8/DNS_Setup.txt, for example.
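    A tiny helper makes that tree searchable, too. (notegrep is an invented name; the layout is just the <base_dir>/<OS>/<version>/<topic>.txt scheme described above.)

```shell
#!/bin/sh
# Hypothetical helper for a <base_dir>/<OS>/<version>/<topic>.txt
# notes tree: list every note file mentioning a keyword.
notegrep() {
  grep -ril "$2" "$1" 2>/dev/null
}

# Demo against a throwaway tree:
base=`mktemp -d`
mkdir -p "$base/Solaris/8" "$base/OpenBSD/3.1"
echo "in.named setup notes" > "$base/Solaris/8/DNS_Setup.txt"
echo "pf rules"             > "$base/OpenBSD/3.1/Firewall.txt"
notegrep "$base" named
```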

    Also, install all of the on-line documentation you can and have it network-accessible. For example, when the man pages aren't detailed enough, on-line Solaris Answerbooks can save the day.

    Also, keep well-organized bookmark lists for useful websites, such as http://docs.sun.com or http://sunsolve.sun.com, that cover your particular UNIX.

    Having any number of UNIX implementations really isn't unmanageable (unless they have broken network protocol implementations). The key to success is documentation and more documentation (and unambiguously sharp sysadmins). On that note, if you don't have faith in your system and network administrators, you should just give up and stick with one OS, since no amount of documentation helps a truly stupid human.
  • Aside from having to watch two patch lists, and maintain a skillset for two platforms, there's another large consideration to be made.

    Money.

    You obviously already know that managing support contracts from multiple vendors is going to suck. I would also recommend taking a long hard look at ongoing support charges.

    For example, we have both HP/UX and Sun platforms where I work. We have both servers and workstations. For the workstation support contracts on similarly sized machines, there was a world of difference in cost.

    The annual fee for an HP C240 workstation was somewhere between $2500 and $3000. The same annual charge for a Sun Ultra of equal speed was between $1000 and $1500. Multiply that by the number of workstations you have to maintain, and it can add up very quickly.

    The up front cost typically isn't where they get you. It's on the back end. I would research the back end on all of the platforms you are considering very carefully before making any final decisions.

    Hope that helps a little.

  • by Chagatai ( 524580 ) on Friday June 07, 2002 @03:50PM (#3661637) Homepage
    As Greek was the telltale language that helped greatly with the Stone, I would have to say that HP-UX is about as close as you will get. I work on a daily basis with the major business-tailored Unices: AIX, Solaris, HP-UX, and Linux. As all of these other posts have said, the commands to perform one action on one OS differ greatly from another's. But I have noticed that HP-UX seems to be an amalgam of the other three Unices listed above.

    For example, on Solaris (without Veritas Volume Manager), you have to "carve out" your disk filesystem by filesystem, and work with devices in /dev/dsk/cAtBdCsD format. On AIX, the concept is totally different with Logical Volume Manager, wherein filesystems can be created on the fly. But HP-UX uses both in an odd fashion, forsaking slices and using a "castrated" form of LVM. This is just one example, as you will find other things in HP-UX such as the useradd command being identical to Linux and Solaris, and the SAM tool being very close to AIX's SMIT utility.

    In the end, as you will find, there is no uber-Unix that will carry over to all of the other flavors. IMHO, HP-UX is as close as you will get. But, my personal preference of all Unices is AIX due to its ease of use (an IBM tool easy to use? I know it sounds like an oxymoron) and robust capabilities, combined with Linux integration in the most recent versions. Flame as you will, I'm interested in hearing anybody else's insight.

  • Apparently the Rosetta Stone [bhami.com] can survive 4,000 years of Mother Nature's worst, but crumbles in minutes under the power of the Slashdot effect.
  • by cprice ( 143407 )
    I've found that, operationally, my ability to move between Tru64, Solaris, AIX, and HP-UX is relatively seamless. My biggest hurdles have come when doing hardware troubleshooting, upgrades, and maintenance. Each vendor has its own unique approach to device names, hardware settings, and architectures, which I've found to be the most difficult to master when moving between Unixes.
  • The real metric is how many servers can your admins admin. Whatever makes that value higher than all other values is the winner.
  • by ftobin ( 48814 ) on Friday June 07, 2002 @04:02PM (#3661720) Homepage

    "What kind of unixes do you run?"

    "Oh, we have both kinds. RedHat and Debian."

  • by .@. ( 21735 ) on Friday June 07, 2002 @04:05PM (#3661745) Homepage
    The reason homogeneous environments are easier to manage than heterogeneous environments is complexity.

    Simply put, if every server and workstation is identical, interoperability is not an issue, and the work associated with tracking, testing, and applying changes to that one, homogenous OS image is minimal.

    The moment you branch out into different configurations of the same OS version, different OS versions, or different OS platforms, you've increased the complexity of the system, and thus increased your workload. Suddenly, interoperability is a factor in every decision, and issues with multiple versions and/or vendors must be tracked.

    I've been meaning to write a short paper on this for some time, and attempt to relate it to Christopher Langton's lambda parameter for the measurement of complexity (in the 3rd Annual Proceedings of the Artificial Life Conference). I've studied the identification of single points of failure for some time, as well as the question of "how many sysadmins do I need?". Both answers are directly related to the complexity of the system being managed (here, defining "system" as the collection of applications, OSes, hardware, and networks that comprise the scope of a sysadmin's responsibility). There are indeed identifiable factors that define the heterogeneity of an environment, and the ways in which these dimensions impact such things as the number of SAs required to manage them can be defined.
  • But not as hard as you'd imagine. I use AIX, Solaris, and HP-UX on a weekly basis, as well as MacOS, MacOSX, and w2k (workstation and server).

    The biggest problem with a mixed environment is keeping it up to date with patches etc. Keeping track of that stuff is a complete PITA; I can't imagine how much more difficult it is in Linux, where the patches aren't on the vendor site (are they?).

    Besides that, the big thing that you'll need to do is make sure everything is sort of in the same directory structure. For apps that you install, put them in the -exact- same directory. For example, all my unix boxes have the same layout:

    /opt/apps
    /opt/servers
    /opt/data
    /opt/src
    /usr/local -> /opt/apps/local

    That way, it doesn't matter as much which box I'm on, and I don't have to remember exceptional cases. It also makes maintenance easier, because all the exciting stuff (non base operating system) is in a known structure. That means you can write scripts, etc to monitor everything and you don't have to change them on a per-host basis. It also means you can just copy the config.status from box to box (or directory to directory) and build without reconfiguring everything.
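    As a toy illustration of that, here is an invented layout check that runs unchanged on every box precisely because the directory names (the poster's own) are identical everywhere:

```shell
#!/bin/sh
# Invented sketch: verify the standard /opt layout exists on this host.
# Because the layout is identical on every box, the same script needs
# no per-host edits.
check_layout() {
  base="$1"
  for d in apps servers data src; do
    if [ -d "$base/$d" ]; then
      echo "OK: $base/$d"
    else
      echo "MISSING: $base/$d"
    fi
  done
}

check_layout /opt
```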

    'Luck!
  • http://www-ccar.colorado.edu/~jasp2/Graph.html
    http://cam.radioactivecat.com/unix-rosetta.pdf
    Not as graphically friendly as the original... But, still gets the point across...
  • by josepha48 ( 13953 ) on Friday June 07, 2002 @04:19PM (#3661836) Journal
    If you are like the shops that I have been in then the biggest cost of running more than one UNIX is the hardware.

    1) You can install the same shell on just about all UNIXes. Most people where I am prefer tcsh, as it has some nice features.

    2) You can standardize on scripts, either use csh (blah) or sh. We prefer sh as it is found on just about EVERY unix (Sun, HP, AIX, BSD's, Linux).

    3) Avoid vendor extensions to the basic shell. HP has done some awful things there in its Bourne shell, and they are not compatible with Sun and in some cases Linux either. E.g., always use `cat foo` and not $(cat foo) in sh scripts. There are other things like that.

    There are problems in supporting more than one UNIX, but there are also workarounds if you do it right.
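    A small sketch of that lowest-common-denominator Bourne style (deliberately avoiding $(...), [[ ]], and other features an old vendor /bin/sh may reject; the hostname check is just a stand-in task):

```shell
#!/bin/sh
# Portable-sh sketch: only constructs the original Bourne shell accepts.
host=`uname -n`                 # backticks, not $(...), for old vendor sh
if [ "X$host" != "X" ]; then    # classic guard against empty expansion
  echo "running on $host"
fi
case "$host" in                 # 'case' is portable pattern matching
  *.*) echo "host name has a domain suffix" ;;
  *)   echo "bare host name" ;;
esac
```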

    • I mean, what do you do? Do you just serve up content on software someone else writes? (HTTP, or a SQL database?)

      Or do you write your own real-time communication software? Writing device drivers across platforms can be sticky (if you are writing a high-level device driver, utilizing CDLI or DLPI) to downright icky (if you go down to the metal).

      For us, switching platforms has a higher cost than the money spent on the boxen.

      3) Avoid vendor extensions to the basic shell. HP has done some awful things there in its Bourne shell, and they are not compatible with Sun and in some cases Linux either. E.g., always use `cat foo` and not $(cat foo) in sh scripts. There are other things like that.

      go here [helsinki.fi]...

      And of course, if you've been following along for a week or two, you know that this (BING!) is a Useless Use of Cat!

      Remember, in nearly all cases where you have:

      cat file | some_command and its args ...

      you can rewrite it as:

      <file some_command and its args ...

      and in some cases, such as this one, you can move the filename to the arglist as in:

      some_command and its args ... file

      Just another Useless Use of /.

      Dangerous Backticks

      A special idiom to pay attention to, because it's basically always wrong, is this:

      for f in `cat file`; do
      ...
      done

      Apart from the classical Useless Use of Cat here, the backticks are outright dangerous, unless you know the result of the backticks is going to be less than or equal to how long a command line your shell can accept. (Actually, this is a kernel limitation. The constant ARG_MAX in your limits.h should tell you how much your own system can take. POSIX requires ARG_MAX to be at least 4,096 bytes.)
      Incidentally, this is also one of the Very Ancient Recurring Threads in comp.unix.shell so don't make the mistake of posting anything that resembles this.

      The right way to do this is

      while read f; do
      ...
      done < file

      A related trap is piping find into xargs without -print0/-0. If a file name contains a newline, say a file in /tmp literally named moo<newline>/etc/passwd, normal find /tmp -print would output

      /tmp/moo
      /etc/passwd

      and xargs would see two file names here. Changing the record separator to ASCII 0 (find -print0 | xargs -0) means it's now valid for a file name to span multiple lines, so this becomes a non-issue.

  • I don't know why, but when I read your post, I immediately thought of this [foad.org] thing.

    Although it looks like a complete joke, there is a lot of truth in there.

  • Hello,

    I've been sysadmin at various points for a small cluster which has had up to 6 different UNIXes:

    Digital UNIX
    HP-UX
    Linux
    SunOS 4.1.x
    Ultrix
    Irix

    Now, I was able to manage each of these pretty OK; Unixes *are* alike. However, getting patches and whatnot differs across each arch.

    So crudely, I would say:

    SysadminWork = A * number_of_UNIXes + sum_i(B_i * number_of_machines_i)

    where A is a very big constant, i is the index of each UNIX, and B_i is a small constant: the marginal extra effort to maintain one more machine of type i.

    What I mean is this: for each UNIX, you have to do a fairly large amount of research and effort to learn/acquire materials and knowledge for things like upgrades. Having done that, it's easy for you to maintain another UNIX box of that type: the cost of each extra machine is low, and you can do things efficiently via scripts.

    So the least wasteful way to use your sysadmin is to have one arch/OS. In my case, my life became progressively easier as I got rid of UNIXes and concentrated on running Linux only.
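    Plugging invented numbers into that SysadminWork model shows the shape of the argument (A=40 and B=2 are made-up constants; only the structure matters):

```shell
#!/bin/sh
# Toy evaluation of the SysadminWork model above. A (units to master one
# flavor) and B (marginal units per extra box) are invented numbers.
A=40
B=2

total=0
# three flavors with 5, 3, and 10 machines respectively
for n in 5 3 10; do
  total=$((total + A + B * n))
done
echo "three flavors: $total"    # 3*40 + 2*(5+3+10) = 156

one=$((A + B * 18))             # the same 18 boxes, one flavor
echo "one flavor:    $one"      # 40 + 2*18 = 76
```

    The per-flavor constant dominates, which is exactly why consolidating on one arch/OS made the poster's life easier.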
  • As far as problems go, the only real problem you have is getting used to the new environment. My company is running HP-UX 11, AIX, and Red Hat Linux. HP-UX is a dream to configure, but when I have to work on AIX, I most of the time have to take the back roads through the console. It can be a pain, but it's just like any new system. You just have to learn it. Oh, and sometimes root on HP-UX != root on AIX... but we're working on it.
  • Note that training cost should be added to the overall cost of the equipment purchase. Tell your managers to send the staff to a training course offered by the vendor if they do buy this new box with a different OS.

  • we had multiple OSes to support.

    We didn't have the luxury of 2, no . . . we had somewhere between 10 and 20 different versions of operating systems, that is if you include different revisions, etc.

    we had everything from SCO to Solaris, to NT 4 to 2k, it was nasty.

    The company had bought out a bunch of little ISPs and just threw all their boxes in the racks and made us try and get em all on the network.

    Many of these were bought out ISPs and the admins were fired, so of course half of em had no passwords, and a bunch had all kinds of nasty little quirks.

    I would say stick with no more than 2 versions at a time, maybe 3.

    Different distros have their strong points and weak points, so balance it that way. There is not much of a learning curve unless you have like Solaris and Redhat and BSD in the same building.

    Then you start forgetting which system commands work where because you log into em so frequently to do different things.

    It's really not an issue of learning curve; it's more an issue of annoyance.

    The best recommendation I have is to make scripts to do the simplest tasks; that made things so much easier for us in our situation.
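    One common flavor of such a script is a wrapper that hides per-OS flag differences. This sketch covers the classic SysV-vs-BSD ps split (the function name is invented, and the OS list is only a sample):

```shell
#!/bin/sh
# Invented wrapper: one name for "list all processes", whichever
# flag dialect the local ps speaks.
psall() {
  case "`uname -s`" in
    SunOS|AIX|HP-UX) ps -ef ;;   # SysV-style flags
    *)               ps aux ;;   # BSD-style (Linux, *BSD, Darwin)
  esac
}

psall | head -5
```

    Once everyone calls psall instead of ps, nobody has to remember which box wants which flags.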
  • Where I started, we had five main types of unix running (SunOS, HP-UX, OSF/1, AIX, and Linux) and multiple revs of all of them. It was trying at times, sure, but if the admin team is able and willing to think about things and isn't afraid to make mistakes, they will get the job done.

    Easily the number one cost is admin time to learn all the tricks necessary to get the job done.
  • Lots of people have mentioned the various sources of additional costs that can come from having a multi-vendor system environment.

    There are potential savings as well:

    1. Better-rounded employees will be better able to assess the benefits of subtle differences in technologies for different applications.

    2. Ability to attract good people with established skills on one platform who want the chance to transfer those skills to another platform.

    3. The *ability* to scare vendors, when necessary, into giving you a better deal because you already have in-house expertise on competing systems. This can be very valuable when negotiating upgrades, new systems, and renewed maintenance contracts. Just be careful when and how you wield it. If you are too heavy-handed, they may just decide that they are going to lose you and try to milk all they can from you during the transition. It is probably best as a subtle threat wielded when trying to do a deal with them at the end of a tight quarter, when their sales team is driven more by tactics to maximize short-term revenue at the possible expense of strategic influences on pricing.
  • Comment removed based on user account deletion
  • by jregel ( 39009 ) on Friday June 07, 2002 @07:47PM (#3663023) Homepage
    I've no idea about the cost issue, but as a UNIX geek, the more versions of UNIX the better. I like the subtle differences and would hate it if there were only a single version.

    Knowing that (as an example) AIX has a pretty self-tuning kernel, that Solaris has a modular kernel, and that UnixWare needs a recompile (relink) for any minor change forces the admin to think about the operating system instead of just drooling on the keyboard.

    The biggest differences are still SysV vs BSD. Understanding those is vital in a mixed OS environment. Beyond that, there are usually differences in disk layout (and filesystems), but they just add to the rich diversity that is my favourite OS.

    At my work, we are big users of Solaris, but because we develop software for multiple platforms, I've also had exposure to AIX, UnixWare, Sequent DYNIX/ptx, HP-UX, and DRS/NX. These days we've dropped DYNIX/ptx (EOL anyway, I think), HP-UX (too expensive for our customers), and DRS/NX (dead?), but we're looking to port to OpenUnix 8 and Red Hat Linux, so things are still pretty mixed. I just think it's a shame that I don't get to work with HP-UX and that UnixWare is dying (yes - I like it!).

    We also port to NT/2000, which is good to compare - it's a nightmare to work with when used to UNIX.

  • by @madeus ( 24818 ) <slashdot_24818@mac.com> on Saturday June 08, 2002 @05:10PM (#3665999)
    As a senior systems engineer from a similar organization (Carrier1 (FALCO!)), I can say there were no issues running a multi-Unix environment, and I've never had any issue with it at any of my previous companies (nor have any of the engineers I've worked with).

    At Carrier1 we had FreeBSD, Red Hat & Debian Linux, Solaris 8 & 9, HP-UX, even GNU/Hurd and Mac OS X (well, on *my* system :). I had Mac OS X, GNU/Hurd, Debian, and Solaris all on my desk at one point.

    The only problem I've ever had is the fairly trivial (?!) one of getting the command flags right - stuff like the 'ps', 'route', 'ipchains', 'ipfw', and 'ifconfig' command syntax being different, the different flags for package management tools, that sort of thing.

    I quickly came to realise that it's not possible to remember all the flags for all programs and remember the best way to do something on a particular system if you are busy all the time; things just seem to seep out. This happens if you are spending lots of time programming or in meetings or working on large projects - in which case you might not touch one type of system for months (until there is a problem with it), at which point you find yourself quickly reading man pages and referring to Google a lot. All you need to do is remember what's important, especially things you'll need for troubleshooting, and not worry about the rest - it's enough to know about tools like Solaris's 'ndd' and Linux's 'mknod' and what they do; if you need to remember exactly how to use them in a given instance, you can refer to man pages, O'Reilly books, or Google (which I often find the fastest).

    Staying current - reading Freshmeat every day, installing and configuring new Unixes and new and unfamiliar packages regularly, being on mailing lists, and reading Slashdot - is a good way to stay up to date; the more you know, the less likely you are to run into something completely unexpected. If you're resourceful (which you should be as a systems engineer), the only real problems arise when you don't even know where to start; everything else is a piece of cake.

    Basically, if you really know unix (and are not just a Red Hat Linux or Solaris flunky who has convinced themselves they are Gurus while they still run Windows 2000 day to day) then you won't have any problems.

    Oh, and making lame excuses like 'well I need Windows for work stuff' and 'they won't let me run Unix on my desktop' DO NOT wash - they are just that - excuses for lameness.

    I have been for job interviews and been introduced to guys who called themselves (literally!) 'Unix Gods', yet they had only ever used Solaris - if you have any of those you are in deep shit right now. [ Needless to say I ran a mile! ]

    Most people fall somewhere in the middle of those two. You'll probably only have one or two decent guys, if you're lucky - though if you need to ask, you are very possibly in trouble already!

    YMMV. :)
