Linux Business

What Goes into an Enterprise Network?

Komi asks: "I work for a big semiconductor company, and I'm part of a group that is spearheading the Linux movement here. Right now everyone uses Sun machines to design, but you can get a cheaper Linux x86 machine that is four times faster. So it is my job to prove that Linux works. The problem is that I'm an analog circuit designer stuck in the role of sysadmin. So I need some advice on what goes into a network. It won't be that large right now, but it has to be scalable up to a couple of hundred machines. If this works, then hopefully we'll convince all the designers at my company to make the switch."

"Here's the hardware that I am planning on getting:

  • 2 servers:

    These would hold the home accounts and tools, as well as serve out NIS, NTP, etc. I know I'll need a lot of hard drive space (2x72GB SCSI each), but do I need a lot of memory? (It's 4GB RDRAM max.) Should the processor be fast, or dual?

  • 3 batch machines:

    These would be a small compute farm running LFS or something. Jobs would get queued up and run continuously. So these should be dual CPU with lots of memory, probably 4GB each. Any other particular details?

  • 10 desktop machines:

    These would sit on the designers' and developers' desktops. They should be reasonably fast (~2GHz) single-CPU machines with at least 2 GB of RAM. The simulations we run do not benefit from dual CPUs. They probably don't even need SCSI. I'm thinking a $2k PC should work.

  • 1 Itanium server:

    This would be to play around on to test our 64-bit applications. The only advantage of 64-bit is applications using huge amounts of data.
We plan to run Red Hat 8.0 on these machines. Is there anything I'm missing? I don't have much redundancy in the servers. I plan to do backups to DVDs. Is this asking for trouble? Any further advice would be appreciated."
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward on Friday March 07, 2003 @06:17PM (#5463152)
    What Goes into an Enterprise Network?

    Dilithium?
  • 1. Computers with users.
    2. ?????
    3. Big Profits!!!!

    Sorry, had to be done, go easy on me!!!
  • by WasterDave ( 20047 ) <[moc.pekdez] [ta] [pevad]> on Friday March 07, 2003 @06:25PM (#5463239)
    Either this is the biggest troll ever, or you're deeply in the shit. Assuming it's the latter, for now, I shall toss my orb and see where it lands:

    *HIRE A REALLY GOOD SYSADMIN*

    You're horrendously out of your depth and there are shedloads of really good sysadmins around who need jobs. Take someone on for three months to look at the problem properly. Advice 2:

    *DON'T BUY AN ITANIUM MACHINE*

    There is simply no point, particularly if you don't really know what you're going to use it for.

    Cheers,
    Dave
    • by 7-Vodka ( 195504 )
      Agreed.
      Hire a good sysadmin and the job will get done much better and faster.
      Itanium also doesn't sound like the way to go for you.
      Think about it. Hammer is going to be out in a few weeks and will give you better 64-bit and 32-bit performance than the Itanium for a fraction of the price. It's not the solution for everyone, but for your case it sounds like a fit.
      Now, about Red Hat... You could consider other distros as well, like Gentoo [gentoo.org], which gives you added benefits like better package management, especially if you are going to have a lot of your own source around. You can write ebuilds for your packages and easily install them on all your machines. It could also give you a nice performance benefit.
      But the distro might best be picked by the sysadmin you hire. (who needs to be a specialist in linux, but maybe not already tied down to a specific distribution)
    • by Zapman ( 2662 ) on Saturday March 08, 2003 @01:00AM (#5465483)
      It's a big troll, sure. However it is also a chance to dispense some good advice:

      1) There's a difference between PC's and 'Server Class' hardware. The biggest is testing. It will work, and it can be supported easily. Drivers are nice and available (generally speaking). Usually dual proc, usually RAID enabled. You can use RAID to speed up read access, but almost no one does. They use it for redundancy (in case a disk flakes out on you). How much money you spend depends on how much downtime costs. If it really costs, you need RAID 1+0 or 0+1. Go with hardware based RAID if you can.

      2) Sun hardware. There are many more advantages to Sun hardware than what's obvious. Never overlook what a good support organization can do. You pay for it, but if something fries, I can have a part in my hands 4 hours later. Sun's low-end desktops are nothing to write home about. However, if you've got Ultra 60s or SunBlade 1000s or 2000s, that's some really classy hardware. You can do some surprising things with it. [1]

      3) Dual procs. On desktops, even if your simulations don't benefit from dual processors, if they take a while and eat that one CPU, you'll be happy to have a second (web browsing, etc.). On your servers, it's effectively a must.

      4) RAM. On the servers, crank it. On the desktops, you should probably crank it.

      5) Cost. If your work is anything like mine, you have 'capital' money and 'O and M' money. When in doubt, over-spec the machines, so that you're less likely to have to request more money from the 'capital' pool than you initially quoted. "Going back to the well" is viewed poorly.

      6) NIS. NIS is evil and the plague. If you're in a relatively local office with good connectivity, it's all right. If you try to spread it over WAN links, you're going to get hurt at some point.

      7) NTP? Why run a separate server when you don't have to? Leverage what's already in use in the company. This leads to my last point (and what was the best point of the parent):

      8) Get yourself a real sysadmin. These are decisions that s/he is experienced in, and paid to do. Your trial by fire that would come from this will probably drive you insane. Good sysadmins are a rareish breed. I know, I am one. There are a fair number of good ones out of work now. Find one.

      [1] The reason largely has to do with cache. Sun chips made in the last two-ish years have 8 megs of cache on them (that's even mirrored, so it's 16 in total, but you can only use 8). We built a GIS app and field-tested it on Sun and Intel hardware. The Intel hardware could deal with 1 to 4 users with fewer resources than the Sun box could. However, the Sun box kept growing up to several hundred users, while the Intel box started thrashing hard after 10 or so. We compared a dual US3 box to a dual Xeon P4.
      • by drsmithy ( 35869 )
        You can use RAID to speed up read access, but almost no one does.

        Er, every commonly-used RAID level will speed up read access...

        If it really costs, you need RAID 1+0 or 0+1.

        You want RAID 10 (1+0, stripes over mirrors), not 0+1.

        There are many more advantages to sun hardware than what's obvious. Never over look what a good support organization can do. You pay for it, but if something fried, I can have a part in my hands 4 hours later.

        Dell can have a part in my hands in two hours - often with an on-site tech included. There are certainly reasons to buy Sun hardware, but hardware support isn't a major one.

        However, the the sun box kept growing up to several hundered users, while the intel box started thrashing hard after 10 or so. We compared a dual US3 box to a dual Xeon P4.

        And how did the Intel boxes go when you spent the same amount of money on them (purchasing multiple machines and scaling horizontally) as you had spent on the Sun box(es)? Or didn't your app scale horizontally? (That *would* be a good reason to buy Sun - or some other lots-of-CPUs-in-a-machine.) Similarly, if you have an app that really benefits from massive amounts of L2/L3 cache, then machines that have CPUs with lots of cache might give disproportionately better results. "It all depends".

        The idea with going Intel over Sun is to use the enormous price difference to buy lots of Intel machines and cluster them. This works only if your application can efficiently scale across multiple machines and doesn't benefit disproportionately from things that simply aren't available on Intel - like massive amounts of fast cache memory.

        • For the most part, you gain roughly the same advantages, with roughly the same cost. What it's going to come down to in the end is how you recover from a failure...

          I mean, if you're using 0+1 and a single drive fails, it degrades to RAID 0. However, it's just a little ahead in performance compared to a 1+0 system, so you have to determine whether that performance is worth the slightly reduced redundancy. So, if you're running something that's more interested in keeping the data, because the data's rapidly changing [databases], you'd most likely go with 1+0.

          For something that's all about performance, but does want some redundancy [high load file servers], 0+1 might be better for you.
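
          One way to see the redundancy difference being argued here is to enumerate every possible two-disk failure and count which ones take the array down under each layout. A minimal sketch (the 8-disk array size is just an illustrative assumption):

```python
from itertools import combinations

def fatal_raid10(pair_count, failed):
    # 1+0: disks 2i and 2i+1 mirror each other; losing both halves of any one pair is fatal
    return any({2 * i, 2 * i + 1} <= failed for i in range(pair_count))

def fatal_raid01(side_size, failed):
    # 0+1: disks 0..side_size-1 are one stripe set, the rest are its mirror;
    # one failure on each side degrades both stripe sets, killing the array
    return any(d < side_size for d in failed) and any(d >= side_size for d in failed)

disks = 8  # hypothetical 8-drive array: 4 mirror pairs, or 4+4 mirrored stripe sets
pairs = list(combinations(range(disks), 2))
r10 = sum(fatal_raid10(disks // 2, set(c)) for c in pairs)
r01 = sum(fatal_raid01(disks // 2, set(c)) for c in pairs)
print(f"two-disk failures that kill 1+0: {r10}/{len(pairs)}")
print(f"two-disk failures that kill 0+1: {r01}/{len(pairs)}")
```

          With 8 disks that prints 4/28 fatal pairs for 1+0 versus 16/28 for 0+1, which is the "slightly reduced redundancy" in numbers.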

          Oh... and as for the whole concept of horizontal scaling -- there are times for more machines... however, in this case, if it's 10 users vs. hundreds, that'd mean at least a 20:1 difference. I don't know what your background is, but I'm personally not big on performing maintenance on 19 machines I don't have to. [Not to mention that most support contracts are priced per system... and that you have to have 20 times the disk space allocated for OS and applications, and similar overhead for RAM, etc.]

          Cost calculations need to be performed on the TCO -- Total Cost of Ownership -- not on the initial cost of the hardware. [Oh... and if you're buying software, you might have to buy 20x the number of licenses... that little problem when not all software is free.] Sure, you might save some cash up front, and it may be easier to have rolling outages for upgrades later, but when it comes time to do some app upgrade that takes an hour of donut-spinning and an hour of interactive configuration... do you want to do it overnight, or do you want it to burn your entire weekend? [And don't think that after a long weekend you get to take the upcoming week off... you've got to be there extra hours in case something went wrong that won't show up 'till the machine is under load... and you don't get time off in advance, since you're prepping everything for the upgrade and going to meetings to assure people that it's not going to be a problem, even though they weren't told the upgrade that's been planned for months is happening in 3 days.]
          • However, it's just a little ahead in performance to a 1+0 system, so you have to determine if that performance is worth the slightly reduced redundancy.

            I'd be interested in benchmarks to quantify how much faster 0+1 is. My guess is it would be insignificant - even more so given that it would only apply to throughput, not latency, and, really, how many of your fileservers are maxing out on throughput?

            I don't know what your background is, but well, I'm personally not big into having to perform maintenance on 19 machines I don't have to. [not to mention that most support contracts are based on the system... that you have to have 20 times the disk space allocated for OS and applications, and similar overhead for RAM, etc...]

            I sysadmin at a fairly large Australian uni, so I do get to deal with a nice wide range of machines. With decent configuration management tools, maintaining lots of machines (particularly when they are configured nearly identically) is not overly taxing. Heck, compared to the mess a large domain can get into after having a dozen different sysadmins and service admins banging away on it for a few years, a bunch of small, single-purpose boxes is a godsend, IMHO. Indeed, we are hoping to move away from our big machines (E10ks) to lots of Intel boxes, simply because of the phenomenal cost of Sun equipment. A single processor board for an E10k costs nearly as much as a rack full of 2.4GHz/2GB Dell 2650s, so it makes a fairly compelling argument in terms of hardware costs.
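
            For illustration, even a minimal home-grown version of that kind of configuration push is not much code. Everything here is an assumption for the sketch: the hostnames file, the config file being copied, the restart command, and passwordless ssh/scp.

```python
#!/usr/bin/env python
"""Push one config file to a list of near-identical hosts and restart a service.

Illustrative only: assumes ssh keys are set up and that 'hosts.txt' lists one
hostname per line. A real site would use a proper configuration management tool.
"""
import subprocess
import sys

CONFIG = "/etc/ntp.conf"          # hypothetical file to distribute
RESTART = "service ntpd restart"  # hypothetical post-copy command

def push(host):
    # copy the file, then restart the daemon that reads it
    subprocess.run(["scp", CONFIG, f"{host}:{CONFIG}"], check=True)
    subprocess.run(["ssh", host, RESTART], check=True)

def main():
    failures = []
    with open("hosts.txt") as fh:
        for host in (line.strip() for line in fh if line.strip()):
            try:
                push(host)
            except subprocess.CalledProcessError as exc:
                failures.append((host, exc))
    for host, exc in failures:
        print(f"FAILED on {host}: {exc}", file=sys.stderr)
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()
```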

            And as for the whole concept of horizontal scaling -- there are times for more machines... however, in this case, if it's 10 users vs. hundreds, that'd mean at least a 20:1 difference.

            It depends on what you're doing. Your GIS app probably gained enormous benefits from the large CPU cache, whereas some other things would not gain (relatively speaking) as much of an improvement, and so benefit more from being able to spread the load across a bunch of machines with smaller caches (that cost 1/10th as much). Something like webserving (for example) benefits enormously from horizontal scaling.

    • by Komi ( 89040 ) on Saturday March 08, 2003 @01:48PM (#5467597) Homepage
      This is not a troll. The issue is money. We can't afford to hire anyone, or to buy hardware. But we have a source of free loaner equipment. Our deal was to prove that their machines work, that way when design groups start getting money again, they will buy it from a proven source. So they asked us what we need, and I have to compile a list. This is a proof of concept on zero budget (except my salary I suppose). And it has to be all linux, because that's the deal.

      Sysadmins will be hired for this once money gets freed up and we can prove to groups that linux works. A later post was correct that there are really two issues. a) Getting everyone to switch to linux, and b) getting designers to put linux on their desktop. We really only care about b), but by the nature of the deal, we have to prove a) and b). Also we don't care about the cost of switching a design group over to linux either. That's someone else's job. We just show that the end result works.

      And finally, we do need 64-bit machines. Some of the programs we run use huge amounts of data that need 64 bits to address. So if we're getting free loaner equipment, then why not play with an Itanium? :)

      I appreciate the advice from everyone.

      Thanks,
      Komi

      • I work for a medium sized (~900 employees) semiconductor company. We have been migrating from Solaris to Linux for about 18 months now. I'm an IC designer who uses a Linux desktop every day. I pretty much spearheaded the move to Linux here, so I know what I'm talking about. We're doing mostly digital design and IC layout, but I think our experiences will apply for analog design as well.

        1. OS selection: RedHat 7.3 or 8.0. Don't bother considering anything else.

        2. Skip the file servers. You already have a large network of Suns, so I'm assuming you already have Sun, Netapp, or some other enterprise file servers. Don't mess with this. Your cheap Linux boxes are best used as fast compute servers. If one dies, chuck it into the dumpster and swap in a new one.

        3. Skip the expensive graphics card. Any sub-$100 2D card is fine for the EDA apps used for IC design.

        4. Skip the multiprocessor servers. I don't think the memory systems of most PCs are up to the task. Buy single-processor systems and run one job at a time on them. Use Gridware from Sun for free batch processing, or pay for LSF if you're already using it in your existing network. Set up correctly, either can dispatch jobs to a Sun or a Linux box transparently to the user. (A minimal submission sketch follows this list.)

        5. Skip the CD drive in the compute servers; you can install the OS from the network. Make your desktop users happy by including a CD drive so they can listen to tunes...

        6. Put small IDE hard drives (30-40 Gig) in all of your Linux boxes. Discourage using the local drives for anything but transient data. Keep some spares on hand; some of these drives are going to croak.

        7. Pay for fast processors and lots of RAM. We bought Athlons, but P4's are probably faster today. There's a 3Gig per process limit, so more than 3 gig is not useful.

        8. Forget the Itanium except as a science project. Run any jobs that are too big (>3gig) for the Linux boxes on a 64-bit Sun. You won't find (m)any commercial EDA apps for Itanium. If you have some in-house applications that you really want to port to Itanium, go for it. I wouldn't bother...

        9. Install Win4Lin to run those pesky windows apps. It's cheap, and gets the job done.

        10. Prepare for people to start fighting over who gets the new Linux boxes. We don't have a single person here who would go back to their Sun.

        Good luck. EDA on Linux works. It's as simple as that.
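
        Picking up on point 4: from the user's side, Gridware (SGE) and LSF both boil down to handing a job script to a submit command. A hedged sketch of the Gridware case, assuming `qsub` is on the PATH; the `run_sim` command, queue name, and netlist names are placeholders for whatever your EDA flow actually runs.

```python
import subprocess
import tempfile

def submit_sim(netlist, queue="all.q"):
    """Wrap one simulation run in a shell script and hand it to SGE's qsub.

    'run_sim', the queue name, and the netlists are placeholders; substitute
    the real simulator command and whatever queues the admin has defined.
    """
    script = f"""#!/bin/sh
#$ -q {queue}
#$ -cwd
run_sim {netlist}
"""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as fh:
        fh.write(script)
        path = fh.name
    # qsub queues the script; the scheduler picks a free batch machine
    subprocess.run(["qsub", path], check=True)

for net in ["adc_core.sp", "pll_loop.sp"]:   # hypothetical netlists
    submit_sim(net)
```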
  • Backup (Score:3, Informative)

    by the eric conspiracy ( 20178 ) on Friday March 07, 2003 @06:26PM (#5463252)
    For a network with centralized file stores you will want some sort of automated backup system. Probably an LTO tape drive/autoloader.

    • Don't waste the valuable thousands. Just script a backup routine and then have it spit the tape out afterwards. It doesn't take two minutes to swap the tapes in the morning, and if money is the issue, then you'll appreciate the saving.
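
      A sketch of the kind of script the parent means, assuming a single SCSI tape drive at /dev/nst0 and the standard tar/mt tools; the directories being dumped are placeholders. Run it from cron overnight and the morning routine really is just "pull tape, insert tape".

```python
#!/usr/bin/env python
"""Nightly dump of the shared areas to tape, then eject the cartridge.

Illustrative only: device path and backup targets are assumptions."""
import subprocess
import sys
import time

TAPE = "/dev/nst0"                     # non-rewinding SCSI tape device
TARGETS = ["/home", "/tools"]          # hypothetical directories to back up

def main():
    stamp = time.strftime("%Y-%m-%d")
    # rewind first so the dump starts at the beginning of the tape
    subprocess.run(["mt", "-f", TAPE, "rewind"], check=True)
    # one tar archive covering everything we care about
    subprocess.run(["tar", "-cf", TAPE, "--label", f"backup-{stamp}", *TARGETS],
                   check=True)
    # take the drive offline, which ejects the cartridge on most drives
    subprocess.run(["mt", "-f", TAPE, "offline"], check=True)
    print(f"backup {stamp} written and tape ejected")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"backup failed: {exc}")
```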
  • Feasibility Study... (Score:3, Informative)

    by NetRanger ( 5584 ) on Friday March 07, 2003 @06:31PM (#5463289) Homepage
    Obviously your ROI will be based upon two metrics:

    1 > Time savings versus average hourly rates for computing and employee time costs. This would be an aggressive ROI metric.

    2 > A more conservative metric would be the cost of replacing Sun systems over time versus the cost of, say, a small farm of Dell OptiPlex PCs.

    You could then also compute the value of gigaflops per dollar, showing the clear advantage of the PCs.

    • You're missing one metric which my company has found to be the most important!

      QUALITY!

      Because of the reduced price and increased performance of our DELL XEON boxes we are able to run more simulations on our circuits. This allows us to check a greater range of operating conditions thus improving the overall quality of our product.
  • by pmz ( 462998 ) on Friday March 07, 2003 @06:48PM (#5463449) Homepage
    So I need some advice on what goes into a network. It won't be that large right now, but it has to be scalable for up to a couple of hundred machines.

    1) You had better find some damn fine PCs to replace those Suns, because a couple hundred PCs can make your life miserable due to lots of random breakage.

    ...you can get a cheaper Linux x86 machine that is four times faster.

    2) This is not true (unless you found Pentiums with SPECfp of over 3000!). If you buy the right-sized computers for your task, the hardware costs won't be a dominating part of your budget. Human costs and non-OS commercial licensing will be, regardless of your platform choice.

    Whenever people say that Linux is absolutely, outright cheaper than commercial UNIX, I'm pretty convinced they haven't figured out all the costs involved. Also, I'm not convinced they understand just how simple maintaining a Solaris box can be, for example, thanks to sunsolve.sun.com, ample documentation, optional support out the wazoo, etc.

    Before you go blazing these new trails, just stop and think for a minute. Put aside the zealotry and really think hard about what is and is not cost effective. Regardless of your choice, you really need to be convinced it is the right one.
    • I wonder what side of the aisle you're on? Are you a sysadmin or an end user?

      Like the author of this email, I work at a larger semiconductor company. We are in the middle of switching from Sun to Linux. The price/performance difference is huge. There is more than a 10X difference in price between a DELL box and a slower Sun machine.

      We have been up on Linux for over a year and so far haven't had many issues. Ok, well we have one issue.
    • by Anonymous Coward
      I also work at a company that does chip design. While it's true that the software costs are overwhelmingly huge in this space, that has not seemed to blind our money minders to the cost of the hardware. An inexpensive dual cpu PC will complete the same job as a Sun 420 in about a third of the time. This is empirical fact. The PCs also install in about half of the time a sun takes to jumpstart. We still have quite a few Suns, but the engineers only use them for huge memory jobs that won't fit in the RAM available on the PCs.

      As far as your concerns over reliability go, we buy name-brand PCs that are meant to be servers, not crappy integrator machines or desktops. I find that the hardware reliability on these machines is at least as good as the Suns'.

      That said, we *have* to have the Suns for those jobs. We use Suns for infrastructure stuff as well. And some of the engineers cannot *live* without a Sun on their desktop, though they all whine that their desktops are too slow to run a browser.

      My feeling is that until Hammer hits the streets it's going to take a mixed environment to get the job done. Certainly when we have an option to run Linux servers with many GBs of RAM we won't be buying more Suns.
  • $2000 a PC? (Score:2, Interesting)

    by Magus311X ( 5823 )
    These would sit on the designers' and developers' desktops. They should be reasonably fast (~2GHz) single-CPU machines with at least 2 GB of RAM. The simulations we run do not benefit from dual CPUs. They probably don't even need SCSI. I'm thinking a $2k PC should work.

    A 2.4GHz chip is $160. 2GB of memory is around $500 (1.5GB? More like $250). $85 for a DVD/CD-RW, $150 for a board with onboard sound, $60 for a decent video card, $80 for a good case, $15 floppy, $30 on a keyboard and optical mouse... with a 19" monitor for $249. XP Pro is $140 for an OEM copy if you buy it yourself, though the major OEMs get it FAR cheaper.

    Figure $1400-1500 a PC, even from a major OEM, tops. Any more and you're getting hosed.

    -----
    • part of a group that is spear heading the Linux movement here

      The Linux movement, not the Windows XP Pro movement. You can now take off $140. $80 for a case? jeebus, that must be one mighty fine case...
      • Re:$2000 a PC? (Score:2, Insightful)

        by Ex-MislTech ( 557759 )
        The case I just bought was $150, but it is a SOHO server case with a lockable front and side panel and a 430-watt power supply.

        Good cases and power supplies are worth it when you have expensive hardware you are powering and housing.

        Cases with no filters lead to dirty parts inside; that is why most Sun hardware has filters.

        As for $2,000 for these machines, I agree, that is WAY overpriced if you shop around on http://www.pricewatch.com

        With the money saved, build an extra server or two, several extra desktop boxes for rapid replacement, and house all the data on the servers with a RAID 5 array with hot-swappable drive trays and HOT standby spares.

        The cool thing is you can do this with IDE drives now and save a small fortune.

        IDE drives tend to not last as long according to factory testing, so keep a few spares handy and you should be fine.

        Well, I am re-writing what I have already posted, my apologies.

        Peace...
        Ex-MislTech
    • The original article mentioned that these workstations would be used by chip designers, which I assume means they will be using a CAD program of some sort. This means a video card along the lines of an ATI FireGL X1, which is ~$600.00 (and that's the cheap one). Of course, since Linux will be run on it, you can subtract the $140 for XP; that helps some....
    • you planning on putting a hard drive in that?
    • Are you insane? 19"? 21" is the bare minimum for anything even sitting on an "enterprise" network. Dual monitors or flat screens are becoming more common too.
    • No offense, but if I were an engineer with a $15,000 Sun workstation, I would laugh at the idea of a $1400 homebuilt PC replacing my workstation. Then I would probably punch you in the face for suggesting something so stupid. People seem to have absolutely no idea of the massive learning curve (even if short) Linux would have for a user moving from Solaris. All the customized menus on the Sun box for CAD applications would need to be recreated on the Linux side. You couldn't just give them KDE or GNOME and be done with it; any competent admin would replicate their environment completely and exactly in order to minimize the amount of utterly wasted time spent learning and re-learning new ways of doing things in Linux.

      The poster of this article claims eventually all user workstations will be replaced with Linux boxes (in his fantasy world at any rate.) So, let's say he has 200 highly-paid CAD engineers whose workstations might be replaced. Let's figure it will take them each one week (40 hours) to re-learn everything on the Linux side, which IMO is not unrealistic at all assuming he simply plops a new Linux box down on their desk one day. So, when all is said and done, his "cheaper" solution will have cost the company 8000 man-hours in wasted labor. Let's say each engineer or developer makes $40 an hour (again, not unrealistic.)

      So, that's $320,000 out the door on top of hardware costs, which I am sure will be more than you quoted because you can't do CAD on a machine with the kind of parts you listed (hint: a $60 video card will not work for CAD. Don't even attempt it. No, AutoCAD does not count. Real pro-level CAD cards cost more than the PC you specced out.) A truly realistic figure for a PC capable of being used by engineers would be closer to $4000, especially if you want someone else to do hardware support (trust me, with 200 or more users, this is what you want).

      Then there's the countless hours (hundreds, perhaps thousands) spent porting any custom applications and simulation software you may have to Linux.

      Then there's the Sun machines you'll still have to keep around for the applications which don't run on Linux.

      Then there's the two or three full-time sysadmins you will need to hire to at least oversee this tremendous effort (there's no way in hell I'd trust anything like this to an amateur who read about Linux and thought, hey, it would be awesome to switch everyone over to that at work.)

      Then, unless you hire sysadmins permanently for this, there's the 40 more hours you'll be working every week as one.

      Enjoy your Linux boxes!!

      - A.P.
  • by Inexile2002 ( 540368 ) on Friday March 07, 2003 @10:17PM (#5464815) Homepage Journal
    Never let Captain Kirk talk to the main computer. Every damn time he does he tricks it into self destructing. You'd think he doesn't want the Enterprise to have a network...

    DAMMIT Jim! I'm a Doctor not a UNIX admin!
  • by MerlynEmrys67 ( 583469 ) on Saturday March 08, 2003 @01:02AM (#5465492)
    Well, you have listed some trivial hardware requirements. What you haven't addressed are things like:

    1) Does the application your designers use for their daily work exist on Linux? Does it run as well, is it as fully featured, does it cost the same amount of money? If the answer is no, then this is a non-starter.

    2) How are you going to handle sign-on, login, desktop management, etc.?

    3) Backup is a big issue.

    4) Frankly, 2 x 72 GB hard drives isn't enterprise or scalable. Look into RAID, LVM, and other options to make the disk subsystem more reliable.

    5) The Linux solution isn't 4x cheaper; frankly, it is significantly more expensive. You have already purchased the current solution, correct? So the cost to maintain it is 0 (well, not really, but still) vs. having to buy this list of hardware and very possibly new software licenses (you have the Solaris licenses right now, but probably not Linux ones... if they even exist, see point 1). So the cost of this system going forward is significantly higher than the current solution.

    Other than that, go for it... just remember it is much easier for them to tell you to spec it out and then say "We can't spend that kind of money" than to tell you no up front.
  • by crmartin ( 98227 ) on Saturday March 08, 2003 @01:04AM (#5465504)
    The first thing you've got to do is stop thinking that you're going to buy a couple of boxes and that'll make your network, because, Bullwinkle, that trick never works. Except, at least, for those of us who consult for a living, because we often get gigs out of saving someone's shorts from the George Foreman.

    Now, back up and think about this:
    • who will use the machines on the network?
    • what will they be doing?


    In your case, you're talking primarily about engineers, and they are primarily (for job functions) going to be doing engineering ... which means (this is not sarcasm) that they will spend anywhere from 2-4 hours a day interacting with their tools of choice for circuits and engineering, and the remaining time with web browsers, email programs, etc., particularly including word processors or the like. Since you're starting with a Sun network, you at least have confidence that everything people would normally use is UNIX-able.

    Now, on your EXISTING network, measure what a few users do for at least a few days. If you've got admin on, you should be able to extract information from the logs. This will give you a chance to get at how much load there really is.

    Next task: establish some of your "non-functional" requirements. In particular, how long can response time be for your most important tools, how long can you afford to have the system as a whole be unavailable, and how much work (an hour, half a day, a week?) can you afford to lose. Divide all of those by two and make them your basic "service level agreement" -- which is simply a statement of the service you promise the users, it doesn't have to be fancy.

    Here are some reasonable values, from experience, but YMMV: most people will put up with the whole system being unavailable for an hour, they want half-second response time from specialized tools and more like about 4 seconds on a web page, and engineers hate losing ANYTHING but usually don't get too pissed off if it's less than a couple of hours work and doesn't happen very often.

    Next: what's the environment? Do you have to think about firewalling yourself from the rest of the network? (Don't assume just because you're inside the corporate firewall that you're protected. Get AND READ the corporate security policy, as well as talking with the admins who own the network as a whole.) How will you do backups? How do you fit into the corporate disaster planning scheme? (Lots of people forget that one, but just look into what happened to the Wall Street Journal on 9/11 to see how essential it really is.) This analysis will give you a good idea what you need.

    And now, having said all that, it will turn out that what you're going to need is (1) a "big enough" file server with RAID 5 and a good periodic backup onto "archival media" like tapes or writeable CDs; (2) one workstation good enough for all your applications, with at least a year's room for growth, for each desktop (plan to buy at least one spare, and set it up "hot" so a single failure doesn't slow anyone down); (3) a smallish box as a print server (if you manage your own email, it can often go onto this); and (4) a firewall box or a router (betcha 50 cents Canadian that the company will insist on this).

    Plan for a full week, plus one day per user workstation, for installation. That is, with 4 users, plan on 5 + 4 = 9 days for two people.

    All the other stuff, like using NIS, NFS, Kerberos, etc, will more or less fall out if you get these steps right first.
  • by jbolden ( 176878 ) on Saturday March 08, 2003 @01:42AM (#5465592) Homepage
    I'm not a system admin but it seems like you are confusing two different battles:

    1) Getting the whole company moved over to Linux for everything

    2) Getting engineer workstations running on x86s so you can get 4x the speed.

    (2) is a much easier battle to fight than (1). Don't spec a whole Linux solution for everything, spec out a Linux solution for the workstations that allows them to work with the Suns. There you can make the cost difference really obvious. Reliability isn't a big deal.... Your software vendor might even give you the test software in hopes of the license switch down the line. In the back of your mind you can keep the total Linux solution but your strategy should be to take out the Suns piece by piece by piece.

    Total overhauls come down from above, not up from below. Incremental change that over time turns into a total overhaul comes up from below. You don't sound like you have anywhere near the juice to get a total overhaul through the company, regardless of how good your analysis is.
    • Agreed. You're biting off more than you can chew. You'd be absolutely insane to throw away your Solaris infrastructure on day one. Quality will sell your ideas, and consistency is the one true measure for quality. Work with your sysadmin staff to make Linux a first-tier quality desktop. Don't go cheap. Let Linux and Solaris compete on equal terms, and it will be easy to pick a winner.

      Do you have all of your ISV's lined up? Getting all of the software pulled together that you need to be productive at your job is the hardest part. Are you there? If you're not, you might not want to even consider taking another step until you convince your ISV's to support Linux.

      We've replaced hundreds of SGI's with Linux workstations and seen huge gains in performance and employee output. We started with a single specialized application on about thirty systems, and three years later we're down to our last 20-30 IRIX boxes out of about 1000 systems in house.

      • Yeah, this is true.

        A total switch-out is usually very painful, and ppl will fight it like crazy.

        A lot of that resistance is not baseless either.

        Verify that all aspects of the software will work under Linux, and set up some kind of training for the ppl, similar to an intranet site or just a video CD. This training can be a project unto itself.

        Make it a progressive, gradual switch-out, perhaps with volunteers at first, or ppl you are comfortable with making the switch.

        Make the needy, emotional, neurotic freaks the last ones to get switched.

        Good Luck,
        Peace...
        Ex-MislTech
    • You don't sound like you have anywhere near the juice to get a total overhaul through the company regardless of how good your analysis is.
      This is probably the crux of the matter: management is setting up the Linux thing for failure, predominantly so they can say they "tried the Linux thing but it didn't work for us." This poor guy seems like he's locked into a death spiral, and if he even gets close to success they'll either cut his resources or increase his requirements. I'm sure any help he gets from us would be greatly appreciated.
  • Buy *OVERBUILT* hardware... buy SuperMicro (supermicro.com) motherboards. Their boards are built like TANKS. I have a dual PII-300 machine here that has *never* been turned off for more than about 30 minutes (add RAM, add hard drives, etc.). As far as uptime goes, every time a new Mandrake comes out, I reboot to install it, and the machine stays up *until the next version of Mandrake comes out*.
  • Right now everyone uses Sun machines to design, but you can get a cheaper Linux x86 machine that is four times faster.


    You can get a cheaper Linux machine, yes. It might be four times faster than a SPARC 10, but new x86 boxes aren't anywhere near as reliable or powerful as a new Sun. As I said, people do buy Sun stuff for a reason, and pay a hefty premium.
    4x faster, pah! If you plonked a PC that is four times faster than the one I'm using in front of me, I wouldn't notice during the bulk of my work, because the machine is 90% idle on average. Processor speeds go up and up, and some OSes just bloat and bloat to make up for it.

    So it is my job to prove that Linux works.

    This is already done for you. Convincing the management that you can use it to save them money I think is what you need to do, and at the end of the project you might find that this wasn't the case. Just because the OS installs for free, doesn't mean it doesn't cost anything.


    Methinks you've just started at the job, been using Linux at home for a while and think you can plonk it anywhere and go. On a production machine, I can't see the argument for Linux over (presumably) Solaris, and definitely not x86 over SPARC. I admit, I was guilty of the same Linux zealotry three years ago. Now I only want to replace every NT server w/ Linux, and leave the Solaris machines well alone, for I've learned a lot about them now, and it just can't be beat.


    BTW, what Linux distribution were you thinking of, because that makes all the difference too. It's hard to find one with a name that management will take seriously, and that doesn't suck at the same time.


  • > What Goes into an Enterprise Network?

    Prise, of course.

  • not only hardware (Score:4, Informative)

    by Ludoo ( 12304 ) on Saturday March 08, 2003 @06:03AM (#5466231) Homepage
    As a previous poster said already, hardware is not the most important factor. You will eventually find yourself working on old or semi-obsolete hardware anyway, so getting top stuff is not a priority, especially given the number of users.
    What I would concentrate on is:
    • a single source for authentication (login) and profiling (groups, home dir locations, etc.); study PAM a bit; a good option is to store everything in LDAP and use pam_ldap (a small bind-check sketch follows this list); if security is a primary concern, consider Kerberos
    • network file sharing; you don't want your users' data scattered around on every desktop (your management costs will increase dramatically, and your backup strategy will be much more complex); NFS is quick and easy, but offers only decent performance and poor security; a good (but complex) alternative is OpenAFS [openafs.org] or IBM's DFS (which is the evolution of AFS);
    • centralized backup on a single server, possibly running amanda so that you can backup different servers on a single medium; mondo rescue is a good option to backup systems periodically on bootable cds for quick recovery;
    • standard distro, e.g. pick Red Hat or Debian or whatever, based on a number of factors like ease of automating installation, software distribution and package management options, etc., and stick with it; remember that you have to know your particular distro well to handle emergencies (and emergencies DO happen);
    • standard desktop, eg pick one of gnome or kde, develop suitable policies and management strategies, and stick with it; one of the factors in deciding a desktop is the toolkit used and its licensing, if you intend to develop custom software in the future;
    • software distribution strategy, plan or at least try to learn a bit about possible ways to handle updates and software installation on your desktops (and servers); you can automate package management (apt or rpm) or enterprise software (red carpet or rhn);
    • printing system, again for printing you have different options: lprng, cups, etc; check what printers/plotters you already have in house and if they're supported by printing systems;

    • Just a quick overview, to sum it up I would second the advice somebody else gave you in a previous posting: hire a decent sysadmin and plan things with him.
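
    To make the first bullet above a little more concrete: the sketch below checks a login against an LDAP directory using the python-ldap module, which is roughly the check pam_ldap performs on your behalf. The server URI, base DN, and attribute name are assumptions to adapt to your own tree, and some directories will require an initial bind before searching.

```python
import ldap  # python-ldap

LDAP_URI = "ldap://ldap.example.com"        # hypothetical directory server
BASE_DN = "ou=People,dc=example,dc=com"     # hypothetical people branch

def authenticate(username, password):
    """Return True if username/password bind successfully against the directory."""
    conn = ldap.initialize(LDAP_URI)
    try:
        # find the user's entry first...
        results = conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE, f"(uid={username})")
        if not results:
            return False
        user_dn = results[0][0]
        # ...then try to bind as that entry; a bad password raises INVALID_CREDENTIALS
        conn.simple_bind_s(user_dn, password)
        return True
    except ldap.INVALID_CREDENTIALS:
        return False
    finally:
        conn.unbind_s()

print(authenticate("komi", "not-my-real-password"))
```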
  • Right now everyone uses Sun machines to design, but you can get a cheaper Linux x86 machine that is four times faster. So it is my job to prove that Linux works. The problem is that I'm an analog circuit designer stuck in the role of sysadmin.

    Most engineering work, whether it's CFD, FEA, or ICE, is bound by memory bandwidth, not CPU speed. It requires the construction of very large in-memory data structures, which see a combination of random access and sequential traversal. Before you assert that an x86 machine is 4x faster, benchmark it with the actual applications you use; don't rely on SPECmarks and the like, which can run entirely in cache, because such benchmarks aren't representative of real applications. And if you've got UPA in your workstation (like, say, the old Ultra 1) then no bus-based x86 can match you for I/O. If you want workstation-class hardware, it costs more than a PC.

    This is my experience: for benchmarks, my 1Ghz P3 beats my 225Mhz Octane easily, but for work, the Octane runs rings around the PC - one I/O bound task and the PC is almost unusable 'til it completes, but the Octane can max out its disks, run at 100% CPU and still remain responsive. I see similar comparing PC with Sun.

    Secondly, when you buy Sun, you aren't just buying a piece of hardware, you're buying a service. Support and maintenance you can get are far, far beyond what you can expect from an x86 vendor. If a part goes bad, you can get it replaced in a few hours. All the components in your machine have been certified as working together and working with the OS. And can your Linux vendor do this [sun.com]? (No, they can't even stabilise on one libc!) Running a network of workstations for your company's core business is a completely different game than running a network of PCs for ordinary office workers.
    • I/O on x86 can be quite fast. I'm running a massive I/O job on x86 now and the machine is quite responsive. But this is server-class x86 hardware not cheap desktop stuff. The disks are U320 SCSI on 133MHz 64-bit PCI. Most people are only thinking in terms of IDE, which can't handle any amount of real work.
  • Sorry for the horrible formatting; Slashdot forces me to hit a certain line count, and I am tired of messing with coding HTML just to post to a damn msg board.

    They need to get with the future ...

    I have had some killer boxen I have built that have worked well for years and have been passed on through the hands of other ppl.

    PC hardware, like one poster pointed out, is cheap and is gonna break. Make up several extra PCs ready to go with an "image" if identical hardware is used.

    Keep several ready to go and working in a storage closet out of sight, and keep their existence little known or else they will get appropriated just because ppl "feel" they need an extra boxen.

    Don't tell anyone, either; they will slip and tell someone, and then they will never stop pestering you 'til you have no extra boxen.

    They will even stoop to calling in favors from ppl in authority to try to scrounge them an extra boxen. They are snakes!!!

    Users should keep all their files on the servers, BECAUSE... the servers will have RAID 5 with complementary backups of each other.

    If one server catches on fire, the other has been backing it up during "low load" times, or at pre-scheduled cron times.

    Monitor the load on the network and servers, and plan backups and other similar tasks off-peak.

    IDE-based RAID is now cheap and reliable, and you can get awesome amounts of storage for reasonable money.

    Ex.: a 12-channel IDE RAID 5 controller with 12 120-gig drives pushes about 1.4 TB, before losing one drive's worth of capacity to parity.

    Keep several extra IDE drives lying around, use all the same size, and order them in bulk factory-direct if you can.

    Hot-swap trays are essential; read reviews and get the best RAID.

    A lot of ppl on Slashdot have used 3ware and Promise; Adaptec is always damn good too.

    Ex: order several cases of drives from the manufacturer. In IDE, stay away from Seagate, and from Maxtor drives that were Quantum's.

    A lot of ppl I know generally like Western Digital, IBM, and the better Maxtors.

    Again, read reviews online and learn to form your own opinion. Learn from the pain of others; search newsgroups for the model #'s you are considering buying.

    Never buy the newest, just-hit-the-shelf products; a lot of the time they are buggy and need BIOS updates.

    I know; I just bought one.

    Tried and true is what should go in a server. If it is not the pillar of praise, you do not want it in your server.

    If you want to be 100% sure, go with SCSI, but be prepared to pay hideous amounts of money for equal storage.

    Set the two RAID 5 arrays to snapshot each other every day, and you can restore a backup in minutes this way, or incrementally.

    The sheer volume of storage will let you do these monster backups cheaply and quickly if you use 64-bit controllers and 64-bit PCI slots.
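
    A hedged sketch of that nightly snapshot idea, using rsync between the two arrays. The peer hostname and mount points are placeholders, it assumes rsync over ssh with keys set up, and a real setup would keep dated snapshots rather than a single mirror. Hang it off cron at a low-load hour.

```python
#!/usr/bin/env python
"""Mirror the primary RAID array onto the second server every night.

Illustrative only: 'server2' and the paths are made-up examples."""
import subprocess
import sys

SRC = "/raid/"                    # trailing slash: copy contents, not the directory itself
DEST = "server2:/raid-mirror/"    # hypothetical peer server and path

rc = subprocess.run(
    ["rsync", "-a", "--delete",   # preserve permissions/times, drop files removed at the source
     "--numeric-ids", SRC, DEST]
).returncode
sys.exit(rc)
```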

    Dual Xeons for the servers is most likely best. As for waiting for AMD's Hammer, that has been postponed damned near indefinitely; I have heard 3rd or 4th quarter.

    When I worked for Cisco this is how they did it, and they snapshotted the desktops too.

    The servers: build them to the teeth, MAX RAM, dual or quad Ethernet NICs. Then bond the NIC ports as needed and load-balance as needed. Set up some basic SNMP package with an e-mailer to let you know when boxen are burping.

    Careful not to overdo it on the SNMP; it can burden your servers or your network. Stick to just the essential info; the books will clue you in on this.
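
    The "e-mailer" half of that advice needs nothing exotic. As an illustration of the alerting part only (not real SNMP, just a local load check), here is a minimal cron-able mailer; the threshold, addresses, and relay host are invented for the example.

```python
#!/usr/bin/env python
"""Mail the admin when the 15-minute load average looks wrong.

A stand-in for the SNMP-trap-plus-e-mailer idea: thresholds, addresses,
and the relay host are assumptions for the sketch."""
import os
import smtplib
import socket
from email.message import EmailMessage

THRESHOLD = 8.0                      # alert above this 15-minute load average
ADMIN = "admin@example.com"          # hypothetical recipient
RELAY = "mailhost.example.com"       # hypothetical SMTP relay

load15 = os.getloadavg()[2]
if load15 > THRESHOLD:
    msg = EmailMessage()
    msg["Subject"] = f"{socket.gethostname()}: load {load15:.1f} over {THRESHOLD}"
    msg["From"] = f"monitor@{socket.getfqdn()}"
    msg["To"] = ADMIN
    msg.set_content("Box is burping; go look at it.")
    with smtplib.SMTP(RELAY) as smtp:
        smtp.send_message(msg)
```

    Cron it every five minutes on each server and you have the crude version of what a proper monitoring package automates.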

    Don't bother with the expense of RDRAM; go DDR and use the extra money to buy more of it.

    Hell, use the extra money for an extra server.

    Fast RDRAM costs almost triple what DDR does, and RDRAM only outperforms it in select apps.

    Price-compare here: http://www.pricewatch.com

    I'd recommend a top-of-the-line Ethernet switch; after all, what good are your servers if the network is crap?

    Consider fiber GBICs from the servers to a blade on a nice Cisco switch.

    Gigabit Ethernet over fiber is a beautiful thing to behold.

    Consider a gigabit link from server to server for the backups so they do not load the network.

    You can just use a crossover cable if you use gigabit over copper.

    Cisco is expensive as hell, but they are good. Juniper and Extreme are good too, as long as you are just running one protocol and not trying to make a hybrid multi-protocol network.

    The "hire a real sysadmin" statement is true, unless you are one to like HUGE new challenges.

    If you are stuck with this, you need to do a LOT of reading; O'Reilly has some good books, but there are others you will need as well.

    Don't skimp here; read the highly recommended Unix bible and any books it recommends.

    The Unix admin's guide too, but these alone will not be enough.

    You are about to read several thousand pages of material; you might point that out to the ppl that dumped this on you.

    Software for the servers: I'd do a lot of research. I have no recommendations, I am a hardware guy. Linux of course; I am partial to Red Hat.

    As for Sun boxes beating x86 boxes... well, yeah, sure, but for the cost of one 4-processor Netra 1400t with 16 gigs of RAM you can build many x86 boxes and use something like a Beowulf cluster or www.mosix.com.

    When it comes down to $$$'s, x86 is gonna win; if you want support, and someone to hold your hand and be there 24x7x365, go Sun.

    Sun support, parts, and just about you-name-it is mucho dinero. I think if you get your learn on, you can better spend the money elsewhere.

    The learning curve on this is going to look like the combined elliptical orbits of every planetary body in our galaxy.

    Network security??? Call in a well-known expert and have them set up a plan; follow it religiously or get hacked.

    Security is almost becoming a science unto itself: a good firewall, well set up and maintained.

    IP access lists in your Cisco, or other managed router or layer 3 switch.

    Oh, and if you're religious, you might pray.

    If you have any specific questions just e-mail me at my addy on the webpage below.

    If I do not know it, the *nix wizards that taught me will for sure. I am still learning myself, but if you're a REAL IT person you always will be.

    Peace...
    Ex-MislTech
    http://www.geocities.com/duanenavarre
    • When I was much younger I too loved keeping up with what parts are good, or fast, or whatever...

      But you spend a lot of time keeping up and matching parts together. Just get your x86 hardware from an A-brand and get their server-grade stuff. (Good examples: NEC (good service!), HP/Compaq, even Dell.)

      Nowadays, in many applications they can even outperform brands like Sun (with quality RAID controllers!).

  • Linux is clearly the best solution. Now, tell me why!
    • Better is relative...

      Linux is cheaper, and is more flexible.

      It is open source, and it is free except for the learning curve and the cost of migration.

      Migration even in the M$ world can be painful with their own damn OS; I have done a few of those as well.

      1. Cost up front; TCO is variable. (If you KNOW Linux well you can get low TCO. If you do not know Linux well, you won't.)
      2. More flexible (fewer lawyers).
      3. Open source (you get the community).

      Sun has five- and six-nines reliability machines for telecom networks that cost as much as a Porsche apiece.

      If you want to compare $1,500 x86 boxes to these high-end, high-reliability hardware/software systems...

      Linux is gonna lose.

      But dollar for dollar, the Linux system is going to win. There is PC hardware out there that is more reliable than most, but does not carry a HUGE markup for it.

      www.google.com runs on x86 hardware and Linux, and www.mosix.com does as well now.

      There are a LOT of Squid boxes out there now, and no telling how many Linux/BSD firewalls and routers.

      Linux is building a pretty strong case here.

      The city of Houston chose Linux over Sun, and so did Oracle just recently; hardware too, they went with Dell.

      Sun's day in the sun may be at an end.

      The bottom dollar may get them laid off, just like it has many of my friends.

      I wish them the best of luck.

      Peace...
      Ex-MislTech

  • Hi. Hardware wise you are heading in the right direction. Anything that you don't have to pay for is good (loaner machines right?).

    As for your NIS question, I would be very tempted to use LDAP. NIS is horribly horrible, whereas LDAP is much easier to understand, implement, support, and interoperate with.

    Check out LDAPGuru [ldapguru.com] and OpenLDAP [openldap.org].

    As for the hardware, go for the biggest, baddest you can. Assuming you use RAID on your servers (make it hardware RAID), can you survive with only 72 GB of storage?

    Anyway, I'll have a think about your hardware some more.

    cheers, Tim

  • by duplicate-nickname ( 87112 ) on Saturday March 08, 2003 @07:06PM (#5469010) Homepage
    LOL...these are the type of people Windows admins have been putting up with for years, and now you *nix guys can start dealing with them.

    "Hi, I was a desktop support tech, now I have been thrown into the job of managing our Windows network, how do I install that Active Directory thing?"

    Windows has had the burden of bad, inexperienced sysadmins for years; now Linux can share in the joy as it's more widely deployed.
  • Don't try to do everything at once. Find some server function you can move to Linux and move it. Then, see how it goes. Then, after a couple of months, do another. If your software is compatible between Sun and Linux, move over those users that want to move, again, slowly, one-by-one. This will give you and your users time to adapt.

    And if it isn't broken, don't fix it. That little Sun sitting in a corner running some server all by itself, just leave it alone until it starts causing problems (bad hardware, can't access NFS, whatever). Then, deal with it.

    The other thing that goes into an "enterprise network" is lots of diversity. Enterprise networks pretty much always have a diverse mix of machines in them.

  • by ameoba ( 173803 ) on Sunday March 09, 2003 @12:43PM (#5471445)
    If your current Sun servers are handling the load properly, don't mess with them. They're probably upgradable to bigger disks and more RAM if they're getting worked, but for mail/NIS/etc. you don't really need a lot of horsepower for only a few dozen people. And don't forget the immeasurable bonus of having something that actually works.

    As for the desktops, if you're careful, you can -stay- with Solaris _AND_ switch to fast, cheap, x86 hardware for the workstations. You might be stuck with Linux on the Itanium compute server (which is only really going to be useful if you get >4GB RAM...), but you can keep the desktops virtually the same (assuming your software has Solaris x86 support).

    If you're not really a 'qualified' admin, I'd try to change as little as possible. Doing LFS for a handful of compute servers is pointless; take Slackware or Debian, do a custom kernel compile, remove some unneeded packages and services, and then recompile a few key apps with excessive optimizations. You'll save yourself a load of time, have a system that actually works right, and the engineers won't notice the difference. It might be different if you were building a cluster or something, but it's not worth your (or the company's) time in this situation.
    • If you're not a qualified admin, why would you be recompiling anything? Why would you consider LFS? Buy RH8.0, buy a service contract, insert CDROM, install, profit!

      The old saying - Linux is only free if your time has no value.
  • 1. don't buy an Itanic, if you're going with Opteron for its ultra-fast RAM ( compared with Itanic ) and drastic cost-effectiveness ( ditto ), an Itanic won't show you whether Opteron'd be a good match: the architectures are totally different.

    2. RAID storage: don't buy Promise 'raid' cards ( and DON'T do 'raid' 0/1, do RAID-5 ).

    Why? ..
    1. it ISN'T possible to use S.M.A.R.T. diagnostics in your drives with the Promise ones, at least ( you'll crash the PCI-bus, hanging, fatally, the machine, using Promise chips .. don't know about Highpoint or Adaptec ), and...
    2. they oppose Open Source drivers, and coders, for their own products [kerneltraffic.org].

    Highpoint has only SuSE 7.3-8.0/Redhat-whatever ( IIRC ) drivers for their fast 1520 cards, but if you want compute performance, you want Gentoo... ( and SuSE's been at 8.1 for ages, now... )

    Adaptec? I don't know if their cards have the same issues as the Promise/Highpoint, but their cards compete with Promise's, and so probably cut corners in similar ways ( I'd love to see hard data on that point, though )...

    3ware [3ware.com] are the only cheap ( compared with SCSI ) RAID controllers I know-of, that offer bootable, real, actual, S.M.A.R.T.able RAID on ATA drives.

    ( I'd stick scads of 120GB IBM 180GXPs on 'em, because they're cooler-running than the 180GB versions, and better than most other drives: fast, quiet, reliable-looking, etc .. quiet means, to me, that wear&tear isn't happening as much, though I wonder-at the No-Seagate rule expressed earlier... is it that fluid-bearings fail soon? or that Seagate has worthless support from our perspective? )

    3. SuSE or Gentoo are really your only choice, that I can see.

    Why? .. 1. Redhat's trying to microsoft linux, by ignoring standards and making its way law, and Mandrake's .. a flaky ( though fast ) variant originally based on Redhat... I'm fed-up with both, but YKMV ( metric, here )..
    2. SuSE includes damn-near every program-capability one could imagine, and has excellent hardware support ( beyond any others' )..
    3. Gentoo's compiled specifically for the hardware you are running, and with --buildpkg you get to build on one, then copy all the tbz2's built, to all of the other ( identical ) machines, and just install 'em, and voilá: ultra-performance.

    Misc Links:

    Chassis [calpc.com], suitable for lots-of-drives NAS type thing.. or this one [skyhawkusa.com] for well-cooled system ( thick aluminum's a good conductor of heat, and that makes for a longer-living, less-downtime machine )

    I'd use Athlons, but that's just me ( Intel's murdered/crippled WAY too many CPUs, and chipsets, for me to be loyal to them ), and would use these HSFs [thermalright.com] with Verax.de [verax.de] ( or Panasonic Panaflo ) fans on 'em, just because the noise machines make increase sick-time and reduce health/sanity/productivity so damn much.

    Consider using P/Ss like these [enermax.com.tw], remembering that 1. they're REALLY quiet only when running at about 50% load, and 2. the UPS-VA-rating you need for each one is DOUBLE the delivered-watts rating of the P/S.
    Also, you want LINE-INTERACTIVE UPSs on all machines. ( NO data-corruption due to brown-outs or other glitches ).

    I'd consider dual-CPU machines standard for the desktop, simply because even if a CPU was saturated, on that machine, the machine'd still respond, and I'd stuff as much quick RAM into it as I possibly could ( 3GB/desktop, for engineers ), and I'd ALWAYS use ECC RAM.

    Consider this board [tyan.com] as something to compare against, with Something Like This KVR266X72C25/1G [valueram.com] or this [crucial.com] times 3 of 'em, per motherboard.

    Like the Marines: Capability-based, not capability-choked, right?

    The best advice I've seen on this page is

    1. get a GOOD admin ( character, more than anything, values, sanity, cultural-harmony-with-you: you CAN change someone's skillset, you CANNOT change their nature ), and

    2. metrics, understanding precisely what 'success' means, what the context is, etc...

    3. do it one unit at-a-time

    Oh, yeah, here's [amdboard.com] an Opteron-board news link... ( I'm waiting for lots-of-SATAs-on-board )...

    Finally, change the ferro-resonant ballasts in your fluorescent lighting to RF ballasts, and switch to Philips TL-930 4' fluorescent tubes ( Colour Rendering Index of 95, rather than the cheap-cool-white CRI 50!! ), and your health will improve, significantly ( you can then ask for a raise, for your increased effectiveness, see )... if you find the warm-white of the TL-930s ( 3000K ) not brilliant/awakening enough, then mix-in a couple of TL-950s ( 5000K, mid-day-sunshine/sky colour ), to punch-up your alertness.

    More info here [www.akva.sk]

    • Damn, sorry I forgot:
      IF you CAN find 'em, you can also use the Silicon Image SATA chip based motherboards/add-on-cards with linux ( the 2.6 kernel's going to be fully supporting 'em, though for the 2.4 kernel, 3Ware's your only open-source choice, it seems, UNLESS you can get drivers specifically for that SATA board from somewhere )

      Reason for using SATA rather than normal/parallel ATA? Very Low CPU Usage, that's why..

    • Fucked-In-The-Head mistakes I make when annoyed/tired:

      THIS [tyan.com] is the board I was trying to recommend you try in your prototyper machine...

      Why?

      Athlon's floating-point-optimized CPUs are, I gather, drastically faster than Intel's streaming-multimedia optimized CPUs in most engineering stuff, and the DUAL CPU 'board will mean the machine still responds, even when one CPU's saturated.

      Why'd I recommend 3GB? because you can't functionally get 4GB into 'em: the PCI devices eat about .5GB, so 3.5's as high as can sanely be got.

      Sorry I can't provide the link to the quotes/benchmarks of that chip-designer guy who'd compared both Intel 'boards and AMD 'boards, but .. damn, it was significant difference, between 'em.

      Also, I'm REALLY recommending/seconding that advice that you take it one unit at a time, but amplifying on it: build prototypes so you understand the 'gotchas' involved, and are able to get hard data on the different subsystems in your intended answer.

  • I'd recommend IDE RAID over SCSI RAID. There are a number of hardware IDE RAID cards on the market, and it works out much cheaper.

    Reasons why people prefer SCSI:

    • SCSI is built for hot swapping. But you can get IDE drive holders with the extra circuitry for not very much. (Beware: hot swap hardware does not imply hot swap software. But your RAID card should sort that out).
    • SCSI drives are better made (at least sometimes). But so what? These things are going in a RAID array. If one fails you pull it and plug in another. If this still bothers you then configure RAID 1 with 3-way duplication instead of 2-way. It will still be cheaper than SCSI, especially after you add in the cost of the replacement drive you have to keep on hand.
    • SCSI drives are faster. But IDE isn't that much behind. Again, you can configure 3 or 4 way striping for RAID 0 instead of just 2 way. This helps on data throughput but not latency.

    Avoid cheap IDE RAID cards: they are often just conventional IDE cards with software RAID drivers. Take a look at the 3ware cards instead. And see if you can get Serial ATA cards and drives to cut down on the ribbon cables.
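
    For what it's worth, "software RAID drivers" on Linux usually means the kernel's own md layer driven by mdadm, which is respectable in its own right. A hedged sketch of building a 4-disk RAID 5 set that way; the device names are placeholders, and this will destroy whatever is on those disks, so treat it as a sketch rather than a runbook.

```python
#!/usr/bin/env python
"""Build a 4-disk Linux software RAID 5 array with mdadm and put a filesystem on it.

Illustrative only: /dev/sdb..sde and the mount point are assumptions."""
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]   # hypothetical member drives
ARRAY = "/dev/md0"

def run(cmd):
    # echo each command before running it, and stop on the first failure
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# assemble the array: RAID 5 across the four disks
run(["mdadm", "--create", ARRAY, "--level=5",
     f"--raid-devices={len(DISKS)}", *DISKS])
# filesystem and mount point
run(["mkfs.ext3", ARRAY])
run(["mkdir", "-p", "/srv/raid"])
run(["mount", ARRAY, "/srv/raid"])
```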

    BTW, if you do the sums then you will find that the most cost effective backup solution per gigabyte for media only (never mind buying the tape streamer) is a collection of IDE drives.

    Paul.

"When it comes to humility, I'm the greatest." -- Bullwinkle Moose

Working...