Data Storage Hardware

Compelling Alternatives to RAID Setups?

jabbadabbadoo asks: "Our software shop has about 30 Linux servers and 15 NT servers running enterprise applications for our customers. Since we have service level agreements with most of them, uptime is crucial. One of the things we've done is to use RAID setups extensively, using products from well-renowned disk and controller vendors. However, we have discovered the paradox that introducing RAID controllers actually reduces overall uptime! Not only does more 'steel' increase the probability of failure, but what fails first is usually the RAID controllers. What is your experience? Have we been having bad luck?"
"A related problem, especially on Linux, is that setting up RAID is actually quite a costly process. There seem to be endless problems with library versions, and upgrading existing servers simply takes too many hours. To keep the customers happy, we routinely have to create a 'shadow' server while upgrading which in turn means we, at some point, have to synchronize data to the new server, which in turn means a bit of downtime. Ouch. Does anyone have a good solution to these problems? Of course, cost is a major issue, but so is uptime (which also means cost if we don't provide the uptime dictated in the SLA). What setup gives the best cost/uptime ratio? Thanks for any thoughts!"
  • RAID is good (Score:2, Informative)

    by kansei ( 731975 )
    I remember swapping quite a few Compaq RAID controllers in my day. They wouldn't fail outright, but would go into a "compromised" mode, and you usually had enough time to schedule downtime to swap them out. This was much better than messing with software mirroring or RAID settings, because it's transparent to the OS - the OS just sees a single large disk.
  • We run about twenty systems with RAID storage devices (about half of them fibre channel). I've only had one system go down due to the storage device, ever. I think the power supply on our FC bays failed once or twice, but they have backup PSUs, so it wasn't a problem (hot swap, even!).
  • A few tips (Score:5, Insightful)

    by menscher ( 597856 ) <menscher+slashdot@u i u c . e du> on Tuesday April 20, 2004 @09:50PM (#8924570) Homepage Journal
    First off, you're looking at the wrong "uptime" number. Don't look at how many days since your last reboot. Look at how many hours/year you are offline. If you're not doing raid, a failed disk means restoring from backups. That's a time-consuming, and therefore costly, process. If your controller fails, just pop in your spare controller. You do have a spare in-house, don't you?

    I'll agree that setting it up is a nightmare. I'm currently helping test two 4TB arrays for use on a Linux box (16 SATA drives presented as a single SCSI device). Benchmarks under Linux are slower than under Windows, and it's a mess figuring out why. Meanwhile, vendors (whom I will not name [dell.com]) ship crappy software, and take months to act on bug reports.

    As for transitioning servers, I've been there too. And yes, copying a terabyte of disk in one go is a very long process. It would have taken several days, which is of course unacceptable. This is where the magic of rsync comes in handy. Copy the data over several days in advance, sync it just before the scheduled downtime, and you'll have a fairly short outage.
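    A minimal sketch of that rsync workflow (hostnames and paths here are illustrative, not from the setup above):

        # days in advance, with the old server still live: seed the new box
        rsync -aH --delete /srv/data/ newserver:/srv/data/
        # ...repeat nightly; later passes only move what has changed

        # in the maintenance window: stop whatever writes to /srv/data,
        # run one final (now small) pass, then point clients at newserver
        rsync -aH --delete /srv/data/ newserver:/srv/data/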

    • Re:A few tips (Score:1, Interesting)

      by Anonymous Coward
      I really don't understand some folks' fascination with SATA on "servers".

      SATA is designed for desktops. SATA drives don't meet the MTBF of the equivalent SCSI drive, nor its performance.

      If you've chosen it because it's the cheaper of the solutions, ok... if you chose it for performance... well, make sure you have a good backup solution.
      • As I said, it's 16 SATA drives in hardware raid5 that presents itself to the server as a single scsi device.

        Yes, they have a lower MTBF, but it's in a raid array, so who cares? It'll automagically rebuild on our hot spare, and when we wake up to the email, we just go in and replace the downed drive with another.

        And yes, they have slightly lower performance. But the real performance reason for using SCSI instead of IDE is that it offloads the work to the SCSI controller/disk instead of wasting CPU time.

        • And you can buy enterprise grade drives now such as this drive [wdc.com].
        • When that slow 250GB ATA class drive is dead, and while its fellow drives are chugging their little hearts out (and probably maxing out that 3ware controller), how long will it take to rebuild your array?

          Have you tested how long it takes? Probably upwards of 24 hours if your system is moderately loaded.

          Guess what you have now? The marvelous opportunity for a CASCADING FAILURE!

          That's right kids! Because you just had a drive fail, and all the other drives are doing double the work to rebuild from pari
          • You sound like a sales guy pushing SCSI.

            I've had a 250gb SATA drive fail on a 1TB array on a 3ware card. It was about 4 hours for it to rebuild. The system was slower, but we didn't have 'cascading failures.'

            In fact, the only time I've experienced 'Cascading Failures' was on an expensive Mylex SCSI raid controller. There is nothing like saying "Shit, we just lost two drives on that raid 5."

            Banks can continue to use SCSI, but I'm going to use SATA everywhere. It'll save me over half the cost for the same
          • We use RAID 0+1 for our large DAS. Our system guys seem to think that it's safer than RAID 5. We've spent the last couple of years migrating a lot of our storage to SAN, though, and I'm not sure if the 0+1 methodology got migrated along with it. Seems unlikely.

            If you spread your RAID 5 over sixteen volumes (as someone upthread said they did), it seems to me that any individual drive failing wouldn't incur a ton of work on the rest of the drives, because the amount of data any one of them would have to c
        • Wrong, or half right, which is sometimes worse than just wrong. The main reason for using SCSI over IDE technology is simultaneous command queueing. Ever wonder why a SCSI drive makes a machine, server or workstation feel so much faster?

          It's because even workstations do simultaneous read requests. SCSI has this great feature that basically when you request data from 3 different sections of the drive, it reprioritizes on the fly, picking up everything you requested along the way to the furthest request.

    • Would you mind detailing the hardware and software being used for those 4TB arrays? In particular, what kind of drivers do you have to use and what kind of monitoring of the array do you have? I'm just getting started building a 1TB array using a 3ware Escalade 8506-12 card and 5 250GB Western Digital SATA drives. Right now it's running under Win2k and the 3ware software is helpful, providing an alarm app, web interface for monitoring and configuration and the ability to send alert emails. They have drivers
      • Re:A few tips (Score:3, Informative)

        by menscher ( 597856 )
        The 4TB arrays are units we're evaluating (one from Excel, the other from RaidKing). They're just rack-mountable boxes that have a scsi uplink. So, as far as the computer is concerned, you just have one massive scsi drive. (There's a catch, which is that these units can't seem to have more than 2TB per "device", so you really get two scsi devices presented to the computer.)

        Life is made a little annoying by the 2TB limit in the 2.4 kernel. But we're willing to live with that, for now. I'm told there ar

        • Thanks for the reply. I'm guessing the Excel is the SecurStor 16 SATA RAID [excelmeridian.com] and the RaidKing is the RAIDking 827 [raidking.com]. The Excel site provides some info about their monitoring software, RAIDWatch. RAIDking doesn't say anything on their site about monitoring. What's the point of having redundant disks if there's no reliable way of being notified when one fails?

          I wouldn't have a problem with the 2TB limitation, I've been thinking that I'd make each array no bigger than 1TB anyway (or 5 drives as RAID5, whichever
          • Those are the exact units we evaluated. We chose Excel for several reasons, one of which was the RAIDWatch software, which seems to work perfectly. We got it stocked with 16 x 250GB drives (WDJB type), and 1GB of cache. About $10.5K for that package.

            But we are still confused about many of the benchmark results we get, in Windows 2003 and RHEL3-AS. I'd love to speak with someone who has a lot of experience running benchmarks on RAID arrays, especially if they use SATA-SCSI enclosures, from any manufacturer.
            • 16x250GB for $10,500 is $656.25 per disk which is not bad, especially when you take the cache into account.

              The Gateway 840 would be $6,549 (if you bought the disks separately) 12x250GB ($545/disk) but that's with only 12 bays and 256MB cache. It uses StorView Storage Management from nStor [nstor.com] (the 840 is probably a re-branded version of nStor's NexStor 4700S [nstor.com]). Does anyone have any experience with StorView? It only lists RedHat as a supported Linux distro but again I'm wondering if that really matters.

              The App
        • I would also recommend NexSan's ATA-boy. Their ATA-beast sucks on performance, but hasn't let us down. Their ATA-boy has decent performance, has a nice footprint and is competitively priced.
    • RAID 10? (Score:4, Insightful)

      by b!arg ( 622192 ) on Wednesday April 21, 2004 @05:57PM (#8933958) Homepage Journal
      If uptime is so absolutely crucial, how about a duplexed mirror of RAID 5 arrays: two controllers, each with its own RAID 5. When in doubt, throw more money at the problem. :)
  • Brands? (Score:5, Informative)

    by JLester ( 9518 ) on Tuesday April 20, 2004 @09:58PM (#8924653)
    You don't list what brand controllers you are using, but your problems are not typical in my experience. We are a 100% Compaq shop and use their SmartArray controllers with Novell Netware and Debian Linux. We've never had a controller failure and have only lost about 3 drives over the last six years or so.

    I'm a firm believer that you get what you pay for with enterprise-class servers. You shouldn't expect Tier-1 reliability from servers that are built with commodity hardware. There is a reason that Compaq/Dell/IBM servers are more expensive.

    We also haven't had any installation issues, other than the default Debian boot disks not supporting the SmartArray controller. A custom set of disks took care of that though.

    Jason
    • I'll second that. We have close to 150 Compaq ProLiant servers with SmartArray controllers [different versions] and had no more than 3 controllers fail in the last 5 years. Drives failed quite a bit but then again, just pop a new one in and use the warranty to replace the dead one. BTW, all 3 of the controllers that failed were the "embedded" kind, none of the "add-in" boards failed.

      We are using Novell Netware as well, so this is more of a comment about hardware reliability rather than software woes.
  • Software RAID? (Score:4, Interesting)

    by Marillion ( 33728 ) <ericbardes&gmail,com> on Tuesday April 20, 2004 @10:04PM (#8924694)
    I've been using linux software raid with an old non-raid symbios scsi-3 card. Performance isn't a requirement in this environment so the penalty (which isn't that much actually) is acceptable.

    In the past two years, none of the "downtime" that I've experienced has been attributed to the disk array or controller.

    The biggies have been: a power outage that exceeded the capacity of the UPS (3 hours), planned upgrades, and an anonymous gremlin who bumped the reset button (since detached).

  • So would XSan help? (Score:5, Interesting)

    by 2nd Post! ( 213333 ) <gundbear.pacbell@net> on Tuesday April 20, 2004 @10:07PM (#8924713) Homepage
    XSan [apple.com] can 'hide' the complexity of RAID, as well as providing management tools and 'intelligent' cascading failure... but that's just from reading the specs, not from actual experience. I hear XSan is based on CVFS? I should look at that too.
    • Don't believe the marketing.

      From what I read, the XSan software is first and foremost a distributed file system for shared volumes from the Xserve RAID.
      If you look at the applications, it's about multiple servers or workstations with concurrent access to a single volume - distributed file locking.

      Great stuff for the stated purpose, can't wait to get my hands on it!

      Hiding the complexity of RAID is the domain of storage 'virtualization' solutions. The ones that let you mix and match raid types across any
  • by Futurepower(R) ( 558542 ) on Tuesday April 20, 2004 @10:08PM (#8924724) Homepage

    This is on a lower level than the RAID you are using, but we are having major problems with 10 Promise Technology TX2000 mirroring RAID controllers that we bought. The mirrors go critical for no detectable reason. Promise Technology technical support is unable to find the problem, and the company seems unable to escalate the issue: the Promise technicians escalate it, but second-level technical support never calls back.

    Promise mirroring controllers on ECS (EliteGroup) L7VTA v 1.0 motherboards have the same problem. When we call ECS tech support, there is a recorded message saying they are busy and to call back later.

    We've been supplying computers with Promise mirroring RAID controllers since the company began doing business, and we've had very few problems until now.

    Possibly the problems are associated with newer, faster motherboards, or with VIA-chipset motherboards for AMD CPUs. We've never had problems with RAID controllers on Intel-chipset motherboards.

    Another possibility is that the RAID controllers are incompatible with DVD burner drivers that are installed with Roxio or Nero DVD burning software.
    • I've got an ancient Promise FastTrak66 in my desktop PC. I can attest to it being a potential source of problems. I haven't had any specific crashes since installing it, but I can tell it's not playing nice with IRQs and whatnot (i.e. the mouse locks hard for a moment when doing big-time disk thrashing). I could see this causing problems with PCI cards (network adapters / other raid controllers). Luckily for me, the only other card in my rig is the AGP video, and games usually don't thrash during fps-se
    • by GoRK ( 10018 ) on Tuesday April 20, 2004 @11:44PM (#8925312) Homepage Journal
      There is a very important thing that you have not realized...

      Those are not really true hardware RAID controllers. They are regular hacked up IDE controllers with a bit of BIOS firmware on them that handles software RAID via INT13 until the OS loads and the software RAID in the "driver" can take over.

      They offer nothing that a legitimate hardware raid setup should give you such as cache RAM or CPU offloading. Mirrored setups on these types of pseudo-hardware RAID controllers HURT PERFORMANCE. Don't believe me? Benchmark it yourself versus software raid and hardware raid on a real controller such as Adaptec AAA or 3ware...
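      A crude way to run that comparison yourself (paths are illustrative; a proper test would use a benchmark suite like bonnie++ against each configuration):

          # rough sequential write test; fdatasync makes dd wait for the data to reach disk
          dd if=/dev/zero of=/mnt/array/testfile bs=1M count=2048 conv=fdatasync
          # rough sequential read test; beware of the page cache inflating the numbers
          dd if=/mnt/array/testfile of=/dev/null bs=1M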

      • The performance we get with Promise controllers (when they work) has been satisfactory. The application is a cash register; the computer is always faster than the operator. We only need a mirror copy of our data.

        3Ware told me they cannot boot from one drive, after one fails. A 3ware formatted drive cannot boot from the IDE controller on the motherboard. Promise can do both. We need features, not performance, in this case.

        Do you have a link to an Adaptec IDE mirroring RAID controller you would recom

        • Question: Are Adaptec ATA RAID 1200A cards the same as HighPoint RocketRaid 133 cards? I notice the BIOS setup screens look identical.
        • I don't understand why you'd need to boot from a RAID'd drive disconnected from the RAID itself; if the RAID is in RAID5 format or something, this would be impossible anyhow! Unless you're just RAID1-ing everything??

          Even then, why can't you just plug into a 3ware again and get the data off the drive?

          The Adaptec you linked to is another one of those software-driven RAID cards, and offers no real value.

          You're going to have to spend $200-$300+ to get a decent RAID hardware card, and then make sure you have
          • You never know what kind of crazy situation you're in. What if you have no spare 3ware cards with you? What if it's 10 years later and 3ware IDE cards are ancient history? What if you want to recover data off a recovered drive for some reason and you just want to put it in a USB box?

            It's a nice backstop that Promise "arrays" are still accessible with conventional hardware.

            This author is talking about cash registers - where they're likely out and about all day from place to place doing a service run with
            • What if it's 10 years later and 3ware IDE cards are ancient history?

              That is my primary reason for using software raid. The other reasons being that a raid card is much more expensive than an ordinary IDE controller, and I have read more than once, that it is really still software raid. My setup with three 120GB disks and identical partitioning of all three disks goes like this. One 31MB /boot partition, one 31MB FAT partition (just in case), one 627MB partition for /, one 2GB partition for swap, and the
              • Well... yep, for 10 years in the future, Promise and software would have the same "readability".

                We're using Windows 2000 & 2003, and trust me, it's simpler to use a Promise card to mirror the boot volume than to use 2k's software mirroring. Recently moved to SATA based drives.

                There are advantages to the Promise cards still - with the enclosures, you get hot-swap and you get status LEDs and whatnot. (We're hoping to be able to say to the person on the phone, "which one has the orange light?")
      • They offer nothing that a legitimate hardware raid setup should give you such as cache RAM or CPU offloading.

        Actually, they do provide one particularly useful feature and that is to present the RAIDed disks to the OS (and the BIOS) as a single device.

        Mirrored setups on these types of pseudo-hardware RAID controllers HURT PERFORMANCE.

        The software overhead for RAID1 should be, for all intents and purposes, insignificant. It just doesn't *do* anything that requires much CPU work.

        • The software overhead for RAID1 should be, for all intents and purposes, insignificant. It just doesn't *do* anything that requires much CPU work.

          What it does is blocking. A good hardware raid mirror will have a battery backed up cache so it can acknowledge the write as successful either immediately or after the data is on one disk, which a software raid setup can't do reliably. What you end up with is the additive rotational latency for two disks, which can significantly hurt performance for small random
        • Actually, they do provide one particularly useful feature and that is to present the RAIDed disks to the OS (and the BIOS) as a single device.

          The int13 firmware does this for the code before the OS loads and the driver is responsible for doing this job afterwards. Don't let this software trickery fool you. Behind the veil of the driver, the system software is reading and writing to the two disks individually. The int13 stuff is a nice trick, but it's only necessary due to the inability to replace the OS's
          • The int13 firmware does this for the code before the OS loads and the driver is responsible for doing this job afterwards. Don't let this software trickery fool you. Behind the veil of the driver, the system software is reading and writing to the two disks individually. The int13 stuff is a nice trick, but it's only necessary due to the inability to replace the OS's bootloader, otherwise they probably wouldn't bother with the difficult task of writing such firmware.

            I'm well aware of how the "trickery" work

            • The IO overhead will (should, at least) be the same whether it's hardware RAID or software RAID.

              On a real hardware raid controller this overhead exists only on the controller CPU (normally an i960 or somesuch) and is further alleviated by the cache ram on the card.

              Compared to what, though? OS-level software RAID is going to have to do precisely the same thing and IMHO the processing involved, taken in the context of modern, fast CPUs, is insignificant.

              Well, I wasn't trying to compare promise/hpt/et a
          • Well, it does at least twice the disk IO, plus it's (hopefully) doing consistency checking by comparing the two data streams for discrepancies.

            No commonly used software RAID does this.

  • by stienman ( 51024 ) <adavis@@@ubasics...com> on Tuesday April 20, 2004 @10:09PM (#8924744) Homepage Journal
    It's hard for me to believe that RAID causes more downtime than single drive setups, unless you have a really bad raid system and a really good backup system.

    The only time RAID should ever be down is during initial setup. Thereafter you should replace bad drives while it's running, and you should never have cause to shut it down due to a RAID issue.

    If you are experiencing RAID hardware problems then take a good look into these areas:
    RAID Hardware --> Are you using cheap stuff? It honestly isn't worth it. Perhaps you're just discovering the 'real' value of 'cheap' hardware.
    RAID Software --> If you're using unsupported drivers (i.e., the vendor doesn't supply or support them) then ditch the hardware and get hardware with supported drivers - make sure they support them on your configuration. You've already proven that you can't support them yourself.
    System Hardware --> If the system is generally cheap (cheap power, bad airflow, cheap components, etc) then you simply can't expect the RAID card to work 24/7.
    Server Room --> Make certain your server room can handle the power and ventilation needs of the servers. This should go without saying, but all too often it is the problem.

    The reason people go with cheap components is the lower initial cost. They only work for a few thousand hours of heavy operation. You must get server rated components if you want them to operate for more than a year or two. There really is a difference.

    Lastly, I use 20+ Promise FastTrak ATA RAID cards in 20+ Novell networks. I use cheap components, and they work in harsh conditions. They are not set up for hot-swap, as that's not a need in this situation. I have to replace the cheap hardware every 2-4 years, power supplies every year, hard drives every 2-3 years. The only time the RAID cards have gone bad is when a power supply failure (usually due to a power outage/surge/brownout) fries the motherboard and usually most of the components in the case.

    I have never had a failure where both HDs completely failed simultaneously, though usually when the rest of the computer goes I replace the whole thing and get the data off one of the old hard drives. This is not an advertisement for Promise. They simply are the only ones with supported Novell 3.12 drivers. :-) Soon to go away... :-(

    I'd be surprised if you've covered all these bases and are still having problems.

    -Adam
  • by wowbagger ( 69688 ) on Tuesday April 20, 2004 @10:10PM (#8924751) Homepage Journal
    There is an old saw in the aviation industry: "A twin engine aircraft will have twice as many engine problems as a single engine aircraft."

    However, which would you rather be in, a twin engine aircraft that just lost one engine, or a single engine aircraft that just lost an engine?

    Yes, RAID cards die - I've been shocked at how often that happens. And a 5-disk RAID will have more failures than a 4-disk JBOD (just a bunch of disks) array.

    But the question is, are you seeing a reduction in UPTIME, or just in mean time to failure? Maybe the RAID system throws an error once a month and the JBOD system throws an error every two months, but if you can recover in 5 minutes by swapping cards or drives rather than 5 hours for restoring the JBOD from backup, you are better off.

    Perhaps what you might look at would be using RAID software on the server's processor, coupled with Firewire drive bays, disks, and multiple Firewire cards. If you have a card die, move the disks to another card until you can schedule downtime. A disk dies, hot-swap and rebuild in background.
    • However, which would you rather be in, a twin engine aircraft that just lost one engine, or a single engine aircraft that just lost an engine?

      That depends, can the twin engine plane successfully land with only one engine running?
  • by duffbeer703 ( 177751 ) * on Tuesday April 20, 2004 @10:23PM (#8924831)
    The answer is SysAdmin 101 stuff.

    1. Buy quality hardware.

    IDE RAID for critical servers is a bad idea.

    In my experience, RAID hardware tends to be very picky and suffers from subtle and often bizarre hardware conflicts. In general, using a RAID solution that is packaged with the hardware is the best idea.

    If you cannot afford good RAID hardware, stick to conventional JBOD configurations.

    2. Configuration

    Design the configuration of your systems around consistency first, performance second.

    You need to document your procedures for building servers, allocating storage, etc. Create scripts whenever possible.

    If you are not confident that you could talk a marginally qualified technician through a server rebuild over the phone, your docs aren't good enough. If you don't have the time to write docs, make the time or work late.

    3. Backups

    You need documented, tested backup AND restore procedures. All of your on-call staff need to be able to restore a server.

    With 50 servers, disk controller or disk failures should not be a common event. We work with approximately 400 datacenter and 200 field servers (varying in age from 1-9 years), and replaced 3 controllers and 19 disks last year.

    Look for electrical issues, you may have crappy electrical service.

    • What RAID controllers would you recommend?

      What hardware is "quality"?
      • Where I work now we mostly use IBM hardware with ServeRAID controllers.

        In the past I've worked with Compaq hardware, which I believe shipped with Symbios controllers. (Been awhile)

        Sun storage was usually good, except that we tended to get a lot of flaky GBICs and cabling from them.

        "Certified" hardware really is important, especially in larger environments, where wasted time is more expensive than buying the vendor's recommened hardware. A good example is when due to a supply shortage, we ordered a differe
      • 3Ware Escalade series. Relatively inexpensive, rock solid, vendor Linux support.
    • Speaking as a small-time sysadmin myself, I disagree on principle--ATA Raid for critical servers becomes a great idea once you realize that you can buy 2 1TB-usable RAID5 machines with 3Ware ATA RAID controllers for less than the price of a single 1TB-usable RAID5 SCSI machine.

      Granted, I'm fully in agreement with you--SCSI is more reliable and better for processing servers etc. But when it comes down to a cost-effective way to get a hell of a lot of disk, I can heartily recommend the 3Ware Escalade stuff.
  • by MoOsEb0y ( 2177 ) on Tuesday April 20, 2004 @10:54PM (#8925034)
    I spent the past week and a half trying to set up a 4x160 SATA RAID-5. It was a huge exercise in frustration because every time I'd try to build a volume, my machine would promptly freeze after a few percent. I changed out IDE emulation for SCSI emulation in the kernel... same thing... I changed SATA controllers, same thing. I changed SATA cables, same thing. I changed power supplies, same thing. I added 4 80 mm case fans, same thing. In the end, it turned out that the culprit was raidtools. Nobody had ever bothered to post that raid-5 + raidtools + kernel 2.6 locks up a computer. I changed to mdadm, and I had a working array 50 minutes later.
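    For anyone fighting the same battle, the mdadm version of that setup is roughly the following (device names are assumptions; the config file path varies by distribution):

        # build the 4-disk RAID-5 and watch the initial sync
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        cat /proc/mdstat

        # record the array so it assembles at boot, and mail an alert when a disk drops out
        mdadm --detail --scan >> /etc/mdadm.conf
        mdadm --monitor --scan --mail=root --daemonise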
  • Storage Cluster (Score:2, Interesting)

    If your bandwidth requirements are not too high you may be able to use a distributed file system on many redundant (cheap IDE & gigabit Ethernet) nodes and allow for replacements. Your uptime should be constant, given enough UPS capacity and redundancy of nodes.
  • Look at Google (Score:3, Interesting)

    by Bruha ( 412869 ) on Tuesday April 20, 2004 @11:42PM (#8925303) Homepage Journal
    Their systems are probably 80% auctioned desktops and such from busted dot-coms, and I suspect that many of them are not RAID at all. I have yet to hear of a redundant raid controller either. Your best bet is just replication of data on your backend servers and using something in the nature of a Cisco CSS or some other services balancer device to handle keeping live servers available while redirecting away from dead servers.

    You can still do RAID with this setup but you'd have the added security of 2 or more systems making up your entire functional system so if one is down the other can continue normally. Then it's trivial to repair the dead machine and bring it back into the cluster.
    • Here's a description of the redundant raid controller I'm familiar with.

      The Compaq ProLiant 7000 (Xeon) I've got came with a Smart Array 3100ES, which does RAID on the 3 hot-swap SCSI cages, and there's a special slot on the I/O board for a second 3100ES, so that if one dies, I can just hotswap in a spare. (The two slots are PCI-X, plus 3 channels of SCSI that go to the SCSI cages, and a fourth SCSI channel for controller-to-controller communication.)

    • I have yet to hear of a redundant raid controller either.

      Well you obviously don't know anything about proper RAID then do you? All enterprise storage costing $25k or more has this option.

      The normal configuration is an array which has 2 controllers in it. You create LUNs and assign them to the primary + secondary controller. The primary + secondary controllers have a heartbeat, which ensures one takes over the other's configuration if it fails. You dual-attach your host to each controller. Set up IO
      • Well you obviously don't know anything about proper RAID then do you?

        Hey...be nice.

        The cheapest dual redundant RAID controllers I've seen are these, for about $10k all up; these are rebadged Infortrend devices, so something similar should be available wherever you are.

        Dell sells some cheapie dual/redundant controllers for well under $10K -- I know that they're available in their tower servers for sure.

    • Their systems are probably 80% auctioned desktops and such from busted dot-coms, and I suspect that many of them are not RAID at all. I have yet to hear of a redundant raid controller either. Your best bet is just replication of data on your backend servers and using something in the nature of a Cisco CSS or some other services balancer device to handle keeping live servers available while redirecting away from dead servers.

      There are whole classes of applications where that can't possibly work. If you'r

  • All things being equal in terms of build quality, the thing most likely to fail is the thing with the most moving bits.

    You say you've had more raid controller failures than disk failures. Did any of the raid controller failures require a restore from backup? A non-redundant-disk failure would have.

    Add up the total time you were down due to raid controller failures and the total time you would have been down for disk failures if you didn't have raid. That's a better measure than instances of failure.
  • by photon317 ( 208409 ) on Wednesday April 21, 2004 @12:44AM (#8925674)

    You can't slap a buzzword like RAID onto whatever you were doing before and expect results. Reliable systems have to be carefully and correctly engineered.

    From the sound of your posting, I'm assuming when you say you're using RAID, you mean internal RAID cards inside a server with internal disks attached, and relatively small amounts of it. In these types of scenarios, the highest performing, most reliable, and most cost effective option is to put two separate SCSI controllers in your boxes, buy twice as much storage as you need, and mirror between the controllers using the OS's software mirroring capabilities. You are now independent of controller failure, the controllers themselves are less likely to "fail" (which doesn't always mean hardware frying) than a complex RAID controller by their simpler nature, and you're getting the performance benefit of full mirroring instead of that clunky raid5 business. If you have enough storage to warrant four or more internal disks of some size, use mirroring+striping. Always mirror at the lowest level, and then stripe on top of that (in a 4-disk design it actually doesn't matter which way you layer them, but in 6+ disk designs it gives higher data availability in the unlikely event of multiple disk failures). Or in other words - raid5 and hardware cards = bad, mirroring/striping + software raid = good.

    Your goal is not to be buzzword compliant by slapping in a raid controller, your goal is to carefully analyze your systems, your options, your requirements, and your budget, and eliminate single points of failure everywhere that it's feasible and desirable to do so, starting with the lowest MTBF items in the system and working your way up. There are no magic bullet answers of course - change the situation and the "right" answer can change dramatically.
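    A minimal sketch of the two-controller software mirror described above, using Linux mdadm as one example (device names are assumptions; other OSs have equivalent volume-manager tools):

        # sda hangs off the first SCSI controller, sdc off the second;
        # mirroring across them means a single controller failure only takes out one half
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1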
  • If you want uptime for an enterprise, you have to use enterprise class storage products, or distribute the data the way Google does. There is a reason EMC and Hitachi and others can charge what they do for storage - you can't match the performance, uptime and features.

    Shadow copies? Look at SnapView and SANCopy in EMC's CLARiiON line - no downtime to create a copy. I would expect Hitachi and others to have similar features. There are a lot of used EMC disk arrays on Ebay and other places - just make sure yo
  • by caseih ( 160668 ) on Wednesday April 21, 2004 @01:39AM (#8925969)
    I don't see why setting up RAID under Linux should be any more time-consuming than under other OSs. Certainly if you use the right hardware-based RAID, things should be very simple and very fast.

    Bang for the buck, you can't beat the Apple Xserve RAID. They are IDE, but almost as fast as the fastest SCSI arrays, and seem to be very reliable. The array can be easily partitioned into a variety of RAID types with hot spares. The unit can then connect to Windows or Linux via a standard Fibre Channel interface and look like simple SCSI drives. The RAID is administered over an Ethernet connection using a nice Java GUI tool.

    We set our Xserve RAIDs up such that each array (each Xserve RAID box has 2 arrays with separate controller logic for each) is RAID 5 plus a hot spare, and then the array is mirrored with the other one. This gives us 0.8 TB or so at a very reasonable price, and it is very reliable. So far it has worked well.
    • [I haven't tried either of these products.]
      Gateway 840 Serial-ATA RAID Enclosure [gateway.com] is cheaper per GB than Xserve RAID. It has 12 bays and uses U320 SCSI instead of Fibre Channel for the connection to the system. Currently the cheapest config you can do is $4,749. That's with 4 250GB SATA drives and their cheapest 3-year warranty (another nice thing is you can increase the warranty to 4 or even 5 years, and they have a variety of response times you can choose). Gateway gives you all 12 carriers no matter how many
    • Bang for the buck, you can't beat the Apple Xserve RAID.

      Yes you can. Easily. Shop around even a little, you'd have to work pretty hard to find an ATA-based solution as expensive as theirs.
  • Perhaps doing RAID over network block devices [uc3m.es] would solve your reliability problem. NBD is designed for RAID, you distribute over partitions that are physically separate from each other on different machines and segments, you can do heartbeat, etc. Don't assume that this is necessarily the "cheap way out with cheap hardware". You can do this with fast hardware that's backed by hardware RAID too and use it in a network RAID 0, 1, or 5 scenario for example.
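    A rough sketch of that idea (hostnames and port are made up, and the exact nbd-server/nbd-client syntax varies between nbd-tools versions):

        # on each storage node: export a local partition over the network
        nbd-server 2000 /dev/hdb1

        # on the head node: attach the remote exports as local block devices...
        nbd-client node1 2000 /dev/nbd0
        nbd-client node2 2000 /dev/nbd1

        # ...and mirror across machines with ordinary software RAID
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nbd0 /dev/nbd1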
  • I maintain a large number of Dell servers and I have NEVER seen computers malfunction so often before in my life. Our desktops seem to be far more reliable. Try RAID-10 if you want belt and suspenders (two hardware RAID 5 arrays put together in software as a mirror set). Even better, try some kind of server clustering (Redundant Array of Inexpensive Servers?)
    • Uhhh, that isn't RAID 10. That isn't RAID 0+1. (Technically there is no standardized version of RAID 10; however, in my experience, that's not what the general public means by it.)

      RAID 10 is when you take 2n raw drives and build n mirrors (the RAID 1 portion of RAID 10). You then take the n mirrors and put them in a RAID 0 stripe.

      RAID 0+1 is less preferable, but is sometimes all you can do. Take 2n drives, and build two RAID 0 stripes of n devices each. Now, take the 2 stripes and mirror them
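      In Linux mdadm terms (devices purely illustrative), the two layerings and their failure tolerance look like this:

          # RAID 10: mirror first, then stripe the mirrors
          mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
          mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
          mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
          # losing sda and sdc still leaves each mirror with one good half, so md2 survives

          # RAID 0+1: stripe first, then mirror the stripes
          mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
          mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
          mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/md3 /dev/md4
          # losing sda and sdc kills both stripes, and the mirror with them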

      • The Dell RAID configuration calls two arrays of RAID 5 mirrored together RAID 10. It may be that no one else calls it that. I have never done it myself. I think two clustered servers is a better idea if you really need to avoid downtime. 2003 Server for Datacenters does this with automatic failover and I am sure there is some *nix equivalent.
        • Okay, now, I'm being more than a little pedantic... I've used Dell servers before, and configured their stuff. I've never seen anything that refers to what you call RAID 10 as RAID 10.

          http://www1.us.dell.com/content/topics/global.aspx/power/en/ps1q02_long?c=us&cs=555&l=en&s=biz [dell.com]

          That's a link to Dell documentation discussing the ins and outs of RAID configuration and reliability. Any chance you've got a link that shows where a mirrored RAID 5 configuration is referred to as RAID 10? I'm

          • On Dell's site they are calling RAID 10 a stripe set of mirrors. I could swear the PERC setup was doing the opposite, calling it a mirror set of stripes, but I didn't set it up that way so I am not sure. Regardless of what you call it, hardware RAID looks like just one HD to the OS, so you can take two or more HARDWARE RAID arrays and make a SOFTWARE RAID array out of them if you want to. I am tending now to just using two servers both running RAID 5. Too many single points of failure in one server no matte
  • We had some Compaqs that had Mylex 960s in them. Those things failed more often than not. We ended up just re-installing with software RAID or no RAID at all and it works better.

    Compaq's newer SmartArray controllers have seemed to be more stable... however, only time will tell.

    -un1xloser


    • If Mylex cards are failing, that's important! If a company's RAID cards fail, then the company, and all its employees, are out of business. And that's apparently what happened to Mylex [lsilogic.com]; it's now owned by LSI Logic.

      At the low end of the scale, we seem to be having the same kind of problem. We are having a high failure rate with Promise Technology FastTrak Tx2000 controllers. Promise Technology seems to have lost the will, or maybe ability, to deal with problems.

      When I read through the comments to this story, th
      • When I read through the comments to this story, there are a lot of situations where RAID cards are failing. But why?

        It seems that most (all?) of those stories relate to controllers doing IDE RAID. I suspect the answer to the question of why so many are failing is that it's still a relatively new technology, only really widely available in the last 18 months. SCSI RAID controllers on the other hand don't seem to be plagued with the same issues.

        This is a new problem. Did Microsoft do something to break m


        • Microsoft's solution is that everyone should buy Windows 2003 server and use software RAID, available only on that Windows OS.

          All we need is software RAID mirroring, but it doesn't make sense, for this application, to support a much more complex and much more expensive system to get it.
          • Microsoft's solution is that everyone should buy Windows 2003 server and use software RAID, available only on that Windows OS

            But Microsoft only recommends software RAID for small environments. They don't even use it themselves - they use massive HP EVA SANs.

  • If you're having problems with controllers, drives, enclosures, etc. going bad, then maybe you need to buy better hardware (i.e. more expensive hardware).

    I have been working with Compaq ProLiant servers for several years (support for Red Hat Linux is good) with nary a hardware problem.

    http://h18004.www1.hp.com/products/servers/proliantml530/index.html
    http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html
    http://h18004.www1.hp.com/products/servers/proliantstorage/drives-enclosures
  • If disk space and performance are not a problem (i.e. HD below 200GB, non-fancy single CPU), you could simply go with two (or three) cheap PC boxen instead of one "data center quality" RAID machine (for the same total price). If you mirror data+setup over from "production" to "standby" daily, any downtime due to any failure (HD, controller, mobo, OS, filesystem) can be minimized to 1-2 minutes (switch service over to the standby) - continuing with yesterday's data, which should be sufficient for most cases.
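    A minimal sketch of that nightly mirror as root crontab entries on the standby box (hostnames, paths and times are illustrative):

        # pull last night's data and application config from the production box
        0 3 * * *  rsync -aH --delete prod:/srv/data/   /srv/data/
        30 3 * * * rsync -a           prod:/etc/myapp/  /etc/myapp/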
  • Buy a NetApp Filer, mount it and use it for all your variable data. Get rid of the RAID arrays attached directly to the servers.
