No Hassle RAID 5 Implementations? 51

LambSpam asks: "I had a nightmare week (last week) with two of our servers running Intel's U3-1L RAID controller (RAID 5). Whenever there's a power outage in our building, these controllers randomly mark one or more of the drives in the array offline (even with adequate UPS support), which means I have to manually mark them online and/or rebuild. Intel acknowledged the problem, but their solution involves updating the backplane's firmware, the controller firmware (destructive upgrade!), and even the firmware on our IBM drives in the array because they 'draw too much power' in certain conditions. I've only used one other RAID 5 implementation (MegaRAID), and it NEVER had these kinds of problems, whereas if you sneeze too hard around this U3-1L card it will go offline. Is this common with most hardware RAID implementations? Which RAID 5 implementations work without hassle? What should I stay away from?"
This discussion has been archived. No new comments can be posted.

  • I've never had any problems with the PERC (PowerEdge RAID Controller) in the Dells I used to use for Sendmail servers. That kind of limits your choices, though...
    • Re:PERC? (Score:3, Interesting)

      by krangomatik ( 535373 )
      I haven't personally had any big problems with the PERC boards, although friends and co-workers always seem to have had bad experiences with them. I've had really good luck with IBM ServeRAID boards. We have quite a few of these in production boxes and haven't had any problems with them (the IBM hard drives, on the other hand... plenty of failures there). If your RAID problems are big enough that you're willing to put up lots of $$$ to get rid of them, you could look at buying a SAN or NAS. That way, in theory, you could have the vendor install and maintain the disks for you. Generally they seem to do an okay job. I must mention, however, that I have seen a vendor make an oops and drop power to an array while trying to fix a power supply problem. That took some time to get back online because the CE out on site wasn't familiar with that product and ended up having to get a senior CE to drive out and fix it. All in all it seems like the big boys (IBM, EMC, Sun, STK, etc.) are pretty good about keeping uptimes in the 99.99%+ range (I guess that's what you give them the big bucks for).
      • Re:PERC? (Score:3, Interesting)

        by AnalogBoy ( 51094 )
        Keep in mind there are a few different versions of the PERC, some better than others.

        Just a note on EMC: when I've had the joy of working with a Symmetrix, EMC has always done a wonderful job of never having any downtime. They would come out at any hour of the day or night to replace a redundant card or a spare disk that wasn't even being utilized. They always evaluate any changes before they are made. I'm sure it's possible for them to make a mistake, but for mass storage they're the ones I would choose.
      • Re:PERC? (Score:3, Interesting)

        by foobar104 ( 206452 )
        All in all it seems like the big boys (IBM, EMC, Sun, STK, etc.)

        Just FYI, Sun doesn't actually make their high-end storage product. I think they call it the StorEdge 9900 or something but it's actually a rebranded Hitachi Data Systems 9960.

        Funny thing about HDS. When you buy one of their 9960 systems (a minimum investment of about $250,000), you get a guarantee: if you ever lose any data at all on that storage system due to hardware or firmware fault, HDS will give you 30% of your purchase price back.

        According to one of the senior HDS VPs that I spoke to last month, they've never had to pay out that penalty clause.
    • Most PERC boards are AMI MegaRAID cards rebranded.
    • Count yourself lucky. I have problems with the PERC 3 boards in PowerEdge 6400s. If they run out of juice (power outages that last longer than the UPS), they forget about the array. The only fix is to remove the cache SIMM, power up, power down, and reinsert the cache SIMM.
      • Were these PERCs with or without on-card battery backup? I'll also point out that your UPS really should power your systems down when the UPS is at 25 percent charge, or (average shutdown time, including all services)*2, whichever is longer.
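That shutdown rule can be sketched in a few lines; the function name and all figures below are hypothetical, purely to illustrate the "25% charge or twice the average shutdown time, whichever is longer" policy:

```python
# Sketch of the UPS shutdown rule described above: start an orderly
# shutdown at 25% charge, or when remaining runtime falls below twice
# the average shutdown time, whichever fires first. All values here
# are illustrative assumptions, not vendor specs.

def should_shut_down(charge_pct: float,
                     runtime_left_s: float,
                     avg_shutdown_s: float) -> bool:
    """Return True when it's time to begin an orderly shutdown."""
    return charge_pct <= 25.0 or runtime_left_s <= 2 * avg_shutdown_s

# 40% charge but only 3 minutes of runtime left, and services take
# about 2 minutes to stop cleanly: start shutting down now.
print(should_shut_down(40.0, 180.0, 120.0))  # True
```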
  • Xiotech, affiliated with Seagate, offers the Magnitude in fractional-TB sizes, with highly customizable options. They're linked to servers with QLogic Fibre Channel cards and are easy to set up; they even have a knowledgeable tech come on site for the install.
  • IBM HDs (Score:1, Informative)

    by Anonymous Coward
    I've also had performance problems with IBM drives in RAID 0/1, and especially RAID 5, setups. I contacted IBM tech support to see if any of the settings could be tweaked, but the response was that the drives are not RAID-optimized. I switched to Seagate drives, and subjectively I would say the performance quadrupled under heavy load.
    • Re:IBM HDs (Score:1, Interesting)

      by Anonymous Coward
      I had two new IBM 36-GB drives fail this week on a Dell 2450 with PERC3/Si and RAID 5. Not good. I replaced 'em with Seagate 15k rpm drives and all is better and the performance of the machine seems better, too.

      FWIW, I've found the drivers for the PERC in FreeBSD to be far better than those in Linux.
      • It's worth noting that the linux drivers were ripped off of FreeBSD. I say ripped off, because that's exactly what they were, taken with no credit given to the author, in violation of the BSD license.
  • I've used 2 Voyager 3100s with the fibre module.
    Due to excessive server room heat, we did lose a drive, but the data was fine. While it has Windows software to monitor it when connected via SCSI, they didn't have anything for Unix, so configs had to be done through its serial port.
  • Tried Adaptec? (Score:5, Informative)

    by Judg3 ( 88435 ) <(jeremy) (at) (> on Saturday March 16, 2002 @07:18AM (#3172666) Homepage Journal
    Where I used to work (an all-Windows shop), we used Adaptec RAID cards in all our "tower" based servers. Even the lower-priced models (AAA-131U2) always performed without a hitch and we never had any problems with them at all. AMI's RAID controllers are real nice and all, but for the price it just wasn't worth it. The Adaptec solutions performed just as well and at a lower cost. You'd do well to check 'em out.

    Now the 3200 RAID Controllers in the Compaqs, that's another story altogether.
    We had roughly 2000 servers, operating 24/7 @ 67 degrees F. Two times a year we had a site shutdown. Every single time we had to bring everything back up we would have anywhere from 3-5 Compaq array controllers die. But never once did the low-buck Adaptecs crap out on us.
    • Re:Tried Adaptec? (Score:3, Informative)

      by Sivar ( 316343 )
      The general consensus on (a site that I would trust for anything storage related) is that Adaptec cards are crap: the performance under load is mediocre, they tend to die (despite being a solid-state device), and often the non-Windows drivers aren't the best.
      Don't take it from me, ask around there. If they worked for you, however, great. Whatever works.
  • Firmware (Score:4, Informative)

    by Holophax ( 21693 ) on Saturday March 16, 2002 @10:53AM (#3172978)
    Just as a shot in the dark, I would suggest trying to upgrade the firmware on the drives first. At one of my old jobs, we used nothing but IBM drives, and we constantly had problems with the drives being marked bad or offline, but simply pulling them and plugging them back in (hot swap) would bring them back. In our situation, we were using IBM Netfinity servers with IBM RAID controllers. When we talked to IBM, they admitted there was a problem with the firmware on the drives: instead of spitting out just one error whenever an event (even a simple read error) happened, a drive would spew them constantly, which made the RAID controller mark the drive as bad. Seeing as it only takes a few minutes of downtime and is non-destructive, it might be worth a shot.
  • Two possibilities... (Score:4, Interesting)

    by Vrallis ( 33290 ) on Saturday March 16, 2002 @11:25AM (#3173040) Homepage
    First, are you sure your UPS is a *TRUE* UPS? Even a lot of the 'high end' UPSes out there are really just switched UPSes. This could very well be your problem.

    The other one is something I've heard of (I'm not an electrical expert, but I'll try to explain). Larger sites (older installations, particularly) were wired for three-phase electricity. Over time, they split the phases for normal 110-volt usage. There is a chance that if the PC is connected to power on one phase, but the external unit is connected to power from a different phase, the differential between the two can cause problems, due to the ground connection between the two through the cable shielding. I know, it sounds like something from the BOFH daily calendar, but it does make sense. Try making sure both pieces of equipment are on the same true UPS, or at least switched UPSes on the same circuit.

    • Sounds like good advice in the post above.

      Some UPSs switch. Some are always online. You want the latter for a RAID array.

      The second paragraph is important. Check your input power. Everything attached to your network should be wired to the same power circuit. Otherwise there is a possibility for feeding large spurious signals to your hardware through the power line.
      • Ahhh! NO!!! Do NOT NOT NOT put everything on one circuit. First, computers with switching power supplies (almost 100% are) are NON-linear in power usage. They draw LARGE spikes of current sporadically. Second, if you blow a circuit, EVERYTHING YOU HAVE goes down. BAD BAD BAD! Third, if you run dual power supplies on your equipment, a power problem / spike on the circuit will affect both power supplies, not even counting that 50% of the benefit of dual power supplies is so that you have power redundancy.

        As others have stated, make sure you have a true "online" UPS, but ALSO make sure that you don't run over 50% power utilization on the UPS either, due to the non-linear nature of switching power supplies.

        Of course the BEST power stability solution is to use all 48VDC equipment like the telcos do. When was the last time your phone went down due to telco hardware failure? Note that most major hardware vendors have 48VDC versions of their equipment (Sun, Cisco, etc.)
        • Clarification (Score:3, Informative)

          Everything needs to be on the same ground circuit. This is necessary to avoid ground loops.

          "They draw LARGE spikes of current sporadically."

          I don't think this is correct. I have designed power supplies, and I don't immediately think of any reason why the power input of a switching power supply should vary differently from the power output. The only surge is when the hard disks spin up, but with SCSI there is a means to stagger the spin-up.
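The spin-up surge is the one case both posters agree on, and staggering tames it by spinning drives up one at a time. A rough sketch of the arithmetic, with assumed (not measured) current figures:

```python
# Why staggered spin-up (mentioned above) limits the inrush surge.
# Per-drive current figures below are illustrative assumptions.

SPINUP_AMPS = 2.5   # peak 12 V draw per drive while spinning up (assumed)
IDLE_AMPS = 0.8     # steady-state draw per drive (assumed)

def peak_amps(n_drives: int, staggered: bool) -> float:
    """Worst-case supply current for an array of n_drives."""
    if staggered:
        # At worst, one drive spinning up while the rest idle.
        return SPINUP_AMPS + (n_drives - 1) * IDLE_AMPS
    # All drives spinning up simultaneously.
    return n_drives * SPINUP_AMPS

print(peak_amps(8, staggered=False))          # 20.0 A burst
print(round(peak_amps(8, staggered=True), 1))  # 8.1 A worst case
```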
    • "First, are you sure your UPS is a *TRUE* UPS?"

      The term you're looking for here is "on-line UPS". There are two basic varieties of UPS, switched and on-line. Both share the following common features: the AC (mains) power coming into the UPS is rectified (converted to DC, usually in the range of 24 to 48 VDC). The DC is used to charge the batteries, which are the source for backup power when the mains fails. AC backup power is supplied to your equipment by an inverter (DC-to-AC converter) in the UPS, which takes the battery's DC juice and "builds" a 50 or 60 Hz AC sine or pseudo-sine wave at the right voltage.

      Switched UPS: When the AC mains is OK, your equipment is being powered by it. When the mains fails, the UPS literally switches to backup power from the inverter. This switching takes a measurable amount of time to complete and relies on your equipment's electronics to ride through the loss of power until the switch to inverter power is complete. Advantage? Switched UPSes are generally less expensive.

      Online UPS: Regardless of whether the mains power is OK or not, the UPS's inverter is already on and already supplying your equipment. When the AC mains does fail (momentary loss, glitch, blackout or brownout), it takes zero time to switch to UPS power, because your equipment was already on UPS power! Advantages? (1) Zero switching time, (2) the online UPS feeds a constant, glitch-free sine wave to your equipment at the right frequency and the right RMS voltage, all the time.
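The practical difference between the two types comes down to whether the load's power supply can ride through the transfer gap. A toy model of that, with assumed numbers rather than vendor specs:

```python
# Toy comparison of the two UPS types described above. Transfer-time
# and hold-up figures are illustrative assumptions, not vendor specs.

SWITCHED_TRANSFER_MS = 8.0   # typical relay switchover gap (assumed)
ONLINE_TRANSFER_MS = 0.0     # the load is already on the inverter
PSU_HOLDUP_MS = 16.0         # how long a PC supply rides through (assumed)

def survives_outage(transfer_ms: float, holdup_ms: float) -> bool:
    """The load stays up if its supply can ride through the gap."""
    return transfer_ms <= holdup_ms

print(survives_outage(SWITCHED_TRANSFER_MS, PSU_HOLDUP_MS))  # True: 8 ms gap is survivable
print(survives_outage(25.0, PSU_HOLDUP_MS))                  # False: gap exceeds hold-up
```

An online UPS makes the question moot, since its transfer time is zero by construction.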

  • Unless you're limited by cost, don't use host-based RAID. It will always be less reliable than a dedicated RAID controller. If you must use host-based RAID, try to find a card that supports RAID 0/1, because it's faster and more reliable. I've had good experiences with MegaRAID cards and the IBM host-based RAID controllers, but by good experience I mean that I've only had a few problems. There is always a chance that something will get screwed up when you change your setup.
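For readers new to the terminology being traded off in this thread: the core RAID 5 trick is XOR parity, so any single lost block can be rebuilt from the survivors. A minimal illustration:

```python
# Minimal illustration of the RAID 5 idea: parity is the XOR of the
# data blocks, so any one lost block can be rebuilt from the rest.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data blocks on three drives
parity = xor_blocks(data)           # stored on a fourth drive

# Drive 1 fails: rebuild its block from the other data blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

Real controllers rotate which drive holds the parity block per stripe, but the rebuild math is the same.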
  • Use a high-end ICP Vortex controller with 15K RPM Cheetah SCSI drives or Fujitsu drives. It's the only combo I trust in any of my PC servers.
    Alternatively, you could try Sun's A3500 FC-AL drive arrays with the 15K Cheetahs for non-PC hardware.
  • Compaq is good. (Score:3, Interesting)

    by NetJunkie ( 56134 ) <jason DOT nash AT gmail DOT com> on Saturday March 16, 2002 @02:13PM (#3173668)
    When I took over my current job the last network team had overloaded the circuits in the server room. We've had 3 circuits trip and had servers drop hard. None of the Compaq SmartArray controllers had any problems recovering.

    I suggest you also fix your power problem. The systems should have no idea power was lost to the building. If you are using a UPS and this is still happening, I'd find a better one.
  • I've used 3ware Escalade cards repeatedly, and never had any problems. I've only actually used RAID 5 once, but so far no worries with it. Of course, these are IDE RAID cards, which may not be acceptable if you have lots of SCSI drives already.


  • IBM ServeRaid (Score:2, Informative)

    by decep ( 137319 )
    I have built several RAID configurations with IBM ServeRAID controllers. One RAID 5 array (16 drives, 1 hot spare) that I've managed has had 2 drives fail in the past year; the only thing I've had to do is take the bad drive out, pop another one in, and it is automatically marked as a hot spare.

    I was expecting a hassle, but it was mind-blowing to see how easy it was. The cross-platform remote management utility is a plus too.
  • If you want bulletproof and are willing to pay for it, you won't go wrong with a Sun A1000.

    They range in size from 75 GB to 436 GB. I work for an EDU so we get almost a 50% discount on them, but they are worth every penny... we've had the 200+ GB model running for almost 3 years now with no problems at all.
    • I really hope you're kidding.

      The A1000's stink. The firmware is awful; the RM6 management software is worse!

      Be careful upgrading your firmware (which you need to do from time to time) -- the controller _can_ deadlock. And of course, if it does, you lose all your data, since the only copy of the LUN configuration is in the controller.

      Seriously. They're crap. Built on the same crap as the A3000/3500 series. It's all old, re-branded Symbios stuff. Yuck-o.

      You'd be better off getting an A5200 tray (or D1000 tray) and using the RAID-5 functions of Veritas Volume Manager instead. It actually has a shot at working :)

      • Ditto. RM6 stands for Raid Mangler 6 in our parts. I have about 10 A1000s and 30 D1000s in production, and I'll take the simplicity of the D1000 JBOD configs over Raid Mangler.
      • You'd be better off getting an A5200 tray (or D1000 tray) and using the RAID-5 functions of Veritas Volume Manager instead. It actually has a shot at working :)

        I hope you're kidding.

        Software RAID 5 on arrays with no cache? Heavens no, it sucks. Read performance is pretty bad considering the number of drives involved in the stripes, and write performance is worse than dreadful even on high-end machines. Write performance gets *even* worse the more drives you add unless you go across arrays, and even then it just sucks. It's better on Veritas than Disksuite, but not much. Mirror; don't use RAID 5 on anything other than the A3x00, A1000 or T3. It's especially good on the T3, where the XORs are done on the controller and it's almost as fast as striping.

        I agree though, RM6 is pretty bad but if managed properly it's deployable. I know of one of Sun's customers who threw out terabytes of A5x00 storage after the GBIC debacle - as in deposited on the pavement outside of Sun's City of London office - only to replace them with A1000's and lots of them.
        • Yup. I was one of the early adopters of the A5000. Pile of donkey doo. I had an entire team of Sun engineers out after escalating up to the VP level trying to get my setup working reliably. Not to mention that they won't even fit in a normal sized rack without taking the side panels off, and they are about a mile long (makes it a PAIN to use in a caged datacenter environment - you can't get around the equipment.)
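The write-penalty complaint above has simple arithmetic behind it: a small (sub-stripe) write to RAID 5 with no controller cache needs a read-modify-write cycle, while a mirror just writes the block twice. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope disk I/O counts per small logical write,
# illustrating the RAID 5 write penalty discussed above.

def raid5_small_write_ios() -> int:
    # Read old data + read old parity + write new data + write new parity.
    return 4

def mirror_write_ios() -> int:
    # Write the block to both sides of the mirror.
    return 2

print(raid5_small_write_ios() / mirror_write_ios())  # 2.0
```

A controller with battery-backed cache (or one doing the XORs itself, like the T3 mentioned above) hides much of this cost, which is why cacheless software RAID 5 fares worst.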

  • Where I work, all we use is Mylex cards for both 4- and 6-drive RAID 5 implementations. We use IBM drives, but because of bad experiences lately (6 of them blowing up) we've recently switched to Seagate Cheetahs. You have to drop your kernel down (we run stably under 2.2.12, and have had problems getting it to work on 2.2.18), but if you're running on Linux-based servers, Mylex is the way to go. You can get both 32- and 64-bit PCI cards, and at only about 3-4 grand CDN a pop... it isn't that costly for a hardware RAID 5.
  • We had this exact problem on our servers at work and it was a real headache getting them upgraded to the new firmware. It's a serious problem and it's imperative that you upgrade to the newest rev. of the firmware, not just the patch.

    Intel's site has a technical advisory dated Jan 29th, 2002 regarding drives being 'marked offline'. rver/ta_445.htm
  • These are under $300 on eBay, work great, and have many features. You'll have a better compatibility experience with AMI cards; Mylex cards have more features, but the older eXtremeRAID models use proprietary memory modules (which will cost $1000 retail if you want to upgrade, if you can even find one somewhere).

    ICP Vortex have great reputation, though I don't have any experience with them.

  • We have used Raidtec boxes for quite a long time, and they have always been very reliable.

    I think all of our Raidtecs are kitted out with Seagate drives... anyway, check out for a little more information on what they sell.
  • ... is Compaq.

    I've probably set up over 100 servers over the last 10 years or so, and I wouldn't use anything but Compaq Array controllers. I've never lost data because of a drive subsystem problem. I've got over 20 that I'm responsible for now, and all of them use Compaq Array controllers. They are reliable, easy to configure, well supported, and easy to maintain. The tools under NetWare and Windows work well. Most are supported under Linux. They aren't cheap, but they are simply great.

    For details look here. []

    I have worked for one large regional financial institution, and one large entertainment conglomerate, and one of the things they have in common is that both use Compaq hardware. There's a good reason - it works.

    FWIW, I do not now, nor have I ever worked for Compaq, nor do I have any direct investment in Compaq.
