Hardware

Dependable SCSI RAID Controllers for Linux? 63

PalmKiller asks: "I have been using DPT (now Adaptec) SmartRaid controllers for years with great results in Linux, but with the advent of the Adaptec assimilation they have become more unreliable in current kernels (in both the 2.2 and 2.4 trees)...at least for the DPT SmartRaid V and Adaptec-branded equivalents, i.e. the kernel panics under stress...you know, when you try to use it for anything but an idle uptime box. The crashes have something to do with 5-minute-long command queues creating havoc in the kernel. I have a few SmartRaid IV controllers that plug away without issue, but they use a different driver. I suspect the programmers who knew how to program the Adaptec/DPT controllers got lost in the buyout, or perhaps driver quality control took a dive. I would greatly appreciate other Slashdot readers' opinions on a good replacement that is available in the US."

"I have been considering the ICP Vortex RZ and RS series and the AMI Megaraid as possibilities, along with the Mylex line of controllers. I would like some opinions, praise and even nightmare stories on any of these. I don't want to invest $350-$1500 per controller in another nightmare like the Adaptec/DPT line. It should be obvious, but cost is not primary; reliability and, to a lesser degree, performance are the key issues. In addition, I run my controllers in RAID 5 with a hot spare, so suggestions should be for controllers that can do that RAID mode and that can be administered from a running Linux system so I can do hot swapping. I would also like controllers whose manufacturer keeps current patches available for the stock kernel tree, or is in the kernel tree (for both 2.2 and 2.4; I use 2.2 mostly, due to issues with 2.4), as I never use a canned kernel after the install is done. If you run Windows or some other OS that Adaptec truthfully supports, look for a few *good* DPT or Adaptec controllers on eBay when the swap-out is all over."

This discussion has been archived. No new comments can be posted.

  • by krow ( 129804 ) <brian@@@tangent...org> on Wednesday January 23, 2002 @10:34PM (#2892009) Homepage Journal
    I would blame a really large portion of Slashdot's downtime, and the recent downtime with Freshmeat, on those controllers. Outside of a Megadrive that I used at the Virtual Hospital [vh.org], those are probably some of the worst pieces of hardware I have ever run into.
    I would never recommend that anyone ever use those cards. Flaky hardware is one issue, but those cards have consistently been the root of a lot of sleepless nights for me, fixing the messes they have caused.
    • That's nice. He said nothing about Mylex. Thank you for listening to my nitpick.
    • OK, I will be scratching that Mylex line off my list immediately :).
    • Links to where anyone at slashdot or freshmeat claimed that their problems were caused by Mylex controllers? Or should I just take your word on it?
      • Seeing as I am the guy who maintains the databases, I would assume my word is enough :)

        Probably could have a couple of sysadmins come in and offer their opinion, since they have to stay up too.
    • I have used a Mylex adapter for a quite busy web and mail server for 1.5 years. It works very well.
    • Actually, I have completely the opposite experience. The Mylex controllers I've used have never failed me once. I'd recommend them to anyone. That said, my personal opinion is that a RAID controller card is the wrong way to go anyway. Just get an external standalone RAID box. CLARiiONs used to be the best of the bunch, but they've headed up market now, leaving the SCSI arena to Baydel, BoxHill, etc.
      • I've been using Mylex RAID cards (mostly the AcceleRAID 150 and 250s) for over 2 years without any problems. Very solid.

        Many, if not most of the problems I've heard about with RAID and SCSI in general are cable-related. If you're experiencing problems, check to see if you're using the correct type of SCSI cable, that it's not too long, and you're using the correct type of terminators (preferably forced-perfect).

        Though I've been able to work out SCSI-related problems in the past, I don't think I want to deal with that anymore. The next low-end server I build will probably use IDE-RAID. If I have to build a high-end server, I'd rather use FC-AL.

      • Mylex and DPT are both crap.

        I had the firmware fail in a mylex controller in a database server. Of course, the busier, and therefore more important, databases were the ones with the most garbage written over their files. This was on NT.

        I know of no good raid controllers. Look for a scsi-to-scsi setup rather than pci-to-scsi, though.
    • We've had issues with Mylex too. Here's our setup: a P-III 1GHz box with a Mylex eXtremeRAID 2000; an Adaptec 2940UW and a 2944UW. One of the Adaptecs is hooked to a Sun A1000 (which has a "Symbios" RAID controller built in, but we're using as a JBOD).

      Anyways: when we hooked up the A1000 (our Sun server died), the system suddenly became flaky! We boot from a standalone SCSI disk, so booting wasn't a problem. But the Mylex would lose its settings; half the disks in one of the trays wouldn't show up, etc. We spent days trying to figure it out, but to no avail. After repeated messages to Mylex support, we got the solution: disable the BIOS on the Mylex. It turns out that the Symbios RAID controller in the A1000 was confusing the Mylex BIOS! Even though the A1000 was on a separate Adaptec controller. Go figure.
    • I spent five years with a system integrator building RAID systems.

      We started off with Mylex and eventually had to stop using them due to deteriorating quality of the boards.

      Krow is dead on. Mylex is junk.
    • I've been using a Mylex Acceleraid 150 for just about a year now with zero problems. The box
      had about 235 days of uptime until I shut it down to add memory.

      What precisely was your problem with Mylex?
  • An LSI MegaRAID should work quite nicely in Linux. I have an older AMI MegaRAID Enterprise 1200 (model 428) with a couple of small disks as my Linux disks (and one of them is my Windows swapfile :)

    Never had any problems with it whatsoever.
    • I appreciate you letting me know this, as I have a lot of storage that needs good RAID controllers. I was more or less down to the AMI/LSI and the ICP/Intel controllers at this point. It's interesting that a good number of the RAID controller companies got bought out over the last few years; the hard drive company buyouts are interesting to watch too.
      • My company bought a bunch of Intel server boxes loaded with AMI MegaRAID 5XX (can't remember which) controllers, and I must say that, at least under Solaris, they suck big time (due to drivers, I think). Best uptime for such a box is around 3-5 days, depending on disk usage. Things start falling apart: ssh stops responding first, and then, after an hour or so, all the other services die too (Apache, Tomcat, JServ, MySQL).

        Well, the rest of the company are Sun advocates, but as I'm part of another division I've taken the liberty of installing Linux in production, and now I've had these machines running over 2 months and I can't remember ever logging into the boxes after I installed them... (Although I had to tweak the setup first: the server's SCSI backplane didn't support maximum speed, and someone reported that the current driver might not work well in 64-bit mode, so I dropped it to 32-bit.)

        So at least the MegaRAIDs seem to work for me, but buy one, do your own tests and make your own choice =)

  • IBM ServeRaid (Score:3, Informative)

    by Ringlord ( 82097 ) on Friday January 25, 2002 @05:50AM (#2899786) Homepage
    We have two IBM Netfinity servers that use the IBM ServeRAID 3L. The cards are not that great, as they only have 4 MB of cache, but they run reliably under 2.4.13.

    The drivers are maintained in the kernel, so there is no patching or downloading of drivers needed.

    I think IBM has other models that come with more cache, so you could try calling them.
    • I just looked at them on their website, they have a 4x with 64M cache and battery backup installed for $900 street.
    • Re:IBM ServeRaid (Score:3, Informative)

      by velkro ( 11 )
      Disclaimer: I'm an ex-IBMer, who worked in the Linux services area.

      I've used the 3[hl] and 4[hl] series of ServeRaids for over a year under Linux (both 2.2.x and 2.4.x kernels) with decent results. I currently have about 15 IBM x340's with ServeRaid 4l's running in production for nearly a year - no problems so far, however I did avoid early 2.4.x kernels (only upgraded after 2.4.7). I've suffered through failed drives and whatnot without data loss.

      If you can find the ipsutils.rpm out there, you can manage it from the command line; otherwise the Java-based ServeRaid manager will let you do everything the Windows tools do, under Linux.
    • Ditto on the ServeRAIDs. I've got one running in a Netfinity box doing Oracle, and it's been problem-free since it arrived.

      -Bill
  • by dschuetz ( 10924 ) <.gro.tensad. .ta. .divad.> on Friday January 25, 2002 @09:13AM (#2900102)
    Does anyone have any thoughts about IDE RAID, especially the offerings from Promise Technology [promise.com]? They've got cards that do RAID 5 with regular IDE drives, including hot failover capability. They've also got subsystems that put a full 8 disks into a RAID array but present it to the controller as a single SCSI device.

    Advantages: Cheap drives.
    Disadvantages: Speed, maybe, though since it's all going directly into the PCI bus, I'm not sure this is an issue.

    Anyone used these? Comments? I figure with their SuperTrak controller and a bunch of 80 or 100 GB drives, you could have half a terabyte in your basement for under two grand.
    • The Promise and Highpoint controllers are actually soft-RAID, meaning they use the host CPU. 3ware has a good hardware IDE RAID for up to 8 drives, but they seemed to stop selling them except in their subsystems. Everything I've heard about the Linux drivers is that they are good, and I know the FreeBSD drivers are rock-solid. I think the Adaptec controller is hardware too, but am not 100% sure on that.
      • The Adaptec 2400A ATA RAID controller is a hardware-based solution from Adaptec (more info on the product can be found here [adaptec.com]). The 1200A is the soft-RAID controller that you were mentioning.

        On the Promise side, the SuperTrak SX6000 is their hardware ATA RAID solution (the PDF datasheet can be found here [promise.com]). The older version of the SuperTrak SX6000, the S/T66, is also a hardware ATA RAID controller. The FastTrak series are their soft-RAID controllers.

        I'm personally looking at the 3Ware offerings (as the FreeBSD 4.x kernel has support for it, I believe in the default kernel) and possibly the Adaptec 2400A.

      • Actually, 3ware has reversed their decision, and is again selling the IDE RAID cards.
        • Unfortunately for us small guys, 3ware is discontinuing their 32-bit PCI cards in favor of 64-bit.

          Well, if I ever decide to build a server based on IDE RAID, maybe I'll buy a 64-bit mobo.
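The "half a terabyte" figure in the IDE RAID question above is easy to sanity-check: RAID 5 spends one drive's worth of space on parity, so usable capacity is (N - 1) times the drive size. A quick sketch, using the drive count and size from the example (the numbers are just the ones discussed, not a product spec):

```shell
#!/bin/sh
# Back-of-envelope usable capacity for an 8-drive RAID 5 array of
# 80 GB disks: one drive's worth of space goes to parity, so
# usable = (drives - 1) * drive_size.
drives=8
size_gb=80
usable=$(( (drives - 1) * size_gb ))
echo "usable: ${usable} GB"
```

Seven data drives plus parity on the eighth gives 560 GB, which matches the "half a terabyte in your basement" estimate.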

  • Compaq (Score:4, Informative)

    by JediTrainer ( 314273 ) on Friday January 25, 2002 @09:18AM (#2900129)
    Don't discount the Compaq line of SmartArray controllers. I've been using one for 2 years without a hitch. Supports everything you need them to do (I'm using the Smart/2P controller in my server). Never had a single problem with it. You can find these on eBay really cheap too.
    • Re:Compaq (Score:3, Informative)

      by Chuck Milam ( 1998 )
      I would agree. Compaq SmartArray driver support has come a long way in the past 18 months: I had a Compaq to put Linux on, and 18 months ago I couldn't get the SmartArray controller to work as much more than just a straight SCSI controller. So I switched to an ICP Vortex. 18 months later, while doing some hardware upgrades, I switched the SmartArray card back in and went to Compaq's web site for flash updates and driver information. Amazing what a difference a year and a half makes--went from little to no information to "Oh my god! There are so many support options and Linux drivers I'm not sure where to start!"

      The SmartArray works great. The little lights now light up on the drives (ya know, green, yellow and "uh-oh"). Heh.
    • Compaq also has an online configuration tool that runs under Linux now. With it, hot plug support and LVM, it's possible to add a hard drive and add it to the filesystem with no reboots.

      I'm not sure how well these controllers work outside a Compaq server though, I have never tried.
  • As always, your mileage may vary...

    I'm having a lot of success with my Adaptec 29xx (2940 for SE like CD or external SE device, 2944 for LVD) and 39xx series cards. We don't use anything else in any of our operating systems (unless they are built-in to a motherboard). Granted, I'm not stressing my systems 24 hours a day... more like a few hours spread out over a regular business day.

    I'm sure there are plenty who will readily disagree, but I don't think I've found, end-to-end, better hardware for SCSI controllers. Sure, getting the AAA-133 RAID controllers to work can be a challenge, but we've been nothing but happy with the rest.

    We also have a lot of success with Mylex RAID controllers on several critical production boxes, though those are not *nix machines (NT 4.0 SP6).

    fwiw, we pulled the DPT cards we have and replaced them with Adaptecs.
  • Compaq's SMART array controllers work well under Linux, and support RAID 0, 1, 5, and 0+1 with hot spares. I have been running RedHat 7.1 on one of these for months, in a production environment, with great success.
  • ICP Vortex (Score:2, Informative)

    by IpSo_ ( 21711 )
    I work for a medium sized web hosting company which sells dedicated/managed servers to customers. We will only put ICP Vortex cards in them. These are the only cards we put in our own servers as well, I would say we have at least 40 of them in our datacenter and they work great. Not to mention if a drive fails you can easily hear them beeping from outside the datacenter, even with all the server/air conditioner noise.

    Great cards, great speed, and a not so bad price. They work flawlessly in Linux and Windows.
    • Re:ICP Vortex (Score:3, Informative)

      by Zurk ( 37028 )
      Yep, I'll second that. ICP Vortex also has Linux utilities to control the cards from within Linux instead of booting to DOS. I've never had any problems with Vortex controllers. Just make sure you use a stick of good (Crucial or somesuch) ECC SDRAM for the cache memory; don't spare any expense there. BTW, I've run them on AlphaLinux with the 164LX boards and SRM consoles. Works great. A 21264 Alpha with a gig of RAM, an ICP Vortex card with 256 MB of ECC cache on the card, and an 8-drive array attached can't be beat.
    • I can second this too.

      We used several ICP controllers with 2 to 7 disks (RAID-1, RAID-5, with and without hot spare) and they worked well. Mostly under Windows NT, but I built one with Linux and an Oracle DB (several GB of data) on Linux (2.2 at that time), and we did some stress tests (about 10 users connecting, doing full table scans and updating large amounts of data), and while the RAID array was working really hard, the box was entirely stable. Even simulating a drive failure did not cause data loss. And the Linux support is great, as well as the support in general (but that was before Intel bought them, so now it might be either worse or as good as it was before).

  • I have used Mylex RAID controllers very successfully with the older kernels on Linux (I run debian-stable) and on FreeBSD. Hot-Swap worked fine, and the on-board BIOS could be used for all configurations, plus there was adequate information from the kernel on RAID state. So, unless they have become significantly worse recently, I would at least consider Mylex.

    But, you might want to consider one of the alternatives like RaidTec or its ilk. These are large boxes with RAID controllers built in and capacity for a fair number of disk enclosures. The RaidTec, for instance, can take 512GB+ (maybe 768GB+ now) and has options for redundant controllers, either fiber channel or SCSI. Just shows up as drive space. I haven't yet had a RaidTec unit up with Linux, but they claim it's fine. There are many others, with the EMC units being at the top of the cost heap.
  • Since cost is not your primary worry, can you run a test? Get one of each, configure each with several hard drives and see what happens to each under load. While there is a large difference between the way systems behave when running 50GB over 7 hard drives and the way systems work when running with 50TB in a real RAID setup, anything that doesn't work with the smaller systems will fail in the large ones too.
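A minimal sketch of such a burn-in test: write data through the candidate controller, force it to disk, and verify the read-back. TARGET, the sizes, and the pass count are all placeholders (it defaults to /tmp only so the script can be dry-run; point TARGET at a filesystem on the array under test for a real run), and this is a smoke test, not a benchmark:

```shell
#!/bin/sh
# Crude soak test for a candidate RAID array. Each pass writes random
# data through the controller, flushes it with sync, and verifies the
# copy on the array against the source with cmp.
TARGET=${TARGET:-/tmp}      # placeholder: mount point of the array under test
SIZE_MB=${SIZE_MB:-4}       # per-pass write size in MB
PASSES=${PASSES:-3}         # number of write/verify passes
result="soak OK"
p=1
while [ "$p" -le "$PASSES" ]; do
    dd if=/dev/urandom of=/tmp/soak.src bs=1M count="$SIZE_MB" 2>/dev/null
    cp /tmp/soak.src "$TARGET/soak.dat"
    sync                                          # push writes out through the controller
    cmp -s /tmp/soak.src "$TARGET/soak.dat" || result="soak FAILED"
    p=$((p + 1))
done
rm -f /tmp/soak.src "$TARGET/soak.dat"
echo "$result"
```

Running several such loops in parallel while watching the kernel log is one cheap way to compare how each candidate card behaves under sustained load.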
  • We just put together a system here at work with a dual Athlon XP Tyan board and a 64-bit Adaptec RAID card running RAID 5 + 1 hot spare, and have not had any trouble whatsoever. SuSE 7.3 detected it all on its own, no driver disk needed. I am not sure of the exact model number (and I am not going to go pull the cover off one of our main production servers), though dmesg gives this: Loading Adaptec I2O RAID: Version 2.4 Build 5. I dunno why this poster has had such trouble; maybe he should upgrade his distro to SuSE :-)
  • We just bought a system that has a usable terabyte of disk space and paid about $7500 for it.

    We're using 10 Maxtor 130gig drives on 2 3ware [3ware.com] 7510 Controllers. We could still put 6 more drives on the two controllers.
    • Small correction to my posting. The controller is a 7810 and not a 7510. :)
    • I use the 6800 at home for about 300 gigs of RAID-5 storage. I use FreeBSD as the OS, though, for this particular machine. (Linux seems to be 3ware's preferred OS, however.) So far, things have been fine. Unfortunately, the first card I was sent was DOA (it seemed to have cache problems). The second one worked fine, and is still in the system working happily. I'm not sure I'd recommend these cards for HA systems, though, for a couple of reasons:

      Can you buy hot-swappable IDE enclosures? I've never seen any.

      Performance-wise, these cards aren't top-notch. They have a very small amount of cache. Modern SCSI RAID cards take DIMMs and can be easily upgraded to more cache if necessary. These things have soldered-on memory.

      For mass storage, they're great. For high-performance mass-storage, I'd still look to SCSI. Where else can you get 15000 RPM drives with 5-year warranties?

      - A.P.

    • Maxtor (and Western Digital) are pretty bad these days. We quit using them in our servers entirely a while back (I work for a VAR). IBM and Seagate (with a couple of notable individual model exceptions) are the best. Quantum is middle-of-the-road, of course, though that may change since Maxtor bought them a year or so ago.
  • I wonder if your hardware isn't going bad. I too run DPT SmartRAID controllers. A 2654U2 to be exact (the 2-channel version), in a 440LX P2/266 (we're network bound, not CPU bound), which is used for fileserving about 50 people in a file-heavy office environment. Before that, it was a SmartRAID V with the hardware cache/RAID card (which is in use in another, heavier-hit webserver).

    Zero stability issues on both. The 2654U2 has a 5-drive RAID5 + hot spare (UW2 SCA drives) on one channel and a 6x24 DDS-3 on another. I've done some pretty I/O-intense things on this controller (including rebuilding the array during office hours) with no problems at all. This is on kernel 2.4.17. The SmartRAID V is on a 2.2.14 system which has about 50 colocated web and mail sites (it does a pretty good job of keeping the T1 busy). It runs a RAID1+0 array with really old Seagate Barracuda SCSI-1 drives and a single external DDS-3 for backup. Again, zero stability issues. I'd buy these again without hesitating.

    Perhaps you need to delve deeper into the problem. The 2654U2 did not like the original P90 system the server used to be in; we had bad issues there, and the tech basically said that the original PCI spec was not good enough for the card. Upgrading the motherboard fixed everything. If you're running 2.4, make sure you're on the .16/.17 kernels, as earlier 2.4 kernels had issues with all manner of things, though not specifically the DPT I2O drivers, IIRC.

    Both of these systems run the kernel drivers and use the dptutil software that DPT used to have (which you're right, has gone the way of the dodo after Adaptec's assimilation of DPT); what specifically can you do to cause problems? I don't think it's the card/drivers in general but if you give me a test or two I can run to see if I'm affected as well we might be able to fix this.

    Slashdot's braindead lameness filter is not letting me post my dptutil -L output. Sorry.

    • It is interesting that you have no problems. I have tried 3 mainboards (a dual Pentium III and two Athlon boards) and two separate 2654U2 controllers (well, one is a 2654U1) on one of my test machines. I have not had too many times that the lockups happen predictably, but I do have one scenario that is repeatable on the test machine. Compile a kernel four times in a row with no more than a couple seconds' pause between... I use

      make dep;make clean;make bzImage; make clean; make bzImage; make clean; make bzImage; make clean; make bzImage (make dep only the first time), and then usually about 3-4 seconds after that it's locked; if not, doing an ls -lR / > /tmp/listing.txt will finish it off. This happens on 2.2.19 and 2.4.17 kernels and even 2.4.18preX ones. This has to be done at the console or using screen or similar, as telnetting in slows it down to the point that it doesn't kill it.

      Let me know if this kills your box!
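The reproduction recipe above can be wrapped in a small script so others can try it verbatim. WORKLOAD and TREE here are harmless placeholders so the script can be dry-run; for the actual test, WORKLOAD would be the 'make clean; make bzImage' cycle in a kernel tree and TREE would be /:

```shell
#!/bin/sh
# Run a build-style workload back to back with no pause, then finish
# with a recursive listing -- a sketch of the lockup recipe above.
RUNS=${RUNS:-4}                     # four back-to-back builds in the recipe
WORKLOAD=${WORKLOAD:-'sleep 0'}     # placeholder; really 'make clean; make bzImage'
TREE=${TREE:-/tmp}                  # placeholder; really /
i=1
while [ "$i" -le "$RUNS" ]; do
    sh -c "$WORKLOAD" || { echo "workload failed on run $i"; exit 1; }
    i=$((i + 1))
done
# The recursive listing is what reportedly finishes the box off.
ls -lR "$TREE" > /tmp/listing.txt 2>/dev/null
echo "survived $RUNS runs"
```

On a healthy controller/driver combination this should just print "survived 4 runs"; on the setup described above it reportedly locks the machine within seconds.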
  • There are three great options to get your servers out of the RAID-controller business. One is NAS (Network Attached Storage), the second is using native SCSI or IDE controllers with RAID provided by your OS. And lastly, you can buy a box that already is a RAID but just looks like one big fat drive and plug it in.

    We run all our Linux boxen at work with kernel mirroring, and it uses almost NO CPU even under pretty heavy parallel load. Great for the base OS with SCSI or IDE, since the only thing they'll do once they boot is swap to these. Striping your swap space across multiple drives really helps when a server starts running low on memory.

    I have mirror sets running at 48 Megabytes a second on two year old 18 Gig 10k SCSI drives for streaming output, and can provide very good performance under parallel load as a database disk set.

    I've never had the kernel RAID drivers act flaky since I started using them over two years ago, and I've done various things like hot-insert a RAID disk in both RAID 1 and RAID 5 (both were pretty easy to do) and typed the respected, yet undocumented --really-xxxxx (xxxxx=a 5 letter word not mentioned here!) flag a few times.

    A friend is in the process of building NAS servers in 2U units with multiple IDE cards and ~500 Gigs of storage for ~$3500 or so. SCSI versions would be a bit more, bigger, and probably need more cooling, but be faster too. Right now the IDE ones are fast enough with a RAID 5 configuration.

    The IDE ones can flood a 100 Base-TX connection, so performance isn't really an issue for anything on less than gigabit, and even then the IDEs will use up a goodly chunk of that.

    The external RAIDs are often the fastest for databases, offering fibre optic connections. They're not cheap, but if you're running eBay's database, cheap isn't the point anymore. :-)

    If you have to have a RAID card, I can recommend the AMI Megaraid 428, which, used, goes for $100 on eBay right now. Not that fast (I never got more than 20 megabytes a second from one) but very solid and reliable, and they can hold up to 45 SCSI hard drives if you can afford the cooling and electrical for them. Plus the first channel looks like a regular SCSI card to anything other than a hard drive, like a tape drive or CD-ROM, so you don't need another SCSI card if you want a tape drive to back it up.

    While the Megaraid site no longer has configuration software available, this site:

    http://domsch.com/linux/#megaraid

    points to this site:

    http://support.dell.com/us/en/filelib/download/index.asp?fileid=R28825

    on Dell where you can find management software for the MegaRAID controllers.
  • In my albeit limited experience with RAID controllers, you cannot buy a better controller than ICP Vortex. Support is good, and they make a solid product. And you pay for it!

    IDE RAID is fine for workstations and home use... As far as I am concerned, it has no business in a corporate server environment. Anyone who tells you different is shaving pennies and hasn't a clue. Of course, your opinion may differ!
