
Dependable SCSI RAID Controllers for Linux?
"I have been considering ICP Vortex RZ and RS series and AMI Megaraid as possibles, along with the Mylex line of controllers. I would like some opinions, praises and even nightmare stories on any of these. I am not wanting to invest $350-$1500 per controller on another nightmare like Adaptec/DPT line. It should be obvious but cost is not primary, reliability and to a lesser degree performance are the key issues. In addition I run my controllers in RAID 5 with a hot spare, so suggestions should be for controllers that can do that RAID mode and ones that can be administered from a running Linux system so I can do hot swapping. I would also like controllers whose manufacturer keeps current patches available for the stock kernel tree or is in the kernel tree (for both 2.2 and 2.4, I use 2.2 mostly due to issues with 2.4) as I never use a canned kernel after the install is done. If you run Windows or some other truthfully Adaptec supported OS look for a few *good* DPT or Adaptec controllers on eBay when the swap-out is all over."
Mylex controllers are junk (Score:5, Interesting)
I would never recommend that anyone ever use those cards. Flaky hardware is one issue, but those cards have consistently been the root of a lot of sleepless nights for me fixing the mess that they have caused.
All corrections are incorrect (Score:1)
The parent post would seem to be an opinion, and perhaps even a nightmare story, about the Mylex line of controllers. Way to read, read-boy.
Re:Mylex controllers are junk (Score:2)
Probably could have a couple of sysadmins come in and offer their opinion, since they have to stay up too.
Re:Mylex controllers are junk (Score:2)
I've been using Mylex RAID cards (mostly the AcceleRAID 150 and 250s) for over 2 years without any problems. Very solid.
Many, if not most of the problems I've heard about with RAID and SCSI in general are cable-related. If you're experiencing problems, check to see if you're using the correct type of SCSI cable, that it's not too long, and you're using the correct type of terminators (preferably forced-perfect).
I've been able to work out SCSI-related problems in the past, but I don't think I want to deal with that anymore. The next low-end server I build will probably use IDE RAID. If I have to build a high-end server, I'd rather use FC-AL.
Re:Mylex controllers are junk (Score:2)
I had the firmware fail in a mylex controller in a database server. Of course, the busier, and therefore more important, databases were the ones with the most garbage written over their files. This was on NT.
I know of no good raid controllers. Look for a scsi-to-scsi setup rather than pci-to-scsi, though.
Re:Mylex controllers are junk (Score:3, Informative)
Anyways: when we hooked up the A1000 (our Sun server died), the system suddenly became flaky! We boot from a standalone SCSI disk, so booting wasn't a problem. But the Mylex would lose its settings; half the disks in one of the trays wouldn't show up, etc. We spent days trying to figure it out, but to no avail. After repeated messages to Mylex support, we got the solution: disable the BIOS on the Mylex. It turns out that the Symbios RAID controller in the A1000 was confusing the Mylex BIOS, even though the A1000 was on a separate Adaptec controller. Go figure.
Re:Mylex controllers are junk (Score:1)
We started off with Mylex and eventually had to stop using them due to deteriorating quality of the boards.
Krow is dead on. Mylex is junk.
Re:Mylex controllers work fine for me (Score:2)
The box had about 235 days of uptime until I shut it down to add memory.
What precisely was your problem with Mylex?
Re:SCSI is DEAD (Score:3)
I would really like to see you run servers on IDE.. HaHa. You are not just a troll, but a stupid one.
SCSI will not die any time soon. If it does, it will be replaced by Fibre Channel. You couldn't pay me to use an IDE disk anymore, except maybe in legacy (x86) hardware so I can boot from an NFS server... for a workstation.
For a server, IDE has no place. Half-duplex, CPU intensive, unreliable; do I need to say more? Oh, and incredibly limited in the number of disks. A RAID 5 array of IDE? Yeah, right. You can only have 8 IDE disks in a system, all of which use interrupts.
Re:SCSI is DEAD (Score:1)
but yeah, one of Adaptec's products [adaptec.com] ...
Re:SCSI is DEAD (Score:2, Interesting)
Just about all PCI IDE controllers can use DMA, which cuts the CPU overhead down to almost SCSI levels.
Further, not all drives have to have their own individual interrupt; it depends on the IDE chip and how the interfaces are arranged on the PCI daughterboard or on the motherboard (interrupt sharing, etc.). Promise chips will use one interrupt for two interfaces.
SCSI does offer a whole slew of advantages: disconnect, command queueing, etc. These are advantages in a RAID setup. IDE does suck, but not because it's CPU intensive or gobbles interrupts.
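If you want to see this on your own box, something like the following shows how the interrupts are shared and whether DMA is on (a rough sketch only; /dev/hda is just an example device, and hdparm output varies by version):

    # see which interrupt each IDE interface is using (shared lines show up together)
    cat /proc/interrupts
    # check whether DMA is currently enabled on the first IDE disk
    hdparm -d /dev/hda
    # turn DMA on if the chipset and drive support it, then a quick throughput check
    hdparm -d1 /dev/hda
    hdparm -tT /dev/hda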
It really depends on what you're doing... (Score:2, Insightful)
On one of our production servers we have twin 18 Gig 10krpm Ultrawide SCSI drives for the database, and a pair of 80 Gig IDE drives for the static data like web content.
The pair of U2W SCSI drives in a RAID1 can be read at about 48 Megs a second by bonnie, while the pair of 80 gig IDEs can be read at about 28 Megs a second.
pgbench, a little benchmarking program for postgresql, gets about 150 to 200 transactions per second on the dual SCSI drives, while it gets about 100 to 120 on the dual IDE drives.
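(For the curious, invocations roughly like these would produce that kind of numbers; this is just a sketch, and the mount point, database name and sizes are made up:)

    # sequential throughput on the SCSI mirror; file size should exceed RAM to beat the cache
    bonnie -d /mnt/scsi-raid1 -s 1024
    # initialize the pgbench test database, then run 10 concurrent clients
    pgbench -i testdb
    pgbench -c 10 -t 1000 testdb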
The problem is, even under its heaviest loads, that machine never handles more than 10 or 20 transactions every second. Both sets of drives are plenty fast enough to handle the load.
For servers that need hundreds of gigabytes of storage but only have to provide static storage for a small-to-medium group, the money you'd spend on SCSI is probably better spent on other options for that server.
For a database server handling hundreds of concurrent users, SCSI (via electrical cables) is a good choice, but maybe a SCSI over FC-AL setup would be needed.
Engineering isn't about which component is the absolute best, it's about which component makes the most sense for what you're doing.
Re:SCSI is DEAD (Score:2)
Fibrechannel is just another connector.
AMI sold the MegaRAID division to LSI (Score:2, Informative)
Never had any problems with it whatsoever.
Re:AMI sold the MegaRAID division to LSI (Score:2)
Well, the rest of the company are Sun advocates, but as I'm part of another division, I've taken the liberty of installing Linux on production. I've now had these machines running for over 2 months and I can't remember ever logging into the boxes after I installed them... (Although I had to tweak the setup first: the server's SCSI backplane didn't support the maximum speed, and someone also reported that the current driver might not work well in 64-bit mode, so I dropped it to 32-bit.)
So, at least the MegaRAIDs seem to work for me, but buy one, do your own tests and make your own choice =)
IBM ServeRaid (Score:3, Informative)
The drivers are maintained in the kernel, so there is no patching or downloading of drivers.
I think IBM has other models that come with more cache, so you could try calling them.
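(For reference, the in-tree driver is the ips module; here's a minimal sketch of turning it on when rolling your own kernel, though the exact menu location differs a bit between 2.2 and 2.4:)

    # in the kernel .config, under the SCSI low-level drivers section
    CONFIG_SCSI_IPS=m
    # rebuild and load it
    make modules modules_install
    modprobe ips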
Re:IBM ServeRaid (Score:3, Informative)
I've used the 3[hl] and 4[hl] series of ServeRaids for over a year under Linux (both 2.2.x and 2.4.x kernels) with decent results. I currently have about 15 IBM x340's with ServeRaid 4L's running in production for nearly a year; no problems so far, though I did avoid early 2.4.x kernels (only upgraded after 2.4.7). I've suffered through failed drives and whatnot without data loss.
If you can find the ipsutils.rpm out there, you can manage it from the command line; otherwise the Java-based ServeRaid manager will let you do everything the Windows tools do, from Linux.
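(If you do track down ipsutils, the command-line tool it ships, ipssend if I remember the name right, can pull controller status without X; something like the following is a sketch, not gospel:)

    # dump the configuration of the first ServeRAID controller (ipssend numbers them from 1)
    ipssend getconfig 1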
What about ATA RAID 5? (Score:3, Interesting)
Advantages: Cheap drives.
Disadvantages: Speed, maybe, though since it's all going directly into the PCI bus, I'm not sure this is an issue.
Anyone used these? Comments? I figure with their SuperTrak controller and a bunch of 80 or 100-gig drives, you could have half a terabyte in your basement for under two grand.
Re:What about ATA RAID 5? (Score:1)
On the Promise side, the SuperTrak SX6000 is their hardware ATA RAID solution (the PDF datasheet can be found here [promise.com]). The older version of the SuperTrak SX6000, the S/T66, is also a hardware ATA RAID controller. The FastTrak series is their soft-RAID controller line.
I'm personally looking at the 3Ware offerings (as the FreeBSD 4.x kernel has support for it, I believe in the default kernel) and possibly the Adaptec 2400A.
Re:What about ATA RAID 5? (Score:2)
Unfortunately for us small guys, 3ware is discontinuing their 32-bit PCI cards in favor of 64-bit.
Well, if I ever decide to build a server based on IDE RAID, maybe I'll buy a 64-bit mobo.
Compaq (Score:4, Informative)
Re:Compaq (Score:3, Informative)
The SmartArray works great. The little lights now light up on the drives (ya know, green, yellow and "uh-oh"). Heh.
Re:Compaq (Score:2)
I'm not sure how well these controllers work outside a Compaq server, though; I have never tried.
Adaptec 29xx/39xx (Score:1)
I'm having a lot of success with my Adaptec 29xx (2940 for SE like CD or external SE device, 2944 for LVD) and 39xx series cards. We don't use anything else in any of our operating systems (unless they are built-in to a motherboard). Granted, I'm not stressing my systems 24 hours a day... more like a few hours spread out over a regular business day.
I'm sure there are plenty who will readily disagree, but I don't think I've found, end-to-end, better hardware for SCSI controllers. Sure, getting the AAA-133 RAID controllers to work can be a challenge, but we've been nothing but happy with the rest.
We also have a lot of success with Mylex RAID controllers on several critical production boxes, though those are not *nix machines (NT 4.0 SP6).
fwiw, we pulled the DPT cards we have and replaced them with Adaptecs.
Compaq is worth a look (Score:2, Informative)
ICP Vortex (Score:2, Informative)
Great cards, great speed, and a not so bad price. They work flawlessly in Linux and Windows.
Re:ICP Vortex (Score:1)
I can second this too.
We used several ICP controllers with 2 to 7 disks (RAID-1, RAID-5, with and without hot spare) and they worked well. Mostly under Windows NT, but I built one box running Linux (2.2 at that time) with an Oracle DB (several GB of data), and we did some stress tests (about 10 users connecting, doing full table scans and updating large amounts of data); while the RAID array was working really hard, the box was entirely stable. Even simulating a drive failure did not cause data loss. And the Linux support is great, as well as the support in general (but that was before Intel bought them, so now it might be either worse or as good as it was before).
Two options to consider (Score:1)
But, you might want to consider one of the alternatives like RaidTec or its ilk. These are large boxes with RAID controllers built in and capacity for a fair number of disk enclosures. The RaidTec, for instance, can take 512GB+ (maybe 768GB+ now) and has options for redundant controllers, either fiber channel or SCSI. Just shows up as drive space. I haven't yet had a RaidTec unit up with Linux, but they claim it's fine. There are many others, with the EMC units being at the top of the cost heap.
Can you run a test? (Score:2)
Adaptec works for me (Score:1)
IDE Raid Controllers (Score:1)
We're using ten Maxtor 130-gig drives on two 3ware [3ware.com] 7510 controllers. We could still put 6 more drives on the two controllers.
Re:IDE Raid Controllers (Score:2)
Can you buy hot-swappable IDE enclosures? I've never seen any.
Performance-wise, these cards aren't top-notch. They have a very small amount of cache. Modern SCSI RAID cards take DIMMs and can be easily upgraded to more cache if necessary. These things have soldered-on memory.
For mass storage, they're great. For high-performance mass-storage, I'd still look to SCSI. Where else can you get 15000 RPM drives with 5-year warranties?
- A.P.
Maxtor?!? (Score:1)
Stability issues?! (Score:2)
I wonder if your hardware isn't going bad. I too run DPT SmartRAID controllers. A 2654U2 to be exact (the 2-channel version) in a 440LX P2/266 (we're network bound, not CPU bound), which is used for fileserving about 50 people in a file-heavy office environment. Before that, it was a SmartRAID V with the hardware cache/RAID card (which is in use in another, heavier-hit webserver).
Zero stability issues on both. The 2654U2 has a 5-drive RAID 5 + hot spare (UW2 SCA drives) on one channel and a 6x24 DDS-3 on another. I've done some pretty I/O-intense things on this controller (including rebuilding the array during office hours) with no problems at all. This is on kernel 2.4.17. The SmartRAID V is on a 2.2.14 system which has about 50 colocated web and mail sites (it does a pretty good job of keeping the T1 busy). It runs a RAID 1+0 array with really old Seagate Barracuda SCSI-1 drives and a single external DDS-3 for backup. Again, zero stability issues. I'd buy these again without hesitating.
Perhaps you need to delve deeper into the problem. The 2654U2 did not like the original P90 system the server used to run on; we had bad issues there, and the tech basically said that the original PCI spec was not good enough for the card. Upgrading the motherboard fixed it all. If you're running 2.4, make sure you're on the .16/.17 kernels, as earlier 2.4 kernels had issues with all manner of things, though not specifically the DPT I2O drivers, IIRC.
Both of these systems run the kernel drivers and use the dptutil software that DPT used to have (which you're right, has gone the way of the dodo after Adaptec's assimilation of DPT); what specifically can you do to cause problems? I don't think it's the card/drivers in general but if you give me a test or two I can run to see if I'm affected as well we might be able to fix this.
Slashdot's braindead lameness filter is not letting me post my dptutil -L output. Sorry.
Re:Stability issues?! (Score:1)
make dep; make clean; make bzImage; make clean; make bzImage; make clean; make bzImage; make clean; make bzImage (make dep only the first time), and then usually in about 3-4 seconds after that it's locked; if not, doing an ls -lR / >
Let me know if this kills your box!
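(Spelled out as a script, just a sketch of the same test; the kernel tree path and the /dev/null redirect target are my guesses, since the end of that line got eaten:)

    cd /usr/src/linux
    make dep                                   # only needed the first time
    for i in 1 2 3 4; do make clean; make bzImage; done
    # if it hasn't locked up within a few seconds of that, this usually finishes the job
    ls -lR / > /dev/null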
Controllers? Who need controllers? (Score:2, Informative)
At work we run all our Linux boxen with kernel mirroring, and it uses almost NO CPU even under pretty heavy parallel load. Great for the base OS with SCSI or IDE, since the only thing they'll do once they boot is swap to those disks. Striping your swap space across multiple drives really helps when a server starts running low on memory.
I have mirror sets running at 48 Megabytes a second on two year old 18 Gig 10k SCSI drives for streaming output, and can provide very good performance under parallel load as a database disk set.
I've never had the kernel RAID drivers act flaky since I started using them over two years ago, and I've done various things like hot-insert a RAID disk in both RAID 1 and RAID 5 (both were pretty easy to do) and typed the respected, yet undocumented --really-xxxxx (xxxxx = a 5-letter word not mentioned here!) flag a few times.
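(For anyone who hasn't set it up before, here's a rough sketch of the 2.2/2.4-era raidtools way of building a simple mirror with a hot spare; the device names are examples only, and mdadm does the same job on newer setups:)

    # /etc/raidtab: a two-disk RAID 1 with one hot spare
    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          1
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        spare-disk              0

    # create it, watch it sync, and hot-add a replacement later if a drive dies
    mkraid /dev/md0
    cat /proc/mdstat
    raidhotadd /dev/md0 /dev/sdd1

    # striping swap needs no md device at all: give each swap partition the same
    # priority in /etc/fstab and the kernel interleaves them
    #   /dev/sda2   none   swap   sw,pri=1   0 0
    #   /dev/sdb2   none   swap   sw,pri=1   0 0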
A friend is in the process of building NAS servers in 2U units with multiple IDE cards and ~500 Gigs of storage for ~$3500 or so. SCSI versions would be a bit more, bigger, and probably need more cooling, but be faster too. Right now the IDE ones are fast enough with a RAID 5 configuration.
The IDE ones can flood a 100 Base-TX connection, so performance isn't really an issue for anything on less than gigabit, and even then the IDEs will use up a goodly chunk of that.
The external RAIDs are often the fastest for databases, offering fibre-optic connections. They're not cheap, but if you're running eBay's database, cheap isn't the point anymore.
If you have to have a RAID card, I can recommend the AMI MegaRAID 428, which used, on eBay, goes for $100 right now. Not that fast (I never got more than 20 megabytes a second from one), but very solid and reliable, and they can hold up to 45 SCSI hard drives if you can afford the cooling and electrical for them. Plus the first channel looks like a regular SCSI card to anything other than a hard drive, like a tape drive or CD-ROM, so you don't need another SCSI card if you want a tape drive to back it up.
While the Megaraid site no longer has configuration software available, this site:
http://domsch.com/linux/#megaraid
points to this site:
http://support.dell.com/us/en/filelib/download/
on Dell where you can find management software for the MegaRAID controllers.
ICP Vortex good (Score:2)
IDE RAID is fine for workstations and home use... as far as I am concerned, it has no business in a corporate server environment. Anyone who tells you different is shaving pennies and hasn't a clue. Of course, your opinion may differ!