Experiences w/ Software RAID 5 Under Linux? 541

MagnusDredd asks: "I am trying to build a large home drive array on the cheap. I have 8 Maxtor 250G Hard Drives that I got at Fry's Electronics for $120 apiece. I have an old 500Mhz machine that I can re-purpose to sit in the corner and serve files. I plan on running Slackware on the machine, there will be no X11, or much other than SMB, NFS, etc. I have worked with hardware arrays, but have no experience with software RAIDs. Since I am about to trust a bunch of files to this array (not only mine but I'm storing files for friends as well), I am concerned with reliability. How stable is the current RAID 5 support in Linux? How hard is it to rebuild an array? How well does the hot spare work? Will it rebuild using the spare automatically if it detects a drive has failed?"
  • Works great (Score:5, Informative)

    by AIX-Hood ( 682681 ) on Saturday October 30, 2004 @06:45PM (#10675195)
    Been doing this with 5 Maxtor FireWire 250 GB drives for a good while, and regular IDE drives for years before that. It's always been very stable, and drives going bad have never been a problem as long as you replaced them quickly. I moved to FireWire, though, because it was much easier to see which drive went bad out of the set, and you could hot-swap them.
    • I agree; I haven't had any problems, save for one drive crash, and I have migrated the array twice to new boxes.
      I love the idea of FireWire, too; it makes perfect sense, 'cause if you are going to have RAID reliability, you might as well have hot-swap. (Note to self: save up for FireWire enclosures.)

      I would have two years of uptime, but NOOOOO, one national power outage and one drive crash (perfect recovery). Uptime 71 days =( [I WAS at 260 at one point]
      Just make sure that the drives are up to that much spin-time, a
      • Re:Works great (Score:5, Interesting)

        by k.ellsworth ( 692902 ) on Saturday October 30, 2004 @07:08PM (#10675350)
        Normally a drive crash announces itself some time in advance... use the smartctl tool.
        That tool checks the SMART info on the disk for possible failures.

        I do a lot of software RAIDs, and with smartctl no drive crash has ever surprised me. I always had the time to get a spare disk and replace it in the array before something unfunny happened.

        Do a smartctl -t short /dev/hda every week and a -t long every month or so.

        Read its online page:
        http://smartmontools.sourceforge.net/

        An example of a failing disk:
        http://smartmontools.sourceforge.net/examples/MAXTOR-10.txt
        An example of the same type of disk with no errors:
        http://smartmontools.sourceforge.net/examples/MAXTOR-0.txt

        Software RAID works perfectly on Linux... and combined with LVM, things get even better.
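
        A minimal sketch of that schedule (device names are hypothetical; repeat the entries for each disk; /etc/crontab syntax assumed):

        # Weekly short self-test, Sundays at 03:00
        0 3 * * 0   root  /usr/sbin/smartctl -t short /dev/hda
        # Monthly long self-test, first of the month at 04:00
        0 4 1 * *   root  /usr/sbin/smartctl -t long /dev/hda
        # Daily health check; -H exits non-zero if the drive predicts its own failure
        30 6 * * *  root  /usr/sbin/smartctl -H /dev/hda || echo "SMART trouble on hda" | mail -s "SMART alert" root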
        • For those looking (Score:3, Informative)

          by phorm ( 591458 )
          smartctl often comes as part of the package "smartsuite." For Debian users there is an apt package available under that name as well.
        • by anti-NAT ( 709310 ) on Saturday October 30, 2004 @10:16PM (#10676321) Homepage

          You can get smartd to execute tests automatically, using the -s option.

          In my smartd.conf file, I have :

          -s (L/../../7/03|S/../.././05)

          on the device lines, which means: run an online long test at 3 am every Sunday, and an online short test at 5 am every day.

          Running mdadm as a daemon to watch the md arrays is also a good idea.
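
          A hedged sketch of how those two pieces might look together (device names and mail address are assumptions):

          # /etc/smartd.conf: monitor everything (-a), run a long self-test Sundays at 03:00
          # and a short self-test daily at 05:00, and mail root on trouble
          /dev/hda -a -s (L/../../7/03|S/../.././05) -m root
          /dev/hdc -a -s (L/../../7/03|S/../.././05) -m root

          # Watch all md arrays in the background and mail root if a member fails:
          mdadm --monitor --scan --daemonise --mail=root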

    • You can hot swap SATA. Definitely the way to go nowadays, seeing as the drives are only $1-5 more expensive than their IDE counterparts.
  • Generally, for situations where you really need to make sure the data stays safe, I'd just stick with hardware. If you can spend that much on hard drives, I don't see why you can't spend the money on a hardware controller.

    Though from what I hear, software RAID on Linux works decently.
    • by Anonymous Coward on Saturday October 30, 2004 @06:50PM (#10675218)
      Actually, a big disadvantage to hardware RAID is what happens if your controller fails.

      Consider--your ATA RAID controller dies three years down the road. What if the manufacturer no longer makes it?

      Suddenly, you've got nearly 2 TB of data that is completely unreadable by normal controllers, and you can't replace the broken one! Oops!

      Software RAID under Linux provides a distinct advantage, because it will always work with regular off-the-shelf hardware. A dead ATA controller can be replaced with any other ATA controller, or the drives can be taken out entirely and put in ANY other computer.
      • by Anonymous Coward
        So true. In many cases HW RAID doesn't offer any advantage over software RAID, and it's just one more part that can break and cost $$$ and time to replace.

        Moderators, mod this up!

      • by Futurepower(R) ( 558542 ) on Saturday October 30, 2004 @07:06PM (#10675345) Homepage

        This is a VERY big issue. We've found that Promise Technology RAID controllers have problems, and in our experience the company doesn't provide tech support when the problems are difficult.

        • by ErikTheRed ( 162431 ) on Saturday October 30, 2004 @07:35PM (#10675485) Homepage
          The Promise controllers are SHRAID, which is my own non-standard acronym for Software w/ Hardware-assist RAID or SHitty RAID in less polite company. And the "promise" of true redundancy is a charade (rim-shot, please). Basically, you have all of the disadvantages of software RAID - the need to manually configure bootability of both drives (assuming you're running RAID 1 or RAID 0+1 - if you're running RAID5 or JBOD it's an even bigger pain), plus the need to have specialized drivers on the OS, etc. These controllers (Promise, Highpoint, etc.) should be avoided like the plague for technical reasons alone.

          Good, relatively inexpensive IDE and SATA RAID can be had with 3Ware Controllers [3ware.com]. 2-drive models start around $140, and they support up to 12 drives on their more expensive controllers. The drives appear as a single physical device to the O/S, whether it's Windoze, Linux, BSD, DOS 3.1, etc.
          • by ErikTheRed ( 162431 ) on Saturday October 30, 2004 @07:47PM (#10675539) Homepage
            This is slightly off-topic because it won't take care of the particular solution being sought, but another interesting way to do RAID-1 is using the controllers from Arco Data Protection. They have some that are physically connected between your IDE or SATA controller and the two drives to be mirrored - they just seamlessly mimic a single IDE device. This makes it possible to RAID-1 any IDE or SATA drive under any operating system or device. I've used them in places like phone systems and voice mail systems that have no provision whatsoever for RAID. It can take a little bit of case tweaking, and you have to be sure the power supply can handle it, but it's an interesting solution in certain situations where nothing else can do the job.
      • by pjrc ( 134994 ) <paul@pjrc.com> on Saturday October 30, 2004 @08:10PM (#10675691) Homepage Journal
        Consider--your ATA RAID controller dies three years down the road. What if the manufacturer no longer makes it?

        This happened to me. The card was sorta still working... it could read, with lots of errors that were usually recoverable, but writing was flaky.

        Luckily, even after about 3 years, 3ware (now AMCC) [3ware.com] was willing to send me a free replacement card. They answered the phone quickly (no long wait on hold), the guy I talked with knew the products well, and he had me email some log files. He looked at them for about a minute, asked some questions about the cables I was using, and then gave me an RMA number.

        The new card came, and my heart sank when I saw it was a newer model. But I plugged the old drives in, and it automatically recognized their format and everything worked as it should.

        This might not work on those cheapo cards like Promise that really are just multiple IDE controllers and a bios that does all the raid in software. Yeah, I know they're cheaper, but the 3ware cards really are very good and worth the money if you can afford them.

      • I have used raidweb.com enclosures in the past and they work quite well. They handle all the RAID configuration inside the box and appear as one drive to the host (hence the boxes are totally host independent). The connection between the box and the host is SCSI and I've used off-the-shelf high-end SCSI controllers for this. Their boxes have redundant fans and power supplies. They sound like a jet taking off, but my experience is that they're stable and rock solid. They're rack mountable too.

        The only
      • Suddenly, you've got nearly 2 TB of data that is completely unreadable by normal controllers, and you can't replace the broken one! Oops!

        This is also a good reason to use mirroring rather than fancier schemes like striping or RAID-5, if you can afford the capacity hit. You can always mount the drive individually.
    • Is hardware supposed to be better? If so, why?

      From what I read, software is just as good as hardware RAID these days, and sometimes better. But it's only what I read; I don't have first-hand info.
    • RAID 5 hardware tends to be rather expensive, and most RAID hardware tends to be "pseudo hardware": the drivers for the RAID card make the CPU do the actual work anyway. Your 500 MHz CPU is faster than all but the most expensive RAID controllers.

      Stick with Linux RAID. It knows how to do it better.
    • I would support the sentiment.

      Back when I was using a PII-450 as a file server, I tried out software RAID on 3 x 80 GB IDE disks. It mostly worked fine - except when it didn't. Generally problems happened when the box was under heavy load - one of the disks would be marked bad, and a painful rebuild would ensue. Once two disks were marked bad - I followed the terrifying instructions in the "RAID How-To", and got all my data back. That was the last straw for me...I decided that I didn't have time to wat
    • by kcbrown ( 7426 ) <slashdot@sysexperts.com> on Saturday October 30, 2004 @07:14PM (#10675392)
      Generally for situations where you really need to make sure the data stays safe, I'd just stick with hardware. If you can spend that much on some harddrives, I don't see why you can't spend the money on hardware.

      I disagree with this. Here's why: the most important thing is your data. Hardware RAID works fine until the controller dies. Once that happens, you must replace it with the same type of controller, or your data is basically gone, because each manufacturer uses its own proprietary way of storing the RAID metadata.

      Software RAID doesn't have that problem. If a controller dies, you can buy a completely different one and it just won't matter: the data on your disk is at this point just blocks that are addressable with a new controller in the same way that they were before.

      Another advantage is that software RAID allows you to use any kind of disk as a RAID element. If you can put a partition on it, you can use it (as long as the partition meets the size constraints). So you can build a RAID set out of, e.g., a standard IDE drive and a serial ATA drive. The kernel doesn't care -- it's just a block device as far as it's concerned. The end result is that you can spread the risk of failure not just across drives but across controllers as well.

      That kind of flexibility simply doesn't exist in hardware RAID. In my opinion, it's worth a lot.

      That said, hardware RAID does have its advantages -- good implementations offload some of the computing burden from the CPU, and really good ones will deal with hotswapping disks automatically. But keep in mind that dynamic configuration of the hardware RAID device (operations such as telling it what to do with the disk you just swapped into it) is something that has to be supported by the operating system driver itself and a set of utilities designed to work specifically with that driver. Otherwise you have to take the entire system down in order to do such reconfiguration (most hardware RAID cards have a BIOS utility for such things).

      Oh, one other advantage in favor of software RAID: it allows you to take advantage of Moore's Law much more easily. Replace the motherboard/CPU in your system and suddenly your RAID can be faster. Whether it is or not depends on whether or not your previous rig was capable of saturating the disks. With hardware RAID, if the controller isn't capable of saturating the disks out of the box, then you'll never get the maximum performance possible out of the disks you connect to it, even if you have the fastest motherboard/CPU combination on the planet.

      • Another advantage is that software RAID allows you to use any kind of disk as a RAID element.

        Also, it's partition based, not disk based (under Linux, at least). This means that with just two drives you can create one two-disk RAID-1 array (for safety) and one two-disk RAID-0 array (for performance). Just create two partitions on each drive, pair the first partition on each drive in a RAID-0 config and the second partitions as RAID-1.

        You can't do a single RAID-1/0 array with only two disks though. Yo
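
        A minimal mdadm sketch of that two-drive layout (device and partition names are hypothetical):

        # First partitions striped for speed, second partitions mirrored for safety
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hda1 /dev/hdc1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2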

  • by Anonymous Coward
    Take it from me, stick with hardware RAID 5; reliability is through the roof, and cards with 128 MB of RAM are now around $300-500. Since you spent $960 on the hard drives, you might as well trust their organization to something of equal quality.

    my 2 cents
  • by PrvtBurrito ( 557287 ) on Saturday October 30, 2004 @06:51PM (#10675222)
    Is there a good resource for hardware/software RAID support on Linux? Tech support is always a challenge, and we have a number of 3ware 8-way and 12-way cards populated with 250 GB drives. We often have lots of mysterious drops on the array that require reboots or even rebuilding the array. Royal pain in the ass.
    • by GigsVT ( 208848 ) on Saturday October 30, 2004 @07:03PM (#10675320) Journal
      I just posted in another thread about 3ware and mysterious drops of seemingly good drives. Even with the ultra-paranoid drive dropping, we have never lost data on 3ware.

      Other than that, 3ware has been decent for us. We are about to put into service a new 9500 series 12 port SATA card.

      I wish I could say our ACNC SATA-to-SCSI RAIDs have been as reliable. We have three ACNC units; two of them went weird after we did a firmware upgrade that tech support told us to do, and we lost the array.

      We called tech support and they said, "Oh, we forgot to tell you: when you upgrade from the version you are on, you will lose your arrays."
  • To me, a cheap 500 MHz computer (which probably has 64-128 MB of RAM, I'd guess) is going to choke on eight 250 GB hard drives in software RAID 5. Some other posters suggested buying a hardware controller, and I agree. If you host stuff for your friends, you can always charge them a little extra to compensate.
    • by mortonda ( 5175 )
      If all it does is serve files, it should do fine. The 500 MHz CPU is not going to be a factor at all; in fact, the CPU will be idle most of the time. The real things to optimize in a file server are the ATA bus speed and hard drive latency.
    • Performance Tips (Score:5, Informative)

      by Alan Cox ( 27532 ) on Saturday October 30, 2004 @07:09PM (#10675353) Homepage
      There are a few things that really help in some cases, but RAM isn't always one of them.

      If you've got a lot of data that is read/re-read or written/re-read by clients, then RAM really helps; for streaming stuff that doesn't get many repeat accesses (e.g. running a movie editing suite) it might not help at all.

      For performance it's often worth sacrificing a bit of space and going RAID 1. Again, it depends whether you need the space first or the performance first.

      Obviously, don't put two drives of a RAID set on the same IDE controller as master/slave, or it'll suck. Also, if you can find a mainboard with multiple PCI buses, that helps.

      Finally, be aware that if you put more than a couple of add-on IDE controllers on the same PCI bus it'll suck - that's one of the big problems with software RAID 5 versus hardware, and less of a problem with RAID 1 - you are doing a lot of repeated PCI bus copies, and that hurts the speed of today's drives.

      I use RAID 1 everywhere; disks may be cheap, but you have to treat them as unreliable nowadays.
  • by suso ( 153703 ) on Saturday October 30, 2004 @06:52PM (#10675233) Journal
    I used to work at Kiva Networking [kiva.net], and we used hardware RAID 5 on some machines and software RAID 1 and RAID 5 on others. Maybe it was just me, but the software RAID 5 disks always seemed to last longer. Never had much trouble with it. In fact, we had more problems getting the hardware RAID controller to work with Linux, or with general bugginess, than anything else.
  • Vinum with FreeBSD (Score:3, Informative)

    by Anonymous Coward on Saturday October 30, 2004 @06:58PM (#10675290)
    While it's not Linux, I've been using Vinum with FreeBSD for about 3 years with RAID 5 and have never had any problems. My current box is an old VIA 600 MHz C3 with FreeBSD 4.8 and a measly 128 MB of RAM. As far as benchmarks go, my RAID seems to blow away all of the cheap hardware cards performance-wise as well.

    BTW, I switched from Linux to FreeBSD for the server years ago for the stability.
    • by Nick Driver ( 238034 ) on Saturday October 30, 2004 @08:44PM (#10675890)
      Vinum on FreeBSD absolutely rocks! Your old 500 MHz machine will run FreeBSD beautifully too.

      Anybody here remember Walnut Creek's huge FTP archive at "cdrom.com", which back in its heyday in the late 1990s was the biggest, highest-traffic FTP download site on the planet? They used a combination of Vinum software RAID and Mylex hardware RAID to handle the load. I remember reading a discussion article from them once saying that until you got a totally ridiculous volume of FTP sessions hammering away at the arrays, Vinum was actually a little faster than the hardware array controller.
  • by ErikTheRed ( 162431 ) on Saturday October 30, 2004 @07:01PM (#10675309) Homepage
    Software raid is fine for simple configurations, but if you want to "do it right" - especially considering that you just dropped about a kilobuck on HDDs, go Hardware. A good, reasonably priced true hardware RAID controller that will fit the bill for you is the 3Ware Escalade 7506-8. It has 8 IDE ports, 1 for each drive - you don't want to run two RAID drives in master/slave mode off of a single IDE port; it will play hell with your I/O performance. It's true hardware raid, so you don't have to worry about big CPU overhead and being able to boot with a failed drive (a major disadvantage to software RAID if your boot partition is on a RAID volume, certain RAID-1 configurations excepted). You can buy them for under $450. provantage.com price [provantage.com] is $423.48 (I have no relationship with them other than I've noticed that their prices tend to be decent).
    • by Hrunting ( 2191 ) on Saturday October 30, 2004 @07:50PM (#10675553) Homepage
      Software raid is fine for simple configurations, but if you want to "do it right" - especially considering that you just dropped about a kilobuck on HDDs, go Hardware. A good, reasonably priced true hardware RAID controller that will fit the bill for you is the 3Ware Escalade 7506-8. It has 8 IDE ports, 1 for each drive - you don't want to run two RAID drives in master/slave mode off of a single IDE port; it will play hell with your I/O performance. It's true hardware raid, so you don't have to worry about big CPU overhead and being able to boot with a failed drive (a major disadvantage to software RAID if your boot partition is on a RAID volume, certain RAID-1 configurations excepted). You can buy them for under $450. provantage.com price [provantage.com] is $423.48 (I have no relationship with them other than I've noticed that their prices tend to be decent).

      Hardware RAID5 is fine if your sole goal is reliability. If you need even an iota of performance, then go with software RAID5. The 3wares have especially abysmal RAID5 performance, particularly the older series like the 75xx and 85xx cards. 3ware has admitted it, and it's something targeted for fixing in the 95xx series (I haven't gotten my hands on those yet, so I don't know).

      As for software RAID reliability, I find that Linux's software RAID is much more forgiving than even the most resilient of hardware RAIDs. I've lost 4 drives out of a 12-drive system at the same time, and Linux let me piece the RAID back together and I lost nothing. Was the machine down? Yes. Did I lose data? No. Compare that with a 3ware hardware RAID system where I lost 2 drives. Even though I probably could have salvaged 99% of the data off that array, the 3ware just would not let me work with the failed array.

      Also, on any reasonably modern system, the software RAID will be faster. You just have a much faster processor to do the RAID processing for you. The added overhead of the RAID5 processing is nothing compared to a 1-2GHz processor.
      • by rpwoodbu ( 82958 ) on Sunday October 31, 2004 @02:46AM (#10677344)

        This logic doesn't hold. Let's first talk about the performance.

        Also, on any reasonably modern system, the software RAID will be faster. You just have a much faster processor to do the RAID processing for you. The added overhead of the RAID5 processing is nothing compared to a 1-2GHz processor.

        The actual RAID processing is relatively easy, and any RAID solution, be it hardware or software, that is worth anything will not have any trouble doing the logic (perhaps the cards mentioned are indeed not worth anything). The processing isn't your limiting factor; it is data throughput. This is where hardware shines. A lot of extra data has to be shipped in and out to maintain and validate the RAID, and this can easily saturate buses. A hardware solution allows the computer to communicate only the "real" data between itself and the hardware device, and then lets that device take on the burden of communicating with the individual drives on their own dedicated buses. Sure, that device can become overwhelmed, but I submit to you that if it does, it was poorly designed.

        I am not saying that one shouldn't consider software RAID solutions. Just don't consider them because you think the performance will be better.

        Now let's talk about data recovery.

        I've lost 4 drives out of a 12 drive system at the same time, and Linux has let me piece the RAID back together and I've lost nothing. Was the machine down? Yes. Did I lose data? No. Compare that with a 3ware hardware RAID system where I lost 2 drives. Even thought I probably could have salvaged 99% of the data off that array, the 3ware just would not let me work with that failed array.

        Let us be clear: we are talking about RAID5. In RAID5, you simply cannot lose more than one drive without losing data integrity. And it isn't like you can get back some of your files; the destruction will be evenly distributed over your entire logical volume(s) as a function of the striping methodology. So it is quite impractical to recover from this scenario. I don't know what kind of system was being employed with this 12-drive array that can withstand a 1/3 array loss, but it certainly wasn't a straight RAID5. I can come up with some solutions that would allow such massive failure, but then we aren't comparing apples to apples. I'd be very interested in knowing what the solution was in this example case. It should also be noted that we don't know how many drives were in the system that lost 2 drives, much less what kind of RAID configuration was being used. No conclusion can be derived from the information provided.

        As an aside, more often than not, when we as individuals want a large cheap array, we are less concerned about performance than reliability. We put what we can into the drives, and we hope to maximize our data/$ investment while minimizing our chances for disaster. A software RAID5 is a good solution. Some posts have said that if you can spend so much on the drives, what's stopping you from spending on a nice hardware controller? I submit that perhaps he's broke now! And besides, a controller that can RAID5 8 drives is quite the expensive controller indeed. This has software RAID written all over it.

    • by photon317 ( 208409 ) on Sunday October 31, 2004 @01:48AM (#10677198)

      Don't forget that hardware RAID is a single point of failure. The best solution for the absolute best redundancy and performance is software RAID set up to be fault tolerant of controller failures. For example, put two separate SCSI cards in the box, software-mirror your data between them, and then stripe on top of that for added performance if you have the drives. When using striping and mirroring together, always mirror at the lowest level, then stripe on top of that.

      The basic idea is:

      C == controller
      D == disk
      R == virtual raid disk

      C1 --> D1,D2,D3
      C2 --> D4,D5,D6

      R1 = mirror(D1,D4)
      R2 = mirror(D2,D5)
      R3 = mirror(D3,D6)

      R4 = stripe(R1,R2,R3)
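
      Expressed with mdadm, that layout might look something like this (hypothetical device names, three disks per SCSI controller):

      # Mirror each pair across the two controllers first...
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdd1
      mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1
      mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
      # ...then stripe across the mirrors
      mdadm --create /dev/md4 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3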
  • by mcleodnine ( 141832 ) on Saturday October 30, 2004 @07:07PM (#10675348)
    Don't hang a pair of drives off each controller. Get a truckload of PCI ATA cards or a card with multiple controllers. Don't slave a drive. (No, I do NOT know what the correct PC term is for this).

    Also, give 'mdadm' a whirl - a little nicer to use than the legacy raidtools-1.x (Neil's stuff really rocks!)

    Software RAID5 has been working extremely well for us, but it is NOT a replacement for a real backup strategy.
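
    For the original eight-drive question, a hedged mdadm sketch (seven active members plus one hot spare; device names are hypothetical and assume one drive per IDE channel):

    mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 \
        /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hdm1 /dev/hdo1 /dev/hdq1 /dev/hds1
    # If a member fails, md rebuilds onto the spare automatically; watch progress with:
    cat /proc/mdstat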
  • by brak ( 18623 ) on Saturday October 30, 2004 @07:13PM (#10675389)
    You will get responses from people with good and bad experiences, but they are all colored by their own particular cases. After seeing what can happen with dozens of machines (8-drive and 4-drive) running Linux software RAID5, here is some concrete advice.

    First, ensure that all of the drives are IDE masters. Don't double up slaves and masters.

    Secondly, DON'T create one gigantic partition on each of the 250s and then RAID them together; you will get bitten, and bitten hard.

    Here's the skinny...

    1) Ensure that your motherboard/IDE controllers will return SMART status information. Make sure you install the smartmon tools, configure them to run weekly self tests, and ensure you have smartd running so that you get alerted to potentially failing drives ahead of time.

    2) Partition your 250GB drives into 40 GB partitions. Then use RAID5 to pull together the partitions across the drives. If you want a giant volume, create a Linear RAID group of all of the RAID5 groups you created and create the filesystem on top of that.

    Here's why, this is the juice.

    To keep it simple, let's say there are 20 sectors per drive. When a drive gets an uncorrectable error on a sector, it will be kicked out of the array. By partitioning each drive into 5 or 6 partitions, let's say hd(a,c,e,g,i,k,l)1 make up one of the RAID5 groups, covering sectors 1-4 (out of the fake 20 we made up earlier).

    If sector 2 goes bad on /dev/hda1, Linux software RAID5 will kick /dev/hda1 out of the array. Now, it's likely that sector 11 might be bad on /dev/hdc. If you hadn't divided up the partitions, you would lose a second disk out of the array during a rebuild.

    By partitioning the disks you localize the failures a little, thus creating a more likely recovery scenario.

    You wind up with a few RAID5 sets that are more resilient to multiple drive failures.

    If you are using a hot spare, your rebuild time will also be less, at least for the RAID5 set that failed.

    I hope this makes sense.

    My advice to you is to bite the bullet and simply mirror the disks. That way, no matter how badly they fail you'll have some chance of getting some of the data off.
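
    A hedged sketch of the slice-and-recombine layout described above (device names are hypothetical; only the first two of the per-slice RAID5 sets are shown):

    # RAID5 the matching ~40 GB partition from every drive
    mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/hd[acegikmo]1
    mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/hd[acegikmo]2
    # ...repeat for the remaining slices, then append the RAID5 sets into one big linear device
    mdadm --create /dev/md9 --level=linear --raid-devices=6 /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
    mke2fs -j /dev/md9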
    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Saturday October 30, 2004 @08:12PM (#10675704)
      Comment removed based on user account deletion
      • Actually, I can see some sense to that. He did mention failing during rebuild. That's when we are at the greatest risk of another failure after all since they are working harder than normal.

        If you have one large partition and an impending drive failure wipes out any cylinder on that drive, all the data on it is shot. That drive won't be used at all during the rebuild... a rebuild of 250 GB. You are at risk if, during any time of the long rebuild, a 2nd drive fails completely or even coughs up a bad cylinda

      • Hard drives have spare sectors set aside for sectors that die, and they are automatically remapped. If software RAID is detecting errors, just REPLACE THE DRIVE. The entire drive will die soon anyways.

        Not quite. In my experience, bad sectors are only remapped by the drive firmware on write. Attempts to read bad sectors will return errors. This makes sense if you think about it; you might be trying to recover data, and the sector might be readable once in a hundred tries, but if you're writing to the secto

  • by Anonymous Coward on Saturday October 30, 2004 @07:23PM (#10675441)
    Two pieces of advice: (1) Look into mdadm, it saved my array once when I had to move it from one server to another, (2) look into smartd as a way to monitor the individual disks and detect failures. Okay, well then, _three_ pieces of advice. (3) make sure you look into ext2/3 filesystem parameters like the size of the journal (max it out) and the -R stride= option.

    mdadm will allow a "spare pool" shared between multiple RAID devices and smartd will check the state of the disk controllers at regular intervals. You should put the system _and_ the disks on UPS to avoid losing data in the event of a power failure (the disks need to write their cache to the physical media before it evaporates). Set up something (mdadm or smartd) to email you in the event of a disk failure, or you may be running in degraded mode for quite a while before you discover it (unless you look at /proc/mdstat regularly).

    All in all, it seems to work fairly well if you spread the disks across multiple channels, if you have enough RAM for page (buffer) cache, and if you get reliable disks. I have a 4-disk SCSI storage box that I have in RAID 5 mode. It has been running for over two years. The server failed and I had to move it; that is when I discovered mdadm -- A LIFE (DATA) SAVER!
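
    A hedged sketch of the filesystem and spare-pool pieces mentioned above (the chunk size and device names are assumptions):

    # For a 64 KB md chunk size and 4 KB ext3 blocks, stride = 64/4 = 16; -J size=400 maxes out the journal
    mke2fs -j -J size=400 -R stride=16 /dev/md0
    # To share one spare between two arrays, give both the same spare-group in /etc/mdadm.conf;
    # mdadm --monitor will then move the spare to whichever array degrades:
    #   ARRAY /dev/md0 UUID=<uuid-of-md0> spare-group=pool
    #   ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=pool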
  • 500 MHz? (Score:3, Informative)

    by tji ( 74570 ) on Saturday October 30, 2004 @07:32PM (#10675479)
    You may not need much CPU performance for file service.. after all, it's mainly just doing DMA to/from disks. But, I assume it's just your standard PC motherboard, with a single 32bit 33MHz PCI bus.

    If you're spending $960 for the disks at Fry's, why not spend another $80 to $250 at that same Fry's and get a current generation motherboard and CPU (they have package deals that are dirt cheap).

    For $80, you can get a 5x faster processor, and a much newer chipset with ATA133 and Serial ATA.

    For $250, you can get a board with multiple PCI busses, PCI-X and a chipset capable of handling much more throughput than a cheap PC motherboard.

    The I/O bandwidth will be your bottleneck with an 8 drive RAID array. The standard 32bit / 33MHz PCI bus only does about 1Gbps. Serving a gigabit ethernet connection will use all your bandwidth by itself.. when you have 8 ATA drives fighting the NIC for bandwidth, you can see a clear problem.

    If you're spending that much for the drives, don't hamstring it by skimping on the motherboard. And, in any case, once you have a Linux box installed, you inevitably start using it for many tasks (caching proxy, mail server, ftp server, dns server, www server, etc). So, a beefier system will stand up better.
  • by bicatu ( 256030 ) on Saturday October 30, 2004 @07:33PM (#10675481) Homepage
    Hi,

    The scenario you've mentioned is probably fine for software RAID. I use it in a production environment without problems, under higher stress than your setup will probably see.

    I'd suggest you consider the following items:
    a) Cooling system - those HDs can generate a lot of heat. Buy a full tower case and add HD coolers to make sure your HDs stay cool.

    b) Buy the HDs from different brands and stores - RAID5 (either hardware or software) can recover from only one failed drive. If you buy them all from the same brand/store, chances are you'll end up with 2+ drives with the same defective hardware.

    c) CPU - if you are going to use this many drives, the processor will be a major bottleneck. Do not forget that RAID5 XORs your data to calculate the parity.

    d) Partition scheme - use smaller partitions and group them together using LVM. This will help you recover from a smaller problem without taking a lot of time to rebuild the array.

  • by mikej ( 84735 ) on Saturday October 30, 2004 @07:44PM (#10675531) Homepage
    To answer your actual question, whether or not the linux kernel's software RAID implementation is safe... "yes". I used it in production for NFS fileservers as far back as the 2.2 series; it performed wonderfully under high load then and has worked just as well when I've used it off and on since, both in production and on test systems. There are lots of suggestions elsewhere in the thread about things to avoid - multiple devices on the same IDE channel is the big gotcha: don't do it, its performance is particularly horrific during array reconstruction, just when you need it to run as fast as it possibly can. Keep those suggestions in mind when you build the system, but you can categorize the RAID implementation itself as more than sufficiently reliable.

  • by rimu guy ( 665008 ) on Saturday October 30, 2004 @07:51PM (#10675565) Homepage

    I manage a lot of servers remotely. I started out using the hardware RAID support on my server's mobos. But there were issues with that.

    First, it was hard getting Linux driver support (I think drivers were available, but it was a matter of downloading them, and I don't believe they worked on the 2.6 kernels I used).

    Then the RAID setup required BIOS settings. When you only have remote access to a server (and no KVM-over-IP), that means you need to work through a tech at the DC. Not, umm, ideal.

    And finally, there was the issue of 'what if I need to move these disks to a different server' - one that doesn't have the same RAID controller. Well, it wouldn't work.

    Anyway, I ended up using software RAID. I've used it now on a few dozen servers, and I'm really happy with it. Performance seems fine, albeit I'm not using it in really I/O-critical environments like a dedicated database server. In 99% of cases I'd now use software RAID in preference to hardware RAID.

    What follows are a few tips I'd like to pass along that may help with getting a software RAID setup going...

    If you get the chance, set up RAID on / and /boot via your OS installer (on a new system). Doing it afterwards is a real pain [tldp.org].

    Build RAID support and RAID1 and RAID5 into the kernel (not as modules). You'll need that if you boot from a RAID1 boot partition. Note: if you are using RAID5 you'll need RAID1 built in too (since I believe in the event of a failed disk the RAID personality swaps from RAID5 to RAID1).

    With a 2.6 kernel build I've been getting "no raid1 module" errors at the make install phase when building with a RAID-ed / or /boot. The 'fix' is to compile the RAID support you need into the kernel (not as modules) then run: /sbin/mkinitrd -f /boot/initrd-2.6.8.1.img 2.6.8.1 --omit-raid-modules (substituting your kernel image name/version).

    Every now and then I've had the kernel spit a drive out of a RAID array. I've found that sometimes the kernel is being overly cautious. You can often raidhotremove the drive and then raidhotadd it back again, and you may never see the problem again. If you do, it probably really is time to replace the disk.

    Rebuilding a RAID array goes smoothly. It happens in the background while the Linux machine is in multi-user mode, and the md rebuild code guarantees a minimum rebuild rate. From memory it takes about an hour or two to do a 200 GB RAID1 array.

    You can see the RAID rebuild status in /proc/mdstat. I run a very simple script [rimuhosting.com] to check the RAID status each day and send out an email if it is broken.

    If you are using a RAID-ed /boot, grab the latest lilo [rr.com] since IIRC it has better RAID support than what is in the distros I use.

    Hard drive-wise I've been happy with Seagate Barracudas. I've had to replace a few failed Western Digital drives. (Just my recommendation from experience, it could just have been good/bad luck on my part).

    One neat trick with software RAID is that your drives don't have to be the same size. You do RAID on partitions, and your array sizes itself to the smallest member in the array.

    Tip: always leave a bit of spare space on any device you are RAID-ing, e.g. a 4 GB swap partition. Then if a drive fails and needs to be replaced, and your replacement varies in size slightly, you'll still be able to use it. Not all 40/120/200 GB drives are created with equal sizes :).

    In summary: Software RAID=good. Decent performance. I've had no real kernel bugs with it. No need for BIOS access. Easy to move drives between servers. Easy to monitor failures. Non-intrusive/minimal downtime when recovering a failed devi
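
    The daily check script mentioned above can be as small as this (a sketch; the mail command and address are assumptions):

    #!/bin/sh
    # A failed member shows up as an underscore in the [UU...] status string in /proc/mdstat
    if grep -A 1 '^md' /proc/mdstat | grep -q '_'; then
        mail -s "RAID degraded on `hostname`" root < /proc/mdstat
    fi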

    • A few other hints (Score:3, Informative)

      by anti-NAT ( 709310 )

      If you run smartmontools, you can configure smartd to not only monitor the SMART status of the disks, but also execute online tests - have a look at the "-s" option of smartd. For my RAID1 array, for each device, I have -s (L/../../7/03|S/../.././05) entries.

      mdadm also has a daemon mode which can monitor the arrays, and if there are any failures, send an email to a designated email address.

  • by mprinkey ( 1434 ) on Saturday October 30, 2004 @07:53PM (#10675576)
    I have built at least two dozen software RAID5 boxes over the past few years, usually with Promise controllers and Maxtor drives. Performance is generally pretty good. Here are bonnie numbers for my 1.2 TB media server (five Maxtor 300 GB drives in software RAID5). These numbers are a little slower than other systems' because it uses an Athlon motherboard; I have found that Intel chipset boards generally give read performance of ~100-140 MB/sec.

    [root@media root]# more bonnie20.log
    Bonnie 1.2: File '/raid/Bonnie.27772', size: 2097152000, volumes: 10
    Writing with putc()... done: 14517 kB/s 83.2 %CPU
    Rewriting... done: 25060 kB/s 17.1 %CPU
    Writing intelligently... done: 41987 kB/s 29.5 %CPU
    Reading with getc()... done: 18830 kB/s 96.1 %CPU
    Reading intelligently... done: 82754 kB/s 62.2 %CPU

    Using an older processor/motherboard is probably not a huge concern. I've used 300 MHz Celerons before. Of course, your performance might not be as high as this, but if you are using this as network attached storage (NFS or SMB), you will likely be limited to 12 MB/sec due to fast ethernet. If you have (and need) gigabit transfer speeds, you should probably use a better motherboard/CPU.

    Lastly, remember that you shouldn't skimp on the power supply, and get a UPS that automatically shuts the system down. The *only* data loss I have ever had on RAID5 arrays came from power-related issues. Heed my warning! 8)
    • Sorry to reply to my own post. More information... avoid putting a master and slave on the same port. Sometimes, if one of the drives goes, it will whack the entire port and drop the other drive out as well. In RAID5 this is bad, though not unrecoverable: it might require you to manually rebuild (mkraid --secret-option) to get the data back after replacing the drive. That is a scary situation that can be easily avoided by using only one drive per IDE port.

      That information may be (and probably is) outdated with reg
  • by jusdisgi ( 617863 ) on Saturday October 30, 2004 @08:42PM (#10675883)
    Jeez, I've never seen so many plain fools in all my life. Hardware RAID controllers! How quaint.

    Here's what you do with those 8 fine drives of yours.

    You'll need 9 486's. Get some sort of *nix on each one, preferably several different Linux variants and at least 2 BSD machines (I'd say more, but you know, netcraft confirms and all....) and get them all networked together. Put one drive each in 8 of the machines, format with the filesystem that's most convenient for the system on each box, and get an NFS server going serving that partition.

    Then, on the ninth box, mount all the NFS shares and software RAID them.

    Trust me. This is exactly what you want to do, and anybody who says different is a dumbass. People who point out what they will invariably say are "obvious shortcomings" of this setup are merely trolls, and not worth your time reading.
  • by AaronW ( 33736 ) on Saturday October 30, 2004 @09:26PM (#10676066) Homepage
    After months of problems with DMA timeouts and lockups caused by using a Highpoint RAID controller and a Promise IDE controller, I finally bit the bullet and bought a 3Ware Escalade controller. All of a sudden, everything is completely stable.

    Do yourself a favor: get a good hardware RAID controller and make sure it has good Linux support. Promise sucks. They advertise Linux support on the box - they lie; it only works with specific 2.4 kernels. 3Ware has good driver support for Linux included with the Linux kernel source code.

    -Aaron
  • PCI bottleneck (Score:3, Interesting)

    by Mike Hicks ( 244 ) * <hick0088@tc.umn.edu> on Saturday October 30, 2004 @09:37PM (#10676126) Homepage Journal
    I haven't read all of the comments in detail, but I think one thing that people are often forgetting is that a standard PCI bus has a theoretical maximum bandwidth of 133 MB/s, a level you'll probably never see in real life, especially when there's a fair amount of chatter on the bus from different devices (and you'd get a lot of that with 8 drives plus networking plus who knows what else). Of course, PCI bus layouts vary considerably between simple motherboards and high-end ones.

    I don't know if anyone makes PCI-X ATA-133 controllers (non-RAID), so in the final analysis it might be best to get a 3ware card with a 64bit connector and plop it in a long slot. Of course, you need a pretty nice motherboard for that. I guess I haven't gone shopping recently, but they weren't that common the last I checked (and everyone is going to head for PCI-Express shortly anyway).

    Of course, it all depends on what you'll use the machine for. If it's just file serving over a 100Mbit network, there's no need to worry that much about speed. It's only a big deal if you're concerned about doing things really fast. I believe good 3ware RAID cards can read data off a big array at 150-200 MB/s (maybe better). My local LUG put a ~1TB array together for an FTP mirror with 12 disks (using 120GB and 160GB drives, if I remember right) about 2 years ago, and testing produced read rates of about 120 MB/s on a regular PCI box (I think.. my memory is a bit flaky on that). Of course, I don't think anything was being done with the data (wasn't going out over the network interface, to my knowledge, just being read in by bonnie++ I suspect).
  • by itzdandy ( 183397 ) on Saturday October 30, 2004 @09:44PM (#10676164) Homepage
    In Linux, RAID5 is a very solid and fast solution. Even on a 500 MHz machine it is faster than all but the most expensive hardware cards, as most cards have a 133 MHz chip or less.

    Also, software RAIDs are hardware-independent. They can be modified easily while the system is up, without rebooting. If hot-swappable drives are used, downtime can be eliminated by hot-swapping and rebuilding a failed drive.

    Also, I have been in a discussion about the new cachefs patch in recent -mm kernel patches (or maybe nitro?), which lets you use a RAM cache with any filesystem, so you could mount your RAID array through cachefs with a given amount of RAM for write cache :) It should give a nice performance boost on many systems. The patch is designed to improve transferring files over networks but is shown to work equally well for local devices.

    AND, Linux software RAID works on a per-partition basis, so you can mix and match drive sizes without wasting space. Eight 250 GB drives can mate up with four 300 GB drives, and the leftover 200 GB can be made into another array.

    You can easily add IDE cards and increase the size of your array.

    You can spread your array over a large number of IDE cards for better redundancy - no single card will cripple your array - and IDE cards are much cheaper than hardware RAID cards.

    Linux can be booted from a software RAID, while it has trouble with some hardware RAIDs (driver issues)!

    I run a software RAID5 over 12 Seagate 120 GB drives with no problems. I get great transfer speeds across the (gigabit) network, and it's easy to manage drive spindown because the system sees each individual drive, while hardware RAID solutions typically only let the system see the array as a single device.

    Most hardware arrays are mainly configured at boot time; while you build or repair an array, your system will not be working. If you run a Linux fileserver/firewall, your firewall doesn't function during a hardware RAID rebuild, while it does with software.

    --

    Though I would go with a faster processor, you should have very good luck, reliability, and performance from an 8-device software RAID5 - and a nice 1.7 TB array.
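
    To illustrate the per-drive spindown point above, a hedged hdparm sketch (the timeout value and device names are assumptions):

    # -S 242 means a standby (spin-down) timeout of 1 hour; repeat for each member drive
    hdparm -S 242 /dev/hde
    hdparm -S 242 /dev/hdg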
  • by patniemeyer ( 444913 ) <pat@pat.net> on Saturday October 30, 2004 @09:47PM (#10676175) Homepage
    I am very happy with my Linux / 3Ware 4-port RAID card combination. It makes things brain-dead simple and takes Linux out of the loop of things that could trash the RAID. I even forgot to install the *drivers* for the RAID in the initial install and it all just worked fine... because the box thinks it's one big magical drive. (The drivers were only necessary for monitoring.)

    Spend the extra $200 on a 4 port card... put a *big* fan on the drives because that's the #1 killer and you'll be happy.

    Pat
  • by Skuld-Chan ( 302449 ) on Saturday October 30, 2004 @09:49PM (#10676191)
    It worked okay until one day I had to reboot my file server (moved locations) and I couldn't get the RAID to come back up. I lost all my data :(. The bad part is that when it came to forums, IRC, and generally trying to get help, there really wasn't any; a good amount of the documentation and troubleshooting information out there is for the older tools. I generally believe that when it comes to your data you can only trust tools you can actually support - software RAID for all intents and purposes seems highly alpha/beta.

    Anyhow, I bought a 3ware 7450 RAID controller and haven't looked back - it's brutally fast (20-30 MB a second in sequential writes), fully supported in Linux, and a piece of cake to set up.

    It's not bad at recovering either - I had a power failure and the UPS failed later on. The machine restarted of course when the power came back, and the 3ware controller automatically rewrote all the parity on the disks - everything was fine. While it wrote the parity the system was up and running instantly (the RAID was in a failed state of course).
  • Fine (Score:3, Informative)

    by captaineo ( 87164 ) on Saturday October 30, 2004 @10:29PM (#10676383)
    I have a 160GB Linux software RAID-5 consisting of three 80GB disks, running 24x7 for years now. (when I built the RAID, 80GB was the largest disk capacity you could buy :).

    No problems at all. I once had an IDE controller fail - I replaced it (had to reboot of course), and Linux rebuilt the array automagically.

    I have not tried using a hot spare.

    Warning: a lot of the documentation out there on the web about Linux software RAID is very out of date. If you go this route, DEFINITELY buy the book "Managing RAID on Linux" (O'Reilly). Also be prepared to compile the "raidtools" package, which you need to set up arrays.

    I have since added an 8-disk system based on 3Ware's 9000 series SATA RAID controller. I recommend 3Ware for higher-performance systems. (I have 8 250GB disks in a single 1.6TB RAID-5, I get about 180MB/sec read, 90MB/sec write.)
  • raid5 + debian (Score:3, Interesting)

    by POds ( 241854 ) on Saturday October 30, 2004 @10:56PM (#10676511) Homepage Journal
    I'm running RAID 5 on, I think, 2.6.8 with 3 drives. That is, I'm running it on the root partition and it runs all right, although I have noticed it has gotten sluggish... maybe a defrag is in order?

    When I started out, Firefox was loading in 2 seconds, and it now appears to take around 4 seconds to load. At least I think those measurements are OK. If you want real speed, I'd think about using RAID 0+1, as it seems 4 disks in a RAID0 array would be faster than 8 in a RAID5? I'm not too sure about that, but RAID5 is apparently significantly slower than RAID0. Also, using the other 4 disks to mirror the RAID0 array could be more useful than RAID5's parity redundancy.
  • by tylernt ( 581794 ) on Sunday October 31, 2004 @12:41AM (#10676924)
    I've stuck 4 7,200rpm IDE drives in a case... and promptly killed two of them within days. I had to add a rear exhaust fan to the case, a PCI slot blower, and I removed the blanking panels in front of the drives to get more airflow. The drives now stay merely warm to the touch (instead of HOT), and the drives have been fine ever since.

    8 of those suckers are going to get toasty without plenty of auxiliary cooling.
    • Yes, heat is definitely an issue, and an issue I didn't even think about when setting up my 4 disk linux software raid 5 set.

      After I set it up for the first time, I had a drive die on me really quickly and noticed when I replaced it that it was murderously hot. As in "burning my fingers" hot. So I went and bought these little hd cooling fans that fit in front of a 5 1/4" drive bay (and come with 3.5" drive mounting adapters) and have 3 little fans on them. They cost about $7 each. I put 4 of them in my mac
  • My experience (Score:3, Informative)

    by Mike Markley ( 9536 ) <madhack&madhack,com> on Sunday October 31, 2004 @04:41AM (#10677697)
    I've got a 4x160 GB SATA software RAID-5 array (about 450 GB usable) serving up files on my home network right now, running under the 2.6 kernel.

    These drives are all crammed into an old Dell that was my Wintendo a couple of years ago. A few months back, the grilles on the drive-bay coolers I installed got clogged up and I lost one of the drives to overheating. Upon replacing the drive, the rebuild took the better part of an evening (but didn't need to be attended). No lost or corrupt data.

    The only major problem I had was that the RAID was dirty in addition to being degraded (insert "your mom" joke here), because I brought my machine down hard before realizing what was going on. In theory, I could have done a raidhotremove on the bogus drive and brought things down normally

    I ended up having to do some twiddling to get it to rebuild the dirty+degraded array. I don't remember what that was, but as long as you don't do something boneheaded like ignore kern.log messages about write errors to a specific drive, get annoyed that it's taking so long to cleanly unmount the filesystem, and hard-reset the box, that shouldn't be an issue :).
