
What Kind Of Software RAID Are You Running?

ErikZ asks: "Lately, I'm having issues with my RAID. Specifically, closed source drivers for my RAID card that only support Red Hat 9. So I've decided to eBay the card, and try to figure out how to turn 4 SATA drives into a software driven RAID 5 setup. Yes, I know I'll lose all the data, and I'm not worried about it. Finding a 4 port (or more) SATA controller card that's well supported under Linux has been difficult. Everyone wants to slap on their own RAID chip and charge you another $100 for the pleasure. Where can a guy get a highly recommended, well supported, 4 port SATA card for Linux? The Rocket 1540 cards have vanished off the face of the earth. There are a few motherboards out there that have 4+ SATA connectors on them, but they also add RAID and some other cutting edge features that aren't well supported under Linux. So, I thought I'd try another route and ask Slashdot: What are you using for your Linux software RAID needs? What do you suggest?"
  • by eakerin ( 633954 ) on Monday April 11, 2005 @09:27PM (#12207618) Homepage
    Why not just let linux handle the raiding of the drives? No special hardware needed outside of the drive controllers you already need to hook the drives up.

    For more information check: man md

    Also, RAID 5 is distributed parity raid: no data loss if only one drive goes. It takes two failures to lose data on a raid 5 array.
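
    For the four-drive RAID 5 the poster wants, a rough mdadm sketch would be something like this (assuming the drives show up as /dev/sda through /dev/sdd, each with a single "Linux raid autodetect" partition; the mount point is just an example):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkfs.ext3 /dev/md0         # or whatever filesystem you prefer
    cat /proc/mdstat           # watch the initial parity build
    mount /dev/md0 /mnt/raid   # example mount point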
    • by TheWanderingHermit ( 513872 ) on Monday April 11, 2005 @09:34PM (#12207678)
      If it's not included, the full package is mdadm. There are a number of tutorials on the web for it. It's easy to set up, and easy to run (just ignore it). I don't remember if it can work with hot swapping (I don't need that yet), but I'm using it on several systems. I set it up, and I haven't had to worry about it since.

      When I first got it, I stuffed a lot on the raid drive, disabled it, wiped out one disk, and re-activated the raid. It rebuilt it and worked fine.

      I asked this question on a Debian (or Debian based) user list at a time when a lot of experienced admins were around, and overall the feeling was that there was no need to go hardware and the software raid would do the job.
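
      For the curious, the wipe-and-rebuild test above boils down to a few mdadm commands. A sketch, assuming the array is /dev/md0 and the member being swapped is /dev/sdb1:

      mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # mark it faulty and pull it from the array
      # ... wipe or physically replace the disk ...
      mdadm /dev/md0 --add /dev/sdb1                       # re-add it; md rebuilds in the background
      cat /proc/mdstat                                     # shows the rebuild progress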
      • Unless you want to boot, in which case you will need a separate boot drive because the BIOS can't load the kernel off a software raid.

        Which defeats the purpose, as the performance gains from RAID are going to be greater on the system drive (swap space, loading programs, libraries, program resources etc) than on the data drive, which is typically multimedia data where performance isn't a factor, as when you save it, your download speed is the bottleneck, and when you play it, the multimedia files have set bit
        • Those are good points, and ones I had not considered, since all the data on my raid is text or tables for MySQL. So far I haven't seen any indication of a bottleneck at all. The boot drive is separate (and both RAID and boot are backed up and stored on a separate system which will be offsite one day).

          I also am confused -- you say data drives are mostly multimedia, and point out that they have set bitrates for reading, then get vulgar about someone getting good performance for mp3 players. It seems you a
      • Don't ignore it, be sure to run mdadm in monitor mode to tell you when something fails.
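
        Something along these lines works as a starting point (a sketch; check your mdadm man page, since option spellings vary a bit between versions):

        mdadm --monitor --scan --mail root --delay 300 --daemonise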
    • Why not just let linux handle the raiding of the drives? No special hardware needed outside of the drive controllers you already need to hook the drives up.

      Another point to this: there are three kinds [linux.yyz.us] of raid setups:
      • hardware raid - where the OS speaks RAID-specific commands to the controller
      • firmware raid - where software RAID is implemented in the firmware of the card
      • software raid - RAID done by the OS

      From my reading on forums and other various articles, there's almost no (if any) benefit to using firmware-based raid over true software raid.

      • Actually, the RAID cards you refer to as 'firmware RAID' perform most of the RAID operations in the driver of the OS (though some, at least, provide enough BIOS support in order to get the OS booted). Promise and Highpoint's cheaper cards work like this, and Linux's ide-raid drivers used to support some of these chipsets (along with the manufacturer's own drivers).

        IMHO, the only reasons to use these as anything other than bog-standard ATA controllers are a) if you have a pre-existing RAID setup that you w

      • From my reading on forums and other various articles, there's almost no (if any) benefit to using firmware-based raid over true software raid.

        Yes, there is - the same major (practically *only*, these days) advantage hardware RAID gives you:

        Transparency.

        Both hardware and "firmware" RAID present a single block device to the OS and BIOS, making it feasible to install and boot the OS on that RAID device and not worry about a hardware failure crippling the machine (a distinct possibility with software RAID i

    • by Andy Dodd ( 701 )
      Read his post - He's trying to find drive controllers that support 4+ SATA drives but *do not* include hardware raid.

      He's not asking how to set up the software, he's asking about hardware that contains the features he needs for software RAID (many ports) but not redundant features that reduce compatibility and/or add significant cost. (hardware/firmware RAID).

    • Because first I would need a motherboard with 4 SATA ports on it, which I don't have, so I need to get a SATA card.
  • It's been running my raid for a couple of years...I think. I cannot honestly remember when I first configured it.

    There are some things to be aware of. If you want to mount / as a raid, it can be tricky. The initrd needs to be properly configured, or the drivers must be built into the kernel.

    Sometimes, the raids don't shut down completely. I've never been able to completely solve this problem. Most of the time it's OK, but some machines have trouble. The most common culprit has been NFS.

    GRUB &
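
    For what it's worth, a GRUB (legacy) menu.lst entry for a root-on-md box looks roughly like the sketch below; the kernel/initrd paths and device names are just examples, and it only boots if the RAID drivers are in the initrd or compiled into the kernel:

    title  Linux (root on RAID)
    root   (hd0,0)
    kernel /vmlinuz root=/dev/md0 ro
    initrd /initrd.img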
  • megaraid (Score:3, Informative)

    by Johnny Mnemonic ( 176043 ) <mdinsmore@NoSPaM.gmail.com> on Monday April 11, 2005 @09:34PM (#12207672) Homepage Journal

    I like this one: MegaRAID SATA 150-4 [lsilogic.com]. Admittedly, I've only used it under OS X Server, as it's apparently what Apple uses in their OEM; but they do have linux drivers and I can only assume that they work as well, if not better. Straightforward setup on the CLI, and not too expensive.

    Personally, for $300 I wouldn't screw around with a software raid unless this is your own personal box and the drives only have MP3s.
    • We use this one, too. There was actually an article a while back here and quite a few folks liked software RAID because CPUs are now fast enough to support it without causing significant degradation. After skipping over the low end Adaptec (the $150 product), Promise, and Highpoint; the middle ground came up and the LSI MegaRAID SATA 150-4 made a lot of sense. We currently use it under Windows 2000 and the product is supported under Linux. Unfortunately it's not supported under *BSD. We ran into a problem
    • noooo! noooooooo!

      the linux megaraid drivers are pretty terrible. we have a couple of machines at work with *unbelievable* throughput problems due to the poor linux megaraid drivers.

      • I'd mod you up myself, but then people would just think you were modded up for saying something that *might* be true. Instead, I'd rather say that it *is* true. We've been rather pissed off at the pitiful performance from our megaraid arrays. Stuff like 35MB/s writes. You could almost beat that with a single disk. Sheesh.

        For cheap setups, I go with 3ware. For more expensive ones, we use an external raid array with a scsi uplink to the computer. The cache, battery backup, and simplicity of host

    • Although it costs an extra $80, the MegaRAID SATA 150-6 [lsilogic.com] is really one of the best of its kind on the market simply due to its available battery backup support. IME, battery backup can really make the difference in terms of reliability, especially when you have controller caching enabled (particularly if you are doing DB/transactional work with RAID 5). Just a thought.

  • by joto ( 134244 ) on Monday April 11, 2005 @09:36PM (#12207693)
    Lately, I'm having issues with my RAID. Specifically, closed source drivers for my RAID card that only support Red Hat 9. So I've decided to Ebay the card, and try to figure out how to turn 4 SATA drives into a software driven RAID 5 setup. Yes, I know I'll lose all the data, and I'm not worried about it. Finding a 4 port (or more) SATA controller card, that's well supported under Linux, has been difficult. Everyone wants to slap on their own RAID chip and charge you another 100$ for the pleasure.

    But can't you just use your raid card as a SATA card, and ignore the raid functionality? Why do you absolutely need it to be non-RAID? I'm sorry, but I'm having real trouble understanding what's the difficulty here...

  • This is not software raid, it is a SATA-RAID card, so I am not exactly answering your question. But this card has worked well for me, and it performs well too.

    # hdparm -tT /dev/sda
    /dev/sda:
     Timing cached reads:        2052 MB in 2.00 seconds = 1026.16 MB/sec
     Timing buffered disk reads:  380 MB in 3.01 seconds =  126.27 MB/sec
  • 3ware, 3ware 3ware. (Score:5, Informative)

    by zsazsa ( 141679 ) on Monday April 11, 2005 @09:56PM (#12207843) Homepage
    The best option for real hardware SATA (or IDE) RAID in Linux is 3ware [3ware.com], bar none. Their drivers have been in the official Linux kernel since the early 2.4 days, and they just work. Highly recommended.

    Why real hardware RAID? Say, for example, your boot drive goes out in a software RAID configuration. Your system is suddenly unable to boot, requiring manual intervention for a rebuild. With hardware RAID, the BIOS built into the card handles things smoothly and your system can boot without a problem.

    • Of all the PC RAID card manufacturers, 3Ware has the best reputation. However, you cannot boot from one drive in a 2-drive mirror. If for some reason you don't have a working 3Ware card, you cannot get your data. It is lost.

      If you use 3Ware cards, keep one or two spare cards.
    • Bingo.

      This month's Linux Magazine has a review of a four port 3ware hardware RAID5 controller that is (duh) supported under Linux. They gave it 5/5 Penguins.

      Now the card is $440, which may be more than you are willing to spend, but that would solve your problem.

    • Say, for example, your boot drive goes out in a software RAID configuration.

      Hmm, I've never really had that problem with the bootable mirrored software RAIDs that I've set up.
      There was a HOWTO about bootable software RAID somewhere.... but it's what I used in the following.

      I had a lab server on a remote site set up with two mirrored drives and the BIOS set to boot the first drive...then the second. That way, if one died, as they are a mirrored pair, everything still reboots fine. md detects the dud/missing
      • I had a lab server on a remote site set up with two mirrored drives and the BIOS set to boot the first drive...then the second. That way, if one died, as they are a mirrored pair, everything still reboots fine.

        I don't know if you've actually battle-tested this, but you'll almost certainly find it won't work because /boot (or wherever your kernel and initrd are) can't be on a RAID partition.

        Unless you're "manually" syncing up two copies of /boot onto each device, of course, but that's a rather ugly hack j

        • I don't know if you've actually battle-tested this, but you'll almost certainly find it won't work because /boot (or wherever your kernel and initrd are) can't be on a RAID partition.


          Sure you can, since grub will boot off a raid1 partition easily. I do it on all of my remote servers, separate raid "array" for /boot, /, and a backup OS array just in case.
          As long as the kernel has support for raid built in (or you have the modules in an initrd), you'll be fine with a RAID1 /boot with grub.
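
          The one extra step worth spelling out is installing the boot loader on both halves of the mirror, so either disk can boot on its own. With GRUB legacy that is roughly as follows (assuming the mirror members are /dev/sda and /dev/sdb, with /boot on the first partition of each):

          grub> device (hd0) /dev/sda
          grub> root (hd0,0)
          grub> setup (hd0)
          grub> device (hd0) /dev/sdb
          grub> root (hd0,0)
          grub> setup (hd0)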
    • The 3ware cards are well-supported, but they're SLOW. I have an older generation 8500, one that is supposedly 'optimized' for RAID-5. When under a heavy write load, the machine essentially grinds to a near-halt, making interactive use of the system very painful. Just abysmal performance. RAID-10 was reasonably snappy, but no speed demon. They make big speed claims, but they just don't deliver on them, in my experience.

      It works great, however, in JBOD mode (Just a Bunch of Disks), running software RAID
      • Either you are expecting too much, or there is something wrong with your setup. I haven't done a lot of direct comparison of IDE software RAID, so I won't speak to that. Obviously it's not SCSI, but if you need a lot of capacity for a reasonable price, SCSI blows.
        I have a 7810 running 4x250G in the XP2600 box I'm typing this on. The card is in a normal pci slot. The fs of the single 702G (usable) partition is xfs (with su and sw set properly). I'm running kernel 2.6.11 w/ cfq io scheduling, udev, and a ge
        • I don't think there's a whole lot to get wrong here. I was running the most current firmware and most current utilities, and I manually patched the most recent driver into the 2.4 kernel. (at the time, the 2.6 kernel wasn't supported by the monitoring utilities, so I stayed on 2.4). I was probably using ReiserFS, though I don't remember for sure. I didn't start using XFS until later.

          All I had to do to make interactive use of the machine exceedingly painful was to start dumping a big file from my Window
    • FUD (Score:4, Insightful)

      by adolf ( 21054 ) * <flodadolf@gmail.com> on Tuesday April 12, 2005 @06:59AM (#12210345) Journal
      On my systems, I have a software RAID-1 "boot drive."

      If one drive in the pair fails, things keep ticking along smoothly. They're really just identical partitions with identical data on different disks.

      LILO merrily writes boot code to the array without episode. Meanwhile, the machine's BIOS is happy to boot from disks other than primary-master, all by itself.

      I've booted the system after randomly unplugging devices. It works just fine.

      Why do all of you 3ware goons think that the world wants to buy hardware which offers no clear advantage over having no hardware at all? (As if I want to add -more- potential points of failure to my systems . . .)

      • Sorry, I wasn't trying to spread FUD. It's good to know that newer BIOSes can cope with booting from drives other than primary master. The only problem I can see is if the drive first in the boot order is in a failure state and the motherboard's BIOS isn't aware and tries to boot from it anyway. This may be something people can live with to save the $400 on a 3ware card.
      • Re:FUD (Score:5, Insightful)

        by GoRK ( 10018 ) on Tuesday April 12, 2005 @12:40PM (#12213239) Homepage Journal
        The ol' software raid 1 boot trick depends highly on the behavior of your BIOS under a failed drive condition. This is not the same thing as you get when you unplug a drive. Some BIOS may boot fine; some may not boot from the 2nd hard drive if the first is still attached and failing. It may also depend on how your drive has failed. If the drive electronics are failed and shorting the wrong pins on your IDE controller, then you may not get past the drive detection code in the BIOS at all.

        This is simply one advantage to using a real hardware raid card like the 3ware. There are plenty of other reasons too: Does your chipset/hardware support hot swapping? If you use SATA, does it support command queueing? Do your drives? How much cache does it have? Does it have cache? Can it tolerate all types of hardware failure? Does it have *ahem* 16 ports with individual controllers for each drive? It's not like the BIOS/IDE chipset makers write out in their specs how their hardware performs under drive failure conditions, so you have the overhead of testing each configuration to make sure it works properly before you have to rely on it. It's not so much a performance difference between hardware and software raid (until RAID-5 anyway) but an issue with how the hardware will respond when something goes wrong, which is one of the primary reasons for using anything above RAID-0 anyway.

        Yes, running a 3ware card costs more. There are times when that $400 costs a lot less than the time spent configuring and testing an alternative software-only implementation. There are times when it doesn't and spending another $400 doesn't make a lot of sense. I have run both setups. I have machines deployed with both IDE software-only RAID arrays, IDE 3ware arrays, SCSI software RAID5's, SCSI Adaptec RAID's etc.. it's all application specific. There's no reason to call somebody a goon for recommending 3ware hardware. It's really good hardware; maybe you should try it sometime.
    • I'm sorry, but this is commonly repeated and wrong.

      3ware's SATA implementation is ugly; it's effectively a bridge from their PATA one, so it doesn't support NCQ.

      Personally I use Areca [areca.com.tw] cards - a 16-way card that can run RAID '6' (RAID 5 but with two parity discs) and a hot spare, and has its own Ethernet port for remote access to the firmware, is rather good. Oh, and it has (unofficial) kernel sources suitable for 2.3 - 2.6.

      Very good.

      • The new 9xxx series 3ware cards are much better than the old series and support command queueing and all that jazz that you get from a native SATA implementation. They're not even much more expensive either. The driver has finally made it into 2.6 too, but you gotta patch it into 2.4. You can get really really good performance on them for the cost - I run one with 4 WD Raptor 10K RPM 72GB drives and don't get any of the same kind of slowdowns I used to get on 3ware's older PATA cards, especially during heav
      • A RAID controller with its own ethernet port and protocol stack all the way up to a telnet service? That is just stupid.

        The right way to do this is either just get one of these [realweasel.com] or one of the many more expensive/featureful alternatives. Or better yet, just get a real server that has a real serial console (or if you run windows and/or have more money than brains, get some KVM over IP thing)...

        A RAID controller with its own telnet service for remote access to the firmware... *shudder*...
        • Umm. Riiight. You are aware that the two options you suggested both require restarting the machine to access the firmware of the RAID card, however "cool" you think it is?

          Restarting = downtime = bad. No, really.

          The Areca card is for proper environments where you just don't have the option to take the system down to pull a disc or three from the array. Hotswap discs, hot-rebuilding of RAID arrays, etc. It's a proper RAID card.

          • Umm. Riiight. You are aware that the two options you suggested both require restarting the machine to access the firmware of the RAID card, however "cool" you think it is?

            Restarting = downtime = bad. No, really.

            The Areca card is for proper environments where you just don't have the option to take the system down to pull a disc or three from the array. Hotswap discs, hot-rebuilding of RAID arrays, etc. It's a proper RAID card.

            No... "proper" RAID cards have software that you can use to administer and

    • I agree. 3ware cards are great. They have excellent linux support and an excellent reputation. I use them under linux and windows.

      Two areas you may run into problems with:

      1) The cards require a lot of power and riser cards can be troublesome.

      2) They aren't the fastest cards on the market. They do use a custom PATA=>SATA bridge, even on the 9500 cards. That being said, they are still blazingly fast and reliable.

      I'd recommend getting the 9500 boards. The 8500's didn't support onboard cache and there is
  • by porcupine8 ( 816071 ) on Monday April 11, 2005 @10:00PM (#12207880) Journal
    You know you've seen one too many memes on LiveJournal and other such places when you just read that "What Kind of Software RAID Are You?" and think somebody's posted a link to a quiz.

  • HighPoint RocketRAID cards do not function well when used as the boot device for Windows XP. This was verified by HighPoint technical support. We did not try them under Linux. But read my comment above about timing issues.
  • by zymurgy_cat ( 627260 ) on Monday April 11, 2005 @10:13PM (#12207974) Homepage
    Why not a rocketraid 1640 [newegg.com]? They support 4 SATA drives and support (so they claim) Linux. I've run a Highpoint card under FreeBSD with no problems whatsoever...well, the management software won't work, but, hey, I can check things with the command line....
    • Why not a rocketraid 1640?

      I bought a RocketRaid 100 [highpoint-tech.com]. While I had no problem getting it to work under Windows, I was unable to get it to work under any of a number of flavors of Linux. Of course, my ineptitude at compiling a patched linux kernel may have led to my difficulties.

      I wound up using the card as a plain old IDE interface and then built software RAID on the drives connected to it. In retrospect, I should've bought a 3ware card, despite its significantly higher cost because it would've saved

  • by happymedium ( 861907 ) on Monday April 11, 2005 @10:30PM (#12208088)
    A righteous and just one against those godless peer-to-peer commies [slashdot.org], thank you very much.
  • I took some notes when I was setting up my home 4x250GB RAID 5 server. I found there to be three categories of RAID solutions. Might be helpful for you in deciding. Copied them below and added a few extra comments.

    It really depends on what you are using the server for. I ended up going for the pure Software RAID option. It's for home and I'm cheap. If you're not cheap or it is for a work server, I'd stick with the pure Hardware solutions.
    ________

    Hardware RAID:
    The expensive Adaptec, 3ware, etc SCSI cards found in most servers.

    Pro - Offloads XOR calculations from the CPU to internal processor.
    Pro - No manual intervention required in case of a raid failure.
    Con - Expensive.
    Con - Third Party and/or closed source drivers often required.

    Semi-Hardware RAID:
    These are typically the SATA RAID controllers that are built into motherboards, but the category also includes cheapo add-on cards. Generally, if it costs less than 150 bucks, it's not full hardware RAID. I believe all of the RocketRAID cards fall into this category.

    Pro - No manual intervention in case of a disk failure.
    Pro - Cheap.
    Con - Minimal or No CPU offloading.
    Con - Third Party and/or closed source drivers often required.

    Software RAID:
    Use Linux and plain old SATA/PATA controllers to handle all of your RAID needs.

    Pro - Very cheap.
    Pro - No worry about driver incompatibility or closed source drivers.
    Con - No CPU offloading. You essentially trade CPU power for disk speed/redundancy... and it's a significant trade.
    Con - Manual intervention required in case of disk failure (but see the note below these lists).
    Con - PATA Only. Must be one drive per channel! NO SLAVES! Apparently data loss can occur on both drives in the chain if one goes bad. http://www.tldp.org/HOWTO/Software-RAID-HOWTO-4.html#ss4.1
    Performance is also hurt in a Master/Slave combo.

    ________
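
    One way to soften the "manual intervention" con above: mdadm can hold a hot spare in the array, so a rebuild kicks off by itself when a member dies. A rough sketch (device names made up):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1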

    • If I might add a comment to your last point: IDE drives take a performance hit when you are working with master and slave devices, period. This is not a tradeoff of software raid but just a general issue with IDE. You can run software raid across master and slave devices just fine; I do right now, both at home and on 168 linux boxes where I work, and we hold the service those boxes provide to 5 9's. The only issue I have ever had with software raid has been throughput. You just can't beat a nice fiberchanne
    • Software RAID:
      Use Linux and plain old SATA/PATA controllers to handle all of your RAID needs. ...

      Con - PATA Only. Must be one drive per channel! NO SLAVES! Apparently data loss can occur on both drives in the chain if one goes bad. http://www.tldp.org/HOWTO/Software-RAID-HOWTO-4.html#ss4.1
      Performance is also hurt in a Master/Slave combo.


      Um...why PATA only? I've done software raid using loopback devices. I can't see why SATA drives would be any less likely to work.
    • > Software RAID:
      > Con - Manual intervetion required in case of disk failure.

      You can get around this for some failure modes, as long as your boot partition is always raid1. I do this at home, /boot is raid1 across 5 disks, the rest is raid5. Read performance for a 5 disk raid1 would probably be fantastic :)

      The success of this depends on the disk either failing so badly that the system can't see it anymore and so boots off another disk, or that the part of the failed disk that holds /boot is still rea
    • I don't know what kind of kernel you run, but my 2.6.11 supports md with SATA and PATA mixed together just fine:

      Personalities : [raid0] [raid1] [raid5] [multipath]
      md0 : active raid1 sda1[1] hdc1[0]
            120060736 blocks [2/2] [UU]

      sda is a SATA drive, hdc is a PATA drive ...
    • Con - No CPU offloading. You essentially trade CPU power for Disk speed/redundancy... and its a significant trade.

      Maybe if you're using a 486.

      The CPU overhead of software RAID 5 is insignificant on any remotely modern machine. Even "ancient" ca. 500MHz P3s have checksumming speeds over 1GB/sec

    • One thing to note: slower writes, faster reads.

      And one thing I've been wondering about, obviously you would keep the drives on different IDE channels if possible (hda, and hdc here).

      If you also have non-RAID drives on hdb, and a CD-ROM on hdd... that should not overly influence the data speeds of the RAID drives except when there is actual data transfer on hdc/hdd, or would a machine automatically split the available pipe upon having two IDE devices (master/slave) on a given IDE channel?
    • "Con - PATA Only. Must be one drive per channel!"

      Woah! Wait, what?

      SATA is one drive per channel.
  • by OmgTEHMATRICKS ( 836103 ) on Monday April 11, 2005 @10:55PM (#12208245) Journal
    The general consensus among linux kernel engineers and software RAID users is:

    1. As long as the onboard SATA chip is well supported on your linux kernel, use the onboard chip.

    2. Don't worry about the "hardware RAID" built into the motherboard. You don't have to use it. In fact, most people bypass it.

    3. Use the non-BIOS SATA driver for your motherboard. Some motherboards have two different chips. Mine (an Epox 8RDA+Pro nForce Ultra2/400) uses both the common Silicon Image SIL3114 which supports 4 SATA drives and an additional 2 SATA drives provided by the onboard nForce 2 Ultra Gigabit MCP chipset. Quite nice for RAID and I still have normal PATA IDE drives 0 - 3.

    4. Quite often the SATA RAID hardware only supports RAID 0, 1, and 10 (or 01 depending). If you're looking for RAID 5 then you'll have to buy a more expensive outboard solution. The problem with outboard solutions is that they will eat into your PCI bandwidth. If you will be using PCI-X then you will probably also be paying significantly more for your outboard solution. Most people have a ton of CPU lying around, so handing off the I/O doesn't really buy you that much.

    5. When it comes down to it you might as well just use software RAID because you have more control over it. You can use the onboard SATA controllers which allow you to take advantage of the increased on-motherboard bandwidth as well as having a significantly less expensive solution.

    6. Another advantage to using Linux software RAID is that you don't have to learn a new RAID management system every time you upgrade your machine and controller. You can also connect to your machine remotely and manage your RAID system through a firewall. Sometimes you can do that with your hardware RAID system and sometimes you need to manage it from the BIOS itself.

    7. Once you get comfortable with software RAID you can experiment with mixing and matching various I/O systems underneath it. One of the things I'd like to play with would be using software RAID with Firewire 800 external drives in a pseudo-SCSI arrangement.

    8. The LVM2 system doesn't need software RAID, but it works very nicely with it nonetheless and gives you snapshot support etc. (see the sketch after this list).

    9. Personally, I'm going for RAID 10 (striped mirroring) because drives have gotten very inexpensive and I don't mind burning a few more to get higher I/O rates. Remember, if you go with a mixture of RAID 0 and 1 then you want striping over mirroring -- that way if you have a single drive failure the array keeps going.

    Have fun and don't use RAID instead of backups. Backups save the stuff that you deleted intentionally but need to recover.
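
    To make point 8 concrete, here's a minimal LVM2-on-md sketch (the volume group and logical volume names are made up):

    pvcreate /dev/md0                                        # turn the md array into an LVM physical volume
    vgcreate vg_raid /dev/md0                                # build a volume group on top of it
    lvcreate -L 100G -n storage vg_raid                      # carve out a logical volume
    lvcreate -s -L 5G -n storage_snap /dev/vg_raid/storage   # snapshot of that volume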

    • I like striping on my desktop. If one of my drives decides to fail, I'm hoping to get plenty of warning, and besides, I have the essential stuff backed up to gmailfs.

      I'm doing this because the mirroring is a performance AND a disk space hit, and is only worth it if I am planning for the disk to fail. With striping, I lose $20 for buying two 80 gig drives instead of one 160 gig drive, and I get twice the speed.

      The annoying part is that I have to redo stuff if I want to add to the array. That's the one a
  • I learned about the Promise SATA150 TX4 from the aaltonen forums [aaltonen.us]. The card is just a disk controller and support is in the 2.6 kernel. I'm using it in a software raid5 configuration and haven't had any problems. It's about $75 at newegg.
  • Nothing fancy... just a RAID-1 set, using Linux md.

    Since a desktop/workstation machine does mostly reads anyway, I am getting the benefit of striped reads. I don't really care that my writes incur a slight penalty.

    Granted, hardware RAID would use less CPU time... but hardware RAID is tied to a particular card. What happens if you move your disks to a new machine? You have to move the RAID card. If you go with an integrated RAID solution on your motherboard, that's tough.

    With Linux md RAID, that is
  • Failure question (Score:2, Interesting)

    by wfeick ( 591200 )

    I've considered setting up software raid on my Linux server, but I haven't found any doc yet about what happens in the event of an unexpected crash or poweroff part way through writing a RAID-5 stripe.

    Suppose I have 4+1 disks in a RAID-5 configuration, and during a write to a stripe of the disk only two of the disks are written to before the system crashes. This leaves me with 2 disks with new content, 2 disks with old content, and a useless parity.

    I found a page [redhat.com] at RedHat that indicates that as of 200

  • OK, I'd like to piggyback on this question.

    I have a Promise FastTrak100 Lite controller built into my MB, and I've been using it for firmware RAID for about three years now. It worked fine in Windows (using the Promise SCSI emulation drivers) and in Linux 2.4 (via /dev/ataraid/d0pN). But Linux 2.6 can't see it. I've done some reading and from what I can tell it doesn't support the /dev/ataraid tree anymore.

    Is there any way to get a 2.6 kernel to see the array while leaving the data intact?
  • I wonder if someone has done RAID mixing local hard drives and network block devices like GNBD [redhat.com] or iSCSI? Should be OK at gigabit speed, right?

    I know about DRBD [drbd.org], but that's more for HA pairs and it cannot sync drives in the background while mounted.

    /graf0z.

  • Some tips (Score:5, Informative)

    by obi ( 118631 ) on Tuesday April 12, 2005 @07:59AM (#12210559)
    hardware RAID has these advantages:
    1) offloads operations to the controller, so eats less CPU/IO bandwidth.
    2) can have battery backed cache
    3) often looks like "just a scsi controller" to linux and the boot loaders, so booting from e.g. a RAID5 set is often easier.

    software RAID has these advantages
    1) is cheaper
    2) CPU time lost makes hardly any difference
    3) has well-tested and supported tools to manage your raid setup. (imagine if you could only set up your raid sets by rebooting and entering the raid bios)
    4) disk-layout is non-proprietary (controller died? don't have the same brand lying around? manufacturer left the market? no problem!) - so all-around more flexibility.

    Look here for properly supported sata disk controllers:
    http://linux.yyz.us/sata/

    Some of these cards come with BIOS smarts that provide you with software raid, which offers you the advantage of point 3) of hardware raid, ie: bios and boot loader support for your raid.

    However, this does mean that the on-disk layout has to be recognized in linux, so linux can make sense of it and set up the raid sets properly. In linux 2.4 there were some drivers that did that themselves; for linux 2.6 there's now a little userspace program that recognizes a whole bunch of on-disk layouts, and sets them up using the device-mapper facility (part of LVM2).

    The advantage of this is that you can use the same well-tested and -supported linux drivers mentioned on http://linux.yyz.us/sata/ , but still use the (bios) facilities provided by the hardware. Another advantage is that this program will probably be used by all ATARAID ("mostly-software-raid") devices on linux, so it is, or will be well-tested and -supported in itself.

    You can find this program, called DMRAID here:
    http://people.redhat.com/~heinzm/sw/dmraid/

    So if you decide to go the SW-RAID way, think and decide if you want the advantage of dmraid. I haven't tried this myself yet, and the only aspect I'm unsure of is the management aspect of it (like with HW-RAID drivers) - DMRAID doesn't use MDADM, so how can you properly monitor, hot-add, ... your raid sets? I don't know, you'll have to investigate this yourself.
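
    For reference, basic dmraid usage is only a couple of commands (a sketch; check the docs that ship with your version):

    dmraid -r          # list block devices carrying BIOS/"fakeraid" metadata
    dmraid -ay         # activate all discovered sets through device-mapper
    ls /dev/mapper     # the assembled array devices show up here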

    MDADM itself isn't going away any time soon either, if I understand correctly. (And even if it does, it's probably very likely that they'll make DMRAID understand the MDADM on-disk layout to provide an upgrade path.)

    If however you decide to go the HW-RAID way, make sure you get a reliable and reputable manufacturer - with open source drivers (!), preferably with a known on-disk layout, and be prepared to spend money. I've heard a lot about 3ware, but I have no direct experience with them myself, so I can't vouch for them.

  • by Intron ( 870560 ) on Tuesday April 12, 2005 @09:45AM (#12211205)
    Why aren't we discussing the real issue? Should we allow "ebay" to be used as a verb?
  • by gseidman ( 97 ) <(gss+sdot) (at) (anthropohedron.net)> on Tuesday April 12, 2005 @11:29AM (#12212170)
    You can't just ask what RAID solution to use; you need to specify your optimization criteria. For example, if you are going for inexpensive, high capacity, and good redundancy but don't care about hotswapping, you can go with a reasonably cheap PATA or SATA hardware RAID. If you are willing to skip high capacity you can get an external SCSI SCA rack, an inexpensive SCSI card, and a bunch of small SCSI SCA drives. If you want high capacity and high performance without too much cost, get a bunch of large PATA drives, a similar number of FireWire or USB 2.0 (or both) enclosures, as many FireWire or USB 2.0 interface cards as necessary, and do it that way (which also gives you hotswap). If you are willing to spend lots and lots of money, get a Network Appliance server.

    Personally, I'm not worried about performance beyond being able to play full-motion video. I have a PPC 604 180MHz from 1997 with a SCSI card and a RAID rack. 8x18GB at RAID 5 gives me 118GB or so of redundant storage, and I serve it over NFS to my other machines. Just for kicks, I have it going through a cryptoloop, too (LVM on cryptoloop on Linux RAID5 on SCSI). The initial cost was low (the drives were $15 each, the rack was around $100 on eBay, the trays were given to me, the SCSI card was under $40 on eBay, the 100Mbit ethernet card was about $20, and the computer had been a doorstop until I put Linux on it). The ongoing (electricity and cooling) costs are a little high (they are 10K drives), but that's life. I can play an MPEG or AVI from two machines on the network at once without hiccups, so I'm happy.

    If I were going to build a RAID server today, I'd probably buy a Mac Mini, four large PATA drives, and four FireWire enclosures. Assuming 160GB drives, I'd have 320GB of RAID5 storage available over NFS (with a spare drive to swap in) for an investment of under $1200, and I can vary that cost with the size and number of drives. Yes, I'd be daisy-chaining FireWire, which means that each drive has only a portion of the total bandwidth. Then again, my network card will only manage 100MBit, so 3/4 of the FireWire bandwidth will be of minimal use anyway (except for reducing latency due to readahead and such, of course).
  • I'm not the only one that tested, and concluded, that software RAID outperformed a 3Ware card.

    The problem you run into is bandwidth.

    Picture four drives on a single PCI controller running in a software RAID5. For every block written, four commands must be issued to the PCI card, one for each drive.

    This works great w/ a hardware RAID controller, because they emulate a single SCSI drive, thus only one write command.

    Even so: we saw better throughput via software raid ( tested via Bonnie++ on a knoppix 'tora
  • I have a fairly extensive background with RAID setups, but not within Linux nor on an x86 platform, so if someone would fill in some blanks, I would be very appreciative.

    1) The original poster is looking to set up a 4 drive array, RAID-5 preferred, and is looking for a 4 port SATA adapter. My recommendation would be to get two adapters with two drives each to provide greater redundancy. I am guessing this can be done with stock PCI SATA controllers. Would a configuration such as this have a negative performance
  • On Usenet I posted [google.ca] a detailed description of how I built a 2.8TB RAID storage array for under $4100.
  • At work we've been testing RAID 0 (stripe) setups for performance on a number cruncher machine. Hardware raid isn't always the best.

    Our first setup was 2x15kRPM U320 SCSI drives on an LSI MegaRAID controller. Apparently the 2.6 kernel driver has serious issues, because we can't get read performance over 50MB/s. This is slower than reads off a single drive on a vanilla SCSI controller.

    Our second setup was the same two drives on an LSI U320 SCSI HBA. The HBA has a 'simple' striped raid via firmware. This wo
