Data Storage Hardware

Which RAID for a Personal Fileserver?

Dredd2Kad asks: "I'm tired of HD failures. I've suffered through a few of them. Even with backups, they are still a pain to recover from. I've got all fairly inexpensive but reliable hardware picked out, but I'm just not sure which RAID level to implement. My goals are to build a file server that can live through a drive failure with no loss of data, and will be easy to rebuild. Ideally, in the event of a failure, I'd just like to remove the bad hard drive, install a new one, and be done with it. Is this possible? How many drives do I need to get this done: 2, 4, or 5? What size should they be? I know when you implement RAID, your usable drive space is N% of the total drive space depending on the RAID level."
This discussion has been archived. No new comments can be posted.

  • RAID 1 (Score:5, Informative)

    by Oculus Habent ( 562837 ) * <oculus.habent@gm ... Nom minus author> on Wednesday June 16, 2004 @03:21PM (#9444757) Journal
    For personal use, a two-drive RAID 1 is probably the easiest way to go, and involves the fewest drives, but loses the most space (half). RAID 5 is the standard, but the hardware is more expensive and it involves at least one additional drive.

    For simplicity and low expense, even though you lose a full drive worth of capacity, go with RAID 1.

    You might want to read The Tech Report's recent article [techreport.com] mentioned on Slashdot [slashdot.org] if you haven't already.
    • Re:RAID 1 (Score:5, Informative)

      by Anonymous Coward on Wednesday June 16, 2004 @03:30PM (#9444887)
      You should really try out some of the SATA RAID solutions. They offer the best bang for your buck. I know that the next time I have a few hundred dollars lying around I'm going to go with a 1 TB RAID 5 with some WD SATA 250s [westerndigital.com]. Also, Supermicro [supermicro.com] makes a very nice 5 drive chassis [supermicro.com] that only takes up 3 - 5.25" bays. This is the ideal home setup in my mind.
      • What I really want to know is what sort of performance you get from software raid solutions. After all, the concept of being able to get redundancy without forking money over for a raid card (even from ebay, they're expensive), is rather tempting.
        • Re:Software raid (Score:3, Informative)

          by Linux_ho ( 205887 )

          What I really want to know is what sort of performance you get from software raid solutions. After all, the concept of being able to get redundancy without forking money over for a raid card (even from ebay, they're expensive), is rather tempting.

          Depends what kind of RAID you're doing. If it's just a mirror, writes are slowed slightly, and read performance is significantly improved over a single drive. Don't even bother trying to do RAID 5 in software. Buy a 3ware Escalade controller or a SCSI RAID controller if you need RAID 5.

          • Re:Software raid (Score:5, Informative)

            by Ed Random ( 27877 ) on Wednesday June 16, 2004 @04:22PM (#9445501) Homepage Journal
            Depends what kind of RAID you're doing. If it's just a mirror, writes are slowed slightly, and read performance is significantly improved over a single drive. Don't even bother trying to do RAID 5 in software. Buy a 3ware Escalade controller or a SCSI RAID controller if you need RAID 5. Keep in mind that many of the cheaper RAID IDE cards (Promise, for one) do much of their work in software too, and often perform about as well or even worse than straight software RAID.

            I've run software RAID-5 on Linux for several years on two of my home fileservers.

            The only problems I ever encountered were hardware failures (Promise *ack* *spit* PCI IDE cards) and one drive failure. Performance is not really an issue for home use; I can easily saturate my 100Mbps network card.

            My Fileserver: AMD Duron 1300MHz, 768MB RAM

            /dev/md0 441G 339G 93G 79% /home

            This device was built from 4x 160GB 7200rpm drives as SW RAID-5 for online storage (including all of my digital photos, and my collection of CDs ripped to MP3).

            For backup I have an old Celeron 433, 512MB RAM box with 4x 120GB 5400rpm SW RAID-5

            The main fileserver is rsynced to the backup server once a week. CPU on the backup server is a bottleneck; the Celeron is a bit underpowered for rsync, but it works ;)

            My $0.02:
            - Software RAID is perfectly usable, especially for typical home use. Performance is adequate.
            - With RAID-5 you "lose" only one disk to parity so it is quite cheap to build
            - Yes, I'd really like a 3Ware Escalade but if the card fails I need to get a new one pronto; software RAID sets can be migrated to most PCs.
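            For anyone curious, setting up an array like mine is only a couple of commands these days with mdadm; a minimal sketch, with device names that are just examples (substitute your own partitions):

            $ mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1
            $ mke2fs -j /dev/md0
            $ mount /dev/md0 /home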

          • Re:Software raid (Score:5, Interesting)

            by Oestergaard ( 3005 ) on Wednesday June 16, 2004 @04:37PM (#9445640) Homepage
            If it's just a mirror, writes are slowed slightly

            Hardware controllers with battery-backed RAM (note: not all controllers have this) will have an edge over software solutions on ALL writes - no matter which RAID level you use.

            Don't even bother trying to do RAID 5 in software

            SW RAID is usually a lot faster than HW RAID solutions once you factor out the battery-backed RAM part. Any HW RAID controller, battery-backed memory or not, will lose big-time to SW RAID on even moderately fast CPUs (like 500MHz P-IIIs), especially on RAID-5, which is compute-intensive, and even more on RAID-6, which is also compute-intensive but not purely XOR-based.

            Modern HW RAID controllers have reasonably fast CPUs with XOR accelerators built in - therefore they can do RAID-5 as fast as the pure SW solution. But this is not the case with older controllers.

            I know of people who use 3ware cards for large RAID-5 servers, but only use the 3ware cards as "dumb" IDE controllers, and leave the RAID-5 handling to SW-RAID. The reason? Their benchmarks indicate that this is significantly faster.

            And when you think about it, it makes sense. Nobody puts a GHz processor on a RAID controller. Even a slow-by-today's-standards P-III is able to XOR more than a gigabyte of data per second - much, much more than anything you put through most file servers out there.

            So, the "HW RAID is faster than SW RAID" claim is true in one scenario only: when you have write-intensive workloads and a HW RAID controller with battery-backed cache.

            In *all* other cases, SW RAID will be a win, performance wise.

            For a personal file server, I wouldn't hesitate to run RAID-5 in plain software. It's as fast or faster than any HW RAID controller in the sub-$3K price range, it's reliable, and the flexibility beats the heck out of any HW based solution out there (mixing IDE/SCSI, allowing a cryptographic layer between the RAID layer and the physical disks, etc. etc...)
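            If you want to see what your own CPU can do, the Linux md driver benchmarks its XOR routines when it loads and logs the winner, so something like this will show the measured rates (output varies by kernel and CPU):

            $ dmesg | grep raid5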

          • Re:Software raid (Score:4, Insightful)

            by mnemoth_54 ( 723420 ) on Wednesday June 16, 2004 @07:04PM (#9447059)
            IMHO, the real value in SW RAID is the hardware independence.

            If your HW RAID controller dies, you have to get another one of the same model, and hope that you can re-import your config w/o losing all your data. If you're running SW RAID and your SCSI/IDE controller dies, you can replace it w/ whatever is cheap/available at the time. As long as the failure itself didn't bork your data, you shouldn't have to do much, if anything, to see your data again.

            If you can afford to get the top of the line SCSI RAID controller from a good vendor it's probably the better option, but if cost is an issue, IDE SW RAID is the only way to go.
        • Re:Software raid (Score:4, Insightful)

          by ryanwright ( 450832 ) on Wednesday June 16, 2004 @03:44PM (#9445087)
          Software raid is plenty fast for a personal fileserver. It's not like you'll have a hundred users on it at a time. Unless you have an ancient CPU, you'll be fine.

          • Re:Software raid (Score:5, Informative)

            by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Wednesday June 16, 2004 @03:54PM (#9445223) Homepage Journal
            Actually, I've used it quite successfully under Linux for web, MySQL, and mail servers. The mail server is the most abused server, and it has no speed problems. We have 3 IDE drives as a RAID5 under Linux (md device). That server has been known to pass over 100k Emails per day. Sure, it's mostly spam and viruses coming in, but they're still received, scanned, and everything but the high scoring spam and viruses are delivered.

            So, several hundred users using IMAP and POP3 to collect mail, SMTP to send mail, and the 100k or so incoming messages do add up to a lot of work, and it handles it flawlessly.

            $ cat /proc/mdstat
            Personalities : [linear] [raid0] [raid1] [raid5] [multipath]
            read_ahead 1024 sectors
            md0 : active raid5 hdc2[2] hdb2[1] hda2[0]
            351100416 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

             unused devices: <none>

            $ df -h
            Filesystem Size Used Avail Use% Mounted on
            /dev/md0 330G 11G 302G 4% /
            /dev/hda1 122M 8.0M 108M 7% /boot
            none 499M 0 499M 0% /dev/shm
          • Re:Software raid (Score:3, Interesting)

            by brandon ( 16150 )
            I've run software RAID 5 for some time, and in my experience it has been more stable than running IDE RAID with Promise IDE controllers. I have had 2 Promise cards go bad, but 2 systems running software RAID 5 under Debian have worked extremely well. Even hot-swapping works well, and I've had to use it several times.

            Performance with an IDE raid controller is pathetic. You can't get much more than 22MB/s. I can hit 68MB/s reading and 31MB/s writing on one system with 4 7200rpm, 8MB-cache IDE drives. (This system has
        • Re:Software raid (Score:4, Informative)

          by H310iSe ( 249662 ) on Wednesday June 16, 2004 @05:39PM (#9446338)
          The problem w/ Software RAID is it depends on the OS; if your OS fails you can lose your data - I've confirmed this w/ Windows Software RAID at least. It's a real, real bitch to recover from if you have any OS problems (and no matter what anyone tells you, Signed Disks in Windows are a horror story waiting to jump out at you).

          As for forking $ for RAID cards, I've had really good experiences w/ the MegaRaid cards from LSI Logic [lsilogic.com] - really, really good tech support and exceptionally inexpensive cards.
        • Re:Software raid (Score:4, Insightful)

          by puke76 ( 775195 ) on Wednesday June 16, 2004 @08:54PM (#9447905) Homepage
          No one uses software RAID for performance, although the performance is good compared with the cheap 1+0 cards available.

          The real advantage of software over hardware RAID is that you don't need to keep a spare RAID card around. With hardware RAID, when your RAID card fails you'll need exactly the same make & model card to read your data.

          With Linux software RAID, you can read the drive set on any system with the raid modules.
    • Re:RAID 1 (Score:5, Interesting)

      by arth1 ( 260657 ) on Wednesday June 16, 2004 @03:37PM (#9445013) Homepage Journal
      For a file server, I'd use the combination of RAID 1 and striping known as RAID 1+0 or RAID 10.
      The benefits are that you get the same protection as with RAID 1, but lose the speed penalty, all without needing special hardware or spare CPU power for expensive parity calculations.
      With a 4 drive RAID 1+0, you'll get read performance of 2x-4x a single drive, while writes will be from 1x-2x. In theory, that is. In reality, if using a RAID PCI card or motherboard solution hooked to the south bridge, you'll most likely max out the read speed.

      Anyhow, it's a very cheap solution that doesn't tax your CPU too much even if done through software (like with a Highpoint controller), and it does give you peace of mind.

      The worst downside is that you will have to take the system down to change a drive (correct me if I'm wrong, but I've never seen a hot-swappable RAID 1+0 solution), and the performance before you do that will take a substantial hit.

      RAID 4/5 is nice because it doesn't waste a lot of drive space, but it comes at the price of very slow writes, and very high CPU use unless you also get a hardware controller with an onboard CPU.

      Regards,
      --
      *Art
    • Re:RAID 1 (Score:4, Informative)

      by ArsonSmith ( 13997 ) on Wednesday June 16, 2004 @03:40PM (#9445044) Journal
      One thing I did rather than actual RAID 1 is have two partitions, /usr/local and /usr/localmirror. I would use rsync nightly to copy everything from /usr/local to /usr/localmirror, and biweekly I would do the rsync with the --delete flag. This way I would also have a nightly "snapshot"-like file recovery option if need be.

      I had heard that the new LVM for Linux supports snapshots, so I will probably be looking into that soon, but I haven't messed with my file server in over 3 years. It Just Works (TM).
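      For anyone who wants to copy the setup, the routine above is just two rsync invocations (these are my paths; substitute your own):

      $ rsync -a /usr/local/ /usr/localmirror/              # nightly
      $ rsync -a --delete /usr/local/ /usr/localmirror/     # biweekly, also prunes deleted files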

    • Re:RAID 1 (Score:5, Informative)

      by ryanwright ( 450832 ) on Wednesday June 16, 2004 @03:41PM (#9445053)
      The really best way is RAID 1 + a third drive for backups, on another system. If you can afford to pay 3 times the normal cost to store your data, this will virtually guarantee you'll never lose it.

      My fileserver has a mirrored pair of drives in front-mounted, hot-swap bays. I have a third drive on my workstation and I sync to that every time I add significant amounts of data to my server. The mirroring protects against drive failure and the third drive protects against server failure, operator error, filesystem corruption or other problems that can wipe out a RAID array.

      Lastly, the stuff that changes often and is worth the most to me - small documents and other things I create - gets a nightly sync to the server's boot drive and I keep a month's worth of revisions. This lets me "go back in time" to retrieve things if I need to. Considering the relatively small size of this type of material, this doesn't take up a lot of space. I think the whole month's worth of revisions only takes up 10GB or so.

      The hot swap bays let me yank a drive out on my way out of the house if the place catches on fire. Yes, I know I should be storing that third drive at a friend's house, but it's too inconvenient to retrieve it every time I want to backup my array. So a fire may destroy everything if I'm not home or can't safely pull a drive on my way out. I'm comfortable with that.
      • Re:RAID 1 (Score:5, Informative)

        by kfg ( 145172 ) on Wednesday June 16, 2004 @03:44PM (#9445092)
        The really best way is RAID 1 + a third drive for backups, on another system.

        At a different site.

        KFG
      • Re:RAID 1 (Score:5, Informative)

        by ryanwright ( 450832 ) on Wednesday June 16, 2004 @03:46PM (#9445126)
        I need to amend this, in case it wasn't perfectly clear:

        DO NOT RELY ON RAID TO PROTECT YOUR DATA. If you do, you will lose it some day. Raid only protects against hardware failure. There are plenty of other ways you can lose data and one of them will catch up to you eventually.

        If you can't afford to lose it, back it up to another drive on another computer. If you really can't afford to lose it no matter what, store your backup drive with a friend.
        • Re:RAID 1 (Score:5, Interesting)

          by robi2106 ( 464558 ) on Wednesday June 16, 2004 @03:58PM (#9445254) Journal
          No kidding. I woke up one morning, turned on my system, and found my one and only partition on my storage drive (non-RAID) totally gone. WinXP Pro just decided to wipe it clean. I surfed around a while and found a nice Russian (or some other foreign) site that served up a juicy hacked exe for a hard drive recovery app. It did the trick and recovered my data by rebuilding the partition table based on the data (or something like that).

          I was even thinking of buying the app until I surfed to the company's site and found it was >$2K US. Screw that. If it happens again, I may not recover my stuff.

          I didn't have anything critical on there, but it would have been very time consuming to re-rip my CDs again.

          jason
        • Re:RAID 1 (Score:5, Interesting)

          by jsebrech ( 525647 ) on Wednesday June 16, 2004 @04:06PM (#9445338)
          DO NOT RELY ON RAID TO PROTECT YOUR DATA.

          Amen. I have vivid memories of typing rm -rf * in the wrong directory (and that was WITH pwd in my prompt). It took an entire week to duplicate the work lost.

          Combining the rm command and lack of sleep is like combining a loaded gun and your forehead. You can only do it so often before you destroy something valuable.
          • Re:RAID 1 (Score:5, Interesting)

            by bersl2 ( 689221 ) on Wednesday June 16, 2004 @05:57PM (#9446502) Journal
            I've done that once. This is why I've started to touch -- -i in every directory with important data. In case of accidental rm -rf *, you're not fucked. I forget where I learned that trick, but I'm sure it will be a life-saver someday.
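            For those who haven't seen the trick: it just creates a file literally named "-i" in the directory, so an accidental rm -rf * glob-expands it into the argument list and rm picks it up as the interactive flag, prompting before it deletes anything.

            $ touch -- -i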
      • Re:RAID 1 (Score:5, Interesting)

        by tedgyz ( 515156 ) * on Wednesday June 16, 2004 @04:40PM (#9445666) Homepage
        The hot swap bays let me yank a drive out on my way out of the house if the place catches on fire. Yes, I know I should be storing that third drive at a friend's house, but it's too inconvenient to retrieve it every time I want to backup my array. So a fire may destroy everything if I'm not home or can't safely pull a drive on my way out. I'm comfortable with that.

        You can resolve this issue with high-capacity, portable storage. I keep all my most critical stuff (software, licenses, photos, pr0n, etc.) on my 40GB portable drive. Forget those keychain things. The FireLite SmartDisk [smartdisk.com] is a USB 2.0, aluminum-encased laptop drive. It draws power from USB - it even worked on my old USB 1.1 system. They provide a special power cable, in case your old USB ports aren't pushing enough power. I toss the thing in my backpack every day and lug it all over - it has yet to show signs of weakness.

        I totally agree with your configuration. On my Linux server, I've been using Linux (RH7.2) software RAID-1 mirroring for ~3 years without a single issue.
      • Re:RAID 1 (Score:4, Funny)

        by Fjord ( 99230 ) on Wednesday June 16, 2004 @04:52PM (#9445773) Homepage Journal
        Using RSync to do snapshots [mikerubel.org] is a good way to go. With the snapshot structure on the RAID system, you cannot accidentally wipe out data: it'll remain in the past versions.
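        The core of the technique at that link is hard-linked snapshot directories: rotate them with cp -al, then rsync into the newest one. A rough sketch (paths are examples, and you can keep as many rotations as you like):

        $ mv /snap/daily.1 /snap/daily.2
        $ cp -al /snap/daily.0 /snap/daily.1
        $ rsync -a --delete /home/ /snap/daily.0/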
    • Re:RAID 1 (Score:3, Informative)

      by Carnildo ( 712617 )
      For my home system, I'm planning to go with RAID 1 for both backup and protection. Two hard drives in hot-swap mounts. Weekly backup procedure is to remove one of the drives, put in a third drive, and take the removed drive to work with me as an off-site backup.

      It'll be cheaper than tape, but more work.
    RAID 5 is only really appropriate if you are building a large array. The money you will spend on the controller will make the cost/megabyte higher than RAID 1 unless you are looking for a very big array (more than you can get with a mirrored pair.) I have a RAID 5 array I built about 2 years ago with 4 160GB drives on a 3ware 6000 series RAID controller. It has worked great and I'm planning on using RAID 5 again for my next array. I've only had one drive failure so far but it recovered from it beautifully.
  • RAID 0, you need a hero,
    RAID 1, is equally fun,
    but RAID 5 keeps you alive!
    • You forgot the final four lines to that song!

      RAID 0, you need a hero,
      RAID 1, is equally fun,
      but RAID 5 keeps you alive!


      RAID 5 - better keep an extra drive
      Or you'll be down until the replacement arrives
      RAID 10 is better my friend
      Work doesn't stop when the drive comes to an end
    • by Shalda ( 560388 ) on Wednesday June 16, 2004 @03:36PM (#9444997) Homepage Journal
      And here's the definitions:
      RAID 0: This is a striped set, there is no redundancy. One drive goes, everything's gone. Useable space = 100%
      RAID 1: This is a mirrored set. Typically this involves 2 drives. One drive is an exact copy of the second. If a drive fails, you replace it and rebuild the set. Life goes on. Useable space = 50%. Most IDE raid cards only support RAID 0 AND 1.
      RAID 5: This is a striped set with parity. You get the performance associated with a striped set. Particularly on reads. If you have 4 drives, there are 4 stripes. 3 of those stripes are data stripes, the 4th is parity. Lose 1 drive and the parity information is used to rebuild the set. Useable space = (n-1)/n. To do this in hardware is typically fairly expensive.

      There's a lot of hardware solutions out there. It can also be done in software. Windows supports creating disk sets in software. Other options include the purchase of a Snap! server, or other brand of NAS. If you've got a little $ to throw around, NAS is the way to go. Plug it into your network, minimal setup, and you're off and running. Not very upgradeable, and somewhat problematic if your drive does actually die, but I use them at the office for a zero-maintenance file server.
  • RAID-1 (Score:5, Informative)

    by Mz6 ( 741941 ) * on Wednesday June 16, 2004 @03:23PM (#9444778) Journal
    I would choose RAID-1, because RAID Level 1 provides redundancy by writing all data to two or more drives. The performance of a RAID-1 array tends to be faster on reads but slower on writes when compared to a single drive. However, if either drive fails, no data is lost. This is also a great entry-level starting point as you only need 2 drives. The downside is the cost per MB is high in comparison to the other levels. This level is often referred to as disk mirroring.
  • RAID 5 or RAID 10 (Score:5, Informative)

    by strictnein ( 318940 ) * <strictfoo-slashd ... m ['oo.' in gap]> on Wednesday June 16, 2004 @03:23PM (#9444787) Homepage Journal
    Try RAID 5 [acnc.com] or RAID 10 [acnc.com] (not to be confused with RAID 0+1 [acnc.com]). This site has a nice overview of all the RAID options [acnc.com]. And, of course, Wikipedia has some info [wikipedia.org].

    Quick overview:
    RAID 5 - Requires at least 3 HDs (many times implemented with 5 - can be used with up to 24, I believe). Data is not mirrored but can be reconstructed after drive failure using the remaining disks and the parity data (very similar to how PAR files can reconstruct damaged/missing RAR files for the Newsgroup pirates out there). % of total space available is dependent on the number of drives used.

    RAID 10 - High performance, but expensive. You get ~50% of the total HD space as it is fully mirrored. So, 1 TB total disk space nets you 500 GB total storage space. Your data is mirrored so if one drive fails you do not lose everything. However, if you experience multiple drive failure you can be in big trouble.
    • Re:RAID 5 or RAID 10 (Score:4, Informative)

      by harryk ( 17509 ) <jofficer AT gmail DOT com> on Wednesday June 16, 2004 @03:37PM (#9445007) Homepage
      Depending on your setup, RAID-10 can be extremely reliable, especially in a multi-drive failure scenario.

      Specifically, the setup is as follows (each "==" pair is a mirror, and the pairs are striped):

      1 == 2
      3 == 4
      5 == 6
      7 == 8

      Setting up a RAID in this way will allow you to experience multiple drive failures while still keeping the array alive. The most detrimental scenario is losing two drives in the same mirrored pair: losing drives 1 & 2 is much more of a problem than losing drives 1 & 4.

      Just my 2 cents, poke holes where necessary
  • by TheCoop1984 ( 704458 ) <thecoop @ r u n box.com> on Wednesday June 16, 2004 @03:24PM (#9444793)
    Whatever you do, never have more than one disk on an IDE channel. Only one device on a channel can transfer data at a time, so you will get absolutely horrible performance if you put more than one HD per channel. If possible, get an IDE RAID card (if you can afford it) or a SATA card/mobo and drives, which don't have this problem.
    • An ideal solution, although it's somewhat expensive, is to do SATA RAID. Adaptec has a controller card [adaptec.com] that is excellent for this and runs about $330. Then you'll need some SATA hard drives. The card can do RAID-0, RAID-1, RAID-5, and RAID-10, so you still have flexibility that way. If you can afford it, do it.
  • Hardware (Score:3, Funny)

    by DaveKAO ( 320532 ) on Wednesday June 16, 2004 @03:24PM (#9444795) Homepage
    Wow inexpensive & reliable... Those are two words you don't see together too often.
    • Re:Hardware (Score:3, Insightful)

      by Binestar ( 28861 )
      Wow inexpensive & reliable... Those are two words you don't see together too often.

      "Good, Fast, Cheap: Pick any 2."
  • Raid 1, 0+1, or 5.. (Score:4, Informative)

    by XaXXon ( 202882 ) <xaxxon.gmail@com> on Wednesday June 16, 2004 @03:24PM (#9444805) Homepage
    Your good options are raid 1, raid 0+1, or raid 5, depending on what you want..

    Raid 1 is the safest.. just mirroring the drives, but it results in no speed increase..

    Raid 0+1 does mirrored stripe sets -- you get the speed advantages of raid 0 with the full protection of raid 1.

    Raid 5 is good middle ground. Raid 5 stores 1 drive's worth of parity. When you lose a drive, your system goes down (if you don't have a hot spare), but you throw another disk in and it'll come back up. You also get some speed increase over a normal drive setup. With RAID 5, you only lose a single drive's worth of capacity no matter how many drives are in your array, whereas with raid 1, you lose 50%.
    • Raid 5 is good middle ground. Raid 5 stores 1 drive's worth of parity. When you lose a drive, your system goes down (if you don't have a hot spare), but you throw another disk in and it'll come back up

      Actually, with any proper implementation of RAID 5 you wouldn't lose functionality during a single drive failure, but you would suffer a performance hit because every read would require the drive controller to reconstruct the missing data from the parity.

      Replace the bad drive very quickly, though, because if a second drive fails before the array is rebuilt, everything is lost.
    • by afidel ( 530433 )
      RAID 0+1 sucks, it can only sustain a single drive failure. RAID 10 (1+0) can sustain multiple drive failures without data loss under the right circumstances. The cool thing about RAID 10 is that you can use a pair of mirrored drive sets and use software to do the striping at near zero cost and you get controller redundancy! (most people who do RAID 10 will use the built in RAID1 controller and an addon two port RAID controller)
    • Your assertion about RAID-1 is not precisely true. If the RAID controller is designed well, and the driver supports it, you can get almost the same read speed enhancement of RAID-0.

      Example: two SATA drives
      RAID-0: Write Speed: 2x, Read Speed: 2x
      RAID-1: Write Speed: 1x, Read Speed: 2x

      Basically, when doing a write, the driver can use the same buffer and stream the write data to both drives synchronously, meaning no slowdown. A proper read driver will read alternate chunks simultaneously from the two drives, roughly doubling sequential read throughput.
      • Example: two SATA drives RAID-0: Write Speed: 2x, Read Speed: 2x RAID-1: Write Speed: 1x, Read Speed: 2x

        You are unlikely to get double read performance from a RAID 1 setup. It's theoretically possible, but in practice it doesn't happen (take a look at the recently posted review [tech-report.com] at Tech Report). It's actually easier to get good performance with RAID 1 using software RAID as the OS is in a much better position to schedule reads efficiently than a RAID controller.

  • by patniemeyer ( 444913 ) * <pat@pat.net> on Wednesday June 16, 2004 @03:25PM (#9444816) Homepage
    I went through this last year and here's what I came up with for the best benefit to cost ratio with the lowest hassle. In short, take an old PC and put a four channel raid controller card in it to do RAID 5. Add a big extra fan for safety and you're done.

    Here's what I came up with: Total cost about $1200 (probably less by now).

    0) Red Hat Linux, ext3 filesystem.
    1) 3Ware Escalade 7506-4LP card (64 bit card, but fits in 32bit slot)
    2) 4x 250Gb Western Digital drives
    3) Big fan.

    At RAID 5 this yields 750GB (715GB after the crappy GB conversion).

    The 3Ware software has a nice web monitor interface and does daily or weekly integrity checks. It emails me if there is a problem - I did have one drive die already and replaced it easily.

    Pat Niemeyer
    Author of Learning Java, O'Reilly & Associates
  • by baudilus ( 665036 ) on Wednesday June 16, 2004 @03:25PM (#9444819)
    I work for a company that uses all types of RAID. I've had experience with 2-bay, 8-bay, and 16-bay RAIDs, as well as RAID cards. If you want the cheapest option, just get a two-drive system (either with bays or just a card) and use RAID1. It's basically drive mirroring.

    Bottom line, you need to figure out how much you're willing to spend on this and then go from there and see what your options are. RAID5 is the hotness, but it's very expensive (easily over $10K for large capacity devices).
  • by devphaeton ( 695736 ) on Wednesday June 16, 2004 @03:26PM (#9444827)
    "I'm tired of HD failures. I've suffered through a few of them. Even with backups, they are still a pain to recover from.

    If you just run Gentoo, you can type "emerge new_harddrive" and it takes care of everything by the end of the month!

    or..

    Your shit PEECEE WINTEL crap parts made in china are no match for real quality Mac hardware, which are fully integrated with the UNIX UNDERPINNINGS that have the Best GUI Ever(tm) on top.

    Disclaimer: I love trolls.
  • by ccwaterz ( 535536 ) on Wednesday June 16, 2004 @03:26PM (#9444830)
    Dear Slashdot,

    which is better, SCSI or IDE?

    Googleless in VA
  • My choice (Score:5, Insightful)

    by Simon Carr ( 1788 ) <slashdot.org@simoncarr.com> on Wednesday June 16, 2004 @03:26PM (#9444831) Homepage
    If I could, I'd get 2x 250GB HDDs in a RAID1 (promise controllers are good for this), and a third 250GB for a cold backup of all my data that syncs weekly.

    Raid's great, but an rm -rf is still an rm -rf, thus the third drive :)
  • Raid 5 (Score:3, Interesting)

    by silas_moeckel ( 234313 ) <silas@@@dsminc-corp...com> on Wednesday June 16, 2004 @03:26PM (#9444839) Homepage
    If you're running a fileserver with a decent amount of writes, you're going to want RAID 5, as it has the least penalty. Hot-swap drives are easy enough with SCSI or FC, a bit more complicated with SATA, and rather complicated with IDE, but it can be done. For a simple setup, as little as 3 disks will do, and you will get 2 disks' worth of space; performance setups will have more spindles. You didn't state what sort of load you're expecting, and that makes a huge difference. For the ultra-cheap: I have picked up IDE RAID 5 cards supporting 4 drives with hot swap for sub-30 bucks on eBay. They only work with 120 gig drives max and are limited to Ultra 66, but that's a third of a TB usable for a few hundred bucks, and its performance is good enough for a 100bt file server.
  • by SeanTobin ( 138474 ) * <byrdhuntr AT hotmail DOT com> on Wednesday June 16, 2004 @03:27PM (#9444845)
    Seriously. Raid is all about risk. Figure out how much risk is acceptable to you. If you have a stack of 6 drives and you only believe 1 is ever going to fail at any one time, then go with raid 5.

    If you have a stack of 6 drives and believe not a single one is ever going to fail, go for level 0.

    If you are a government contractor and are required to handle simultaneous failures of 75% of your drives, either mirror them all or go with 5+1 or a raid 10 setup.

    All in all, it's a poor question to ask Slashdot. You need to let us know what you consider an acceptable failure, and by the time you have that figured out, determining what RAID level you need is easy.
  • RAID 5 or 6 (Score:3, Informative)

    by Tr0mBoNe- ( 708581 ) on Wednesday June 16, 2004 @03:27PM (#9444846) Homepage Journal
    RAID 5 or 6 will stripe the data across all drives in the array. You will basically need about 8-10% of the total space set aside for data recovery. You can lose 2 hard drives (as long as they are not next to each other) and not lose any data. RAID 5 and 6 are only incredibly useful in applications with more than 4 hard drives and about 500GB of storage. It's a little faster than the lower RAIDs because the redundancies are simple parity bit calculations, and are done twice for each single data change on disk. The lower RAIDs will have a set of disks that actually mirror the data intact (RAID 1) or perform more intensive Hamming Distance calculations and store the results on another set of disks.

    So, RAID 5 or 6 would be the best (RAID 6 is worth the extra bit of space for the 2nd calculation, and really helps when you can test the parity bits against another parity to recreate the lost data.)

    There will be some slowdown associated with RAID, but it won't be as bad with 5 or 6, and generally you can live through it with the thought of having relatively robust file servers.

  • Software RAID? (Score:5, Insightful)

    by Suydam ( 881 ) on Wednesday June 16, 2004 @03:29PM (#9444876) Homepage
    Have you thought about software RAID? Before everyone jumps down my throat, I realize that it's slower than hardware RAID...but, here is my rationale for using it:

    1) You don't need drives that are the same size.
    I've done hardware RAID, had a drive fail 2 years down the road, and not been able to find an 18GB SCSI drive to re-insert into the array. That has the potential to jack your entire array. With software RAID, you buy a 36GB drive, partition it so that 1 partition fits your array, and off you go.

    2) It's a personal file server, so speed is less important than cost (I'm guessing). With software RAID you can mix all sorts of wondrous things together: IDE drives from the basement, SCSI-320 drives you stole from work, and nearly everything in between. It's very flexible, and has no associated controller cost.

    3) It's easy as heck. You can configure it in Disk Druid/fdisk, and it works quite easily in any major distribution (I've done it in Slack, Debian, RH, Fedora and Mandrake).

    The major downside is that you cannot (at least I don't know how to) hot-swap drives. But again, this is a personal file server. Spend your money on pizza and beer; screw the SCA hot-swap drives that are going to cost you an arm and a leg.

    That's just my $0.02...flame away
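    If you go this route with the classic Linux raidtools, the whole configuration is an /etc/raidtab plus one mkraid run; here's a rough sketch for a 3-disk RAID 5 (partition names are examples):

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              64
        device                  /dev/hda3
        raid-disk               0
        device                  /dev/hdb3
        raid-disk               1
        device                  /dev/hdc3
        raid-disk               2

    Then mkraid /dev/md0 builds the array and you put a filesystem on it as usual.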
    • Re:Software RAID? (Score:3, Informative)

      by pe1chl ( 90186 )
      Your comparison is between software raid as you found it in Linux, and "hardware raid" as you once found it on a certain raid controller.

      The limitations and versatility are not determined by the "software or hardware" ("hardware" being software on a dedicated raid controller) but by the design of the specific software under consideration.

      True, the software raid in Linux is quite versatile, but there is no reason why a raid controller could not work with two disks of different sizes and use part of one disk.
    • Re:Software RAID? (Score:5, Informative)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday June 16, 2004 @04:20PM (#9445484) Homepage Journal
      Before everyone jumps down my throat, I realize that it's slower than hardware RAID...

      Many benchmarks show the exact opposite, except when dealing with high-end RAID cards. Why? Because the average CPU on a system with a RAID is going to be much more powerful than anything you're likely to find on a low- to medium-range hardware adapter. I use software RAID on a number of FreeBSD servers and it absolutely flies.

      The major downside is that you cannot (as least I don't know how to) hot-swap drives.

      That's a function of the hardware and OS. One of the above-mentioned FreeBSD servers is in a nice IBM server case with hot-swappable front-access LVD drives. The swap process is:

      1. Run vinum stop <diskname> to shut down the RAID device.
      2. Run camcontrol stop /dev/<device> to turn off the drive.
      3. Swap the drives.
      4. Run camcontrol start /dev/<device> to turn on the drive.
      5. Run dd if=raidconfig_<diskname> of=/dev/<device> to install the software RAID parameters at the beginning of the drive.
      6. Run vinum start <diskname> to start the RAID device.
      7. Watch the LEDs flash as the volumes are rebuilt, and slip out to type up an invoice.

      There's no reason you can't do hot-swappable software RAID. If there is, then someone forgot to tell my server.

  • by isolationism ( 782170 ) on Wednesday June 16, 2004 @03:36PM (#9444984) Homepage
    Really depends on the type of RAID you'd like to implement.

    RAID 0 stripes the data across 2 or more drives and therefore offers no redundancy (in fact, in a two-disk stripe you multiply the danger of data loss x4 compared to two individual drives -- you not only double the possibility of failure with two disks as opposed to one, but stand to lose all of the data on both drives should one fail). In any event, no point in discussing it further since redundancy is the point.

    RAID 1 offers redundancy by exactly duplicating the contents of a drive onto another drive, and needs exactly two drives. This is considered the most "fail-safe" RAID level, although it offers no performance benefits whatsoever.

    RAID 10 (or 1+0 or 0+1) is a combination of RAID 0 and 1 and is nearly always done with four drives, although technically it can be done with six or eight (if your controller supports them). It offers both performance benefit and redundancy, although the cost of the "wasted" drive space is quite high.

    RAID 3 involves using 3 or more drives, one of which contains parity information to rebuild the lost drive should any of the other drives fail. This is one of the least popular RAID formats and has more or less been totally replaced by RAID 5.

    RAID 5 involves using 3 or more drives and writes parity information across all drives in the array, allowing one drive to fail with little to no performance loss. The failed drive can be replaced and the RAID rebuilt. Depending on your hardware/software, this can often be done hot without having to power down the system at all. It is one of the most commonly implemented RAID solutions because of the good mix between drive use (the price goes down the more drives you have in the array yet you can have as little as three), redundancy, and high availability.

    There are others out there like RAID 50 but nothing worth mentioning, especially for a home user.

    The only question left to you is whether the RAID will be run by hardware or software (software might be a good choice if you are already running Linux on the server, but you'll have to ask someone else about it because I don't know a thing about it). Personally, I chose the hardware route years ago and bought an Adaptec 2400A, which is a four-channel hardware ATA-RAID card capable of RAID 0, 1, 10, and 5 -- guess which I use. I use all four channels, each with a 200GB SATA hard drive. I've lived through a couple of drive failures, a full drive upgrade (when I first bought the card it was 4x60GB drives), and even one incident where two drives' RAID tables got zapped (I'll NEVER put my drives in removable cages again), and never lost a byte of data -- so the CAD$500 or so for the investment on the card was worth it.

    600GB of storage means not having to worry about all those unlicensed-in-North-America anime torrents running out of space any time soon.
  • by JoeShmoe ( 90109 ) <askjoeshmoe@hotmail.com> on Wednesday June 16, 2004 @03:40PM (#9445045)
    True story...had a personal fileserver with a Promise RAID card. I got the Promise card because it was cheap and had a good rating on a couple of review sites.

    What I didn't know at the time, but learned the hard way, is that Promise's RAID monitoring program "PAM" is a user-mode-only application. That means that if you don't log in, it doesn't run. Care to guess what happened to me?

    At some point while I was gone for the weekend, I can only guess something crashed and rebooted Windows 2000. When it rebooted, I didn't have it set to automatically login (why would I? it's a server). So "PAM" wasn't running when one of the drives in the RAID 5 set failed. Maybe it even had something to do with the crash, I don't know.

    Now, the point of PAM is that if a drive fails, an e-mail gets sent, in this case to my mobile phone's text-page address. Since PAM wasn't running, however, nothing was sent. The drive failed and, I can only guess, put off so much heat that it cooked the drive above it (why do so many cases mount hard drives horizontally above each other, anyway?) and the next thing I know, I can't log in to my server from where I'm staying. I call a family member with a key to come by and they are unable to restart the server. It wasn't until I came home and read the BIOS messages that I understood why. Everything gone.

    I had a lot of stuff on CDR, but let me tell you, I was plenty outraged that Promise could design something so utterly stupid as a monitoring utility that doesn't know how to run as a service. Even to this day, PAM still will only run as a user-mode program, and even worse, you actually have to log in to the program now to start it, which can't be scripted.

    F Promise. Only a complete and utter fool would be stupid enough to buy any of their products. May they rot in that special place reserved for child molesters. (Yes, I'm still bitter about it)

    - JoeShmoe
    .
  • by Zocalo ( 252965 ) on Wednesday June 16, 2004 @03:42PM (#9445069) Homepage
    So, your key requirements are:

    It's for home use

    No data loss if a drive dies

    Easy to rebuild - remove dead drive, install new one

    Budget... Ah. Why is it *every* "Ask Slashdot" never mentions the budget? On the cheap, you could do simple mirroring with RAID1 - most mobos with on-board SATA RAID will do this for you. The overhead is that you pay twice as much per GB, because you obviously need two drives, and the performance gains are negligible.

    Personally, I'd take the more expensive route; get a proper hardware RAID controller with proper RAID management software. There are 4-port SATA RAID controllers (who *really* still needs SCSI for home use?) for a few hundred dollars that do full RAID5. You lose one drive for the parity info, but that could be as little as 25% of your total capacity if you get four drives instead of the minimum RAID5 requirement of three drives.

    Also, with a proper hardware RAID controller, you should get a performance boost from use of RAID and have minimal CPU overhead. Get four of Seagate's new 400GB drives and you'll have over a TB of disk space, which should give you some bragging rights for a month or two before it's old hat. :)

  • RAID information (Score:5, Informative)

    by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Wednesday June 16, 2004 @03:44PM (#9445095) Homepage Journal
    My goals are to build a file server that can live through a drive failure with no loss of data, and will be easy to rebuild. Ideally, in the event of a failure, I'd just like to remove the bad hard drive and install a new one and be done with it. Is this possible?


    You want a Promise UltraTrak SX8000 [promise.com]. It's the easy, idiot-proof array. We're using several of these.

    If a drive fails, it beeps at you til you replace it. You just yank it out, and put in a new drive, the same size or larger. It then rebuilds automatically. No shutdown or reboot required.

    The Linux crowd will be happy to know the RM series runs Linux. I don't know about the SX series, but I suppose it does too. Either one appears to the server to be a single SCSI drive. No drivers required, other than making the SCSI card of your choice work.

    There's the Linux method of doing it too, which I like a lot. It saves you a *LOT* of money in extra hardware. You can go with 3 drives without adding any extra cards to your system, or you can put in IDE controllers to add as many drives as your system can support (PCI slots, power, and physical mounting points are the limitation). Read the "Software-RAID-HOWTO", which should come with your system. I've done many of these also, and they work quite nicely. You have to shut down the system to swap a drive, and then run `raidhotadd` with a couple of parameters (the md device and the new partition, if I remember right), and you can be running while it rebuilds.
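    For reference, the hot-add itself looks like this once the replacement drive is partitioned to match (device names are just examples):

    $ raidhotadd /dev/md0 /dev/hdc2

    The array then rebuilds in the background while the box keeps serving files.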


    How many drives to I need to get this done, 2,4 or 5? What size should they be? I know when you implement RAID, your usable drive space is N% of the total drive space depending on the RAID level."


    You should have looked it up before you posted.

    RAID 5 is the most common for a large redundant array. The array size is (N-1)*size . The more drives you use in a single array, the better off you are for size loss.

    3 100Gb drives = 200Gb
    5 100Gb drives = 400Gb
    10 100Gb drives = 900Gb
    10 200Gb drives = 1.8Tb

    RAID 0 is striping. No redundancy, which you won't be happy with. (One failure means losing the array.)

    RAID 1 is mirroring. With two drives, you still only have the size of one.

    RAID 50 is nice in that it does striping across redundant arrays. You lose size, but gain speed.

    Most other RAID types aren't very popular for various reasons.

    Watch out for going over 2Tb in size on a single block device. I'm having problems with that right now. I have two Promise VTrak 15100's with 15 250Gb SATA drives in each, and any block device over 2Tb is giving me grief. There are legitimate reasons for this, most of which newer documentation claims to be fixing, but I'm still having problems with a current Linux release. Making logical drives under 2Tb works, but doesn't accomplish what I need.

    I hope this helps.
  • by BeerMilkshake ( 699747 ) on Wednesday June 16, 2004 @03:45PM (#9445100)
    RAID 1 and up can protect you from a failure of any one disk. That's great, because that is the most probable fault condition.

    However, what happens if your place has a fire, gets vandalized, or a burglar takes off with your server(s)?

  • Try this... (Score:5, Informative)

    by wumarkus420 ( 548138 ) <wumarkus@h o t mail.com> on Wednesday June 16, 2004 @03:46PM (#9445113) Homepage
    At my last job, we needed a basic RAID device that was under $500. We found this: http://www.accusys.com.tw/7500.htm It was about $200, and is OS and system independent. You simply put in two IDE drives, and you magically have RAID-1. You can hot-swap the IDE drives if necessary. We had one drive go bad and it worked perfectly. I recommend it to anybody on a budget. It takes up 2 drive bays, so it's a pretty easy fit in any standard PC.
  • by tstoneman ( 589372 ) on Wednesday June 16, 2004 @03:53PM (#9445206)
    Don't get too fancy with yourself on this one...

    You definitely don't need any type of RAID solution because it doesn't offer you what you really need. You say you want RAID, but what you really want is backup.

    All RAID solutions deal with disaster recovery, but they don't deal with the situation where you accidentally rm -rf a directory that you wanted to keep. If you mirror or RAID 5 your drives, you're still hosed, because both drives will delete the files. In the end, backup is more important and much more convenient.

    Instead, go with a better approach, which is to copy or tar your files every night (or every week) to a backup drive, preferably over the network on a completely different machine. This will prevent a power surge or accidental shutoff from corrupting both drives at the same time.
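    A hypothetical version of that nightly job, pushed to another box over ssh (the hostname and paths are made up):

    $ tar czf - /home | ssh backupbox 'cat > /backup/home-`date +%Y%m%d`.tar.gz'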
    • Indeed, no RAID (Score:3, Informative)

      by ballpoint ( 192660 )
      I'm doing the same thing at home. I have three identical drives. One, the primary, is sitting in the server; the secondary is unmounted in a removable tray in the server; and the third is also in a tray but at a distant location.

      Initially, I dd'ed the primary to the other two disks.

      Every morning the primary is 'cp -fpRu'ed to the second one. No files are deleted on the secondary, unless I'm running out of diskspace there, at which time I do an 'rsync -aH --delete' after some verifications.

      Every few weeks I swap the secondary with the off-site disk.
  • The joys of RAID (Score:5, Informative)

    by retro128 ( 318602 ) on Wednesday June 16, 2004 @04:01PM (#9445280)
    Seeing as how you want data redundancy, there are three RAID levels for you to pick from:

    RAID 1 - Drive mirroring.
    Pros:
    -Excellent read performance, no loss of performance if one drive crashes.

    Cons:
    -The amount of space you can have on this array is limited to the largest drive you can find. Then you have to buy a second one to mirror the data, which means you are paying double the cost per unit storage on your array.
    -Write performance is slower than other RAID levels.

    RAID 5 - Striped array with parity. You can stack as many drives as you want on this array (within limits of the controller of course) and lose only one for redundancy.

    Pros:
    -You can build a very large data array out as many drives as you want, losing only one for the purpose of data reconstruction should a drive in the array fail.

    Cons:
    -Array performance dies in the event of a failure, as lost data is reconstructed on the fly from parity information stored across the remaining drives. Of course, performance is restored when the bad disk is replaced and the array reconstructed.
    -You need at least 3 drives to build a RAID 5 array.

    RAID 10 - Drive mirroring with striping. Essentially combines RAID 0 and RAID 1, hence RAID 10.

    Pros:
    -Redundant and fast. Array can survive multiple drive failures.

    Cons:
    -Expensive. You need at least 4 drives to get started with RAID 10, and go by 2's as you expand on the array. As with RAID 1, your price per unit storage is doubled.
    -The array can survive multiple failures, but that depends on which drives die... If you lose two drives out of the same mirror set, then the array is gone.

    Which RAID level you pick depends on your application. If you are interested in having something like a 1 TB data dump, you'll probably want to go RAID 5. If you only want 200GB or less in your array, then RAID 1 is probably the way to go. If you are interested in lots of space, lots of redundancy, and have lots of money, then RAID 10 is probably what you want.
  • by dasMeanYogurt ( 627663 ) <texas,jake&gmail,com> on Wednesday June 16, 2004 @04:07PM (#9445348) Homepage
    It sounds to me like this guy just needs a quality HDD and good tape backup. Do not put your faith in RAID; put it in a good off-site backup. I've seen RAID solutions fail too many times. I've seen RAID solutions fail twice recently. The first one was a company with a slick server and nice hot-swappable SCSI drives, but their controller card went out. It was replaced by the manufacturer, but the techs were unable to recover the data. The next one happened when a machine's case fan went out and the mirrored HDDs cooked themselves to death. The moral of the story: NEVER TRUST RAID, and as always, keep a backup.
  • by Sivar ( 316343 ) <charlesnburns[@]gmail...com> on Wednesday June 16, 2004 @04:25PM (#9445528)
    Why not read a few FAQ entries [storagereview.com] at StorageReview [storagereview.com]?

    In short, I would probably recommend RAID5 if you have 3+ drives.
    RAID5 gives you the most available space while still being redundant. It allows for exactly one hard drive failure.
    RAID5's write speed is usually terrible, especially with a small number of drives, but write speed isn't a big deal on my home file server. (Only you know about your needs).

    RAID1+0 (NOT RAID 0+1, which is inferior) is great for performance. With 4 drives, you have potentially twice the STR of one drive (writing) and 4 times the STR of one drive (reading). Of course, since STR is not important for most IO, this doesn't really affect your end performance much unless you are dealing with linearly reading/writing very large files.
    Writing performance will almost certainly be higher than with RAID5.
    You do lose quite a lot of space (especially when you use a large number of drives). If you used a 4-drive 1+0 array, you would have the space of two of those individual drives.

    RAID1 is nice, and is very reliable, but is impractical with more than two drives unless you are incredibly paranoid. RAID1 simply makes all drives copies of the others; thus, you always have as much space as one drive would have, even if you have ten. Of course, you could also handle 9 drive failures and not lose data. RAID1 is fine for 2-drive arrays, though.

    DO NOT FORGET that RAID is no substitute for regular backups. RAID will not help if your data loss is caused by FS corruption, a cracker, accidentally typing "rm -rf /", etc.

    For lowest cost, I would use software RAID, such as Linux's md driver, FreeBSD's Vinum, or whatever Windows has (RAID5 requires Windows Server). (I would not use Windows as the file server myself).
    For slightly higher cost, try a Promise controller.
    I would avoid Highpoint and Silicon Image controllers. Highpoint, especially, is crap. (but it is very cheap, at least).

    If you possibly can, I would recommend a nice 3Ware Escalade controller. Escalades are true hardware RAID cards, unlike Highpoint/SI and most of Promise's cards, and are OS independent and very stable (with certain exceptions for some unlikely configurations).

    If you have any questions, you might try the StorageReview forums. There are a number of extremely knowledgeable people there, including engineers and executive-level researchers at hard drive companies. They can give far better advice than I can, I am sure.

    By the way, all my comments assume that all drives are the same size. If not, treat all drives as if they are the same size as the smallest drive in the array (unless you are using JBOD, which is not redundant).
  • by macdaddy ( 38372 ) on Wednesday June 16, 2004 @04:42PM (#9445681) Homepage Journal
    ...buy a decent RAID controller. Don't buy a POS Highpoint or Promise card. I speak from experience with both when I say you will regret it. Buy a decent brand of card such as a 3Ware or LSI. Adaptec is also fine if you're using SCSI drives. I personally have 3 3Ware cards (7000-2, 7506-4LP, 8506-12) and love them all. I also have an Adaptec 2940U2W in my old Mac that I'm also quite fond of (not to mention a few 2940UW controllers floating around somewhere). Buying a good controller is probably the most important thing you can do. You need excellent driver support. Highpoint cards and their support of Linux is a joke at best; the driver is a freaking nightmare. I've had 2 experiences with Promise controllers and OEM chipsets. Neither was positive, and both resulted in massive amounts of lost data. 3Ware and LSI support in Linux is excellent, as is Adaptec's. The #1 rule of building any array is start with a quality controller.

    Next up is drives. Not all drives are alike, as I'm sure you already know. Do you want a SCSI or an IDE array? I won't go into this lengthy topic further; I'll assume that you will build an IDE array. Some drives do not work well in RAID setups. The controller companies are more likely to tell you this than the drive manufacturers. I own 6 Western Digital WD1200JB drives (7200 RPM, 8MB cache, 120GB capacity). By all accounts one would expect those drives to work quite well in a RAID setup: they have excellent read/write times individually and have a massive amount of cache. One would think that, and one would be wrong. 3Ware, Highpoint, and Asus tech support (on an OEM Promise chipset in the A7V333) all recommend against using Western Digital drives. 3Ware did however say that WD will give you firmware that works significantly better in RAID setups if you ask for it. Personally I'm a fan of Maxtor, both the drives and the company. I've had very few failures with Maxtor drives, and whenever I did, they were always extremely helpful with getting me a replacement fast. I've been very impressed by their service. I have 2 Maxtor 7Y250P0 and 2 6Y200P0 drives in the server sitting next to me. The second is a very high quality drive from Maxtor's DiamondMax Plus 9 line. It too has 8MB of cache and 200GB to spare, and runs at 7200 RPM. Nice drive. The first pair are from Maxtor's MaXLine Plus II line. They have a high MTTF, 8MB cache, 250GB of space, and run at 7200 RPM. They are also a little bit faster than the 6Y200P0. They are excellent drives. My next drives will also be Maxtors, but this time I'll be buying the SATA siblings of the MaXLine Plus II product line.

    That brings me to my next point: PATA or SATA. Does your case have an abundance of room? I mean a massive amount of room to route long 80-conductor ribbon cables? Do you have at least 1 if not 2 PCI slots to waste below your RAID controller, with the room needed to route the ribbon cables and make connections? If not, then you need to go with Serial ATA drives. Don't even think twice about it. Go with SATA. The drives cost almost the same nowadays, and you'll find what little price difference there is ($5?) is worth it in the end. SATA drives are so much easier to wire. I have a case full of round cables; the case I have is an extremely large Codegen case, and even I am having trouble with the cable mess. SATA is a wonderful thing. Along the same lines are hot-swap cages. There are a dozen brands to choose from, and you should probably use them even if you don't need hot-swap capabilities; I need them to create 3.5" drive slots from 5.25" bays. If you do want to do hot-swapping, make sure your drive cage and controller support it.

    Finally we get to RAID levels. You don't want to increase your risk of losing data, so level 0 is out. Level 1 is extremely redundant and, with the right controller, can actually speed up reads. It's also costly at twice the cost per GB, so unless the data you're storing is absolutely critical you won't want to use 1 (in most cases). Forget about level 2. For starters th

  • by KaiLoi ( 711695 ) on Wednesday June 16, 2004 @04:44PM (#9445700)
    I have just finished doing this exact thing.

    I basically built a box to do nothing other than serve files. I put together a nice simple old PC (550MHz with 256MB of RAM) and mounted it in an old rackmount case I had lying around.

    It's running Debian with kernel 2.4.26.

    I'm running software RAID and installed 2 dual-channel IDE cards.

    I threw in 6 Seagate 120 gig drives (the ones with the 8 meg cache) and ran RAID 5 across 5 of them, with a hot spare to rebuild the RAID should a drive fail. Each drive has its own IDE channel to prevent a channel failure from screwing my RAID.
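
    For reference, a layout like that can be built with Linux software RAID's mdadm tool. A minimal sketch only; the device names below are hypothetical and depend on which channels your drives land on:

        # RAID 5 across five disks plus one hot spare (device names made up)
        mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 \
            /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hdm1 /dev/hdo1
        # then put a filesystem on the array
        mkfs.ext3 /dev/md0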

    I'm using ext3 as the filesystem, and I wrote my own little RAID monitor script that SMSes me should a drive fail and alarms locally.
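
    A monitor like that can be as simple as a cron job that greps /proc/mdstat for a dropped drive. A rough sketch, where the address stands in for whatever SMS gateway you use:

        #!/bin/sh
        # A degraded md array shows an underscore in its status line, e.g. [UUUU_].
        if grep -q '\[U*_' /proc/mdstat; then
            echo "RAID degraded on `hostname`" | mail -s "RAID ALERT" 5551234567@sms.example.com
        fi

    (Newer versions of mdadm can also watch arrays natively with mdadm --monitor.)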

    This setup has been rock steady and gives me 460 (ish) gig of usable space after formatting.

    For added peace of mind, the machine is plugged into a UPS that is connected to the machine via serial. If the UPS kicks in, it shuts the machine down properly after sending an alarm SMS (the DSL and switch are also on the UPS). (Yes, I'm a paranoid freak.)
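
    With an APC unit, for instance, apcupsd handles the serial hookup; the relevant pieces of /etc/apcupsd/apcupsd.conf look roughly like this (a sketch, assuming a smart-signalling UPS on the first serial port; other brands have their own daemons):

        UPSCABLE smart
        UPSTYPE apcsmart
        DEVICE /dev/ttyS0
        # shut the box down cleanly when about 5 minutes of battery remain
        MINUTES 5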

    This makes a perfectly good media and file server and I've had no problem with it in the few months I've had it.

    I also recommend setting the spin-down time on the drives manually with hdparm. It was getting awfully warm in the box till I turned that on on the Seagates. Modern drives run rather hot. ;)
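
    Setting that looks like this (the device name is hypothetical; -S takes its value in multiples of 5 seconds, so 120 means 10 minutes):

        # spin the drive down after 10 minutes of idle
        hdparm -S 120 /dev/hde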

    I have the whole thing mounted via SMB on my other boxes around the house, and it's fast (gig Ethernet), reliable, and easy.
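
    The /etc/fstab entry for such a mount can look roughly like this (a sketch; server, share, and credentials are placeholders):

        //fileserver/media  /mnt/media  smbfs  username=me,password=secret  0  0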

    Though do remember that no amount of RAID will save you if you lose 2 drives through some horrible freak of badness, and no RAID level is going to protect you from a house fire. Hence mine also rsyncs all my absolutely vital files (scanned family photos and docs) offsite to a file storage site every night at 2am, so as not to chew my bandwidth during usable times. Don't forget: the only truly secure data is that which is backed up... and offsite... twice. ;)
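
    The offsite job is just a crontab entry (a sketch; the host and paths are placeholders):

        # every night at 2am, push the vital stuff offsite over ssh
        0 2 * * * rsync -az -e ssh --delete /data/vital/ backup@offsite.example.com:vital/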

  • Rsync every night (Score:3, Insightful)

    by derphilipp ( 745164 ) on Wednesday June 16, 2004 @04:57PM (#9445837) Homepage
    I would suggest you don't buy a RAID system. Here's what I do: I have 3 hard drives, one small one with a tiny Linux installation on it and 2 hard drives of the same size for data. Every night Drive 1 is rsynced to Drive 2 and unmounted; then Drive 2 is mounted instead of Drive 1. The next night Drive 2 is rsynced to Drive 1, and so on. The great advantage: if you accidentally delete a file, you have until midnight to restore it without any hassle.
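    A rough sketch of that nightly job (the devices and mount points are made up; a real script would detect which disk is currently live, e.g. from /proc/mounts, rather than hardcoding it, and would check for errors):

        #!/bin/sh
        # Tonight: /dev/hdb1 is live at /data, /dev/hdd1 holds last night's
        # copy at /mnt/spare.
        rsync -a --delete /data/ /mnt/spare/ || exit 1
        umount /data /mnt/spare
        # Swap roles: the freshly synced disk goes live, and yesterday's live
        # disk becomes the untouched fallback copy until tomorrow night.
        mount /dev/hdd1 /data
        mount /dev/hdb1 /mnt/spare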
  • by SensitiveMale ( 155605 ) on Wednesday June 16, 2004 @04:58PM (#9445870)
    How do I know? 'Cause I submitted this EXACT SAME story a month ago and was rejected.

    Sigh.

    The cheapest internal, OS-independent RAID 1 (mirror) solution is the Duplidisk3 by ARCOIDE.com.

    You also get a ton of implementations: stand-alone, PCI card (for power only), 3 1/2" bay, and 5 1/4" bay. The ones that install in bays are there so the user can see the status lights.

    If you want an external RAID 5, the cheapest I have found is this: http://www.coolgear.com/productdetails1.cfm?sku=RAID-5-3BAY&cats=&catid=314,312 It is a 3-bay RAID 5 for $800.

    If you want a 5-disk RAID 5, those are around $1200: http://www.cooldrives.com/fii13toatade.html

    If you want an external RAID 0 or 1 relatively cheap, then go with one of these: http://www.cooldrives.com/dubayusb20an1.html
    You can find a ton of these devices on the web, since they all use the same drive controllers and bays. The nice thing about these is that sometimes you can talk the store into selling you the RAID system without the external case. These things simply require plugging in an IDE cable and power, and they can be installed in any PC case that has 2 5 1/4" bays open. If you buy just the 2-bay controller, they are around $230 or so. I have one and I am really happy with it.

    Everything I listed above uses IDE drives and is OS independent.
  • by jemenake ( 595948 ) on Wednesday June 16, 2004 @06:23PM (#9446712)
    Oh boy... where to start. I'll try to offer some info that the dude wouldn't be able to find via Google (or, at least, not all in one place)...

    Basically, your options are RAID-1 and RAID-5... as hundreds of people here have already pointed out. RAID-1 is just straight mirroring (where all drives in the array contain the same information). Usually, this just involves two drives, but there's no reason why you couldn't have, say, three or four drives all mirrored... and you could lose all but one of them and still be up and running.

    RAID-5 is a very cool beast. You basically have an array of drives with some portion of them set aside for redundancy. Most of the posts I've seen here only describe a scenario where you have three drives with one of those drives for redundancy. This only scratches the surface, however.

    For example, you could have an array of, say, 5 10GB drives with 2 drives' worth of redundancy. With this, your RAID implementation would make available to you what seemed to be a single 30GB drive (since 20GB of the total 50GB is used for redundancy). This way, you could have any two drives go bad and you're still okay. (Strictly speaking, two drives' worth of redundancy is usually called RAID-6 rather than RAID-5, but the idea is the same.)

    Another example, I guess, is that you could have a two-drive RAID-5 with one drive's worth of redundancy. In this case, you'd have the functional equivalent of a RAID-1 mirroring setup. Not very sexy... but you could do it in some implementations, I'm sure.

    I'm trying to use the phrase "X drives' worth of redundancy" instead of "X drives set aside for redundancy" because it's important to point out that, in RAID, all of the drives are considered equal. If you have 5 drives with 2-drive redundancy, it's not like you set 3 of them as the "main" drives and 2 as the "backup" ones. There's no preferential treatment like that. All the drives are equivalent and you could lose any of them and the others all move to cover for the one that was lost.

    Now, personally, I like RAID-5 because it offers the ability to use more than 50% of the space you paid for. With RAID-1 mirroring, you always only get to use 50% of the space that really exists. This would be necessary if, when you suffered a storage failure, you always lost half of it. But that's not how it happens. Usually, you lose a single drive. So, it would be nice to maximize your space available, while having some insurance against a single drive failure.

    This is where RAID-5 really shines, because with each successive drive you add, you get all of that space for your usage. You could have, say, four drives with 1 drive of redundancy, and you get 3 drives' worth of space.
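
    To put numbers on it: with N drives of size S and one drive's worth of parity, usable space is (N - 1) x S, i.e. a fraction (N - 1)/N of what you paid for. Three 120GB drives give 240GB usable (67%), four give 360GB (75%), and five give 480GB (80%), versus a flat 50% for RAID-1.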

    Now, there are a few pros and cons for both RAID-1 and RAID-5 regarding recovering/moving data and changing the size of your array, and I'll list them here.
    • Recovery: Since RAID-1 uses brain-dead mirroring, both drives usually contain the exact same information that the virtual RAID drive does. Because of this, if one of the drives goes bad (or even if the RAID controller goes bad), you can take one of the good drives from the RAID, plug it into a plain SCSI or IDE controller and all of your data is right there. You can even boot from it, if you were booting from your RAID earlier. So, it's brain-dead simple to go back to a non-RAID configuration with RAID-1. With RAID-5, you couldn't do that. Advantage: RAID-1.
    • Changing array size: If you fill up your RAID-1 mirrored drives and need more space, your only option is to go buy two more bigger drives, put them in the machine, set up a new mirror, and copy everything from one mirror to the other. This uses up 4 hard drive connections in your machine. (Although, with RAID-1, you *could* pull out one of the old drives, put in one of the new drives, copy the remaining old drive to the new one, then pull out the last remaining old drive, put in the second new one, and rebuild the mirrored array. But you still have to buy TWO bigger drives.) With RAID-5, there's no theoretical reason why you couldn't just *add* another drive of th
