
Ask Slashdot: DIY NAS For a Variety of Legacy Drives?

An anonymous reader writes "I have at least 10 assorted hard drives ranging from 100 GB to 3 TB, including external drives, IDE desktop drives, laptop drives, etc. What's the best way to set up a home NAS to utilize all this 'excess' space? And could it be set up with redundancy built in, so a single drive failure would cause no data loss? I don't need anything fancy. Visibility to networked Windows PCs is a must; the ability to stream to a Roku / iPad / Toshiba, etc., would be great but not necessary. What's the best way to accomplish this goal?"
This discussion has been archived. No new comments can be posted.
  • the 2 main choices: (Score:5, Informative)

    by gbjbaanb ( 229885 ) on Thursday May 03, 2012 @04:34PM (#39882417)

    FreeNAS [freenas.org] or OpenFiler [openfiler.com].

    I think FreeNAS (the BSD-based one) is lighter and easier; OpenFiler seems to be going in a more "fully featured" direction with less support for older hardware. But they're both good.

  • FreeNAS, for sure (Score:3, Informative)

    by fmachado ( 89905 ) on Thursday May 03, 2012 @04:44PM (#39882601)

    FreeNAS can use ZFS to aggregate multiple drives, independent of size, technology, etc., all with varying degrees of protection.

    It's by far the best solution to your case.

    Flavio

  • by gbjbaanb ( 229885 ) on Thursday May 03, 2012 @04:44PM (#39882611)

    Yes it does - it uses ZFS, which has some fancy replication features, especially zpools, which act like software RAID. You can have a 100GB vdev spanning both the 100GB and 3TB drives as a mirror. Of course, if you have just those 2 drives, nothing is ever going to get you full data redundancy (obviously!), but ZFS gives you a lot of flexibility to use what you do have.
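
    For the curious, a minimal sketch of that mismatched mirror on FreeBSD/FreeNAS (the device names ada0/ada1 and the pool name "tank" are invented; adjust for your system):

        # carve a 100GB slice out of the 3TB disk to pair with the 100GB disk
        gpart create -s gpt ada1
        gpart add -t freebsd-zfs -s 100G -l mirror0 ada1
        # mirror the whole 100GB disk against that slice
        zpool create tank mirror /dev/ada0 /dev/gpt/mirror0

    The remaining ~2.9TB of the big disk can still be carved into separate, non-redundant partitions.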

  • Re:Not worth it. (Score:4, Informative)

    by PaladinAlpha ( 645879 ) on Thursday May 03, 2012 @04:48PM (#39882671)

    This.

    With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding. To mirror a segment of data (or the moral equivalent with RAID-5 or RAID-6), you need segments of the same size, and those segments can be no larger than the smallest drive. That means larger drives have to store multiple segments, but the segments have to be arranged so that a failure of one of the large drives doesn't take the RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and the fact that your range runs from 0.1 TB to 3 TB implies this might be the case.

    Think about it -- it's probably going to take most-to-all of those smaller drives to "mirror" the larger drive to make it redundant (and mirroring is the best you can do with just two drives). But having one side of the mirror spread across 9 drives makes failure laughably likely, to the point where you're paying performance penalties for nothing.

    Your alternative is to use a JBOD setup and have one contiguous space across all of the disks. This has the same problem, except that when a drive goes you lose some random segment of data. That's acceptable for two or three drives of scratch storage, but you don't want to actually store things on it.

    Make no mistake -- those drives are going to die.

    Trust me on this; don't go down this road. Your actual options are to either pair up the disks as best you can, supplementing with strategic purchases, and make 2-3 independent RAIDs (maybe even RAIDing those, but it'll be painful), or just write the whole thing off, put disks in where you have obvious candidates in your hardware, and donate the rest.

  • by jtownatpunk.net ( 245670 ) on Thursday May 03, 2012 @04:59PM (#39882841)

    unRAID [lime-technology.com] does not support unlimited drives in any version. It comes in 3-drive (free), 6-drive, and 21-drive versions.

    I've been using it for a year or two and, while it's got some limitations, it's a good choice for this application -- mostly because the guy's using a random collection of old drives and is likely to have bad sectors across multiple drives at some point. There is no striping with unRAID, so the worst thing that can happen is he'll have to mount the drives individually and copy the data to a new array.

  • Re:Not worth it. (Score:5, Informative)

    by King_TJ ( 85913 ) on Thursday May 03, 2012 @05:02PM (#39882873) Journal

    Yep.. I agree. "Not worth it." sums it up nicely.

    Seriously, I completely understand the desire to re-use unused equipment you've got lying around. Seems like the smart thing to do, reclaiming as much of that unused storage space as possible and pooling it together so even the smaller drives add up to something worthwhile. But as a FreeNAS user myself, trust me on this one. It's NOT really a good idea.

    As others have already pointed out, most RAID configurations are limited by the size of the smallest drive in the array, so that would create major problems for you right there. But even assuming you skip RAID (or set up multiple RAID pools, each consisting only of very similarly sized drives -- and then join all of them into a virtual master storage "device"), you're still in a situation where the lower-capacity drives probably have slower data xfer rates than the newer, larger ones. That will drag down the overall performance of the server whenever something gets loaded from or saved to the slower/older disks.

    Even if all of THAT doesn't discourage you? I have to ask what your time is worth, and to a lesser extent, what your data itself is worth. Old drives as small as 100GB have got to be at least 4-6 years old by now. Unless you bought them new and just stored them in a box this whole time, chances are they've seen a lot of hours of operation already. They don't have a resale value of more than $20 or so these days, so you're simply not out much money if you throw them away or give them to a recycler. Meanwhile, you'll probably end up with a much more complex and time-consuming NAS configuration trying to make the best use of them in your drive pool. Even if you only make $10/hr. at your job, that means 2 hours spent messing around with this is worth the entire value of one of those old drives!

    I'm kind of a pack-rat for computer hardware (I have an on-site repair business besides a day job in I.T., and computers are a spare-time interest too). But even I started throwing away IDE and SATA drives under 250GB a while ago. I keep a *couple* of small ones around, but only for odd situations (like someone who wants to revive a really OLD PC with a BIOS that can't recognize larger drives properly). Otherwise, anyone who goes to the trouble of swapping out an old/dead drive may as well spend the relatively small extra amount of money for a current model of much larger capacity, with a full warranty still on it. Your data is usually worth it!

  • by McKing ( 1017 ) on Thursday May 03, 2012 @05:15PM (#39883037) Homepage

    ZFS does this much more simply with no ugly hacks. You can have mismatched drives when you build a mirror (the mirror is the size of the smallest drive in the mirror set), and then you stripe across the mirrors. As the older, smaller drives fail, replace them with newer, bigger drives and the pool magically gets bigger. 100GB + 500GB mirrored (100GB usable). 100GB dies, swap in a 750GB drive and now this pool is automatically resized to 500GB. Get 2 more drives? Mirror them and add them to the pool and your pool expands with no one the wiser.
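
    Roughly, in zpool terms (the device names da0..da6 and pool name "tank" are invented):

        # two mirrored pairs striped together; each pair contributes
        # the capacity of its smaller disk
        zpool create tank mirror da0 da1 mirror da2 da3
        # a small disk dies: swap in a bigger one and let ZFS resilver
        zpool replace tank da0 da4
        # let vdevs grow once both sides of a mirror are large enough
        zpool set autoexpand=on tank
        # got two more drives? add another mirror and the pool expands
        zpool add tank mirror da5 da6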

    Seriously, if you haven't played with ZFS before, download FreeNAS and give it a whirl. When I was a Solaris admin, ZFS was the most fun thing to work with by far.

  • by sirwired ( 27582 ) on Thursday May 03, 2012 @05:20PM (#39883099)

    I've been using unRAID for years and it's a great solution for a small home NAS box. If you ever change your mind about using it, you simply turn your parity drive into a regular Linux boot disk, and the remaining drives are just regular ReiserFS filesystems. Most RAID systems and/or software would require serious gymnastics to de-RAID, if it could be done at all.

    In addition, hardware-based striped RAID makes you dependent on the RAID controller; if it dies and you can't find a replacement compatible with the original's striping mechanism, your data just disappeared.

  • by ratboy666 ( 104074 ) <<moc.liamtoh> <ta> <legiew_derf>> on Thursday May 03, 2012 @05:29PM (#39883221) Journal

    FreeNAS can use ZFS as the filesystem. And this is what you want! Now, the actual configuration depends on the drives you have available.

    For drives with the same, or very similar, capacity, RAID-Z can be used. With 3 drives, use RAID-Z1; with more, use RAID-Z2 (the number is how many drives can fail without data loss). RAID-Z treats every member as if it were the size of the smallest drive, which may waste space. If all drives are (eventually) increased in size, more storage becomes available.

    For drives with different capacities, ZFS offers the ability to keep a redundant number of copies of the data (e.g. specify two copies, or three). ZFS will then duplicate the data onto multiple drives.
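
    As a rough sketch of both approaches (pool, dataset, and device names invented; check your FreeNAS version for exact syntax):

        # drives of similar capacity: RAID-Z1 (one drive may fail)
        zpool create tank raidz1 da0 da1 da2
        # drives of mixed capacity: pool them plain, then ask ZFS to keep
        # two copies of every block in a given dataset (note: copies are
        # spread across devices where possible, which is weaker than a
        # true mirror against whole-disk failure)
        zpool create tank2 da3 da4 da5
        zfs create -o copies=2 tank2/important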

    As well, ZFS continually monitors the drives, re-replicates data from any failed areas, and ensures that no bit errors accumulate in the file system. RAID-Z and multiple copies can be combined.

    The main point of ZFS is to keep your data clean and safe from corruption.

    As well, "fsck" is not needed -- it happens when you "scrub", which slows down the array, but doesn't leave it unusable.

    If you have sufficient memory, ZFS can also "dedup" the blocks in your filesystem, merging identical copies of data (while still copying/RAIDing to maintain data integrity). This feature takes a LOT of RAM (about 2GB per TB of disk, so 40GB for 20TB, and possibly more). Also, some ZFS versions offer encryption (not sure about the one in FreeNAS).

    ZFS drives can be physically moved to another system and used (e.g. from FreeNAS on x86 to SPARC); endian and format issues are handled correctly. Not a feature most people will ever use, but it's nice. ZFS is available on Solaris, BSD, Linux, and Mac (well, it used to be).

    Also, ZFS supports snapshots, which can be browsed.

    Finally, ZFS has an eight-year history in production.

    In all, what's not to love?

  • Just throw them away (Score:5, Informative)

    by zmooc ( 33175 ) <{ten.coomz} {ta} {coomz}> on Thursday May 03, 2012 @05:49PM (#39883473) Homepage

    Powering 10 old hard drives for any length of time is going to cost much more than just buying a new one. A modern drive uses about 5W on average; these oldies probably use much more. 10 drives drawing 10 watts each at $0.10 per kWh will set you back about $87 per year. You do the math.
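
    The arithmetic, for the lazy (same numbers as above):

        # 10 drives x 10 W each, running 24/7, at $0.10 per kWh
        echo "10 * 10 * 24 * 365 / 1000 * 0.10" | bc -l
        # -> 87.60 dollars per year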

  • Re:Not worth it. (Score:5, Informative)

    by 19thNervousBreakdown ( 768619 ) <davec-slashdot@@@lepertheory...net> on Thursday May 03, 2012 @05:51PM (#39883495) Homepage

    It depends--is there a total of 6 TB of drives that doesn't include the 3 TB drive?

    Take each disk and make an LVM physical volume from it. From those physical volumes, build logical volumes. You don't have to make all of them the size of your smallest drive; you just have to be careful. Say you have the following:

    1: 3 TB
    2: 2 TB
    3: 1 TB
    4: 1 TB
    5: 750 GB
    6: 750 GB
    7: 150 GB
    8: 150 GB
    9: 100 GB
    10: 100 GB

    On your 2 TB drive, make partitions matching the drives under 1 TB.

    On your 3 TB drive, make the following partitions:

    1 TB: RAID-5 with #3 and #4
    750 GB: RAID-5 with #2 and #5
    750 GB: RAID-5 with #2 and #6
    150 GB: RAID-5 with #2 and #7
    150 GB: RAID-5 with #2 and #8
    100 GB: RAID-5 with #2 and #9
    100 GB: RAID-5 with #2 and #10

    You'll end up with the following volumes:

    1: 2 TB
    2: 1.5 TB
    3: 1.5 TB
    4: 300 GB
    5: 300 GB
    6: 200 GB
    7: 200 GB

    Then take those RAID volumes and stack them with LVM [traditional meaning of "stack"] into one contiguous disk; just use an easy FS like ext3 (I've run into problems with stack size [programming meaning of "stack"] using XFS on LVM). You end up with 6 TB of total space and, just like normal RAID-5, you don't lose anything unless two disks from one of those groups die. That is, if a disk in 200 GB group #6 dies and a disk in 1.5 TB group #3 dies, you still haven't lost anything. Even if your 3 TB drive dies -- clearly the worst case, since it holds a piece of every array (the 2 TB is nearly as bad) -- you'd still need to lose a second disk to lose any data.

    For failure rates, that makes it roughly equivalent to a 10-drive RAID-5 array, which isn't quite advisable but isn't murderously bad either; this isn't work, and the primary motivation is probably maximizing space with a decent reliability increase, not making it next to certain the thing never goes down. I'm sure it feels really weird, but I don't think you're actually increasing your odds of failure at all over the 10-disks-all-the-same-size RAID we're used to, other than not trusting older drives -- and I'm not so sure those are much more likely to fail than new ones. After all, they've lasted this long, and I've had brand-new drives die within weeks. In fact, since some 2-drive failures don't take anything down, you're arguably doing slightly better than the 10-same-size-disk case. A sketch of the commands is below.
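
    One group plus the final concatenation (device names are invented; say sda1 is the 1 TB partition on the 3 TB disk and sdc/sdd are drives #3 and #4):

        # one of the seven RAID-5 groups
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdc /dev/sdd
        # ...repeat for md1 through md6 with the other partition groups...
        # then concatenate all the arrays into one big ext3 volume
        pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6
        vgcreate nas /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6
        lvcreate -l 100%FREE -n storage nas
        mkfs.ext3 /dev/nas/storage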

    Now, your disks probably won't divide up as nicely, and you might end up having to leave some space on the floor, subdivide in weirder ways, or both; but with very careful partitioning (never put two stripes of the same array on the same disk), you can do this. Set all the arrays to verify weekly (mdadm can do this) and e-mail you on a failure. Don't set up an audible alarm; you're not going to lose a second disk at 3 AM (but you will wake up to fix it, and be worthless at work the next day for probably nothing), and even if you did lose another disk, you're not using RAID as a replacement for backups, right? Right?
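
    Setting that up is only a couple of lines (the address and array name are placeholders):

        # /etc/mdadm/mdadm.conf -- where failure notifications go
        MAILADDR you@example.com
        # weekly verify, e.g. from cron (Debian ships a similar checkarray script)
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat     # watch progress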

    ZFS would be really nice if it did all this complex stuff for you, but does it give you enough control, and is it smart enough, to ensure you get reliability as good or better? It would be ridiculously easy to make a bad mistake in the layout above. Because overall, I agree with the title: it's just not worth all this effort so you can use that crappy 100 GB disk. Once it goes down, you have to replace it anyway.

  • Use BtrFS or Drobo (Score:5, Informative)

    by digitaltraveller ( 167469 ) on Thursday May 03, 2012 @06:10PM (#39883741) Homepage

    Use Drobo if you are time-poor and money-rich; use btrfs if you are time-rich and money-poor.

    Btrfs's capabilities are nothing short of amazing. Here is a vid about it:
    http://www.youtube.com/watch?v=9bQc_z-Cb7E [youtube.com]
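
    Pooling mismatched disks with btrfs looks something like this (device names invented, and btrfs RAID is still maturing, so test before trusting it):

        # data and metadata both mirrored across whatever disks you give it
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mount /dev/sdb /mnt/nas
        # add a disk later and rebalance existing data onto it
        btrfs device add /dev/sdf /mnt/nas
        btrfs balance start /mnt/nas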

  • Re:Not worth it. (Score:4, Informative)

    by cayenne8 ( 626475 ) on Thursday May 03, 2012 @06:29PM (#39883957) Homepage Journal
    But I'd have a use for it...over the years, I've gathered a bunch of disks when on sale...many barely used...some still in boxes....

    I'd like a way to throw them all together and use them for backup storage.

    From what I've gathered...use FreeNAS with ZFS...and it will let you set this up, and allow for up to two drives to fail at the same time....

    I think in my case...this would be reasonable. Heck, if I set up two FreeNAS boxes...had one mirror the other one...that would indeed be a decent backup system, no?

    I have a lot of friends like me....often buying stuff on sale "to use someday on something"...but it just sits and gathers dust....I think this would be a good reason to use those drives, and to keep buying new ones here and there when they go on sale, to swap in as drives on the FreeNAS start to fail....

    Heck, thinking of keeping one FN server here...and maybe putting a 2nd at a friend's or my parents' house out of state...to mirror it...
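
    If you do build two boxes, ZFS replication is basically a snapshot plus a pipe (pool/dataset names and hostname invented):

        # on the primary: snapshot, then ship it to the second box
        zfs snapshot tank/backup@2012-05-03
        zfs send tank/backup@2012-05-03 | ssh otherbox zfs recv -F tank/backup
        # later runs send only what changed since the last snapshot
        zfs send -i tank/backup@2012-05-03 tank/backup@2012-05-10 | ssh otherbox zfs recv tank/backup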
