
Ask Slashdot: DIY NAS For a Variety of Legacy Drives? 260

An anonymous reader writes "I have at least 10 assorted hard drives ranging from 100 GB to 3 TB, including external drives, IDE desktop drives, laptop drives, etc. What's the best way to set up a home NAS to utilize all this 'excess' space? And could it be set up with redundancy built in, so a single drive failure would cause no data loss? I don't need anything fancy. Visibility to networked Windows PCs is great; the ability to streak to Roku / iPad / Toshiba etc. would be great but not necessary. What's the best way to accomplish this goal?"
  • Not worth it. (Score:5, Insightful)

    by Anonymous Coward on Thursday May 03, 2012 @04:34PM (#39882415)
    Those older drives are probably failures just waiting to happen. With the cost of hard drive space continually dropping, just use new drives. It's not worth screwing around with old ones for anything other than salvaging old data off them, even though the urge to do so is strong in the more frugal among us.
    • Re:Not worth it. (Score:5, Insightful)

      by ZeroSumHappiness ( 1710320 ) on Thursday May 03, 2012 @04:42PM (#39882569)

      I agree to an extent. Take anything SAS or SATA that's 1TB or greater and re-think the project with just those. Sell or recycle the rest of the drives. Depending on your needs the remainder should be RAID-1, 5 or 6'd (using software RAID if speed isn't an issue) and then put on an OpenFiler or FreeNAS box. Anything non-replaceable should then be backed up to a respectable backup provider in addition to your home-grown solution.

      We need more information though -- what are your actual drive sizes and what do you want to put on this NAS?

      • Re:Not worth it. (Score:5, Informative)

        by King_TJ ( 85913 ) on Thursday May 03, 2012 @05:02PM (#39882873) Journal

        Yep.. I agree. "Not worth it." sums it up nicely.

        Seriously, I completely understand the desire to re-use unused equipment you've got lying around. Seems like the smart thing to do, reclaiming as much of that unused storage space as possible and pooling it together so even the smaller drives add up to something worthwhile. But as a FreeNAS user myself, trust me on this one. It's NOT really a good idea.

        As others already pointed out, most RAID configurations are limited by the size of the smallest drive in the array, so that would create major problems for you right there. But even assuming you skip RAID (or set up multiple RAID pools, each consisting only of very similar sized drives -- and then join all of them into a virtual master storage "device"), you're still in a situation where the lower capacity drives probably have slower data xfer rates than the newer, larger ones. That will drag the overall performance of the server down whenever something gets loaded or saved to the slower/older disks.

        Even if all of THAT doesn't discourage you? I have to ask what your time is worth, and to a lesser extent, what your data itself is worth. Old drives as small as 100GB capacity have got to be at least 4-6 years old by now. Unless you bought them new and just stored them in a box this whole time, chances are they've seen a lot of hours of operation already. They don't have a resale value of more than $20 or so these days, so you're simply not out much money to throw them away or give them to a recycler. Meanwhile, you'll probably get into a much more complex and time-consuming NAS configuration trying to best utilize them in your drive pool. Even if you only make $10/hr. at your job, that means 2 hours of time spent messing around with this is worth the entire value of one of those old drives!

        I'm kind of a pack-rat for computer hardware (since I have an on-site repair business besides a day job in I.T. and computers as a spare time interest too). But even I started throwing away IDE or SATA drives under 250GB a while ago. I keep a *couple* small ones around, but only for odd situations (like someone who wants to revive a really OLD PC with a BIOS that can't recognize larger drives properly). Otherwise, everyone who wants to go to the trouble of swapping an old/dead drive out for a replacement may as well spend the relatively small extra amount of money for a current model of much larger capacity, AND a full warranty still on it. Your data is usually worth it!

        • by tlhIngan ( 30335 )

          Old drives as small as 100GB capacity have got to be at least 4-6 years old by now. Unless you bought them new and just stored them in a box this whole time, chances are they've seen a lot of hours of operation already. They don't have a resale value of more than $20 or so these days, so you're simply not out much money to throw them away or give them to a recycler.

          Exactly. Though, 100GB drives are plenty of storage for people. If you want to earn some brownie points at the next family gathering, you can st

        • by geekoid ( 135745 )

          "Even if you only make $10/hr. at your job, that means 2 hours of time spent messing around with this is worth the entire value of one of those old drives!"

          The idea that the wage you make at work applies to all your time is a fallacy. The ONLY exception is someone who can just say "Hey, I'm going to work a couple extra hours on the fly".

          And do you seriously believe the time learning and building technical stuff has the same value as some min. wage job?

          Great, you throw out perfectly good drives. Bully for you.

          • Learning is one thing, but the question was "what is the best way?" The answer is: not at all. Those perfectly good drives are far from perfect and probably nearly no good. Regardless of how much you value your time, don't bother wasting it putting together a bad system that will cause you many more hours of headaches in the near future when it does fail.
            • I think the assumption is that all the drives are old and have been run through the wringer.

              If they're like me, and I mentioned this on a previous post....maybe they have a ton of drives, but most of them are little used or even unused in a box.

              I have a habit of buying stuff, like drives when I see them on sale...and just set them back for use some day...and kinda forget about them.

              I've got computers I was going to use a drive in...and that box had RAM or MB problems...just sitti

            • by segin ( 883667 )
              I have a number of old and "low" capacity drives that I wouldn't mind utilizing. I know they have well over two or more years working life in them on average, as their actual usage until now was very sparse. If "attic time" didn't take from a drive's operational lifespan, I'd say they've got 6-7 years left (given that MTBF is 8 years, on average) - in this case, it's not too bad of an idea to try to utilize them. Also note that MTBF is Mean Time Before Failure, not Maximum Time Before Failure. Drives are kn
        • by arth1 ( 260657 )

          Weeell, true, but...
          A bunch of small drives plus a fast one means you can have a reasonably fast raid of the smaller drives, and do nightly backups of it to the large drive (or nightly reassemble and restore if more drives have failed than you have resilience for).

          I had a setup like that once, with six 40 GB drives and one 80 GB drive (this was a while ago).
          Two RAID1s striped to make a RAID 10, plus two standby drives and a backup drive. Worked like a charm, and survived three drive replacements over the y

      • Depending on your needs the remainder should be RAID-1, 5 or 6'd

        Wouldn't btrfs supposedly resolve this? It's supposed to put your redundant data on multiple devices and I would assume if the device had no more space it could use another device on the array as long as it wasn't the same as the redundant data. From everything I'm reading on the FS so far it looks like it's perfectly usable now if you can schedule a regular data scrub (like a midnight cron job) to check integrity, which wouldn't be bad for a personal server (enterprise is another story.)

        • And what do you think RAID 5 is?

          • RAID5 requires that all the disks be the same size and uses one drive as a parity bit drive.

            btrfs works on the block level and should be able to write redundant block data on multiple drives no matter what size they are.

            • RAID 5 stripes the parity across all the drives. RAID 3, I believe, has a dedicated parity drive. Otherwise correct though, I think.

            • by v1 ( 525388 )

              BASIC raid5 requires identical capacity drives. Intelligent raid5 does not. Imagine a four drive array, containing three 40gb drives and one 30gb drive. So you can treat the first 30gb of the 40's, and the 30gb as one segment of the storage, (3 data and a parity, total of 90gb of protected storage) and the remaining 10gb of space on the three 40's run as 2 data with a parity, for an additional 20gb of protected storage. (if you tried to just ignore the oversize on the 40's you'd lose out on almost 20% o
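v1's layering arithmetic can be sketched in a few lines of Python. This is only an illustration of the capacity math for "intelligent" RAID-5 over unequal drives, not any real RAID tool; the function name is made up for this example:

```python
def mixed_raid5_capacity(sizes):
    """Protected capacity (GB) when unequal drives are carved into
    layered RAID-5 segments, as described in the comment above.

    Each 'layer' spans every drive that still has space at that depth;
    a layer of n >= 2 drives yields (n - 1) * chunk of protected space.
    Space in a layer reached by only one drive stays unprotected.
    """
    total = 0
    prev = 0
    for s in sorted(set(sizes)):
        n = sum(1 for d in sizes if d >= s)   # drives reaching this depth
        chunk = s - prev                      # thickness of this layer
        if n >= 2:
            total += (n - 1) * chunk
        prev = s
    return total

# v1's example: three 40gb drives and one 30gb drive
# -> 90 GB (4-drive layer) + 20 GB (3-drive layer) = 110 GB protected
print(mixed_raid5_capacity([40, 40, 40, 30]))  # 110
```

With three identical 100 GB drives it degenerates to plain RAID-5 (200 GB usable), which is a decent sanity check on the layering logic.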

          • And what do you think RAID 5 is?


        • Re: (Score:3, Interesting)

          by bodangly ( 2526754 )
          I've had nothing but bad luck with btrfs, including irrecoverable data. (well, data not as valuable as time it would take to restore) It is my opinion that the push to make btrfs the new standard is happening way too quickly and for the wrong reasons. It has been my experience that it simply isn't as reliable as the more established file systems. I would highly recommend XFS over btrfs.
      • Agreed with a caveat. My NAS has 500gb and up drives... anything else gets consolidated. I tried using those old drives, but I soon realized it just wasn't worth it. The 500gb drive is going byebye soon too. After a certain point, the empty slot and power draw becomes valuable real estate that could be populated by a larger drive. Slow speed becomes a factor for obsolescence in some cases as well.

        What smaller drives, even the 80-120gb types, are good for, is boot drives for crappy refurbished compu

        • Regarding empty slot usage... I imagine the plan is to replace failed drives with larger drives over time and have the system adapt. I've given old drives to people in kind only to have them come back a few months later because there was a problem with their machine (mostly bad clusters) and since they didn't build the machine they didn't know how to do regular disk checks or have the machine running during the schedule I set up.

      • by AmiMoJo ( 196126 )

        The OP probably isn't too worried about security because most of the data will have been downloaded. There are a few jobs where people produce terabytes of data, but those people will spend money backing it up properly.

        Forget RAID and just use JBOD. If one drive dies get downloading again.

    • Re:Not worth it. (Score:4, Informative)

      by PaladinAlpha ( 645879 ) on Thursday May 03, 2012 @04:48PM (#39882671)


      With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding. To mirror a segment of data (or the moral equivalent with RAID-5 or RAID-6) you need segments of the same size; those segments are going to have to be no larger than the smallest drive. That means larger drives have to store multiple segments, but that the segments have to be arranged in a way such that a drive failure on one of the large drives doesn't take the RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and the fact that your range is from .1TB to 3TB implies this might be the case.

      Think about it -- it's probably going to take most-to-all of those smaller drives to "mirror" the larger drive to make it redundant (and mirroring is the best you can do with just two drives). But having one side of the mirror spread across 9 drives makes failure laughably likely, to the point where you're paying performance penalties for nothing.

      Your alternative is to use a JBOD setup and have just contiguous space across all of the disks. This is the same problem, except when a drive goes you lose some random segment of data. That's acceptable for two or three drives in scratch storage, but you don't want to actually store things on that.

      Make no mistake -- those drives are going to die.

      Trust me on this; don't go down this road. Your actual options are to either pair up the disks as best you can, supplementing with strategic purchases, and make 2-3 independent RAIDs (and maybe even RAIDing those, but it'll be painful), or just write the whole thing off, put disks in if you have obvious candidates in your hardware, and donate the rest.

      • by isorox ( 205688 )


        With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding

        Yes. But wouldn't it be awesome if there was some magical filing system daemon that ran on multiple machines, automatically meshed together, and presented a single contiguous file system, which kept at least n copies of data on m machines, auto-replicated (reducing available free space) if a disk died or a machine was offline for > p hours, and offered some simple admin interfaces. Throw in an LTO interface (perhaps LTFS based, perhaps not) for good measure too.

        This is slashdot, we all know how to use ra

        • You mean like GlusterFS?

          It supports access via CIFS, NFS, and the FUSE libs. Of course, his smallest disk is going to limit assignment of one filespace brick for replication on another drive to a file of the same size, but he could conceivably jigger around his assignments to use all of the disk space he has.

          Well, I hope he has a *really* fast switch if he does that; and there is that issue with power for all of those hosts, if he wants that kind of redundancy...

      • Re:Not worth it. (Score:5, Informative)

        It depends--is there a total of 6 TB of drives that doesn't include the 3 TB drive?

        Take each disk and make an LVM physical volume from it. From those physical volumes, make logical volumes. You don't have to make all of them the size of your smallest drive; you just have to be careful. Say you have the following:

        1: 3 TB
        2: 2 TB
        3: 1 TB
        4: 1 TB
        5: 750 GB
        6: 750 GB
        7: 150 GB
        8: 150 GB
        9: 100 GB
        10: 100 GB

        On your 2 TB drive, make partitions matching the drives under 1 TB.

        On your 3 TB drive, make the following partitions:

        1 TB: RAID-5 with #3 and #4
        750 GB: RAID-5 with #2 and #5
        750 GB: RAID-5 with #2 and #6
        150 GB: RAID-5 with #2 and #7
        150 GB: RAID-5 with #2 and #8
        100 GB: RAID-5 with #2 and #9
        100 GB: RAID-5 with #2 and #10

        You'll end up with the following volumes:

        1: 2 TB
        2: 1.5 TB
        3: 1.5 TB
        4: 300 GB
        5: 300 GB
        6: 200 GB
        7: 200 GB

        Then take those RAIDed LVM volumes and join them with LVM (fairly certain you can stack them [traditional meaning of "stack"] as one contiguous disk; just use an easy FS like ext3 -- I've run into problems with stack size [programming meaning of "stack"] using XFS on LVM). You end up with 6 TB of total space, and, just like normal RAID-5, you don't lose anything unless two disks from one of those groups die. That is, if a disk in 200 GB #6 dies, and a disk in the 1.5 TB #3 dies, you still haven't lost anything. Even if your 3 TB drive dies (clearly the worst case, since it has data for every array), or the 2 TB (nearly as bad), you'd still need to lose a second disk to lose any data. So for failure rates it should be about the same as a 10-drive RAID-5 array, which isn't quite advisable although it's not murderously bad; but this isn't work, and the primary motivation is probably maximizing space with a decent reliability increase, not making next to certain it never goes down. I'm sure it feels really weird, but I don't think you're actually increasing your odds of failure at all over the 10-disk-all-same-size RAID we're used to, other than not trusting older drives -- and I'm not so sure those are much more likely to fail than new ones. After all, they've lasted this long, and I've had brand new drives die within weeks. In fact, there are some 2-drive failures that don't take anything down, so I think overall you're doing slightly better than the 10-same-size-disk case.

        Now, your disks probably won't divide up as nicely, and you might end up having to either leave some space on the floor or subdivide in weirder ways or both, but with very careful partitioning (never put two stripes of the same array on the same disk), you can do this. Set all the arrays to verify weekly (mdadm can do this) and e-mail you on a failure. Don't set up an audible alarm, you're not going to lose a second disk at 3 AM (but you will wake up to fix it, and be worthless at work the next day for probably nothing) and even if you did lose another disk, you're not using RAID as a replacement for backups, right? Right?

        ZFS would be really nice if it did all this complex stuff for you, but do you have enough control/is it smart enough to allow you to ensure that you get as good or better reliability? It'd be ridiculously easy to make a bad mistake in layout with the above scenario. Because overall, I agree with the title: It's just not worth all this effort so you can use that crappy 100 GB disk. Once it goes down, now you have to replace it.
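As a sanity check on the arithmetic in the layout above, here's a throwaway Python snippet (sizes in GB; the seven groups are the ones listed in the comment, and the variable names are just for this sketch):

```python
# Each tuple: (partition size in GB, number of members in that RAID-5 set).
# These mirror the seven arrays laid out in the comment above.
groups = [
    (1000, 3),  # 3 TB drive + #3 + #4  -> 2 TB usable
    (750, 3),   # 3 TB + 2 TB + #5      -> 1.5 TB
    (750, 3),   # 3 TB + 2 TB + #6      -> 1.5 TB
    (150, 3),   # 3 TB + 2 TB + #7      -> 300 GB
    (150, 3),   # 3 TB + 2 TB + #8      -> 300 GB
    (100, 3),   # 3 TB + 2 TB + #9      -> 200 GB
    (100, 3),   # 3 TB + 2 TB + #10     -> 200 GB
]

# RAID-5 usable space per array = (members - 1) * member size
usable = sum(size * (n - 1) for size, n in groups)
print(usable)  # 6000 GB, i.e. the 6 TB total claimed above
```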

      • ... larger drives have to store multiple segments, but that the segments have to be arranged in a way such that a drive failure on one of the large drives doesn't take the RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and the fact that your range is from .1TB to 3TB implies this might be the case.

        There is a simple formula to determine the available space in these circumstances: If the largest drive is larger than all the others combined, the available space (after mirroring) is the sum of the smaller drives. In this case the largest drive mirrors all the others. Otherwise, the available space is half of the total of all the drives, and no space is wasted. A filesystem like BTRFS (or, presumably, ZFS) can work out the details automatically if you set it up in RAID-1 mode.
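That formula is easy to express directly. A quick sketch (sizes in GB, function name invented for the example):

```python
def mirrored_capacity(sizes):
    """Best-case usable space when mirroring across unequal drives.

    If the largest drive is bigger than all the others combined, it
    simply mirrors them and the rest of it is wasted; otherwise the
    drives can be split into two equal halves and nothing is wasted.
    """
    largest = max(sizes)
    rest = sum(sizes) - largest
    if largest >= rest:
        return rest
    return sum(sizes) // 2

print(mirrored_capacity([3000, 100, 100]))        # 200: the 3 TB swallows the rest
print(mirrored_capacity([1000, 1000, 750, 750]))  # 1750: half of 3500, nothing wasted
```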

        But having one side of the mirror spread across 9 drives makes failure laughably likely, to the point where you're paying performance penalties for nothing.

        Agreed. I would probably forge

      • by AmiMoJo ( 196126 )

        There are plenty of non-RAID and non-critical ways you can make use of smaller drives. For example most online storage accounts require sync with a local drive (Dropbox, Google Drive, Skydrive) to work and that can add up quickly. Doesn't matter too much if the disk dies because you have an online backup. You can use something like SecretSync to encrypt the lot (which also doubles the storage requirements, another reason why a spare HDD is useful).

        You could give some drives to friends and set up your own mu

    • Agreed.

      Just take the sub 1/2 TB drives and mail them to me. ;-) I need some small USB drives for work to hold my music.

    • by hoggoth ( 414195 )

      "This little drive is not worth the effort. Come, let me get you something..."
      "The frugal is strong in this one."

    • Re:Not worth it. (Score:4, Interesting)

      by hairyfeet ( 841228 ) <bassbeast1968@gm[ ].com ['ail' in gap]> on Thursday May 03, 2012 @05:43PM (#39883379) Journal

      I probably shouldn't answer an AC but since the AC has been modded "insightful" I will answer with why I think he/she is wrong: If the data has already been backed up (and if you don't back up your data, you're a dumbass who should be posting to Yahoo Answers and not here), then frankly there is no "risk" to using drives you already have, as it's only a question of how long it would take to restore.

      Now of course he can mirror the data on half the drives to add some redundancy but your answer of using new drives? Until WD is back up to full speed frankly that isn't possible without a LOT of extra cost. Sure, you can find cheap Seagate drives but you know what? They're shit. Not badmouthing Seagate, but read the reviews and you'll see that Seagate drives over 640Gb are having a crazy high failure rate. Some say they are using ARM controllers made by the Maxtor guys and those are shit, some say it's a firmware issue; I personally don't know, but what I DO know is that I've had to shitcan a crazy number of Seagate drives, especially the 1Tb and 1.5Tb drives which are the only really affordable ones ATM.

      Now for how to do this, I'm gonna stay out of the software side since I don't want to jump into a Windows VS Linux flamewar, I'm sure the guy has an OS he is comfortable with and will probably go that way anyway so I'll deal with the hardware. This is how we cooked up something similar at my previous shop with a shitload of SCSI drives the boss got at an auction...Buy a couple of matching cheap full size computer cases, geeks has several for pretty cheap. We then tore the cases apart leaving a couple of skeletons, how far you take them apart is up to you, one can just as easily take the side of one and the opposite side off the other and cut the bracing, the reason we did it this way will become obvious in a minute. Pick up a cheap old server board, you want one that will fit the case and has as many PCI slots as possible, you will of course fill the PCI slots with SATA adapters just like we did with SCSI. Then in our case for the final touch we wired up a $10 Walmart box fan to the side to cool all the drives we piled into that sucker. In our case we used a copy of Win2K Server since we had drivers for the SCSI cards in Win2K Server, but again software is your choice.

      And there you have it! While drives were topping out at 400Gb and cost an arm and a leg we had nearly 2Tb of SCSI goodness containing every single driver for every single part for every single version of Windows from 3.1-WinXP. I don't see why someone couldn't do the same with SATA; sure, PCI won't give you as much bandwidth as PCIe, but if you have a decent sized amount of RAM (say 2Gb) to buffer I don't see why it wouldn't work. In the end this is about using something you already have, which won't cost anything, vs. spending hundreds of dollars to acquire a more compact solution. Personally I'm all for using what you already have, that is why I have a drawer filled with drives from 80Gb to 400Gb that I then throw into computers that are short on space, certainly cheaper than buying a new WD drive and as I said I don't trust Seagate ATM. Personally I'm just glad I loaded up on Samsung EcoDrives right before the flood when Tiger had them cheap, so I can afford to wait until the prices drop before I have to look at drives. But if he already has them, why not use them?

    • by deAtog ( 987710 )
      Generally, yes it's not worth the time or the effort. However, if you're serious about taking advantage of your old drives, I'd suggest using RAID on top of LVM. LVM will allow you to group drives of different sizes together to form a logical volume. You can then use software RAID to ensure data integrity. Over time, as drives fail, you can replace failed drives with new ones and rebuild the failed logical volume. A simple Samba server should suffice for your file sharing needs.
    • Create a Truecrypt file filling each old drive, after a full format. Use them for full (not incremental) backups every 6 months, starting with the smallest sizes (to use them up). Then put them in your Mum's garage, suitably labelled.

      Last tip for backups. Do "dir /on /s > backup_2012_04_23" for each drive after filling it, and keep the list on your main machine, so you can see if you've got a copy of something (and where) before fishing around.
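The same index-before-shelving trick works anywhere, not just with `dir`. A rough cross-platform sketch in Python (the function name and the example paths are just placeholders):

```python
import os

def index_drive(root, out_path):
    """Write a sorted listing of every file under `root` (with sizes),
    so you can grep the index later instead of plugging the drive
    back in to see what's on it."""
    entries = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(full)
            except OSError:
                size = -1  # unreadable file; note it and move on
            entries.append(f"{full}\t{size}")
    with open(out_path, "w", encoding="utf-8") as out:
        out.write("\n".join(sorted(entries)))

# e.g. index_drive("E:\\", "backup_2012_04_23.txt")
```

Keep the output files on your main machine, same as the `dir` output, and you can answer "do I already have a copy of this, and on which drive?" without fishing around.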

  • the 2 main choices: (Score:5, Informative)

    by gbjbaanb ( 229885 ) on Thursday May 03, 2012 @04:34PM (#39882417)

    FreeNAS [] or OpenFiler [].

    I think FreeNAS (the BSD based one) is lighter and easier, as OpenFiler seems to be going in a more "fully featured" direction with less support for older hardware, but they're both good.

    • Can these programs mirror the contents of one USB drive to the other? It's a pain trying to copy-and-paste files from drive 1 to backup drive 2. THX. :-)

    • I don't see the ability to dynamically expand FreeNAS. (Just add a drive and expand the protected space)

      I cautiously recommend unRaid. I have not had an ideal experience with it, but most of it was due to my lack of diligence in ordering compatible hardware and fully reading all 10,000 forum threads before logging in. Mainly the hardware thing.

  • Not streak to iPad. Stream. Streaking to iPad would require cleaning supplies at the point of impact.

  • FreeNAS or Unraid. (Score:3, Insightful)

    by detritus. ( 46421 ) on Thursday May 03, 2012 @04:36PM (#39882451)

    Look at FreeNAS or Unraid. Unraid has a 3-drive limit IIRC for the free version, but supports an unlimited number of drives in the non-free version.

    • by ( 245670 ) on Thursday May 03, 2012 @04:59PM (#39882841)

      unRAID [] does not support unlimited drives in any version. It comes in 3 (free), 6, and 21 drive versions.

      I've been using it for a year or two and, while it's got some limitations, it's a good choice for this application. Mostly because the guy's using a random collection of old drives and is likely to have bad sectors across multiple drives at some point. There is no striping with unRAID so the worst thing that can happen is he'll have to mount the drives individually and copy the data to a new array.

    • by sirwired ( 27582 ) on Thursday May 03, 2012 @05:20PM (#39883099)

      I've been using unRAID for years and it's a great solution for a small home NAS box. If you ever change your mind about using it, you simply turn your parity drive into a regular Linux boot disk, and the remaining drives are just regular Reiserfs2 filesystems. Most RAID systems and/or software would require much gymnastics to de-RAID them, if it could be done at all.

      In addition, hardware-based striped RAID makes you dependent on the RAID controller; if it dies and you can't find a replacement compatible with the original's striping mechanism, your data just disappeared.

  • by aaron44126 ( 2631375 ) on Thursday May 03, 2012 @04:37PM (#39882477) Homepage
    If you use Windows, the forthcoming Windows 8 "Storage Spaces" feature appears to be perfect for situations like this. []
  • FreeBSD and ZFS (Score:3, Interesting)

    by Anonymous Coward on Thursday May 03, 2012 @04:37PM (#39882479)

    FreeBSD has fast ZFS support, which is a wonderful file system for fighting data loss.

    • Agreed. Do this for fun, not for anything practical - I mean, there are USB thumbdrives larger than your 100GB drive!

      Pair the drives up to match them as closely as possible so that you have 5 redundant mirrors. More realistically, you'll only have space or SATA hookups for 4 pairs.

      Anyway, use FreeBSD and zfs to pair them up and then combine the 4 or 5 pairs into a single pool. As the drives die or as you acquire bigger drives, you can hook up an additional drive and use the "zpool replace" command to swa

  • If you just use LVM and group all your disks together into one PV, that would make the array appear as "one big drive" to the system.

    Redundancy (RAID) would not work so well because your array would be limited by the smallest disk in the array. Sure, raid the 300GB to the 1TB, but you end up with a RAID-1 array of 300G.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      That sounds awesome. Should have a MTBF of about 20 minutes

  • by digitalsushi ( 137809 ) <> on Thursday May 03, 2012 @04:42PM (#39882561) Journal

    Ah ha! Who else amongst you has a huge surplus of huge hard drives going unused, now that Netflix streaming has displaced 60% of all the crud you had spinning idle in a closet for the 3 years before you signed up?

    My storage requirements went from about 3 terabytes to about 30 gigabytes over the past 2 years. I believe I am the archetype and that I am doing the same thing as the average geek. I suspect there are piles of huge disks sitting offline because of this streaming displacement.

    It cost me about 18 dollars a month to leave my x86 file server online, idle (Kill A Watt meter, NH rates); Netflix is cheaper than that.

    Come on, who else has a comment related to this.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      The OP is looking to build a giant porn vault. All of the other words in his post are just cover.

      Notice how he talks about "visibility" and "streaking". He's got the porn on his mind.

      Netflix is very light on the porn, so it is N/A here

      Come on guys, we need a modern porn vault solution here, iPorn, Porndroid, Porno on Rails, something big

    • by Xtifr ( 1323 )

      Not me. Before Netflix streaming, I got most of my movies via...Netflix! In fact, I kind of curse Netflix streaming, because I find I'm wasting a lot more time watching movies and shows than I used to, and less time reading and working on my hobbies.

    • >>>Who else amongst you has a huge surplus of huge hard drives going unused


      >>>My storage requirements went to about 30 gigabytes

      WOW. I still download a ton of stuff via Utorrent, and I need the space since I acquire movies/shows faster than I watch them. I also need the space to "seed" back the stuff I've taken. My 1 TB drive is quite full.

      I don't subscribe to Comcast or Netflix or anything else. It's just entertainment... not really worth paying for it, when I can acquire it for fre

    • by Hatta ( 162192 )

      There's no netflix for classic video games. Until I can hop on the cloud and download an ISO of whatever PC Engine game I happen to want to play today, I'm going to have to keep the TOSEC on my hard disk.

    • by pavon ( 30274 )

      Not me. My terabytes of data were being used for PVR recordings, and Netflix doesn't have enough current content to replace that function. Hulu was getting close, but not quite, because of the random restrictions on what can be watched on a TV set vs. in the browser, and the numerous shows which would expire from the queue before you could watch them. With Hulu's proposed cable subscription requirement, it looks like my PVR will be getting even more use in the future, not less.

    • Interesting.. I have the exact opposite experience. Lots of extra drives from constant upgrades. I have around 7-8TB (including 3TB of backups).

      The file server gets maxed out.. so I upgrade the drives with new HDs.. take the old ones and put them in the backup server (so it has enough space for the new data).. and then remove the oldest drives from the backup.

      The "old" drives get placed in any new computer that gets built.

      Recently, I've had 2x250GB drives fail... both with about the same amount of time in s

    • You are definitely the archetype... of people who really trust The Cloud (tm). I do not.

      1) I used to have Netflix. Then they jacked up their price and lost something like 60% of their already mediocre streaming selection. Their boneheaded CEO is still there. I have not seen a press release from Netflix that has convinced me it's time to go back.

      2) All of the Internet providers in my area are media companies that want to sell you TV service and have basically announced they don't believe in net neutrality. M

  • by jimicus ( 737525 ) on Thursday May 03, 2012 @04:44PM (#39882609)

    Do you care about your electricity bill at all? If you do, it'll probably be cheaper over the course of 6-12 months to buy a simple NAS box or a cheap atom board and plug in a couple of 2TB hard drives.

  • WHS V1 (Score:4, Interesting)

    by clickclickdrone ( 964164 ) on Thursday May 03, 2012 @04:47PM (#39882659)
    Windows Home Server (V1) - mix and match to your heart's content, and all the add-ins you can eat for adding features.
  • If you have pairs of drives with reasonably similar size and performance specs, you could deploy them in RAID 1, or RAID 5 if you have three or more similar drives, and have some redundancy. FreeNAS, OpenFiler, or Nexenta will all work, but you're still rolling the dice in a rigged game, man. Old hard drives are for target practice.
  • Greyhole! (Score:5, Interesting)

    by gregthebunny ( 1502041 ) on Thursday May 03, 2012 @04:53PM (#39882759) Journal

    Why am I the only one saying this? Set up Greyhole [], throw a bunch of disks at it, and enjoy! And to all those saying "those drives are going to die soon": you can tell Greyhole that you consider a drive unreliable, and it will still use most of its storage (albeit redundantly) until it does die and has to be removed.

    • by Dan667 ( 564390 )
      With RAID 5 you lose a percentage of the disks you use (~25% if you use 4 disks). Maybe I am missing something, but with Greyhole it looks like you lose significantly more than that by making multiple copies of files on different disks?
      • The two key points I see with Greyhole are that it works with differing drive sizes and that you can set the redundancy you want per folder or file (I didn't dig into the exact configuration setting).

        So yes, you'll lose more space than with RAID 5 or 6, but it looks really easy to set up. It also looks slightly more likely than RAID 1 to fail catastrophically, if a drive dies before Greyhole has duplicated the new files on it.
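        The space-overhead comparison in this thread is easy to sanity-check. Below is a rough sketch of the usable fractions being argued about; it ignores filesystem metadata, hot spares, and mixed drive sizes (Greyhole's actual accounting with unequal drives is more complicated than this):

        ```python
        def raid5_usable_fraction(n_disks):
            """RAID 5 spends one disk's worth of capacity on parity: usable = (n-1)/n."""
            return (n_disks - 1) / n_disks

        def duplication_usable_fraction(copies):
            """Greyhole-style duplication with k full copies of each file: usable = 1/k."""
            return 1 / copies

        print(raid5_usable_fraction(4))        # 0.75 -> ~25% lost to parity
        print(duplication_usable_fraction(2))  # 0.5  -> 50% lost to the second copy
        ```

        So Dan667's point holds: with equal disks, two-copy duplication costs twice what 4-disk RAID 5 does, and the trade is for flexibility, not space.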

    • by Hatta ( 162192 )

      That's what I was going to suggest, if I could have remembered the name. Greyhole. heh

  • by InitZero ( 14837 ) on Thursday May 03, 2012 @05:03PM (#39882889) Homepage

    1. Throw away everything that isn't a standard-sized SATA drive.
    2. Buy a Drobo (
    3. Put the five (or eight) largest drives in the Drobo.
    4. Throw away the rest of the drives.
    5. When you get a drive that is larger than the smallest drive in your Drobo, pull the smaller drive out and insert the larger drive.
    6. Find peace in the universe.

    When I was young and foolish, I tried to keep every drive spinning, even long after its time had passed. I had *nix boxes stuffed with drives and SCSI-attached arrays. I learned a lot about drive management and system administration but, mostly, I learned that there is a value to my time and my time isn't best utilized playing disk administrator.

    Drobo doesn't pay me a dime and I am still more excited about Drobo than any technology product since TiVo.


    • I have to agree 100%. I bought a Drobo several years back, and it's been extremely reliable. When a disk dies, it alerts you, and you slot in another one. It just works as advertised, and my life has been a ton easier since then. Before that I was using various disks and RAIDs and all sorts of things, but they're a pain in the butt when you run out of space or a disk dies. Get a Drobo and be done with it.

      As for backing up the Drobo, unfortunately you pretty much have to get another Drobo. I mean, in the

    • I haven't owned a Drobo so I can't comment on the quality or functionality. But QNAP [] and Synology [] are generally considered the leaders in the NAS market. SmallNetBuilder [] has pretty thorough coverage and benchmarks of your NAS options.

      If you don't need a NAS, just some form of aggregate storage, non-networked alternatives are made by Mediasonic [] and Sans Digital []. In my case I just needed something to throw my old drives in and power it on every couple weeks to backup my ZFS file server. So one of these
  • by DarwinSurvivor ( 1752106 ) on Thursday May 03, 2012 @05:20PM (#39883107)
    Why are you combining 100 GB and 3 TB drives? First of all, the 3 TB drive is literally 30 times the size (so the 100 GB drive adds only about 3% more space). Second, the 100 GB drive is probably old enough that it shouldn't be trusted as stable. You'll spend more on the ATA adapter for that drive than the space it provides is worth. Currently a 3 TB drive costs about $100; that's ~$0.03/GB, which means that 100 GB drive is worth ... wait for it ... about $3. SATA-to-IDE adapters run about $9 apiece.

    I've been in the same situation; only a year ago I was running on multiple 10 GB drives and an old 120 GB laptop drive because I only had IDE in my server. So I went to Newegg and got a low-powered E350-onboard-CPU motherboard [] (doesn't even need a fan) for $130, 8 GB of RAM (I use ZFS) for $50, and a 2 TB drive for $70 (drive prices have gone up since then, but not terribly), and threw the thing into an old case with a cheap power supply. That's basically an entire system with about 15 times the storage of my old one for $250 shipped to my front door, and it can take 5 more drives without so much as an expansion card.
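    The cost-per-gigabyte arithmetic above checks out. A quick sketch, using the commenter's 2012 prices rather than anything current:

    ```python
    # 2012 prices quoted in the comment above -- not current market figures.
    new_drive_gb, new_drive_usd = 3000, 100.0
    price_per_gb = new_drive_usd / new_drive_gb        # ~ $0.033/GB

    old_drive_gb = 100
    old_drive_value = old_drive_gb * price_per_gb      # ~ $3.33
    adapter_usd = 9.0                                  # quoted SATA-to-IDE adapter price

    print(f"old drive worth ${old_drive_value:.2f}, adapter costs ${adapter_usd:.2f}")
    print(old_drive_value < adapter_usd)               # True: adapting it is a net loss
    ```

    In other words, the adapter alone costs nearly three times what the space on the old drive is worth.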
  • Seriously, buy a Synology NAS, dump all your data on it and call it a day. The cheapest 2 bay model they make is straight up badass for its price point.
  • by alexpi ( 2631389 ) on Thursday May 03, 2012 @05:35PM (#39883269) Homepage

    Full disclosure: I am the developer

    Check out: []

    It's a software disk pooling solution that combines any number of disks of any size into one big virtual pool. You can designate certain folders to be duplicated on the pool. Any files placed in duplicated folders will be stored on 2 disks at the same time.

    The implementation is a hard core NT kernel driver with a virtual disk. There is a full NT kernel storage stack, no user mode hacks here.

    Unlike RAID and similar solutions, all your pooled files are stored as standard NTFS files on each individual disk in the pool. This means that you can simply plug in any pooled disk to any system that can read NTFS to get at your files in case disaster strikes.

    It's commercial software, $20 USD per server.
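    The pooling behavior described above can be approximated in a few lines. This is a hypothetical sketch of one plausible placement policy (most-free-space-first, with k copies for duplicated folders), not the product's actual driver logic:

    ```python
    def place_file(disks, size, copies=1):
        """Pick the `copies` disks with the most free space and charge the
        file to each. `disks` maps disk name -> free capacity (same units
        as `size`). Returns the chosen disk names in order of preference."""
        targets = sorted(disks, key=disks.get, reverse=True)[:copies]
        if len(targets) < copies or any(disks[d] < size for d in targets):
            raise OSError("not enough disks/space for the requested redundancy")
        for d in targets:
            disks[d] -= size
        return targets

    pool = {"disk_a": 500, "disk_b": 300, "disk_c": 800}
    print(place_file(pool, 100, copies=2))  # ['disk_c', 'disk_a']
    print(pool)                             # disk_c and disk_a each down 100
    ```

    The key property this models is the one the comment highlights: each copy is an ordinary file on an ordinary filesystem, so any single disk can be pulled and read on its own.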

  • Put all of the small drives in a JBOD array and use the 3TB as an internal backup because RAID is not a backup solution.

    Use FreeNAS or OpenFiler.

    Drobo performance sucks (with more than one concurrent user).

    Low-end core i3 processor and lots of RAM because RAM is cheap these days.

  • by JonySuede ( 1908576 ) on Thursday May 03, 2012 @05:39PM (#39883311) Journal

    Look at []; it is a turn-key home server based on Fedora, with Greyhole as its replication engine.
    Dump anything less than a TB except one drive and you are set.
    You set the replication level per share, and each file is copied to additional drives until its share's replication count is reached.

    Say you have four 1 TB drives and one 500 GB drive.
    You have the photo share configured to replicate on every drive.
    You have replication off on the video share.
    You have a replication level of two on the mp3 share.

    When you store a photo, Greyhole writes it to all 5 drives.
    When you store a video, it goes on one drive.
    When you store an mp3, it goes to 2 drives.

    So if you lose a drive, you lose roughly a fifth of your videos, none of your mp3s (a second copy survives elsewhere), and none of your photos.
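    Those loss estimates are worth sanity-checking. Assuming each copy of a file lands on a distinct drive chosen uniformly at random across five equally sized drives (the 500 GB drive actually skews this a bit), a single drive failure loses about a fifth of any unreplicated share and nothing from any share with two or more copies:

    ```python
    def expected_loss_fraction(n_drives, copies):
        """Expected fraction of a share lost when exactly one drive dies,
        assuming each copy lives on a distinct, uniformly chosen drive."""
        if copies >= 2:
            return 0.0            # some other drive still holds a full copy
        return 1.0 / n_drives     # single copy: lost iff it sat on the dead drive

    print(expected_loss_fraction(5, copies=1))  # videos (no replication): 0.2
    print(expected_loss_fraction(5, copies=2))  # mp3s (two copies): 0.0
    print(expected_loss_fraction(5, copies=5))  # photos (copy on every drive): 0.0
    ```

    The real risk window for replicated shares is the interval between a write and the replication pass, as noted elsewhere in the thread.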

  • Just throw them away (Score:5, Informative)

    by zmooc ( 33175 ) <> on Thursday May 03, 2012 @05:49PM (#39883473) Homepage

    Powering 10 old hard drives for any length of time is going to cost much more than just buying a new one. A modern drive uses about 5 W on average; these oldies probably use much more. 10 drives drawing 10 W each at $0.10 per kWh will set you back about $87 per year. You do the math.
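    Doing the math: zmooc's figure works out, assuming the drives spin 24/7 at the stated 10 W draw.

    ```python
    drives, watts_each = 10, 10
    usd_per_kwh = 0.10
    hours_per_year = 24 * 365                      # 8760 hours

    kwh_per_year = drives * watts_each * hours_per_year / 1000
    annual_usd = kwh_per_year * usd_per_kwh
    print(f"{kwh_per_year:.0f} kWh/year -> ${annual_usd:.2f}")  # 876 kWh -> $87.60
    ```

    At 2012 prices that is roughly one new 3 TB drive per year spent just keeping the old ones spinning.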

  • (Score:4, Interesting)

    by JazzLad ( 935151 ) on Thursday May 03, 2012 @06:01PM (#39883643) Homepage Journal
    Pogoplugs are great; you can plug in 4 drives via USB, or more with a USB hub. I paid $25 for mine, so you can't really go wrong.
  • Use BtrFS or Drobo (Score:5, Informative)

    by digitaltraveller ( 167469 ) on Thursday May 03, 2012 @06:10PM (#39883741) Homepage

    Use Drobo if you are time-poor and money-rich; use btrfs if you are time-rich and money-poor.

    Btrfs's capabilities are nothing short of amazing. Here is a video about it: []

    • Thanks for the link!

      Please mod parent up -- it's not too often you get jazzed about a modern filesystem design and implementation!

  • by linebackn ( 131821 ) on Thursday May 03, 2012 @06:39PM (#39884057)

    My advice would be to find some inexpensive USB or eSATA drive enclosures for the smaller drives and just use them as off-line storage.

    Take some data you don't need instant access to, put it on one drive, and make an identical copy on a second for backup. Put them in a corner and only power them up when needed.

    Or just use the smaller drives as partial backup for a larger NAS. Can be handy if you suddenly need to grab a collection of files and go.

    Like everyone else is saying, no sense keeping them spinning and eating up power. Might even think twice about the larger drives unless they are power efficient models.

  • The answer to your question is ZFS on FreeBSD.
  • ...after you wipe them, and buy a real NAS like a ReadyNAS, Synology, etc. SmallNetBuilder [] is a great resource for this.
    Alternatively, use FreeNAS and build your own, with recent drives.
