Ask Slashdot: DIY NAS For a Variety of Legacy Drives? 260
An anonymous reader writes "I have at least 10 assorted hard drives ranging from 100 GB to 3 TB, including external drives, IDE desktop drives, laptop drives, etc. What's the best way to set up a home NAS to utilize all this 'excess' space? And could it be set up with redundancy built in, so a single drive failure would cause no data loss? I don't need anything fancy. Visibility to networked Windows PCs is great; the ability to streak to Roku / iPad / Toshiba etc. would be great but not necessary. What's the best way to accomplish this goal?"
Not worth it. (Score:5, Insightful)
Re:Not worth it. (Score:5, Insightful)
I agree to an extent. Take anything SAS or SATA that's 1TB or greater and re-think the project with just those. Sell or recycle the rest of the drives. Depending on your needs the remainder should be RAID-1, 5 or 6'd (using software RAID if speed isn't an issue) and then put on an OpenFiler or FreeNAS box. Anything non-replaceable should then be backed up to a respectable backup provider in addition to your home-grown solution.
We need more information though -- what are your actual drive sizes and what do you want to put on this NAS?
Re:Not worth it. (Score:5, Informative)
Yep.. I agree. "Not worth it." sums it up nicely.
Seriously, I completely understand the desire to re-use unused equipment you've got lying around. Seems like the smart thing to do, reclaiming as much of that unused storage space as possible and pooling it together so even the smaller drives add up to something worthwhile. But as a FreeNAS user myself, trust me on this one. It's NOT really a good idea.
As others have already pointed out, most RAID configurations are limited by the size of the smallest drive in the array, so that would create major problems for you right there. But even assuming you skip RAID (or set up multiple RAID pools, each consisting only of very similar sized drives -- and then join all of them into a virtual master storage "device"), you're still in a situation where the lower-capacity drives probably have slower transfer rates than the newer, larger ones. That will drag the overall performance of the server down whenever something gets loaded from or saved to the slower/older disks.
Even if all of THAT doesn't discourage you? I have to ask what your time is worth, and to a lesser extent, what your data itself is worth. Old drives as small as 100GB capacity have got to be at least 4-6 years old by now. Unless you bought them new and just stored them in a box this whole time, chances are they've seen a lot of hours of operation already. They don't have a resale value of more than $20 or so these days, so you're simply not out much money if you throw them away or give them to a recycler. Meanwhile, you'll probably end up with a much more complex and time-consuming NAS configuration while trying to best utilize them in your drive pool. Even if you only make $10/hr. at your job, that means 2 hours of time spent messing around with this is worth the entire value of one of those old drives!
I'm kind of a pack-rat for computer hardware (since I have an on-site repair business besides a day job in I.T. and computers as a spare time interest too). But even I started throwing away IDE or SATA drives under 250GB a while ago. I keep a *couple* small ones around, but only for odd situations (like someone who wants to revive a really OLD PC with a BIOS that can't recognize larger drives properly). Otherwise, everyone who wants to go to the trouble of swapping an old/dead drive out for a replacement may as well spend the relatively small extra amount of money for a current model of much larger capacity, AND a full warranty still on it. Your data is usually worth it!
Re: (Score:3)
Exactly. Though, 100GB drives are plenty of storage for people. If you want to earn some brownie points at the next family gathering, you can st
Re:Not worth it. (Score:4, Informative)
I'd like a way to throw them all together and use them for backup storage.
From what I've gathered...use FreeNAS, with ZFS...and it will let you set this up, and allow for up to 2 drives to fail at the same time....
I think in my case...this would be reasonable. Heck, if I set up two FreeNAS boxes...had one mirror the other one...that would indeed be a decent backup system, no?
I have a lot of friends like me....often buying stuff on sale "to use someday on something"...but it just sits and gathers dust....I think this would be a good reason to use them, and to keep buying new drives here and there when they go on sale, to replace drives on the FreeNAS as they do start to fail....
Heck, thinking of keeping one FN server here..and maybe put a 2nd at friend or parents house out of state...to mirror it...
Re: (Score:3)
"Even if you only make $10/hr. at your job, that means 2 hours of time spent messing around with this is worth the entire value of one of those old drives!"
The idea that the wage you make at work applies to all your time is a fallacy. The ONLY exception is someone who can just say "Hey, I'm going to work a couple extra hours on the fly".
And do you seriously believe the time learning and building technical stuff has the same value as some min. wage job?
Great, you throw out perfectly good drives. Bully for you.
Re: (Score:2)
If they're like me, and I mentioned this on a previous post....maybe they have a ton of drives, but most of them are little used or even unused in a box.
I have a habit of buying stuff, like drives when I see them on sale...and just set them back for use some day...and kinda forget about them.
I've got computers I was going to use for something....new-ish drive in it...and that box had RAM or MB problems...just sitti
Re: (Score:3)
It's not about being able to afford the power-bill.
It's about the fact that it makes no financial sense to spend (say) $30 in electricity keeping a bunch of old drives running when the same additional capacity in a new disk costs $10.
Re: (Score:2)
Weeell, true, but...
A bunch of small drives plus a fast one means you can have a reasonably fast raid of the smaller drives, and do nightly backups of it to the large drive (or nightly reassemble and restore if more drives have failed than you have resilience for).
I had a setup like that once, with six 40 GB drives and one 80 GB drive (this was a while ago).
Two RAID1s striped to make a RAID 10, plus two standby drives and a backup drive. Worked like a charm, and survived three drive replacements over the y
Re: (Score:2)
I've got 4 or so drives crammed into a nice old, quiet, low-wattage G4 powermac, running Debian headless. (3 onboard controllers + a 2x pci card, so I could fit 10 drives as long as I don't care much about speed). They stay spun down pretty reliably once you tweak a few small things. Unfortunately WoL is broken on this model, so if I ever moved all the non-FS services off it, I couldn't power the mainboard off remotely.
Re: (Score:3)
Depending on your needs the remainder should be RAID-1, 5 or 6'd
Wouldn't btrfs supposedly resolve this? It's supposed to put your redundant data on multiple devices, and I would assume that if a device had no more space it could use another device in the array, as long as it wasn't the one already holding the redundant copy. From everything I'm reading on the FS so far, it looks like it's perfectly usable now if you can schedule a regular data scrub (like a midnight cron job) to check integrity, which wouldn't be bad for a personal server (enterprise is another story).
Re: (Score:2)
And what do you think RAID 5 is?
Re: (Score:2)
RAID5 requires that all the disks be the same size and uses one drive as a parity bit drive.
btrfs works on the block level and should be able to write redundant block data on multiple drives no matter what size they are.
Re: (Score:2)
RAID 5 stripes the parity across all the drives. RAID 3, I believe, has a dedicated parity drive. Otherwise correct though, I think.
Re: (Score:3)
BASIC raid5 requires identical capacity drives. Intelligent raid5 does not. Imagine a four drive array containing three 40gb drives and one 30gb drive. You can treat the first 30gb of each 40, plus the 30gb drive, as one segment of the storage (3 data and a parity, a total of 90gb of protected storage), and run the remaining 10gb of space on the three 40's as 2 data with a parity, for an additional 20gb of protected storage. (if you tried to just ignore the oversize on the 40's you'd lose out on almost 20% o
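The capacity arithmetic in that comment can be checked with a quick sketch (illustrative only; `raid5_usable` is a hypothetical helper, not a real tool):

```python
# Quick check of the "intelligent raid5" arithmetic: split each drive into
# segments sized to the smallest member still participating, then run RAID 5
# across each segment. One member's worth of each segment goes to parity.
def raid5_usable(members, size_gb):
    """Usable capacity of a RAID-5 array of `members` equal-size partitions."""
    return (members - 1) * size_gb

# Segment 1: 30 GB from all four drives (three 40s plus the 30)
seg1 = raid5_usable(4, 30)       # 90 GB protected
# Segment 2: the leftover 10 GB on each of the three 40 GB drives
seg2 = raid5_usable(3, 40 - 30)  # 20 GB protected
total = seg1 + seg2              # 110 GB
# If the oversize on the 40s is simply ignored, only segment 1 exists:
naive = raid5_usable(4, 30)      # 90 GB
print(total, naive)              # 110 90 -- i.e. almost 20% lost
```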
Re: (Score:2)
And what do you think RAID 5 is?
inflexible?
Re: (Score:2)
Agreed with a caveat. My NAS has 500gb and up drives... anything else gets consolidated. I tried using those old drives, but I soon realized it just wasn't worth it. The 500gb drive is going byebye soon too. After a certain point, the empty slot and power draw becomes valuable real estate that could be populated by a larger drive. Slow speed becomes a factor for obsolescence in some cases as well.
What smaller drives, even the 80-120gb types, are good for, is boot drives for crappy refurbished compu
Re: (Score:2)
Regarding empty slot usage... I imagine the plan is to replace failed drives with larger drives over time and have the system adapt. I've given old drives to people in kind, only to have them come back a few months later because there was a problem with their machine (mostly bad clusters); since they didn't build the machine, they didn't know how to do regular disk checks or have the machine running during the schedule I set up.
Re: (Score:2)
The OP probably isn't too worried about security because most of the data will have been downloaded. There are a few jobs where people produce terabytes of data, but those people will spend money backing it up properly.
Forget RAID and just use JBOD. If one drive dies get downloading again.
Re:Not worth it. (Score:4, Informative)
This.
With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding. To mirror a segment of data (or the moral equivalent with RAID-5 or RAID-6) you need segments of the same size; those segments are going to have to be no larger than the smallest drive. That means larger drives have to store multiple segments, but that the segments have to be arranged in a way such that a drive failure on one of the large drives doesn't take the RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and the fact that your range is from .1TB to 3TB implies this might be the case.
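The "bisected" condition above is just the classic partition problem: can the drive sizes be split into two piles of equal total capacity? A standard subset-sum sketch (drive sizes below are hypothetical examples, not the submitter's actual drives):

```python
# Can a set of drive sizes be split into two piles of equal total capacity?
# Standard subset-sum dynamic programming over reachable pile sizes.
def bisectable(sizes):
    total = sum(sizes)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}
    for s in sizes:
        reachable |= {r + s for r in reachable if r + s <= target}
    return target in reachable

print(bisectable([3000, 2000, 1000, 1000, 750, 750, 150, 150, 100, 100]))
# True: e.g. 3000 + 750 + 750 makes half of the 9000 GB total
print(bisectable([3000, 100, 200]))
# False: the largest drive dwarfs everything else combined
```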
Think about it -- it's probably going to take most-to-all of those smaller drives to "mirror" the larger drive to make it redundant (and mirroring is the best you can do with just two drives). But having one side of the mirror spread across 9 drives makes failure laughably likely, to the point where you're paying performance penalties for nothing.
Your alternative is to use a JBOD setup and have just contiguous space across all of the disks. This is the same problem, except when a drive goes you lose some random segment of data. That's acceptable for two or three drives in scratch storage, but you don't want to actually store things on that.
Make no mistake -- those drives are going to die.
Trust me on this; don't go down this road. Your actual options are to either pair up the disks as best you can, supplementing with strategic purchases, and make 2-3 independent RAIDs (and maybe even RAIDing those, but it'll be painful), or just write the whole thing off, put disks in if you have obvious candidates in your hardware, and donate the rest.
Re: (Score:2)
This.
With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding
Yes. But wouldn't it be awesome if there was some magical filing system daemon that ran on multiple machines, automatically meshed together, and presented a single contiguous file system, which kept at least n copies of data on m machines, auto-replicated (reducing available free space) if a disk died or a machine was offline for > p hours, and offered some simple admin interfaces. Throw in an LTO interface (perhaps LTFS based, perhaps not) for good measure too.
This is slashdot, we all know how to use ra
Re: (Score:2)
You mean like GlusterFS? www.gluster.org
It supports access via CIFS, NFS, and FUSE. Of course, his smallest disk is going to limit assignment of one filespace brick for replication on another drive to a file of the same size, but he could conceivably jigger around his assignments to use all of the disk space he has.
Well, I hope he has a *really* fast switch if he does that; and there is that issue with power for all of those hosts, if he wants that kind of redundancy...
Re:Not worth it. (Score:5, Informative)
It depends--is there a total of 6 TB of drives that doesn't include the 3 TB drive?
Take each disk, make an LVM physical volume from it. From those physical volumes, logical volumes. You don't have to make all of them the size of your smallest drive, you just have to be careful. Say you have the following:
1: 3 TB
2: 2 TB
3: 1 TB
4: 1 TB
5: 750 GB
6: 750 GB
7: 150 GB
8: 150 GB
9: 100 GB
10: 100 GB
On your 2 TB drive, make partitions matching the drives under 1 TB.
On your 3 TB drive, make the following partitions:
1 TB: RAID-5 with #3 and #4
750 GB: RAID-5 with #2 and #5
750 GB: RAID-5 with #2 and #6
150 GB: RAID-5 with #2 and #7
150 GB: RAID-5 with #2 and #8
100 GB: RAID-5 with #2 and #9
100 GB: RAID-5 with #2 and #10
You'll end up with the following volumes:
1: 2 TB
2: 1.5 TB
3: 1.5 TB
4: 300 GB
5: 300 GB
6: 200 GB
7: 200 GB
Then take those RAIDed volumes and join them with LVM (fairly certain you can stack them [traditional meaning of "stack"] as one contiguous disk; just use an easy FS like ext3 -- I've run into problems with stack size [programming meaning of "stack"] using XFS on LVM). You end up with 6 TB total space, and, just like normal RAID-5, you don't lose anything unless two disks from one of those groups die. That is, if a disk in 200 GB #6 dies and a disk in the 1.5 TB #3 dies, you still haven't lost anything. Even if your 3 TB drive dies, which is clearly the worst case since it holds data for every array (the 2 TB is nearly as bad), you'd still need to lose a second disk to lose any data. So for failure rates it should be about the same as a 10-drive RAID-5 array, which isn't quite advisable, although it's not murderously bad; but this isn't work, and the primary motivation is probably maximizing space with a decent reliability increase, not making it next to certain the array never goes down. I'm sure it feels really weird, but I don't think you're actually increasing your odds of failure at all over the 10-disk-all-same-size RAID we're used to, other than not trusting older drives -- and I'm not so sure those are much more likely to fail than new ones. After all, they've lasted this long, and I've had brand new drives die within weeks. In fact, there are some 2-drive failures that don't take anything down, so I think overall you're doing slightly better than the 10-same-size-disk case.
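The arithmetic behind that layout can be verified with a short sketch (a hypothetical check of the numbers in the comment, not an actual setup tool):

```python
# Verify the usable capacity of the RAID-5 layout described above.
# Every group has 3 members, and RAID-5 spends one member's worth on parity,
# so usable space per group is (members - 1) * partition size.
def raid5_usable(members, size_gb):
    return (members - 1) * size_gb

# (members, partition size in GB) for each group in the described layout
groups = [(3, 1000), (3, 750), (3, 750), (3, 150), (3, 150), (3, 100), (3, 100)]
volumes = [raid5_usable(m, s) for m, s in groups]
print(volumes)       # [2000, 1500, 1500, 300, 300, 200, 200]
print(sum(volumes))  # 6000 GB -- the 6 TB total the comment arrives at
```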
Now, your disks probably won't divide up as nicely, and you might end up having to either leave some space on the floor or subdivide in weirder ways or both, but with very careful partitioning (never put two stripes of the same array on the same disk), you can do this. Set all the arrays to verify weekly (mdadm can do this) and e-mail you on a failure. Don't set up an audible alarm, you're not going to lose a second disk at 3 AM (but you will wake up to fix it, and be worthless at work the next day for probably nothing) and even if you did lose another disk, you're not using RAID as a replacement for backups, right? Right?
ZFS would be really nice if it did all this complex stuff for you, but do you have enough control/is it smart enough to allow you to ensure that you get as good or better reliability? It'd be ridiculously easy to make a bad mistake in layout with the above scenario. Because overall, I agree with the title: It's just not worth all this effort so you can use that crappy 100 GB disk. Once it goes down, now you have to replace it.
Re: (Score:2)
... larger drives have to store multiple segments, but that the segments have to be arranged in a way such that a drive failure on one of the large drives doesn't take the RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and the fact that your range is from .1TB to 3TB implies this might be the case.
There is a simple formula to determine the available space in these circumstances: if the largest drive is larger than all the others combined, the available space (after mirroring) is the sum of the smaller drives. In this case the largest drive mirrors all the others. Otherwise, the available space is half the total of all the drives, and no space is wasted. A filesystem like BTRFS (or, presumably, ZFS) can work out the details automatically if you set it up in RAID-1 mode.
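That formula can be written down directly (a sketch with made-up drive sizes):

```python
# Mirrored capacity of a set of mismatched drives, per the formula above:
# if the largest drive exceeds all the others combined, it mirrors the rest,
# so usable space is the sum of the smaller drives; otherwise half the total.
def mirrored_capacity(sizes_gb):
    largest = max(sizes_gb)
    rest = sum(sizes_gb) - largest
    return rest if largest > rest else sum(sizes_gb) // 2

print(mirrored_capacity([3000, 100, 100]))        # 200: the 3 TB mirrors the rest
print(mirrored_capacity([1000, 1000, 750, 750]))  # 1750: half the total
```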
But having one side of the mirror spread across 9 drives makes failure laughably likely, to the point where you're paying performance penalties for nothing.
Agreed. I would probably forge
Re: (Score:3)
There are plenty of non-RAID and non-critical ways you can make use of smaller drives. For example, most online storage accounts require sync with a local drive (Dropbox, Google Drive, Skydrive) to work, and that can add up quickly. It doesn't matter too much if the disk dies, because you have an online backup. You can use something like SecretSync to encrypt the lot (which also doubles the storage requirements -- another reason why a spare HDD is useful).
You could give some drives to friends and set up your own mu
Re: (Score:2)
Agreed.
Just take the sub 1/2 TB drives and mail them to me. ;-) I need some small USB drives for work to hold my music.
Re: (Score:2)
"This little drive is not worth the effort. Come, let me get you something..."
"The frugal is strong in this one."
Salvage... and backup (Score:3)
Create a TrueCrypt file filling each old drive, after a full format. Use them for full (not incremental) backups every 6 months, starting with the smallest sizes (to use them up). Then put them in your Mum's garage, suitably labelled.
Last tip for backups: do "dir /on /s > backup_2012_04_23" for each drive after filling it, and keep the list on your main machine, so you can see if you've got a copy of something (and where) before fishing around.
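A rough cross-platform equivalent of that manifest trick, as a sketch (the paths in the usage comment are hypothetical):

```python
# Write a sorted listing of every file on a backup drive to a manifest file,
# much like "dir /on /s > backup_YYYY_MM_DD" on Windows, so you can search
# the listing later without plugging the drive back in.
import os

def write_manifest(root, out_path):
    with open(out_path, "w") as out:
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()              # walk subdirectories in name order
            for name in sorted(filenames):
                out.write(os.path.join(dirpath, name) + "\n")

# e.g. write_manifest("/mnt/backup_drive", "backup_2012_04_23.txt")
```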
the 2 main choices: (Score:5, Informative)
FreeNAS [freenas.org] or OpenFiler [openfiler.com].
I think FreeNAS (the BSD based one) is lighter and easier, as OpenFiler seems to be going in a more "fully featured" direction with less support for older hardware, but they're both good.
Re: (Score:2)
Can these programs mirror the contents of one USB drive to another USB drive? It's a pain trying to copy-and-paste files from drive 1 to the backup drive 2. THX. :-)
the "d" is redundant (Score:3)
Once you specify "a", the "d" is redundant.
Re: (Score:2)
I don't see the ability to dynamically expand FreeNAS. (Just add a drive and expand the protected space)
I cautiously recommend unRAID. I have not had an ideal experience with it, but most of that was due to my lack of diligence in ordering compatible hardware and fully reading all 10,000 forum threads before logging in. Mainly the hardware thing.
Re: (Score:2)
it uses ZFS, go read up on it, it is "teh win" in filesystems.
try this post for a quick summary: http://slashdot.org/comments.pl?sid=2827537&cid=39883221 [slashdot.org]
Re: (Score:2)
You can stripe the smaller drives to create a larger one that equals the capacity of a large drive. Then RAID-1 the larger drive and the collection of striped drives.
Me, I'd use the smaller ones for target practice and just get a second 3TB drive.
Re: (Score:2)
I'm trying to figure out the way to best use hardware to maximize using a bunch of older disks....most in the 1TB range.
I'd get some kind of box...like a core2 duo maybe...how would I best hook up the maximum number of hard drives to it?
If a drive goes out....would you just have to shut it down...take out bad drive, plug in new one...turn it on...and FreeNAS would rebuild it? (assuming using ZFS)
Would it work with hookin
FreeNAS, for sure (Score:3, Informative)
FreeNAS can use ZFS for aggregating multiple drives, independent of size, technology, etc., all with varying degrees of protection.
It's by far the best solution to your case.
Flavio
Re:the 2 main choices: (Score:4, Informative)
yes it does -- it uses ZFS, which has some fancy replication features, especially zpools, which are like software raid. You can have a 100GB vdev on both the 100GB and 3TB drives as a mirror. Of course, if you have just those 2 drives, nothing is ever going to get you full data redundancy (obviously!), but ZFS gives you a lot of flexibility to use what you do have.
Interesting (Score:2)
It looks like I'm going to have to read up on this stuff again.
Given the spread of drive ages I'd definitely want redundancy, and given the variety of capacities, a traditional RAID system isn't going to cut it. I'm actually thinking of cloud computing technology, with its attendant ability to duplicate data (and services!) across sites of uneven capability, and even optimize resources across different locations.
Basically, you'd be looking a 'cloud' of HDs, with an underlying system tha
Re: (Score:2)
Having to do reliability calculations when physical disks take out a single logical chunk and others might take out 100 or more, though, would be pretty gross...
Re:the 2 main choices: (Score:5, Informative)
ZFS does this much more simply with no ugly hacks. You can have mismatched drives when you build a mirror (the mirror is the size of the smallest drive in the mirror set), and then you stripe across the mirrors. As the older, smaller drives fail, replace them with newer, bigger drives and the pool magically gets bigger. 100GB + 500GB mirrored (100GB usable). 100GB dies, swap in a 750GB drive and now this pool is automatically resized to 500GB. Get 2 more drives? Mirror them and add them to the pool and your pool expands with no one the wiser.
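The pool-sizing behaviour described there can be sketched in a few lines (sizes in GB are hypothetical; this models the rules, it is not ZFS itself):

```python
# Model of ZFS mirror-vdev sizing: each two-way mirror contributes the
# capacity of its smallest member, and the pool is the sum of its vdevs.
def pool_size(mirrors):
    return sum(min(pair) for pair in mirrors)

print(pool_size([(100, 500)]))             # 100: limited by the smaller member
# The 100 GB drive dies and a 750 GB drive is swapped in:
print(pool_size([(750, 500)]))             # 500: the vdev grows to the new minimum
# Two more drives get mirrored and added to the pool:
print(pool_size([(750, 500), (2000, 2000)]))  # 2500: the pool expands
```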
Seriously, if you haven't played with ZFS before, download FreeNAS and give it a whirl. When I was a Solaris admin, ZFS was the most fun thing to work with by far.
Re:the 2 main choices: (Score:4, Informative)
FreeNAS can use ZFS as the filesystem. And this is what you want! Now, the actual configuration depends on the drives you have available.
For drives with the same, or very similar, capacity -- RAID-Z can be used. With 3 drives, use RAID-Z1, or with more, use RAID-Z2 (the number is the number of drives that can fail). RAID-Z uses only the capacity of the smallest drive on each member, which may waste space. If all drives are (eventually) increased in size, more storage is added.
For drives with different capacities, ZFS offers the ability to keep a redundant number of copies of the data (e.g. specify two copies, or three). Then, ZFS will duplicate the data onto multiple drives.
As well, ZFS continually monitors the drives and repairs any failed areas, and ensures that no bit errors accumulate in the file system. RAID-Z and multiple copies can be combined.
The main point of ZFS is to keep your data clean and safe from corruption.
As well, "fsck" is not needed -- it happens when you "scrub", which slows down the array, but doesn't leave it unusable.
If you have sufficient memory, ZFS can also "dedup" the blocks in your filesystem, merging identical copies of data (but copying/raiding to maintain data integrity). This feature takes a LOT of RAM (2GB per TB of disk, 32GB for 20TB of disk, and possibly more). Also, some ZFS versions offer encryption (not sure about the one in FreeNAS).
ZFS drives can be physically moved to another system, and used (eg. FreeNAS x86 to SPARC). Endian and format issues are correctly handled. Not a feature most people would ever use, but it is nice. ZFS is available on Solaris, BSD, Linux, Mac (well, used to be).
Also, ZFS support snapshots, which can be browsed.
Finally, ZFS has an eight year history in production.
In all, what's not to love?
Re:the 2 main choices: (Score:4, Insightful)
ZFS does nothing to protect integrity in memory, and especially in the dedup case, your data sits in memory a long time.
I wouldn't run dedup on a non-ECC mainboard. I had an experimental ZFS array that suffered a failed memory stick (this array was not run in dedup mode after a performance test following the initial build). The next scrub found inconsistencies on disk. Even after copying all the data to a new location on the same storage tank and deleting the old location, there were internal consistency errors. This didn't surprise me, but it illustrates that memory-induced corruption will often kill the entire array. Keep plenty of offline backups.
Now if you just happen to have a fresh Opteron 3250 lying around on a mainboard populated with the right type of memory, with full server chip validation and background memory scrubbing, fill your boots with those old 30MB IDE drives.
I'm running my test array on three 500GB drives. Two are enterprise grade and the third was a Seagate warranty replacement (consumer grade refurb). I could have run with the consumer drive as an idle hot spare, but decided to run a three-way mirror, which keeps your hot spare silvered at all times. Note that the consumer drive limits my peak write bandwidth, as the enterprise drives have higher read/write performance. The reads seem to be distributed so that the hot silvered consumer drive works out to a net performance gain.
My scrub on 50GB of data takes just under 15m. Concurrent read traffic is not greatly impacted at home network levels.
Stream is the word. (Score:2)
Not streak to iPad. Stream. Streaking to iPad would require cleaning supplies at the point of impact.
Re: (Score:2)
That would be pretty funny. Ima try it.
Re: (Score:2)
Yeah that would be funny... if you were 5.
FreeNAS or Unraid. (Score:3, Insightful)
Look at FreeNAS or Unraid. Unraid has a 3-drive limit IIRC for the free version, but supports an unlimited amount of drives for the non-free version.
Re:FreeNAS or Unraid. (Score:4, Informative)
unRAID [lime-technology.com] does not support unlimited drives in any version. It comes in 3 (free), 6, and 21 drive versions.
I've been using it for a year or two and, while it's got some limitations, it's a good choice for this application. Mostly because the guy's using a random collection of old drives and is likely to have bad sectors across multiple drives at some point. There is no striping with unRAID so the worst thing that can happen is he'll have to mount the drives individually and copy the data to a new array.
Another vote for unRAID (Score:4, Informative)
I've been using unRAID for years and it's a great solution for a small home NAS box. If you ever change your mind about using it, you simply turn your parity drive into a regular Linux boot disk, and the remaining drives are just regular ReiserFS filesystems. Most RAID systems and/or software would require much gymnastics to de-RAID them, if it could be done at all.
In addition, hardware-based striped RAID makes you dependent on the RAID controller; if it dies and you can't find a replacement compatible with the original's striping mechanism, your data just disappeared.
Windows 8 Storage Spaces (Score:3, Interesting)
FreeBSD and ZFS (Score:3, Interesting)
FreeBSD has fast ZFS support, which is a wonderful filesystem for fighting data loss.
Re: (Score:2)
Agreed. Do this for fun, not for anything practical - I mean, there are USB thumbdrives larger than your 100GB drive!
Pair the drives up to match them as closely as possible so that you have 5 redundant mirrors. More realistically, you'll only have space or SATA hookups for 4 pairs.
Anyway, use FreeBSD and zfs to pair them up and then combine the 4 or 5 pairs into a single pool. As the drives die or as you acquire bigger drives, you can hook up an additional drive and use the "zpool replace" command to swa
You can , but probably without RAID (Score:2)
If you just use LVM and group all your disks together into one volume group, that would make the array appear as "one big drive" to the system.
Redundancy (RAID) would not work so well, because your array would be limited by the smallest disk in the array. Sure, RAID the 300GB to the 1TB, but you end up with a RAID-1 array of 300GB.
Re: (Score:2, Insightful)
That sounds awesome. Should have a MTBF of about 20 minutes
The mega surplus continues! (Score:5, Interesting)
Ah ha! Who else amongst you has a huge surplus of huge hard drives going unused, now that netflix streaming has displaced 60% of all the crud you had spinning idle in a closet for the 3 years before you signed up?
My storage requirements went from about 3 terabytes to about 30 gigabytes over the past 2 years. I believe I am the archetype and that I am doing the same thing as the average geek. I suspect there are piles of huge disks sitting offline because of this streaming displacement.
It cost me about 18 dollars a month to leave my x86 file server online, idle (Kill A Watt meter, NH rates); netflix is cheaper than that.
Come on, who else has a comment related to this.
Re: (Score:2, Funny)
The OP is looking to build a giant porn vault. All of the other words in his post are just cover.
Notice how he talks about "visibility" and "streaking". He's got the porn on his mind.
Netflix is very light on the porn, so it is N/A here
Come on guys, we need a modern porn vault solution here, iPorn, Porndroid, Porno on Rails, something big
Re: (Score:2)
Not me. Before Netflix streaming, I got most of my movies via...Netflix! In fact, I kind of curse Netflx streaming, because I find I'm wasting a lot more time watching movies and shows than I used to, and less time reading and working on my hobbies.
Re: (Score:3)
>>>Who else amongst you has a huge surplus of huge hard drives going unused
No.
>>>My storage requirements went to about 30 gigabytes
WOW. I still download a ton of stuff via uTorrent, and I need the space since I acquire movies/shows faster than I watch them. I also need the space to "seed" back the stuff I've taken. My 1 TB drive is quite full.
I don't subscribe to Comcast or Netflix or anything else. It's just entertainment... not really worth paying for it, when I can acquire it for fre
Re: (Score:2)
There's no netflix for classic video games. Until I can hop on the cloud and download an ISO of whatever PC Engine game I happen to want to play today, I'm going to have to keep the TOSEC on my hard disk.
Re: (Score:2)
Not me. My terabytes of data were being used for PVR recordings, and Netflix doesn't have enough current content to replace that function. Hulu was getting close, but not quite, because of the random restrictions on what can be watched on a TV set vs. in the browser, and the numerous shows which would expire from the queue before you could watch them. With Hulu's proposed cable subscription requirement, it looks like my PVR will be getting even more use in the future, not less.
Re: (Score:2)
Interesting.. I have the exact opposite experience. Lots of extra drives from constant upgrades. I have around 7-8TB (including 3TB of backups).
The file server gets maxed out.. so I upgrade the drives with new HDs.. take the old ones and put them in the backup server (so it has enough space for the new data).. and then remove the oldest drives from the backup.
The "old" drives get placed in any new computer that gets built.
Recently, I've had 2x250GB drives fail... both with about the same amount of time in s
Re: (Score:3)
You are definitely the archetype... of people who really trust The Cloud (tm). I do not.
1) I used to have Netflix. Then they jacked up their price and lost something like 60% of their already mediocre streaming selection. Their boneheaded CEO is still there. I have not seen a press release from Netflix that has convinced me it's time to go back.
2) All of the Internet providers in my area are media companies that want to sell you TV service and have basically announced they don't believe in net neutrality. M
Do you care about your electricity bill? (Score:4, Insightful)
Do you care about your electricity bill at all? If you do, it'll probably be cheaper over the course of 6-12 months to buy a simple NAS box or a cheap atom board and plug in a couple of 2TB hard drives.
WHS V1 (Score:4, Interesting)
Don't do it. (Score:2)
Greyhole! (Score:5, Interesting)
Why am I the only one saying this? Set up Greyhole [greyhole.net], throw a bunch of disks at it, and enjoy! And to all those saying "those drives are going to die soon": you can actually tell Greyhole that you consider a drive "broken" and it will still use most of its storage (albeit redundantly) until it does die and has to be removed.
Re: (Score:2)
Re: (Score:2)
The two key points I see with Greyhole are that it works with differing drive sizes and that you can set, per folder or per file (I didn't dig into the exact configuration setting), what redundancy you want.
So yes, you'll lose more space than if you use RAID 5 or 6, but it looks really easy to set up. It also looks slightly more likely to catastrophically fail than RAID 1, in the event that a drive fails before Greyhole duplicates the new files on it.
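To put numbers on the space tradeoff: here's a back-of-the-envelope comparison of usable capacity for RAID 5, RAID 6, and Greyhole-style two-copy duplication. This is a sketch assuming equal-sized drives (which Greyhole doesn't require, but it keeps the arithmetic honest); the five-drive/1 TB figures are illustrative, not from the thread.

```python
def usable_raid5(n, size):
    """RAID 5: one drive's worth of capacity goes to parity."""
    return (n - 1) * size

def usable_raid6(n, size):
    """RAID 6: two drives' worth of capacity go to parity."""
    return (n - 2) * size

def usable_duplicated(n, size):
    """Every file stored twice (Greyhole-style, 2 copies per file)."""
    return n * size / 2

drives, size_tb = 5, 1  # hypothetical array: five 1 TB drives
print(usable_raid5(drives, size_tb))       # 4 TB
print(usable_raid6(drives, size_tb))       # 3 TB
print(usable_duplicated(drives, size_tb))  # 2.5 TB
```

So with five equal drives, two-copy duplication gives up another 1.5 TB versus RAID 5; the payoff is that mixed drive sizes and per-share redundancy levels still work.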
Re: (Score:2)
That's what I was going to suggest, if I could have remembered the name. Greyhole. heh
Don't Build.... Buy a Drobo (Score:4, Insightful)
1. Throw away everything that isn't a standard-sized SATA drive.
2. Buy a Drobo (http://www.drobo.com/products/professionals/drobo-fs/index.php).
3. Put the five (or eight) largest drives in the Drobo.
4. Throw away the rest of the drives.
5. When you get a drive that is larger than the smallest drive in your Drobo, pull the smaller drive out and insert the larger drive.
6. Find peace in the universe.
When I was young and foolish, I tried to keep every drive spinning, even long after its time had passed. I had *nix boxes stuffed with drives and SCSI-attached arrays. I learned a lot about drive management and system administration but, mostly, I learned that there is a value to my time and my time isn't best utilized playing disk administrator.
Drobo doesn't pay me a dime and I am still more excited about Drobo than any technology product since TiVo.
Cheers,
Matt
Re: (Score:2)
I have to agree 100%. I bought a Drobo several years back, and it's been extremely reliable. When a disk dies, it alerts you, and you slot in another one. It just works as advertised, and my life has been a ton easier since then. Before that I was using various disks and RAIDs and all sorts of things, but they're a pain in the butt when you run out of space or a disk dies. Get a Drobo and be done with it.
As for backing up the Drobo, unfortunately you pretty much have to get another Drobo. I mean, in the
Re: (Score:3)
If you don't need a NAS, just some form of aggregate storage, non-networked alternatives are made by Mediasonic [mediasonic.ca] and Sans Digital [sansdigital.com]. In my case I just needed something to throw my old drives in and power it on every couple weeks to backup my ZFS file server. So one of these
Re: (Score:2)
4 x 1TB drives, for a RAID 0 stripe.
How do you handle backing up the 4TB of data?
Re: (Score:2)
4 x 1TB drives, for a RAID 0 stripe.
How do you handle backing up the 4TB of data?
You have the same backup problem with a mishmash of drives that you cobble together on your own...
Why? (Score:3)
I've been in the same situation; it was only a year ago that I was running on multiple 10GB drives and an old 120GB laptop drive because I only had IDE in my server. So I went to Newegg and got a low-powered motherboard with an onboard E350 CPU [newegg.ca] (doesn't even need a fan) for $130, 8GB of RAM (I use ZFS) for $50, and a 2TB drive for $70 (drives have gone up since then, but not terribly high), and threw the thing into an old case with a cheap power supply. That's basically an entire system with about 15 times the storage space of my old one for $250 shipped to my front door, and the system can take 5 more drives without so much as an expansion card.
Synology (Score:2)
StableBit DrivePool + WHS 2011 (Score:4, Interesting)
Full disclosure: I am the developer
Check out: http://stablebit.com/DrivePool [stablebit.com]
It's a software disk pooling solution that combines any number of disks of any size into one big virtual pool. You can designate certain folders to be duplicated on the pool. Any files placed in duplicated folders will be stored on 2 disks at the same time.
The implementation is a hard core NT kernel driver with a virtual disk. There is a full NT kernel storage stack, no user mode hacks here.
Unlike RAID and similar solutions, all your pooled files are stored as standard NTFS files on each individual disk in the pool. This means that you can simply plug in any pooled disk to any system that can read NTFS to get at your files in case disaster strikes.
It's commercial software, $20 USD per server.
JBOD (Score:2)
Put all of the small drives in a JBOD array and use the 3TB as an internal backup because RAID is not a backup solution.
Use FreeNAS or OpenFiler.
Drobo performance sucks (with more than one concurrent user).
Low-end core i3 processor and lots of RAM because RAM is cheap these days.
take look at amahi.org (Score:5, Interesting)
Look at amahi.org; it is a turn-key home server based on Fedora, with Greyhole as its replication engine.
Dump anything less than a TB except one drive and you are set.
You set the replication level by share and it keeps a full copy on each drive until the replication count is reached for that file on that share.
Example:
You have four 1TB drives and one 500GB drive.
You have the share photo configured to replicate on each drive.
You have replication off on the video share.
You have a replication level of two on the mp3 share.
When you store a photo, Greyhole writes it to all five drives.
When you store a video, it goes on a random drive.
When you store an mp3, it goes to two random drives.
So if you lose a drive you should lose about 25% of your videos, 6.25% of your mp3s, and 0% of your pictures.
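For anyone who wants to redo the stats (the follow-up comment admits the percentages above are off): here's a minimal sketch of the expected loss, assuming equal-sized drives and each copy of a file landing on a distinct drive. The `expected_loss` function and its numbers are my own illustration, not Greyhole's behavior guarantees.

```python
from math import comb

def expected_loss(n_drives, copies, failed):
    """Fraction of files expected lost when `failed` of `n_drives`
    equal-sized drives die, given `copies` replicas per file,
    each replica on a distinct drive. A file is lost only if all
    of its copies sit on failed drives."""
    copies = min(copies, n_drives)
    if failed < copies:
        return 0.0
    # Probability that all `copies` replicas fall among the failed drives.
    return comb(failed, copies) / comb(n_drives, copies)

# Five drives, one fails:
print(expected_loss(5, 1, 1))  # videos, no replication: 0.2 (20%)
print(expected_loss(5, 2, 1))  # mp3s, two copies: 0.0
print(expected_loss(5, 5, 1))  # photos, copy on every drive: 0.0

# Five drives, two fail:
print(expected_loss(5, 2, 2))  # mp3s: 0.1 (10%)
```

So a single drive failure costs roughly 20% of the unreplicated share (a bit less for the smaller drive, since it holds less data), and the two-copy share only starts losing files once a second drive dies.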
Re: (Score:2)
Fuck, I meant lose 2 drives, and I rounded up to 25% to compensate for the fact that one drive is smaller than the others, so it has less chance of being used than the others.
Re: (Score:2)
Note to self: do not do stats after work.
Just throw them away (Score:5, Informative)
Powering 10 old hard drives for any length of time is going to be much more expensive than just getting a new one. A modern drive uses about 5W on average; these oldies probably use much more. 10 drives using 10 watts each, at $0.10 per kWh, will set you back about $88 per year. You do the math.
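The arithmetic, for anyone who wants to plug in their own wattage and electricity rate (assumes the drives spin 24/7; the 10 W per-drive figure is the comment's estimate, not a measurement):

```python
def yearly_cost(watts, price_per_kwh=0.10, hours=24 * 365):
    """Electricity cost of running a constant load for a year."""
    return watts / 1000 * hours * price_per_kwh

# Ten old drives drawing ~10 W each, at $0.10/kWh:
print(round(yearly_cost(10 * 10), 2))  # 87.6 dollars/year
```

At that rate, two or three years of idle spinning pays for a new multi-terabyte drive outright.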
www.pogoplug.com (Score:4, Interesting)
Use BtrFS or Drobo (Score:5, Informative)
Use Drobo if you are time-poor and money-rich; use Btrfs if you are time-rich and money-poor.
Btrfs's capabilities are nothing short of amazing. Here is a vid about it:
http://www.youtube.com/watch?v=9bQc_z-Cb7E [youtube.com]
Re: (Score:2)
Thanks for the link!
Please mod parent up -- it's not too often you get jazzed about a modern filesystem design and implementation!
USB/eSATA Drive enclosures (Score:3)
My advice would be to find some inexpensive USB or eSATA drive enclosures for the smaller drives and just use them as off-line storage.
Take some data you don't need instant access to, put it on one drive, and make an identical copy on a second for backup. Put them in a corner and only power them up when needed.
Or just use the smaller drives as partial backup for a larger NAS. Can be handy if you suddenly need to grab a collection of files and go.
Like everyone else is saying, no sense keeping them spinning and eating up power. Might even think twice about the larger drives unless they are power efficient models.
ZFS + FreeBSD (Score:2)
Sell them all on eBay... (Score:2)
...after you wipe them, and buy a real NAS like a ReadyNAS, Synology, etc. smallnetbuilder [smallnetbuilder.com] is a great resource for this.
Alternatively, use FreeNAS and build your own, with recent drives.
Re: (Score:2)
Well, lots of things.
Media can suck up a lot of drive space...even if it is all legal!! You might want to rip all your CDs to various formats (flac for good stereo in the living room, mp3s for ipod or car).
Ripping your dvds/blurays...to watch conveniently. Then with all this, you might like a few backups so you don't lose all that ripping work too easily.
I'm about to buy a new high end DSLR....storing pictures....
Re: (Score:2)
I'll add that having a single machine for backups is very convenient. I have a FreeBSD ZFS machine in the basement, and I run CrashPlan on it as well as netatalk so it can pretend to be a Time Capsule. Whenever I fix a friend/relative's computer I make them install CrashPlan on their own computer and point them to my server. Sure it uses some of my drive space up, but it saves me hours (days?) of time when their machine dies.
As you point out, all that digital crap sure adds up - and I have a fair amount of
Re: (Score:2)
My music collection alone is in the hundreds of GBs. It's not at all inconceivable to need this much storage if you're trying to digitize your physical media collection. I'd probably need 20+ TB to rip everything I own on CD/DVD/BD; I'm just waiting until you can get a good brand for about $20/TB or so.
Re: (Score:2)
Maybe the power consumption problem could somehow be worked out by starting and stopping disks based on idle timeouts? I don't know how well that kind of setup would play with a RAID configuration, but perhaps there's some other method too.
I still kinda like the concept of keeping old hardware running for ecological reasons (making new stuff takes a lot of power and resources). And it would be interesting to find some kind of solution for this case even though it's gonna be somewhat hacky. My two cents is t
Re: (Score:2)
2006 called, and it's pissed that MS stole all those features from ZFS....
Re: (Score:2)
Well, you could wait 2 months for a release candidate of an OS that few people will touch before the first service pack...
Or you could use ZFS, which has had those features for years already and is supported on several stable tried and tested platforms.
Re: (Score:2)
And tying an onion to your belt used to be the style at the time.