Best Solutions For Massive Home Hard Drive Storage? 609
i_ate_god writes "I download a lot of 720/1080p videos, and I also produce a lot of raw uncompressed video. I have run out of slots to put in hard drives across two computers. I need (read: want) access to my files at all times (over a network is fine), especially since I maintain a library of what I've got on the TV computer. I don't want to have swappable USB drives, I want all hard drives available all the time on my network. I'm assuming that, since it's on a network, I won't need 16,000 RPM drives and thus I'm hoping a solution exists that can be moderately quiet and/or hidden away somewhere and still keep somewhat cool. So Slashdot, what have you done?"
Define "massive" (Score:5, Insightful)
Re:Define "massive" (Score:5, Funny)
How much data constitutes "massive"?
640K of memory should be enough for anybody.
Comment removed (Score:5, Insightful)
Re:Define "massive" (Score:4, Informative)
What you want is cheap 5U rack servers with either OpenFiler [openfiler.com] or FreeNAS [freenas.org]. Personally, I like openfiler better. iSCSI is going to be the way to go unless you want a thick OS on the server and all the other admin issues that come with that. Plus, with openfiler you can still do block level snapshotting and change replication. Also, I've heard good things about Open-e [open-e.com] as well. And if you want to mess with ZFS, there's OpenSolaris.
What you do is get yourself a huge (4 or 5U) barebones server from Newegg or a cheaper place. Make sure to get a couple of good SATA RAID controllers. Not FakeRAID! SAS would be better, but the drives cost a lot more, even the nearline drives that are basically SATA drives with a SAS interface. Adaptec makes some real SATA RAID cards, and there's 3ware as well. You don't have to worry much about the cache, but if it isn't battery backed you're going to write through it anyway. Who cares, you have 16 spindles! Load it with a bunch of drives. They don't have to be the biggest; more spindles means more performance anyway. Sixteen 500GB drives would be fine, for instance, because you can then give up a few of them to RAID 6 parity and hot spares and still have plenty of capacity. Get the slowest drives you can, and maybe a little SSD to use as a boot drive (there are small ones for around $100). You could even boot from a USB key if you feel like the hassle. You don't need a ton of processor. A Celeron would probably work, but you probably do want something 64-bit so you can put a bunch of RAM in it as you get more advanced.
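To make the drive-count trade-off concrete, here's the back-of-the-envelope math for that 16x500GB example (the spare count is an assumption; RAID 6 always costs two drives of parity):

```shell
# Rough usable-capacity math for a 16x500GB build:
# RAID 6 burns 2 drives for parity; assume 2 hot spares on top.
DRIVES=16
SIZE_GB=500
SPARES=2
PARITY=2
USABLE=$(( (DRIVES - SPARES - PARITY) * SIZE_GB ))
echo "${USABLE} GB usable"   # 12 data drives worth of space
```

Even after giving up four drives, the array still nets 6TB, which is why lots of small spindles can beat a few big ones.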
Also check out Storage Search [storagesearch.com]. Not a very well designed site, but tons of good info under iSCSI, SAN, and NAS. If you're rich, you might try an EqualLogic; they run around $28,000 for 8TB but are pretty slick.
Re:Define "massive" (Score:5, Insightful)
Does using RAID controllers actually provide superior price:performance to using software RAID? Last I checked, the processors on most cheap RAID controllers were slower than dogshit and using md under Linux would give you better performance than basically any of them, at the cost of some CPU. But since CPU is cheaper than RAID, it probably makes sense. For example, going from a Phenom II X3 720 to a Phenom II X6 chip of the same clock rate takes the CPU from $100 to $200. How much would it cost to go from four crappy RAID controllers to four good ones? It would probably cost you at least $400.
The answer is probably to just go ahead and install Debian on a machine with as many CPU cores as you want to blow money on, and use software RAID. Put lots of system RAM in it, which the OS will automatically use for disk buffers. Current versions of grub work fine with USB keys, because they can use a UUID for the root device, and the UUID never changes. If you want it to boot quickly, find a motherboard with coreboot support. If you want external disks, FireWire can come out cheaper than eSATA if you get the external disks or bare enclosures at a good price. It makes maintenance a lot easier, but involves substantial power waste from all those inefficient wall warts.
P.S. OpenSolaris is circling the drain, please don't suggest it to anyone for anything.
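A minimal md software-RAID setup along those lines might look like the sketch below. The device names and mount point are placeholders; adjust for your actual disks.

```shell
# Sketch: create a 6-disk RAID 6 array with Linux md under Debian.
# /dev/sd[b-g] and /srv/storage are placeholders for your hardware.
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/storage

# Check array health and rebuild progress at any time:
cat /proc/mdstat
```

The nice part, as the parent notes, is that `/proc/mdstat` and the on-disk md format are stable and well documented, so a dead controller never strands your data.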
Re: (Score:3, Insightful)
I've heard one too many sad stories about old on-disk RAID structures not being compatible with the new version of the old failed RAID card. I prefer the md device since it has been consistent for quite a while and the on-disk format is well documented.
Re: (Score:3, Informative)
When a drive fails, SW RAID doesn't allow one to boot the system unless one has (1) Used RAID-1
True enough, but (honestly) how hard is it to use RAID-1 with hot spare(s) for your boot partition, and RAID5/6 for everything else? (answer: not very, I'm doing this at home.)
Set up the BIOS to try the two mirrored disks in succession while booting.
Most modern motherboards already do this. Contrariwise, even if you have to go out of your way, it's *still* much easier than screwing around with driver disks for HW cards when installing.
SW RAID often exhibits very poor performance when a drive fails: the underlying drivers try to make operations against the failing disk succeed, and will often retry and wait for extended periods to force the operation through.
Never seen this happen. "bad" drives on SW RAID mark out just as quickly as those on cheap HW controllers.
Standard disk controllers do not support hot swap. So when a drive fails, replacing the drive involves shutting down the server, swapping the drives, and then bringing it back up.
Bull-fucking-shit.
I regularly attach an
Re: (Score:3, Insightful)
Your analysis completely ignores the cost of the electricity to run a setup like that.
I went from an older similar setup with about 1TB of storage to a dedicated NAS box with 2TB of storage with similar performance characteristics -- and saved $40 a month in electricity.
A 500GB drive draws as much power as a 2TB drive, and server motherboards and power supplies devour power.
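The electricity argument is easy to quantify. Here's the watts-to-dollars formula with illustrative numbers (the wattage saved and the $0.12/kWh rate are assumptions; plug in your own):

```shell
# Back-of-the-envelope: continuous watts saved -> dollars per month.
WATTS_SAVED=150          # assumed: old multi-drive server vs. small NAS
RATE_CENTS_PER_KWH=12    # assumed electricity rate
HOURS_PER_MONTH=720      # 24 * 30
CENTS=$(( WATTS_SAVED * HOURS_PER_MONTH * RATE_CENTS_PER_KWH / 1000 ))
echo "~\$$(( CENTS / 100 )) per month saved"
```

At these example numbers a 150W reduction is worth about $13/month; a $40/month saving implies the old setup was drawing several hundred watts more than the NAS.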
Re: (Score:3, Interesting)
I'd consider 1TB small today, and this guy probably does as well; you can't fit many 1080p movies on one 1TB disk.
My own setup is a box with two Thecus N5200Pro NASes NFS-mounted. One has 5x1TB, the other 5x2TB. Both are RAID-6 arrays. I know I throw away 6TB of storage, but I'd rather spend a couple extra bucks than lose my episodes of Dharma & Greg.
If something goes wrong on the 5x2TB array I'm up for a 2 day array rebuild though, praying no other disk fails as well. The newer Thecus NASes
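A quick sanity check on how many 1080p movies actually fit per terabyte (the ~10 GB per movie figure is an assumed typical high-bitrate encode; raw or remuxed files run much larger):

```shell
# How many 1080p rips fit on one disk, assuming ~10 GB per movie?
DISK_GB=1000
MOVIE_GB=10
echo "$(( DISK_GB / MOVIE_GB )) movies per 1TB disk"
```

Roughly a hundred films per terabyte, so a serious collection outgrows a single drive fast.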
Re: (Score:3, Insightful)
You'd have to have one hell of a bit torrent hobby/debilitating movie watching problem to need more than 2 TB of video on tap on a hard drive for entertainment purposes.
Unless you're doing HD video editing, or you like to keep a copy of every picture ever taken by your 8+ MP DSLR in RAW format, few people actually need that space. You might be able to fill 100GB with installed video games but the average person who is buying a 1TB drive is probably upgrading granny's computer and thinking "well hey,
Re: (Score:3, Insightful)
250 movies that you watch every year, in addition to the ones you rent, or go see with friends, or simply non-movie stuff you watch like sitcoms and/or live events like the news/sports? You must only work 2 hours a day to keep up with your busy viewing schedule and still have time to sleep, shower and spend time with other humans (they exist outside of movies, you know). 10 movies that you re-watch year after year I can understand, but 250 just blows my mind. Do you schedule that a year in advance? What hap
Re:Define "massive" (Score:5, Informative)
When I hear a question like this, I usually recommend heading over to the NCIX forums. There's some crazy guy over there - death_hawk - building a 100TB array. [ncix.com]
What I did was a bit less ambitious. A regular old NAS running off a cheap non-RAID SATA card in a case with lots of HDD bays.
For interest, I'll throw up a build that easily scales to 12TB. Since you mentioned noise, I'll prioritize that instead of capacity. I'll use a case geared for silence, a fanless mobo/cpu, a quiet PSU, WD Green HDDs, and a ridiculously cheap SATA card.
Case - 8 bays: http://www.ncix.com/products/?sku=51277&vpn=6900654&manufacture=Fractal%20Design [ncix.com] *1
Motherboard/CPU - Silent: http://www.ncix.com/products/?sku=50891&vpn=AT5NM10-I&manufacture=ASUS [ncix.com] *2
DDR2 - 1GB: http://www.ncix.com/products/index.php?sku=18584&vpn=VS1GB667D2&manufacture=Corsair&promoid=1114 [ncix.com] *3
PSU: http://ncix.com/products/?sku=33357&vpn=CMPSU-400CX&manufacture=Corsair&promoid=1114 [ncix.com] *4
SATA Card: http://ncix.com/products/?sku=19892&vpn=SY-SA3114-4R&manufacture=Syba [ncix.com] *5
HDD - 2TB 4KB http://ncix.com/products/index.php?sku=49591&vpn=WD20EARS&manufacture=Western%20Digital%20WD&promoid=1114 [ncix.com] *6
HDD - 2TB 512b: http://ncix.com/products/index.php?sku=36130&vpn=WD20EADS&manufacture=Western%20Digital%20WD&promoid=1114 [ncix.com] *7
OS: FreeNAS, Ubuntu, Win7, Other *8
*1 Only six will be filled. 6 SATA ports.
*2 Case still requires fans/airflow.
*3 A NAS probably only needs 512MB, but 1GB is cheap. A Win7 NAS may benefit from 2GB.
*4 Must be capable of spinning up 6-8 HDDs at once.
*5 Must be flashed with new non-RAID BIOS to avoid silent data corruption for > 1.0TB HDDs; disk read/write speeds around 30MB/sec, in my experience, on ext2. (but running with a VIA CPU - not dual-core Atom)
*6 Must be specially formatted under Windows and Linux. (Most distros only support 4KB sectors when the drive reports 4KB - these report 512b to maintain XP compatibility)
*7 May have longevity issues. (too early to say right now - lots of complainers, which reminds me of the 7200.10 days. A heck of a lot of those chirping barracudas perished early)
*8 Please verify SATA card support first. Ubuntu and FreeNAS work fine with this card, but I've never checked if Win7 has drivers. Do note that you'll have to flash it. *9 If that's a problem, buy a more expensive card. (which may give better performance, and SATA2 support) Promise [ncix.com] makes nice non-RAID SATA cards.
*9 Flashing the PCI SATA card requires making a DOS boot CD: http://www.hiren.info/pages/bootablecd [hiren.info]
Please note: A solution like this will take 12+ hours to set up. It's highly likely you'll blow a whole weekend, even if you know what you're doing. You may have to try multiple distros to get proper Atom D510 support, unless you go with Windows. When I put mine together, atoms weren't available affordably, so I went with a cheap VIA board. Ironically, Ubu
Re: (Score:3, Interesting)
I've got room for 30TB of data storage in each of two machines, for a total of 60TB. However, I have only populated them to around 12TB right now; I don't add drives till I'm out of space! :-) Not what I would call massive yet, but getting there!
Re:Define "massive" (Score:5, Informative)
My pricing research indicates 2TB disks are slightly cheaper per GB than 1TB.
Re: (Score:3, Informative)
Check again: you're almost certainly comparing 1TB 7200RPM drives to 2TB 5900RPM drives. And Hitachi drives don't count, being the cheap pieces of garbage they are.
Re: (Score:3, Interesting)
Check again: you're almost certainly comparing 1TB 7200RPM drives to 2TB 5900RPM drives. And Hitachi drives don't count, being the cheap pieces of garbage they are.
When it's going to be used by only a handful of people, nearly always in a sequential access pattern, on the other end of a 1GbE link, why would you want hotter, noisier 7200 RPM drives?
Re: (Score:3, Informative)
I have a lot of storage for the same reason that the OP does, and I PREFER 5400 RPM drives. They run cooler and are still faster than what I need.
I prefer WD Greens, but Samsung EcoGreen works well too. I buy the green ones because, again, they run cooler.
1.5TB drives have been cheapest $/GB for a while now, though I suspect 2TB will take its place, especially after the 3TB drives hit the shelves.
Re: (Score:3, Interesting)
Yes it's a fair bit of work that seems unnecessary when you cou
Re:Define "massive" (Score:4, Informative)
Except that this is completely untrue. Even Windows 95 scanned the table of free blocks for a reasonable area of consecutive free blocks.
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem
DWORD ContigFileAllocSize
http://technet.microsoft.com/en-us/library/cc768196.aspx
Re: (Score:3, Insightful)
Actually NTFS is pretty good at keeping files unfragmented.
If a program opens a new file and then immediately seeks to the end of it to fix its size, NTFS will look for a contiguous block of free space to save it in. NTFS caches all writes, so it can wait to see what the program actually does with a file before committing it to disk.
It also has a system designed to reduce the fragmenting effects of small files by being able to store their data in the same block as their metadata.
The only major fragmenta
Re: (Score:3, Insightful)
Yeah, in 1992 maybe... not sure what version of Windows you're comparing to, but that hasn't been true for years.
Something like this (Score:5, Interesting)
Do something like this. Put it in a case / box / cabinet of your own design since you don't need the rackmount capability.
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/ [backblaze.com]
Re:Something like this (Score:4, Informative)
Do something like this. Put it in a case / box / cabinet of your own design since you don't need the rackmount capability.
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/ [backblaze.com]
If possible use something like ZFS (or btrfs if you feel confident about it) so that you get checksumming data protection.
If you're going to put all your eggs in one basket, you better watch that basket very carefully.
The creators of that kit don't use any kind of redundancy with-in the box because their custom software stack handles replication (kind of like Google FS / Hadoop FS).
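A home build won't have Backblaze's replication layer, so leaning on ZFS's checksumming is the practical substitute. A minimal sketch (device names are placeholders for your disks):

```shell
# Sketch: a raidz2 pool (double parity, like RAID 6) plus periodic
# scrubs to catch silent corruption before a rebuild exposes it.
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool scrub tank        # walks every block and verifies its checksum
zpool status -v tank    # reports any checksum errors the scrub found
```

Scheduling that scrub weekly or monthly is the "watching the basket" part.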
Re:Bzzzt. Still Wrong. (Score:4, Insightful)
Re:Something like this (Score:4, Insightful)
The only caveat about that particular solution is the lack of redundant power, poor serviceability in the rack (may not apply like you said), and slow speed.
Their solution achieves the density it does because they use SATA port multipliers, but those effectively create bottlenecks and lower overall speed. It works for Backblaze's application requirements, but YMMV.
Protocase.com makes the enclosure and will sell it to you for a pretty reasonable price. Getting all the parts is not such a big issue. I think we estimated we could build one without drives for less than $3k.
If you don't have it in a rack, then serviceability will be a lot better for sure. Rackmount solutions require cable management and heavy duty slide rails, and wide aisles, in order to gain access to the drives. The backplanes are parallel to the ground, facing up, and require taking the top off to access. Not exactly IT friendly.
Since the person in the article is not using this in a datacenter, cooling is going to be an issue. I suspect Backblaze survives due to hot/cold aisles and plenty of airflow. Sticking one of those enclosures in a closet without ventilation/cooling is a recipe for disaster.
Cheap NAS (Score:4, Informative)
Enjoy!
Re: (Score:2)
My 2 cents (Score:2, Informative)
Re: (Score:2)
Paranoia (Score:5, Funny)
So Slashdot, what have you done?
Why? What have you heard??
ZFS (Score:3, Interesting)
My personal storage solution consists of a 4U rack case with a computer with a c2d CPU, gig-E NIC, a few gigs of ram, a bunch of 7200 RPM disks and FreeBSD on the system disk (I also have the system disk mirrored just in case). All the storage disks are then pooled using RAIDZ. Pretty simple yet powerful. Just don't expect too much in the way of performance.
Re: (Score:2)
Re:ZFS (Score:5, Interesting)
ZFS + Solaris.
I have a standard ATX case with 4-in-3 [newegg.com] adapter from Newegg. I didn't get the more expensive ones with trays because I didn't need to hot swap.
I have 2x1TB drives in ZFS mirror for boot. 5x1.5TB drives in RaidZ as a tank and 2x200GB drives in mirror with a virtual block device for Xen Debian and Windows 7.
OpenSolaris is amazingly simple to use, if you're just doing your home network.
At the most basic level:
zpool create tank c5t0d0s0 c5t1d0s0
zfs set sharenfs=on tank
zfs set sharesmb=on tank
zfs set shareiscsi=on tank
Now your new drives are all shared over NFS, SMB and iSCSI.
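From a Linux client, picking up the NFS export above is one command (the hostname and mount point here are placeholders):

```shell
# Mount the ZFS box's NFS export on a Linux client.
# "solarisbox" and /mnt/tank are placeholder names.
mkdir -p /mnt/tank
mount -t nfs solarisbox:/tank /mnt/tank
```

Windows clients just browse to the SMB share, and the iSCSI target shows up as a raw block device to any initiator.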
I keep looking for the old school full height 'desktops' at a bargain store or so. Search newegg. 3.5" external works just as well as internal.
This [newegg.com] has 11 3.5" bays and 3 x 5.25" bays. With a 4 in 3 linked above you could have 15 hard drives in a case for $100. Or if you care about hot swappability This one [newegg.com] has 20 hot swap bays (at 3x the cost).
If you want more performance, get some SSDs to work as the ZFS "cache".
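Attaching those SSDs is a one-liner per device. As a sketch (device names are placeholders): ZFS calls the read cache an L2ARC, and the related write-side option is a separate intent log (SLOG).

```shell
# Add an SSD as a read cache (L2ARC) and another as a log device
# (SLOG) to the existing pool. Device names are placeholders.
zpool add tank cache c6t0d0   # accelerates repeated reads
zpool add tank log c6t1d0     # accelerates synchronous writes
```

For a media server that's mostly streaming large sequential files, the cache matters far less than it would for a database, so this is strictly optional.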
How much is a lot? (Score:3, Insightful)
Cheap solution (Score:5, Funny)
Re: (Score:2)
It occurred to me once that a person could write a FS driver that did something like that, but my immediate next thought was "Naaah.. You're insane, man." The fact that somebody has actually done it makes me giggle uncontrollably. I have to stop letting sanity get in the way of ambition -- I coulda been there first!
Hmmm. With a couple thousand "reflector bots" I wonder how much data you could store in the form of IRC messages flying back and forth.
Re: (Score:2)
Install ii [suckless.org] from Suckless and then you just need to do 'cat $file > server_dir/channel_dir/out'.
Filesystems are awesome.
Re: (Score:2)
Also don't forget the bandwidth required to push and pull all those HD videos.
Why do you need them available at all times? (Score:5, Informative)
I used to work for ABC news, and we never kept archive footage always accessible like you want. If we wanted something really old, we'd have to dig it off a tape, an unplugged hard drive or powered-off computer, or find another news agency that had the footage and grab it off a satellite feed. And this was a 24/7 TV news station responsible for national news programming, where we would be tracking stories for years. If we didn't need a system where everything was instantly accessible, then needing it at an individual level might, in my opinion, be overkill.
I have over 30TB of music, movies, and raw video footage on my home computers and I just keep everything on separate external hard drives. I label the drives, back them up twice each, and then keep an index in a .txt file that is easy to search through. So if I want a 1080p backup copy of Blade Runner I search 'Blade Runner' in the .txt file and I see it's on drive 'A' and then I plug in drive 'A' and dump the movie on my computer. I also keep an external drive that has backups of every TV show I own on DVD. So if I want to watch The Wire then I plug in the external drive labeled 'TV' and have at it.
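The flat-file index approach above is trivially searchable from a shell. A sketch (the file name, format, and entries here are made up for illustration): one "title / drive label" line per item, queried with grep.

```shell
# Build a tiny example index: one line per item, tab-separated
# "title<TAB>drive label". File name and entries are illustrative.
cat > index.txt <<'EOF'
Blade Runner (1080p)	A
The Wire S01-S05	TV
EOF

# Case-insensitive lookup: which drive holds Blade Runner?
grep -i 'blade runner' index.txt
```

It's as low-tech as storage management gets, but it scales to tens of drives and never breaks.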
Re:Why do you need them available at all times? (Score:5, Informative)
That being said, the solution is SIMPLE. If you have a bunch of hard drives with data you want, you put together a low-end PC, install it into a server case, and fill it with hard drives and SATA controllers. When it's full, you build another one. You have 30TB of data, mostly not accessible. I have 10TB of data accessible from any internet-connected computer on earth, and it's twice as much storage as I actually use. It cost me about $500 to build and deploy a personal storage server, and it doubles as an HTPC. (I already had most of the drives and some parts.) It's likely most people here have enough hardware lying around to implement a basic storage server. There really isn't any reason not to do it. As a bonus, since it's not a machine you need to access directly most of the time, you can hide it in a closet and forget all about it.
Sure, you could buy a premade NAS/SAN or stand alone data box. However, they are costly and not any more suited for the job than an old machine, or low end new system. At least, not in a personal environment. If you actually require robust data storage, I'd suggest a NAS, from any number of sources. But now we are talking about 4k worth of hardware, and requiring proper power systems to be added if you really want longevity out of it. However, that's overkill for a home storage solution, no matter how much data you have. Simply because you don't need enterprise class data serving, when only one or two computers are accessing the data.
If you don't know how to build and deploy a system with lots of drives accessible over a network, then you probably started at the wrong website for help. You want DELL/HP/IBM small office sales line.
Re:Why do you need them available at all times? (Score:5, Funny)
Re: (Score:3, Informative)
For his "downloaded" 720p/1080p movies, it's reasonable to assume these are most likely re-encoded MP4/MKV files or TS streams, probably 2-15GB each. An external USB2 hard drive should be able to keep up with the transfer rates. As such, you could probably go with something like a USB hub and tons of external hard drives.
But I agree with you. I have a DishNetwork DVR with the external hard drive option. I currently have three external hard drives filled with movies. I keep a spreadsheet on th
Solution (Score:5, Funny)
Don't worry, we'll be right over and take care of everything. You'll never have to worry about it again.
MPAA
P.S. My sister, Riaa wants to know if you're into MP3s
SATA port multipliers (Score:5, Informative)
SATA port multipliers: a 5-to-1 multiplier for about $50, plus five 2TB drives, gives you 10TB off one SATA port.
Re: (Score:2)
I'm fairly sure you can tunnel SATA over IP. Not sure where you'd get an IP-to-SATA adapter, though, or how much such a device would cost. But it would give him fully networked storage (ANY box on the network would see all drives as though locally connected) without having to use a network filesystem and potentially be as extensible as an IPv4 private network range. But it's heavily dependent on price as to whether it's worth it.
Re: (Score:2)
Do port multipliers actually work? Last I heard they were quite unreliable. Besides, there are SOHO NAS boxes with 8 real SATA ports which gives you 16TB without any headaches.
Re:SATA port multipliers (Score:5, Informative)
They work, but will slow the system down considerably. If you connect 5 drives to 1 multiplier, the total speed you get is the same as 1 drive hooked in directly. In other words, a bottleneck.
So technically it is possible to hook up 250 SATA drives into a single SATA RAID card, but you are not going to be that impressed with the performance.
Re:SATA port multipliers (Score:5, Informative)
Well, from what I understand there are two modes. The cheap variety, command-based switching, hands out time slots, so 5 drives each get 1/5th of the time. The other variety, FIS-based switching, is traffic based: you can't exceed 3 Gbps, but the cumulative read/write speed can go up to that point. The SATA spec site has more [serialata.org].
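For the traffic-based variety, the per-drive ceiling under full load is simple division (3 Gb/s works out to roughly 300 MB/s of payload after 8b/10b encoding):

```shell
# Per-drive bandwidth when five drives share one 3 Gb/s SATA link
# (~300 MB/s usable after encoding overhead).
LINK_MBPS=300
DRIVES=5
echo "$(( LINK_MBPS / DRIVES )) MB/s per drive under full load"
```

60 MB/s per drive is slower than a single 2010-era disk can stream, which is exactly the bottleneck the grandparent describes; with one drive active at a time, though, that drive gets the whole link.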
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
I'm in the process of building a 5-bay SATA port-multiplier solution right now. What I've learned thus far is:
* Most commodity motherboard chipsets don't support port multipliers. You'll need an expansion card.
* If you have this much data, look into ZFS and RAIDZ2 for reliability. Avoid RAID5.
* The bigger the disk, the longer it takes to rebuild a degraded array
* FreeNAS is at an inflection point. If you're not scared, use PCBSD directly instead to serve your data.
* Y
5400 RPM (Score:3, Interesting)
I personally set up a download server that also functions as a media server to stream content to other devices. I put in a couple of 5400 RPM 1.5TB drives; they use less power and generate less noise and heat than a regular 7200 RPM drive, but since you're not running any applications off them, you won't really notice the difference in performance. Prices have gone down a bit, so the sweet spot for $/GB might be at the 2TB mark now. If you don't want to go for an entire computer, maybe a NAS solution would be best for you, with the same 5400 RPM drives. A NAS will have less room for disks if you really want *massive* amounts of storage, and you usually must purchase the unit plus the disks, whereas the PC you can build from spare parts lying around. I personally put Gentoo Linux on mine, but you also don't exactly need top-of-the-line equipment for a nice Windows XP install. The NAS, however, may have outputs directly for your TV and will take up less room and power.
Still, the key is 5400 RPM + 1.5/2TB.
Sounds like one hell of a porn collection (Score:5, Funny)
/. is definitely the place to ask...
Re: (Score:2)
Wow, that was quite far in the discussion before someone brought up that old joke.
I'd use it for extra storage for Tivos, personally.
Software RAID (Score:2)
Linux Software RAID5 has worked very well for me. Performance is decent (perfectly fine to play back and transfer 1080p video). I got one way back when 3x320GB was enormous and had a 1TB drive before they were remotely available.
Now I'm seriously considering 6x2TB for a 10TB RAID for my next server replacement. No need for an SSD for booting either, just set aside a tiny RAID1 partition (mirrored across all drives) for /boot and you're set. It boots and operates fast enough.
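That tiny mirrored /boot can be sketched like so (device and partition names are placeholders; the partition layout is an assumed example):

```shell
# Sketch: a small RAID 1 /boot mirrored across every drive, plus a
# big array on the remaining space. /dev/sd[a-f] are placeholders.
# metadata=1.0 keeps the md superblock at the end of the partition,
# so the BIOS/bootloader sees what looks like a plain partition.
mdadm --create /dev/md0 --level=1 --raid-devices=6 \
      --metadata=1.0 /dev/sd[a-f]1
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[a-f]2
```

With the mirror on every drive, the box boots no matter which single disk dies.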
The one problem (as with any
Re: (Score:2)
The only issue I see with that is in order to access any of your data all drives must be spinning - that's a waste of power. This is caused by data being striped. What happens to your data if two drives die at once? Or three? Do you lose it all or just those drives? Can you use standard recovery software on the drives?
IMO SATA is fine, ZFS looks interesting but I'm happy already.
WHS (Score:5, Informative)
Windows Home Server: 1TB 7200 RPM main drive with Seagate LP 5900 RPM storage drives. Lock it away and never have to think about it till you need to drop another drive in.
The reason for the fast main drive is that with WHS when you copy data to it, it stores it on the main drive first, then schedules it to be distributed out to the storage drives the next time a "storage balance" is done.
Works fairly well. It's based on Windows Server 2003 at the moment, but if you can wait till the end of the year, they have a Server 2008 R2 version coming out soonish.
Re:WHS (Score:4, Interesting)
This. Even ready-made resellers have pretty small devices built on Windows Home Server that can take a LOT of drives. Mine supports 4, but there are many models that can take 8, 12, or more drives. The OS is rock solid and has a lot of neat features, like being able to access your network from a built-in, SSL-secured web app from anywhere with indexed search, and it's easy to develop plugins for (though there's a ton available already) to extend it.
Re: (Score:2)
All the articles, complaints, and bugs regarding data corruption in Windows Home Server might lead one to think otherwise......
Re: (Score:3, Informative)
Yeah, because a bug (running vista on your client and using the server without the latest updates) over a year old and fixed is a problem...
I will also point out that the very first linux release wouldn't run on my 8088 cpu... ... ... ...
Please sir, if you are going to google for bugs, check your dates :)
Re: (Score:2)
That's a waste of drive space. The system I use puts parity on one drive, the OS on a USB, and the rest of the drives are standard format drives that can be mounted under Linux should I need to try data recovery (ReiserFS). Data is not stored redundantly and I can use any size drive I want so long as the Parity drive is bigger or of equal size. With 16 2TB drives I get 30TB worth of storage per server...
unRAID - worth looking into at least. Won't have some of the ability of the Home Server to run apps on it
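The single-parity-drive arithmetic above is straightforward: every drive but one holds data.

```shell
# unRAID-style capacity: one drive of parity, the rest hold data.
DRIVES=16
SIZE_TB=2
echo "$(( (DRIVES - 1) * SIZE_TB )) TB usable from ${DRIVES}x${SIZE_TB}TB"
```

Compared to striped RAID, you give up less capacity and idle drives can spin down, at the cost of single-drive read speeds and surviving only one failure.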
Re:WHS (Score:4, Informative)
Another happy WHS owner here. I do recall reading that one of the service packs (there have been three) fixed the requirement for a big first drive - files now copy directly to the storage drives.
That said, I still use a fast system drive, and the rest are a mix of 7200 and 5400 rpm drives (depending on what was cheapest at the time).
Bought the original Coolermaster Stacker case. The front of the chassis is solely 5.25" drive bays - eleven of them - technically twelve if you mod the case to move the power+usb front panel elsewhere. :)
Oh, and despite being based on Server 2003, one of the nice things about WHS is that unlike the former it doesn't cost an arm and a leg.
Re: (Score:2)
Hehe, I use a thermaltake armor full tower, with an extra 2 "icage" units installed, 10 drives in the front and 3 in the back, all cooled directly with fans (except for one of the front bays).
Currently only have 11 drives in it, but there is still room to grow.
Its survived 3 drive failures (data was redundantly stored) and a motherboard failure (24/7 constant operation over 3 years managed to kill the caps on the motherboard with 2 months to spare on the warranty), so it is indeed rock solid :)
"I won't need 16,000 RPM drives" (Score:2)
More than that, you might not need even 7200 RPM drives. There are large capacity "green line" drives from some manufacturers, 5400 RPM, that might be perfectly enough.
I'm sure other posters will have much better recommendations as to what the overall setup should look like, but for whatever it's worth from me: stay away from consumer NAS solutions; they usually have quite low transfer rates (and I guess that's important to you, with files being rather big). Large tower with plenty of space inside + Atom motherbo
Re: (Score:2)
Correct, 5400 RPM drives are fine for viewing 1080p video, and actually so is 100 Mb Ethernet, but transferring data is slow, so go GigE. Consumer NAS boxes are indeed junk; a friend just emo-raged and pitched two Drobos onto Amazon's sales board. He stormed down to Fry's and bought $600 worth of hardware and drives to build an unRAID box and is now quite happy with his new appliance that no longer needs care and feeding nor smokes interfaces. Atom systems can be done, but finding a board with enough slots and enough SATA is n
You are correct. (Score:4, Interesting)
Re:"I won't need 16,000 RPM drives" (Score:4, Interesting)
There are large capacity "green line" drives from some manufacturers, 5400 RPM, that might be perfectly enough.
Do they work in RAID? Or do they randomly stop responding to the RAID controller, get dropped from the array (triggering a rebuild), then show up a few minutes later to trigger another one? http://en.wikipedia.org/wiki/TLER [wikipedia.org]
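On drives that support it, the error-recovery timeout behind TLER can be inspected and set with smartctl; many consumer "green" drives ignore or forget the setting, which is exactly the dropout problem described above. The device name is a placeholder.

```shell
# Query a drive's SCT error recovery control (the mechanism behind
# TLER/ERC); values are in tenths of a second.
smartctl -l scterc /dev/sda

# Ask for 7.0s read and write recovery timeouts, the classic
# RAID-friendly setting (not all drives honor this).
smartctl -l scterc,70,70 /dev/sda
```

A drive that spends minutes on deep recovery looks dead to a RAID controller; capping recovery at seven seconds lets the array handle the bad sector instead.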
pervert (Score:5, Funny)
NAS (Score:4, Interesting)
I've tried a variety of approaches, but overall I've been happiest with just buying a NAS box.
I have a Synology DS209 [newegg.com], and I've been very satisfied. It's a relatively cheap way to get 2 TB RAID 1 storage with really simple backup to an external USB drive. If you need more storage, you can buy NAS devices with more than just two bays.
ZFS (Score:2)
Set up a nice OpenSolaris box with ZFS and export it with Samba/NFS/iSCSI, etc.
Budget? (Score:4, Interesting)
Re:This should be modded up (Score:4, Informative)
Re:This should be modded up (Score:4, Insightful)
I looked at a Drobo - but being on a budget, I kept on looking elsewhere. I don't doubt they deserve those reviews, but they are not cheap. And if the Drobo itself dies... good luck getting the data off those drives without another Drobo handy.
Dedicated NAS (Score:2)
I suppose if you like fiddling and want to tweak, then building your own is fun and all but if you just want something that works, is most likely quieter and uses less power than one you build yourself, then I say a standalone NAS unit.
I have a QNAP which I love. Synology, D-Link, Thecus, Buffalo, etc.: there are a lot of choices out there in 1/2/4/8+ drive-bay sizes. They typically have various RAID options, spiffy web management interfaces, etc. that make 'em pretty plug and play.
Just make sure t
The Black Dwarf (Score:2)
Re: (Score:2)
Nice unit, but entirely too much work for most people. This isn't a casemod; it's more like a "build your own car" kind of project.
I vote for the Drobo Elite. All the time, materials, and tools that Dwarf requires easily cover its cost.
Distributed File System (Score:2)
E.g. http://www.openafs.org/ [openafs.org], http://www.gluster.org/ [gluster.org]
unRAID from Lime Technology!! (Score:3, Interesting)
I have two of these servers now. Each server can hold as many as 16 disks (possibly more, actually, as the programmer keeps bumping that number up) with one disk reserved for parity. Data is NOT striped and parity is ONLY stored on the one drive. If a disk fails I lose no data; if two fail I lose two disks of data but nothing else. No hot spares or any other crap. If a disk isn't being used it goes to sleep and saves me heat and power. Disks can be ANY size, but the parity disk must be as big as or bigger than any of the data disks. It runs on a pretty decent selection of hardware, although keeping the list of what works and what doesn't up to date is apparently tough since hardware changes so fast. It's Linux based but pay-for-play (yes, he's followed the GPL). It's not super expensive, and it boots from a USB drive to be web administered. I use full tower cases with SuperMicro 5-in-1 trays, 2 GB of memory, a Celeron CPU, a power-saving PSU, and a supported mobo with onboard video and GigE, which you WILL need.
Their forums are active and a big help; users are working to expand the capabilities of these NAS boxes, and the programmer is working on making that easier too. Check it out. I've not found anything better yet, and with some of the newer versions of Samba in the code it's pretty fast too! Perfect for an HTPC, but not so great for a big transactional database.
http://www.lime-technology.com/ [lime-technology.com]
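The single-parity idea unRAID uses can be sketched in a few lines: parity is the byte-wise XOR of all data disks, so any one lost disk can be rebuilt from the survivors plus parity. This is a toy model of the concept, not Lime Technology's actual code:

```python
from functools import reduce

def compute_parity(disks):
    """Byte-wise XOR across all data disks (equal-length byte strings)."""
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*disks))

def rebuild(surviving_disks, parity):
    """Recover a single failed disk by XOR-ing parity with the survivors."""
    return compute_parity(surviving_disks + [parity])

disks = [b"AAAA", b"BBBB", b"CCCC"]   # three toy "data disks"
parity = compute_parity(disks)

# Pretend disk 1 died: XOR of the survivors and parity gives it back
assert rebuild([disks[0], disks[2]], parity) == disks[1]
```

This is also why the parity disk must be at least as large as any data disk: every byte of every data disk has to have a partner byte on the parity disk.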
My solution (Score:2)
Preferring Western Digital drives (for no particular reason), I have a pair of 1TB My Book Essential Edition external USB drives as well as a 2TB My Book World Edition network drive (which I got from a guy for about half the price).
Anyway, the World Edition has a USB port that allows me to connect the other two drives to it using a USB hub and it shows them as network shares in addition to its own folders.
Another nice thing about the World Edition is that it runs Linux, so there's neat stuff you can do with it.
Have you considered ATA Over Ethernet (AOE)? (Score:4, Interesting)
http://en.wikipedia.org/wiki/ATA_over_Ethernet [wikipedia.org]
This is something I've always wanted to play with. It's a little expensive (for a home user) to get into, but it's extremely scalable. If I moved all my DVDs and such to on-line storage, I think this is what I would opt for. It can be run in all sorts of RAID configurations, doesn't require matched sized hard drives, and it can all be racked up very nicely.
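For the curious, the server side can be surprisingly small with the vblade tool from the aoetools project (shelf/slot numbers, interface, and device names below are examples, assuming a stock Linux box):

```
# Server: export /dev/sdb as AoE shelf 0, slot 1, on eth0
vbladed 0 1 eth0 /dev/sdb

# Client: load the AoE driver and the export appears as a local block device
modprobe aoe
ls /dev/etherd/        # the export shows up as e0.1
```

Since AoE runs directly over Ethernet frames (no TCP/IP), it stays on the local segment, which is fine for a home network.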
OWC Qx2 4-drive RAID array (Score:3, Interesting)
4 drive bay, USB, FW400/FW800 and eSATA. Will take 2tb drives, RAID 0, 1, 5 and 10. Comes pre-populated or unpopulated, the latter is what I got and added my own drives. http://www.macsales.com/ [macsales.com] No financial connection, just a satisfied customer (they have great tech support!)
This is obviously not a build-it-yourself storage array, but is a good option if you want a commercial out of the box solution.
DIY. Map-Drives, Dir, Grep (Score:2)
Have another script that you run to index things. Basically, a dir /s command [add filesizes to the end if you can]. There's your index of where everything is. Use grep to access it quickly, or load it all up in a text editor and find to acc
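On the Unix side, the same index-then-grep idea is a tiny sketch (the function name and index location are my own; adjust paths to taste):

```shell
# index: walk the given directories (default: current dir), recording
# the size and path of every file, one per line, into an index file.
index() {
    find "${@:-.}" -type f -printf '%s\t%p\n' > "${INDEX:-./media-index.txt}"
}
```

Then `index /mnt/disk1 /mnt/disk2` followed by `grep -i bladerunner media-index.txt` tells you where a file lives in a fraction of a second, without spinning up every drive.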
Two Options (Score:3, Informative)
1) Cheap tower server + your favourite unix distro + software RAID + many, many cheap 2TB drives.
2) Standalone NAS device. Everyone so far seems to recommend different makes so I'll carry on the trend and suggest Thecus [thecus.com]. Just slot in the drives and you're ready. Install the SSH module and you also have a Linux server too.
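Option 1 might look like this with Linux software RAID via mdadm (device names, level, and mount point are examples; double-check them against your own hardware before running anything):

```
# Create a 4-disk RAID 5 array out of cheap 2TB drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# Put a filesystem on it and mount it
mkfs.ext4 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage

# Save the array config so it assembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

After that it's just a normal filesystem you can export over NFS or Samba.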
NAS Should be Obvious (Score:2)
What you do after that depends on how geeky you want to be. I have Freelink running on my L
The Delete key (Score:2)
Do you really need to save all those Blu-ray rips of the latest Hollywood blockbusters? Just delete them after you watch them. That way you'll have plenty of room for all that raw uncompressed video.
Unraid! (Score:2, Redundant)
Forget NAS (Score:3, Interesting)
If you want cheap, affordable storage get:
A decent full tower case with a modular PSU
A motherboard with 8+ SATA ports (cheap)
A 4-port SATA expansion card
=
12 SATA slots + 12x SATA power for cheap
Get a cheap bunch of 1.5 TB drives for up to 18 TB total. Since you say home, I assume you don't need 99.9% redundancy: you can buy a new PSU or motherboard or whatever, have it delivered, and that's okay. Software-RAID two of the drives in RAID 1 at the cost of 1.5 TB of storage. If you need more protection, upload it to some offsite backup; any external disk or second machine is still vulnerable to theft, fire, and whatever. It works for me, though I only have ~10 TB due to a few old low-capacity disks.
Oracle Sun Fire X4540 (Score:2, Funny)
NAS devices (Score:2)
You specified quiet and hidden (small), not cheap, so I'd go for a NAS device. The Synology DS1010 can do 5x2TB (8TB with redundancy), and if you need more, it can be expanded with 5 more bays.
A cheaper option would be to take some old hardware and toss a NAS distro on it, but I'd expect more hassle and noise from that solution.
speed? (Score:2)
Any sort of network-accessible drive is going to be relatively slow. If you are copying large files, that will be important. If you expect to use the large drive for your working sets, as opposed to just for storage, it will be crucial.
The truth is that you probably won't be happy with anything less than an eSATA interface.
Cheap COTS NAS (Score:3, Informative)
The only thing I'd change is that a dual core Atom would be better. I actually haven't run into a bottleneck yet, but I wouldn't try reencoding videos on it. I believe the dual core model will be out this month. No affiliation with Acer; I'm just geeked because this is the quiet, cheap server I've wanted for years. Sounds like sharing your other computers via NFS (automount) or CIFS plus one of these would address your needs; if not, maybe the info will help somebody else.
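The NFS-sharing half of that is a one-line server config plus a client mount (paths, hostname, and subnet below are examples):

```
# /etc/exports on the machine holding the files
/srv/media  192.168.1.0/24(ro,all_squash)

# then reload exports and mount from any client on the LAN
exportfs -ra
mount -t nfs nas:/srv/media /mnt/media
```

With autofs on the clients, the mount only happens on demand, so idle machines don't keep the share pinned.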
Cheap PCI Mobo + Multi SATA Cards (Score:4, Interesting)
I've got a $20 case, a $50 500W power supply, and a $40 motherboard with SVGA and Ethernet, its 5 PCI slots each stuffed with a $25 4x SATA card, plus 20 $100 1TB hard drives. Running Linux, with network-mountable drives and ssh login.
That's about $235 in PC parts + $2,000 in drives for 20TB. That's a lot of porn storage for you.
Backup Your NAS (Score:4, Interesting)
I am almost at capacity on the RAID volume, so to expand I have another RAID card that I can put in the server to create a new volume, or I can replace the 500GB drives with 1TB or now 2TB disks. Replacing the disks would save power and heat, but I would need to back up and restore 3TB of data. Adding another RAID card is easy, but it creates more drives that I can't turn off and that eat up power.
I am actually thinking about building a new server with the goal of being able to add as many SATA ports as possible (via SATA cards) and then use port multipliers, with a software-based file system or RAID that allows me to add drives of different sizes to the volume, similar to ZFS but more open. This would make growing the system much easier and allow me to power down drives when not in use. I would still get plenty of performance for my needs.
The other thing that I am doing that most people don't think about is backing up my entire NAS to another server. I took another old PC that I had, put a 4-port SATA card in it with four 1TB disks, and run Linux and software RAID on it. Each night it powers up and runs a script to back up the primary NAS. I do this just in case something catastrophic happens to my primary NAS, and I also used it when I moved to larger disks on the NAS previously. I use rsnapshot to look for changes on the primary NAS's file system and only back up data that has changed. It also keeps the last 3 months of files that have been changed or deleted, in case I need to recover something. When the script is finished, it powers down the backup NAS and waits until the next night to run again.
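A minimal rsnapshot setup for that kind of nightly job might look like this (paths, retention counts, and hostname are examples; note that rsnapshot's config file requires TABs between fields, not spaces):

```
# /etc/rsnapshot.conf (fields are TAB-separated)
snapshot_root	/backup/snapshots/
retain	daily	7
retain	monthly	3
backup	nas:/srv/media/	primary-nas/

# nightly cron entry: back up, then power the box down again
# 0 3 * * *  /usr/bin/rsnapshot daily && /sbin/poweroff
```

Because rsnapshot hard-links unchanged files between snapshots, keeping months of history costs little more disk than a single full copy.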
Re: (Score:2)
Re:Look at the DroboPro (Score:4, Interesting)
Good fucking god, $700 for the Drobo FS?
You could build a capable home server box AND buy some of the drives for that much.
Re:Look at the DroboPro (Score:5, Informative)
Having done that in the past, I'll say that buying a Drobo was worth the cost. Granted, I hunted around a bit to get a good sale price (it's not too difficult... though the FS is brand new so maybe not on that model yet), but unless you really enjoy tinkering with getting samba shares set up and working properly, sometimes it's just easier to buy your sanity.
Don't get me wrong - I wish they were cheaper. But their system worked better and more reliably than anything I ever put together, and I'm by no means incompetent. And their BeyondRaid tech, while proprietary, is pretty damn cool and works incredibly well. Being able to mix drives and not waste tons of storage space is a huge advantage that (as far as I know) I'm not going to get anywhere else.
Just a happy customer, not an employee or anything like that.
Re: (Score:2)
Re:Look at the DroboPro (Score:5, Informative)
Due to the small size and slick style I keep mine in my TV cabinet. I've done the measurements and no PC case on newegg can fit in this same space, never mind something that can house 4 harddrives.
The other thing that is so valuable about a Drobo is how well it manages its RAID array. They call it BeyondRaid, but I hear it's just as many normal RAID arrays as it needs to organize the drives to both optimize space and maintain redundancy. Also, you can pop hard drives in or out while it's on, and it will automatically restructure the RAID on the remaining drives to stay redundant without any need to shut down or stop sharing data. I recently needed to test this out for myself. I popped out my 4th drive, plugged it into my PC, formatted it, and started moving data from my Drobo to the hard drive I had just removed from it, while the Drobo was still restructuring. I expected a huge mess, but everything worked exactly as advertised. I was kinda shocked.
FYI, the reason I did that swap was because I foolishly formatted my Drobo as NTFS. This worked OK, but I had one too many problems talking to it from my Linux PC. The permissions were all messed up over Samba: new folders and files I created on the Drobo were root-access-only for some weird reason. So I decided to format it as ext3. Since the DroboShare runs Linux, this is the best option for a shared drive, and it works fine talking to Mac and Windows as long as you do so over the network.
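That root-only-permissions symptom is a common Samba gotcha; something along these lines in smb.conf (share name, path, and account are examples) usually forces sane ownership and modes on newly created files:

```
# smb.conf share definition: force predictable ownership and modes
[media]
   path = /mnt/drobo
   writable = yes
   force user = mediauser      ; every new file ends up owned by this account
   create mask = 0664
   directory mask = 0775
```

With `force user` set, it no longer matters which client account wrote the file, so Linux, Mac, and Windows boxes all see consistent permissions.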
Re: (Score:2)
Re: (Score:2)
In fact a friend of mine DID! His Drobo fucked up so often he nearly threw it out a window! He pinged me one night raging about it yet again. I told him to head for Fry's, and when he got back with $600 worth of hardware he was good to go. His hardware booted unRAID, luckily, and when he was done and the parity was all set up and the disks built, he had a solid system that replaced TWO Drobos for $600. He sold his Drobos to folks on Amazon and was ahead in the money dept! The software he's running will support 15 data dr
Re: (Score:2)
Plus, I recommend the Drobo as well, with its cool "BeyondRaid" system that lets you just pull out the smallest disk in your array and plug in a new one. Anyone who's ever rebuilt or upgraded a traditional RAID array knows what an improvement that is.
Eh? That sounds like every other RAID system I've ever used.