What is the Ideal Low-end NAS Solution?
Mark asks: "As demand for storage continues to grow and prices continue to drop, network attached storage (NAS) devices are popping up everywhere...from large enterprises to restaurants to small offices and homes. Several vendors are now offering low-end NAS solutions targeted at SOHO users, with varying results. Most of them are just standard PC components and standard IDE hard drives running Linux, but the price tag on these often far outstrips what one would expect to pay for the parts. Hence, people all over the world (myself included) are building their own NAS machines at home at a fraction of the cost. Beyond support for RAID, CIFS, NFS, HTTP, and FTP, what would the ideal home NAS operating system include? And more importantly, what should it leave out to avoid conflicts, security vulnerabilities, and instability? Are there any Linux/*BSD/other distributions out there optimized specifically for NAS applications? What does the ideal NAS distribution look like to you?"
Why complicate matters? (Score:5, Informative)
A NAS is little more than a box of hard drives with a NIC attached. They get a nifty web-based interface or somesuch to make them real simple to set up, and they often come in small packages, but is that worth the premium? You could buy a small-ish desktop/tower case and probably build your own very cheaply. Setting up Samba on Linux with simple "everyone can write" access is braindead simple.
Do you need a web-based interface? Do you need hot-swappable drives with auto-rebuild? Do you need a 2U rackmount or other small-ish case? (Remember, need is a very strong word.) If you can't answer yes then save yourself a few grand and do it yourself.
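To give an idea of how braindead simple the Samba route is, here's a minimal "everyone can write" share config. The share name, path, and /tmp location are made-up examples; on a real box this would go in /etc/samba/smb.conf.

```shell
# Minimal open-to-everyone Samba share (sketch; names and paths are examples).
cat > /tmp/smb.conf <<'EOF'
[global]
   workgroup = WORKGROUP
   map to guest = Bad User

[storage]
   path = /srv/storage
   guest ok = yes
   read only = no
   create mask = 0666
   directory mask = 0777
EOF
# Sanity-check the config if Samba happens to be installed
command -v testparm >/dev/null && testparm -s /tmp/smb.conf || true
```

That's the whole thing: guest access, no passwords, world-writable files.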
On the flip side, if you DO need that stuff, I've been very pleased with Fastora [fastora.com]. Good interface, easy setup and lots of options. We got a 1.337TB unit (8x250GB hard drives in RAID5, one drive as a hot spare) with 2x100Mb NIC and 1x1Gb NIC for around $7,000.
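For the curious, the usable capacity of a unit like that is easy to sanity-check: with 8 drives, one held back as a hot spare and one drive's worth of space lost to RAID5 parity, you get 6 data drives.

```shell
# Back-of-the-envelope usable capacity for 8 x 250GB, RAID5, one hot spare
awk 'BEGIN {
  drives = 8; spare = 1; size_gb = 250
  usable_gb = (drives - spare - 1) * size_gb   # RAID5 loses one disk to parity
  printf "%d GB raw usable (~%.2f TiB)\n", usable_gb, usable_gb * 1e9 / 2^40
}'
```

Which is in the same ballpark as the quoted 1.337TB once formatting overhead is taken out.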
Most work (Score:2, Informative)
NAS boxes are pretty cheap and easy to build these days; just make sure, if you're going to do RAID, that you buy a REAL RAID controller with hardware RAID support, not the crap that relies on software drivers for RAID support. 3ware is a wonderful solution, as its driver has been included in the Linux kernel for many, many moons.
Re:Why not build our own? (Score:2, Informative)
Most of my "TV episode" DivX collection is in the general area of 350 megabytes for a 45-minute show. Now, about a minute with Octave will show that 350 MB / (45*60 s) * 8 bits/byte is about a megabit per second.
That sounds reasonable considering the bandwidth of real digital TV mpeg streams.
So we will assume you need about a megabit a second.
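You don't even need Octave for it; any awk will do the same arithmetic:

```shell
# 350 MB over a 45-minute episode, times 8 bits per byte, in megabits/second
awk 'BEGIN { printf "%.2f Mb/s\n", 350 / (45 * 60) * 8 }'
```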
I guess that would rule out ARCNET or a 9600 baud SLIP link, but everything newer than, say, 10Mb half-duplex thinnet will work. You're asking whether a network tech a hundred times faster than necessary will work, or whether you should go for one a thousand times faster than necessary.
You'd need to optimize pretty much everything else (heat production, latency, you name it) before you concern yourself with those questions.
Re:Why not build our own? (Score:4, Informative)
Just for anyone else reading who gets similar ideas, he's got some big errors.
This is incorrect. The reason you only put 1 device per channel is because with IDE, only one device on a channel can be active at once. It has nothing to do with the likelihood of failure. Even if that weren't true, his assumption is silly - a single drive is much more likely to break than a single channel on a controller.
This erroneous assumption carries through his entire implementation and has crippled its performance (as seen in the benchmarks: 36MB/s? That's pathetic for a 6-disk RAID0 array, which is effectively what it is for disk reads). Using the "hardware" RAID on the card is another mistake, tying the array forever to that particular brand and model of disk controller.
Folks, if you're setting up honkin' great big RAID arrays at home and don't want to pay for decent RAID controllers like 3ware's, *use software RAID*. The CPU overhead is insignificant, and the bonus of being able to move the array between arbitrary machines, and not having to worry about a disk controller failure permanently making your data inaccessible, is more than worth it.
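For anyone wanting to follow that advice, a minimal mdadm sketch is below. The device names (/dev/md0, /dev/sd[b-g]1) are made-up examples, and these commands need root and will destroy any data on the listed disks.

```shell
# Six partitions -> one software RAID5 array (EXAMPLE device names; destructive!)
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]1
mkfs.ext3 /dev/md0                          # put a filesystem on the array
mdadm --detail --scan >> /etc/mdadm.conf    # record it so it assembles at boot
# The array metadata lives on the disks themselves: move them to any Linux
# box and "mdadm --assemble --scan" brings the array back. No controller lock-in.
```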
Re:Most work (Score:4, Informative)
After a few days of disbelief and frantic googling, I decided to make the switch to ext3. Now if I can only get approval to purchase UPSes for the servers.
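One nice thing about that switch: ext2 can be converted to ext3 in place by adding a journal with tune2fs, no reformat needed. A sketch, demonstrated on a scratch image file so no real disk is touched (paths are made up; requires e2fsprogs):

```shell
# ext2 -> ext3 in place: tune2fs -j adds the journal (shown on a scratch image)
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 status=none
mke2fs -q -F /tmp/scratch.img                     # plain ext2, no journal
tune2fs -j /tmp/scratch.img >/dev/null            # add a journal -> ext3
dumpe2fs -h /tmp/scratch.img 2>/dev/null | grep -i 'filesystem features'
```

The feature list should now include has_journal; mount it as ext3 from then on.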
As for which distribution to use, we tested Slackware 10, Fedora Core 2, and finally chose CentOS.