What is the Ideal Low-end NAS Solution?

Mark asks: "As demand for storage continues to grow and prices continue to drop, network attached storage (NAS) devices are popping up everywhere...from large enterprises to restaurants to small offices and homes. Several vendors are now offering low-end NAS solutions targeted at SOHO users, with varying results. Most of them are just standard PC components and standard IDE hard drives running Linux, but the price tag on these often far outstrips what one would expect to pay for the parts. Hence, people all over the world (myself included) are building their own NAS machines at home at a fraction of the cost. Beyond support for RAID, CIFS, NFS, HTTP, and FTP, what would the ideal home NAS operating system include? And more importantly, what should it leave out to avoid conflicts, security vulnerabilities, and instability? Are there any Linux/*BSD/other distributions out there optimized specifically for NAS applications? What does the ideal NAS distribution look like to you?"
This discussion has been archived. No new comments can be posted.

  • by ElForesto ( 763160 ) <elforesto&gmail,com> on Thursday August 12, 2004 @03:50PM (#9951761) Homepage

    A NAS is little more than a box of hard drives with a NIC attached. They get a nifty web-based interface or somesuch to make it really simple to set up, and they often come in small packages, but is that worth the premium? You could buy a small-ish desktop/tower case and probably build your own very cheaply. Setting up Samba on Linux with simple "everyone can write" access is braindead simple.
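
    For the skeptical, here's roughly all that takes -- a minimal sketch of the share stanza, where the share name and path are placeholders and guest access is assumed to be allowed in [global]:

      # /etc/samba/smb.conf -- a wide-open guest share
      # (share name and path are examples; adjust to taste)
      [storage]
          path = /srv/storage
          guest ok = yes
          read only = no
          create mask = 0666
          directory mask = 0777

    Create the directory, chmod it 0777, restart smbd, and every box on the LAN can read and write.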

    Do you need a web-based interface? Do you need hot-swappable drives with auto-rebuild? Do you need a 2U rackmount or other small-ish case? (Remember, need is a very strong word.) If you can't answer yes then save yourself a few grand and do it yourself.

    On the flip side, if you DO need that stuff, I've been very pleased with Fastora [fastora.com]. Good interface, easy setup and lots of options. We got a 1.337TB unit (8x250GB hard drives in RAID5, one drive as a hot spare) with 2x100Mb NIC and 1x1Gb NIC for around $7,000.

  • Distributions... (Score:4, Informative)

    by facelessnumber ( 613859 ) <drew&pittman,ws> on Thursday August 12, 2004 @04:02PM (#9951920) Homepage
    Might have a look at Mitel (formerly e-smith) SME Server [e-smith.org]. I've been using it for my file server at home, email, and to host a few domains for a couple of years now. Good stuff, pretty secure, can also be your router/gateway. One other I haven't looked at, but I intend to check out soon, is BlueQuartz. [bluequartz.org] Not really a distro, but the result of Sun open-sourcing the Cobalt RaQ550 network appliance. There's a binary install kit for a basic Redhat/Fedora setup, source, and many howtos out there...
  • RAID 0 (Score:2, Informative)

    by sirangusthefuzz ( 756239 ) on Thursday August 12, 2004 @04:03PM (#9951940) Homepage
    That is RAID 0 by the way. Obviously, RAID 1 would be useful if you needed the redundancy.
  • by I_Love_Pocky! ( 751171 ) on Thursday August 12, 2004 @04:05PM (#9951958)
    RAID isn't just for speed (in fact I wouldn't think that would even be considered its primary purpose).
  • Most work (Score:2, Informative)

    by DrunkBastard ( 652218 ) on Thursday August 12, 2004 @04:15PM (#9952079) Homepage
    Most distributions would work; I'd suggest grabbing something with support for your preferred journaling filesystem. Some distros don't support XFS natively, some don't support JFS, some don't do ReiserFS... so whatever you feel comfortable using, make sure your distro does it. Other than that, I'm a fan of LVM, so perhaps take a look at distros that support that as well.
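
    Once the kernel support is there, the LVM side is only a few commands -- a sketch, where the partition, volume group, and volume names are all made-up examples:

      # Turn a spare partition into a volume group and carve out a
      # logical volume for shared storage
      pvcreate /dev/sdb1                    # mark the partition for LVM
      vgcreate storage /dev/sdb1            # volume group "storage"
      lvcreate -L 200G -n share storage     # 200GB logical volume "share"
      mkfs.xfs /dev/storage/share           # or your journaling FS of choice
      mount /dev/storage/share /srv/share

    The payoff comes later: lvextend plus the filesystem's grow tool (xfs_growfs and friends) adds space without reshuffling data.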

    NAS boxes are pretty cheap and easy to build these days; just make sure, if you're going to do RAID, that you buy a REAL RAID controller with hardware RAID support, not the crap that relies on software drivers for RAID support. 3ware is a wonderful solution, as it's been included in the Linux kernel for many, many moons.
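
    (If you do go the 3ware route, the in-kernel 3w-xxxx driver presents each array as a single SCSI disk, so a quick post-boot sanity check is just:)

      dmesg | grep -i 3ware    # driver banner and card detection
      cat /proc/scsi/scsi      # the array should appear as one SCSI device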

  • by Yeechang Lee ( 3429 ) on Thursday August 12, 2004 @05:25PM (#9952925)
    I recently began a Usenet thread [google.com] on this very topic. I've copied the original post below:


    Subject: I want to build a 1.5TB storage array for MythTV

    Recently ran into the account of a guy who built his own 1.2TB RAID50-based storage array for $1600 [finnie.org]. I really like the idea and have been thinking about following suit.

    Like Finnie, I want to be able to store huge amounts of DivX/Xvid files online. In addition to the storage array, I also plan to build a separate MythTV [mythtv.org] box, which among other things will let me play them at will. My 200GB Series 1 TiVo's been serving me well for more than four years, but I really like the idea of being able to seamlessly integrate my AVI collection with TV recordings, and from what I gather MythTV has finally matured enough to be a realistic TiVo alternative.

    I have been 100% Linux at home for almost a decade and am quite comfortable with most of the technical aspects of the project.

    I'm planning on making the following changes to Finnie's build configuration:
    • Instead of 200GB ATA, use 250GB SATA drives for a total of 1.5TB. Outpost.com offers a Western Digital 250GB SATA drive for $170 [outpost.com]. I just missed the chance to get a $30 rebate off each drive, but I'm sure Fatwallet will alert me to a similar opportunity sooner or later.
    • Accordingly, get a HighPoint SATA RAID card instead of the specified RocketRAID 454 ATA RAID card. I think the RocketRAID 1640 [newegg.com] is the way to go.
    • Instead of ext3, use XFS as the file system.

    My questions:
    • If I connect the storage array to my Linksys WRT54G router, will 100Mbps Ethernet be fast enough to pump the AVI files to the MythTV box without dropping frames?
    • Conversely, will 100Mbps Ethernet be sufficient to let me use the storage array as the primary storage medium for MythTV's recordings? What about HDTV encodings (using the pcHDTV Linux-only card)? Or do I have to upgrade to a Gigabit Ethernet router? Or would the encoder card and MythTV software have to run on the storage array itself in order to achieve acceptable performance? (Actually, I'm not opposed to doing so, if one box can simultaneously handle both storage and MythTV tasks.)
    • Anything else that I'm missing or should keep in mind?
  • by vlm ( 69642 ) * on Thursday August 12, 2004 @06:22PM (#9953461)
    Well, let's think about this here.

    Most of my "TV episode" DivX collection is in the general area of 350 megabytes for a 45-minute show. Now, about a minute with Octave will show that 350 / (45*60) * 8 is about a megabit per second.
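
    (Same arithmetic with bc, for anyone without Octave handy -- 350MB times 8 bits, spread over 45*60 seconds, in megabits per second:)

      $ echo "scale=3; 350*8 / (45*60)" | bc
      1.037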

    That sounds reasonable considering the bandwidth of real digital TV mpeg streams.

    So we will assume you need about a megabit a second.

    I guess that would rule out ARCNET or a 9600 baud SLIP link, but everything newer than, say, 10 meg half-duplex thinnet will work. You're asking whether a network technology a hundred times faster than necessary will work, or whether you should go for one a thousand times faster than necessary.

    You need to optimize something else... heat production, or latency, or pretty much everything else before you concern yourself with those questions.
  • by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Thursday August 12, 2004 @10:19PM (#9955091)
    Recently ran into the account of a guy who built his own 1.2TB RAID50-based storage array for $1600. I really like the idea and have been thinking about following suit.

    Just for anyone else reading who gets similar ideas, he's got some big errors.

    It looked like a normal 4-port ATA RAID controller, but with one difference: it boasted the fact that you could do RAID across 2 devices per channel. Normally this would be a stupid feature. Under normal circumstances, NEVER connect 2 drives to one channel if you intend to do RAID. Why? There is just as good of a chance that the channel itself dies than a single drive failing.

    This is incorrect. The reason you only put 1 device per channel is because with IDE, only one device on a channel can be active at once. It has nothing to do with the likelihood of failure. Even if that weren't true, his assumption is silly - a single drive is much more likely to break than a single channel on a controller.

    This erroneous assumption carries through his entire implementation and has crippled its performance (as seen in the benchmarks - 36MB/s? That's pathetic for a 6-disk RAID0 array, which is effectively what it is for disk reads). Using the "hardware" RAID on the card is another mistake, tying the array forever to that particular brand and model of disk controller.

    Folks, if you're setting up honkin' great big RAID arrays at home and don't want to pay for decent RAID controllers like 3wares, *use software RAID*. The CPU overhead is insignificant and the bonus of being able to move the array between arbitrary machines and not having to worry about a disk controller failure permanently making your data inaccessible is more than worth it.
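
    In concrete terms it's two commands with mdadm (a sketch -- the member devices and array name are examples, and any mix of controllers will do):

      # Build a 6-disk software RAID5 array out of whatever disks you have
      mdadm --create /dev/md0 --level=5 --raid-devices=6 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

      # On any other Linux box, the array reassembles itself from the
      # superblocks on the disks -- no particular RAID card required
      mdadm --assemble --scan

    Watch /proc/mdstat for build and rebuild progress.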

  • Re:Most work (Score:4, Informative)

    by dtfinch ( 661405 ) * on Thursday August 12, 2004 @11:31PM (#9955462) Journal
    I had always used reiserfs for everything, but having recently been asked to set up a small bunch of inexpensive file servers, I took the time to research which filesystem is best able to survive a crash or power outage. The few recent tests I've found suggest that of XFS, JFS, reiserfs, and ext3 (ordered), ext3 had by far the best recovery rate, and reiserfs had the worst among the journaled filesystems tested. In one test, where a disk-intensive app was run and the system was reset several seconds later, ext3 survived over 300 power cycles with minimal damage, while reiserfs became unbootable after 10 cycles; the rest did better but came nowhere near ext3.
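
    (One knob worth knowing if you follow suit: ext3's journaling mode. The default, data=ordered, journals only metadata; data=journal pushes file data through the journal too, trading throughput for the best odds after a power cut. A sketch of the fstab line -- the device and mount point are examples:)

      # /etc/fstab -- ext3 with full data journaling (slower, most crash-tolerant)
      /dev/md0   /srv/share   ext3   defaults,data=journal   0 2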

    After a few days of disbelief and frantic googling, I decided to make the switch to ext3. Now if I can only get approval to purchase UPSes for the servers.

    As for which distribution to use, we tested Slackware 10, Fedora Core 2, and finally chose CentOS.
