Hardware

Creating Large, Safe, and Cheap Network Attached Servers? 30

davco9200 asks: "I am looking to create a large data server for all my digital media files. The usage is the 'pro-user' category, to use the media from multiple stations in my house and at work. I value space (150+ gb would be nice), accessible from multiple platforms (Win, Mac), but perhaps most of all, some security (e.g. RAID 0 doesn't cut it). Total write or read access isn't that high of a priority. I have looked at things like the Snap 4100 that offer 160 gb or 300 gb and good raid options but the price seems high ($2,700 and $4,500 respectively). Has anyone had any experience making their own low-end NAS? Has anyone looked at the Adaptec IDE RAID Controller? This seems like a reasonable way of getting data parity so if one drive goes down your entire collection isn't lost. I figured Slashdot readers would have some good solutions. Information on specific cases, drives, and other pertinent facts would be helpful."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Well... (Score:3, Informative)

    by cmowire ( 254489 ) on Wednesday September 19, 2001 @08:49PM (#2323084) Homepage
    Well, if you don't care too much about performance, don't even bother with a 'real' IDE RAID card, just do software RAID. The reason why people use the hardware RAID cards like the higher end IDE RAID cards and the SCSI RAID cards is because it's faster.

    So you can cut the cost of a good IDE RAID card and just put an extra IDE controller card in your pseudo-NAS box so you can have 8 drives. Or you can put that off until later.

    You should also consider getting a DAT drive and a bunch of DAT tapes, to back things up, just in case something massively bad happens to your system. RAID 5 is not perfect, and if your system catches on fire because of too much dust in the power supply, it'll be helpful.
    • I wouldn't let DAT tapes anywhere near a backup of stuff I actually care about, for several reasons, but mostly capacity and speed.

      First, size: how much storage are you trying to back up? The biggest DAT tape you can get is 20-40Gb compressed. Your post was talking about 8 drives of 40Gb each; RAID 5 would mean 280Gb of data, yes? So that would mean about 9 DAT tapes to back the whole lot up (at an average of 30Gb per tape). That doesn't sound good to me.

      Second, speed: how long will it take to back up 280Gb onto DAT? DDS4 usually manages about 300Mb per minute (max), so that is roughly 955 minutes - nearly 16 hours!

      Now for a serious backup solution, even though it's expensive, you probably want to consider DLT. You can get 40-80Gb on a single DLT tape, meaning you only need 5. There are even DLT-like solutions that will put 100-200Gb on a single tape. Also consider speed: where a good DAT drive gives you 300Mb per minute, an 80Gb DLT drive can do 720Mb per minute, meaning only about 6.5 hours (a one-night backup run). Finally, consider reliability: DLT tapes are much more robust and are considered a decent archive medium. I know you said cheap, but you need to make sure you don't lose your data - that would be expensive.
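As a rough sanity check, the tape counts and run times above can be worked out directly. This is just arithmetic on the nominal figures quoted in the comment (~30Gb average per DAT tape, ~300Mb/min for DDS4, ~720Mb/min for DLT); note that strict ceiling rounding actually gives 10 DAT tapes at a 30Gb average, not 9:

```python
# Back-of-the-envelope tape math for a 280GB RAID 5 array.
# Capacities and rates are the nominal figures quoted above, not benchmarks.

def tapes_needed(array_gb, tape_gb):
    """Whole tapes needed to hold the array (ceiling division)."""
    return -(-array_gb // tape_gb)

def backup_hours(array_gb, mb_per_minute):
    """Hours to stream the whole array at a sustained rate."""
    return array_gb * 1024 / mb_per_minute / 60

# DAT (DDS4): ~30GB average per tape, ~300MB/min
print(tapes_needed(280, 30))             # -> 10 tapes
print(round(backup_hours(280, 300), 1))  # -> 15.9 hours

# DLT: ~60GB mid-range per tape, ~720MB/min
print(tapes_needed(280, 60))             # -> 5 tapes
print(round(backup_hours(280, 720), 1))  # -> 6.6 hours
```

Either way, the DLT run fits in one overnight window while the DAT run does not.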

      • I wouldn't let DAT tapes anywhere near backup of stuff I actually care about for several reasons, but mostly, capacity and speed

        Bzzzt... we're working under a different set of rules here. I'm in the process of doing this exact same thing, and I'm using DAT for backup because they are perfect for what I want - long term, cheap, backup once and never again storage media.

        Think of it this way - who *cares* if it's a small box of tapes? Who *cares* if it takes 48 hours to back it up? I'm converting my entire CD library to MP3, and my entire videotape/DVD library to DivX ;-). As I get about 30-35 gigs of encoded data done (which takes quite a while to encode), I back it up onto a single DAT tape. That's 50 movies, or 3500 songs... which will free up two shelves of videotape (or really one shelf, since I have to double-shelve tapes), or a chunk of one of my 500-disc CD changers, and turning two shelves of VHS into a very very small, durable tape is very very nice.

        As you can probably guess from that last sentence, I have lots of media... tons of CDs and video. I have this fantasy that I get raided for "piracy" because I use MP3 and DivX ;-), it hits the news big time, and I walk into court with dozens of boxes full of original videotapes, DVDs and CDs. Hehehehe.

        My prototype system is a 450MHz Pentium II with a Promise IDE RAID card, with one Maxtor 80 gig hooked up and 64M of RAM (I had two 20 gigs hooked up to test the RAID earlier). I'm looking at upping the processor because encoding just takes too long; the memory seems fine for single playback or encoding, and the controller seems fine for 4 drives, but I want 8, probably Maxtor 80 gigs, as they are cheap.

        My biggest problem is the initial video grab of analog sources. I haven't found a really good capture card, and I want one that runs under Linux, preferably with open source command line utilities so I can automate the process. As it is, I can convert DVDs just fine, and I have an All-in-Wonder Pro sitting on my desk that I want to test (since I have it, I might as well try it). I've seen some absolutely incredibly encoded porn out there - very clearly grabbed by an amateur, and from a video (you can see the artifacts), but the clarity from an NTSC signal is incredible. I'm really curious as to what equipment is available to the prosumer for video capture, preferably Linux-friendly.

        Alas, as I just laid myself off from my company (at least for a few months, until we start making better revenue), it doesn't look like I'll finish my media server for another several months. Maybe then I'll get 120 gig drives.

        --
        Evan

    • People also choose hardware RAID for greater reliability, OS independence, OS simplicity, and in some cases hotswap-ability of drives.
  • Solution (Score:2, Informative)

    by 1101z ( 11793 )
    I am working on a similar project, but in the >1TB range; the same thing applies. See http://staff.sdsc.edu/its/terafile/ [sdsc.edu]. On that page they have a link to another page with material about IDE RAID: http://www.research.att.com/~gjm/linux/ide-raid.html [att.com]. The 3ware cards are the way to go, as they do RAID 0, 1, and 5 in hardware and support things like hot swap and hot spare. I priced out a system with just over 1TB of RAID 5 for around $5,000, while the prebuilt stuff is $20,000.
  • One great place to look for an old tower is your local Computer Goodwill or used bulletin board. Get an old box with a 300W power supply, take out the motherboard, start stacking SCSI drives in a daisy chain, and then bring it back to a SCSI controller on your main box.

    Obviously SCSI is more expensive than IDE, but you get a little bit more. Just food for thought.

    • Don't be too cheap here. Power supplies do go out (or catch fire because of dust or pet hair that blew into them), and even a "good" unit might have problems maintaining its rated capacity after a number of years.

      Keep the chassis, but replace the power supply. Besides, since this is a server, this sounds like a good place to use one of those units with a built-in UPS, even if the box is hooked up to an external UPS. (I've had them fail because I unwittingly overloaded them, because of poor designs that allowed me to accidentally turn off the UPS protection but not the power, etc.)
  • Tom's Hardware has an article on their main page about software RAID under Windows 2000. Apparently it supports striping and spanning, and unlimited drives as long as you have enough controllers for them. The boot HDD has to be standalone but the others can be RAID-linked. The best thing is that it supports multiple interfaces. You can have 3 IDE drives and a SCSI drive and they'll all RAID together.

    I don't know if an MS product is what you'd want to use, but it's out there.

    J.W. Koebel
    • For what it's worth, the volume stuff in Windows 2000 is not from Microsoft. It's a light version of a separate piece of software called VERITAS Volume Manager [veritas.com].
    • The trouble with Windows 2000 software RAID is that your boot disk can't be part of the RAID array. Even so, I'd recommend the Windows 2000 software RAID over the Promise ATA RAID cards - they are software too; they just hook themselves into the BIOS. The Promise ATA solution has been flaky in the two installations where I have used it - I had to use IBM's DFT to make the two IBM drives talk to the Promise FastTrak controller in UDMA 2 mode, otherwise they would get corrupted. I've had very good luck with 3ware's hardware products; highly recommended!
  • We use them in our servers and they support 50+ users without problems.
  • I would absolutely recommend the 3Ware Escalade IDE RAID cards. I'm using a 6200 right now with a couple 75GXP's striped, and can pull 50MB/s easily. And for less than $120, it was a great value!
  • I don't know what the failure mechanisms are... but it sure is appealing to just buy two of the biggest IDE drives you can afford. Fill up one, copy all the data to the other, and then just turn the power off to the backup drive.

    Last I checked, Fry's was advertising big disks at under $2 per Gigabyte. That's cheap backup.
  • I built a box for exactly that requirement a few months ago, using a 3ware IDE controller, 4x100GB drives in RAID 5 mode, and Linux :) It gives a nice 300GB of space.

    I think the overall costs were around 2000 Euro ..
    (controller, drive bays, BIG case, etc ..)
  • I'm wondering if you're not letting buzzwords get in your way?

    KISS

    For home use, right? Just buy (a bunch of) gigundo SCSI drives, and cram them into an Intel system running your choice of free/open source OS. I'm fond of the *BSD family, but YMMV.

    Run samba and you'll see the files as if they were on drives native to Windows/Mac.

    Or am I missing something?
  • by Vito ( 117562 )
    One way to go might be an inexpensive, but not underpowered PC, with a PCI Firewir-- er, IEEE-1394 card.

    Buy a bunch of cheap, identical IDE HDs, and put them in IEEE-1394 cases (~$150/ea.). Compile yourself some bleeding-edge Linux-1394 [sourceforge.net] support, plug in your HDs, run XFS [sgi.com] as the filesystem, and use software RAID [linuxdoc.org]. Because you said this is just for storage and media access, you probably don't need the currently limited FireWire hot-plug support [sourceforge.net] and possibly still currently limited RAID hot-swap support [linuxdoc.org].

    For more on software RAID, IBM has a nice two-part article (1 [ibm.com], 2 [ibm.com]) on it.



  • This is my recipe for a "homebrew" Snap 4100:

    1) Get:
    - 1U 4bays rack mountable chassis from Sliger Designs [sliger.com]
    - 3ware 6410 Escalade [3ware.com] IDE controller (choice of RAID 0/1/0+1/5) on a 90° PCI riser card
    - 4 x 75/100GB ATA100 drives (maybe DiamondMax [maxtor.com])
    - MicroATX mainboard [tyan.com] with NIC and video integrated on board (invest in RAM, not in processing power - 750/850MHz should be more than sufficient)
    - Minimum Linux [linuxlinks.com]/*BSD [freebsd.org] OS booting from a read-only 16 to 64MB flash IDE device [tapr.org], loading kernel and a customised Ramdisk [linuxhq.com] root filesystem, mounting Raid devices in R/W mode, starting SAMBA [samba.org] (and/or Netatalk [anders.com]).
    A good starting point is Linux Bootdisk HOWTO [ibiblio.org]

    2) Choose RAID 0+1 and you get fast and completely redundant 150/200GB of storage that can survive the full failure of one disk.

    3) Want remote graphical management from a standard web browser? Go for Webmin [webmin.com] or SWAT [samba.org].
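For a sense of the trade-off in step 2, here is a small sketch comparing usable capacity for the recipe's RAID 0+1 choice against RAID 5 on the same four drives. This is pure arithmetic on the drive sizes listed above; real arrays lose a little more to metadata and filesystem overhead:

```python
# Usable capacity of 4 drives under the two RAID levels the recipe mentions.

def usable_raid01(n_drives, drive_gb):
    """RAID 0+1 (mirrored stripes): half the raw capacity."""
    return n_drives * drive_gb // 2

def usable_raid5(n_drives, drive_gb):
    """RAID 5: one drive's worth of capacity goes to parity."""
    return (n_drives - 1) * drive_gb

for size in (75, 100):  # the 75/100GB drives listed above
    print(size, usable_raid01(4, size), usable_raid5(4, size))
# 4x75GB  -> 150GB at RAID 0+1, 225GB at RAID 5
# 4x100GB -> 200GB at RAID 0+1, 300GB at RAID 5
```

RAID 5 yields more space from the same drives; the recipe's 0+1 choice trades that space for simpler recovery and better write behaviour on cheap controllers.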




  • Has anyone had any experience making their own low-end NAS?

    Yeah, probably about half the people on Slashdot. ;-)

    There's really nothing to it. Get an x86 PeeCee, 4 cheapo 80GB ATA drives, and Linux or xBSD. Put a bootable+usable system partition on each drive (since you never know which drive is going to fail first) and use the rest of each drive as a slice for a RAID 5. You'll have about 220-240 gigs of storage that can survive media failure. You can do it for somewhere in the $1500-$2000 range.
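The arithmetic behind that capacity estimate can be sketched as follows; the 5GB size for each per-drive system partition is an assumption for illustration, not a figure from the comment:

```python
# Capacity sketch for the layout above: 4 x 80GB drives, each reserving a
# small bootable system partition, remainder of each drive in one RAID 5.
DRIVES = 4
DRIVE_GB = 80
SYSTEM_GB = 5   # per-drive bootable system partition (assumed size)

raid_slice = DRIVE_GB - SYSTEM_GB   # per-drive slice contributed to the RAID 5
usable = (DRIVES - 1) * raid_slice  # one slice's worth of space goes to parity
print(usable)                       # -> 225 (GB that survives a one-drive failure)
```

That lands in the 220-240 gig range quoted, depending on how much each system partition eats.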

    For larger non-media failures, you're still screwed, though. Backup technology just hasn't kept up. :( Unless you have Big Money to spend on this (and maybe even then), you will end up using many tapes for a single backup. That fact will tend to influence you toward not backing up at all, unless you are unusually disciplined.

    For the exporting to the other machines, the stuff you need should come with just about any Linux distro. I use NFS (for Unix and Amiga) and Appletalk (for Mac). I suppose Samba would work for wintel boxes.

  • by tkrabec ( 84267 )
    I'm not too sure, but I know you want to try to keep the mirrors and the data on 2 separate controllers.
    I believe the optimal solution would be to get 2 cards:
    RC = Raid Controller

    4 drives
    RC1-----RC2
    D1------M1
    +--------+
    M2------D2

    8 drives
    RC1-----RC2
    D1------M1
    D2------M2
    +--------+
    M3------D3
    M4------D4

    You need to have the mirrors on different cards; you will get better read/write performance.
    Put the data for a drive on one controller and its mirror on the other, and keep the mirrors and data split between the controllers (IDE or SCSI). I'm not sure what functions are needed on the card, or even if the 3ware cards will support it, and I'm not sure how to configure this on the SW side either. But for my new DB server I will configure it this way.

    You can lose 50% of the drives (if they're the right ones - i.e. never both a drive and its mirror) without losing data.

    With RAID 5 you can lose one drive; then the spare (if installed) must recreate the data before you can lose another drive.

    -- Tim
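The failure-tolerance claim above can be checked by brute force. A small sketch, assuming the 4-drive layout drawn earlier, where D1/M1 and D2/M2 are mirror pairs split across the two controllers:

```python
# Which two-drive failures does the 4-drive mirrored layout above survive?
from itertools import combinations

mirror_pairs = [("D1", "M1"), ("D2", "M2")]
drives = [d for pair in mirror_pairs for d in pair]

def survives(failed):
    """Data survives as long as no mirror pair has lost both members."""
    return not any(set(pair) <= set(failed) for pair in mirror_pairs)

all_pairs = list(combinations(drives, 2))
ok = [f for f in all_pairs if survives(f)]
print(len(ok), "of", len(all_pairs))  # -> 4 of 6
```

So the mirrored layout rides out 4 of the 6 possible two-drive failures, whereas a 4-drive RAID 5 survives none of them, which is the trade-off the comment is pointing at.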
