Hardware

Network Attached Storage on a Budget? 44

Full'o'MP3 asks: "Wondering what to do with all those (formerly huge) hard disks on the shelves? Well, so am I. After looking at all sorts of USB enclosures, I remembered that, a long while ago, I saw a description/review/whatever of a small board (around 3" by 4") that essentially had an Ethernet interface on one end, a microcontroller in the middle and an IDE bus on the other end. It was designed only for that purpose, could not even format the hard disks on its own and only supported SMB without any access control, but by golly, I'm looking for about a dozen (or about 1 per 4 disks). Slap them inside old PC cases, fill them with hard disks, and you have a very simple, cheap file server for home or school. I've looked at a lot of embedded Linux and commercial storage stuff, but they are all overkill and require brand new hardware. Anyone have any pointers for this? (Butchering old laptops, iPAQs or similar stuff won't cut it...)" Readers may remember this thread from early May about doing something similar with new hardware. Since this is the "budget version" of that question, I felt it deserved its own post. How hard would such a device be to build from old computer parts and hard disks? Details on cheap electronics (like the submitter-mentioned device) that would make this easier would be appreciated.
  • Just use old PC MBs (Score:3, Informative)

    by ghostlibrary ( 450718 ) on Wednesday July 31, 2002 @12:01PM (#3986709) Homepage Journal
    The cheapest/easiest method would be: snag some old 486 or Pentium systems, install 4 IDE devices per machine, add an ISA ethernet card, and put Linux on it with the few services needed (networking, yp). Probably cost you, oh, free, since a lot of folks are just tossing old 486s/Pentiums. Or buy a bunch at your local gov't auction (NASA centers have these frequently, etc).
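
    For the SMB piece, a minimal smb.conf is about all the configuration there is. A sketch, assuming old Samba 2.x share-level security; the share name and path are placeholders:

    [global]
       workgroup = HOME
       security = share        ; share-level access, no user accounts, like the dumb boards
       guest account = nobody

    [storage]
       path = /mnt/disks       ; wherever the old drives are mounted
       guest ok = yes
       read only = no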
    • by dasunt ( 249686 )

      Actually, I did something similar. I have a Pentium 166MHz (AOpen), a 2 gig HDD, and a new 80 gig HDD. With only 16 megs of memory and a stripped-down version of Debian on it, it's fast enough to saturate the 10 mbit/sec connection via Samba. On a 100 mbit/sec connection, I use about 30% of it. (The IDE controller seems to be the limiting factor here - but I haven't found an ATA/100 controller that I can 'borrow' yet.)

      So, with new HDDs, your limiting factor should be the older IDE controllers in the motherboards. With old HDDs, the bottleneck should be the drive itself, which means that any old pentium-era machine will work.

      As the other poster said, it should cost you free/close to free. Plus, since it's all in software, it's easy enough to turn it into an ftp server/smb server/webserver with just a few config files. Remember, Linux only needs the BIOS to see the boot partition; even with LILO, it's possible to support large disks on BIOSes that don't recognize them.
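
      (A sketch of that trick, device names hypothetical: keep a small /boot partition at the front of the disk, or on newer LILOs just turn on 32-bit LBA addressing in /etc/lilo.conf:)

      lba32                # let LILO reach past cylinder 1024 on old BIOSes
      boot=/dev/hda
      image=/boot/vmlinuz
          label=linux
          root=/dev/hda2
          read-only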

      Just my $.02

      • your limiting factor should be the older IDE controllers in the motherboards

        These old controllers can limit more than just the speed. The onboard IDE on the P90 in my back room can't handle HDs larger than about 5 gigs. So when you start collecting old machines, keep in mind that they need to be new enough to handle your HDs, or you will just end up with more scrap parts on your shelf.
        • I have one built to house my movies... I had an old P-90 and bought a 200MHz upgrade chip off eBay. Put in 128MB of RAM and a 10/100 NIC, and purchased a 40GB drive. After a couple of months I needed more space, so I added a 120GB drive with no problems. Decided I'd let it handle other stuff too, so it runs a caching DNS server, mail, and ITS for my LAN. It could run DHCP and a firewall if I didn't already have a hardware one. Even a P-90's older IDE bus seems faster than the 100 full-duplex network I have at home. For my next NAS I'm waiting on the 250GB IDE drives: 4x250GB drives in a desktop case for a little over a grand.
      • by adolf ( 21054 )
        I really doubt that the IDE interface is a limiting factor in such a machine.

        I haven't met a low-end Pentium yet which is capable of saturating 100MHz ethernet, even in applications where disk IO is not part of the equation.

        A new 80 gig drive would be vastly faster than the network, even at PIO 4 (16MB/sec, IIRC) or DMA33 or whatever old-school speed you've got the IDE interface running at.

        If your network is not running full-duplex, you'll also have an impossible time saturating the wire because of that -- ethernet gets a lot more efficient when it can talk and listen at the same time, without looking out for collisions.

        That all said, run bonnie or some other benchmark on the disk. If you see throughput in excess of, say, 7 or 8 megabytes per second, with sufficiently low CPU utilization to leave a bit for the overhead of tending to the NIC and Samba, then neither the hard disk nor its interface are any sort of bottleneck, and you should look elsewhere for an improvement in speed.
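
        (A typical classic bonnie run - the directory and size here are placeholders; use a test file well over your RAM size so the cache doesn't flatter the numbers:)

        bonnie -d /tmp -s 200    # 200MB test file; watch the block I/O rates and %CPU columns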

        A P233MMX CPU can be had for less than $20, these days, and would probably be trivial to configure on your AOpen board.

        FWIW, I've got a K6-2 350 router/file server/print machine, with a 30-gig drive a couple of years old. I haven't done fancy IDE interface tweaking under FreeBSD and have no idea what transfer mode it's using, and the motherboard is positively ancient, so I'd be surprised if even DMA33 were an option. But it shoots files across the (half-duplex) network at 5 megabytes per second, generally, which I recall being a vast improvement over the Pentium machines which predated it.

        Good luck!
        • by mbyte ( 65875 )
          my P233MMX with a Promise UDMA 66 gives:

          # hdparm -tT /dev/hdh
          /dev/hdh:
           Timing buffer-cache reads:   128 MB in 3.10 seconds = 41.29 MB/sec
           Timing buffered disk reads:   64 MB in 3.21 seconds = 19.94 MB/sec

          So the hard drive can read twice as fast as the network's maximum. Another question would be how much CPU is needed to get it onto the wire?

          Btw... is there any variant of NFS that uses hard drive caching? I.e., set aside a special area of a few gigs on the local hard drive to store frequently accessed data? (This would speed up an NFS-mounted /home quite a bit! ;)
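
          (Not NFS itself, but Solaris's cachefs layers a local disk cache under an NFS mount - a sketch with hypothetical paths; Linux had nothing equivalent in the mainline kernel at the time:)

          cfsadmin -c /var/cache/nfs
          mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/nfs server:/home /home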
        • A, a Pentium 90 can saturate 100Mb/s ethernet. It's trivial; after all, it's only around 10-12 megs of data a second.

          B, 100Mb/s ethernet operates at 31.25MHz, not 100MHz.

          C, at PIO 4, it's not going to get near 16MB/s. The processor would waste an insane amount of time copying the bytes from the interface.

          D, I suspect you have something horribly misconfigured with your router/file server/print machine; perhaps you are using a PIO mode, as I mentioned above. I used a Pentium 200 with a 20 gig drive on a UDMA-33 card. It pushed an easy 8 to 9 megs a second.

          • A, a P90 would have no trouble moving 10-12 megs of data a second. A P60 could do it. As could a 486, and probably a 386, if one could get away from the ISA bus (PCMCIA is 16MHz, 16-bit, IIRC - a bit quicker...). As for the triviality of doing this in the context of ethernet: in feeding this data to a NIC in 1500-byte chunks and waiting for whatever handshaking must transpire between drivers and their respective NICs, not to mention network overhead due to collisions and such, things tend to slow down. You'll never get 100 megabits per second out of 100 megabit per second ether. 'Sides, 100Mb/s ethernet was a -tad- uncommon, if existent, in the day when the P90 was new. Sneakernet was the order of the day. People weren't sure whether token ring, or ATM, or which of the ethernets (10base-T, -2, -FX or, horror, -5) was going to make it big, if any at all. Case in point: I've got a 486 here which can't keep up with 10base-T. Did you sleep through this era, or were you just not yet born?

            B, if 100Mb/s ethernet operates at 31.25MHz, I'd love to know how. AFAIK, it's serial, and binary (dual state) - thus, one bit per cycle. this link [google.com] seems to indicate that things are running at 100MHz on the wire.

            C, sure, yeah, whatever. You missed the bit where I mentioned benchmarking and CPU utilization, obviously. And since it's just a fileserver, it doesn't need CPU for anything other than serving files. Who cares if it's inefficient, as long as it's doing the job as well as it is capable of? Unless the CPU turns out to be a bottleneck in such an arrangement, things would work JustFine. Everyone wants UDMA133, even if they don't know what it is or that it exists at all - that doesn't mean it's needed to flood such a slow medium as 100base-T.

            D, bonnie tells me that, given a 200-meg test file, I'm getting local block reads and writes at a bit over 19 megabytes per second on this 30-gig Maxtor. But transfers across the network are still 5 or 6 megs per second, depending on phase of moon. Were things running full-duplex with a high-dollar Cisco switch, I might expect them to be somewhat faster, but they're not. It is apparent to me that you missed an important element of my description, the word "unswitched."

            Show me an example of a machine of similar calibre to this K6-2 333, overclocked to 350 with a first-gen generic Super7 motherboard and 1 meg of L2, where half-duplex 100 megabit ethernet performs with any superiority to this using normal methods of TCP/IP data transfer between itself and another machine of similar ilk, and I'll eat my hat.

            In closing, I'd like to remind you of two things:

            First, my original point that the combination of a modern hard drive and slower IDE interface is not a bottleneck on a common 100MHz ethernet segment stands true.

            Second, I wish that in the future you might actually read the postings to which you reply, and then apply a touch of critical thinking to the points you'd like to make. If most of your writing is like this, I suspect those two simple steps would talk you out of producing most of it.
        • > I haven't met a low-end Pentium yet which is
          > capable of saturating 100MHz ethernet, even in
          > applications where disk IO is not part of the
          > equation.

          Is this running Windows or Linux? If so, that is quite possibly why (although I would still expect a low end pentium to be able to saturate 100mbit with say static http requests that are cached in ram). Much as we all love linux, it is well proven that NetBSD, FreeBSD, and OpenBSD have a much more efficient TCP/IP stack, and thus are better on low end machines, and for some tasks high end machines. This is why I run NetBSD on my servers (well, for now. I'm tempted to set up Solaris on one machine for FDDI support). I still use linux on the desktop though (along with MacOS, one Windows machine, and hopefully soon an Irix machine or two).
    • Found it [mosix.org]: MOSIX.

      You can use MOSIX to cluster those 486's together and get high-performance file data transfer: "The experimental MOSIX Parallel I/O (MOPI) package can read over 1,600 MB/S using 60 nodes."

  • Actually, the Department of Foreign Affairs [dfait-maeci.gc.ca], a ministry in the Canadian government, does exactly what you are proposing. As an employee of their home loan program, I have filled out the paperwork to lend several of these file servers to local middle and high schools, where they have a 3-year shelf life.
    This concept was implemented as of March 31st, 2002. Just thought it was cool that one of our procedures was on Slashdot lol
  • But consider the electricity costs:
    To run 4 HDDs, you need an old 486/P1 (the previous post's suggestion), hence a 250W+ power supply.
    What is the amount of storage space you gain? The most you can strap in is either 4*8GB (without special drivers) or, if you find a newer board, 4*30GB.
    Chances are you have smaller drives, hence the first estimate is the more accurate one.

    Does it pay to strap them in, use them until they die, lose data in the process, and waste electricity?
    Well, no - unless you want to do it for the 'geek' factor.

    The truth is, the older the hardware, the less reliable it is, and the more prone you are to losing data. And with a 120GB drive hovering around $300 (CAD), why even bother?

    Get one of those, add to an existing workstation, leave it on. Voila, cheapest solution possible, and not a lot of work required.

    And believe me, I am speaking from experience. I have had a p-166, p-200 and cel-266 all die on me within weeks/days, doing the exact thing you are looking to do. Then I got my current server duron-850, with a nice new board, and I have had no trouble for almost a year now.

    just my 2cents.
    • Not so true.
      New hardware has a higher failure rate than hardware 2 years old, because all the hardware that dies in the first 6 months is already dead after 2 years.

      Also, when mass production of old hardware hadn't yet come up to speed, components of far higher quality than required may have been used.

      e.g. The first CD players have far better lasers

      current CD players use top emitting laser diodes,
      old CD players use better side emitting laser diodes.

      The spindle on old CD players was manufactured to a stupid precision (a few atoms or something), because they could make crap spindles or amazing spindles, but not ones just good enough.

      I should imagine the same is true with a lot of electrical equipment

      I have a 30-year-old fridge, TV, hair dryer, dishwasher, etc., and they all work fine.

      • Actually, the first generation of CD players had lasers with much shorter run lives and much, much crappier AGC circuitry, which made them less capable of recovering from disc imperfections and scratches. Besides, modern CD players have multifrequency lasers for multiread compatibility.
        • I have a Sony CDP-101, and it works fine.
        • While it may be true that 'top of the range' CD players are built better than 10/15 years ago, bottom-of-the-range 'consumer durable' CD players are built like a paper boat.

          When CD players first came out you could only get one type, 'top of the range'; then, as they became more popular and technology allowed for easier mass production, cheap consumer durables came out.

          I can pick up a CD player for $30. I don't expect it will work that well or have a life of more than a few years, tops.

          Computers have now entered the consumer durable market. I don't know what lifetime they put on modern 130GB HDDs for Joe Public; it's certainly a lot less than top-of-the-range 40GB SCSI drives.

    • I agree with your basic point, but remember:
      The fact that a power supply is rated at 250W does NOT mean that it draws 250W constantly. It means that it can supply that much to its devices without puking.

      Probably your basic P1 system with 4 drives, not taxing the CPU and using APM to do CPU-idle calls, would draw 125-150 watts. More for a monitor, of course.

      That's still a non-trivial amount of electricity, of course. You could save some by spinning down the drives when not in use but you're probably at a minimum of 75 watts for a running pentium.

      The same principle applies to using such a Linux box as a NAT router instead of buying a dedicated Linksys or whatever - the Linksys probably draws 20 watts or less, and would pay for itself in electricity in a year or two.
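
      (Rough numbers, assuming ~$0.08/kWh: saving 130 watts around the clock is 130 W x 8760 h = ~1,140 kWh per year, or about $90 - which squares with a Linksys paying for itself in a year or two.)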
      • Right on. That is why I use an old Pentium 120MHz Compaq laptop. With two PCMCIA NICs and a 3GB disk, it works as a NAT router, firewall and a small fileserver for non-sensitive stuff only. The LCD screen is always off, and the disk even spins down when not in use - using noflushd (google is your friend), btw. /Pedro
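
        (On a regular desktop IDE disk, hdparm's standby timeout does something similar - a sketch, device name hypothetical; timeout values from 1 to 240 are multiples of 5 seconds:)

        hdparm -S 120 /dev/hda    # spin the disk down after 10 minutes of idle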
  • Cheap NAS (Score:4, Interesting)

    by Nyarly ( 104096 ) <nyarlyNO@SPAMredfivellc.com> on Wednesday July 31, 2002 @12:24PM (#3986872) Homepage Journal
    You can build your NAS on an old Linux box, without a doubt. My understanding is that using plain vanilla NFS with ext3 or similar is not going to get you the performance that a NAS appliance would.

    Basically, the appliances use special filesystems and NVRAM along with retuned NFS in order to squeeze out the speed - to the point where some NAS is faster than local storage.

    How much of this is available OSS, I wonder? Are there any NAS-ready filesystems out there? quickNFS? What about NVRAM cards/mbs and NFS to work with them?

    • Re:Cheap NAS (Score:3, Interesting)

      by n9hmg ( 548792 )
      Not all NAS vendors are doing that kind of optimization. Some vendors are making very-low-end systems, just like he's describing - actually, in my main experience, much lower-end. We standardized on the Maxtor MaxAttach NAS 4000 320GB machines, which are P166/64MB/4x80GB IDE, running FreeBSD 2.5. While we were negotiating the deal, they decided to end-of-life this new product, and have left it an unreliable, poorly performing product. They even refuse to fix the problem where the box locks up on reboot if rebooted while filesystems are mounted from its NFS shares.
      The basic concept is solid, and I'm probably going to end up doing just what the article is about, though with slightly higher-end hardware, so I can buy a hundred or so identical systems, make an image, and splat it onto them all. It should come in at about 25% of what all the NAS vendors are asking for equivalent products.
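
      (One cheap way to do the image-and-splat step - a sketch with hypothetical device names and addresses, assuming identical disks and classic netcat; each target boots from a rescue floppy/CD first:)

      # on the golden machine: stream a compressed disk image over the wire
      dd if=/dev/hda bs=64k | gzip -c | nc 192.168.0.50 9000
      # on each blank machine: receive and write it out
      nc -l -p 9000 | gunzip -c | dd of=/dev/hda bs=64k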
      • Try looking for corporates who are upgrading old kit, you can often pick up a job lot of identical or nearly-identical machines for a lot less than buying new. You might well find something around the PPro-200 mark at the moment which is ideal (much better I/O bandwidth than P5 systems).
      • Interesting that you are having the exact same problem with the MaxAttach 4000. I bought one to use at a video production company that I consult for, and they have had the exact same problem. Does Maxtor have any resolution or migration strategy for you, or are they hanging you out to dry just like they did to me?

          • Same problem, same non-resolution. We bought 36 of these worthless things, at 4000 USD each, and they refuse to do anything about it, saying that their "direction" is toward the Windows-based NAS, and no further development or bugfixes will be performed for the Unix-based NAS. We're evaluating the Iomega NAS 401u, Apple Xserve, and IBM NAS 100 as replacements. Whatever we end up with, I will never be party to another business relationship with them, given that they backed out of their promise (verbal only) to keep the product they sold at a certain minimal usability level.
          Now, watch me get laid off and go try to get a job there...
    • Check out the FAQ [sourceforge.net].

      I found that ext3 in data=journal mode got sync performance back up near async performance (async itself being something you almost never want for an NFS server). The NVRAM disk might be a good route; they're pricey but still cheaper than commercial NAS.
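
      (A sketch of that combination - mount point and client network are placeholders: data=journal goes in the fstab options for the exported filesystem, and sync/async is chosen per export in /etc/exports:)

      # /etc/fstab - full data journaling on the exported filesystem
      /dev/hdb1  /export  ext3  data=journal  0 2
      # /etc/exports - sync is the safe setting; async is the fast, risky one
      /export  192.168.0.0/255.255.255.0(rw,sync)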

      Does anyone have any good experience with particular NVRAM units?
    • This guy is talking about embedded controllers that talk SMB. He's not talking about building a Linux box, or any type of *nix box at all. NFS doesn't enter the picture either, much less performance issues. He seems aware of (and resigned to) any and all performance issues, since he's using old drives that were sitting on the shelf. The point is: how well do those embedded SMB/drive controllers work?
  • Why look at finding the oldest systems you can? I have a P2 300 system as our current file server, holding 300+GB across all HDs, and have yet to have a problem with it. The cost for me was 50 bucks for a new case, got at a local computer expo. Am I missing something here? g

  • I put one of these out; it runs real good.

    Bought a slimline IBM system from TigerDirect.

    The 10GB hard drive that came with the system was used to store the OS/etc.

    I dropped a 3Com NIC in it.

    Threw 2 120GB drives in it. Installed Linux, RAIDed them so they're mirrored: 120GB of storage.

    You could get away with not mirroring the drives and have 240GB of storage, but I wanted some redundancy!
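
    (With mdadm the mirror is a couple of commands - a sketch, partition names hypothetical; older setups used raidtools and /etc/raidtab instead:)

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdd1
    mke2fs -j /dev/md0    # ext3 on top of the mirror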
    • I used a similar method.
      I bought a Raidtronics server case with nine 5.25" drive bays, and a 3Ware Escalade 7000 with 4 Maxtor 81.9GB drives configured for RAID 5 (~245GB usable).
      I used an old Abit BP6 with 2 433MHz Celerons, 512MB RAM, and an Adaptec 2940 with a 4.6GB IBM SCSI disk for the OS (FreeBSD 4.6) and a Yamaha SCSI burner.
      It has an Intel dual-port server NIC configured for EtherChannel, connected to a Cisco 2924XL switch.
  • by InitZero ( 14837 ) on Wednesday July 31, 2002 @03:38PM (#3988110) Homepage

    I got hold of a bunch of Sun SCSI four-drive disk enclosures. I had an equally large bunch of four- to 18-gig drives. Add in a few surplus SCSI cards, and I ended up with more than 100 gigs worth of disk space attached to a small Linux box.

    The drives were quick enough (more spindles = more speed) for a small media server and I had no complaints.

    That was, until I noticed that my home office was now running six to eight degrees warmer than the rest of the house. That got me thinking about how much juice these guys draw. All told, I would be paying an extra few bucks a month in power.

    The straw that finally broke the camel's back was that having a dozen additional filesystems (yes, I could have striped them) to manage was a pain in the buttocks.

    In the end, I gave the drives to someone who had more time on his hands and bought myself a pair of 100-gig IDE drives.

    I don't know what you consider 'formerly huge' but unless your drives are bigger than 40 or 60-gig, it may not be worth your time. I know it would not be worth my time nor my electricity.

    InitZero

    • I don't know what you consider 'formerly huge' but unless your drives are bigger than 40 or 60-gig, it may not be worth your time. I know it would not be worth my time nor my electricity.

      A colleague of mine had a story about a place that replaced some ancient early hard drives that were so big (think refrigerator) with their modern equivalent (think breadbox) and made up for the replacement cost (and removal cost of the old unit) in reduced electric bills in a reasonably short time. (Sorry for lack of details but he's working and since he's my boss, I don't want to ask him to post this story.)

  • Um... why? (Score:3, Interesting)

    by Wakko Warner ( 324 ) on Wednesday July 31, 2002 @05:10PM (#3988617) Homepage Journal
    Buy a 3Ware Escalade RAID card. The real money is going to go to hard drives anyway; you're not going to save much by buying cheap-ass featureless controllers. At least, with real hardware RAID you're getting some resiliency.

    - A.P.
    • Hmm.. One RAID controller card costs less than a couple of cheap systems. The Escalade 12 can handle 12 ATA/133 drives. PCI. Hmm.. 6 PCI cards in one system has been done. 12*6=72 drives.

      I wonder if it can handle old drives, and behave as a simple IDE controller with all drives different. (At least RAID 0 or RAID 1 only needs 2 disks of same size -- or maybe waste some space on the larger drive)

      Oh, here is more info in the FAQ [3ware.com].

      • Some old drives are OK: "Escalade 7500 ATA RAID cards can be used with ATA/33, ATA/66, ATA/100, and ATA/133 disk drives."
      • Ah, it can handle drives separately: "JBOD is an acronym for 'Just a Bunch Of Drives'."
  • Can I use old drives in an old computer, and use some sort of "smartdrive" program to cache frequently accessed data to improve speed, right?

    Well... I'm not sure about the caching, but my Celeron 300A with 64MB of RAM works great. It has four HD channels onboard, and 8 more from the two HighPoint controllers. I run 12 drives in software RAID 0+1, all of them 40GB WD 7200RPM drives, and I have 240GB available. The speed of this array easily maxed out my network card's bandwidth (12MB/s vs. 50+MB/s), so I installed 3 3Com NICs. I still cannot match the 50+MB/s of the drives, but 36MB/s over a network is very good, since no one computer can pull this much through one NIC anyway. And this is just a Celeron 300A with 64MB of RAM.
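
    (If you'd rather have the NICs act as one fat pipe, the Linux bonding driver can team them - a sketch, interface names and address hypothetical, and the switch has to cooperate:)

    modprobe bonding
    ifconfig bond0 192.168.0.2 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1 eth2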

    I'm running a pretty fast setup with minimal CPU, so slower drives should work well with a low Pentium-class machine.

    PS - I have noticed that my network storage array is not as fast as I'd like, since I'm running a file server for my local network with 20+ machines on most of the time, but you could put multiple machines around your network to spread the load. Fewer people accessing the same resource will of course improve performance.

    just my $.02..good luck
  • Erm... (Score:1, Interesting)

    by Anonymous Coward

    ...I think the idea is to use the smallest, dumbest hardware available, and that all the "use old PC posts" are way off the mark.

    I'd look at something like the embsd.org [embsd.org] board (mentioned previously) rather than go with old PCs. Remember, the guy specifically mentions looking at Embedded Linux and finding it too much of a hassle, so I guess he doesn't want an old PC. He just wants a _simpler_ embsd board...

    Mod me up, please. I forgot my password!

  • A brand new 80G drive costs around $200AU

    Even a pile of "free" 1G, 2G, even 9G drives is going to take enclosures, wiring, power supplies and *space*.

    Sure it's possible, but will it be reliable? Will you spend all your time finding which of your 15 drives is offline today and rebuilding home-made RAID sets?

    Who pays your power bills?

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...