Network Attached Storage on a Budget?
Full'o'MP3 asks: "Wondering what to do with all those (formerly huge) hard disks on the shelves? Well, so am I. After looking at all sorts of USB enclosures, I remembered that, a long while ago, I saw a description/review/whatever of a small board (around 3" by 4") that essentially had an Ethernet interface on one end, a microcontroller in the middle and an IDE bus on the other end. It was designed only for that purpose, could not even format the hard disks on its own and only supported SMB without any access control, but by golly, I'm looking for about a dozen (or about 1 per 4 disks). Slap them inside old PC cases, fill them with hard disks, and you have a very simple, cheap file server for home or school. I've looked at a lot of embedded Linux and commercial storage stuff, but they are all overkill and require brand new hardware. Anyone have any pointers for this?
(Butchering old laptops, iPAQs or similar stuff won't cut it...)" Readers may remember this thread from early May about doing something similar with new hardware. Since this is the "budget version" of the similar question, I felt it was deserving of its own post. How hard would such a device be to build from old computer parts and hard disks? Details on cheap electronics (like the submitter-mentioned device) that would make this easier would be appreciated.
Just use old PC MBs (Score:3, Informative)
Re:Just use old PC MBs (Score:3, Interesting)
Actually, I did something similar. I have a Pentium 166MHz (AOpen), a 2 gig HDD, and a new 80 gig HDD. With only 16 megs of memory and a stripped-down version of Debian on it, it's fast enough to saturate the 10 Mbit/s connection via Samba. On a 100 Mbit/s connection, I use about 30% of it. (The IDE controller seems to be the limiting factor here -- but I haven't found an ATA/100 controller that I can 'borrow' yet.)
So, with new HDDs, your limiting factor should be the older IDE controllers on the motherboard. With old HDDs, the bottleneck should be the drive itself, which means that any old Pentium-era machine will work.
As the other poster said, it should cost you free or close to free. Plus, since it's all in software, it's easy enough to turn it into an FTP server/SMB server/webserver with just a few config files. Remember, Linux only needs the BIOS to see the boot partition; even with LILO, it's possible to support large disks on BIOSes that don't recognize them.
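As an illustration of the "few config files" point -- a minimal smb.conf sketch for an open, guest-readable share in the spirit of the original question (the workgroup, share name, and path here are assumptions, and this is Samba 2.x-era syntax with no access control, not a hardened config):

```
[global]
   workgroup = HOME
   server string = budget NAS
   security = share        ; open access, as the submitter described
   guest account = nobody

[storage]                  ; hypothetical share name
   path = /export          ; hypothetical mount point for the big disks
   guest ok = yes
   read only = no
```

Point a Windows box at \\servername\storage and you're done; swapping in an FTP or HTTP daemon on the same box is just another package and config file.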
Just my $.02
Re:Just use old PC MBs (Score:1)
These old controllers can limit more than just the speed. The onboard IDE on the P90 in my back room can't handle HDs larger than about 5 gigs. So when you start collecting old machines, keep in mind that they need to be new enough to handle your HDs, or you will just end up with more scrap parts on your shelf.
Re:Just use old PC MBs (Score:1)
Re:Just use old PC MBs (Score:3, Insightful)
I haven't met a low-end Pentium yet which is capable of saturating 100MHz ethernet, even in applications where disk IO is not part of the equation.
A new 80 gig drive would be vastly faster than the network, even at PIO 4 (16MB/sec, IIRC) or DMA33 or whatever old-school speed you've got the IDE interface running at.
If your network is not running full-duplex, you'll also have an impossible time saturating the wire because of that -- ethernet gets a lot more efficient when it can talk and listen at the same time, without looking out for collisions.
That all said, run bonnie or some other benchmark on the disk. If you see throughput in excess of, say, 7 or 8 megabytes per second, with sufficiently low CPU utilization to leave a bit for the overhead of tending to the NIC and Samba, then neither the hard disk nor its interface are any sort of bottleneck, and you should look elsewhere for an improvement in speed.
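For instance (assuming the classic bonnie benchmark is installed; -s sets the test-file size in megabytes, and you want it larger than RAM so the buffer cache doesn't flatter the numbers):

```shell
# 200 MB test file in /tmp; watch the block read/write rows and the %CPU columns
bonnie -d /tmp -s 200
```

If the block-read figure clears that 7-8 MB/s bar with CPU to spare, look at the NIC, duplex setting, or Samba tuning instead.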
A P233MMX CPU can be had for less than $20, these days, and would probably be trivial to configure on your AOpen board.
FWIW, I've got a K6-2 350 router/file server/print machine, with a 30-gig drive a couple of years old. I haven't done fancy IDE interface tweaking under FreeBSD and have no idea what transfer mode it's using, and the motherboard is positively ancient so I'd be surprised if even DMA33 were an option. But it shoots files across the (half-duplex) network at 5 megabytes per second, generally, which I recall being a vast improvement over the Pentium machines which predated it.
Good luck!
Re:Just use old PC MBs (Score:3, Informative)
# hdparm -tT
Timing buffer-cache reads: 128 MB in 3.10 seconds = 41.29 MB/sec
Timing buffered disk reads: 64 MB in 3.21 seconds = 19.94 MB/sec
So the hard drive can read about twice as fast as the network maximum. Another question would be how much CPU is needed to get it onto the wire?
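For reference, the ceiling that comparison assumes -- 100 Mbit/s Ethernet moves at most 12.5 MB/s of raw bits, before TCP/IP and SMB overhead shave it further:

```shell
# raw payload ceiling of 100 Mbit/s Ethernet, before protocol overhead
awk 'BEGIN { printf "%.2f MB/s\n", 100 / 8 }'
```

So a 19.94 MB/s disk already outruns the wire even before any protocol tax.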
Re:Just use old PC MBs (Score:2)
Re:Just use old PC MBs (Score:2)
As for CPU power, I think the rule of thumb is 100MHz/100Mbit for SPARC/RISC & 200MHz/100Mbit for x86.
Re:Just use old PC MBs (Score:2, Interesting)
B, 100Mb/s ethernet operates at 31.25MHz, not 100MHz.
C, at PIO 4, it's not going to get near 16MB/s. The processor would waste an insane amount of time copying the bytes from the interface.
D, I suspect you have something horribly misconfigured with your router/file server/print machine; perhaps you are using a PIO mode, as I mentioned above. I used a Pentium 200 with a 20 gig drive on a UDMA-33 card. It pushed an easy 8 to 9 megs a second.
Re:Just use old PC MBs (Score:2)
B, if 100Mb/s ethernet operates at 31.25MHz, I'd love to know how. AFAIK, it's serial, and binary (dual state) - thus, one bit per cycle. this link [google.com] seems to indicate that things are running at 100MHz on the wire.
C, sure, yeah, whatever. You missed the bit where I mentioned benchmarking and CPU utilization, obviously. And since it's just a fileserver, it doesn't need CPU for anything other than serving files. Who cares if it's inefficient, as long as it's doing the job as well as it is capable of? Unless the CPU turns out to be a bottleneck in such an arrangement, things would work JustFine. Everyone wants UDMA133, even if they don't know what it is or that it exists at all - that doesn't mean it's needed to flood such a slow medium as 100base-T.
D, bonnie tells me that, given a 200-meg test file, I'm getting local block reads and writes at a bit over 19 megabytes per second on this 30-gig Maxtor. But transfers across the network are still 5 or 6 megs per second, depending on phase of moon. Were things running full-duplex with a high-dollar Cisco switch, I might expect them to be somewhat faster, but they're not. It is apparent to me that you missed an important element of my description, the word "unswitched."
Show me an example of a machine of similar calibre to this K6-2 333, overclocked to 350 with a first-gen generic Super7 motherboard and 1 meg of L2, where half-duplex 100 megabit ethernet performs with any superiority to this using normal methods of TCP/IP data transfer between itself and another machine of similar ilk, and I'll eat my hat.
In closing, I'd like to remind you of two things:
First, my original point that the combination of a modern hard drive and slower IDE interface is not a bottleneck on a common 100MHz ethernet segment stands true.
Second, I wish that in the future you would actually read the postings you want to reply to, and then apply a touch of critical thinking to the points you think you'd like to make. If most of your writing is like this, I suspect those two simple steps would talk you out of producing the majority of it.
Re:Just use old PC MBs (Score:1)
> capable of saturating 100MHz ethernet, even in
> applications where disk IO is not part of the
> equation.
Is this running Windows or Linux? If so, that is quite possibly why (although I would still expect a low-end Pentium to be able to saturate 100 Mbit with, say, static HTTP requests that are cached in RAM). Much as we all love Linux, it is well proven that NetBSD, FreeBSD, and OpenBSD have a much more efficient TCP/IP stack, and thus are better on low-end machines, and for some tasks on high-end machines. This is why I run NetBSD on my servers (well, for now; I'm tempted to set up Solaris on one machine for FDDI support). I still use Linux on the desktop, though (along with MacOS, one Windows machine, and hopefully soon an Irix machine or two).
I saw something on freshmeat (Score:1)
'MOSIX'
You can use MOSIX to cluster those 486s together and get high-performance file data transfer: "The experimental MOSIX Parallel I/O (MOPI) package can read over 1,600 MB/S using 60 nodes."
DFAIT (Score:1)
This concept was implemented as of March 31st, 2002. Just thought it was cool that one of our procedures was on Slashdot lol
I run into the same thing (Score:2, Insightful)
To run 4 HDDs, you need an old 486/P1 (the previous post's suggestion), hence a 250W+ power supply.
What is the amount of storage space you gain? The most you can strap in is either 4*8GB (without special drivers), or 4*30GB if you find a newer board.
Chances are you have smaller drives, so the first estimate is the more likely one.
Does it pay to strap them in, use them until they die, lose data in the process, and waste electricity?
Well, no, unless you want to do it for the 'geek' factor.
The truth is, the older the hardware, the less reliable it is, and the more prone you are to losing data. And with a 120GB drive hovering around $300 CAD, why even bother?
Get one of those, add it to an existing workstation, leave it on. Voila, the cheapest solution possible, and not a lot of work required.
And believe me, I am speaking from experience. I have had a P-166, a P-200 and a Cel-266 all die on me within weeks/days, doing the exact thing you are looking to do. Then I got my current server, a Duron 850 with a nice new board, and I have had no trouble for almost a year now.
just my 2cents.
the older the hardware, the less reliable (Score:4, Interesting)
New hardware has a higher failure rate than hardware two years old, because all the hardware that was going to die in the first 6 months is already dead after 2 years.
Also, back when mass production of the old hardware had not yet come up to speed, components of far higher quality than required may have been used.
e.g. the first CD players have far better lasers: current CD players use top-emitting laser diodes, while old CD players use better side-emitting laser diodes. The spindle on old CD players was manufactured to a stupid precision (a few atoms or something), because they could make crap spindles or amazing spindles, but not ones just good enough.
I should imagine the same is true of a lot of electrical equipment.
I have a 30-year-old fridge, TV, hair dryer, dishwasher, etc., and they all work fine.
Re:the older the hardware, the less reliable (Score:3, Informative)
Re:the older the hardware, the less reliable (Score:2)
apples and pears (Score:2)
When CD players first came out you could only get one type, 'top of the range'; then, as they became more popular and technology allowed for easier mass production of CD players, cheap consumer durables came out.
I can pick up a CD player for $30; I don't expect it will work that well or have a life of more than a few years, tops.
Computers have now entered the consumer-durable market. I don't know what lifetime they put on modern 130GB HDDs for Joe Public; it's certainly a lot less than on 40GB top-of-the-range SCSI drives.
power usage (Score:1)
The fact that a power supply is rated at 250W does NOT mean that it draws 250W constantly. It means that it can supply that much to its devices without puking.
Probably your basic P1 system with 4 drives, not taxing the CPU and using APM to make cpu-idle calls, would draw 125-150 watts. More for a monitor, of course.
That's still a non-trivial amount of electricity, of course. You could save some by spinning down the drives when not in use but you're probably at a minimum of 75 watts for a running pentium.
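Spinning the disks down can be done with hdparm (device name assumed; -S takes units of 5 seconds, so 120 means a 10-minute idle timeout):

```shell
# spin down /dev/hdb after 10 minutes idle (120 * 5 s); needs root
hdparm -S 120 /dev/hdb
```

Run once per data drive at boot; the drives spin back up transparently, at the cost of a few seconds' latency on the first access.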
The same principle applies to using such a Linux box as a NAT router instead of buying a dedicated Linksys or whatever -- the Linksys would probably draw 20 watts or less, and pay for itself in electricity in a year or two.
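To put numbers on that (the figures here are assumptions -- a steady 125 W draw and a rate of 10 cents/kWh; plug in your own):

```shell
# hypothetical figures: 125 W continuous draw, 10 cents per kWh
watts=125
cents_per_kwh=10
kwh_per_month=$(( watts * 24 * 30 / 1000 ))       # watt-hours per month / 1000
cost_cents=$(( kwh_per_month * cents_per_kwh ))
printf '%d kWh/month, $%d.%02d/month\n' "$kwh_per_month" $(( cost_cents / 100 )) $(( cost_cents % 100 ))
```

At those assumed rates the box costs about nine dollars a month just to idle, which adds up against the price of a bigger new drive.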
Re:power usage (Score:1)
Cheap NAS (Score:4, Interesting)
Basically, the appliances use special filesystems and NVRAM along with retuned NFS in order to squeeze out the speed - to the point where some NAS is faster than local storage.
How much of this is available OSS, I wonder? Are there any NAS-ready filesystems out there? quickNFS? What about NVRAM cards/mbs and NFS to work with them?
Re:Cheap NAS (Score:3, Interesting)
The basic concept is solid, and I'm probably going to end up doing just what the article is about, though with slightly higher-end hardware, so I can buy a hundred or so identical systems, make an image, and splat it onto them all. It should come in at about 25% of what all the NAS vendors are asking for equivalent products.
Re:Cheap NAS (Score:1)
Max Attach (Score:2)
Re:Max Attach (Score:1)
Now, watch me get laid off and go try to get a job there...
Performance Tuning can make a big difference (Score:3, Informative)
I found that ext3 in data=journal mode got sync performance back up near async performance (async being something you almost never want on a file server). An NVRAM disk might be a good route; they're pricey but still cheaper than commercial NAS.
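For reference, data=journal is just an ext3 mount option -- e.g. an /etc/fstab line like the following (the device and mount point here are assumptions):

```
# full data journaling on the exported filesystem
/dev/hdb1   /export   ext3   data=journal   0 2
```

The same option can be passed at mount time with mount -o data=journal; it journals file data as well as metadata, which is what buys back the sync-write performance.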
Does anyone have any good experience with particular NVRAM units?
Re:Cheap NAS (Score:1)
Why that old? (Score:1)
This works alright... (Score:1)
I put one of these out; it runs real well.
Bought a slimline IBM system from TigerDirect.
The 10GB hard drive that came with the system was used to store the OS, etc.
I dropped a 3Com NIC in it.
Threw 2 120GB drives in it. Installed Linux, RAIDed them out so they're mirrored: 120GB of storage.
You could get away with not mirroring the drives and have 240GB of storage, but I wanted some redundancy!
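A sketch of that mirroring step with Linux software RAID, shown here with mdadm (the device names and mount point are assumptions; the raidtools//etc/raidtab route of the same era accomplishes the same thing):

```shell
# mirror the two 120 GB drives into one md device, then put a filesystem on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdc1
mkfs.ext3 /dev/md0
mount /dev/md0 /storage

# watch the initial mirror sync progress
cat /proc/mdstat
```

Either drive can then die without losing the data, at the cost of half the raw capacity.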
Re:This works alright... (Score:1)
I bought a Raidtronics server case with nine 5.25" drive bays, and a 3Ware Escalade 7000 with 4 Maxtor 81.9 GB drives configured for RAID 5 (~245 GB usable).
I used an old Abit BP6 with two 433 MHz Celerons, 512MB RAM, and an Adaptec 2940 with a 4.6 GB IBM SCSI disk for the OS (FreeBSD 4.6) plus a Yamaha SCSI burner.
It has an Intel Dual Port server nic configured for EtherChannel connected to a Cisco 2924XL switch.
Throw the Drives Away -- Electricity Ain't Free (Score:4, Insightful)
I got a hold of a bunch of Sun SCSI four-drive disk enclosures. I had an equally large bunch of four to 18 gig drives. Add in a few surplus SCSI cards and I ended up with more than 100 gig worth of disk space attached to a small linux box.
The drives were quick enough (more spindles = more speed) for a small media server and I had no complaints.
That was, until I noticed that my home office was now running six to eight degrees warmer than the rest of the house. That got me thinking about how much juice these guys draw. All told, I would be paying an extra few bucks a month in power.
The straw that finally broke the camel's back was that having a dozen additional filesystems (yes, I could have striped them) to manage was a pain in the buttocks.
In the end, I gave the drives to someone who had more time on his hands and bought myself a pair of 100-gig IDE drives.
I don't know what you consider 'formerly huge' but unless your drives are bigger than 40 or 60-gig, it may not be worth your time. I know it would not be worth my time nor my electricity.
InitZero
Re:Throw the Drives Away -- Electricity Ain't Free (Score:2)
A colleague of mine had a story about a place that replaced some ancient early hard drives that were so big (think refrigerator) with their modern equivalent (think breadbox) and made up for the replacement cost (and removal cost of the old unit) in reduced electric bills in a reasonably short time. (Sorry for lack of details but he's working and since he's my boss, I don't want to ask him to post this story.)
Um... why? (Score:3, Interesting)
- A.P.
Re:Um... why? (Score:1)
I wonder if it can handle old drives and behave as a simple IDE controller with all the drives different. (At least RAID 0 or RAID 1 needs only 2 disks of the same size -- or maybe waste some space on the larger drive.)
Oh, here is more info in the FAQ [3ware.com].
i think the question here is: (Score:2, Informative)
Well, I'm not sure about the caching, but my Celeron 300A with 64MB of RAM works great. It has four HD channels onboard, and 8 additional from the two HighPoint controllers. I run 12 drives in software RAID 0+1, all of them 40Gb WD 7200RPM drives, and I have 240Gb available. The speed of this array easily maxed out my network card's bandwidth (12MB/s vs. 50+MB/s), so I installed 3 3Com NICs. I still cannot match the 50+MB/s of the drives, but 36MB/s over a network is very good, since no one computer can pull this much through one NIC anyway. And this is just a Celeron 300A with 64MB of RAM.
I'm running a pretty fast setup with minimal CPU, so slower drives should work well with a low Pentium-class machine.
PS - I have noticed that my network storage array is not as fast as I'd like, since I'm running a file server for my local network with 20+ machines on most of the time, but you could put multiple machines around your network to spread the load. Fewer people accessing the same resource will of course improve performance.
Just my $.02. Good luck!
Erm... (Score:1, Interesting)
I'd look at something like the embsd.org [embsd.org] board (mentioned previously) rather than go with old PCs. Remember, the guy specifically mentions looking at Embedded Linux and finding it too much of a hassle, so I guess he doesn't want an old PC. He just wants a _simpler_ embsd board...
Mod me up, please. I forgot my password!
Do-able, but maybe not cost effective. (Score:1)
Even a pile of "free" 1G, 2G, even 9G drives is going to take enclosures, wiring, power supplies and *space*.
Sure it's possible, but will it be reliable? Will you spend all your time finding which of your 15 drives is offline today and rebuilding home-made RAID sets?
Who pays your power bills?