What To Do With a Hundred Hard Drives? 487
Makoto916 writes "In five years with my current employer as the IT administrator, I've amassed a sizable cabinet of discarded hard drives; just shy of 100, in fact. All of the drives range in size from 20GB up to 300GB. They've all been stored in anti-stat bags, and spot checks of even the oldest ones show that most of them still work. Individually, they're mostly useless for our line of work, which is digital video production. However, the collective storage potential is quite significant. They are of varying size and speed, but the one commonality is they're all IDE. What is the best way to approach connecting all of these devices and realizing their storage potential? On a budget, of course. Now, I'd never use such an array for critical data storage, but it certainly would be useful as a massive backup array to our existing SAN that does store critical data. I have several spare and functioning PCs, but not nearly enough to utilize their internal IDE controllers; even with multiple add-in controllers, it still wouldn't be enough. Not to mention the nightmare of managing a bunch of independent PCs. I've looked into ATA Over Ethernet and there's a lot of potential there, but current 15 to 20 bay AoE cabinets are expensive, and single device enclosures are so rare that they're also expensive. Are there any hardware hackers out there who have crafted their own home-brew AoE systems? Could they scale to 100 drives? Is there a better way?"
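For the home-brew AoE route, the Linux userspace target is the vblade daemon from the aoetools project: one process per exported drive, so scaling toward 100 drives is mostly a question of how many IDE channels fit in the spare PCs. A minimal sketch, assuming Linux on both ends, eth0 as a dedicated storage segment, and arbitrary example shelf/slot numbers and device names:

```shell
# On each export box: one vblade process per drive.
# Usage: vblade <shelf> <slot> <ethernet-if> <device>
vblade 0 0 eth0 /dev/hda &
vblade 0 1 eth0 /dev/hdb &

# On the machine that aggregates the storage:
modprobe aoe        # load the AoE initiator driver
aoe-discover        # rescan the wire for exported shelves
ls /dev/etherd/     # drives show up as e<shelf>.<slot>, e.g. e0.0
```

AoE isn't routable, so keep everything on one switched segment; once discovered, the exported drives can be pooled with software RAID or LVM like any local disk.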
Earn a little extra on the side (Score:4, Informative)
How About Just a Dozen? (Score:5, Informative)
At first I just got a dozen SATA/EIDE USB slaves for $10 each, and plugged them all into a USB hub, with just the single USB cable stretching out of the case over to another full PC's USB socket. But that is so slow, especially when copying big music or video files between drives (and through the single USB cable to the CPU and back). Playing multiple media files to different terminals in my house is too much bandwidth for the single USB, too. Running 4 USB from the big enclosure to the 4 sockets in the server PC isn't much better, because it all goes through the same CPU and PCI bus.
So I got 3 Sabrent SBT-SRD4 [google.com] 4xSATA controller PCI cards, because they were $25 each. But when I tried to boot them in a few different motherboards (pre-HP Compaq P3/1.2GHz, IBM P4/3.2GHz), none of them got past the POST to even start booting the OS. I want to use them with Linux, but with the failure to even boot I'm not hopeful about driver support, either.
I bought them from CompUSA (still alive, online only), which hasn't replied to (email only - no phone available) tech support requests. Nor has Sabrent itself. I'm not hopeful that they'll refund my money, since everything else about this transaction has sucked.
So what I want to know is what cheap motherboard (no need for graphics or anything else other than at least 3 PCI slots and 100Mb-1Gb ethernet) will work with these SATA cards? If they're really duds, what is the cheapest way to get 12 SATA drives controlled, even if they're not that fast, over to 100Mb/Gb ethernet? Either SATA cards + motherboard, or even a fat mobo with a dozen SATA ports. I'd even settle for just 4-8 SATA ports to get started. I'm talking under $200 if possible.
Ideas? If it works, then 8-9 of them should support the 100 HDs the original question was asking about.
Re:Free Geek (Score:3, Informative)
It's quite easy for computer recycling charities to get working computers, but because of data security policies at a lot of companies they are not allowed to recycle hard-drives. This means that a disproportionate number of computers to hard-drives float around until they're finally scrapped (which overall costs the charity more time, effort and money).
For example, I have a 9gb and a 26gb drive in my main development machine - with a few 40gb and 125gb drives waiting for me to upgrade to (80%
Data recovery services (Score:5, Informative)
Apparently, a lot of failed hard drives are not bad because of their physical platters, but because of the drive logic. These places need old drives for replacement controllers that you probably can't buy from the manufacturer.
Re:Not worth the trouble (Score:1, Informative)
Careful with the magnets (Score:5, Informative)
Just keep in mind these are *STRONG* magnets. When you take the drive apart, the magnets may smash into each other, which can send particles flying in a direction that, according to Murphy, is where your eyes are. I know this from experience; luckily for me, I wear glasses. And if some of your flesh is between the magnets, it's painful.
Re:magnets (how to keep them?) (Score:5, Informative)
Re:Bunches of small drives (Score:5, Informative)
Re:Bunches of small drives (Score:5, Informative)
http://www.fieldlines.com/story/2006/2/9/13128/15117 [fieldlines.com]
http://www.fieldlines.com/story/2006/10/8/112046/572 [fieldlines.com]
http://www.fieldlines.com/story/2005/9/24/152446/359 [fieldlines.com]
How to remove Hard Drive magnets from their mounting plate
http://www.fieldlines.com/story/2006/10/4/181345/402 [fieldlines.com]
Recycling parts from Hard drives
http://www.fieldlines.com/story/2006/11/9/01948/0162 [fieldlines.com]
Re:magnets (how to keep them?) (Score:1, Informative)
Donate? (Score:5, Informative)
A lot of our donated computers don't come with hard drives, so we're always in need of hard drives more than just about anything else.
We wipe all drives to DoD standards before ever putting them in anything, too. (Well, anything other than the machines we use to wipe 'em.)
If you don't want to ship them all the way to Eugene, there's lots of other charities that do the same kind of thing, and probably have the same disproportionate computer to hard drive donation ratio.
Re:Not worth the trouble (Score:4, Informative)
Re:Bunches of small drives (Score:3, Informative)
The drives alone will consume close to 1000W, and it's probably another 1000W for the equipment to run them, plus whatever the hardware costs are. When you add in A/C costs, that's going to come to around $8-10/day, and depending on the average drive size, you're going to end up with less than 10TB of redundant data.
Now the alternative is 12x1TB RAID6. It will consume around 250W and cost around $4000. That's around two years before the power budget catches up, assuming you already have all the necessary hardware for the old drives. Since you actually have to buy all that hardware, you'll catch up in under a year.
This isn't at all considering the limited lifespan of the already used equipment.
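The arithmetic behind that $8-10/day figure is easy to check; a back-of-the-envelope sketch, assuming 2000W of total draw, $0.12/kWh, and roughly 50% extra for the A/C to pump the heat back out (all three numbers are assumptions -- adjust for your site):

```shell
#!/bin/sh
# Daily power cost for the 100-drive array; every figure is an estimate.
watts=2000            # drives + controllers + host PCs
cents_per_kwh=12      # assumed electricity rate
kwh_per_day=$(( watts * 24 / 1000 ))          # 48 kWh per day
raw_cents=$(( kwh_per_day * cents_per_kwh ))  # 576 cents/day at the wall
total_cents=$(( raw_cents * 3 / 2 ))          # +50% for cooling overhead
echo "about \$$(( total_cents / 100 )).$(( total_cents % 100 )) per day"
# prints: about $8.64 per day
```

That lands inside the $8-10/day range quoted above; a higher kWh rate or heavier A/C load pushes it toward the top of it.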
Ummm in a word.... no (Score:5, Informative)
1. Call a recycler and dump the drives smaller than 200GB (keep the largest ones to give out to other employees for their home systems).
2. Buy 2 or 3 1TB HDDs.
3. Install them in a box.
4. Done.
Start with the sheer cost of the additional equipment, then add in the cost of the electricity to run the drives and their controllers, then add in the cost of HVAC to keep the room they're in cool. That will far exceed the cost of 2 or 3 1TB drives, not to mention the cost of your time to build, deploy and maintain it all.
In short, nothing you can do with these drives will save your employer money. However, proper recycling might bring in a buck or two, not to mention the goodwill when you hand the largest drives to fellow employees to use at home.
Re:One idea... (Score:1, Informative)
http://www-03.ibm.com/systems/storage/tape/virtualization/index.html
Re:How About Just a Dozen? (Score:1, Informative)
Re:100 ata hard drives? forget going green (Score:5, Informative)
500-800W to run 100 HDDs. Some PCs use that much alone. Even these days, it's still worth using older HDDs, because the cost of replacing them with bigger and more energy efficient ones is still not low enough to cover the cost of running an older drive for a few years. Especially if your NAS supports power saving.
That was supposed to be +1 (Score:2, Informative)
Re:1 word: magnets (Score:5, Informative)
Re:Bunches of small drives (Score:2, Informative)
firewire (Score:4, Informative)
Keep in mind this will be noticeably slower than native IDE once you get more than a certain number of drives on a single bus, but for some applications, fast disk access isn't as important.
Technically speaking, you can use USB for this too; however, there are many more downsides.
It's many times slower than FireWire, due to the way USB handles bidirectional communication.
It's not that much cheaper, and you can't use nearly as many drives per bus.
As an example, try http://www.fwdepot.com/ [fwdepot.com]
Their prices are a bit high, I admit, but you can build a shopping list there and look around for the best price.
4-BUS FireWire cards. Note that a 4-port card is not at all the same thing: that's one bus with a 4-port hub built in. The fewer drives on each bus, and the more buses you have, the more bandwidth is available to each disk, and the speedup is substantial.
One bridge board per hard drive, a few hubs and some cabling, and spread them out over your few spare PCs.
Then run something like http://evms.sf.net/ [sf.net] to cluster the machines together and create one giant pool of storage space out of all the drives over all the machines.
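If EVMS turns out to be more machinery than you need, plain LVM2 does the single-machine half of the same job; a sketch, assuming three of the drives show up as /dev/sdb through /dev/sdd (device names and the volume-group name are placeholders, and pvcreate destroys whatever is on the disks):

```shell
# Tag each disk as an LVM physical volume (wipes existing data!).
pvcreate /dev/sdb /dev/sdc /dev/sdd
# Pool them into one volume group...
vgcreate bigpool /dev/sdb /dev/sdc /dev/sdd
# ...then carve one logical volume out of all the free space.
lvcreate -l 100%FREE -n backup bigpool
mkfs.ext3 /dev/bigpool/backup
mount /dev/bigpool/backup /mnt/backup
```

Losing any one member drive takes out the whole volume, so this belongs strictly in the disposable-backup tier, not anywhere near the SAN.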
It's probably as cheap as you can get for putting them to use storage-wise; any other 'better' solution will cost a lot more, too.
Of course, useful for storage and just plain useful are two different metrics.
A lot of others already mentioned donating them.
Just remember to hook 4 up at a time to a spare PC and run a good HD wipe app like http://dban.sf.net/ [sf.net]
But there are many options to get rid of them to others with.
Charity donations for a tax write off, local community projects in need of hardware, friends, family, stocking stuffer for the staff, make a craigslist post and offer them for free (or next to), buyer comes to get it or pays shipping, do the ebay dance, etc etc
Re:1 word: magnets (Score:3, Informative)
Re:Earn a little extra on the side (Score:2, Informative)
you're going to need about 280,000 disk drives to get your ounce of gold...
Re:Earn a little extra on the side (Score:4, Informative)
Wipe when removed (Score:1, Informative)
If the drive doesn't work, it's shredded.
Comment removed (Score:5, Informative)
Re:Bunches of small drives (Score:2, Informative)
* Bend open the plate with the magnets and clamp it into a vise. (Usually the plate is a "U" shape, with the magnets inside the U.)
* Heat up the plate with a heat gun: the magnets are glued on there, and this will melt the glue.
* Pull off the magnet with some pliers.
Easy, and it works every time.
Get a real backup solution (Score:4, Informative)
DoD now specifies to degauss or slag drives (Score:5, Informative)
The standards for data sanitization are more stringent now. Anything that is more sensitive than Classified, and leaves the control of the organization disposing of the drives, needs to be either put through a degausser, chopped up into tiny pieces, or turned into slag. If the media is simply going to be re-used within the organization, then wiping is okay.
Re:1 word: magnets (Score:1, Informative)
Re:Bunches of small drives (Score:5, Informative)
We experimented with that at the shop. Your typical degaussing ring doesn't generally have the field strength to wipe 'em. Heck...in our test, after zero-writing 'em, and checking 'em after 5, 10, 30, and 60 seconds of D-ring exposure we didn't appear to lose a bit.
Note: dedicated hard drive degaussers can get really expensive, too... It's MUCH cheaper to stick with software methodology. Have a look here [oss-spectrum.org] for details on both methods...
Re:Bunches of small drives (Score:4, Informative)
dd if=/dev/zero of=/dev/hdb bs=1M &
dd if=/dev/zero of=/dev/hdc bs=1M &
etc., substituting the device name of each drive to be wiped.
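The same idea scales past two drives with a small loop; a sketch that runs one dd per target in parallel and also works on ordinary files, which makes it easy to try without sacrificing hardware (the size probe falls back from blockdev to wc for regular files):

```shell
#!/bin/sh
# Zero-wipe every device (or plain file) named on the command line,
# one dd per target, all running in parallel.
wipe_all() {
    for dev in "$@"; do
        # Size in bytes: block devices via blockdev, files via wc.
        bytes=$(blockdev --getsize64 "$dev" 2>/dev/null || wc -c < "$dev")
        dd if=/dev/zero of="$dev" bs=1M conv=notrunc \
           count=$(( (bytes + 1048575) / 1048576 )) 2>/dev/null &
    done
    wait    # don't return until every dd has finished
}
```

Call it as `wipe_all /dev/hdb /dev/hdc /dev/hdd` from a rescue environment; for drives actually leaving the building, you'd still follow up with a multi-pass tool like DBAN.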
Re:Careful with the magnets (Score:3, Informative)
I took apart an old 1GB hard disk (practically less than worthless these days) just to get the magnet out. It now holds my cell phone case closed (the weak magnets that were on there were crap and my phone kept falling out). Now it won't come apart without a strong tug. A strong magnet and a weak magnet make the perfect latch without being too strong.
The strong magnetic field hasn't affected my cell phone at all. (it's been exposed to the field practically all the time for the past few months except when I'm making a call )
Re:1 word: magnets (Score:5, Informative)
There are no magnetic monopoles in theory, either. Maxwell's four equations, which define all of electromagnetism, include Gauss's law of magnetism. This law states that magnetic fields have zero net divergence.
It's usually written in differential form as: del * B = 0 (del dot B = 0). Note that physics students from bush-league universities might write the equation in integral form, but that's either a product of their deficient education or maybe some kind of genetic defect.
More here (wikipedia):
http://en.wikipedia.org/wiki/Gauss's_law_for_magnetism [wikipedia.org] and here:
http://en.wikipedia.org/wiki/Maxwell's_equations [wikipedia.org]
Yeah, I suppose magnetic monopoles might exist and then we'd re-write the laws, but there's no reason to assume so. There is a natural temptation to look at magnetism the same as electricity (individual charges, like electrons and protons, being analogous to "North" and "South" monopoles), but probably the most useful way to think of magnetism is as a relativistic effect of electrostatics... once you do that, there's no reason to assume any kind of magnetic monopole at all.
(/geek)
Re:Not worth the trouble (Score:4, Informative)
Enterprise drives are definitely more expensive, but in this case you get what you pay for -- a lot more speed (especially with large, random seeks) and decent redundancy. The drives themselves are in the million to 1.4 million hour MTBF range, while consumer-level drives either don't have a rating or the MTBF is hard to find, so the best guess is 250,000 to 500,000 hours, although some do carry a million-hour MTBF.
The key is to figure out the task at hand and one's budget, and decide that way. For some tasks, just hooking up drives to the motherboard and using software RAID is more than workable. Other tasks are so time-dependent that one has to have full hardware RAID with as many low-capacity spindles as possible to distribute the I/O far and wide. This is why Flash drives are making a good dent in the enterprise RAID market -- they are not perfect, but there is zero time wasted waiting for the head to move and the right sector to float by.
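The software-RAID option mentioned above is nearly a one-liner on Linux with mdadm; a sketch, assuming four data drives plus a hot spare at /dev/sdb through /dev/sdf (all device names are placeholders):

```shell
# RAID5 across four drives with one hot spare; /dev/md0 is then
# used like any single disk.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
mkfs.ext3 /dev/md0
cat /proc/mdstat    # watch the initial resync progress
```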
Re:Bunches of small drives (Score:5, Informative)
This is always a good idea. Move the swap and the Windows temp to this drive and keep it formatted FAT32 (or lower). If you can, partition the disk up and give it two 2 gig partitions, each formatted FAT16 (aka FAT, no 32). FAT32 and FAT16 need fewer disk reads/writes per transaction than NTFS and are much faster for it. Since it's just swap and temp files, you don't need NTFS.
If you do this, you leave the rest of the drive open for users' personal files or whatever. The two-partition setup should be one for swap files, the other for temp files. You can get more creative and create another for a web browser cache, but the more partitions you create, the farther the drive head has to move to span the space, which slows down the operation. A large FAT32 partition works well if you dig deep and move a lot off onto this second drive. Another thing to keep in mind is what people are going to be putting into their temp directory. Video work might create files bigger than the partition; in that case, create a 2 gig FAT16 swap partition and leave the rest FAT32 for normal files.
I always get a second drive now for this reason. Helps in both Windows and Linux. For even better results, keep the drives on different IDE channels. Just think, the overall strategy is to keep one disk working on program data and the other working on the memory swap data - a major bottleneck, especially at IDE speeds.
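From the Linux side, the layout described above comes down to a few mkfs.vfat calls; a sketch, assuming the second drive is /dev/hdb and has already been partitioned (the partition numbers are illustrative):

```shell
# FAT16 ("plain FAT") for the two small 2 GB partitions...
mkfs.vfat -F 16 /dev/hdb1    # swap-file partition
mkfs.vfat -F 16 /dev/hdb2    # temp-file partition
# ...and FAT32 for whatever space is left over.
mkfs.vfat -F 32 /dev/hdb3    # bulk/personal files
```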
Re:1 word: magnets (Score:2, Informative)
Re:Bunches of small drives (Score:3, Informative)
NTFS is not significantly slower than FAT, and in fact can be faster due to improved caching and resistance to fragmentation. Sure, sometimes more data need to be written to the disk than with FAT, but in practice it just gets cached until the disk is free to write it without interrupting anything else.