Data Storage

Making Use of Terabytes of Unused Storage

kernspaltung writes "I manage a network of roughly a hundred Windows boxes, all of them with hard drives of at least 40GB — many have 80GB drives and larger. Other than what's used by the OS, a few applications, and a smattering of small documents, this space is idle. What would be a productive use for these terabytes of wasted space? Does any software exist that would enable pooling this extra space into one or more large virtual networked drives? Something that could offer the fault-tolerance and ease-of-use of ZFS across a network of PCs would be great for small-to-medium organizations."
  • not enough info (Score:2, Interesting)

    by YrWrstNtmr ( 564987 ) on Saturday February 09, 2008 @10:39AM (#22359724)
    Is this a company, a college, or just a random collection of boxes in your mom's basement? What does your organization want to do that it can't for lack of a few terabytes? And what does the actual owner of these boxes have to say about your little enterprise?
  • use it local (Score:1, Interesting)

    by Anonymous Coward on Saturday February 09, 2008 @10:52AM (#22359804)
    You could run extensive versioning (e.g. with Subversion) on each machine individually to get some benefit out of the unused disk space and computing power. Users who accidentally overwrite or delete a file could get it back from their own disk. Anything NFS-like would generate a lot of network traffic, and bandwidth is often the limiting factor.
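    A rough sketch of that per-machine versioning, assuming the Subversion command-line tools are installed (the repository and document paths here are made up for illustration):

        import subprocess

        REPO = "C:/svnrepo"                # hypothetical: repo on the idle disk space
        DOCS = "C:/Users/alice/Documents"  # hypothetical: directory to protect

        # Create a local repository on the machine's own spare disk.
        subprocess.run(["svnadmin", "create", REPO], check=True)

        # Import the documents, then check them out in place so later edits
        # can be committed and accidental deletions recovered.
        subprocess.run(["svn", "import", DOCS, f"file:///{REPO}/docs",
                        "-m", "initial import"], check=True)
        subprocess.run(["svn", "checkout", "--force", f"file:///{REPO}/docs", DOCS],
                       check=True)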
  • dCache (Score:3, Interesting)

    by Rev Saxon ( 666154 ) on Saturday February 09, 2008 @10:56AM (#22359830) Homepage
    http://www.dcache.org/ [dcache.org] You will need a system to act as a master, but otherwise your normal nodes should work great.
  • by ralph90009 ( 1088159 ) on Saturday February 09, 2008 @11:03AM (#22359884)
    While I was in college, I worked in the IT department. In my experience, your end-users will have a proverbial shit-fit if their computer's hard drive starts spinning up when they aren't doing anything. While it would be nice to use the spare space for data storage, I'm not sure it would be worth the headache. The volume of user complaints would skyrocket, you'd have to train them to leave their machines on all the time, and you'd have a distributed data pool to manage. Changing user behavior is like teaching a two-year-old to say "thank you" (it's possible, but not fun), and your electrical and manpower expenses would probably outstrip the savings.
  • Re:vista? - DFS (Score:5, Interesting)

    by OnlineAlias ( 828288 ) on Saturday February 09, 2008 @11:39AM (#22360130)

    This is why SAN manufacturers have come up with "thin provisioning". NetApp is quite good at it; read more here [netapp.com].
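    The trick behind thin provisioning is easy to demonstrate with a sparse file on any Unix-like filesystem; a minimal sketch (the file name is arbitrary):

        import os

        # Promise 10 GB up front, but let the filesystem allocate blocks
        # only when data is actually written (a sparse, "thin" volume).
        with open("thin.img", "wb") as f:
            f.seek(10 * 1024**3 - 1)   # seek to the last byte of 10 GB...
            f.write(b"\0")             # ...and write it, leaving a giant hole

        st = os.stat("thin.img")
        print("apparent size:", st.st_size)          # ~10 GB promised
        print("actually used:", st.st_blocks * 512)  # only a few KB allocated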
  • Birth of the Matrix? (Score:5, Interesting)

    by TropicalCoder ( 898500 ) on Saturday February 09, 2008 @11:41AM (#22360146) Homepage Journal

    What would be a productive use for these terabytes of wasted space?

    Well, I had this idea when I read about some open source software that allowed distributed storage (sorry, I forget what it was, but by now I'm sure it has already been mentioned in this discussion). The idea was this: suppose we have such software for unlimited distributed storage, so that people can download it and volunteer some unused space on their HD for a storage pool. Then suppose we have some software for distributed computing, like we have for the SETI program. Now we have ziggabytes of storage and googleplexflops of processing power. What can we do with that? How about, for one thing, storing the entire internet (using compression, of course) on that endless distributed storage, and then running a decentralized, independent internet via P2P software? The distributed database could be constantly updated from the original sources, and the distributed storage then becomes, in effect, a giant cache that contains the entire internet. Now we could employ the distributed computing software to datamine that cache, and we could have searching independent of Google or Yahoo or M$FT. Beyond that we could develop some AI that uses all that computing power and all that data to do... what? I'm not sure yet. Just thought I would throw this out there to perhaps get stepped on, or, who knows, inspire further thought.
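    For what it's worth, the "giant cache" part is easy to sketch: store pages compressed and content-addressed, so any node holding a chunk can serve it and any client can verify it. A toy, single-process illustration (the dict stands in for the distributed pool):

        import hashlib, zlib

        store = {}  # stand-in for the distributed storage pool

        def put(content: bytes) -> str:
            # The hash of the content is its address in the pool.
            key = hashlib.sha256(content).hexdigest()
            store[key] = zlib.compress(content)
            return key

        def get(key: str) -> bytes:
            data = zlib.decompress(store[key])
            # Re-hashing lets any client detect a corrupt or lying node.
            assert hashlib.sha256(data).hexdigest() == key
            return data

        key = put(b"<html>a cached copy of some page</html>")
        print(get(key))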

  • by BobTosh ( 740023 ) on Saturday February 09, 2008 @02:59PM (#22361810) Homepage Journal
    RAID on top of NBD works (with caveats). I tried a proof of concept once: a RAID5 array made out of NBD units. The configuration needs to be thought through carefully so that data is striped across enough clients to avoid excessive resource use (CPU and network) at the client end. I did it by building one Linux PC, "mounting" each of the NBD pieces shared by the end-user Windows PCs, then simply building RAID over the top of that. With sufficient planning you can make it quite resilient, just in case a user decides to switch off their PC. I did find that rebuilding the stripes when a PC was turned off caused the "server" (i.e. the Linux PC) to be heavily utilised, which gave the clients that mounted the shared-out space quite poor performance. The only way I can see of making this a serious possibility would be to beef up the "server" significantly and to ensure really fast network connections between it and the NBD hosting machines.
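    The assembly on the Linux "server" might look something like this (the hostnames, port, and device names are illustrative; each Windows PC is assumed to run an NBD server exporting a large file):

        import subprocess

        # Hypothetical end-user PCs exporting spare space over NBD.
        clients = ["pc01", "pc02", "pc03", "pc04"]

        # Attach each remote export to a local /dev/nbdN block device.
        for i, host in enumerate(clients):
            subprocess.run(["nbd-client", host, "10809", f"/dev/nbd{i}"], check=True)

        # Stripe RAID5 across the network block devices, then add a filesystem.
        devices = [f"/dev/nbd{i}" for i in range(len(clients))]
        subprocess.run(["mdadm", "--create", "/dev/md0", "--level=5",
                        f"--raid-devices={len(devices)}"] + devices, check=True)
        subprocess.run(["mkfs.ext3", "/dev/md0"], check=True)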
  • by Anonymous Coward on Saturday February 09, 2008 @04:03PM (#22362376)
    Yet the "necessary gun knowledge" in using it involves many things which do require you to understand the rudimentary "inner workings" of a gun.

    What happens if a gun jams? You will need to know how to clean it. What happens if you know the firing pin struck the cap but the round hasn't gone off? Open the chamber to clear the round and it might explode in your face. Sure, these things probably only happen 0.5% of the time but just *one* occasion of bad luck is enough to snuff your life out.

    You really do not have the luxury of skipping all this and picking up such knowledge only when a problem comes up. It takes more than just pulling the trigger to really use a gun.
  • Sun is working on it (Score:2, Interesting)

    by jfim ( 1167051 ) on Saturday February 09, 2008 @06:24PM (#22363636)
    Project Celeste [sun.com] is basically what the OP is talking about. It's a distributed filesystem with automatic replication that handles rogue nodes via voting, and it also exports the "filesystem" as CIFS. It's essentially a distributed object store on which a filesystem can be implemented. I saw a demo of it last year and was pretty surprised; it works quite well for a research project.
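    This isn't Celeste's actual API, but the replicate-and-vote idea in miniature (five in-process dicts stand in for storage nodes):

        import collections

        NODES = [dict() for _ in range(5)]  # stand-ins for five storage nodes
        REPLICAS = 3

        def put(key, value):
            # Write the value to REPLICAS nodes chosen from the key's hash.
            start = hash(key) % len(NODES)
            for i in range(REPLICAS):
                NODES[(start + i) % len(NODES)][key] = value

        def get(key):
            # Read every replica and return the majority answer, so a
            # single rogue or corrupted node cannot forge the result.
            start = hash(key) % len(NODES)
            votes = [NODES[(start + i) % len(NODES)].get(key)
                     for i in range(REPLICAS)]
            return collections.Counter(votes).most_common(1)[0][0]

        put("report.doc", b"contents")
        NODES[hash("report.doc") % len(NODES)]["report.doc"] = b"tampered"
        print(get("report.doc"))  # still b'contents': two of three replicas agree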
  • 9mm vs .45 (Score:3, Interesting)

    by Firethorn ( 177587 ) on Saturday February 09, 2008 @07:17PM (#22364112) Homepage Journal
    Don't forget that at those sizes, a .45 is nearly 30% larger in diameter and has far more mass. A 9mm will normally have a 124 grain bullet with a velocity of 1150 ft/s, for 364 foot-pounds of energy. A .45 can shoot 230 grain rounds at 900 ft/s for 414 ft-lbs of energy.
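    Those numbers drop straight out of the standard muzzle-energy arithmetic (weight in grains, 7000 grains to the pound, velocity in ft/s, g of about 32.17 ft/s^2):

        def muzzle_energy(grains: float, fps: float) -> float:
            # E = m * v^2 / 2, with grains converted to slugs via
            # 7000 gr/lb and g = 32.174 ft/s^2.
            return grains * fps**2 / (2 * 7000 * 32.174)

        print(round(muzzle_energy(124, 1150)))  # ~364 ft-lbs (9mm)
        print(round(muzzle_energy(230, 900)))   # ~414 ft-lbs (.45 ACP)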

    Despite all this, I think that when it comes down to the Army, it's mostly about ammunition selection. Troops are issued non-expanding FMJ ammunition, which leads to 9mm over-penetrating and under-performing. The 1911, chambered in .45, was designed for FMJ ammunition from the outset. The larger, slower .45 round will expend more of its energy in a body, causing more damage. A 9mm HP will out-stop a .45 FMJ, but US soldiers are forbidden from using expanding ammunition. A .45 HP will stop more often than a .45 FMJ, but the difference is nowhere near as large as the difference between a 9mm HP and FMJ.

    As for the rifle comment, I have to agree. Consider the 'poodle-shooter', the .223/5.56 round our military uses in most of its rifles: 1300 ft-lbs of energy in a 60-70 grain bullet traveling at over 3000 ft/s. That's sufficient velocity that the round will often fragment when it strikes a target.
  • Please don't (Score:5, Interesting)

    by mnmn ( 145599 ) on Saturday February 09, 2008 @08:00PM (#22364484) Homepage
    Please do not use the space for anything else. Do not try to actively use the space.

    The reason is the obscenely large amount of power required to use the space: getting at even a few gigabytes requires the whole machine to be running, including its CPU, which alone can't draw less than 21 watts.

    It's actually cheaper to buy a 1TB drive and use it elsewhere than to pay for the power on so many desktops (or worse, servers), even when the desktops are in use by active users anyway.
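    A back-of-the-envelope comparison, with the wattage and electricity price both being assumptions:

        # Hypothetical numbers: 100 desktops kept on around the clock,
        # drawing an extra 50 W each, at $0.10 per kWh.
        machines, extra_watts, price_per_kwh = 100, 50, 0.10

        yearly_kwh = machines * extra_watts / 1000 * 24 * 365
        print(f"${yearly_kwh * price_per_kwh:,.0f} per year")  # ~$4,380

        # A dedicated 1TB drive costs a couple hundred dollars once,
        # so it pays for itself within weeks.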
  • by owenomalley ( 103963 ) <omalley@@@apache...org> on Sunday February 10, 2008 @01:06AM (#22366850) Homepage
    You could put a Hadoop [apache.org] Distributed File System (HDFS) on them. HDFS allows you to use the storage as a single file system that is stable and reliable. We [yahoo.com] have multiple 2000 node clusters with petabytes of user data on them. Because the blocks are each replicated to 3 hosts, if a node goes down, your data on that node is not lost.
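    Day-to-day use goes through the standard hadoop command-line tools; a minimal sketch (the paths and file name are illustrative):

        import subprocess

        # Copy a local file into HDFS; the framework splits it into blocks
        # and replicates each block to (by default) three datanodes.
        subprocess.run(["hadoop", "fs", "-mkdir", "/backups"], check=True)
        subprocess.run(["hadoop", "fs", "-put", "big-dataset.tar", "/backups/"],
                       check=True)

        # List the directory; the replication factor shows in the listing.
        subprocess.run(["hadoop", "fs", "-ls", "/backups"], check=True)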
