Consolidated Network Storage?
bigstupid asks: "Is there any way to better utilize storage space on your network? I have a home network with about nine permanently attached PCs. A few of these are older PII-300s with smaller hard drives (3-10GB). What I want to do is consolidate as much of the network storage as possible. That is: instead of 2.4GB here, 4.6GB there, 5GB hither, 5GB thither, and 6GB yon, I would like any computer I designate a 'client' to see and use this storage space as one large (in the case above, 23GB) volume. I know I can do this within a machine with logical volumes or RAID, but is there a piece of software - client or server side - that will do this on Linux or Windows?"
Distributed File System (Score:2, Informative)
http://www.microsoft.com/windows2000/techinfo/h
NFS And samba (Score:3, Insightful)
Yes (Score:5, Informative)
Re:Yes (Score:2, Interesting)
Is it just me or does OpenAFS seem kludgy (at least on Windows)? In technical terms it did pretty much what I wanted, but the Windows 3.1-ishness of the install gave me an uneasy feeling, and the fact that it is very much mired in its own terminology gives it a high barrier to entry. Not to mention that my machine also felt significantly slower after installing it. Ideally I'd like something that just works, but it seems like the goals of distributing the data widely and keeping it available are at odds in AFS, making it slower and harder to administer.
Bah, don't mind me, I'm just a whining slashdotter. I'd probably bitch about the taxes if I won the lottery.
Re:Yes (Score:2)
I would too. You pay tax on the income you use to pay for lottery tickets (and maybe even sales tax?). And then you pay again to support a government run racket. Man, I bet politicians love lotteries. It's just like free money.
Re:Yes (Score:1)
Easy... (Score:3, Funny)
Re:Easy... (Score:1)
Re:Easy... (Score:2, Funny)
Re:Easy... (Score:1)
- Chris
Network file systems... (Score:4, Informative)
Modern IDE drives will handle the traffic that's generated by a decent number of client PCs. This lets you place a couple of 200GB drives in a machine which will act as a server, and not have to worry about scouring 9 or 10 PCs to find your work! (Note: some older BIOSes are limited to around 33.8GB, so you might want to check for BIOS upgrades if you run into this issue.)
Re:Network file systems... (Score:2)
Modern IDE drives will handle the traffic that's generated by a decent number of client PCs. This lets you place a couple of 200GB
---snip
Have you completely missed the point? He doesn't want a single-server solution; any idiot can do that. He wants a distributed file system, which would dynamically and transparently store data in different places.
I do that right now at one of my clients with NT 4.0/Windows 2000 DFS: I have over 1.5GB worth of data accessible through a single entry point. He wants something that will interoperate with unix-boxen, which makes Microsoft's implementation of DFS not so great (it's a pain just to get it to work between NT4 and NT5).
A quick search of Google for "distributed file system" quickly shows many other implementations, which would probably work better for him.
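For the NT side, the stock dfscmd tool can graft existing shares into a single DFS root. A rough sketch (the host and share names here are made up):

```shell
REM Run on the machine hosting the DFS root (FILESRV is hypothetical).
REM Each old box's share gets mapped under one entry point:
dfscmd /map \\FILESRV\storage\box1 \\winbox1\spare "winbox1's 2.4GB drive"
dfscmd /map \\FILESRV\storage\box2 \\winbox2\spare "winbox2's 4.6GB drive"
REM Clients then browse \\FILESRV\storage as a single tree.
```

Note this only gives you one namespace, not one volume: each link still has the size of the share behind it.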
Re:Network file systems... (Score:2)
There are many solutions to this problem, trying to build a single sharepoint (what the original poster had suggested as an "answer") is a solution to a _different_ problem.
Single point of failure. (Score:1, Insightful)
Just the one drive, or all files on all drives?
Re:Single point of failure. (Score:1)
iSCSI (Score:3, Interesting)
Mosix FS (Score:2)
network block device and LVM (Score:3, Informative)
Steve
Cartman as Network Storage Engineer (Score:2, Funny)
You see, instead of 2.4 GB hyah, 4.6 GB nyah, 5 GB over hyah, and 6 GB over nyah...
Old McDonald as Network Storage Engineer (Score:2)
here a gig, there a gig
everywhere a gig-gig
Old McAdmin had a farm, 01 01 000000....
SCSI sharing? (Score:1)
Maybe making a SCSI RAID, or an IDE raid connected to one of those IDE-to-SCSI adaptor thingees (Promise sells such things; they even sell ready-to-go SCSI-connectable RAID cases.. just fill with IDE (!!!) hard drives!) and plugging it into all yer boxes would work? ^_^
Linux NOW (Score:2, Insightful)
Here is an older article on LinuxNOW.
http://www.linuxworld.com/linuxworld/l
What I did: (Score:3, Insightful)
I was in pretty much the same boat as you, but I picked up a couple of new 60Gb-ers. I thought about combining the old drives using MS's DFS, but then they started failing on me one-by-one.
I recommend you do the same. I mean ask yourself: Do I really want to trust my porn collection to a 6-year-old 2Gb drive? NO!!
Hardware's Cheap. Use Wisely.
Hard drive failure not fatal (Score:2)
Kind of an interesting project, actually. I don't recall this being done before.
Re:Hard drive failure not fatal (Score:2)
But would you waste your time and do that at home with five 2Gb drives when you can buy three 40Gb ones and call it a day? I must have been busy that week, so I just forked out the cash.
Why would I spend time making a small redundant storage repository which I know parts of will fail, when I can drop $50 on a single new drive whose size is greater than all the old ones combined? I think it's worth it. For my configuration, I have started to download (and eventually burn) a lot of movies, so I find the extra Gigs extremely useful. BTW, it's also an all-IDE setup. I'm not up for fooling around with RAID at home. Cheap and Easy is my motto for home. Expensive and Sophisticated is what I tell/sell my clients, so THAT's where I want to be spending my time.
Be Careful (Score:2, Informative)
Imagine this on a beowulf (Score:3, Interesting)
I am the admin at our school for a newly installed 32-node linux beowulf. Each node has a spare 20GB partition that is currently doing *nothing*. I would simply love to find a filesystem solution that can handle striping or mirroring for a nice 32*20/x GB of filespace (where x is the amount of redundancy, to be tuned for optimal reliability).
If anyone has the solution, even if it requires work, I am all ears.
Re:Imagine this on a beowulf (Score:2)
Re:Imagine this on a beowulf (Score:2)
How many times must I repeat myself? (Score:2)
Re:Imagine this on a beowulf (Score:2)
Re:Imagine this on a beowulf (Score:5, Informative)
That's easy. Create a partition on each box and export it via NFS. Then plunk down a NetBSD box on the network and RAID the partitions with RAIDframe. Export *that* partition via NFS as well. Export it via Samba and even the little Windows boxes can play.
FreeBSD has RAIDframe as well, but the NetBSD version is marginally more robust and has worked over NFS as far back as '98.
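A rough sketch of that recipe on the NetBSD box, assuming each node's NFS export holds a backing file that gets wrapped in a vnd(4) pseudo-disk so RAIDframe sees a block device (all hostnames, paths, and the label serial number are made up):

```shell
# On each beowulf node, the spare partition is exported via NFS
# (in that node's /etc/exports, network numbers are illustrative):
#   /spare -network 192.168.1.0 -mask 255.255.255.0

# On the NetBSD aggregator: mount each export and configure a
# vnd(4) pseudo-disk on top of a backing file in it.
mount -t nfs node01:/spare /mnt/node01
vnconfig vnd0 /mnt/node01/backing.img
# ...repeat for node02..node32 (vnd1, vnd2, ...)

# /etc/raid0.conf - a RAID 5 set across the pseudo-disks:
#   START array
#   1 32 0              <- 1 row, 32 columns, 0 spares
#   START disks
#   /dev/vnd0a
#   /dev/vnd1a
#   ...
#   START layout
#   32 1 1 5            <- sectors/stripe-unit, SUs/parity-unit,
#                          SUs/reconstruction-unit, RAID level 5
#   START queue
#   fifo 100

raidctl -C /etc/raid0.conf raid0   # force-configure the set
raidctl -I 20021008 raid0          # write component labels
raidctl -iv raid0                  # initialize parity
newfs /dev/rraid0a                 # then re-export raid0a via NFS/Samba
```

With RAID 5 across 32 components you give up one node's worth of space and can lose any single node without losing the filesystem.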
Re:A silly question... (Score:1)
How apropos (Score:3, Funny)
Wow, 23 whole gigs? (Score:2)
- A.P.
Two pronged approach. (Score:3, Informative)
Now, on a special linux machine (the sucka, or Serialization and Uniformity Cache Kludge Administrator) mount all these exposed drives via "mount.smbfs" for the windows boxen. Use loopback files the size of each Windows disk to create virtual block devices backed by storage on said remote winboxen. Use md or LVM to stitch the exported linux box disks and loopback-over-smbfs devices together into a software RAID disk.
Finally, format this UBER meta disk via your favorite filesystem. Expose it to windows via samba, and linux via NFS.
Of course, this whole setup serializes all the operations through one machine. Every operation takes an extra round trip over the network. And unless you use RAID mirroring or parity, if a machine goes down, so does the whole disk!
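The sucka recipe might look something like this, assuming LVM on the loop devices (all hostnames, share names, sizes, and device numbers are made up):

```shell
# Mount a Windows box's exposed drive on the sucka:
mkdir -p /mnt/winbox1
mount -t smbfs //winbox1/spare /mnt/winbox1 -o guest

# Create a backing file roughly filling the share, and wrap it in a
# loop device so it looks like a local block device:
dd if=/dev/zero of=/mnt/winbox1/backing.img bs=1M count=2400
losetup /dev/loop0 /mnt/winbox1/backing.img
# ...repeat for the other winboxen (loop1, loop2, ...)

# Stitch the loop devices (plus any locally exported linux disks)
# together into one volume group:
pvcreate /dev/loop0 /dev/loop1
vgcreate sucka_vg /dev/loop0 /dev/loop1
lvcreate -L 6G -n bigvol sucka_vg

# Format the uber meta-disk, then re-export via samba and NFS:
mke2fs /dev/sucka_vg/bigvol
```

Plain LVM concatenation here has no redundancy at all; swap in md with a mirror or parity level if you want to survive a winbox going down.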
Other method, more complex:
Check out the Parallel Virtual Filesystem [clemson.edu]. For each spare linux box that has a disk, you run both the IO server and a client. One machine also has to pick up the slack of the metadata manager (no big deal...). Of course, for each linux machine, you have to mount the appropriate Windows disks (via mount.smb) and run IO server procs for each mounted volume. Finally, you have to run samba on at least one of the linux machines running PVFS to expose those files back to the Windows machines. If you can tweak the samba source to use larger than normal block transfers, do so, because PVFS suffers when the transfers between nodes are too small.
Or you can use OpenAFS. Someone else mentioned it here. But it's not as much fun, and it is a big deal to set up if you haven't done it before.
Hivecache (Score:1)