Hardware

Consoldated Network Storage?

bigstupid asks: "Is there any way to better utilize storage space on your network? I have a home network with about nine permanently attached PCs. A few of these are older PII-300s with smaller hard drives (3-10GB). What I want to do is consolidate as much of the network storage as possible. That is: instead of 2.4GB here, 4.6GB there, 5GB hither, 5GB thither, and 6GB yon, I would like any computer I designate a 'client' to see and use this storage space as one large (in the case above, 23GB) volume. I know I can do this within a machine with logical volumes or RAID, but is there a piece of software - client or server side - that will do this on Linux or Windows?"
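
For reference, the single-machine version the asker alludes to is only a few commands with Linux LVM; a minimal sketch, with hypothetical partition names:

    # pool three spare partitions into one volume group
    pvcreate /dev/hda3 /dev/hdb1 /dev/hdc1
    vgcreate bigvg /dev/hda3 /dev/hdb1 /dev/hdc1
    # carve one large logical volume out of the pool, format, and mount it
    lvcreate -L 23G -n bigvol bigvg
    mke2fs /dev/bigvg/bigvol
    mount /dev/bigvg/bigvol /mnt/big

The question is how to get the same effect when the disks live in nine different machines.
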
  • You could do this on Windows 2000 servers, with DFS (Distributed File System). You can see some info about it here:

    http://www.microsoft.com/windows2000/techinfo/howitworks/fileandprint/dfsnew.asp
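
    For flavor, linking existing shares into one DFS namespace is a couple of commands with the dfscmd tool that shipped with Windows 2000 Server; the root and share names here are hypothetical, and note that DFS unifies the namespace rather than pooling the space into one volume:

        REM add each machine's share as a folder under the DFS root
        dfscmd /map \\mydomain\storage\disk1 \\pc1\spare
        dfscmd /map \\mydomain\storage\disk2 \\pc2\spare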
  • NFS And samba (Score:3, Insightful)

    by norwoodites ( 226775 ) <pinskia AT gmail DOT com> on Tuesday November 26, 2002 @08:07PM (#4763706) Journal
    Use NFS and re-export it using Samba.
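
    A minimal sketch of that approach, with made-up hostnames and paths (one box gathers the NFS exports and re-shares them over SMB):

        # /etc/exports on each Linux box with a spare disk
        /spare  server(rw,no_root_squash)

        # on the gathering server: mount each export under one tree
        mount -t nfs pc1:/spare /export/pool/pc1
        mount -t nfs pc2:/spare /export/pool/pc2

        # smb.conf on the same server: re-export the tree to the Windows boxes
        [pool]
            path = /export/pool
            read only = no

    The clients see a single share, but the space is still per-disk underneath; no one file can span two machines.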
  • Yes (Score:5, Informative)

    by I Am The Owl ( 531076 ) on Tuesday November 26, 2002 @08:07PM (#4763707) Homepage Journal
    Look at OpenAFS [openafs.org].
    • Re:Yes (Score:2, Interesting)

      by KILNA ( 536949 )

      Is it just me or does OpenAFS seem kludgy (at least on Windows)? In technical terms it did pretty much what I wanted, but the Windows 3.1-ishness of the install gave me an uneasy feeling, and the fact that it is very much mired in its own terminology is going to give it a high barrier to entry. Not to mention that my machine also felt significantly slower after installing it. Ideally I'd like something that just works, but it seems like the goals of distributing the data widely and making sure the data is available are at odds in AFS, making it slower and harder to administer.

      Bah, don't mind me, I'm just a whining slashdotter. I'd probably bitch about the taxes if I won the lottery.

      • "I'd probably bitch about the taxes if I won the lottery."

        I would too. You pay tax on the income you use to pay for lottery tickets (and maybe even sales tax?). And then you pay again to support a government-run racket. Man, I bet politicians love lotteries. It's just like free money.
        • I agree that there can be a valid argument there, so I'll rephrase my statement: I'd probably bitch about taxes if I won the lottery on a free ticket. Though more wordy, it's actually closer to the Slashdot ideal. :)
  • Easy... (Score:3, Funny)

    by *xpenguin* ( 306001 ) on Tuesday November 26, 2002 @08:14PM (#4763776)
    Just go buy yourself a 23 gigabyte hard drive.
  • by kernelistic ( 160323 ) on Tuesday November 26, 2002 @08:17PM (#4763804)
    ... such as SMB, Coda, NFS, AppleTalk and AFS work very nicely. We use a centralized data repository which is backed up daily. Users get access to their data from their stations, finding files takes less time, and backups are a breeze.

    Modern IDE drives will handle the traffic that's generated by a decent number of client PCs. This lets you place a couple of 200GB drives in a machine which will act as a server, and not have to worry about scouring 9 or 10 PCs to find your work! (Note: some older BIOSes are limited to 45GB, so you might want to check for BIOS upgrades if you run into this issue.)
    • ---snip
      Modern IDE drives will handle the traffic that's generated by a decent number of client PCs. This lets you place a couple of 200GB

      ---snip

      Have you completely missed the point? He doesn't want a single-server solution; any idiot can do that. He wants a distributed file system, which would dynamically and transparently store data in different places.

      I do that right now at one of my clients: with NT 4.0/Windows 2000 DFS, I have over 1.5GB worth of data accessible through a single entry point. He wants something that will interoperate with unix-boxen, which makes Microsoft's implementation of DFS not so great (it's a pain just to get it to work between NT4 and NT5).

      A quick Google search for "distributed file system" quickly turns up many other implementations, which would probably work better for him.
  • by Anonymous Coward
    When one of your drives crashes, which files do you want to lose?

    Just the one drive, or all files on all drives?
  • iSCSI (Score:3, Interesting)

    by benjamindees ( 441808 ) on Tuesday November 26, 2002 @08:29PM (#4763884) Homepage
    You might try running RAID over something like iSCSI [uml.edu] (if it can be done), and re-exporting that filesystem from a central server.
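
    If the initiator really does expose each remote disk as a local block device, the RAID half is ordinary mdadm; a sketch, assuming the imported disks appear as /dev/sdb through /dev/sdd:

        # stripe the network disks into one device (RAID-0: no redundancy,
        # so one machine going down takes the whole array with it)
        mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
        mke2fs /dev/md0
        mount /dev/md0 /export/big   # then re-export via NFS or Samba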
  • I think that Mosix has something like that. I want to say it's called mosixfs, but it may well have a different name.
  • by stevef ( 5539 ) on Tuesday November 26, 2002 @09:28PM (#4764208)
    I haven't tried this, but it would be fun to play with.

    Steve
  • That is: instead of 2.4GB here, 4.6GB there, 5GB hither, 5GB thither, and 6GB yon, I would like any computer I designate a 'client' to see and use this storage space as one large (in the case above, 23GB) volume.

    You see, instead of 2.4 GB hyah, 4.6 GB nyah, 5 GB over hyah, and 6 GB over nyah...

  • I could be wrong, but I seem to recall that if you're careful, you can share a chain of SCSI devices between multiple machines...

    Maybe making a SCSI RAID, or an IDE RAID connected to one of those IDE-to-SCSI adaptor thingies (Promise sells such things; they even sell ready-to-go SCSI-connectable RAID cases... just fill with IDE (!!!) hard drives!) and plugging it into all yer boxes would work? ^_^
  • Linux NOW (Score:2, Insightful)

    by Shwag ( 20142 )
    Ian Murdock, of debIan and Progeny fame, had a project called LinuxNOW which let workstations on the network share hard drive space as if it were a single drive, and also share processor cycles, as in clustering.

    Here is an older article on LinuxNOW.
    http://www.linuxworld.com/linuxworld/lw-2001-03/lw-03-murdock.html
  • What I did: (Score:3, Insightful)

    by penguin_punk ( 66721 ) on Tuesday November 26, 2002 @10:06PM (#4764429) Journal
    Here's my 2 cents:

    I was in pretty much the same boat as you, but I picked up a couple of new 60GB-ers. I thought about combining them using MS's DFS, but then the drives started failing on me one-by-one.

    I recommend you do the same. I mean, ask yourself: do I really want to trust my porn collection to a 6-year-old 2GB drive? NO!!

    Hardware's Cheap. Use Wisely.
    • It's theoretically possible to design a distributed filesystem that can handle the failure of one or more nodes.

      Kind of an interesting project, actually. I don't recall this being done before.
      • I agree.

        But would you waste your time doing that at home with five 2GB drives when you can buy three 40GB ones and call it a day? I must have been busy that week, so I just forked out the cash.

        Why would I spend time making a small redundant storage repository, parts of which I know will fail, when I can drop $50 on a single new drive whose size is greater than all the old ones combined? I think it's worth it. For my configuration, I have started to download (and eventually burn) a lot of movies, so I find the extra gigs extremely useful. BTW, it's also an all-IDE setup. I'm not up for fooling around with RAID at home. Cheap and Easy is my motto for home. Expensive and Sophisticated is what I tell/sell my clients, so THAT's where I want to be spending my time.
  • Be Careful (Score:2, Informative)

    Just be careful when you do this. If one of your 2.4GB HDs craps out, you've lost the whole array (that means all your data/porn)!!
  • by Froze ( 398171 ) on Tuesday November 26, 2002 @11:20PM (#4764754)
    Before you mod me down, this is serious :-)

    I am the admin at our school for a newly installed 32-node Linux Beowulf cluster. Each node has a spare 20GB partition that is currently doing *nothing*. I would simply love to find a filesystem solution that can handle striping or mirroring for a nice 32*20/x GB of filespace (where x is the amount of redundancy, to be tuned for optimal reliability).

    If anyone has the solution, even if it requires work, I am all ears.
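
    One untested shape such a solution could take: export each node's spare partition with nbd-server, then build a RAID-5 set over the imported devices on a head node, which makes x come out to 32/31 (one node's worth of parity). Ports and device names are hypothetical, shown here for three nodes:

        # on each compute node
        nbd-server 2000 /dev/hda3
        # on the head node: import each partition, then build the array
        nbd-client node01 2000 /dev/nbd0
        nbd-client node02 2000 /dev/nbd1
        nbd-client node03 2000 /dev/nbd2
        # scale the device list up to all 32 nodes in practice
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nbd0 /dev/nbd1 /dev/nbd2

    RAID-5 only survives one failed device at a time, though, so across 32 machines mirrored pairs may be the safer tuning.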
  • How apropos (Score:3, Funny)

    by tswinzig ( 210999 ) on Wednesday November 27, 2002 @01:09AM (#4765188) Journal
    They consolidated the word consolidated.
  • You know, they make 200 gig drives now... I hope I'm not sounding elitist when I offer the suggestion that you purchase something larger, shove it into one of them, and share it via NFS, Samba, and any other protocols your computers speak.

    - A.P.
  • by Ayanami Rei ( 621112 ) <rayanami&gmail,com> on Wednesday November 27, 2002 @02:21AM (#4765434) Journal
    The first thing to do is expose all of your drives in the "same format". On the Windows machines, share the extra disks as normal. On Linux, use NBD (network block device) or iSCSI to expose the disks as raw partitions accessible over Ethernet to the other Linux boxen.

    Now, on a special Linux machine (the sucka, or Serialization and Uniformity Cache Kludge Administrator), mount all these exposed drives via "mount.smbfs" for the Windows boxen. Use loopback files the size of each Windows disk to create virtual block devices backed by said remote winboxen. Use md or LVM to stitch the exported Linux box disks and the loopback-over-smbfs devices together into a software RAID disk.

    Finally, format this UBER meta-disk with your favorite filesystem. Expose it to Windows via Samba, and to Linux via NFS.

    Of course, this whole setup serializes all operations through one machine. Everything takes an extra round trip over the network. And unless you use RAID mirroring, if a machine goes down, so does the whole disk!
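
    A compressed sketch of the above, with every hostname, port, and size made up:

        # Windows disk: mount the share, then back a loopback device with a big file on it
        mount -t smbfs -o guest //winbox1/spare /mnt/winbox1
        dd if=/dev/zero of=/mnt/winbox1/blob.img bs=1M count=4000
        losetup /dev/loop0 /mnt/winbox1/blob.img

        # Linux disk: import the raw partition over NBD
        nbd-client linuxbox1 2000 /dev/nbd0

        # stitch them into one device, format it, and re-export
        mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/loop0 /dev/nbd0
        mke2fs /dev/md0
        mount /dev/md0 /export/uber   # share this via Samba and NFS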

    Other method, more complex:

    Check out the Parallel Virtual Filesystem [clemson.edu]. What you do is, for each spare Linux box that has a disk, run both the IO server and a client. One machine also has to pick up the slack of the metadata manager (no big deal...). For each Linux machine, you also have to pick and mount certain Windows disks (via mount.smbfs) and run IO server processes for each mounted volume. Finally, you have to run Samba on at least one of the Linux machines running PVFS to expose those files back to the Windows machines. If you can tweak the Samba source to use larger-than-normal block transfers, do so, because PVFS suffers when the transfers between nodes are too small.
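
    From memory of the PVFS 1.x docs (daemon and file names here may not be exact; treat them as assumptions), the moving parts look roughly like:

        # on the metadata manager node
        mgr
        # on every node contributing a disk
        iod
        # on each client: run pvfsd, describe the filesystem in /etc/pvfstab, then
        mount.pvfs mgrnode:/pvfs-meta /mnt/pvfs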

    Or you can use OpenAFS. Someone else mentioned it here. But it's not as much fun, and it is a big deal to set up if you haven't done it before.
  • Check out Hivecache [hivecache.com]. Their software supposedly uses all your free bits of disk space for online backups. Not exactly what you're after, but a pretty cool idea nonetheless.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...