
What is the Best Remote Filesystem?

GaelenBurns asks: "I've got a project that I'd like the Slashdot community's opinion on. We have two distant office buildings and a passel of Windows users that need to be able to access the files on either office's Debian server, from either location, through Samba shares. We tend to think that AFS would be the best choice for mounting a remote filesystem and keeping data synchronized, but we're having trouble finding documentation that coherently explains installing AFS. Furthermore, NFS doesn't seem like a good option, since I've read that it doesn't fail gracefully should the net connection ever drop. Others, such as Coda and InterMezzo, seem to be stuck in development, and therefore aren't sufficiently stable. I know tools for this must exist; please enlighten me."
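For the sake of discussion, a Samba export of the sort described is just a share stanza in smb.conf; something along these lines, with the share name, path, and group invented for illustration, is the shape of what each office's server would be exporting:

    # /etc/samba/smb.conf on each Debian server (illustrative names only)
    [projects]
        comment     = Shared project files
        path        = /srv/projects
        browseable  = yes
        read only   = no
        valid users = @staff
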
This discussion has been archived. No new comments can be posted.

  • by 4of12 ( 97621 ) on Thursday December 18, 2003 @10:35AM (#7753726) Homepage Journal

    I'm sorry I can't address your question about good remote filesystems in the face of an unreliable network. My network has been relatively reliable, so that's been a decreasing concern for me. Perhaps network reliability will be less of a concern for you, too, in the future.

    Lately, what I've been looking for is a remote filesystem that provides performance, security, and flexibility, where flexibility means being able to log into someone else's desktop machine and easily get my home directory mounted, whether it lives on a big server that's up 24x7 or on my own desktop.

    Some have dabbled with DCE/DFS [lemson.com], but I've heard it's slowly dying, ponderous to set up, and that its performance suffers.

    SFS [nec.com] looks intriguing, but I haven't heard pro or con about its performance. It appears to be secure and flexible.

    NFS is an old friend and, yes, if the network or the server dies, a lot of local sessions will hang interminably with 'NFS server not responding'. But this doesn't happen as much as it did 5 years ago (and see the mount-option sketch at the end of this comment for one way to soften the hangs).

    Right now we're running NFS v3, but the new NFSv4 [nfsv4.org] looks like it has a better security model.

    Finally (and you shouldn't even think about this if network reliability is an issue), a simple block service like iSCSI [digit-life.com] looks promising as a way of moving from desktop to desktop and getting the same home directory no matter where you are. What's more, you could conceivably even get your own flavor of OS booting, be it Red Hat 9, Win2K, XP, Gentoo, etc. I don't know about its security, and it's heavily dependent on a reliable, high-performance network, but it looks like a good way to get the most storage for your dollar (NAS instead of SAN).
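
    For what it's worth, on the NFS hangs: a soft mount at least turns the endless hang into an I/O error the application can see (at some risk to writes in flight). Roughly, in /etc/fstab, with the server name and paths invented:

        # 'soft' reports an I/O error after 'retrans' failed retries instead of
        # retrying forever; 'intr' lets a hung access be interrupted; 'bg' retries
        # the mount in the background if the server is down at boot; 'timeo' is
        # the initial timeout in tenths of a second.
        fileserver:/export/home   /home   nfs   soft,intr,bg,timeo=30,retrans=3   0 0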

  • by David McBride ( 183571 ) <david+slashdot&dwm,me,uk> on Thursday December 18, 2003 @10:44AM (#7753809) Homepage
    The way we do it is that we have some underlying file store running on Unix machines. At the moment we've got a couple of Sun machines with large RAID arrays.

    Then, to provide access to clients, we use Samba as a bridge to the Windows desktops and NFS for trusted linux clients; untrusted hosts can use SFTP or, if they just need read access, HTTP.

    Keeping multiple storage nodes on multiple sites synchronized is a SAN problem, not a client-access problem. NFS just doesn't provide multiple-node functionality. NFSv4 (link [nfsv4.org], link [umich.edu]) may have some interesting features that could help; AFS [openafs.org] was designed with multiple sites in mind, does intelligent caching, and has other useful features over NFS, but it does have some limitations; and then there are things like IBM's Storage Tank [ibm.com], which I haven't had a chance to look at properly yet.

    Bottom line: if you have a flexible SAN infrastructure, you can use bridging nodes to provide access to the SAN tailored to whatever your clients require. The infrastructure is the hard part; with commodity packages like Samba, client support is a much simpler, separate issue.
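
    As a very rough sketch, a single bridge node exporting the same tree both ways needs little more than this (hostnames, paths, and the group name are made up):

        # /etc/exports -- NFS for the trusted Linux clients
        /srv/store    trusted.example.ac.uk(rw,sync,root_squash)

        # smb.conf -- the same tree bridged out to the Windows desktops
        [store]
            path        = /srv/store
            read only   = no
            valid users = @staff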
  • Re:Samba (Score:3, Insightful)

    by passthecrackpipe ( 598773 ) * <passthecrackpipe AT hotmail DOT com> on Thursday December 18, 2003 @11:29AM (#7754251)
    Samba doesn't feature disconnected mode, and given that the article discusses AFS, InterMezzo and Coda, all of which support disconnected mode natively, I'd guess that would be a requirement.
  • by scum-o ( 3946 ) <bigwebb.gmail@com> on Thursday December 18, 2003 @11:42AM (#7754386) Homepage Journal
    You should be using a VPN if you have two offices and two firewalls (unless your Debian machines ARE your firewalls), and then NFS or Samba would be fine. However, machines will still lock up or be slow if the Internet gets slow or you drop a connection from one site to the other.
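
    A dirt-simple site-to-site tunnel, for instance OpenVPN in static-key mode, is only a few lines per side; everything below is a placeholder:

        # /etc/openvpn/office-a.conf -- office B mirrors this, with 'remote'
        # pointing back at office A and the two ifconfig addresses swapped.
        remote office-b.example.com
        dev tun
        # local and remote ends of the point-to-point tunnel
        ifconfig 10.8.0.1 10.8.0.2
        # same pre-shared key file on both ends
        secret /etc/openvpn/static.key
        # ping the peer so a dead link is noticed instead of hanging
        keepalive 10 60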
  • Not quite (Score:5, Insightful)

    by wowbagger ( 69688 ) on Thursday December 18, 2003 @12:38PM (#7754980) Homepage Journal
    SSH port forwarding isn't "TCP over TCP": the SSH client isn't simply sending the TCP packets over the wire, it is sending their contents.

    Suppose we have 2 computers, A and B, connected via SSH, and forwarding some service. A sends a block of data to B.

    The sequence is NOT:
    A packages data into a TCP packet.
    SSH encrypts the packet and packages it into another TCP packet.
    B receives the SSH packet and acks it.
    B decrypts the packet.
    B acks that packet.

    The sequence IS:
    A packages data into TCP packet
    SSH receives and acks packet.
    SSH encrypts PAYLOAD of TCP packet
    SSH sends packet
    B receives SSH packet and acks it
    B extracts data.
    B packages data into local TCP packet, sends it, acks it locally.

    So you don't get into the cascade failure mode for TCP over TCP.

    Now, if you use your SSH connection to forward PPP data over the wire - THEN you are getting into TCP over TCP because the SSH session is actually forwarding the PPP packets.
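
    Concretely, the first command below is plain port forwarding, where only the payload rides inside SSH's own TCP stream; the commented-out pppd trick is the kind of thing that really does stack TCP on TCP (hostnames are placeholders):

        # Forward a local port to the remote office's Samba server; each forwarded
        # connection's payload is re-sent inside SSH's TCP stream, nothing more.
        ssh -N -L 8139:fileserver.internal:139 user@gateway.example.com

        # The classic pppd-over-ssh "poor man's VPN", by contrast, wraps whole PPP
        # frames (carrying their own TCP) inside SSH's TCP stream, so two layers of
        # retransmission can fight each other on a lossy link:
        #   pppd updetach noauth pty \
        #       "ssh -t user@gateway.example.com pppd notty noauth"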
  • Wrong problem? (Score:5, Insightful)

    by fm6 ( 162816 ) on Thursday December 18, 2003 @02:22PM (#7755965) Homepage Journal
    AFS would be awesome... you see, sometimes these two offices need to work on the same files from both locations... not simultaneously, but sometimes consecutively. In those cases, it'd be great to have a setup that locally caches the file on the slave server, but will automatically serve the most recent version of the file, even if it has since been edited on the master server. With AFS, all of that is taken care of by the server, I believe.
    So far, you've said nothing about what's in these files and how they are being modified. That's not a secondary question. In fact, it may make your whole search for the right filesystem irrelevant.

    You're assuming that a remote filesystem is the only way to share files. But it's only the most common and simplest one. When you start talking about replication and version control (which you are, even though you don't use the terms), you need to consider a technology that directly supports those features. There are version control systems, databases, and content management systems. Which is right for you? Without knowing more about the data you're dealing with, it's impossible to say.

  • Re:Samba (Score:4, Insightful)

    by Jahf ( 21968 ) on Thursday December 18, 2003 @05:01PM (#7757509) Journal
    I thought that NFSv4 was also supposed to support local caching of files?

    I REALLY wanted to use InterMezzo for my home setup, where I have a central server and some nomadic (as in all over the country, not just around the house) laptops, but after trying it and participating briefly in the mailing list, I agree with the poster: it is just too stuck in development. The version of InterMezzo that most people get with their distros is often the older one, which isn't even compatible anymore.

    AFS was too much for my personal needs. I think for now I'll just be doing manual syncs using one of the various non-filesystem sync tools, but I really would like to see something like Coda or InterMezzo fully mature to an end-user, I-feel-safe-with-my-data level.
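
    A manual sync of that sort can be as simple as rsync over ssh from a cron job; roughly, with the host and paths invented:

        # Pull the server's copy down to the laptop: -a preserves permissions and
        # times, -z compresses over the wire, --delete mirrors removals too.
        rsync -az --delete -e ssh homeserver.example.com:/srv/shared/ ~/shared/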
