What is the Best Remote Filesystem? 72

GaelenBurns asks: "I've got a project that I'd like the Slashdot community's opinion on. We have two distant office buildings and a passel of Windows users that need to be able to access the files on either office's Debian server from either location through Samba shares. We tend to think that AFS would be the best choice for mounting a remote file system and keeping data synchronized, but we're having trouble finding documentation that coherently explains installing AFS. Furthermore, NFS doesn't seem like a good option, since I've read that it doesn't fail gracefully should the net connection ever drop. Others, such as Coda and InterMezzo, seem to be stuck in development, and therefore aren't sufficiently stable. I know tools for this must exist; please enlighten me."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Samba (Score:4, Interesting)

    by fluor2 ( 242824 ) on Thursday December 18, 2003 @10:14AM (#7753542)
    It looks to me that both AFS and NFS are kinda outdated. Samba 3 supports NTLMv2 and Kerberos-encrypted passwords. I like that.
  • by Hitch ( 1361 ) <hitch@nOSPAm.propheteer.org> on Thursday December 18, 2003 @10:18AM (#7753568) Homepage
    I've got developers that need to have a consistent home directory over several Unix and Windows boxes - we're using Samba *and* NFS - an ugly system at best. I'm currently in the situation where I can start over, more or less, so I'm looking at better options. Any suggestions are appreciated.
  • Simple : 9p (Score:3, Interesting)

    by DrSkwid ( 118965 ) on Thursday December 18, 2003 @10:47AM (#7753837) Journal

    9p [bell-labs.com]

  • by Halvard ( 102061 ) on Thursday December 18, 2003 @10:47AM (#7753839)

    Then you don't have to synchronize.

    If you haven't already installed SSH on a machine in both locations, do so.

    Follow the "Setting up Samba over SSH Tunnel mini-HOWTO" by Mark Williamson [ibiblio.org]. Then you can use the server on each side to share out the files on the other side and not even change anything about how your users do anything. It's very simple to set up. It's 3 steps on each side, plus adding it into a login script or mapping on the individual machines. So you should be ready in 5 minutes.
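    The tunnel the mini-HOWTO describes boils down to SSH port forwarding. A minimal sketch, assuming a hypothetical account `tunneluser` and hostname `remote-office.example.com` (note that forwarding the privileged port 139 requires root on the local end; modern SMB setups would forward 445 instead):

    ```shell
    # Forward local port 139 to the remote office's Samba server over SSH.
    # -f: background after auth, -N: no remote command, just the tunnel.
    ssh -f -N -L 139:localhost:139 tunneluser@remote-office.example.com

    # The remote share is now reachable at the local end of the tunnel, e.g.:
    #   smbclient //localhost/share        (from a Unix box)
    #   net use X: \\localhost\share       (from a Windows client)
    ```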

    If you still want to synchronize, there are tons of tools to do that including Unison [upenn.edu].
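    For the Unison route, a sketch of a two-way sync over SSH (paths and hostname are placeholders, not from the original post):

    ```shell
    # Reconcile the share in both directions between the two offices.
    # -batch: no interactive prompts, -times: propagate modification times.
    unison /srv/samba/share \
           ssh://remote-office.example.com//srv/samba/share \
           -batch -times
    ```

    Unlike rsync, Unison detects updates on both replicas and flags genuine conflicts instead of silently overwriting one side.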

  • by pjl5602 ( 150416 ) on Thursday December 18, 2003 @11:51AM (#7754476) Homepage
    After you've got it down once it's old hat after that.

    I remember the first time that I did a 'vos move' on an AFS server and the volume moved from one server to the other without any downtime for the users. Talk about an admin's dream! :-)
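    For readers who haven't seen it, the live migration described above looks roughly like this (volume, server, and partition names are made-up placeholders):

    ```shell
    # Move an AFS volume between file servers while clients stay attached;
    # the volume location database is updated transparently.
    vos move -id home.alice \
             -fromserver afs1 -frompartition /vicepa \
             -toserver   afs2 -topartition   /vicepa
    ```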

  • by GaelenBurns ( 716462 ) <gaelenb@assuranc ... es.com minus bsd> on Thursday December 18, 2003 @11:52AM (#7754482) Homepage Journal
    The infrastructure is the hard part; with commodity packages like Samba, client support is a much simpler, separate issue.

    Exactly right. The client connections will all be done via samba... it's the infrastructure I'm asking about.

    That being said, NFS4 seems to still be in development and we need something that is finished and ready for use now. Storage Tank sounds nice... but something tells me it's not free software. Free is good. Finally, AFS is the glory... but the documentation is horrible. We can find a number of how-tos, but they're all either out of date or useless. Have I missed one?
  • Re:Simple : 9p (Score:3, Interesting)

    by DrSkwid ( 118965 ) on Thursday December 18, 2003 @12:39PM (#7754989) Journal
    9p is a protocol, not an OS; it is OS-agnostic.

    I have a python 9p server daemon and clients.

    The ask was "what's the best remote file system"; 9p is the answer.
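    Since the commenter mentions a Python 9P server, here is a small illustrative sketch of the 9P2000 wire format for the initial Tversion handshake message (`size[4] type[1] tag[2] msize[4] version[s]`, all little-endian, strings length-prefixed). The function names are my own; only the framing follows the protocol:

    ```python
    import struct

    TVERSION = 100        # 9P2000 message type for Tversion
    NOTAG = 0xFFFF        # version negotiation uses the special NOTAG tag

    def pack_tversion(msize: int, version: str) -> bytes:
        """Build a Tversion message: size[4] type[1] tag[2] msize[4] version[s]."""
        v = version.encode()
        body = struct.pack("<BHIH", TVERSION, NOTAG, msize, len(v)) + v
        # The leading size field counts the whole message, itself included.
        return struct.pack("<I", 4 + len(body)) + body

    def unpack_tversion(msg: bytes):
        """Parse the fields back out of a Tversion message."""
        size, mtype, tag, msize, slen = struct.unpack_from("<IBHIH", msg, 0)
        version = msg[13:13 + slen].decode()
        return size, mtype, tag, msize, version
    ```

    A real client would send this as the first message on the connection and expect an Rversion (type 101) echoing the agreed msize and version string.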

  • by adam872 ( 652411 ) on Thursday December 18, 2003 @06:50PM (#7758502)
    ...then I would consider building a SAN with replication. High-end storage solutions using HDS and/or EMC gear fix this problem by enabling remote block-for-block copy of data between identical arrays. Veritas also makes a product called Volume Replicator that does effectively the same thing. By the sounds of it, this would be out of your price range, but it would do the job (we have a 15TB data centre mirrored using EMC's SRDF and another one using Volume Replicator).

    In terms of free ways to do it, it will really depend on how synchronized the two offices need to be. If it's instantaneous, then you will need to have one master server and both sites pointing to it. Others have mentioned AFS, but that is also non-trivial. If the sync doesn't have to be instantaneous, then perhaps a regular rsync tunneled through SSH would do the trick. CVS may also help, depending on the data you have.
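    The rsync-over-SSH approach mentioned above might look like this, run periodically from cron (paths, user, and hostname are placeholders):

    ```shell
    # One-way mirror from the master office to the second office.
    # -a: preserve permissions/ownership/times, -z: compress in transit,
    # --delete: remove files at the destination that were deleted at the source.
    rsync -az --delete -e ssh \
          /srv/samba/share/ \
          backup@remote-office.example.com:/srv/samba/share/
    ```

    Note this is one-way: if users at both offices write to the same tree between runs, the next sync will clobber one side, which is why the comment distinguishes it from a single-master setup.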
  • Re:AFS (Score:3, Interesting)

    by wik ( 10258 ) on Thursday December 18, 2003 @09:10PM (#7759429) Homepage Journal
    afsd now refuses to start unless the cache directory is owned by root and chmod 600. As far as I know, the cache is still not encrypted, but if you can't trust root on the system, then you have bigger problems.

    AFS is still nasty if you lose contact with the servers. That definitely will be a problem if /usr/local is remote. I have yet to see a network file system that can gracefully handle this situation.
