
What is the Best Remote Filesystem?

GaelenBurns asks: "I've got a project that I'd like the Slashdot community's opinion on. We have two distant office buildings and a passel of Windows users who need to be able to access the files on either office's Debian server from either location through Samba shares. We tend to think that AFS would be the best choice for mounting a remote file system and keeping data synchronized, but we're having trouble finding documentation that coherently explains installing AFS. Furthermore, NFS doesn't seem like a good option, since I've read that it doesn't fail gracefully should the net connection ever drop. Others, such as Coda and InterMezzo, seem to be stuck in development, and therefore aren't sufficiently stable. I know tools for this must exist; please enlighten me."
  • drbd (Score:5, Informative)

    by JimmyGulp ( 60100 ) on Thursday December 18, 2003 @10:26AM (#7753643) Homepage
    What about drbd? It's a mirroring thing, like RAID 1, over a network. This way, the data is synchronised, and all you have to do is mount/share the data from the nearest server by whichever means you want. Try this: http://drbd.cubit.at/ [cubit.at]

    I think it can manage to re-sync everything when the network line comes back up, but I'm not sure.
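
    For illustration, a two-node resource definition in /etc/drbd.conf looks roughly like this (hostnames, devices, and addresses here are made up, and the exact syntax varies between drbd versions):

        resource r0 {
          protocol C;                 # fully synchronous replication
          on office1 {
            device    /dev/drbd0;     # the replicated block device
            disk      /dev/sda7;      # local backing partition
            address   10.1.0.1:7788;  # replication link to the peer
            meta-disk internal;
          }
          on office2 {
            device    /dev/drbd0;
            disk      /dev/sda7;
            address   10.2.0.1:7788;
            meta-disk internal;
          }
        }

    You'd then make a filesystem on /dev/drbd0 on the primary node and export it with Samba as usual.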
  • security (Score:2, Informative)

    by 1isp_hax0r ( 725178 ) on Thursday December 18, 2003 @10:43AM (#7753800)
    Since the office buildings are distant, chances are that the connection between them is untrusted. Don't forget to send data through secure tunnels (e.g. an ssh [openssh.org] tunnel).
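
    As a sketch, a simple port-forwarding tunnel with plain OpenSSH would look like this (hostnames and the local port are made up):

        # forward local port 1139 to the remote file server's SMB port;
        # -f backgrounds ssh after authentication, -N runs no remote command
        ssh -f -N -L 1139:fileserver:139 admin@remote-office.example.com

    Clients then connect to port 1139 on the tunnel endpoint instead of talking to the remote server directly.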
  • AFS is what you want (Score:5, Informative)

    by LoneRanger ( 81227 ) <<jboyens> <at> <fooninja.org>> on Thursday December 18, 2003 @11:03AM (#7753954) Journal
    Frankly, AFS is what you want and what you need. I used to work at a site with over 26,000 AFS users, and it was a magical system. It is hard to set up, I'll grant you that, but only the first time; after you've done it once, it's old hat.

    My biggest issue when I was setting it up was Kerberos integration; it can be tricky, but the guys on the OpenAFS mailing lists are incredibly nice and knowledgeable. Another issue is that daemons that like to write to user home dirs won't work very well unless you find a way to have them get an AFS token or Kerberos ticket.
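
    For reference, "getting a token" boils down to something like this sketch (the username is made up; which commands you use depends on whether the cell runs the old kaserver or a Kerberos 5 KDC):

        klog someuser     # kaserver-era: authenticate and get an AFS token
        # or, against a Kerberos 5 KDC:
        kinit someuser    # obtain a Kerberos ticket
        aklog             # convert it into an AFS token for the local cell
        tokens            # list the tokens you currently hold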

    If I were you, I would SERIOUSLY consider AFS. Don't listen to those who would say it's old and outdated, because it's not. OpenAFS is being actively developed and new features are being added all the time.

    Feel free to email me if you want and I'll discuss the advantages/disadvantages further or help you get resources to set up your AFS system.
  • How about Lustre? (Score:5, Informative)

    by Anonymous Coward on Thursday December 18, 2003 @11:16AM (#7754103)
    Lustre [lustre.org] is something we're looking at rolling out for user home directories, though a few labs already have 100TB+ file systems using it. You get redundant servers at all levels (which deals with the synchronization problems), and best of all, you can stripe all your existing disks to create one logical disk. Think LVM for network-connected machines. It's pretty fast too.
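
    For a rough idea of the client side (the server and filesystem names are made up, and this is the mount syntax of later Lustre releases; early ones used the lmc/lconf configuration tools instead):

        # mount the Lustre filesystem "homefs" served by node mds1
        mount -t lustre mds1@tcp:/homefs /mnt/home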
  • by CaraCalla ( 219718 ) on Thursday December 18, 2003 @11:32AM (#7754286)
    I advise against Linux Kernel-Samba, at least if you want your clients (be they workstations or servers) to have some uptime. After some days, possibly weeks, it randomly stops working, and all programs holding open file descriptors on the Samba share hang. If you kill (-9) them, or the smbmount process, they go zombie. Any other program that tries to access the former mount point immediately goes zombie as well (your shell checking what's wrong, updatedb, ...). After several more days I have seen those zombie processes disappear again, though not always.

    If you reboot daily anyway there shouldn't be any problem.

    All in all not a satisfactory situation.

    Tested with:
    - Samba 2.2.3a (Debian Woody) as Server
    - Kernel-Samba 2.4.* as Client

    But perhaps I missed something...

    Edgar
  • Oh. My. God. (Score:5, Informative)

    by schon ( 31600 ) on Thursday December 18, 2003 @11:35AM (#7754315)
    Setting up Samba over SSH Tunnel

    For a quick-and-dirty solution for one or two users, over a reliable connection, this might be sufficient, but for the poster's problem, it would be a nightmare.

    TCP over TCP is a bad idea because it amplifies the effect of lost packets: two or three dropped packets in a short period of time will result in a cascade failure as each TCP stream attempts to compensate for the loss.

    You can find all the gory details here [sites.inka.de].
  • by TilJ ( 7607 ) on Thursday December 18, 2003 @11:36AM (#7754319) Homepage
    I agree, though from the other side of the fence: I have an existing Kerberos realm and am finding the AFS integration difficult ;-)

    There are two current stumbling blocks for me that likely won't affect the original poster:

    * OpenAFS doesn't run nicely (read: at all) on FreeBSD (tested with -STABLE on i386 and -CURRENT on sparc64). Doesn't matter if you're running it on Linux, of course.

    * AFS uses its own filesystem rather than riding on top of the O/S. That's fine, and better for security, but sucks if you want to do something fancy like distribute the same filesystem via Samba, NFSv3 and AFS simultaneously.

    To me, AFS is much more appealing than NFSv4. For one, NFSv4 is fairly rare - the implementations are basically for testing purposes and there's a limited set of operating systems supported. The extra features that AFS has (volume management, failover, ease of client maintenance, intelligent client-side caching, etc) make it a win for me.
  • Re:Samba (Score:5, Informative)

    by GaelenBurns ( 716462 ) <gaelenb@nospaM.assurancetechnologies.com> on Thursday December 18, 2003 @11:40AM (#7754367) Homepage Journal
    There is a T1 at each office, so they will be operating in connected mode the vast majority of the time. It's just that if the network connection breaks, I want to be able to rig up a way for the network shares to fail gracefully. No crashes, no five-minute timeouts for the users. And it'd be nice to be able to script the restoration of those network shares when the connection between the two servers is reestablished.

    I actually want AFS because it does local caching of files. Here is the comment [slashdot.org] where I describe that.
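
    For illustration, the "script the restoration" part could be as simple as a watchdog loop like this sketch (hostname, mount point, and the corresponding /etc/fstab entry are all hypothetical):

        #!/bin/sh
        # remount the remote share whenever the inter-office link is back up
        while true; do
            if ping -c 1 -w 5 server2.example.com >/dev/null 2>&1; then
                # assumes /mnt/remote has an entry in /etc/fstab
                mountpoint -q /mnt/remote || mount /mnt/remote
            fi
            sleep 60
        done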
  • Re:drbd (Score:4, Informative)

    by kzanol ( 23904 ) on Thursday December 18, 2003 @11:47AM (#7754433)
    What about drbd? It's a mirroring thing, like RAID 1, over a network

    Won't help in this situation:
    A drbd setup will keep one (or several) partitions synchronized between two servers. The problem is, one and only one server may access the device at a time.
    drbd is useful for high-availability configurations where you need a standby server with current data that can take over if something happens to the primary server. It's most often used together with a cluster manager like heartbeat.
    In the scenario described above, where you need concurrent access to the same data on several servers, drbd isn't yet usable.
    Still, keep watching: development definitely moves in a direction that should make this possible. Steps needed to make this happen:

    • Make drbd writeable on both servers
    • add a distributed file system like GFS
    • add a distributed lock manager
    It'll be some time before drbd will be able to do all that.
  • Re:Samba (Score:5, Informative)

    by nocomment ( 239368 ) on Thursday December 18, 2003 @12:50PM (#7755134) Homepage Journal
    Eh, NFS is a fine way to do it. But since you are trying to keep data synchronized, I might suggest that you could very easily make it filesystem-agnostic by using rsync.

    I have a cluster of 4 machines that is remotely sync'd over an ssh tunnel using rsync. It's pretty easy to do.
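
    A minimal version of that, run from cron, might look like this (paths and hostname are made up):

        # push local changes to the other office over ssh; the trailing
        # slash on the source means "copy its contents", not the dir itself
        rsync -az -e ssh /srv/share/ server2.example.com:/srv/share/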
  • by LoneRanger ( 81227 ) <<jboyens> <at> <fooninja.org>> on Thursday December 18, 2003 @12:54PM (#7755171) Journal
    I agree there can be /some/ issues with installing software, but anything worth its salt shouldn't break too badly. Some quick permissions fixing can be done, and if you have the top-level directory permissioned right then it isn't an issue. I wouldn't ever suggest using AFS as your root fs. :)

    Even then, the poster isn't asking for a software repository; he or she is asking for a networked filesystem that provides some sort of offline use. Which is exactly the niche AFS fills.
  • by LoneRanger ( 81227 ) <<jboyens> <at> <fooninja.org>> on Thursday December 18, 2003 @01:05PM (#7755271) Journal
    * OpenAFS doesn't run nicely (read: at all) on FreeBSD (tested with -STABLE on i386 and -CURRENT on sparc64). Doesn't matter if you're running it on Linux, of course.

    How long has it been since you tried this? I seem to remember the OpenAFS team fixing a lot of their FreeBSD issues. I know OpenBSD recommends OpenAFS as a network file store. Failing that, you could try Arla; you should be able to Google for it. IIRC, Arla fully supports FreeBSD as both a client and a server.

    * AFS uses its own filesystem rather than riding on top of the O/S. That's fine, and better for security, but sucks if you want to do something fancy like distribute the same filesystem via Samba, NFSv3 and AFS simultaneously.

    Samba is supported somehow, IIRC, but I KNOW that AFS over NFS is supported because it's in the documentation... Appendix A. Managing the NFS/AFS Translator [openafs.org]

  • Re:Yikes! (Score:2, Informative)

    by Anonymous Coward on Thursday December 18, 2003 @03:41PM (#7756700)
    OBDs run on top of ext3 (well, sort of; it's a hacked ext3, but basically it doesn't add any really new features on the journalling side).

    Lustre is a lot more stable than it used to be :)

    The failover is an "in development" feature. I know people who claim to be using it, but I wouldn't count on it working when you need it. It's just using clumanager (or similar) and a service start on the "failover" machines. It really doesn't do all that much, and requires some heavy scripting and hand-holding to get it to work at all.

    It's a pretty good "in cluster" solution, but I wouldn't recommend it (today at least) as a remote-filesystem option.
  • Unison + SAMBA (Score:3, Informative)

    by obi ( 118631 ) on Thursday December 18, 2003 @04:05PM (#7756969)
    It sounds to me like you're trying to connect two servers in different locations, which then serve the files out to the clients through Samba. And the connection between those offices might drop.

    Maybe it's worth considering Unison - it's built to run over SSH, and is like a two-way rsync. It keeps state on both sides, and you can set it up so it automatically/regularly updates both sides with the changes from the other side. There's a window of conflicting updates, that's true, but you'd also have that with InterMezzo or Coda when they're in disconnected mode. Additionally, Unison is completely userspace; it doesn't care about what filesystem it might be running on. And there's a Windows/MacOSX port too, IIRC.

    And hey, it's only an apt-get away :) - it's in Debian.
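
    As a sketch, a two-way sync over SSH looks like this (roots are made up; read the Unison docs before trusting it with real data):

        # synchronise the local tree with the other office's copy,
        # accepting all non-conflicting changes without prompting
        unison /srv/share ssh://server2.example.com//srv/share -batch -auto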

  • by lkaos ( 187507 ) <anthony@codemonk ... s minus math_god> on Thursday December 18, 2003 @05:21PM (#7757707) Homepage Journal
    So you're using smbfs (it's not called Kernel-Samba) plus Samba 2.2.3a.

    Both of those are very old and unmaintained. You should try out this setup with Samba 3.0 and cifsfs (available for 2.4 or 2.6).

    If you still have this problem, submit a bug report to Samba.
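
    For what it's worth, the client side of that newer setup is just (server and share names are made up):

        # cifs vfs instead of smbfs; needs the cifs kernel module loaded
        mount -t cifs //server1/projects /mnt/projects -o username=gaelen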
  • Re:AFS (Score:2, Informative)

    by rufey ( 683902 ) on Thursday December 18, 2003 @08:53PM (#7759335)
    Unless AFS has changed significantly since the last time I used it (1998), I don't know if it would be the best solution.

    AFS was a nice filesystem to work with, but it took more to maintain it than our regular NFS mounts. The local (client-side) caching of files was nice, though. So was the concept of having a master read/write volume and being able to then replicate that volume to read-only volumes, and replicating them only when we wanted to. So we could put new programs on the read/write volumes, test them out, and, when it was all tested, "release" the volumes.

    Access permissions are definitely different from your Samba/CIFS/NFS filesystems, though. It's akin to Kerberos in that you have to have a "token", and your "token" has to have rights to the file in order to read it. And "tokens" used to not be an obtain-once-and-use-forever thing. They expired every 24 hours, so every 24 hours you'd have to re-authenticate.
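
    For anyone who hasn't seen them, AFS directory ACLs look roughly like this (path and usernames are made up):

        fs listacl /afs/example.com/home/rufey         # show the ACL
        fs setacl /afs/example.com/home/rufey bob rl   # grant bob read+lookup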

    One thing that we found we didn't like (this was with AFS 3.3/3.4) was that the cache of files on the client machine was not encrypted. So if someone knew how the cache was structured, they could retrieve the files in the cache without having any AFS tokens (the cache exists on local disk, not in AFS space). This may have changed.

    One other thing we had a problem with was when the AFS volume(s) would disappear from the client, and/or the client lost contact with the cell's AFS servers. The machine would become useless until the connection came back. This was all on UNIX (Sun, HP, SGI, BSD, Linux). Part of the problem was that /usr/local was in AFS space and contained most of the userland programs in use.

  • AFS documentation (Score:5, Informative)

    by wik ( 10258 ) on Friday December 19, 2003 @04:08PM (#7767932) Homepage Journal
    As far as AFS documentation goes, I found the following documents useful when installing a new AFS cell/kerberos realm earlier this month.

    First, the AFS quick start guide on openafs.org (http://www.openafs.org/pages/doc/QuickStartUnix/auqbg000.htm) provided step-by-step installation instructions for the AFS server and client. Having been an AFS user for the past 7 years did help a bit.

    Second, the quick start guide assumes you are using the kaserver included with OpenAFS. Everyone and their pet dog now recommends installing a real Kerberos 5 daemon instead. We chose Heimdal 0.6. The new O'Reilly book "Kerberos: The Definitive Guide" was invaluable for this. To put the two together, this impossible-to-find wiki page http://grand.central.org/twiki/bin/view/AFSLore/KerberosAFSInstall explains the changes to the quick start required to actually integrate Kerberos 5.
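
    The key-integration step that page covers boils down to a sketch like this (cell and realm names are made up; follow the wiki for the real procedure, including the key-type restrictions older AFS versions have):

        # create the AFS service principal in Heimdal and export its key
        kadmin -l
        kadmin> add --random-key afs/example.com@EXAMPLE.COM
        kadmin> ext_keytab --keytab=/tmp/afs.keytab afs/example.com@EXAMPLE.COM
        kadmin> exit
        # load the key into the AFS server KeyFile
        # (use the principal's actual kvno, not necessarily 3)
        asetkey add 3 /tmp/afs.keytab afs/example.com@EXAMPLE.COM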

    Finally, to get a PAM login that obtains both Kerberos 4 (for AFS) and Kerberos 5 tickets and tokens, we used pam-krb5afs (http://sourceforge.net/projects/pam-krb5/) as the login module.
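
    The resulting /etc/pam.d/login fragment is short; something like this sketch (module name per the pam-krb5 project above; options vary between versions):

        auth     sufficient   pam_krb5afs.so
        session  optional     pam_krb5afs.so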

    Unfortunately, none of this is tied together in a single cohesive document and I'm still trying to organize my notes. Overall, I was able to get the kerberos realm and AFS up in about a day, while getting the pam module and openssh to play nicely took three to four days.
