
What is the Best Remote Filesystem?

GaelenBurns asks: "I've got a project that I'd like the Slashdot community's opinion of. We have two distant office buildings and a passel of Windows users that need to be able to access the files on either office's Debian server from either location through Samba shares. We tend to think that AFS would be the best choice for mounting a remote file system and keeping data synchronized, but we're having trouble finding documentation that coherently explains installing AFS. Furthermore, NFS doesn't seem like a good option, since I've read that it doesn't fail gracefully should the net connection ever drop. Others, such as Coda and Intermezzo, seem to be stuck in development, and therefore aren't sufficiently stable. I know tools for this must exist; please enlighten me."
  • Samba (Score:4, Interesting)

    by fluor2 ( 242824 ) on Thursday December 18, 2003 @10:14AM (#7753542)
    It looks to me that both AFS and NFS are kinda outdated. Samba 3 supports NTLMv2 and Kerberos-encrypted passwords. I like that.
    • Re:Samba (Score:3, Insightful)

      Samba doesn't feature disconnected mode - and given that the article discusses AFS, InterMezzo and Coda, all of which support disconnected mode natively, I guess that would be a requirement.
      • Re:Samba (Score:5, Informative)

        by GaelenBurns ( 716462 ) <gaelenb@ a s s u ... echnologies.com> on Thursday December 18, 2003 @11:40AM (#7754367) Homepage Journal
        There is a T1 at each office, so they will be operating in connected mode the vast majority of the time. It's just that if the network connection breaks, I want to rig things up so that the network shares fail gracefully: no crashes, no five-minute timeouts for the users. And it'd be nice to be able to script the restoration of those network shares when the connection between the two servers is reestablished.

        I actually want AFS because it does local caching of files. Here is the comment [slashdot.org] where I describe that.
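
        (A rough sketch of the kind of reconnect watchdog described above - the hostname, share and mount point are placeholders, and it assumes the share is mounted with smbmount:)

        #!/bin/sh
        # Remount the remote office's share once the link comes back up.
        REMOTE=remote-office-server        # placeholder hostname
        SHARE=projects                     # placeholder share name
        MOUNTPOINT=/mnt/remote-projects    # placeholder mount point

        while true; do
            if ! grep -q " $MOUNTPOINT " /proc/mounts; then
                # Only try to remount once the other office answers pings.
                if ping -c 1 -w 5 "$REMOTE" >/dev/null 2>&1; then
                    smbmount "//$REMOTE/$SHARE" "$MOUNTPOINT" -o guest
                fi
            fi
            sleep 60
        done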
        • Coda also does local caching of files, or at least, so claims its documentation. I've never tested it.
          • Re:Samba (Score:4, Insightful)

            by Jahf ( 21968 ) on Thursday December 18, 2003 @05:01PM (#7757509) Journal
            I thought that NFSv4 was also supposed to support local caching of files?

            I REALLY wanted to use Intermezzo for my home setup, where I have a central server and some nomadic (as in all over the country, not just around the house) laptops, but after trying it and participating briefly in the mailing list, I agree with the poster: it is just too stuck in development. The version of Intermezzo that most people have in their distros is often the older version that isn't even compatible anymore.

            AFS was too much for my personal needs. I think for now I'll just be doing manual syncs using one of the various non-filesystem sync tools, but I really would like to see something like Coda or Intermezzo fully mature to an end-user I-feel-safe-with-my-data level.
    • (note: by master/slave terminology I only mean that the master server is used more. Only AFS has a hierarchy where master/slave really matters)

      AFS would be awesome... you see, sometimes these two offices need to work on the same files from both locations... not simultaneously, but sometimes consecutively. In those cases, it'd be great to have a setup that locally caches the file on the slave server, but will automatically serve the most recent version of the file, even if it had since been edited on the master server. With AFS, all of that is taken care of by the server, I believe.
      • Wrong problem? (Score:5, Insightful)

        by fm6 ( 162816 ) on Thursday December 18, 2003 @02:22PM (#7755965) Homepage Journal
        AFS would be awesome... you see, sometimes these two offices need to work on the same files from both locations... not simultaneously, but sometimes consecutively. In those cases, it'd be great to have a setup that locally caches the file on the slave server, but will automatically serve the most recent version of the file, even if it had since been edited on the master server. With AFS, all of that is taken care of by the server, I believe.
        So far, you've said nothing about what's in these files and how they are being modified. That's not a secondary question. In fact, it may make your whole search for the right filesystem irrelevant.

        You're assuming that a remote filesystem is the only way to share files. But it's only the most common and simplest one. When you start talking about replication and version control (which you are, even though you don't use the terms) you need to consider a technology that directly supports these features. There are version control systems, databases, content management systems. Which is right for you? Without knowing more about the data you're dealing with, it's impossible to say.

      • Re:AFS (Score:2, Informative)

        by rufey ( 683902 )
        Unless AFS has changed significantly since the last time I used it (1998), I don't know if it would be the best solution.

        AFS was a nice filesystem to work with, but it took more to maintain it than our regular NFS mounts. The local (client-side) caching of files was nice though. So was the concept of having a master read/write volume and being able to then replicate that volume to read-only volumes, and replicating them only when we wanted to. So we could put new programs on the read/write volumes, t

        • Re:AFS (Score:3, Interesting)

          by wik ( 10258 )
          afsd now refuses to start unless the cache directory is owned by root and chmod 600. As far as I know, the cache is still not encrypted, but if you can't trust root on the system, then you have bigger problems.

          AFS is still nasty if you lose contact with the servers. That definitely will be a problem if /usr/local is remote. I have yet to see a network file system that can gracefully handle this situation.
        • It's akin to Kerberos, where you have to have a "token"
          It is Kerberos, although AFS ships with Kerberos v4. However, I've heard of people using Kerberos v5 with it, though that needs some extra effort.
    • Re:Samba (Score:5, Informative)

      by nocomment ( 239368 ) on Thursday December 18, 2003 @12:50PM (#7755134) Homepage Journal
      eh NFS is a fine way to do it. I might suggest that since you are trying to keep data synchronized, you could very easily make it filesystem agnostic by using rsync.

      I have a cluster of 4 machines that is remotely sync'd over an ssh tunnel using rsync. It's pretty easy to do.
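
      (For reference, a minimal sketch of that kind of rsync-over-SSH job - the host name and paths are placeholders:)

      # Push local changes to the other office over SSH; -a preserves permissions
      # and times, -z compresses, --delete propagates removals.
      rsync -az --delete -e ssh /srv/shares/ officeB.example.com:/srv/shares/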
    • BSD (OpenBSD at least) doesn't mount SMB/CIFS shares easily. 'shlight' is supposed to be able to do it, in some circumstances, but as a newbie user, I have problems with certain setups.

      (OpenBSD + Samba will export SMB/CIFS shares just fine)

  • by Hitch ( 1361 ) <hitch.propheteer@org> on Thursday December 18, 2003 @10:18AM (#7753568) Homepage
    I've got developers that need to have a consistent home directory over several unix and windows boxes - we're using samba *and* nfs - an ugly system at best. I'm currently in the situation where I can start over, more or less, so I'm looking at better options. any suggestions are appreciated.
    • by David McBride ( 183571 ) <david+slashdot&dwm,me,uk> on Thursday December 18, 2003 @10:44AM (#7753809) Homepage
      The way we do it is that we have some underlying file store running on unix machines. At the moment we've got a couple Sun machines with large RAID arrays.

      Then, to provide access to clients, we use Samba as a bridge to the Windows desktops and NFS for trusted linux clients; untrusted hosts can use SFTP or, if they just need read access, HTTP.
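
      (As an illustration of that bridging, share and export definitions along these lines are all it takes - the paths and host names below are made up:)

      # /etc/samba/smb.conf - the store, exported to the Windows desktops
      [projects]
          path = /export/projects
          valid users = @staff
          read only = no

      # /etc/exports - the same tree for trusted Linux clients over NFS
      /export/projects  trusted-client.example.com(rw,sync)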

      Having multiple storage nodes on multiple sites synchronized is a SAN, not client access, problem. NFS just doesn't provide multiple-node functionality. NFSv4 (link [nfsv4.org], link [umich.edu]) may have some interesting features that could help; AFS [openafs.org] was designed with multiple sites in mind and does intelligent caching and has other useful features over NFS but does have some limitations; and then there's things like IBM's Storage Tank [ibm.com] which I haven't had a chance to look at properly yet.

      Bottom line: If you have a flexible SAN infrastructure, you can use bridging nodes to provide access to the SAN tailored to whatever your clients require. The infrastructure is the hard part; with commodity packages like Samba, client support is a much simpler, separate issue.
      • The infrastructure is the hard part; with commodity packages like Samba, client support is a much simpler, separate issue.

        Exactly right. The client connections will all be done via samba... it's the infrastructure I'm asking about.

        That being said, NFS4 seems to still be in development and we need something that is finished and ready for use now. Storage Tank sounds nice... but something tells me it's not free software. Free is good. Finally, AFS is the glory... but the documentation is horrible. W
  • drbd (Score:5, Informative)

    by JimmyGulp ( 60100 ) on Thursday December 18, 2003 @10:26AM (#7753643) Homepage
    What about drbd? It's a mirroring thing, like RAID 1, over a network. This way, the data is synchronised, and all you have to do is mount/share the data from the nearest server, whichever way you want. Try this: http://drbd.cubit.at/ [cubit.at].

    I think it can manage to re-sync everything when the network line comes back up, but I'm not sure.
    • Re:drbd (Score:4, Informative)

      by kzanol ( 23904 ) on Thursday December 18, 2003 @11:47AM (#7754433)
      What about drbd? It's a mirroring thing, like RAID 1, over a network

      Won't help in this situation:
      A drbd setup will keep one (or several) partitions synchronized between two servers. The problem is, one and only one server may access the device at a time.
      drbd is useful for high availability configurations where you need a standby server with current data that can take over if something happens to the primary server. It's most often used together with a cluster manager like heartbeat.
      In the scenario described above, where you need concurrent access to the same data on several servers, drbd isn't yet usable.
      Still, keep watching: development definitely moves in a direction that should make this possible. Steps needed to make this happen:

      • Make drbd writeable on both servers
      • add a distributed file system like GFS
      • add a distributed lock manager
      It'll be some time before drbd will be able to do all that.
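
      (For anyone curious, a drbd resource definition looks roughly like the following - the host names, devices and addresses are invented, and the exact syntax varies between drbd releases:)

      resource r0 {
        protocol C;                  # synchronous replication
        on serverA {
          device    /dev/drbd0;
          disk      /dev/sda7;
          address   10.0.1.1:7788;
          meta-disk internal;
        }
        on serverB {
          device    /dev/drbd0;
          disk      /dev/sda7;
          address   10.0.2.1:7788;
          meta-disk internal;
        }
      }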
  • by 4of12 ( 97621 ) on Thursday December 18, 2003 @10:35AM (#7753726) Homepage Journal

    I'm sorry I can't address your question for good remote filesystems in the face of an unreliable network. My network has been relatively reliable and that's been a decreasing concern. Perhaps network reliability will be less of a concern for you, too, in future.

    Lately, what I've been looking for is a remote filesystem that provides performance, security, and flexibility - flexibility meaning being able to log into someone else's desktop machine and easily get my home directory mounted, whether from a big server that's up 24x7 or from my own desktop.

    Some have dabbled with DCE/DFS [lemson.com], but I've heard that it's slowly dying, ponderous to set up, and that its performance suffers.

    SFS [nec.com] looks intriguing, but I haven't heard pro or con about its performance. It appears to be secure and flexible.

    NFS is an old friend and, yes, if the network or the server dies, a lot of local sessions will hang interminably with 'NFS server not responding'. But this doesn't happen as much as it did 5 years ago.

    Right now we're running NFS v3, but the new NFSv4 [nfsv4.org] looks like it has a better security model.

    Finally (and you shouldn't even think about this if network reliability is an issue), simple block service like iSCSI [digit-life.com] looks promising as a way of interchangeably moving around from desktop to desktop and getting your same home directory no matter where you are. What's more, you could conceivably even get your own flavor of OS booting, be it Red Hat 9, Win2K, XP, Gentoo, etc. I don't know about its security; it's heavily dependent on a reliable, high-performance network, but it looks like a good way to get the most storage for your dollar (NAS instead of SAN).

    • "I'm sorry I can't address your question for good remote filesystems in the face of an unreliable network."

      I suspect what he means is that the core network within each site is reliable -- just that the linkage between the two or more storage nodes he has to manage may *not* be, and he wants to be able to recover gracefully in the event of the inter-site link going down and back up again.
    • I only mention the unreliable network because technically, it is. Just like any net connection that I've ever heard of, a T1 does not guarantee 100% uptime. We should see an uptime of greater than 99.98%.
      • If this is the case, it sounds like your real problem is with the network. For all your time, effort, and money, you may be better served by tackling this first. A few extra links and a routing protocol can dramatically increase the reliability of a lan/wan environment.
        Consider the Internet: how often do you hear about its WAN links going down? Considering the number of links involved in a normal Internet path, very few. And consider that 99% reliability still means about 3.65 days/year of downtime. If you use an averag
    • SFS performs quite well. I recommend it but AFAICT it's only for linux and bsd ATM.
  • security (Score:2, Informative)

    by 1isp_hax0r ( 725178 )
    Since the office buildings are distant, chances are that the connection between them is untrusted. Don't forget to send data through secure tunnels (e.g. an ssh [openssh.org] tunnel).
    • "CVS is not the answer, CVS is the question - the answer is no!"

      Can't remember where I saw that quote first (LKML??) but I think it sums things up quite nicely... :-)
    • Yes, CVS is what I would use.
      Especially with that Windows client, Tortoise(?), that is embedded into Windows Explorer so there is no ugly client to learn. Nice color coded folders and files: green - current, red - updated, ? - new.
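
      (If CVS fits the data, the remote setup is just a checkout over SSH - the repository path and host below are placeholders:)

      # Use SSH as the transport and check out a module from the office server
      export CVS_RSH=ssh
      cvs -d :ext:user@cvs.example.com:/var/lib/cvs checkout project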
  • Simple : 9p (Score:3, Interesting)

    by DrSkwid ( 118965 ) on Thursday December 18, 2003 @10:47AM (#7753837) Journal

    9p [bell-labs.com]

    • Oh yes... we're well aware of Plan 9. As a matter of fact, we lament it almost constantly. Correct me if I'm wrong, but there is no linux version of Plan 9. It's still an OS, right? I suppose that if I were convinced this was the only solution I could always throw another pair of servers into the mix.
      • Re:Simple : 9p (Score:3, Interesting)

        by DrSkwid ( 118965 )
        9p is a protocol not an OS, it is OS agnostic.

        I have a python 9p server daemon and clients.

        The question asked was "what's the best remote file system", and 9p is the answer.

  • by Halvard ( 102061 ) on Thursday December 18, 2003 @10:47AM (#7753839)

    Then you don't have to synchronize.

    If you haven't already installed SSH on a machine in both locations, do so.

    Follow the "Setting up Samba over SSH Tunnel mini-HOWTO" by Mark Williamson [ibiblio.org]. Then you can use the server on each side to share out the files on the other side and not even change anything about how your users do anything. It's very simple to set up. It's 3 steps on each side plus adding it into a log in script or mapping on the individual machines. So you should be ready in 5 minutes.

    If you still want to synchronize, there are tons of tools to do that, including Unison [upenn.edu].

    • Oh. My. God. (Score:5, Informative)

      by schon ( 31600 ) on Thursday December 18, 2003 @11:35AM (#7754315)
      Setting up Samba over SSH Tunnel

      For a quick-and-dirty solution for one or two users, over a reliable connection, this might be sufficient, but for the poster's problem, it would be a nightmare.

      TCP over TCP is a bad idea because it amplifies the effect of lost packets.. two or three dropped packets in a short period of time will result in a cascade failure as each TCP stream attempts to compensate for the loss.

      You can find all the gory details here [sites.inka.de].
      • Not quite (Score:5, Insightful)

        by wowbagger ( 69688 ) on Thursday December 18, 2003 @12:38PM (#7754980) Homepage Journal
        SSH port forwarding isn't "TCP over TCP" - the SSH client isn't simply sending the TCP packets over the wire; it is sending their contents over.

        Suppose we have 2 computers, A and B, connected via SSH, and forwarding some service. A sends a block of data to B.

        The sequence is NOT:
        A packages data into a TCP packet.
        SSH encrypts the packet and packages it into another TCP packet.
        B receives the SSH packet and acks it.
        B decrypts the packet.
        B acks the inner packet.

        The sequence IS:
        A packages data into a TCP packet.
        SSH receives and acks the packet.
        SSH encrypts the PAYLOAD of the TCP packet.
        SSH sends the packet.
        B receives the SSH packet and acks it.
        B extracts the data.
        B packages the data into a local TCP packet, sends it, and acks it locally.

        So you don't get into the cascade failure mode for TCP over TCP.

        Now, if you use your SSH connection to forward PPP data over the wire - THEN you are getting into TCP over TCP because the SSH session is actually forwarding the PPP packets.
    • Nice howto. One thing you may want to consider adding is the option to use SSH w/o any encryption. I think you may have to recompile to get that support in the sshd.

      I set up what you are talking about with cygwin's ssh connecting to a linux box at home, and the connection was slow to the point of being almost unusable. That SMB is one chatty protocol. I did not try the SSH w/o encryption thing though.

  • AFS is what you want (Score:5, Informative)

    by LoneRanger ( 81227 ) <jboyens&fooninja,org> on Thursday December 18, 2003 @11:03AM (#7753954) Journal
    Frankly AFS is what you want and what you need. I used to work at a site with over 26,000 AFS users and it was a magical system. It is hard to set up, I'll grant you that, but only the first time. After you've got it down once it's old hat after that.

    My biggest issue when I was setting it up was Kerberos integration, which can be tricky, but the guys on the OpenAFS mailing lists are incredibly nice and knowledgeable. Another issue is that daemons that like to write to user home dirs won't work real well unless you find a way to have them get an AFS token or Kerberos ticket.

    If I were you I would SERIOUSLY consider AFS; don't listen to those who would say it's old and outdated, because it's not. OpenAFS is being actively developed and new features are being added all the time.

    Feel free to email me if you want and I'll discuss the advantages/disadvantages further or help you get resources to set up your AFS system.
    • by TilJ ( 7607 ) on Thursday December 18, 2003 @11:36AM (#7754319) Homepage
      I agree, though from the other side of the fence: I have an existing Kerberos realm and am finding the AFS integration difficult ;-)

      There are two current stumbling blocks for me that likely won't affect the original poster:

      * OpenAFS doesn't run nicely (read: at all) on FreeBSD (tested with -STABLE on i386 and -CURRENT on sparc64). Doesn't matter if you're running it on Linux, of course.

      * AFS uses its own filesystem rather than riding on top of the O/S. That's fine, and better for security, but sucks if you want to do something fancy like distribute the same filesystem via samba, NFSv3 and AFS simultaneously.

      To me, AFS is much more appealing than NFSv4. For one, NFSv4 is fairly rare - the implementations are basically for testing purposes and there's a limited set of operating systems supported. The extra features that AFS has (volume management, failover, ease of client maintenance, intelligent client-side caching, etc) make it a win for me.
      • AFS uses its own filesystem rather than riding on top of the O/S. That's fine, and better for security, but sucks if you want to do something fancy like distribute the same filesystem via samba, NFSv3 and AFS simultaneously.

        Another side effect is that the semantics of AFS aren't the same as the semantics of traditional Unix filesystems. For instance, there are some permissions issues that can make building/installing software onto an AFS-served filesystem a hassle. I had to administer (commercial) AFS

        • I agree there can be /some/ issues with installing software, but anything worth its salt shouldn't break too badly. Some quick permissions fixing can be done, and if you have the top-level directory permissioned right then it isn't an issue. I wouldn't ever suggest using AFS as your root fs. :)

          Even then the poster isn't asking for a software repository, he's/she's asking for a networked filesystem that provides some sort of offline use. Which is exactly the niche AFS fills.
      • * OpenAFS doesn't run nicely (read: at all) on FreeBSD (tested with -STABLE on i386 and -CURRENT on sparc64). Doesn't matter if you're running it on Linux, of course.

        How long has it been since you tried this? I seem to remember the OpenAFS team fixing a lot of their FreeBSD issues. I know OpenBSD recommends OpenAFS as a network file store. Even then you could try ARLA (?). Should be able to Google for it. IIRC Arla fully supports FreeBSD as both a client and a server.

        * AFS uses it's own filesyst

        • How long has it been since you tried this? I seem to remember the OpenAFS team fixing a lot of their FreeBSD issues. I know OpenBSD recommends OpenAFS as a network file store. Even then you could try ARLA (?). Should be able to Google for it. IIRC Arla fully supports FreeBSD as both a client and a server.

          It's been over a month, I think. My only -CURRENT server is a sparc64 and that could be revealing problems that don't occur on i386. Arla has been marked as "broken - does not build" in the ports tree for
    • After you've got it down once it's old hat after that.

      I remember the first time that I did a 'vos move' on an AFS server and the volume moved from one server to the other without any downtime for the users. Talk about an admin's dream! :-)
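
      (For the curious, the command is along these lines - the volume, server and partition names are made up:)

      # Move a volume to another file server with no downtime for clients
      vos move home.alice serverA /vicepa serverB /vicepb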

  • How about Lustre? (Score:5, Informative)

    by Anonymous Coward on Thursday December 18, 2003 @11:16AM (#7754103)
    Lustre [lustre.org] is something we're looking at rolling out for user home directories, although a few labs already have 100TB+ file systems using it. You get redundant servers at all levels (which deals with the synchronization problems), and best of all, you can stripe all your existing disks to create one logical disk. Think LVM for network-connected machines. It's pretty fast too.
    • by fm6 ( 162816 )
      Part of Lustre appears to be a new local journalling file system called OBDFS. Pretty interesting in itself, though they say little about it.

      Worth noting that ClusterFS is advertising Lustre as a pre-1.0 product. Probably not a current option for anybody who can't afford a big support contract.

      • Re:Yikes! (Score:2, Informative)

        by Anonymous Coward
        OBDs run on top of ext3 (well, sort of; it's a hacked ext3, but basically it doesn't add any really new features on the journalling side).

        Lustre is a lot more stable than it used to be :)

        The failover is an "in development" feature. I know people who claim to be using it, but I wouldn't count on it working when you need it. It's just using clumanager (or similar) and a service start on the "failover" machines. It really doesn't do all that much, and requires some heavy scripting and hand holding to get i
  • I was surprised to hear that where I work [uni-saarland.de] they are dropping NFS in favor of SMB.
    The reason I was given was that SMB has better permissions/access rights across all platforms.
    -greg
    • The problem is getting your *NIX machines to play nice with the SMB server's permissions. NFS permissions transfer smoothly (dangerously smoothly if you ask me!). I couldn't get my Linux boxen to play nice as clients with my Samba servers, though; they just didn't respect the permissions like they should have.

      Maybe I did something wrong, but now that I've got the *NIX boxes using NFS and the Win32 boxes getting the same data via SMB, all is well.

      Anyone else have this problem? Anyone out there mounting home fold
  • I advise against Linux Kernel-Samba, at least if you want your clients (be it workstations or servers) to have some uptime. After some days, possibly weeks, it randomly stops working, with all programs having open file descriptors on the samba share hanging. If you kill (-9) them, or the smbmount process, they go zombie. Any other program which tries to access the former mount point immediately goes zombie as well (your shell checking what's wrong, updatedb, ...). After several more days I have seen those zombie-proce
    • So you're using smbfs (it's not called Kernel-Samba) plus Samba 2.2.3a.

      Both of those are very old and unmaintained. You should try out this setup with Samba 3.0 and cifsfs (available for 2.4 or 2.6).

      If you still have this problem, submit a bug report to Samba.
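
      (Assuming a kernel with the CIFS VFS available, the mount is something like this - the server and share names are placeholders:)

      # Mount through the newer CIFS client instead of the old smbfs module
      mount -t cifs //server/share /mnt/share -o username=someuser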
    • Yeah, you're probably missing something.

      I use smbfs mounts all over the place where I work, for weeks/months on end. Have not seen what you describe at all.

      This is with varying versions of 2.4 Linux kernels (on Red Hat and Slackware systems), varying versions of Samba, frame-relay, straight T1, and VPN connections between more than twenty sites.

      A Samba mount will hang when a link goes down, although sometimes Samba will recover (if the outage is only for a short period of time).
      • I use smbfs mounts all over the place where I work, for weeks/months on end. Have not seen what you describe at all.

        I use smbfs a lot also without problems. I still have problems every once in a while. For example, if I copy a large amount of data (say, with cp -a) from an smbfs mount to a local mount, and do a df in another window and ctrl-c the df when it hangs for a few seconds on the smbfs mount, a couple dozen files will fail to copy with an IO error, and then the copy will continue. When I ran into

  • You should be using a VPN if you have two offices and two firewalls. Unless your debian machines ARE your firewalls; in that case, NFS or samba would be fine. However, machines will still lock up or be slow if the internet gets slow or you drop a connection from one place to another.
  • Simple. You have samba already; set up openssh on a machine with NICs on the inside and outside of your network.

    On your win32 clients, set up putty (use the latest dev version) with a tunnel to port 139 on your fileserver, and map the network drive on windows as \\127.0.0.1\sharename

    That's it! A free solution.
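
    (The command-line equivalent is roughly the following - the host and share names are placeholders, it assumes a recent plink, and forwarding local port 139 assumes the Windows box's own NetBIOS session service isn't already bound to it:)

    rem On the Windows client, using plink (PuTTY's command-line SSH):
    plink -N -L 139:fileserver:139 user@ssh-gateway.example.com

    rem Then map the tunnelled share as a drive:
    net use Z: \\127.0.0.1\sharename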

  • FAT16 (Score:5, Funny)

    by turg ( 19864 ) * <turg@@@winston...org> on Thursday December 18, 2003 @02:12PM (#7755866) Journal
    I think FAT16 is the best remote filesystem -- I like it best when FAT16 is as remote from myself as possible.
  • Samba3 is an amazing piece of software, don't get me wrong. Yet it exists to play patty-cake with Windows, and neither the Windows nor the Linux side gets what it really wants. The NFS on the table doesn't look terrible, but what we have available now is pretty unusable. AFS, Coda, etc. probably aren't going to be a good solution either.

    I am starting to get interested in whatever Novell has that can save us from this mess. Of course, something free would be best, some middle ground that any OS can imple
  • Unison + SAMBA (Score:3, Informative)

    by obi ( 118631 ) on Thursday December 18, 2003 @04:05PM (#7756969)
    It sounds to me like you're trying to connect two servers in different locations, which then serve the files out to the clients through Samba. And the connection between those offices might drop.

    Maybe it's worth considering Unison - it's built to run over SSH, and is like a two-way rsync. It keeps state on both sides, and you can set it up so it automatically/regularly updates each side with the changes from the other. There's a window of conflicting updates, that's true, but you'd also have that with Intermezzo or Coda when they're in disconnected mode. Additionally, Unison is completely userspace; it doesn't care about what filesystem it might be running on. And there's a Windows/MacOSX port too, iirc.

    And hey, it's only an apt-get away :) - it's in Debian.
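
    (A minimal invocation, with made-up paths, would be something like:)

    # Two-way sync of the local tree with the other office over SSH;
    # -batch skips the interactive prompts so it can run unattended.
    unison /srv/shares ssh://officeB.example.com//srv/shares -batch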

  • The only remote filesystem I found where the en-/decryption of files is done on the client side is TCFS [www.tcfs.it]. Unfortunately, it seems to be unmaintained (the last news is 13 months old, and the Linux version still uses kernel 2.2).

    All other "encrypting remote filesystems" encrypt only the filetransfer, not the filestorage (AFS or - if i understood the FAQ correctly - SFS [fs.net]). So the fileserver admin (or an intruder or trojan) is able to read served files cleartext.

    What's required is a remote filesystem where the clients do

    • Judging from the way he wants to use the filesystem, I don't think encrypted storage would be necessary, and probably not convenient.

      He's talking about using samba to export the files to windows clients anyhow, and I don't think samba encrypts the file transfer, so I don't think the data is that sensitive. (He has mentioned the link between the two sites is encrypted, though.)

  • by adam872 ( 652411 ) on Thursday December 18, 2003 @06:50PM (#7758502)
    ...then I would consider building a SAN with replication. High-end storage solutions using HDS and/or EMC gear fix this problem by enabling remote block-for-block copy of data between identical arrays. Veritas also makes a product called Volume Replicator that does effectively the same thing. By the sounds of it, this would be out of your price range, but it would do the job (we have a 15TB data centre mirrored using EMC's SRDF and another one using Volume Replicator).

    In terms of free ways to do it, it will really depend on how sync'd the two offices need to be. If it's instantaneous, then you will need to have one master server and both sites pointing to it. Others have mentioned AFS, but that is also non trivial. If the synch doesn't have to be instantaneous, then perhaps a regular rsync tunneled through SSH would do the trick. CVS may also help, depending on the data you have.
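
    (If periodic sync is good enough, a crontab entry with placeholder paths is about all it takes:)

    # Mirror the share to the other office every 15 minutes, over SSH
    */15 * * * * rsync -az -e ssh /srv/shares/ officeB.example.com:/srv/shares/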
    • VVR doesn't allow for concurrent access at the secondary site(s). The replication is one-way, and while the volumes at the secondaries are started, Veritas strongly recommends you do not access the filesystems on them.
  • sneaker.net (Score:4, Funny)

    by turgid ( 580780 ) on Friday December 19, 2003 @07:21AM (#7762671) Journal
    Back in the day, we were forced to use sneaker.net (TM). It worked quite well, even on MS-DOS workstations with 512k RAM and an 80286 processor, and it still works to this day. Reliability is so-so, and speed can be poor, but nowadays with technological progress transfer rates can be on the order of gigabytes per second, though latencies are large (tens of seconds upwards to several days). One downside was the propagation of viruses, but distribution of code across platforms by source and proper protected-mode operating systems with selectable user privileges make viruses less dangerous.
  • AFS documentation (Score:5, Informative)

    by wik ( 10258 ) on Friday December 19, 2003 @04:08PM (#7767932) Homepage Journal
    As far as AFS documentation goes, I found the following documents useful when installing a new AFS cell/kerberos realm earlier this month.

    First, the AFS quick start guide on openafs.org (http://www.openafs.org/pages/doc/QuickStartUnix/auqbg000.htm) provided step-by-step installation instructions for the AFS server and client. Having been an AFS user for the past 7 years did help a bit.

    Second, the quick start guide assumes you are using the kaserver included with OpenAFS. Everyone and their pet dog now recommends installing a real Kerberos 5 daemon instead. We chose Heimdal 0.6. The new O'Reilly book "Kerberos: The Definitive Guide" was invaluable for this. In order to put the two together, this impossible-to-find wiki page http://grand.central.org/twiki/bin/view/AFSLore/KerberosAFSInstall explains the changes to the quick start required to actually integrate Kerberos 5.

    Finally, to get a pam login that gets both kerberos 4 (for AFS) and 5 tickets and tokens, we used pam-krb5afs (http://sourceforge.net/projects/pam-krb5/) for the login module.

    Unfortunately, none of this is tied together in a single cohesive document and I'm still trying to organize my notes. Overall, I was able to get the kerberos realm and AFS up in about a day, while getting the pam module and openssh to play nicely took three to four days.
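
    (Once the realm and cell are up, the day-to-day login sequence against a Kerberos 5 KDC is roughly as follows - the realm and cell names are placeholders:)

    # Get a Kerberos 5 ticket, turn it into an AFS token, and check the result
    kinit alice@EXAMPLE.COM
    aklog -cell example.com -k EXAMPLE.COM
    tokens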
