Data Storage | The Internet

Sharing a Subset of Data Between 2 Sites?

eldrich asks: "We have two labs: the main lab (lab 1) has 1.2TB of on-line data storage -- two machines with 600GB RAID-5 arrays hung off of them. These happily serve about 30 Linux machines via NFS over Fast Ethernet. There are 5-6 WinXP machines that connect via SMB and Samba. The lab is on a private network with a single firewall between it and the world, and we use LDAP for practically everything (hostnames, usernames, passwords, autofs, etc.). The students' lab (lab 2) is 40 miles away, with 8 workstations and 2 WinXP machines. This lab also has a small RAID-5 Linux server with 180GB of space, which it serves via NFS and Samba. Sometimes people from lab 2 are at lab 1, and while they are at the main lab they need their files. What I want to do is make lab 2's 180GB RAID a subset cache of the 1.2TB one in lab 1. This puts everyone's main storage at lab 1 (which is backed up weekly), but a local copy can be cached on the lab 2 RAID system. This gives the students a local copy for fast access, but all the safety of the backups made from our system. Does anyone know of a filesystem or programs that can help with this?"

"Some people spend 95% of their time in lab 2, so that is their 'home' server, but when they come to lab 1 for a week's stay or so, they scp/rsync their files to the lab 1 server, and at the end of the week push the changes back to lab 2. When people login to a workstation, they usually remain logged in for days at a time and xlock the screen. [If we can get this caching system working], it would mean that people moving between the labs would not need to copy files around since there would always be a 'local' copy.

The network between the labs is not fast enough for direct automounting of lab 1's server on the lab 2 workstations, especially since some files can be over 300MB in size. We have a VPN (via FreeS/WAN) between the two labs, so all data transmitted is encrypted. Also, because lab 2 has 1/6 the capacity of lab 1's RAID, it needs to hold cached copies of in-use or likely-to-be-used data only.

Crontab entries for nightly copies are not useful, because people often appear at both places on any given day.

The 3 servers currently run Linux 2.4.18 with XFS, so any solution should be compatible with XFS, but at a real push we could consider changing to another filesystem."



Comments Filter:
  • CODA and AFS (Score:3, Informative)

    by Lomby ( 147071 ) <andrea@lo[ ]rdoni.ch ['mba' in gap]> on Wednesday November 12, 2003 @12:51PM (#7454013) Homepage
    If you have a very reliable connection you may want to go for AFS [cmu.edu]

    In case the connection is not reliable (or not fast enough), you may want to try CODA [cmu.edu], a distributed filesystem that supports disconnected operation. Beware: AFS is a mature project, while CODA may still be a work in progress.
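
    For the AFS route, note that every AFS client (the lab 2 server included) keeps a local disk cache of whatever files it actually touches, which is close to the "subset cache" being asked for. A sketch of the two client-side files involved with OpenAFS -- two separate files shown together here, with an invented cell name and cache size:

      # /usr/vice/etc/ThisCell -- the AFS cell this client belongs to
      lab1.example.org

      # /usr/vice/etc/cacheinfo -- AFS mount point : cache directory : cache size in 1K blocks
      /afs:/var/cache/openafs:20000000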
  • by Bob Bitchen ( 147646 ) on Wednesday November 12, 2003 @12:52PM (#7454027) Homepage
    Don't over-engineer; keep it simple and use CVS or rsync.
    • rsync will definitely do this job nicely. Take the 180GB offline for a couple of hours and do a LAN rsync to the 1.2TB. Bring the 180GB back online, and have rsync do differentials after that. Simple! (A short sketch of both the rsync and rdiff-backup approaches follows below.)
    • Or use rdiff-backup [stanford.edu] or cvsup [cvsup.org].

      rdiff-backup is:
      rdiff-backup backs up one directory to another, possibly over a network. The target directory ends up a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of that target directory, so you can still recover files lost some time ago. The idea is to combine the best features of a mirror and an incremental backup. rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, and modification times.
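
      A minimal sketch of what either suggestion might look like on this setup, assuming the lab 1 export lives under /export/home and the lab 2 server keeps its copy under /export/cache (hostnames, paths and filenames are invented for illustration):

        # One-way mirror of the in-use subset from lab 1 to lab 2 over the existing
        # SSH/VPN link; --delete keeps it an exact mirror, -z compresses over the WAN.
        rsync -avz --delete -e ssh lab1-server:/export/home/project/ /export/cache/project/

        # Alternatively, rdiff-backup keeps reverse diffs alongside the mirror,
        # so older versions of files stay recoverable at lab 2:
        rdiff-backup lab1-server::/export/home/project /export/cache/project
        # ...and to restore a file as it was 7 days ago:
        rdiff-backup -r 7D /export/cache/project/results.dat /tmp/results.dat

      Note that both of these are one-way copies; for people writing at both ends on the same day, a two-way tool such as unison (suggested below) is a better fit.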

  • ssh (Score:3, Interesting)

    by TheSHAD0W ( 258774 ) on Wednesday November 12, 2003 @12:57PM (#7454076) Homepage
    I'm not sure caching a subset of your file base would work very well. You might instead consider installing some additional machines at the main location and letting your researchers log onto them remotely, using X or VNC if necessary. This should work much better than trying to maintain a local partial cache if you expect many cache misses, especially since some of those files are so large.
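
    As a rough sketch of that approach (the hostname is made up), someone at lab 2 could work directly on a lab 1 machine over the existing VPN:

      # Run X clients on a lab 1 box, displayed locally; -C compresses the X
      # traffic over the slow inter-lab link.
      ssh -X -C user@compute1.lab1.example.org

      # Or keep a persistent VNC desktop running at lab 1 and tunnel it over SSH:
      ssh user@compute1.lab1.example.org vncserver :1
      ssh -N -L 5901:localhost:5901 user@compute1.lab1.example.org &
      vncviewer localhost:1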
  • unison (Score:4, Informative)

    by martin ( 1336 ) <maxsec AT gmail DOT com> on Wednesday November 12, 2003 @12:58PM (#7454089) Journal
    http://www.cis.upenn.edu/~bcpierce/unison/

    works very well and is designed for this kind of thing.

    BTW - weekly backups!!!! daily surely?
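
    A minimal sketch of a Unison profile for this setup (roots and paths are assumptions, not taken from the post): save something like the following as ~/.unison/lab1.prf on the lab 2 server, then run "unison lab1" to reconcile changes in both directions over SSH.

      # ~/.unison/lab1.prf
      root = /export/cache/project
      root = ssh://user@lab1-server//export/home/project
      # only synchronise the subset people actually work on at lab 2
      path = current
      path = papers
      # skip scratch files
      ignore = Name *.tmp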
  • Me too! (Score:1, Redundant)

    by G4from128k ( 686170 )
    I too would like such a capability. We don't have terabytes of data, but my wife and I find it frustrating to co-create documents and manage who has which version on which machine, while ensuring the portability of my wife's laptop and providing the speed of accessing files locally. Ideally, we would like all of our 12,000 shared files to be in at least two or three places at once (cached on my machine, cached on her laptop, and stored on a central file server).

    I'm envisioning some type of write-through
  • FolderShare.com offers a small application that allows for various ways of sharing files between Windows systems. While it may not be sufficiently robust for your needs, it does a wonderful job of syncing my home and office files.

    For your situation, I would imagine that the server machines would run the FolderShare app, simply mirroring the lab 2 data at lab 1 in more-or-less real time.

    RC
  • by narensankar ( 595327 ) on Wednesday November 12, 2003 @01:47PM (#7454558)
    http://www.inter-mezzo.org/

    Similar to AFS and Coda, suggested earlier, but with local caching to allow much higher performance. It also works in disconnected mode.

    • Neither InterMezzo nor Coda is production-stable -- have a look at this previous post [slashdot.org].
    • I spent some time working on making intermezzo work on my machines a few months back. Eventually gave it up as a very bad idea. I don't remember right now what was the last straw, but I did spend a week or so working on it.

      A shame, too; it looked pretty good and seemed to have quite a bit of promise.

      Building a reliable, easy-to-install distributed filesystem that allows for disconnected operation, updates, and similar kinds of things would be very, very useful. (Notice the recent post on using CV

  • CacheFS is probably a good place to start. It is a cache filesystem that is backed by an NFS filesystem.
  • There are servers and clients for tons of operating systems, including every one you mentioned.
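
    For reference, Solaris's CacheFS -- a cache filesystem backed by an NFS filesystem, as described here -- is driven roughly like this (server, cache directory and mount point are invented). The Linux 2.4 kernels mentioned in the question had no equivalent in the mainline kernel, so this is illustration only:

      # create a local cache, then mount the lab 1 NFS export through it
      cfsadmin -c /var/cache/cachefs
      mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/cachefs \
          lab1-server:/export/home /home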

  • How about making network file access go via WebDAV, and placing a caching HTTP proxy server, restricted to the relevant domain, at each end? This caches a local copy of the data for quick reads, has good properties for wide-area networking, is cross-platform compatible, and can be configured with variable timeouts for different people. Writes may take a while, but for data-consistency reasons going straight back to the home storage facility is probably a good thing. You can also easily limit the proxy
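
    A very rough sketch of the caching-proxy half of this idea, using Squid at lab 2 in front of a hypothetical WebDAV host at lab 1 (hostname, cache size and object limit are all made up):

      # /etc/squid/squid.conf (fragment)
      # ~20 GB of on-disk cache at lab 2
      cache_dir ufs /var/spool/squid 20000 16 256
      # allow the large data files to be cached
      maximum_object_size 512 MB
      # only proxy requests for the lab 1 WebDAV server
      acl lab1dav dstdomain dav.lab1.example.org
      http_access allow lab1dav
      http_access deny all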
