Synchronize Data Between Linux, OS X, and Windows?

aaaaaaargh! writes "I'm using a laptop with Ubuntu 8.04 for work, a netbook with Ubuntu 9.10 when I'm out and about, Mac OS X 10.5 for hobby projects, and Windows XP for gaming. For backups I'm currently using Jungle Disk and Apple's Time Machine, and I keep my work data in a local svn repository. Now I need to frequently exchange and synchronize OpenOffice and LaTeX files, plus source code in various cross-platform programming languages, between one machine and another. The options range from putting everything online (but Jungle Disk seems too slow for anything other than backup), to storing my data on external media like USB sticks or SD cards, to working with copies by synchronizing folders over the network. I don't want to give my data to some outside server without strong encryption (controlled by me, including the source code), and external media like USB sticks are a bit too fragile for my taste. The solution should be reliable, relatively fail-safe, as simple as possible, and let me continue using Jungle Disk for backup. So what would you recommend?"
  • How come ... (Score:1, Interesting)

    by Anonymous Coward on Thursday November 19, 2009 @05:44PM (#30164178)

    ... someone like you, who is using three different operating systems, can't figure out how to sync data?

  • Re:dropbox? (Score:2, Interesting)

    by koick ( 770435 ) on Thursday November 19, 2009 @05:48PM (#30164264)

    I don't want to give my data to some outside server without strong encryption

    Use Dropbox with a TrueCrypt encrypted container as the file that gets synchronized.
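
    A minimal sketch of the moving parts in Python (hedged: the truecrypt text-mode flags here are from memory and vary by version, and the paths are illustrative):

        import subprocess

        VOLUME = "/home/me/Dropbox/vault.tc"   # container file that Dropbox syncs
        MOUNT = "/media/vault"

        # Mount in text mode to work on the files (prompts for the passphrase).
        subprocess.run(["truecrypt", "-t", VOLUME, MOUNT], check=True)

        # ... edit files under /media/vault ...

        # Dismount when done, so Dropbox syncs the closed container file.
        subprocess.run(["truecrypt", "-t", "-d", VOLUME], check=True)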

  • by DesertBlade ( 741219 ) on Thursday November 19, 2009 @05:56PM (#30164396)
    I had my account suspended for this; Bluehost is only for hosting files for websites.
  • by Randle_Revar ( 229304 ) <kelly.clowers@gmail.com> on Thursday November 19, 2009 @06:26PM (#30164900) Homepage Journal

    I second this, although I use Git. Works great.
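
    For syncing (as opposed to just versioning), the usual shape is a bare repository on one machine that all the others push to and pull from. A rough sketch in Python (the host name is hypothetical):

        import subprocess

        def git(*args):
            subprocess.run(["git", *args], check=True)

        # One-time setup: `git init --bare sync.git` on a reachable host,
        # then `git clone myserver:sync.git` on each machine.

        # After that, each "sync" from a working copy is commit + pull + push.
        git("add", "-A")
        git("commit", "-m", "sync", "--allow-empty")   # doesn't fail when nothing changed
        git("pull", "--rebase")                        # conflicts still need a human
        git("push")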

  • by Anonymous Coward on Thursday November 19, 2009 @06:44PM (#30165140)

    The problem with Mercurial, Git, etc. is the amount of space they take up, because you're keeping the entire revision history on every machine. When I do use one of these, though, I use Git, because it's much faster than Mercurial.

    I don't think revision control systems are suitable for synchronization like we're talking about, though. If you don't commit the changes, they don't show up on the other machines. That can be a huge pain in the ass. What we really want is a true automatic distributed-filesystem-type thing. These exist, but from what I've seen none are any good (e.g., they are generally very unstable). So I think you're stuck with some sort of manual solution like rsync, the online data storage services people keep suggesting (personally I would never trust my data to some 3rd party), revision control, etc...
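
    If you go the rsync route, the manual sync is essentially a one-liner per machine pair; a sketch in Python (hostnames and paths are made up):

        import subprocess

        # Push local changes to the other machine: -a preserves permissions
        # and times, -z compresses in transit, and the trailing slash on the
        # source means "the contents of this directory". Add --delete only
        # if you want removals propagated too (it is destructive).
        subprocess.run(
            ["rsync", "-avz", "/home/me/work/", "netbook:/home/me/work/"],
            check=True,
        )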

  • by Temujin_12 ( 832986 ) on Thursday November 19, 2009 @06:49PM (#30165216)

    On a somewhat related note, my wife recently gave up on backing up to a different medium altogether and now just builds backup into her workflow. She is a photographer and takes 50,000+ digital photos a year (in raw TIFF format). Each file is ~70MB, so she requires several terabytes of storage (50,000 x 70MB is roughly 3.5TB a year). Losing these images is unacceptable (i.e., it would result in a financial loss), and burning to DVD for backup is simply impossible (it would be a full-time job in and of itself).

    So she builds backup into her workflow such that at any one point in time she has at least one copy of each image spread across multiple drives. In the workflow, photos go from raw TIFF -> pre-process PSD -> final PSD -> JPEG for online viewing, and she uses a different drive for each stage in the process. That way, if a drive goes out, she only loses time, not data. For her data retention requirements (~2-3 years), she has ~10TB of storage and rotates files from older jobs out onto other drives (spread across 3 drives as per the workflow).

    Of course, if something catastrophic were to happen that caused all of the drives to die (since they are all in the same location), files would be lost, but that's what insurance is for.

  • Re:Unison (Score:1, Interesting)

    by Anonymous Coward on Thursday November 19, 2009 @07:13PM (#30165568)

    I use Unison to synchronize my iTunes music library (~20GB) between a server and three other machines, and it works perfectly. The initial sync is slow (as you'd expect), but subsequent syncs are much faster. Furthermore, unlike using Subversion or most other revision control systems, there's no local duplication overhead. This does mean a speed impact for determining changes, but it's much nicer to store a 20GB music library in 20GB of space instead of 40GB.
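
    For anyone curious, a basic two-way Unison run looks roughly like this (sketched in Python; the server name and paths are examples):

        import subprocess

        # Two-way sync between the local library and the server copy;
        # -batch auto-accepts non-conflicting changes instead of prompting.
        subprocess.run(
            ["unison", "-batch", "/home/me/Music", "ssh://server//home/me/Music"],
            check=True,
        )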

  • by jvonk ( 315830 ) on Thursday November 19, 2009 @07:38PM (#30165902)

    No, since Dropbox only transmits the parts of a file that changed.

    The way most encryption works, if you change a piece of the plaintext file, you get a wildly different ciphertext file.[...]I don't know if this is the case with Truecrypt's volumes, but I bet that it is.

    I have done what was described: rsyncing a TrueCrypt volume file that changes over time. Yes, there is an "expansion factor", but I would peg it at 2-5x the size of the diff within the encrypted volume. To restate: this scheme is entirely tenable with TrueCrypt volumes and rsync. A good binary diff capability is required, but that is the raison d'être of rsync [wikipedia.org]. I have never used Dropbox, so I have no idea how their deltas work for binary files.

    I was originally of the same opinion as you. I anticipated some sort of avalanche effect from small changes in the ciphertext. However, this isn't the case for several reasons -- first, you can read about TrueCrypt's crypto modes of operation [truecrypt.org] and how the blocks work. Second, you can reason that TrueCrypt would be completely unusable if writing a single byte near the beginning of an encrypted hard drive/volume caused a chain reaction that would require the entire partition to be re-encrypted due to cipher block chaining [wikipedia.org] cascading to the whole drive.

    I don't claim to be a cryptography expert (perhaps someone can further enlighten me), but it seems that CBC would break down in a disk encryption scheme if there weren't a new IV rather frequently (for the same reason as I mentioned above). Of course, TrueCrypt uses XTS mode [wikipedia.org], so it seems that I am just idly musing at this point and should click Submit.
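
    A quick way to convince yourself of that block-level locality (a sketch assuming Python's third-party cryptography package; it demonstrates XTS in general, not TrueCrypt's exact on-disk format): encrypt a 512-byte "sector" twice, flipping one plaintext byte in between, and diff the ciphertexts.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key = os.urandom(64)                 # AES-256-XTS takes two 256-bit keys
        tweak = (0).to_bytes(16, "little")   # the per-sector tweak value

        def encrypt_sector(data):
            enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
            return enc.update(data) + enc.finalize()

        sector = bytearray(os.urandom(512))  # one 512-byte disk sector
        before = encrypt_sector(bytes(sector))
        sector[100] ^= 0xFF                  # flip a single plaintext byte
        after = encrypt_sector(bytes(sector))

        # Only the 16-byte cipher block containing offset 100 differs.
        changed = [i for i in range(0, 512, 16) if before[i:i+16] != after[i:i+16]]
        print(changed)                       # -> [96]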

  • by rwa2 ( 4391 ) * on Thursday November 19, 2009 @07:47PM (#30166026) Homepage Journal

    All the little locking tabs on my RJ-45 Ethernet plugs have broken off, so my network cables typically separate with minimal effort.

    MagSafe had to be an Apple invention, though, since the power plugs on their older laptops were way more fragile than on any other laptop I've used. I've already had to buy two replacement power supplies for my wife's old iBook, and even now I still can't get power to it through the bent-up connector. It's a shame, because it'd make a decent "Ken Burns effect" photo frame if I could power it up again, but I don't want to throw any more money at it :P

  • Re:Rsync? (Score:4, Interesting)

    by Anonymous Coward on Thursday November 19, 2009 @09:06PM (#30166908)

    Except it didn't exactly answer his question. He appears to favor local storage. He already has a backup solution, and if he wanted a network share, I'm pretty sure anyone running the three major OSes can figure out how to share a drive via NFS/AFP/SMB.

    For a home user, ZFS's big Achilles' heel is the inability to change raidz geometry. Most home users add another drive to increase capacity, *not* another raidz set of drives to the pool. Nor do they typically want to replace all their raidz drives with larger ones. ZFS fails on that particular, common home storage scenario. Everyone knows it, and it's been talked about to death. And btrfs is looking better every day.

    The answer he's looking for is probably some form of rsync or unison solution. Which one depends on the complexity of his syncing, and on whether he needs/wants some form of version control. He already knows how to use SVN, so he probably isn't looking for version control.

    I'm curious too; I have to keep local assets for CGI apps in sync on a few different systems, and rsync is the best I've come up with. You can't drop extra files into the asset directory structure, and you can't rely on file state to tell you anything about the need to sync.
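
    When file state (mtime/size) can't be trusted, rsync can decide by content instead; a sketch in Python with made-up paths:

        import subprocess

        # -r recurses, -c picks what to copy by checksum rather than
        # mtime/size, and --delete removes receiver-side files that no
        # longer exist on the sender (the "extra files" problem above).
        subprocess.run(
            ["rsync", "-rc", "--delete", "/srv/assets/", "render02:/srv/assets/"],
            check=True,
        )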

  • by Anonymous Coward on Thursday November 19, 2009 @09:17PM (#30166990)

    Have you really tried it? Last time I tried, Mercurial died with files over 10 megabytes in size.

  • by Anonymous Coward on Thursday November 19, 2009 @09:25PM (#30167058)

    Well, I tried, and it still can't manage big files:

    file.bin: files over 10MB may cause memory and performance problems

  • NAS (Score:3, Interesting)

    by DarthVain ( 724186 ) on Friday November 20, 2009 @01:58PM (#30173870)

    I looked into this problem myself, and it isn't pretty. I use a Windows box for everyday stuff and a Linux-based server, and my sister uses a Mac.

    Basically, I wanted to share some anime videos with my sister, who also enjoys the TV shows. Originally my idea was to buy an external USB HD, copy the files over, and then give her the drive; that way she also gets a new external HD out of the deal. However, when I looked into it, it was not as simple as I thought. While most external drives are compatible with Mac or PC, the emphasis is on "or": they come formatted for one, and once formatted they are not compatible with the other.

    The only other solution I could find was to use a NAS (Network Attached Storage). That would work, because the network protocol (SMB, I think it is called) handles translating the data between OSes. At the time they were very expensive, and they still are to a degree, though they have come down in price recently; I have seen units in the $200-300 range. That is still more than I wanted to spend, as an external HD only runs you about $100-200.

    I settled for the ghetto version and just spent the time burning about ten 8GB DVD-Rs, as that was easiest. As an ongoing solution, though, that would not work. I would say your only choice is some sort of NAS with SMB. Given the time and know-how, you could also build your own Linux box to do the same thing; whether that wins on price likely depends on whether you have a spare machine kicking around someplace. Either one could also be set up for remote FTP. I would recommend just buying a NAS; it's likely simpler that way.
