Best Format For OS X and Linux HDD?
dogmatixpsych writes "I work in a neuroimaging laboratory. We mainly use OS X but we have computers running Linux and we have colleagues using Linux. Some of the work we do with Magnetic Resonance Images produces files that are upwards of 80GB. Due to HIPAA constraints, IT differences between departments, and the size of files we create, storage on local and portable media is the best option for transporting images between laboratories. What disk file system do Slashdot readers recommend for our external HDDs so that we can readily read and write to them using OS X and Linux? My default is to use HFS+ without journaling but I'm looking to see if there are better suggestions that are reliable, fast, and allow read/write access in OS X and Linux."
UFS. (Score:2, Informative)
UFS would be the best option. Linux supports it read/write since kernel 2.6.30 (AFAIK), and OS X mounts UFS natively.
Followup question... (Score:3, Informative)
I have a similar problem, albeit on a smaller scale. I use unjournalled HFS+.
However, the problem is that HFS+, being a proper unix filesystem, remembers UIDs and GIDs, which are usually inappropriate once the disk is moved to another machine.
Is there any good way to get Linux to mount the filesystem and give every file the same UID and GID, as it does for non-unix filesystems?
Re:Ugly but works (Score:2, Informative)
and the 4GB file size cap...
FAT32 is a fucking horrible idea in his case. (Score:3, Informative)
How the fuck is he supposed to store 80 GB files on a filesystem whose maximum file size is 4 GB?
Re:UFS. (Score:5, Informative)
Unless you're using Tiger or earlier, UFS is not an option. The last two OS X releases do not support UFS at all. However, HFS+ support in Linux is pretty good. Otherwise you're looking at MacFUSE for ext2/3, which in my experience is pretty slow and buggy. I think Jobs has gone out of his way to make OS X incompatible with OSes other than Windows. Maybe he's afraid of what will happen if everyone becomes aware they have other choices.
Re:HIPAA Constraints? (Score:1, Informative)
Why would "sneakernet" be disallowed for digital medical files? This procedure is no different than transferring real physical medical files or records.
No Filesystem (Score:5, Informative)
Re:Followup question... (Score:3, Informative)
Many filesystems support uid= and gid= options in their mount command (including HFS). Just add that to a mount script or set it up in fstab.
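As a sketch (device name, mount point, and IDs here are placeholders, not from the post; the Linux hfsplus driver documents uid= and gid= among its mount options):

```
# One-off mount (run as root; device, mount point, and IDs are examples):
#   mount -t hfsplus -o uid=1000,gid=1000 /dev/sdb1 /mnt/macdisk
#
# Equivalent /etc/fstab entry so every file appears owned by uid/gid 1000:
/dev/sdb1  /mnt/macdisk  hfsplus  uid=1000,gid=1000  0  0
```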
Rubbish (Score:5, Informative)
You're storing it in the wrong format: there are all sorts of tools to convert to Analyze or DICOM format, which gives you a manageable frame-by-frame set of images rather than one huge file. Most tools that manipulate MRI data expect DICOM or Analyze anyhow (BrainVoyager, NISTools, etc.).
If you really want to keep it all together, use tar files to hold the structured data, although if you do that you've made it one big file again.
Removable media are a daft long-term storage choice; use them only ad hoc (or better still, scp) to move the data.
Re:Followup question... (Score:5, Informative)
Non-native filesystems usually let you set UID, GID, and permission masks. Check the "mount" manpage and look for the filesystem you want. You might also try "man filesystems".
Re:HIPAA Constraints? (Score:5, Informative)
HIPAA data is often encrypted when placed on tape or transported across systems, but that's because those activities may expose the data to unauthorized people.
IMHO wise use of sensitive data on laptops requires encryption at the filesystem level. It's neither difficult nor time-consuming, but given how much sensitive data has been exposed via folks losing or misusing laptops, it ought to be a no-brainer. Sadly, too few places bother.
Network? (Score:5, Informative)
Really, you need a gigabit network and to transfer files over it using AFP, NFS, and/or SMB. First of all, HIPAA requires you to encrypt your hard drives, which most researchers won't do (it's too difficult). Then you've also got the problem of what happens if a researcher (or somebody else) leaves with the data.
Solaris, and by extension Nexenta, has really good solutions for this. You can DIY a 40TB RAIDZ2 system for well under $18,000. If you use desktop SATA drives for your data (which I wouldn't recommend, but ZFS keeps it safe), you can press that cost down to $10k or $12k.
I work in the same environment as you (neuroimaging, large datasets), feel free to contact me privately for more info.
Re:ext2 works. ntfs works. (Score:4, Informative)
If it's Mac OS X 10.6.x, you don't even need NTFS-3G, as the native NTFS driver has read/write capability. You just need to change the /etc/fstab entry for the volume to rw, and remount.
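The fstab entry would look something like this (a sketch; the volume label is a placeholder, and note the warning elsewhere in this thread that Apple ships 10.6 with NTFS write disabled for a reason):

```
# /etc/fstab on Mac OS X 10.6 -- mount the NTFS volume labelled "MYDISK"
# read/write with the native driver (label is an example; "nobrowse"
# keeps the volume out of the Finder sidebar)
LABEL=MYDISK none ntfs rw,auto,nobrowse
```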
Re:No Filesystem (Score:3, Informative)
With that said, tar is a bad solution because it doesn't include any type of CRC or encryption. But it's a good idea, and certainly a million times better than a file system of some type.
True, but simply hashing the file at both ends solves that. Both linux and mac support shasum.
UDF (Score:2, Informative)
I'm using a USB Disk formatted under linux with UDF (yep, it's not limited to DVDs, there is a profile for hard disks). It can be used without problems under OSX (even Snow Leopard)
Re:No Filesystem (Score:3, Informative)
As to encryption, you just encrypt the file before you tar it. In fact, with gpg you get both encryption and integrity checking.
Gnupg is available in Mac Ports and comes with just about every linux distro.
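A minimal sketch of that pipeline (the filename and inline passphrase are illustrative only; interactively you would just run `gpg -c file` and be prompted, and gpg 2.1+ needs --pinentry-mode loopback to accept a passphrase non-interactively):

```shell
cd "$(mktemp -d)"
head -c 100000 /dev/urandom > scan.tar              # stand-in for the tarred dataset

# Symmetric encryption: produces scan.tar.gpg.  The encrypted file also
# carries an integrity check, so tampering is detected at decrypt time.
gpg --batch --yes --pinentry-mode loopback --passphrase "example-passphrase" -c scan.tar

# Receiving side: decrypt and confirm it round-trips bit-for-bit.
gpg --batch --yes --pinentry-mode loopback --passphrase "example-passphrase" \
    -o scan.decrypted -d scan.tar.gpg
cmp scan.tar scan.decrypted && echo "integrity verified"
```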
Re:UFS. (Score:4, Informative)
Re:UFS. (Score:4, Informative)
It's the default filesystem in *BSD, so it's very well maintained, etc. It has journalling (or is it called "soft updates"?), auto-defrag, etc., etc. You fsck it if you power off without unmounting, but otherwise you won't need to.
It's definitely a perfectly capable, full-featured, modern filesystem.
All the things you write are perfectly true... on *BSD variants where UFS is the native, default FS. That is not the case on either Linux or OS X, to the extent that in OS X v10.6 UFS is now a read-only FS because it's barely maintained.
Most people who think OS X is truly 'native' on UFS because it has BSD heritage haven't tried to actually use it. When Apple bought NeXT in 1997 the UFS implementation was already behind the times because at that time NeXT hadn't been updating its operating system for a few years. Since Apple wanted OS X to be a MacOS upgrade, development resources went into making a robust and high performance HFS+ implementation. Very little was done to modernize UFS. From the outside, it seems to have been just enough effort to make sure it worked and was still bootable over the first few versions, for those who wanted native UNIX FS semantics (mostly case sensitive file names). Then they added case sensitive filename support to HFS+ (it's a format-time option), and since then there has been even less reason for Apple to maintain UFS, hence its transition to a read-only legacy format.
The other piece of this picture is that UFS != UFS. The UFS in MacOS X is a mildly upgraded version of mid-1990s NeXT UFS (which, in fine BSD tradition, wasn't quite the same as the UFS found in other BSDs). It's almost certain it has few of the features you associate with modern versions of UFS.
Re:UDF IS ACTUALLY A SOLUTION (Score:5, Informative)
Ok everybody's occupied with surreal suggestions, but anyway:
*UDF* is quite awesome as an on-disk format for Linux/OS X data exchange, because it has a file size limit around 128TB and supports all the POSIX permissions, hard and soft links, and whatnot. There is a nice whitepaper summing it all up:
http://www.13thmonkey.org/documentation/UDF/UDF_whitepaper.pdf
If you want to use UDF on a hard disk, prepare it under Linux (that's right, UDF takes the whole disk, no partitions; the steps below assume the disk is /dev/sdb):
1) Install udftools
2) Wipe the first few blocks of the hard disk, i.e. dd if=/dev/zero of=/dev/sdb bs=1k count=100
3) Create the file system: mkudffs --media-type=hd --utf8 /dev/sdb
If you plug this into OS X, the drive will show up as "LinuxUDF". I have been using this setup for years to move data between Linux and OS X machines.
Re:ext2 works. ntfs works. (Score:5, Informative)
This is dangerous advice. There are numerous reports of instability and NTFS volume corruption when forcing 10.6 to mount NTFS volumes R/W. Apple seems to have turned NTFS write off by default for a good reason, it's not done yet.
Re:NTFS (Score:4, Informative)
Windows doesn't play in here, it's OSX and Linux. Tossing NTFS into that would just be... wrong somehow.
Flamebait mod or not, there is a valid point. Though various NTFS drivers do allow read/write, the success isn't graven in stone. There are better alternatives in the Linux/OSX world. Keep in mind that losing this data becomes either costly (as in time=money, let's go make another set of copies to run to whatever office) or very bad (as in someone moved the files to the external instead of copying them) or both.
So, as good as the NTFS R/W drivers are getting, it's safer to use a file system that is known to be more stable and less error prone, such as HFS+ or UFS or one of the other suggestions. "Really good" shouldn't be an option in the medical world when "even better than 'really good'" is available, compatible, and easy to install on all systems involved.