
Best Format For OS X and Linux HDD?

dogmatixpsych writes "I work in a neuroimaging laboratory. We mainly use OS X but we have computers running Linux and we have colleagues using Linux. Some of the work we do with Magnetic Resonance Images produces files that are upwards of 80GB. Due to HIPAA constraints, IT differences between departments, and the size of files we create, storage on local and portable media is the best option for transporting images between laboratories. What disk file system do Slashdot readers recommend for our external HDDs so that we can readily read and write to them using OS X and Linux? My default is to use HFS+ without journaling but I'm looking to see if there are better suggestions that are reliable, fast, and allow read/write access in OS X and Linux."
This discussion has been archived. No new comments can be posted.

  • UFS. (Score:2, Informative)

    by necroplasm ( 1804790 ) on Thursday July 01, 2010 @04:55PM (#32763632)

    UFS would be the best option. Linux has supported it read/write since kernel 2.6.30 (AFAIK) and OS X mounts UFS natively.

  • Followup question... (Score:3, Informative)

    by serviscope_minor ( 664417 ) on Thursday July 01, 2010 @04:56PM (#32763660) Journal

    I have a similar problem, albeit on a smaller scale. I use unjournalled HFS+.

    However, the problem is that HFS+, being a proper Unix filesystem, remembers UIDs and GIDs, which are usually inappropriate when the disk is moved.

    Is there any good way to get Linux to mount the filesystem and give every file the same UID and GID, as it does for non-Unix filesystems?

  • Don't know if broke (Score:3, Informative)

    by tepples ( 727027 ) <tepples.gmail@com> on Thursday July 01, 2010 @04:59PM (#32763720) Homepage Journal
    More like "I don't know if it's broke. I could be doing something so wrong it could end up on The Daily WTF. If what I'm doing is broke, could you help me fix it?"
  • Re:Ugly but works (Score:2, Informative)

    by GooDieZ ( 802156 ) on Thursday July 01, 2010 @05:00PM (#32763764) Homepage

    and the 4GB file size cap...

  • by Anonymous Coward on Thursday July 01, 2010 @05:06PM (#32763866)

    How the fuck is he supposed to store 80 GB files on a filesystem that maxes out at 4 GB?

  • Re:UFS. (Score:5, Informative)

    by clang_jangle ( 975789 ) on Thursday July 01, 2010 @05:09PM (#32763920) Journal

    UFS would be the best option.

    Unless you're using Tiger or earlier, UFS is not an option. The last two versions of OS X do not support UFS at all. However, HFS+ support in Linux is pretty good. Otherwise you're looking at MacFUSE for ext2/3, which IME is pretty slow and buggy. I think Jobs has gone out of his way to make OS X incompatible with OSes other than Windows. Maybe he's afraid of what will happen if everyone becomes aware they have other choices.

  • by Anonymous Coward on Thursday July 01, 2010 @05:09PM (#32763924)

    Why would "sneakernet" be disallowed for digital medical files? This procedure is no different than transferring real physical medical files or records.

  • No Filesystem (Score:5, Informative)

    by Rantastic ( 583764 ) on Thursday July 01, 2010 @05:14PM (#32763988) Journal
    If you are only moving files from one system to another, and do not need to edit them on the portable drives, skip the filesystem and just use tar. Tar will happily write to and read from raw block devices... In fact, that is exactly what it was designed to do. A side benefit of this approach is that you won't lose any drive capacity to filesystem overhead.
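    As a sketch, the tar-to-raw-device workflow looks like this (/dev/sdb and the directory names are hypothetical examples; double-check the device with lsblk first, since tar will clobber whatever is on it):

    ```shell
    # Pack the scan directory straight onto the raw block device -- no filesystem.
    # WARNING: /dev/sdb is a hypothetical example; this overwrites existing data.
    tar -cvf /dev/sdb scans/

    # On the receiving machine, read it back off the same device:
    mkdir -p /destination
    tar -xvf /dev/sdb -C /destination
    ```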
  • by SEAL ( 88488 ) on Thursday July 01, 2010 @05:14PM (#32764004)

    Many filesystems support uid= and gid= options in their mount command (including HFS). Just add that to a mount script or set it up in fstab.
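    For HFS+ on Linux that might look like the following (the device, mount point, and uid/gid values are hypothetical; check the mount manpage for the exact options your filesystem supports):

    ```shell
    # One-off mount, mapping every file on the disk to local uid 1000 / gid 1000:
    sudo mount -t hfsplus -o uid=1000,gid=1000 /dev/sdb2 /mnt/scans

    # Or the equivalent /etc/fstab entry:
    # /dev/sdb2  /mnt/scans  hfsplus  uid=1000,gid=1000,noauto  0  0
    ```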

  • Rubbish (Score:5, Informative)

    by Improv ( 2467 ) <pgunn01@gmail.com> on Thursday July 01, 2010 @05:17PM (#32764042) Homepage Journal

    You're storing it in the wrong format - there are all sorts of tools to convert to Analyse or DICOM format, which give you a manageable frame-by-frame set of images rather than one huge one. Most tools to manipulate MRI data expect DICOM or Analyse anyhow (BrainVoyager, NISTools, etc).

    If you really want to keep it all safe, use tarfiles to hold structured data, although if you do that you've made it big again.

    Removable media are a daft long-term storage - use ad-hoc removable media solutions (or more ideally, scp) to move the data.

  • by X0563511 ( 793323 ) on Thursday July 01, 2010 @05:18PM (#32764060) Homepage Journal

    Non-native filesystems usually let you set UID, GID, and permission masks. Check the "mount" manpage and look for the filesystem you want. You might also try "man filesystem"

  • by eschasi ( 252157 ) on Thursday July 01, 2010 @05:37PM (#32764384)
    HIPAA mandates who can and should have access to the files. The method of storage (disk, tape, SSD, paper, whatever) is largely irrelevant. As long as all those who have access to the files are HIPAA-trained and following the appropriate procedures, everything is fine. Similarly, transport is relevant only in that there must be no data disclosure to unauthorized persons. As such, if a person with appropriate clearance does the transport, all is cool.

    HIPAA data is often encrypted when placed on tape or transported across systems, but that's because such activities may involve the data being visible to unauthorized people. As examples of each:

    • If two physically separate sites exchange HIPAA data across the open Internet, the data must be encrypted during transport. This might be done by VPN, sftp, whatever. As long as the bits on the wire can't be read by the ISPs managing the connection, it's OK.
    • For tapes that you archive off-site, you don't want your external storage facility to be able to read the tapes, nor have the data readable if the tape is misplaced in transport.

    IMHO wise use of sensitive data on laptops requires encryption at the filesystem level. It's neither difficult nor time-consuming, but given how much sensitive data has been exposed via folks losing or misusing laptops, it ought to be a no-brainer. Sadly, too few places bother.
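    On the Linux side, one common way to do this is LUKS via cryptsetup; a minimal sketch, with /dev/sdb1 and the names as hypothetical placeholders (note that OS X has no native LUKS support, so this only covers the Linux half of the workflow):

    ```shell
    # Create an encrypted container on the partition (prompts for a passphrase),
    # then open it and put an ordinary filesystem inside it.
    # /dev/sdb1 and "secure_scans" are hypothetical names -- adjust to taste.
    sudo cryptsetup luksFormat /dev/sdb1
    sudo cryptsetup open /dev/sdb1 secure_scans
    sudo mkfs.ext4 /dev/mapper/secure_scans
    sudo mount /dev/mapper/secure_scans /mnt/secure
    ```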

  • Network? (Score:5, Informative)

    by guruevi ( 827432 ) on Thursday July 01, 2010 @05:42PM (#32764486)

    Really, you need a gigabit network and transfer files over it using AFP and/or NFS and/or SMB. First of all, HIPAA requires you to encrypt your hard drives, which most researchers won't do (it's too difficult). Then there's also the problem of what happens if the researchers (or somebody else) leave with the data.

    Solaris, and by extension Nexenta, have really good solutions for this. You can DIY a 40TB RAIDZ2 system for well under $18,000. If you use desktop SATA drives (which I wouldn't recommend, but ZFS keeps it safe) for your data you can press that cost down to $10k or $12k.

    I work in the same environment as you (neuroimaging, large datasets), feel free to contact me privately for more info.

  • by MachineShedFred ( 621896 ) on Thursday July 01, 2010 @06:04PM (#32764890) Journal

    If it's Mac OS X 10.6.x, you don't even need NTFS-3G, as the native NTFS driver has read/write capability. You just need to change the /etc/fstab entry for the volume to rw, and remount.
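    For reference, the entry described would look something like this ("Backup" is a hypothetical volume label; Apple ships NTFS write support disabled by default, so treat this as experimental):

    ```shell
    # /etc/fstab on Mac OS X 10.6 -- "Backup" is a hypothetical NTFS volume label.
    # Apple leaves NTFS write disabled out of the box; enable at your own risk.
    LABEL=Backup none ntfs rw
    ```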

  • Re:No Filesystem (Score:3, Informative)

    by Rantastic ( 583764 ) on Thursday July 01, 2010 @06:12PM (#32765016) Journal

    With that said, tar is a bad solution because it doesn't include any type of CRC or encryption. But it's a good idea, and certainly a million times better than a file system of some type.

    True, but simply hashing the file at both ends solves that. Both Linux and Mac support shasum.
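    A minimal round-trip check, with hypothetical file names:

    ```shell
    # On the sending side: record a SHA-256 checksum next to the archive.
    # (shasum ships with OS X; sha256sum works the same way on Linux.)
    shasum -a 256 scans.tar > scans.tar.sha256

    # On the receiving side: verify the copy matches bit-for-bit.
    shasum -a 256 -c scans.tar.sha256
    ```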

  • UDF (Score:2, Informative)

    by marquise2000 ( 235932 ) on Thursday July 01, 2010 @06:22PM (#32765208) Homepage

    I'm using a USB disk formatted under Linux with UDF (yep, it's not limited to DVDs; there is a profile for hard disks). It can be used without problems under OSX (even Snow Leopard).
     

  • Re:No Filesystem (Score:3, Informative)

    by Rantastic ( 583764 ) on Thursday July 01, 2010 @06:26PM (#32765264) Journal

    As to encryption, you just encrypt the file before you tar it. In fact, with gpg you get both encryption and integrity checking.

    Gnupg is available in Mac Ports and comes with just about every linux distro.
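    A sketch of that workflow, with hypothetical file names:

    ```shell
    # Symmetric encryption with a shared passphrase; GnuPG also embeds an
    # integrity check (MDC), so corruption or tampering is caught at decrypt time.
    gpg --symmetric --cipher-algo AES256 --output scans.tar.gpg scans.tar

    # On the receiving side:
    gpg --decrypt --output scans.tar scans.tar.gpg
    ```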

  • Re:UFS. (Score:4, Informative)

    by dogmatixpsych ( 786818 ) on Thursday July 01, 2010 @06:53PM (#32765724) Journal
    Yes, the raw files from the scanner are quite small. A whole series of scans (7 or 8 high quality sequences) is only about 450MB. We get 80GB files when we do post-processing (fiber tracking) of a diffusion scan.
  • Re:UFS. (Score:4, Informative)

    by Anonymous Coward on Thursday July 01, 2010 @07:01PM (#32765820)

    It's the default filesystem in *BSD, so it's very well maintained etc. It has journalling (or does it call it "soft updates"?) auto-defrag, etc, etc. You fsck it if you power off without umount but otherwise you won't need to.

    It's definitely a perfectly capable, full-featured, modern filesystem.

    All the things you write are perfectly true... on *BSD variants where UFS is the native, default FS. That is not the case on either Linux or OS X, to the extent that in OS X v10.6 UFS is now a read-only FS because it's barely maintained.

    Most people who think OS X is truly 'native' on UFS because it has BSD heritage haven't tried to actually use it. When Apple bought NeXT in 1997 the UFS implementation was already behind the times because at that time NeXT hadn't been updating its operating system for a few years. Since Apple wanted OS X to be a MacOS upgrade, development resources went into making a robust and high performance HFS+ implementation. Very little was done to modernize UFS. From the outside, it seems to have been just enough effort to make sure it worked and was still bootable over the first few versions, for those who wanted native UNIX FS semantics (mostly case sensitive file names). Then they added case sensitive filename support to HFS+ (it's a format-time option), and since then there has been even less reason for Apple to maintain UFS, hence its transition to a read-only legacy format.

    The other piece of this picture is that UFS != UFS. The UFS in MacOS X is a mildly upgraded version of mid-1990s NeXT UFS (which, in fine BSD tradition, wasn't quite the same as the UFS found in other BSDs). It's almost certain it has few of the features you associate with modern versions of UFS.

  • by marquise2000 ( 235932 ) on Thursday July 01, 2010 @07:07PM (#32765902) Homepage

    OK, everybody's occupied with surreal suggestions, but anyway:
    *UDF* is quite awesome as an on-disk format for Linux/OSX data exchange, because it has a file size limit around 128TB, supports all the POSIX permissions, hard and soft links, and whatnot. There is a nice whitepaper summing it all up:
    http://www.13thmonkey.org/documentation/UDF/UDF_whitepaper.pdf

    If you want to use UDF on a hard disk, prepare it under Linux:
    1) Install udftools
    2) Wipe the first few blocks of the hard disk, i.e. dd if=/dev/zero of=/dev/sdb bs=1k count=100
    3) Create the file system: mkudffs --media-type=hd --utf8 /dev/sdb (that's right, UDF takes the whole disk, no partitions)

    If you plug this into OSX, the drive will show up as "LinuxUDF". I have been using this setup for years to move data between Linux and OSX machines.

  • by Anonymous Coward on Thursday July 01, 2010 @07:09PM (#32765934)

    This is dangerous advice. There are numerous reports of instability and NTFS volume corruption when forcing 10.6 to mount NTFS volumes R/W. Apple seems to have turned NTFS write off by default for a good reason, it's not done yet.

  • Re:NTFS (Score:4, Informative)

    by RobertM1968 ( 951074 ) on Thursday July 01, 2010 @07:23PM (#32766140) Homepage Journal

    Windows doesn't play in here, it's OSX and Linux. Tossing NTFS into that would just be... wrong somehow.

    Flamebait mod or not, there is a valid point. Though various NTFS drivers do allow read/write, the success isn't graven in stone. There are better alternatives in the Linux/OSX world. Keep in mind that losing this data becomes either costly (as in time=money, let's go make another set of copies to run to whatever office) or very bad (as in someone moved the files to the external instead of copying them) or both.

    So, as good as the NTFS R/W drivers are getting, it's safer to use a file system that is known to be more stable and less error prone, such as HFS+ or UFS or one of the other suggestions. "Really good" shouldn't be an option in the medical world when "even better than 'really good'" is available, compatible, and easy to install on all systems involved.

  • Re:NTFS (Score:1, Informative)

    by zaphod777 ( 1755922 ) on Thursday July 01, 2010 @10:42PM (#32768174)
    Easy enough to solve: just propagate "everyone" read/write access to the drive.
  • Re:UFS. (Score:2, Informative)

    by dotgain ( 630123 ) on Friday July 02, 2010 @02:56AM (#32769590) Homepage Journal
    You can get R/W support but only if you disable the journal on the HFS+ fs on MacOS. That's unless they've taken that away too in 10.6, but I routinely do this with HFS+ between 10.5 and Linux.
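    Toggling the journal on the Mac side is a one-liner ("ScanDisk" is a hypothetical volume name):

    ```shell
    # Turn journaling off for the HFS+ volume so Linux can mount it read/write:
    sudo diskutil disableJournal /Volumes/ScanDisk

    # Re-enable it once the disk is back on OS X full time:
    sudo diskutil enableJournal /Volumes/ScanDisk
    ```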
