Data Storage OS X Linux

Best Filesystem For External Back-Up Drives? 484

Posted by timothy
from the just-mirror-to-the-internet dept.
rufey writes "I've recently embarked on a project to rip my DVD and CD collection to a pair of external USB drives. One drive will be used on a daily basis to access the rips of music and DVDs, as well as store backups of all of my other data. The second drive will be a copy of the first drive, to be synced up on a monthly basis and kept at a different location. The USB drives that I purchased for this are 1 TB in size and came pre-formatted with FAT32. While I can access this filesystem from all of my Windows and Linux machines, there are some limitations." Read on for the rest, and offer your advice on the best filesystem for this application.
"Namely, the file size on a FAT32 filesystem is limited to 4GB (4GB less 1 byte to be technical). I have some files that are well over that size that I want to store, mostly raw DVD video. I'll primarily be using these drives on a Linux-based system, and initially, with a Western Digital Live TV media player. I can access a EXT3 filesystem from both of these, and I'm thinking about reformatting to EXT3. But on Windows, it requires a 3rd party driver to access the EXT3 filesystem. NTFS is an option, but the Linux kernel NTFS drivers (according to the kernel build documentation) only have limited NTFS write support, only being safe to overwrite existing files without changing the file size). The Linux-NTFS project may be able to mitigate my NTFS concerns for Linux, but I haven't had enough experience with it to feel comfortable. At some point I'd like whatever filesystem I use to be accessible to Apple's OS X. With those constraints in mind, which filesystem would be the best to use? I realize that there will always be some compatibility problems with whatever I end up with. But I'd like to minimize these issues by using a filesystem that has the best multi-OS support for both reading and writing, while at the same time supporting large files."
Comments Filter:
  • The solution.. (Score:5, Interesting)

    by Anrego (830717) * on Wednesday December 23, 2009 @05:59PM (#30539806)

    is to stop being so diverse! Pick a platform and stick with it!

    Ok, in all seriousness.. here's what you do:

    - buy yourself a cheap (~$200) box
    - hook all your drives to it
    - use whatever file system you want (JFS, XFS would be my recommendation)
    - share it over your zoo of a network using nfs, samba, etc..
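The sharing step of the list above can be sketched in a few lines. This is a hypothetical Samba stanza — the share name, path, and the idea of staging it in /tmp are all placeholders; a real setup would append it to /etc/samba/smb.conf and restart smbd:

```shell
# Hypothetical smb.conf fragment for sharing the media drive.
# /srv/media and the [media] share name are placeholders; a real system
# appends this to /etc/samba/smb.conf rather than a scratch file.
conf=/tmp/smb-media-share.conf
cat > "$conf" <<'EOF'
[media]
   path = /srv/media
   read only = no
   guest ok = yes
EOF
grep '^\[media\]' "$conf" && echo "share stanza written"
```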

    As a bonus, your file server box could double as a media center, and replace your WD TV Live dealie.. (probably not though.. right)

    Regardless, I think you're way better off with this approach versus trying to find the magical, widely supported, cross-platform filesystem with large-file capacity.

    You also might want to consider RAID vs. your monthly sync. Yes, RAID isn't a backup.. but for something like this, where
    restoration would be possible but just a pain in the ass.. mirrored RAID can be a lot more convenient. You can always have
    a third external to back up your irreplaceable data on a semi-annual basis..

    • Re:The solution.. (Score:5, Informative)

      by Dynedain (141758) <slashdot2@@@anthonymclin...com> on Wednesday December 23, 2009 @06:05PM (#30539852) Homepage

      That's exactly what I did. Threw a couple of external drives on a Mac mini, formatted them HFS+, and set up a software RAID. Then, using AFP and SMB, provided the contents as shares to the Windows media center and the various client machines on the network.

      Sure, software RAID over USB is slow, but the bottleneck is the network, so it doesn't really matter.

    • Re:The solution.. (Score:4, Informative)

      by Keruo (771880) on Wednesday December 23, 2009 @06:11PM (#30539922)
      One should never weigh RAID against synced copies; use both simultaneously. They protect against different data-loss threats, which aren't mutually exclusive.
    • Re:The solution.. (Score:4, Interesting)

      by uradu (10768) on Wednesday December 23, 2009 @06:22PM (#30540022)

      Or get a cheap NAS like the D-Link DNS-321. While certainly far from the bee's knees in terms of performance or number of bays (2), it can be had for under $100 and has been hacked to death to run all sorts of other stuff on it. Plus it's nice and quiet and doesn't use much power. And it's kinda purdy.

      • Re: (Score:3, Informative)

        by jeffehobbs (419930)

        I second this (DNS-323 myself). Runs like a champ, very low-power, files
        available to every machine (and a WD TV Live) from any room in the house.

    • by Tynin (634655)
      Pretty much my exact same suggestion. You can go fancy or go cheap, but you need a spare server you can stuff your big disks into. Personally I have a pretty extreme setup for my home file/media server: the boot drive is two drives in software RAID 1 with ext3 (wanted to go ZFS, but couldn't get it working on the boot partition; gave up due to time constraints). Then eight 1TB drives in a software RAID 6 running ZFS. With gigabit switches being so cheap, it isn't a problem accessing large files over the network. I don't both
    • Re:The solution.. (Score:4, Insightful)

      by David Gerard (12369) <slashdotNO@SPAMdavidgerard.co.uk> on Wednesday December 23, 2009 @06:26PM (#30540056) Homepage

      +1 to this answer.

      This is what SAMBA is for. My home network has a mix of Ubuntu, Mac OS and Windows. It just serves to all of them without problems.

      I'm using a small silent PC as the server. Plug the USB drive into it, plug the USB turntable into it (and the cassette deck into the turntable) for ripping LPs and tapes, it's lovely.

      SMB over wifi serves fast enough to play MPEG4 video on the laptop and keep the toddler amused.

    • by Godji (957148)
      If you're taking the separate machine route, you might as well put Solaris on it and run ZFS. Having all your files checksummed, guaranteed correct, and self-repairing is a whole new world. Just avoid the latest Solaris release [guinpen.org]; the previous one works perfectly however.
    • by V!NCENT (1105021)

      Or... you could just format it ext4 (or some other ridiculous, Oracle-style, check-600-times-that-the-file's-been-copied-correctly filesystem), log in to Linux once a month, take your Linux files and put them on there, then from within Linux mount your Windows NTFS partition read-only and copy your Windows files from within Linux to your secure ext4 backup drive.

      Done...

  • Don't bother (Score:2, Informative)

    by Anonymous Coward

    If you're like me, you won't be happy with the compromises you have to make when picking a multi-platform filesystem. I'd outline them, but you've done a great job of doing so above. So, what to do?

    Get thee a cheap, cheap Linux box, format your drives EXT3, and have all other machines access them over the network. It's the only way you'll get the interoperability you want without making compromises on max file size, cluster sizes, etc.

  • NTFS (Score:4, Insightful)

    by Hamsterdan (815291) on Wednesday December 23, 2009 @06:01PM (#30539824)
    That's what I use here. For Mac, there's NTFS-3G (free) or Paragon ($, but faster on writes).

    I use NTFS on all my machines (Win/OS X/Linux) without any problems.

    NTFS-3G is also available for Linux.
    • by erroneus (253617)

      Exactly my answer as well. NTFS-3G works fine. Never had a problem with it under Linux or under Mac OS X. It overcomes the limitations. I only wish it weren't a Windows filesystem. There needs to be a "universal filesystem" that is supported by all OSes without added software, hassle, or patent/license problems.

    • Re: (Score:3, Informative)

      by elh_inny (557966)

      I second this choice.
      For some zealots it's hard to admit but the performance is really good, you have commercial backing of the biggest software company on the planet.

      Recently a commercial company (Tuxera) was formed to provide commercial support for NTFS-3G and a paid-for version of the driver for MacOS and Linux, in addition to the free NTFS-3G.

      So the future and cross platform access is looking really good.

      On the other hand, if I were just a little bit more adventurous, I would much rather use Sun ZF

      • Re: (Score:2, Informative)

        by rhavenn (97211)

        I second this choice.
        For some zealots it's hard to admit but the performance is really good, you have commercial backing of the biggest software company on the planet.

        You only have commercial backing if the OS you're running it on starts with Windows and doesn't include Linux, Solaris, BSD, AIX, Haiku, Amiga OS, etc... in the name.

      • Re: (Score:3, Interesting)

        by PhunkySchtuff (208108)

        As much as I love ZFS, one thing it's not good for is a single removable disk. Apple's inability to get this working without kernel-panicking the machine was one of the many reasons they chose to drop it.

  • Fat32 and VLC (Score:3, Informative)

    by caubert (1301759) on Wednesday December 23, 2009 @06:07PM (#30539882) Homepage
    VLC can play split RAR-compressed files beautifully, so 4GB is not a very big problem.
  • by FreemanPatrickHenry (317847) on Wednesday December 23, 2009 @06:08PM (#30539888)

    ...ReiserFS. I hear it's killer.

  • I wouldn't.... (Score:5, Interesting)

    by fak3r (917687) on Wednesday December 23, 2009 @06:10PM (#30539912) Homepage
    I wouldn't limit myself to a certain filesystem, I'd run a dedicated NAS like FreeNAS and share it over the network via SMB (windows), AFP (apple) and whatever for Linux - all set. Plus as mentioned above, you can run Firefly media server, a bittorrent server, a DAAP server (itunes sharing), etc (all included in FreeNAS. http://freenas.org/ [freenas.org]) on the same box. And since filesystems don't matter in this config, you can use ZFS to make a RAIDZ pool of your drives. It's what I do now.
    • Re:I wouldn't.... (Score:4, Informative)

      by Barny (103770) <bakadamage-slashdot@yahoo.com> on Wednesday December 23, 2009 @06:23PM (#30540036) Homepage Journal

      I used to use freeNAS, but after a while I just wanted more than what it was offering.

      I switched it for a Windows Home Server (Server 2003 SBE based), mainly for the backup features; with the freeNAS machine having been the only non-Windows machine left in my house, it didn't matter that WHS lacks full compatibility with Unix.

      But yes, freeNAS is damn good at what it does. I've set up some nice diskless server systems with freeNAS running from a USB stick: all the client machines on the network share their drives via iSCSI, and freeNAS collects them, turns them into a big redundant storage array, and shares them back to the network. Works well :)

    • by careysb (566113)
      Earlier this year I was looking into NAS drives and on at least two of the manufacturers' product info, they mentioned that some files cannot be copied to their device, e.g. MP3s. Anybody else encounter this?
    • by euxneks (516538)
      Mod parent up. Ext3 doesn't work nicely with Mac OSX (last time I checked anyway) and I hear bad things about NTFS on external drives (besides which, it's a microsoft technology - fuck em)
  • by BitZtream (692029) on Wednesday December 23, 2009 @06:10PM (#30539916)

    Via FUSE you'll get consistent ZFS features and usability across all 3 OSes. Of course, moving ZFS drives between those OSes isn't something I've tried, but in theory it should work fine.

    Not what you're asking for, but I'd put a FreeBSD Samba server up with ZFS drives. You can still mount them on other OSes later, if need be, via FUSE.

  • I back up to portable USB hard disks. My backup machine is my eeepc 701, which runs Ubuntu. I use this machine because it has fast USB and wifi interfaces. I have written a short shell script which runs on the eeepc; it uses rsync through ssh to copy user data from all the machines in the house to the external disk. I ignore the single Windows machine in the house. If its user wants it backed up, they can store their files on the server.

    I initially tried backing up through a workstation running netbsd but I found

    • Oh and to answer the question I use ext2 on the external disks. I don't see a need for journalling on a backup device.

  • by FreelanceWizard (889712) on Wednesday December 23, 2009 @06:13PM (#30539956) Homepage

    Honestly, if FAT32 won't do what you need, NTFS is pretty much where you'll need to go. NTFS-3g [tuxera.com] gives you stable read/write capability on Linux and OS X as a FUSE driver; in fact, many distributions have NTFS-3g in their repositories. There's also native NTFS write support in Snow Leopard if you want to risk turning it on. I personally haven't had any issues with it, but some people have encountered file corruption when using it, so you might want to be wary. It is worth noting, however, that many embedded devices won't read anything other than FAT. If you plan on hooking this drive up to, say, a DVD player to show pictures, NTFS won't work for you.

    Like it or not, Microsoft file systems are the lingua franca of file transfer on portable drives these days, merely due to the installed base of Windows computers.

  • Words of caution (Score:5, Informative)

    by RenHoek (101570) on Wednesday December 23, 2009 @06:14PM (#30539970) Homepage

    I have ~6TB on external USB drives and I've been doing this for a few years now. I have a few words of caution about NTFS. If you get a USB drive that, for example, spins down, or if you turn your USB drive off without properly dismounting it (or if Windows crashes), you might see this line:

    Delayed write failed!

    And on two occasions that meant that Windows fucked up the file allocation table or whatever it's called under NTFS and I lost the _entire_ disk.

    Windows loves getting its fingers into that table whenever you mount a USB filesystem. It's not like it tries to keep its write cache empty. Nooo.. every file access needs to be continuously recorded in that thing.

    Anyway, be careful when you use NTFS on a USB drive. Alternatively use EXT3, which you can still mount under Windows using:

    http://www.ext2fsd.com/ [ext2fsd.com]

    (Note that these experiences are under Windows XP, I have no clue if Vista or 7 does any better, I assume not.)

    • Weird, because on both XP and 7 (on two different machines), the external USB drives are set for quick removal by default (meaning cache is disabled by default).
    • Re:Words of caution (Score:4, Informative)

      by compro01 (777531) on Wednesday December 23, 2009 @06:56PM (#30540302)

      Turn off write caching for the drive and this problem goes away. It's supposed to be off by default (at least on removable drives, but some IDE/SATA-to-USB bridges show up as normal fixed drives rather than removable for whatever reason), but I've found it seems to turn itself on for whatever stupid reason.

      • How do I get external USB HDDs to offer the right-click eject option, like I get from USB flash drives, on Windows?
    • by euxneks (516538)
      However, it's a bitch to get ext3 mounted on OS X. It would be better to do what other posters suggested: a freeNAS or freeBSD setup, sharing the drives over the network.
    • by KonoWatakushi (910213) on Thursday December 24, 2009 @12:54AM (#30542068)

      The fundamental problem lies in USB bridge chips which do not properly implement the cache management commands. Others have replied that you need to disable the write cache, and while that would be a solution, it is often impossible. Even with bridge chips that do support the cache disable command, some hard drives will not honor it anyway.

      Most USB bridges simply lie about when data has been written, which makes it very difficult for a filesystem on top of it to make any guarantees. While it may not happen often, this can have disastrous results, as you have seen.

      The copy on write nature of ZFS left it especially vulnerable to broken USB storage, and could easily leave you with a corrupted pool requiring manual intervention and a bit of luck to recover. Thankfully, the recent bits address this, and ZFS is now the only filesystem that I would trust on top of USB storage. Most other filesystems will survive without incident, but at the cost of some silent data corruption.

  • network it (Score:4, Interesting)

    by Sloppy (14984) on Wednesday December 23, 2009 @06:15PM (#30539974) Homepage Journal

    I realize that there will always be some compatibility problems with whatever I end up with.

    Not if you use a network filesystem, such as Samba and NFS for the Windows and MacOS machines. Then on the Linux fileserver side, use whatever filesystem you want, and any OS can talk to that server.
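On the server side, the NFS half of that advice is a single line of configuration. A minimal sketch — the export path and subnet are placeholders, and a real system would edit /etc/exports directly and then run `exportfs -ra`:

```shell
# Demo /etc/exports entry for the fileserver approach; written to a
# scratch file here so nothing on the real system is touched.
# /srv/media and 192.168.1.0/24 are placeholder values.
exports=/tmp/exports.demo
echo "/srv/media 192.168.1.0/24(ro,all_squash)" > "$exports"
cat "$exports"
```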

  • You want something that will be read by your Linux, Windows and OS X machines? Um, only one option I can see, and that's FAT32. Any of their native filesystems, such as NTFS, get you directories browsable by only one or two of the other boxes.

  • by mysidia (191772)

    If proprietary filesystems are on the table, how about VxFS [wikipedia.org] ?

    Another possibility is to use FAT with cross-platform backup software. Maybe you don't need a filesystem at all: if this really is for backups... why not just create lots of extended partitions on the device and use TAR ?

    e.g. tar -cf /dev/sdbXX -V 'VOLUME_A' /backup

    That's crude and hard to keep organized, but also effective. Also, some proprietary backup products will work on a FAT filesystem and not require large file s
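The labelled-archive trick works the same against an ordinary file, which makes it easy to rehearse before pointing tar at a raw partition (all filenames below are invented):

```shell
# Create a labelled GNU tar archive -- a plain file here, where the
# parent comment writes straight to /dev/sdbXX.
work=$(mktemp -d)
mkdir "$work/backup"
echo "payload" > "$work/backup/file.txt"
tar -cf "$work/vol.tar" -V 'VOLUME_A' -C "$work" backup

# --test-label checks the volume name before you trust a restore
tar --test-label -f "$work/vol.tar" 'VOLUME_A' && echo "label ok"
tar -tf "$work/vol.tar"
```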

  • by mlts (1038732) * on Wednesday December 23, 2009 @06:42PM (#30540174)

    As an alternative to an external disk that goes to multiple machines, this might cost some, but perhaps consider a backup server?

    The advantages to this setup:

    1: The server initiates the backups, and can warn you in case something can't be read.
    2: Most backup software stores snapshots, and some deal with the full/incremental/differential cycle by using synthetic full backups. This makes restores to a certain point in time pretty easy.
    3: More sophisticated backup software allows you to transfer backup sets to another media. This way, you just plug in a drive, do a transfer, and you have an offsite archive.
    4: If one of the backup client machines gets hacked or malware installed, existing data stored on backup media cannot be altered.

    The disadvantages:

    1: You will need an active computer which is significantly more expensive than a hard disk.
    2: Software: Amanda/Zmanda for open source; Retrospect or Backup Exec for commercial. The commercial software costs a hefty chunk of change.
    3: You have to make extremely sure that the backup server box is locked down tight. If someone compromises your backup server, they've got data from every box you have. If you can, perhaps consider buying a router to put the backup server behind, allowing only the vital ports incoming.
    4: Backup servers should have some redundancy for stored data. Because there is so much data stored from multiple boxes, a failure of a drive hurts more than on a normal machine.
    5: Restoring a machine may vary in difficulty.

  • by NitroWolf (72977) on Wednesday December 23, 2009 @06:49PM (#30540242)

    Replace the silly little WD TV Live media player with a mITX system that's about the same size. Install Linux and XBMC and be done with it. You'll have the best possible media player on the planet, as much storage space in any configuration you want and the ability to expand everything when the time comes. No hassle, you'll have constant online backups available and you'll have a killer always-on media center.

  • IIRC, NTFS is a descendant of something called HPFS, which is what IBM developed for OS/2. At least as recently as Win2K, Windows would mount and use HPFS partitions, and I also recall that Linux could read/write those as well. Look into that.

  • If you need Linux/Mac/Windows interoperability then we recommend NTFS for both Linux and Mac users. Instead of the old NTFS kernel driver you may want to check our open source NTFS-3G. It has read/write, and tons of options:
    http://www.tuxera.com/community/ntfs-3g-advanced/ [tuxera.com]

    If you need just high-performance NTFS read/write, this is our offering for Mac users:
    http://www.tuxera.com/products/tuxera-ntfs-for-mac/ [tuxera.com]

    If you need high-performance for a commercial Linux application or device, you may want to check this:

    • Re: (Score:3, Interesting)

      by hacker (14635)

      Bzzt... NTFS can't handle filenames that ext3, XFS and other Linux-based filesystems can handle. I went through this dance with my Drobo (incidentally, do not EVER buy a Drobo, not if you care about your data; it's dangerous to store data on that device)

      ext3 and the Windows-side e2fs-explorer style packages are fine, or use Samba/CIFS and serve it up that way. I use rsnapshot on Linux to back up my Linux and Windows machines to my NAS, which is ext3-formatted.

      NTFS is fine, if you're only ever backing up

  • Openfiler anyone? (Score:2, Informative)

    by lacourem (966180)
    Personally I like Openfiler. It can be picky about the hardware though. With that said, the speed is great, and I can mount iscsi on linux and windows. Has been stable as hell for me to boot.
  • by countach (534280)

    Looks to me like HFS is the way to go since there are good solutions for all three platforms for HFS.

  • NTFS-3G works fine (Score:3, Insightful)

    by SlightOverdose (689181) on Wednesday December 23, 2009 @08:01PM (#30540790)
    NTFS-3G, which should come standard in most distros, should be able to read and write NTFS perfectly. It's considered very stable. That said, my personal solution to this problem was to use EXT2 and install EXT2IFS on my windows machines. I had a small FAT32 partition on the USB disk with the EXT2 driver installers for Windows and MacOS, so if I ever need to read it on another computer I don't have to download anything.
  • ext3 (Score:3, Interesting)

    by steveha (103154) on Wednesday December 23, 2009 @08:03PM (#30540804) Homepage

    I format my external USB drives to ext3. Most of my machines are Linux anyway, and I can always plug the USB drive into my storage server and back up over Samba to any kind of drive supported by the storage server.

    ext3 is pretty much stable and well understood. It just works. That's what I want for backup drives.

    And my netbook has Ubuntu Linux on it, and ext3 performs well on the external USB drive there. I haven't tested NTFS over FUSE on the netbook, but I wonder about CPU overhead on the little Atom chip: it might be a little bit slow.

    If you want a drive you can take over to your friend's house, and your friend just runs Windows or a Mac, then by all means NTFS.

    steveha

  • by stilldead (233429) on Wednesday December 23, 2009 @08:26PM (#30540942)

    I haven't tried it but it looks like a good idea. http://www.cyberguys.com/product-details/?productid=36218&sk=MC71419 [cyberguys.com]
    Format it ext3 and then share it SMB for any OS.

  • by the_other_chewey (1119125) on Wednesday December 23, 2009 @08:31PM (#30540978)
    I see quite a few people here recommending ext3. Oh my. ext3 sucks for large files,
    which is exactly what the submitter wants to use his setup for. Look into the crazy structures
    ("double indirect blocks") it uses. He should go with an FS that has sane data structures for
    files >>4GB.

    That kills most of the choices and leaves XFS, ext4, ZFS (only worth it if not used via FUSE,
    i.e. in Solaris), and a couple more obscure ones.
    I second the "forget OS portability, use a server" suggestion. There's great low-power, low-cost
    hardware for this nowadays.
  • Please read this (Score:5, Informative)

    by Anonymous Coward on Wednesday December 23, 2009 @09:00PM (#30541114)

    Dear AP,

    I was pondering a very similar problem two years ago, when I bought my first terabyte-class external USB hard drive for my home server (an old 800 MHz Duron). I was thinking exactly like you are thinking now, and decided FAT-32 would be the way to go. MISTAKE. Four important things stand out that I want you to read:

    1. You never really transfer that drive. Much less than you think. Chances are that for more than 99% of their lifetime, these drives will sit hooked to one computer, never really being moved; portability is just another nice extra feature that you hardly ever use. During the two years I have switched my drives a couple of times between the desktop machine and the server, both of which run Linux as their main OS. I have never ever taken any of these drives outside of my home.
    2. The 4 GB limitation is really bad! Most DVD ISOs are bigger than that. An hour of HD video in state-of-the-art H.264 is more than 4 GB. And rest assured, when you have the space and facilities to acquire gigabyte-class multimedia, the temptation will be there. As BluRay becomes the new DVD, maybe you want to rip your favorite movies to your hard drive for quick access? NO WAY with FAT-32!
    3. Lack of UNIX attribute support. Okay, so you just want to store back-ups? Okay.. Remember that FAT-32 doesn't support symbolic links, file ownership, user/group/others access permissions, or file name character case (in Microsoft Windows, "Soviet Union" equals "soviet union", which WILL result in a conflict when copying data from UNIX filesystems!). This information is LOST, unless you use some container format like tar (but remember the 4GB limit again). These little things are a) very helpful everyday things, the value of which you realize only after losing them (e.g. any file on extfs can be replaced/virtualized without moving files around; a symlink can even point to a non-existent file! And all works seamlessly, as long as the program understands symbolic links; now how valuable is that?), and b) what makes your UNIX fs work. The value of your backups is lower if they don't work "out of the box", i.e. data is lost when transferring to FAT-32. I mean, you just have a chance to save so much hassle there.. When needed, you can NFS-mount the filesystem (and its free space and contents) to your local machine from your drive, and everything works transparently for BOTH Windows and Linux (the properties of FAT-32 are a small subset of those of extfs.)
    4. Access speed. Ext3 and Ext4, or just about any 21st-century UNIX fs, are lightyears ahead of the archaic FAT in data structures. E.g. if I "ls" a big directory on my only FAT-32 drive, it is SLOW! You can see the entries being fetched one by one. Whereas, if I do the same in a similarly-sized directory on ext3, the files appear immediately! Access is almost immediate even over an NFS mount on the LAN. This comes in handy, rest assured.
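Point 3 above is easy to demonstrate: a tar container carries symlinks and permission bits through any intermediate storage that can't represent them itself (the file names below are made up):

```shell
# Show that tar preserves UNIX metadata (symlinks, permission bits)
# that FAT-32 itself would silently drop.
src=$(mktemp -d); out=$(mktemp -d)
echo "secret" > "$src/real.txt"
chmod 600 "$src/real.txt"
ln -s real.txt "$src/link.txt"

tar -cf "$out/ark.tar" -C "$src" .
mkdir "$out/restored"
tar -xf "$out/ark.tar" -C "$out/restored"

# link.txt is still a symlink; real.txt still has mode 600
ls -l "$out/restored"
```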

    Okay, those are my four focal points. They could be in any order, because all of them are equally important reasons NOT to choose FAT-32! As it happened, after using the 0.5TB drive for 6 months with FAT-32, I bought more space (a new drive). This time there was no question about the filesystem. I made a small 200 MB FAT-32 partition at the beginning of the drive and downloaded all the latest Win32 EXT2 drivers there from different vendors, just for the really implausible situation that I would ever want to mount these drives in Windows. Then I just made the rest Ext3. And I am REALLY satisfied with the decision! Ext3 just works so sparklingly faster and better with Linux than FAT-32 ever does. Since then I have bought one more drive and did the same 200MB + 1TB thing. I will probably never use these drives in Windows, but it gives me a warm feeling in the heart that there's always a way if I should need to, even if the computer doesn't have an Internet connection.

    Oh, one thing I forgot to mention: get a file server! It makes your life so much easier. Nowadays I am running a desktop computer with a 60 GB SSD drive and no HDD at a

  • by lpq (583377) on Wednesday December 23, 2009 @11:09PM (#30541684) Homepage Journal

    XFS was designed with media streaming in mind, and designed for large files and high performance. It had a defragmenter to keep disks in optimal condition before Windows 98 had even come out (one was written at the request of a large customer who had an especially pathological case; before that, there was normally not considered to be a need for it).

    Files can be 'normal', can have up to an additional 256K of resource-fork-related info (extended attribute info), AND you can have a real-time section that allows for completely bypassing the file system. It was sufficiently fast for video even back when disks were 1/10th the speed they are now.

    On its native OS, it could handle multiple streams of data to the same disk and keep them separate by allocating the separate channels out of disparate allocation groups on disk. I don't know how that works on Linux. Unfortunately, on Linux, even under x64, file block sizes AFAIK are still limited to 4K. XFS has a 64K limit, but under Linux it is hamstrung to 4K. Of course, Windows NT allows 64K block sizes. But not Linux... hmmm... very weird. XFS minimizes the impact of Linux's tiny allocation block size by using extents, which can be at least 256K -- but I believe the actual limit is in megabytes. Been a while since I read that stuff...

    Of course -- not to be linux centric, but have heard ZFS is pretty good, but no idea of how it compares for anything.

    my 2 cents...

  • by corychristison (951993) on Thursday December 24, 2009 @01:11AM (#30542136)

    I know this comment will get lost in the sea of other comments, but my recommendation to you would be a hybrid solution.

    Create a small partition (1GB would be overkill) and format it FAT32.
    Create another partition for the rest of the drive (or however you please) with your choice of FS (I prefer XFS, personally).

    Store the drivers (and utilities) for the FS you chose on the FAT32 partition.
    Some popular drivers/utilties for Windows are:
    ext2fsd for EXT2 - http://sourceforge.net/projects/ext2fsd/ [sourceforge.net]
    rfstool for ReiserFS - http://freshmeat.net/projects/rfstool/ [freshmeat.net]
    ltools for EXT2/EXT3/ReiserFS - http://www2.hs-esslingen.de/~zimmerma/software/ltools.html/ [hs-esslingen.de]
    and so on and so forth (a simple google for "[FS] Windows Compatibility" usually works.)

    Just my thoughts. :-)

  • nice... (Score:3, Insightful)

    by ZenDragon (1205104) on Thursday December 24, 2009 @09:17AM (#30543588)
    This is the kind of topic that SHOULD be on Slashdot. And no, I'm not being sarcastic. Something informational that might benefit everybody, despite all the bickering in the subsequent posts! There's one thing you can count on with Slashdot readers: despite how arrogant most of us are, or how ignorant most are when it comes to political information, we usually know our shit when it comes to computer stuff! :)
  • NAS is the way to go (Score:3, Informative)

    by seangee (1709612) on Tuesday December 29, 2009 @08:31AM (#30581350)
    +1 for building a NAS. Your backup doesn't need to be very portable. I started out with FreeNAS; if you go this route, ZFS would be the logical choice. I got pretty frustrated with this, so I rebuilt my NAS using Ubuntu and EXT3 - it made sense to me to use the native FS for the OS. I currently have 4 x 1TB internal drives in RAID 5 (one spare). I use a USB drive to back this up. SMB internally, and all the files are accessible by every OS I use. If I need to take files out and about, there are good old USB sticks or portable drives. Mine are usually NTFS or FAT32, because anything can read these - and of course there's always good old FTP on the NAS.
