File Systems Best Suited for Archival Storage?

Amir Ansari asks: "There have been many comparisons between various archival media (hard drive, tape, magneto-optical, CD/DVD, and so on). Of course, the most important characteristics are permanence and portability, but what about the file systems involved? For instance, I routinely archive my data onto an external hard drive: easy to update and mirror, but which file system provides the best combination of reliability, future-proofing, data recovery, and availability across multiple platforms (Linux, OS X, BeOS/Zeta and Windows, in my case)? Open Source best guarantees the future availability of the standard and specification, but are file systems such as ext2 suitable for archival storage? Is journaling important?"
  • by Xner ( 96363 ) on Saturday January 06, 2007 @05:42AM (#17486456) Homepage
    FAT is simple and supported by almost all machines and devices. Worst come to worst, you can hunt for your data with grep and dd.

    If you are not constantly editing the information (and you won't be, it's for archival purposes), the admittedly major downsides of not being journalled and being prone to fragmentation are non-issues. You might run into problems with capacity limits and/or file size limits, though.

    • Don't use FAT (Score:3, Interesting)

      by AusIV ( 950840 )
      FAT has issues with partitions larger than 32 GB and files larger than 4 GB. It's nice for Flash drives that you're taking from a Windows PC to a Mac to a Linux box, but if you're talking about serious archives, you'll definitely run into the first problem, and quite possibly run into the second.

      I use Ext3 for my backup drive, and this driver [fs-driver.org] for when I need to attach it to a Windows box.

      • by dgatwood ( 11270 )

        The first limit is almost a non-issue. The FAT-32 filesystem supports up to 8 TB. Of course, Windows XP can't format a volume over 32 GB, but you can always create the volume in another way---in Windows 98, ME, or Vista; in Linux; or using a third-party formatting tool. Once you have created a larger volume, Windows (even XP) should be able to handle it just fine.

        FAT Limitations [microsoft.com]
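        For what it's worth, creating a large FAT32 volume from Linux is a one-liner -- a sketch, assuming the dosfstools package and an example device name (double-check yours before formatting):

        # format the partition as FAT32 with a volume label
        mkfs.vfat -F 32 -n ARCHIVE /dev/sdb1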

      • by crossmr ( 957846 )
        It says ext2; does it work with ext3? I just installed it, and it seems to have automatically assigned my 20 GB Linux partition on my HD the letter D:, but I can't see a single thing on it. It also shows up with a CD icon rather than a hard drive icon. Maybe I'll try a restart.
        • by Simon80 ( 874052 )
          it definitely works with ext3, but as of the last time I used it, you have to manually mount the drive yourself using the mount command it installs.
          • by crossmr ( 957846 )
            Yeah, it was bad. I restarted and then couldn't log in. When I clicked my username, it said userinit.exe failed to initialize, then locked up the machine. I had to go to safe mode and uninstall it.
      • by Storlek ( 860226 )
        Not to sound like an Apple fanboy, but ext2/3 just isn't very well supported in Mac OS. Sure, there's an ext2 driver, but it's unstable and buggy, at least in 10.4 -- if I mount my ext3 partition, I have to kill Spotlight first, or else it'll try to index it, and about half the time I'll end up with a kernel panic. There doesn't seem to be much development on it, either. I'm not sure the developers are even interested in the project anymore.

        It's a fine and stable filesystem in itself, but if it doesn't work on
        • by AusIV ( 950840 )
          That hardly sounds fanboyish; I was under the impression ext2/3 were at least as well supported on OS X as they are on Windows, and I'd hope no one would consider a file system that's hardly compatible with their OS.

          I'm not really much of a fan of ext3. I recommend it for cross compatibility, but nothing else. I tried using ext3 for storage of MythTV recordings, and that turned out to be problematic - it would frequently crash if there was too much going on. Recording one show, transcoding another and watch

          • by Storlek ( 860226 )
            Extfs unfortunately isn't very well supported in a lot of places besides Linux. It's the unfortunate truth. Nor are lots of filesystems, to be honest.

            By "fanboy" I was more referring to my bringing up the Mac in a Windows/Linux-centered discussion, although I suppose I could put in my 2 and suggest using HFS... OS X and Ubuntu have coexisted nicely on my Powerbook for a few months now and I have yet to see any problems with the Linux support for HFS; on the other hand the Mac ext2 driver crashes constantly.
    • by Anonymous Coward

      Worst come to worst...
      The expression is "Worse comes to worst" as in "should the condition arise such that what was considered 'worse' is so bad that it's now the worst thing that could happen..."
       
  • by Helvidius ( 659137 ) on Saturday January 06, 2007 @05:44AM (#17486464)
    I have heard that the most permanent way of preserving data for a long, LONG time is to write your data in stone, granite being one of the best. Aside from that, computer data will last a much shorter time than even the printed word. So buy some acid-free, archival-quality paper and print those bits out!

    Of course, that's just my opinion--then again, I could be wrong.

    • Re: (Score:3, Interesting)

      by jamesh ( 87723 )
      I'm sure that a while ago I read about a system that could print encoded data onto paper at a reasonably high density (eg not readable by a human, but easily decoded with a scanner). At a 'plucked out of the air' figure of .25mm x .25mm per 'bit', and an equally 'plucked out of the air' figure of 11 bits of data per byte (to allow for clocking and maybe some error correction), you'd fit about 80kbytes on a single page of A4, and about 40mb per 500 sheet ream. Not that high (and possibly much higher or much
      • I'm sure that a while ago I read about a system that could print encoded data onto paper at a reasonably high density (eg not readable by a human, but easily decoded with a scanner). At a 'plucked out of the air' figure of .25mm x .25mm per 'bit', and an equally 'plucked out of the air' figure of 11 bits of data per byte (to allow for clocking and maybe some error correction), you'd fit about 80kbytes on a single page of A4

        What you read about is a matrix code [wikipedia.org] (no, not backwards kana). 4 pixels per mm times 25.4 mm per inch = 100 dpi. Olympus Dot Code [skynet.be], used by Nintendo's e-Reader, is 3 times finer than that, at a bit over 300 dpi, improving data density by an order of magnitude.
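        Incidentally, the grandparent's ~80 kbyte figure roughly checks out. A quick shell sanity check, assuming a printable area of about 190 x 277 mm on A4 and the 11-bits-per-byte overhead above:

        awk 'BEGIN { bits = (190/0.25) * (277/0.25); printf "%.0f KB per page\n", bits/11/1024 }'
        # prints roughly 75 KB per page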

      • I remember the article you're talking about; it was here on Slashdot a while back. It was widely assumed to be a hoax, at least in the advertised implementation. He was talking about TB per page, and replacing Blu-Ray discs with paper, etc.

        But in theory there's no reason why you can't do a 2D bar code at high resolution across a page. You wouldn't want to use regular toner, though, since it just sits on top of the page; you'd want to use real ink that sinks in. Preferably pigment inks instead of dyes, too.

        The problem
      • by ITMagic ( 683618 )
        There was a utility, some time ago, available for Win/MSDOS to do just this. At the time, it was shareware. Sadly, I cannot find any reference to it now. I wish there was a FOSS equivalent, as of all the archival media we have, this (with decent ink) at least has a proven track record of hundreds, if not thousands, of years.
    • Re: (Score:3, Insightful)

      by _Sharp'r_ ( 649297 )
      Stone? Easily chipped or cracked if dropped, low tensile strength, not very portable? No thanks.

      Try thin metal plates. A little more difficult to etch by hand (which can be alleviated by using the right malleability of gold), but well worth it for the long-term benefits of damage-resistance and portability.
      • by Anpheus ( 908711 )
        Gold is actually taking out two birds with one stone, as it is known for its lack of reactivity. For that reason it is used in a great deal of computing equipment.

        Unfortunately you'll have to alloy it heavily in order to get the $/byte down!
      • Yup. 20 million Mormons can't be wrong.
      • Your plan has one flaw--if one tried to record data in gold, it might just happen that people in the future would find the medium (gold) much more valuable than the message.

        One only needs to look at ancient civilizations for historical precedent.

        Alas, one man's gold is another man's date with a prostitute.

    • Hmm, how many stones to store 1 TB?

      And let's define "byte" as "inscribed letter". :-)
    • har, har!
      seriously, though, I was just commenting about this. Don't use HDDs for archives -- they fail. Use tape, DVD, or some other media-only (i.e. no embedded electronics to fail) device.
    • by jcaplan ( 56979 ) *
      Stone definitely has a nice record for permanence, having preserved hieroglyphics for millennia, but I quibble with your choice of granite. Living in New England and seeing many old gravestones has allowed me some observations of performance of granite. Granite is nice to carve and pretty, too, but it suffers from weathering and the details become indistinct over time, becoming difficult to read after a century or so. Slate was popular before granite came into vogue and is found in the older sections of cem
  • by fromvap ( 995894 ) on Saturday January 06, 2007 @05:45AM (#17486470)
    I would say that ubiquity, not being open source, is the most important factor in being able to read something in the future. FAT32 is certain to be easily, if not legally, accessible for the very short expected lifetime of an external hard drive.

    To improve data recovery capabilities, you might like to create some archives in RAR format for error checking, with PAR2 files for redundancy and recovery. Hard drive space is cheap, so for safety keep the uncompressed files as well as the archives. Since hard drives fail, you should have more than one of them. And ideally, make DVDs also.

    I created some files with early betas of OpenOffice 2, and it was not at all easy to open them once the file format changed before the final release. As another example, despite it being open source, the legal problems of Reiser may cause that file system to be inconvenient to access in the future. An outdated but very popular legacy format will have support that lasts far longer than people want it to. Because of the high market share that WordPerfect had in the days of Noah, even now you can open WordPerfect files in Word and OpenOffice. If you think FAT32 will be unreadable anytime soon, think again.
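    A sketch of the RAR + PAR2 workflow mentioned above, assuming the rar and par2 command-line tools (file names are only examples):

    # pack the data; RAR's per-file CRCs give you error *detection* for free
    rar a photos-2007.rar ~/photos/2007/
    # add 10% PAR2 recovery data alongside it for error *correction*
    par2 create -r10 photos-2007.par2 photos-2007.rar
    # later: check integrity and, if needed, repair from the PAR2 set
    par2 verify photos-2007.par2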
    • Since hard drives fail, you should have more than one of them

      You could set it up in a RAID-0 (if it's only 2 disks) or RAID-5 with 1 redundancy (3 or more disks). But you could also consider CDFS or UDF... put your data on CDs/DVDs. If you have overly large files, you could use RAR to break them up into little pieces to burn to CD. If you want free online storage for a few gigs of very important files, just open up as many gmail accounts as you need, compress your files with RAR into 1MB pieces and upload ea

      • by Ucklak ( 755284 )
        Isn't RAR proprietary? Isn't that what the poster is trying to get away from?

        Last I checked, RAR compression isn't available on any default installation of Linux, Windows, or Mac.

        RAR may be the best or most versatile, but every time I've had to un-RAR something, I either used a trial version or a cracked version.
        Not something I want to trust 15 years later.
      • by Cecil ( 37810 )
        RAID-0 is striping. "RAID-0" is a tongue-in-cheek way of saying "Zero RAID" or "Not RAID". It's not redundant at all.

        You meant to say RAID-1 (mirroring), I'm sure.
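        For the two-disk case, a mirror is quick to set up -- a sketch, assuming Linux mdadm and example partition names:

        # mirror two partitions; either disk can die without data loss
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        mkfs.ext3 /dev/md0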
    • Re: (Score:3, Insightful)

      Does anyone use RAR outside of the copyright infringement scene?
      • by RupW ( 515653 ) * on Saturday January 06, 2007 @10:09AM (#17487310)

        Does anyone use RAR outside of the copyright infringement scene?
        Yep, I do. It's widely accepted, better than zip, and better than .tar.gz or .tar.bz2 because it orders the files more intelligently than tar before trying to compress them. .tar.rz goes some way to address that, but you have to do it in two steps because rzip doesn't pipe. .tar.rz compression is about equivalent for large numbers of small files, but rzip will often beat rar on single large files.

        The killer feature back in the day was the first good implementation of disk splitting. But the compression still stands up now.

        On my 'if I ever get free time' list is to implement rar's file ordering in GNU tar to see if that helps gzip and bzip2 catch up to RAR's compression ratio. But I've no idea if/when I'll ever get around to that.

        -- paid-up RAR user since 1996.
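        The two-step .tar.rz dance looks roughly like this (a sketch; paths are examples, and rzip replaces the .tar with a .tar.rz since it can't read a pipe):

        # rar orders and compresses in one go
        rar a -m5 source.rar src/
        # rzip can't read stdin, so tar to a file first, then compress it
        tar -cf source.tar src/
        rzip source.tar    # produces source.tar.rz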
        • by jZnat ( 793348 ) *
          In my experience, even a registered version of RAR (legit actually :O) compressing with max compression is still beat by tar/bz2 for textual things. 7z can do even better, so sometimes RAR isn't that great.

          What RAR is good for is cross-platform file splitting, parity files (a concept borrowed from RAID 5 and co.), and the ability to archive without compressing. This is why it is used in the "scene" all the time.
      • At work we used rar to compress all of the source code nightly, including each dev's own copy. We had 2 GB of source code compressed down to 100 megs, all because rar has much better compression methods and, as another poster noted, a better file-ordering mechanism.

        The command we used:

        rar a -m5 -s -mc63:128t -mdg -mcc -en -tsm0 -tsa0 -tsc0 -ri1:10 ${todaysDate}.rar "*"

        -m5 == maximum compression
        -s == solid archive, the real saver for multiple copies of same file
        -mc63:128t == text compression (PPM algorithm

    • Re: (Score:3, Insightful)

      by MrHanky ( 141717 )
      I second this. FAT-32 isn't the most robust file system out there, but it's ubiquitous and well understood. Robustness is probably not the most important aspect for archival storage, if that means write once and store, and it's meaningless if you can't read the format. It's not a modern file system, though, and has some problems (4 GB file size limit, etc.).

      I wouldn't say the same goes for RAR. It's a proprietary format, owned by a company and used mainly for piracy. I know you can extract it on many OS tod
    • by fritsd ( 924429 )
      I disagree; what's ubiquitous *now* might not be in 20 years (which is nothing). Can you still find the source or an executable for your current architecture to unpack your old zoo, lharc, and .Z files? (Yes, I know gzip does .Z and zoo is in Debian, but you get my point.) I think it's much, much more important that the format is described in detail in a lot of locations (such as the PK-ZIP and ustar formats). And when you mention Wordperfect, let me mention Wordstar :-) BTW, where's the royalty-free standards document
  • How Archival? (Score:5, Insightful)

    by Stone Rhino ( 532581 ) <mparke@gm a i l.com> on Saturday January 06, 2007 @05:54AM (#17486500) Homepage Journal
    Is this going to be relatively live, with data being mirrored onto it regularly, or is this going to be written once and accessed occasionally from then on? If you're only going to write to it a very small portion of the time (or even WORM), journaling will be useless to you, since anything that takes out your data won't be stopped by it.

    How far into the future are you going to need it? I understand the whole "not wanting to become unreadable" concern, but honestly, no one's going to bother re-implementing a filesystem to look at their old vacation photos. Pick a popular filesystem, and you'll be sure of support down the line. FAT's still doing just fine for itself, and the ISO filesystems for CDs and DVDs will be readable as long as people are making drives for them.

    All of the data integrity features on filesystems aren't going to protect against disk failure/media wearing out, and error correction on that scale is beyond the scope of any one disk to handle. Like the department jokingly advised, parity files and other methods can handle this in a robust, media-spanning manner, and protect against everything from a few flipped bits to a whole-disk data loss (assuming you have enough parity data).

    I think the reason not much talk about filesystems has been going on is because they're mostly irrelevant for this task. They're designed to handle the issues of a live environment; the issues that archives face are beyond the capability of how you choose to store your data on each piece of media to solve.
  • If you're only using it for archive, writing anew each time, then skip the file system altogether. Treat the media like a block device, tar or otherwise archive your backup and just write the tar as a single, linear sequence of bytes. And don't compress it, so that a bit error early in the sequence doesn't mess up later blocks.

    Now which archive format is best - tar, cpio, etc.? I've heard that cpio is a much simpler underlying format.

    And if you have the space, write the archive sequence multiple times on
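    A minimal version of that -- a sketch where /dev/sdb is a hypothetical dedicated backup disk (double-check the device name; this overwrites it):

    # write an uncompressed tar stream straight to the raw device, no filesystem at all
    tar -cf - /home/me/archive | dd of=/dev/sdb bs=64k
    # read it back (or list it) later
    dd if=/dev/sdb bs=64k | tar -tvf -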
    • by Aladrin ( 926209 ) on Saturday January 06, 2007 @06:30AM (#17486636)
      You'd be MUCH better off creating PAR2 files for the archive set, instead.

      If you made 2 copies of the archive on the media, and piece 10 of both sets dies, you've lost everything. If you made 1 copy of the archive and a 10% par set, any 10% of the pieces (data and parity both) could die and you'd still have your data. If you made a 100% par set, you could lose half of the data and parity and still recover. And it doesn't matter which portions.

      Add to that the fact that if you lost piece 10 in archive 1, and piece 9 in archive 2, it would be not much fun to figure out the dead pieces and make a full archive again. With PAR2, the tool will do the work for you.
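      In command-line terms that's just (a sketch; assumes the par2 tool and an example archive name):

      # 10% recovery data: roughly 10% of the blocks can be damaged and still repaired
      par2 create -r10 backup.par2 backup.tar
      # 100% recovery data: up to half of everything can be lost
      par2 create -r100 backup-paranoid.par2 backup.tar
      # recovery is hands-off -- par2 works out which blocks are dead and rebuilds them
      par2 repair backup.par2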
      • by hey! ( 33014 )
        I like the idea of PAR, but the advantage of tar is that it has been around forever and will probably be around forever, even though "better" solutions like PAR have been created. I'd be concerned that somebody will come up with a "better" solution than PAR and implementations of PAR might be hard to find in the distant (e.g. decades away) future.
        • the advantage of tar is that it has been around forever and will probably be around forever, even though "better" solutions like PAR have been created. I'd be concerned that somebody will come up with a "better" solution than PAR and implementations of PAR might be hard to find in the distant (e.g. decades away) future.

          Any harder than implementations of tar, or even implementations of sh if you want to put GNU tar in .shar format? At least .shar is reasonably human readable and can be unpacked if you don't have a Bourne shell handy. My recommendation: On each volume, include two archives: a .shar archive containing the source code for a PAR reassembler, .tar unpacker, and .gzip decompressor in a widely supported language such as C, and a "part" of your .tar archive.

        • by Aladrin ( 926209 )
          That is a valid concern as the author of PAR has indeed been working on a better implementation. But in the past, he has kept the parchive program backwards compatible. (The PAR2 version also handles PAR, even though they are quite different.)
      • Re: (Score:2, Informative)

        by Anonymous Coward
        Depends: a 100% par set for a 100 GB archive would take forever even on the fastest machines. Even a simple "small" 4 GB par set for a DVD backup takes hours on an Opteron 250.
  • Son, look... (Score:1, Flamebait)

    I don't know how to tell you this, but you're an idiot.
    "There have been many comparisons between various archival media". ARCHIVAL is the key word. As in "we don't move these files around a lot". As in "It doesn't make much sense as an end user to discuss the undelying filesystem to something which is used to just have files sit around, as long as it's stable". Buy something that has dedicated commercial support for the next 20-40 years, like the LTO standard and call me in the morning.
    • bad advice (Score:4, Insightful)

      by oohshiny ( 998054 ) on Saturday January 06, 2007 @12:01PM (#17488050)
      Buy something that has dedicated commercial support for the next 20-40 years

      You mean like DEC or any of the other out-of-business dinosaurs?

      As someone who has been through this, I can only say: do NOT buy anything that depends on "dedicated commercial support"; the companies and industry standards you think are going to be around for "20-40 years" are probably either not going to be, or they are not going to give a damn about you.

      Use open standards and open formats, with multi-vendor support; that's the only way to go. And you need to keep your eyes open and move to new formats and standards as the world changes.

      If LTO is the right choice, it's the right choice because of that. But I'm not convinced that LTO is going to be long-lived enough as a standard, no matter how many companies have tied their fortunes to it right now.
      • by Nutria ( 679911 )
        You mean like DEC or any of the other out-of-business dinosaurs?

        DEC might be gone, but companies still support DLT hardware.

  • There is really no best file system for every purpose.

    Since you want reliability and portability, I recommend DVD+Rs. They are certainly more reliable than an external hard drive, and more portable too.

    It really depends on how you use your archive. Since you carry your archive around, I would recommend against an external hard drive since they can be quite fragile.

    Your file system choice depends a lot on the storage technology choice. Of course for the previously mentioned, 9660 would meet your needs the be
  • by F00F ( 252082 ) on Saturday January 06, 2007 @06:24AM (#17486612)
    I've been wondering lately why no common file systems seem to implement error correcting codes (ECC/EDAC).

    In hardware, there's often a checksum, ECC/Hamming code, parity bit, Reed-Solomon code, etc. to detect and/or correct for inadvertent bit flips. But, as far as I know, no error correcting information is ever stored within the filesystem itself. Certainly the filesystem tracks how many blocks are dedicated to a particular file, and how many bytes long the file is, and one can always hash the file twelve ways to Sunday to assure that it hasn't changed since it was originally hashed, but none of that helps repair errors to the file should the medium that's being used to store it decay beyond what's already correctable via the medium access hardware.

    I can imagine scenarios where, for example, the RAM buffer in a hard drive is upset and perfectly encodes the wrong bit into a file (or even multiple stripes + parity in a RAID). In this case, the medium access hardware is useless (the data was, after all, encoded perfectly wrong), but ECC in the filesystem would detect and potentially correct the error the next time the file was read back, even if it were decades later. I appreciate that it would add overhead, and thus maybe shouldn't be the default, but I don't see it being even an option anywhere, and some people would pay the performance penalty to get the data integrity benefit.

    Especially in instances like encrypted (or compressed, or both) loopback file systems where one bad bit can destroy an entire partition, why don't we have more data assurance layers available? Or have I just not found them?

    Whining of which, what was the deal with GNU ecc? Everyone speaks of "oh, yeah, the algorithm was deeply flawed, bummer..." but I don't ever see any details ...
    • Re: (Score:2, Informative)

      by whovian ( 107062 )
      zfs supports checksums (http://en.wikipedia.org/wiki/Comparison_of_file_systems#Allocation_and_layout_policies) but it is incompatible with GPL (http://linux.inet.hr/zfs_filesystem_for_linux.html). However, Ricardo Correia has an alpha version of zfs for FUSE/Linux (http://zfs-on-fuse.blogspot.com).
      • by bgat ( 123664 )
        Checksums let you detect errors, but don't let you do anything to correct them. I think what he's after, and what I'd like to see, is a filesystem that offers "forward error correcting" codes--- information that lets you actually _correct_ bitflips.

        In an archival setting, I'd rather get back corrupted data than no data at all. A filesystem that aborts on checksum errors would therefore be a bad choice when faced with that problem.

        The question isn't so theoretical. NAND flash requires forward error correc
        • by Cajal ( 154122 )
          If you are using ZFS in a mirror or raidz configuration, then the checksums do let the fs detect and correct corrupted data.
    • The probability that a disk will fail completely is much higher than the probability that it will corrupt a few sectors. ECC only protects against the latter case, while RAID+checksums protects against both cases. Unsurprisingly, RAID+checksums is what the industry tends to offer.
    • I've been wondering lately why no common file systems seem to implement error correcting codes (ECC/EDAC).

      Because user mode tools such as PAR2 [wikipedia.org] already implement them.

      I can imagine scenarios where, for example, the RAM buffer in a hard drive is upset and perfectly encodes the wrong bit into a file

      Likewise, I can see scenarios where, for example, the RAM buffer in an application's main memory or in the file system's buffer is upset and perfectly encodes the wrong bit into a file.

    • What I'd like to see would be a filesystem that would look like a read-only FAT32 drive with hidden files or an extra partition to an OS that didn't support it, but to an OS with the correct driver would have error correction transparently built-in.

      Being able to transparently divide files above 4gig and have them look like a single file to a supported OS would be gravy.

    • by vrmlguy ( 120854 )
      I don't know of any filesystems but there are applications that implement error detection: "Oracle ensures the data block's integrity by computing a checksum on the data value before writing the data block to the disk. This checksum value is also written to the disk. When the block is read from the disk, the reading process calculates the checksum again and then compares against the stored value. If the value is corrupted, the checksums will differ and the corruption revealed."

      My understanding is th
  • by rjforster ( 2130 ) on Saturday January 06, 2007 @07:25AM (#17486770) Journal
    In one form or another anyway. People keep asking about the _best_ way to store data for a long time (for some definition of best)

    My take on this problem is that you should use the best you reasonably can today. Then in 5 years' time, when there is a new technology out there, move over to that for archiving your new data AND move your old data over while you still have working hardware.
    I went from floppy disks to LS-120 drives. From LS-120 drives to CDs. From CDs to DVDs. I'll go from DVDs to whichever of HD DVD or Blu-ray seems best in a couple of years (unless something else crops up). I might use hard drives instead, but I'm not sure yet. The point is I don't need to decide until I need to store that much.
    If you're playing in the big leagues do the same with the various formats of giganto capacity tape storage etc.

    Plan around the shelf-life and working life of the hardware you can get and the answer drops out.
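    In practice the periodic migration is just a copy plus a sanity check, e.g. (a sketch; mount points are examples):

    # copy everything, preserving times, permissions and hard links
    rsync -avH /mnt/old-archive/ /mnt/new-archive/
    # second pass with checksums as a cheap integrity comparison
    rsync -avHc --dry-run /mnt/old-archive/ /mnt/new-archive/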
    • Even with hardware that seems to be working perfectly fine, in the process of storing and repeatedly transferring stuff between different types of storage I've had errors crop up.

      Sure, I could use archives with checksums or RAID, but it'd be nice if there was an option to sacrifice some speed and space on a single form of storage to improve the reliability without going to such cumbersome lengths.

      • by rjforster ( 2130 )
        I've seen these too. DVDs I've burned have been most reliably read back on the same drive that burned them. Less reliably on other drives. Even a single one that doesn't work is a pain, but I don't want to burn multiple copies or anything like that.
        I've not tried using dvdisaster but it does seem to fit these requirements.
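        From memory, the dvdisaster workflow is roughly the following -- treat the flags as assumptions and check dvdisaster's help before relying on them:

        # create an error-correction file for an existing ISO image
        dvdisaster -i backup.iso -e backup.ecc -c
        # later, after re-reading a damaged disc into backup.iso, repair the image
        dvdisaster -i backup.iso -e backup.ecc -f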

  • Simple... (Score:4, Interesting)

    by evilviper ( 135110 ) on Saturday January 06, 2007 @08:19AM (#17486920) Journal
    I routinely archive my data onto an external hard drive: easy to update and mirror, but which file system provides the best combination of reliability, future-proofing, data recovery, and availability across multiple platforms (Linux, OS X, BeOS/Zeta and Windows, in my case)?

    Ext2 fs mounted rw,sync. When just reading, or just writing, async can't possibly help performance. You're strictly limited by disk I/O. Async will, however, cause irrecoverable corruption if there's a system crash or power failure, which was a source of great frustration with Linux before the journaling filesystems came along.

    Ext2 can be read by nearly every operating system out there, and doesn't have the numerous limitations of FAT32.

    Which, incidentally, is the exact same answer I gave a few months ago, when the last guy wrote an Ask Slashdot to ask the exact same question...
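    Concretely, the mount amounts to this (device and mount point are examples):

    # mount the archive disk synchronously so writes hit the platters immediately
    mount -t ext2 -o rw,sync /dev/sdb1 /mnt/archive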
    • Having immersed myself in Linux for half a decade I, too, believed that Ext2 was the perfect filesystem for this sort of thing. But the hoops you have to jump through to get it working on a non-linux platform are insane. There are drivers "available" for Windows XP (who knows if those will be rewritten to support Vista or not), and to date there is no official support for the latest versions of OSX. Now that our company is transitioning to Mac computers I'm realizing the shortcoming of having most of our
      • But the hoops you have to jump through to get it working on a non-linux platform are insane.

        Downloading Explore2fs isn't all that difficult.

        and to date there is no official support for the latest versions of OSX.

        Official or not, with Darwin and Ext2 both being open source, it should be quite easy for anyone who cares enough to want to do it.

        You don't want to rely on one or two individuals' works to support that filesystem 5 years from now if you need to get archival data back.

        With any archival process... T

        • Re: (Score:2, Insightful)

          by jonesy16 ( 595988 )
          Explore2fs is written and supported by one person and currently doesn't list support for Vista. I would find it hard to recommend to someone else that they use this and expect it to be a reliable solution 5-10 years down the road. And if it was so easy to support ext2 on OS X, then why is there no reliable support for Tiger? Last I checked into it (about a month ago), there was ONE person working on the project and it had been sitting idle for a while. Given that a lot of Mac users are also l
            • And if it was so easy to support ext2 on OS X, then why is there no reliable support for Tiger?

            As I said... *IF* somebody cares enough. Apparently, no one does.

            Given that a lot of Mac users are also Linux users, I don't see why there wouldn't be widespread support

            You're making a lot of assertions and speculation there. Most of which I don't happen to believe.

            Even if your premise was true, there's no way I could possibly guess why nobody has felt the need to do it. And the fact that it doesn't exist certainl


  • If you leave a drive in a closet for 10 years, will it still spin up?
    • I'm not sure if it was quite ten years in a closet or not, but only a few months ago, I helped my granddad clean up and prep a 486/25 for donation. Yeah, someone actually wanted the thing. And yes, it booted up, Windows 3.1 and all. Again I'm not sure how long the machine actually spent in the closet (unpowered), but it had to have been close.
      • by LoRdTAW ( 99712 )
        Hell I still have 2 AT&T unix PC's, one with a 20MB hard disk and one with a 40MB. They still boot and work fine.
  • Non-IT answer (Score:5, Interesting)

    by Overzeetop ( 214511 ) on Saturday January 06, 2007 @09:18AM (#17487098) Journal
    The best file system for archival purposes is the one you're using today. Why? Because if you want that archive to be readable in any expedient manner, you are going to have to constantly monitor and update the media on which it is stored. All media will degrade over time, and you will have no idea how bad that degradation has been until you re-read it. No vendor will compensate you for the loss of your data, because there is some data which simply cannot be recreated.

    If you want archival storage, you need to have your data on- or near-line, and rewrite the data to the "new" hardware every couple of years. By choosing a filesystem that is current, you are more likely to be able to read it in a couple of years than if you (try to) stick with a single filesystem. I know this sounds like a lot of work, but if the data is truly worth archiving, it's worth keeping both the storage mechanism and format up to date.
    • Hey! I use an iBook (with Mac OS X Tiger, FYI) and a Windows desktop running XP Professional. Both have different file systems (HFS+, NTFS). What do you suggest for me? Both have valuable information I need. The Mac contains all my p0rn and my XP contains all my SG-Atlantis episodes.
      I think it is better to use FAT32 since both can read it.
      What do you think?
  • by MightyYar ( 622222 ) on Saturday January 06, 2007 @09:32AM (#17487156)
    Thanks to the emulation community, I can read data from an old Commodore 64, Apple ][e, Atari, etc. on any modern computer running any mainstream operating system. What I cannot do is easily hook up an old Apple ][e disk drive to my modern hardware. The filesystem will not really matter so much, because even if Wintel goes the way of the Commodore 64, someone will make a DOSBox-esque emulator for it. Getting data off of an ATA, SATA, USB, or FireWire drive might be more challenging once new hardware ceases to support those standards.

    Personally, I just throw stuff on external hard drives. 3-5 years later, the new drives are so much bigger, faster, and cheaper that it becomes economical to consolidate to a new drive. I still have data from a 286 that had nothing but floppies, an Apple ][e, and 2 dead Macintoshes. Even my old Windows 95 computer lives on as a VirtualPC image. I don't really use them that much, but the Apple ][e and 286 stuff is under 50 megs, and the VirtualPC image is 2GB. The images of the old Mac hard drives total less than 1GB... it's simply not worth deleting them and it's kind of fun to have my old computers still around, if only "virtually".
    • Re: (Score:3, Funny)

      by Gothmolly ( 148874 )
      Dude, I'm sure you could find all that pr0n on the Internet again if you had to. Let it go.
    • by donaldm ( 919619 )
      Backing up to a larger disk is fine for a personal environment (i.e. a PC), and I do this myself, but it is useless and expensive in the academic, business and scientific worlds, where proper backups are important. This means a strategy needs to be in place to take disaster scenarios into account.

      The problem that many organisations face today is the long term storage of data, however it is not a simple matter of just archiving data, it is knowing if you can retrieve and reuse that data. Alright I have over simplified
  • Ext2 is suitable, because it is very likely to have really long-term support. And as long as you can boot Linux and copy the files over to some other filesystem, that is enough. Of course there may come a time when Linux drops backwards compatibility, but considering that the 2.0 kernels are still supported, can run on current hardware and all kernels are still available, I would say this will not be anytime soon. Same for FAT through Linux. It is not going away, and since it is not under development, maint
    • by zyzzx0 ( 935520 )
      Very recently, a gentleman brought in his NAS device that had files he accidentally deleted but wanted back. He had already taken the drive out, connected it to his Windows box, discovered it was an ext2 partition, downloaded some application for his Windows machine to read the partition and used a second application to try to recover the deleted files.

      For most people this is a recipe for disaster. He was smart enough to know what an ext2 partition was and just smart enough to destroy most all of the acc
  • I use ext3 (Score:3, Insightful)

    by rduke15 ( 721841 ) <rduke15@gTWAINmail.com minus author> on Saturday January 06, 2007 @10:54AM (#17487566)
    I use ext3 on my external backup disks because:
    - it is much better and more reliable than FAT32
    - it is both open source and (relatively) widely used, so I expect there will always be some way to read it
    - it can easily be read by attaching it to any machine and booting some Linux LiveCD or bootable USB.
    - the OS which traditionally can read ext2/3 is itself open source and also widely used, so there is no fear that it would become unavailable

    For archival and backup, I feel all these advantages far outweigh the slight inconvenience that the disks are not readable directly by Windows and Mac, requiring either a driver or a reboot into Linux.

    The important point is to label the disks very clearly. Otherwise, someone connecting them to a Windows or Mac machine may believe the disk is empty and re-partition/re-format it! I would not only put a big explanatory label on the disk's case, but also name the volume something like "Linux-..." or "Linux-ext3-...", and also explain to persons involved (manager(s) + people handling the disks) that they are not readable in Windows (some people don't read even big labels...).
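    Labelling the filesystem itself helps too, so even a raw probe of the disk shows what it is -- a sketch, with an example device name:

    # give the filesystem an unmistakable volume label (limited to 16 characters)
    e2label /dev/sdb1 Linux-ext3-2007
    # print the label back to verify
    e2label /dev/sdb1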
  • Tar [gnu.org].
  • Tape (Score:3, Informative)

    by vadim_t ( 324782 ) on Saturday January 06, 2007 @12:13PM (#17488168) Homepage
    Here's why: IMO, unless you're doing it for a company, the most important thing is convenience.

    If it's your job, sure, you'll do it whether it's convenient or not.

    If it isn't, you'll quickly get tired of messing with CDs, plugging/unplugging hard drives, etc. So I went with the most convenient media possible: tape. Stick a tape into the drive, walk away, and put it away when the drive spits it out. It doesn't interfere with the computer's usage since nothing else uses tape.

    For absolute convenience, get a tape robot from ebay. Then it can be completely automatic.

    Filesystem: use plain tar to write to the tape. If you must use compression, compress files individually, not the whole tape.

    Paranoid implementation: Tapes have file marks. You can ask the tape drive to give you file #1, for instance. You can use this to store some useful stuff in a format that will always be recoverable so long as you have a drive that can read the tape. Store like this:

    File 1: Text document explaining what all this stuff is, and what's on the tape.
    File 2: Specification of the tar format
    File 3: Specification of the compression format
    File 4: source for tar program
    File 5: source for decompression program
    File 6: backup

    A tape formatted like this should be readable so long as a drive capable of reading the data on it survives. To ensure that, go with a popular tape format that is reliable, open, and has a high capacity (so that it's unlikely to become obsolete too fast).
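    On Linux that layout falls out of the non-rewinding tape device plus mt -- a sketch, with /dev/nst0 as the usual non-rewinding device name and only three files shown:

    mt -f /dev/nst0 rewind
    dd if=README.txt of=/dev/nst0       # file 1: plain text, readable with no tools at all
    tar -cf /dev/nst0 tar.c gunzip.c    # file 2: sources for the tools (example names)
    tar -cf /dev/nst0 /home/me/backup   # file 3: the backup itself
    mt -f /dev/nst0 rewoffl             # rewind and eject
    # to read file 3 later: mt -f /dev/nst0 rewind; mt -f /dev/nst0 fsf 2; tar -xvf /dev/nst0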
  • The guys over at Bell Labs developed Venti [bell-labs.com] as a part of their Plan9 [bell-labs.com] operating system. If you are not adventurous enough to install Plan9, they have a great set of ports called Plan9 Port [swtch.com] that has most of the exciting bits of Plan9 for other *nix-like operating systems, including Linux and Mac OS X. Venti is an archival storage server, utilities and filesystem. It works with both magnetic and optical media.
  • ZFS - FTW (Score:4, Informative)

    by GuyverDH ( 232921 ) on Saturday January 06, 2007 @12:31PM (#17488378)
    While not as widely used (yet), it will eventually become the de-facto standard in safe filesystems.

    I've thrown all kinds of problems at it, and it has yet to lose a single byte of data.
    Add to that snapshots every (x) minutes, and you can look back in time as easily as reading a folder.

    With RAIDZ2 in the latest releases, you can set up sets that can withstand the loss of 2 physical drives. If you couple multiple RAIDZ2 sets into a single pool, you've increased the redundancy even further. With plain old JBOD and multiple controllers, you can reach levels of availability that only expensive EMC/Hitachi/StorEdge systems have reached in the past.

    It's open source as well (although it's the Sun flavor of open source at this time), and being worked on at www.opensolaris.org. I believe Sun is contemplating switching it to the GPL.
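    For the curious, setting that up is short -- a sketch assuming Solaris/OpenSolaris and four example disk names:

    # double-parity pool: any two of the four disks can fail
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zfs create tank/archive
    # cheap point-in-time snapshot you can browse later
    zfs snapshot tank/archive@2007-01-06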
    • I'll second that -- with 256-bit checksums on all data stored and journalling of metadata AND DATA; and now it's not Sun-only: it's been implemented in the latest builds of Mac OS X 10.5 Leopard.
  • Do you need data-readback in a matter of seconds? Minutes? hours? days?
    Do you need storage for years, decades, centuries, millennia, 10,000 years, or longer?
    Do you need an indexing system based on content or just on title/filename?
    Can the data be printed out or carved into stone without losing important information?
    Is this a go-to-jail-if-you-don't legal requirement, a may-go-bankrupt-if-you-don't business requirement, or a save-us-a-bunch-of-money-nice-thing-to-have requirement?
    Do you think the cost of res
    • My hunch: For most applications involving less than 50 year data retention, making 2 copies of the raw data, to a currently supported stable media such as tape or archival DVD, stored in separate locations, is key. Make sure the data is both in the original format and in a published-standard format which is widely supported. Keep multiple machines that can read the data around for as long as you need the original format. Every few years or as needed, verify the data is intact, re-convert the data from the original format or, if that format is unreadable, the highest-fidelity published-standard format, to a currently-supported published standard, and save it to a currently-supported archival format.

      Interestingly enough, this is very similar to the process developed by the National Archives of Australia http://www.naa.gov.au/recordkeeping/preservation/digital/summary.html [naa.gov.au]. They are saving the 'original' document and a version converted to an open format (e.g. Open Document Format for word processing documents). If the format changes, they will use the converted version to generate something in the new format. They will be doing it for stuff that needs to be kept a lot longer than your arbitrary 50 ye
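      The "verify the data is intact" step can be as simple as a checksum manifest stored next to the data -- a sketch using sha256sum (any strong hash tool will do):

      # when writing the archive copy
      sha256sum * > MANIFEST.sha256
      # at every periodic check and after every migration
      sha256sum -c MANIFEST.sha256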

      • The amount of paper or stone is related to how important "really important" is.

        I'd say "really important" is stuff that needs to survive a collapse of technology or even civilization, but not the collapse of literacy. Things like the Rosetta Stone or the modern equivalents, basic instructions for subsistence farming, core religious texts such as the Bible and Koran, dictionaries, some history books, instructions for making a printing press and other basic inventions that could have been built 4,000 years a
  • Why ZFS - summaries include: - http://www.sun.com/2004-0914/feature/ [sun.com] - http://www.sun.com/bigadmin/features/articles/zfs_part1.scalable.html [sun.com]

    "Why ZFS for home": - http://uadmin.blogspot.com/2006/05/why-zfs-for-home.html [blogspot.com]

    "Here are ten reasons why you'll want to reformat all of your systems and use ZFS.": http://www.tech-recipes.com/rx/1446/zfs_ten_reasons_to_reformat_your_ [tech-recipes.com]...

    And some more technical explanations from Sun's Chief Engineer: - http://blogs.sun.com/bonwick/entry/zfs_end_to_end_dat [sun.com]

    • And because it's been around for nearly a whole year, you know it's perfect for long term archival storage.

      Look! There's even some blogs about it. What could go wrong?
      • ZFS has been around for much longer, and used in production systems (at least internally to Sun for years - much longer than the latest ReiserFS).

        Now, couple this with Sun's test lab, where they've subjected ZFS to MILLIONS of intentionally data-disrupting incidents, such as: reformatting hard drives in the pools, removing power from hard drives, writing random data to disks in the pools, pulling SCSI cables from systems, physically powering off the system, re-cabling the disks and boxes so that they are on
  • I also looked into this problem for storing files on large external hard drives. The conclusion that I came to in the end was that at this point in time there really is only one option if you want to be able to access the drive from Windows, Mac OS X and Linux. That option is the Mac HFS Extended file system. Yes, you do have to purchase MacDrive in order to access HFS+ with Windows, but it is a very well-established and popular product that works well and is going to be around for a long time, so it's a sa
  • by jafo ( 11982 ) *
    ZFS checksums everything on the filesystem. If you are using RAID-Z with ZFS, it can detect corruption of the underlying data and correct it. For example, if you have RAID-Z with ZFS across 3 drives, you can "dd if=/dev/urandom of=/dev/sdX" over one of them and then do a "zpool scrub", and it will figure out what was corrupted and fix it. This is one of the standard demos they show with ZFS.

    This is great. Previously I had implemented a fax archive for a client and it was getting corrupted periodically because of some ext3 fi
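    The demo goes roughly like this -- a sketch with file-backed vdevs and example paths; the dd step deliberately corrupts one vdev:

    mkfile 100m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
    zpool create demo raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
    cp -r /some/data /demo          # /some/data is a placeholder
    # scribble garbage over the middle of one vdev (skip the front so its label survives)
    dd if=/dev/urandom of=/var/tmp/d2 bs=1024k seek=10 count=20 conv=notrunc
    # scrub re-reads everything, spots the bad checksums, and repairs from parity
    zpool scrub demo
    zpool status -v demo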
  • FAT32 is the VFAT 32-bit MS-DOS file system made available in Windows 95. On CD-ROM/DVD-ROM it's essentially the same as the "Joliet" format. It supports file names up to 63 characters, subdirectories, and blanks in file names. Now, I could be wrong, but I think journaling is only important where you have transaction-based file systems, where you are doing update writes and want faster performance with the ability to recover in the event of failure of a transaction to finish, i.e. the computer is rebooted bef

  • Archival meaning -- read-only. Multiple OS support meaning -- standard.

    This cuts the field down. ISO 9660 would be a good bet, but is a bit "overkill". TAR format (which can be viewed as a "primitive" filesystem) would be my choice. Simple, can be read on all your target systems. If a tar client is not (for whatever reason) easily available, the data can still be simply extracted.

    Bad point: the "directory" can only be obtained by scanning the entire byte stream. If that is tolerable (and, by indexing the fi
