
Ask Slashdot: What's a Good Tool To Detect Corrupted Files?

Volanin writes "Currently I use a triple boot system on my Macbook, including MacOS Lion, Windows 7, and Ubuntu Precise (on which I spend the great majority of my time). To share files between these systems, I have created a huge HFS+ home partition (the MacOS native format, which can also be read in Linux, and in Windows with Paragon HFS). But last week, while working on Ubuntu, my battery ran out and the computer suddenly powered off. When I powered it on again, the filesystem integrity was OK (after a scandisk by MacOS), but a lot of my files' contents were silently corrupted (and my last backup was from August...). Mostly, these files are JPGs, MP3s, and MPG/MOV videos, with a few PDFs scattered around. I want to get rid of the corrupted files, since they waste space uselessly, but the only way I have to check for corruption is opening them up one by one. Is there a good set of tools to verify the integrity by filetype, so I can detect (and delete) my bad files?"
  • AppleScript (Score:3, Interesting)

    by noh8rz3 ( 2593935 ) on Monday May 07, 2012 @03:20PM (#39918477)
    An AppleScript / Automator script can step through files on a hard drive, open them, and catch a thrown error if the open fails. This is a good automated way to flag the bad ones. Not the fastest method, but it could run at night.

    You seem to be surprisingly OK with the fact that your computer crashed and a lot of your documents and media were corrupted, while your backup was months old. I would have been beside myself. Hulk smash! Please let us know what different setups you're exploring to avoid this.

    • Re:AppleScript (Score:4, Insightful)

      by dgatwood ( 11270 ) on Monday May 07, 2012 @03:31PM (#39918613) Homepage Journal

      But the open usually won't fail. Unless the error is within the header bytes of a movie or image, the media will open, but will appear wrong. Worse, there is no way to detect this corruption because media file formats generally do not contain any sort of checksums. At best, you could write a script that looks for truncation (not enough bytes to complete a full macroblock), or write a tool that computes the difference between adjacent pixels across macroblock boundaries and flags any pictures in which there is an obvious high energy transition at the macroblock boundary, but even that cannot tell you whether the image is corrupt or simply compressed at a low quality setting with lots of blocking artifacts.

      The short answer, however, is "no". Such corruption can't usually be detected programmatically.

      • by dgatwood ( 11270 )

        I should clarify. If you are intimately familiar with the format, and if it is a multi-frame format, such as a compressed audio or video format, it is possible to programmatically detect that there are frames that reference illegal frames, frames whose structure is not valid, etc. in much the same way that you can detect a JPEG file whose header is invalid.

        Again, though, none of this will be caught by merely opening the movie; the movie will generally play correctly up until the decoder encounters the error.

        • by vlm ( 69642 )

          The TL;DR version: this scenario is why you configure your MythTV box to store MPEG-TS, which has embedded CRC error detection and recovery, instead of MPEG-PS, which is only negligibly smaller - if you have the option.

      • Doesn't MPlayer report most file corruption to stdout or stderr even if the playback continues? You should be able to grep for it. Granted, it isn't bulletproof, but I often get warnings even when the playback seems fine - it seems to be sensitive. I don't think it would ignore jumbled sectors.
      • Re:AppleScript (Score:4, Interesting)

        by jasno ( 124830 ) on Monday May 07, 2012 @04:38PM (#39919503) Journal

        Here's what I did when I realized my mp3 collection on my Mac was slowly dying:

        find . -type f -print -exec sh -c 'cat "$1" > /dev/null' _ {} \;

        It takes a while, but for files with I/O errors you'll see a warning printed after the file name. Redirect the output (stdout and stderr) into a file and you can use grep (the -B option comes to mind) to get a list of the bad files.

        The sad thing is that Time Machine didn't seem to notice that the files were bad, so now the files are gone forever. Disk Utility didn't help.

        Shouldn't there be a way to find bad blocks on OS X? I looked around and all I could find were commercial products.

        • File corruption won't generate I/O errors, I don't think. Your system may be able to properly read data from the disks - data that it thinks is what you requested - it's just that the data is bad. A computer isn't generally going to be able to detect that without either knowledge of the file format or checksums.

  • "What's a Good Tool To Detect Corrupted Files?"

    BSOD?

  • by Anonymous Coward on Monday May 07, 2012 @03:24PM (#39918527)

    is urgency. Corrupted files have the ability to detect urgency and your discovery of them will come in a form compatible with the laws of Murphy.

  • No easy answer (Score:2, Insightful)

    by gstrickler ( 920733 )

    1. Compare to backup, files that match are ok.
    2. AppleScript option others mentioned may help reduce it further.
    3. Backup regularly, and verify your backup procedure.
    4. Anything else will cost you consulting rates.

  • by denis-The-menace ( 471988 ) on Monday May 07, 2012 @03:27PM (#39918567)

    2000-2001 MAF-Soft http://www.maf-soft.de/ [maf-soft.de]
    The version I have is v1.0.3.102

    It can scan single mp3s and entire folder structures for defects and logs everything if you wish. It will give you a percentage of how good the file is.

    Depending on the damage, you may be able to fix headers and chop off corrupted tag info with something like MP3Pro Trim v1.80.

  • Go nuts.
  • md5sum (Score:4, Interesting)

    by sl4shd0rk ( 755837 ) on Monday May 07, 2012 @03:29PM (#39918593)

    or sha1sum if you prefer. Automate in cron against a list of knowns.

    eg:
    $ md5sum /home/wilbur/Documents/* > /home/wilbur/Docs.md5
    $ md5sum -c /home/wilbur/Docs.md5
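
    For the cron half, a single crontab line is enough (the schedule below is just an example; cron will mail you whatever the pipeline prints, i.e. only the failures):

    # hypothetical crontab entry: verify nightly at 03:00, report only failures
    0 3 * * * md5sum -c /home/wilbur/Docs.md5 2>&1 | grep -v ': OK$'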

    • Re: (Score:3, Informative)

      by subtr4ct ( 962683 )
      This type of approach is automated in a python script here [micropipes.com].
    • by Dadoo ( 899435 )

      That's a pretty good idea, if you only want to detect corrupted files (and yes, I know that's what the OP said he wanted), but I can't believe no one's suggested par2, yet. It will not only detect corrupted files, but repair them, too. If he had used par2, he wouldn't have to delete them.
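
      For anyone curious, the par2cmdline workflow is roughly this (file names are just examples; -r sets the redundancy percentage):

      # create recovery data covering ~10% of the protected files
      par2 create -r10 photos.par2 *.jpg

      # later: verify against the recovery set, and repair if something rotted
      par2 verify photos.par2
      par2 repair photos.par2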

  • For JPEGs (Score:5, Informative)

    by Jethro ( 14165 ) on Monday May 07, 2012 @03:30PM (#39918603) Homepage

    You can run jpeginfo -c. I have a script that runs against a directory and makes a list for when I do data recovery for all my friends who don't listen when I tell them their 10 year old laptop may be dying soon.
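
    The core of such a script is a one-liner (untested sketch; it just keeps any line where jpeginfo complains):

    find . -type f -iname '*.jpg' -exec jpeginfo -c {} \; | grep -Ei 'warning|error' > bad-jpegs.txt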

    • by Volanin ( 935080 )

      Author here:

      This method detected a single corrupted picture.
      Probably my pictures were the least affected of all my data.
      Thanks for the great idea.

  • by vlm ( 69642 ) on Monday May 07, 2012 @03:31PM (#39918615)

    unix "file" is not the answer. For some formats it does as little as look at a couple header bytes. Its a great tool to guess a format. Its a terrible verifying parser and does nothing to verify content.

    An example of what I'm getting at, with some made-up details: unfortunately HTML is not like well-formed XML, and every viewer is different anyway, so the best way to figure out whether an HTML file is corrupt is, unfortunately, to pull it up in Firefox. Even that only detects corruption in the structure of the file; if the corruption is just a couple of bits, you end up with problems like tQis, where the only way to see that the 'h' got fouled up is to write more or less an IQ-100 artificial intelligence. All "file" is going to test is pretty much whether the file begins with or contains a regex something like less-than html greater-than (spelled out that way to get past the filters).

    For content you could F around with, for example, piping an mp3 file through a decoder and then through an averaging spectrum analyzer, and see if there's anything overly unusual in the spectrum. Also some heuristics, like: if the file is only 1 second long, then it's F'ed up.

  • by zdzichu ( 100333 ) on Monday May 07, 2012 @03:35PM (#39918673) Homepage Journal

    You need a good filesystem, with embedded data checksums and self-healing using redundant copies. For Linux, btrfs is fine. For Mac OS X & Linux, ZFS.

    • by ltwally ( 313043 )
      The best filesystem to survive a crash is a filesystem designed for an operating system that is expected to crash: NTFS.
      • by Volanin ( 935080 )

        Author here:

        The problem lies in finding a filesystem that can be accessed by all three OSes. I would go with NTFS as well, but last time I tried, MacOS could not write to it. Every guide out there recommends FAT32, but the 4GB file size limitation is a deal breaker for me.

          • I use RAR to split the >4GB files in half. To date I've only needed to do that once (a DVD rip).

        • by vux984 ( 928602 )

          10.5 and 10.6 (and I assume 10.7) have read/write NTFS support, but it's not enabled by default and is not officially supported.

          http://hints.macworld.com/article.php?story=20090913140023382 [macworld.com]

          Also, you are using Paragon HFS+ for Windows... you should already be aware they have Paragon NTFS for Mac.

          A bigger question is whether NTFS is the best filesystem to use, and that's a separate question entirely - one I don't know the answer to.

          So, if the primary OS were Windows... then I'd use NTFS.

          But if you spen

        • NTFS-3G supports writing to NTFS. AFAIK, most Linux distributions use it instead of the kernel driver, and there's an OS X port as well.

        • Finding a way to make the Mac read NTFS beats using MacDrive for HFS+ on the Windows side. NTFS just doesn't corrupt as easily with a power failure as HFS+, in my experience. Ideally, I would just use networked storage and access it from Mac OSX with afpd or NFS, from Windows with Samba, and linux with NFS.

      • The best filesystem to survive a crash is a filesystem designed for an operating system that is expected to crash: NTFS.

        I don't know if I should laugh or ask what evidence you have that NTFS is the "best".

      • The problem with that rationale is that the set of developers that make systems that crash often is highly correlated with the set of developers that make filesystems that corrupt data often.

      • Re:right filesystem (Score:5, Informative)

        by d3vi1 ( 710592 ) on Monday May 07, 2012 @04:36PM (#39919481)

        Two aspects to your problem:

        1) Recovering from the current situation

        If you didn't make ANY changes to the filesystem after it was corrupted, you still have a chance with software like DiskWarrior or Stellar Phoenix. Never work on the original corrupted filesystem unless you have copies of it. So grab a second drive, connect it over USB and, using hdiutil or dd, copy the corrupted filesystem to the second drive. Once you do that, use DiskWarrior or Stellar Phoenix on either one of the copies, while keeping the other one intact. Always have an intact copy of the original FS. You might be successful trying multiple methods, so KEEP AN INTACT COPY.

        2) Avoiding it in the future
        NTFS is good at surviving a crash if and only if the crash occurs in Windows. Paragon NTFS for Mac/Linux and NTFS-3G don't use journaling to its full extent (for both metadata and data). So, if you get a crash while in Mac OS X or Linux, chances are that you'll get data corruption.

        Same goes for HFS+. While Mac OS X uses journaling on HFS+, Linux doesn't. It's read-only in Linux if it has journaling. Furthermore, the journaling is metadata only in HFS+.

        Now we get to the last journaled filesystem available to all 3 OSs: EXT3. It's the same crap as above.

        Because of the three points above, I have a conclusion: what you're looking for (ZFS) hasn't been invented on any of the OSs that you're using.
        Thus, I have a simple recommendation:
        Use ZFS in a VMware machine exported via CIFS/WebDAV/NFS/AFP to Linux, Windows or Mac OS X. A small FreeNAS VM with 256MB of RAM can run in VMWare Player and Workstation on Windows/Linux and Fusion on OS X.

        ZFS uses checksumming on the filesystem blocks, which lets you know of the silent corruptions. Furthermore, by design, it will be able to roll-back any incomplete filesystem transactions. I've had my arse saved by ZFS more times than I care to remember. The most difficult thing for my home storage system is to find external disk arrays that give me direct access to all the disks (not their RAID crap). A proper home storage system is RAIDZ2 (basically RAID6) + Hot Spare.
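
        For reference, the setup described above boils down to something like this (device names are made up; adapt to your disks):

        # double-parity pool across six disks plus a hot spare
        zpool create tank raidz2 da1 da2 da3 da4 da5 da6 spare da7

        # walk every block and verify its checksum against the stored one
        zpool scrub tank
        zpool status -v tank    # lists any files ZFS could not repair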

        Another way is to have a simple, TimeMachine-like backup solution on at least one of your operating systems. But even that doesn't catch silent data corruptions, let alone warn you. As such, we get back to: ZFS.

  • by Anonymous Coward on Monday May 07, 2012 @03:36PM (#39918677)

    Tech Tool Pro, over on the Mac side, has a "File Structures" check which looks at a lot of different structured file types to make sure that their internal format is valid.

  • It's already too late, but I keep important files with par2 files. That way, when there's like 5% corruption, I can still fix the file.
    I do this with flac files and some datafiles.

    Also make sure you keep backups going. I guess this was your warning. Everyone needs one.

    • by Jazari ( 2006634 )
      I'll second that. QuickPar ( http://www.quickpar.org.uk/ [quickpar.org.uk] ) has been exceptionally useful to me over and over again. I can check file integrity, recover minor corruption, and revert to past file states if I accidentally modify old archived files. It's also free. The only unfortunate thing is that it doesn't seem to be under development anymore, but at least it still works with Win7/64.

      For archival purposes, I've started using WinRAR ( http://www.rarlabs.com/ [rarlabs.com] ) with the file authenticity and recovery options.

    • There is a good link here:

      http://ttsiodras.github.com/rsbep.html [github.com]

      This is a good method for creating par files etc. as part of your backups. He also has some other really good information up there about protecting data, especially creating backups under Windows:

      http://ttsiodras.github.com/win32backup.html [github.com]

        • Better to use CrashPlan (free). Back up to a remote computer, the internet, or your own disks in the background. Works for me (and lots of other people).

  • That seems very strange--the only files that should really be corrupted, unless something extremely rare and catastrophic happened, are the ones that were being written when power went out, or were cached. And even then, a flush usually flushes everything, or at least whole files at once, or areas of disk. Is the partition highly fragmented or something?

    I know this doesn't do much for your question, but that kind of failure mode is almost exactly what filesystems do their damnedest to avoid. HFS+, being journaled, should be even more proof against, well, exactly what happened to you. Maybe the Linux driver is poor, but man, if you got silent data corruption on a multitude of files that weren't even being written, that's really bad and the driver should be classified "EXPERIMENTAL" at best, and certainly not compiled into distros' default kernels.

    To answer your question, I don't have experience with any tools (I automate my backups, and any archival files go on a RAID volume that does a full integrity scan nightly), but once you find one, you should separate your files into two categories--"must be good", and "can be bad". The "must be good" files (serial #s, source code, etc.), you hand-check, so you know for certain that every one of them is good. It'll also motivate you to replace them now, instead of later when replacements will only get harder to come by. The "can be bad" files (music, pictures, etc.), you do the automated check on and then just delete as you run into ones that the check missed. This has the advantage of concentrating your effort into where it's useful. If you try to check all of your files, you'll just burn out before you finish. You may even want to do more advanced triaging, but you'll have to come up with the categories and criteria there. The main thing is, split this problem up.

    • by rrohbeck ( 944847 ) on Monday May 07, 2012 @04:28PM (#39919381)

      Very few filesystems keep checksums - only btrfs and zfs come to my mind.
      With defective hardware (RAM issues in main memory and disk or controller caches are fun) you can have silent corruption that goes on for a long time. Also bits on disks rot but those should give you a CRC or ECC error.

      • Yeah, that's what I was saying--it's pretty unlikely that the power failure caused this, so the author should try to find the true root of the problem.

    • The Linux HFS+ driver can't even work in write mode unless the journal has been deleted, so the journal isn't working when using the HFS+ partition under Ubuntu and probably Windows as well (author take note). I would not use that filesystem under Linux or Windows on a daily basis. Also, since the journal has been deleted, you are probably missing the safety of journaling under the native OSX as well.

      Author should also note that archival backups with md5 or sha256 checksums are probably the most straightforward approach.

  • by Bonteaux-le-Kun ( 1360207 ) on Monday May 07, 2012 @03:40PM (#39918739)
    You can just run mencoder or ffmpeg on all the mp3 and mov files (with a small shell script, probably involving 'find' or similar) and tell it to write the output to /dev/null. That should go through those files as fast as they can be read from disk and abort with an error on the ones that are broken. For the jpgs, you could try something similar with ImageMagick's 'convert', converting them to some format and writing to /dev/null, which also needs to read the whole file content and should abort if it's broken (one would hope). Those converters are really fast, especially ffmpeg, so this should complete in a reasonable time.
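
    A starting point for that kind of script (untested sketch; the extensions, log names and flags are just one way to do it, and it assumes ffmpeg and ImageMagick are installed):

    #!/bin/bash
    # decode media to nowhere and record files whose decode produces errors
    find . -type f \( -iname '*.mp3' -o -iname '*.mpg' -o -iname '*.mov' \) -print0 |
    while IFS= read -r -d '' f; do
        # -v error prints only real problems; -f null - decodes but discards the output
        if [ -n "$(ffmpeg -v error -i "$f" -f null - </dev/null 2>&1)" ]; then
            echo "$f" >> suspect-media.txt
        fi
    done

    # same idea for JPEGs: a failed decode makes convert return non-zero
    find . -type f -iname '*.jpg' -print0 |
    while IFS= read -r -d '' f; do
        convert "$f" png:/dev/null 2>/dev/null || echo "$f" >> suspect-images.txt
    done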
  • by ncw ( 59013 ) on Monday May 07, 2012 @03:42PM (#39918753) Homepage

    I'd be asking myself why lots of files became corrupted from one dodgy file system event. Assuming HFS works like file systems I'm more familiar with, it will allocate sequential blocks for files wherever it can. This means that a random filesystem splat is really unlikely to corrupt loads and loads of files. You might expect a file system corruption to cause a load of files to go missing (if a directory entry is corrupted) or corrupt a few files, but not put random errors into loads of files.

    I'd check to see whether files I was writing now get corrupted too. It might be dodgy disk or RAM in your computer.

    The above might be complete paranoia, but I'm a paranoid person when it comes to my data, and silent corruption is the absolute worst form of corruption.

    For next time, store MD5SUM files so you can see what gets corrupted and what doesn't (that is what I do for my digital picture and video archive).
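
    A minimal version of that habit looks something like this (the MD5SUMS file name is arbitrary):

    # inside the archive: checksum every file once
    find . -type f ! -name MD5SUMS -print0 | xargs -0 md5sum > MD5SUMS

    # later: re-verify and show only the files that changed or vanished
    md5sum -c MD5SUMS 2>&1 | grep -v ': OK$'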

    • The bit rot could have gone on for some time. How often do you check those videos or MP3s that you downloaded years ago?

    • I agree with this parent. Most likely there is a hardware failure, like the one that caused Intel to spend a billion dollars recalling Sandy Bridge motherboards for SATA errors. You need to isolate the problem to either a hard drive, ram, motherboard, cable, or even power supply and fix the root cause.
  • by Sepiraph ( 1162995 ) on Monday May 07, 2012 @03:49PM (#39918851)
    I'd recommend running a base OS and then run something like VMware workstation so that you run other OSes inside the main OS. One huge benefit is that you can have access to multiple OSes at the same time and you don't need to reboot into them either. With hypervisor technology getting common on desktop, there probably isn't any need to multi-boot unless you have a specific reason not to use virtualization.
  • zfs [wikipedia.org]! Works great. Included with FreeBSD 9 [freebsd.org], amongst other OSs.

    You might also enjoy John Siracusa's exhaustive review of filesystems [5by5.tv] on one of my favorite podcasts.

  • by mattpalmer1086 ( 707360 ) on Monday May 07, 2012 @03:53PM (#39918927)

    The JSTOR/Harvard Object Validation Environment:

    http://hul.harvard.edu/jhove/ [harvard.edu]

    It's specifically designed to first probabilistically identify files, then attempt to verify their format.

    Disclaimer: I haven't worked on it directly, but I did spend a number of years in the digital preservation space, so I probably know some of the people who have contributed to it.

  • Cast Detect Evil, Sense Motive, and Discern Lies on the potentially corrupted files.

    • by Volanin ( 935080 )

      Author here:

      Sorry, but I can't stand the Paladin of the party anymore, insisting on replacing the HD with a tried and true Bag of Holding.
      Thanks for the tip anyway.

  • Get Rid Of Paragon! (Score:5, Interesting)

    by Lord_Jeremy ( 1612839 ) on Monday May 07, 2012 @04:02PM (#39919031)
    Alright, I'm afraid I can't help with your verification problem, but I do have one piece of solid advice: get rid of Paragon HFS immediately!

    It is a truly shoddy piece of software that as of version 9.0 has a terrible bug that will cause it to destroy HFS+ filesystems. Google "paragon hfs corruption" and you will see many many horror stories from people who just plugged a Mac OS X disk into a Windows machine w/ Paragon HFS and then discovered the entire filesystem was hosed. In my dual-boot win/mac setup I replaced my copy of MacDrive with a trial version of Paragon HFS 9.0 from their website and every single one of the six HFS+ disks I had connected internally were damaged. Disk Utility couldn't do a thing and I had to buy a program called Diskwarrior to even begin to recover data. I ended up losing two disks worth of files anyway.
    http://www.mac-help.com/t12137-opened-hfs-drive-win7-paragon-hfs-now-wont-boot.html [mac-help.com]
    http://www.wilderssecurity.com/showthread.php?t=299306 [wilderssecurity.com]
    http://hardforum.com/showthread.php?t=1677099 [hardforum.com]
    http://www.avforums.com/forums/apple-mac/1509344-hfs-super-block-not-found.html [avforums.com]

    whew! Anyway the pain I went through after that software very nearly ruined my life was so great, I don't want it to happen to anyone else. According to their own website [paragon-software.com] 9.0 has this awful bug but they fixed it in 9.0.1. Evidently the trial download on the main page is still for version 9.0 and still has the disk destroying bug! Any software company that releases a filesystem driver with this terrible a bug (not to mention the numerous reports of BSODs and other relatively minor problems) clearly has terrible quality assurance and simply can't be trusted.
    • by Volanin ( 935080 )

      Author here:

      Just out of curiosity, I went to check the version of my Paragon installer and guess what... it was corrupted! Oh the irony!
      Windows is the OS I use least, and I have not booted it in the last month or so - unless Paragon silently corrupted something there previously and somehow "weakened" the filesystem integrity since then. Anyway, thanks for the tip. What do you currently use to read HFS+ in Windows?

      • I had been using MacDrive before trying out Paragon. The version of MD I had (8 I think?) no longer worked when I upgraded Windows on one of my computers so I looked around for something else before buying the MacDrive upgrade. I saw Paragon had a promotion where you'd get a discount on a new copy of HFS+ for Windows if you proved you were switching from a competing driver (making it cheaper than the MD upgrade) so that's when I installed the evil trial.

        It's only been a couple weeks since the disaster so
    • by macraig ( 621737 ) <mark@a@craig.gmail@com> on Monday May 07, 2012 @05:24PM (#39920171)

      Having nothing at all to do with Paragon (not that I'm a fan of the company otherwise), I had a very similar disaster occur with an external eSATA 5TB RAID 5 enclosure. It's one that uses an internal hardware RAID 5 circuit and doesn't require port multiplication, so when connected it appears to the host as a single large volume. At the time I was swapping it between a Linux (Ubuntu) system and a Windows 7 system; it was of course configured as GPT. Eventually I connected it to the Windows 7 system and during boot Windows declared there were problems and initiated chkdsk. Chkdsk ran for more than 18 hours and when it was done, most of the files in the volume were hopelessly corrupted. Upon detailed inspection, I found that blocks of all the files were swapped and intermingled, as if something had made a jigsaw puzzle out of the MFT and couldn't reassemble Humpty Dumpty. Was it chkdsk itself that caused the damage? Was it the swapping between two machines and operating systems (both GPT compliant)? I suspect it was actually caused by chkdsk, but could never prove it.

  • Just have your OSX do a repair - it could be that certain VTOC or directory tables were damaged, and a repair may fix it. The files themselves should be OK, but the pointers to them are fubared.

    Also try something like http://www.cgsecurity.org/wiki/PhotoRec [cgsecurity.org] or similar to recover deleted files. There's one for OSX. Run it after a repair, and you should get most of your crap back.

  • You clearly need an image-based backup system to prevent this from happening again. It needs to be a cron job (or task scheduler job) run at regular intervals when storage is available. Ideally, it needs to be network storage, so that a sudden disconnect (absence of power) cannot easily corrupt the backup. There is an open source version of Ghost, partd, rsync... options for you, though I am relatively new to Linux so I don't know what the appropriate option is for you. Time Machine you could use if you
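
    If you go the rsync route, the core of it is only a couple of lines run from cron (the script path and host name below are hypothetical):

    #!/bin/sh
    # hypothetical /etc/cron.daily/backup-home: mirror /home to a network host
    rsync -a --delete /home/ backupbox:/backups/macbook-home/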
  • by jimicus ( 737525 ) on Monday May 07, 2012 @04:13PM (#39919197)

    The bad news is I don't know of any (and I don't think you'll find any) easy, one-shot tool to run across the whole lot that gives you a simple "corrupted yes/no?" answer to lots of different filetypes.

    The good news is it'd be reasonably easy to lash together something in bash, kick it off overnight and come back in the morning to a list of probably-corrupted files.

    In pseudo-bash (because I haven't the time to write it out and check it works properly), something like this would be a good start:


    #!/bin/bash
    # check.sh (call it whatever you like) -- run against one file at a time;
    # prints the file name if it looks corrupt

    function checkJpeg {
        jpeginfo -c "$1" || return 1
        return 0
    }

    function checkPdf {
        # one rough option: pdfinfo (poppler-utils) exits non-zero on badly broken PDFs
        pdfinfo "$1" > /dev/null 2>&1 || return 1
        return 0
    }

    FILETYPE=$(file -b "$1")
    case "$FILETYPE" in
        JPEG*)
            checkJpeg "$1" || echo "$1" ;;
        PDF*)
            checkPdf "$1" || echo "$1" ;;
    esac

    Then run it against every file in /home with the help of find, e.g. find /home -type f -print0 | xargs -0 -n1 ./check.sh (or whatever you named the script). This would give you a list of potentially corrupted files. Up to you how you deal with it - personally I wouldn't run rm against it, in case you find files that can be rescued or your checks aren't as perfect as you'd like.

    For extra credit, determine the expected filetype based on file extension and then use file(1) as your first "is it corrupted?" test - that way you'll spot files that are too corrupted for file(1) to work reliably.
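
    Something along those lines (untested sketch; jpg is just the example extension):

    # flag files whose contents no longer match what their extension promises
    find /home -type f -iname '*.jpg' -print0 |
    while IFS= read -r -d '' f; do
        file -b "$f" | grep -q '^JPEG' || echo "suspect: $f"
    done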

    • Actually there is a tool that does all of that already: JHOVE - JSTOR/Harvard Object Validation Environment.

      http://hul.harvard.edu/jhove/ [harvard.edu]

      It's used in the digital preservation field, for example in an archive to try to figure out what they've got and what state it's in.

    • by macraig ( 621737 )

      All such a process can do is verify that the file header appears well-formed. That might flag a few bad apples, but the ones with good headers and corrupted contents will slip through the cracks.

  • http://en.wikipedia.org/wiki/Comparison_of_file_verification_software [wikipedia.org]

    md5sum is the one I know best, but that's because my computing is unix-centric.
  • ... yes, this is not what you want to hear at this point, but try to have a positive take on this.

    Last year during a routine Windows 7 installation, my second hard drive, from which I dual boot my 90%-of-the-time-in-use Linux, was destroyed. Either it was a coincidence that it occurred during the Win7 installation or a nefarious plot, but the hard disk, a 1TB Seagate SATA, developed an unrecoverable click of death.

    On that hard drive I had my short stories which I had written in college and the intervening years s

  • Lacking not only a backup but also PAR(2) and MD5 files, manual inspection of each and every file is the ONLY way you can determine their integrity. There is no automagic after-the-fact integrity check. If you had MD5 sums for every file, you could at least check their integrity. Some PAR2 files would not only verify but possibly repair, if the damage wasn't more extensive than the PAR recovery blocks. Of course, if you're willing and able to do all that, you'd probably have had full and differential backups in the first place.

  • by sjames ( 1099 ) on Monday May 07, 2012 @05:26PM (#39920207) Homepage Journal

    George is your best bet. He's not bright enough for most support tasks, but he can certainly handle this one.

  • First, let's presume you're running Linux for what follows.

    1. You're going to want to be familiar with both file(1) and find(1). File(1) is pretty straightforward, but be aware that its heuristics for file type detection vary in accuracy. If you're not find-literate, then at least get used to this construct:
    find /foo/bar -name "*.jpg" -print | sort -u > /tmp/files.jpg
    which will recursively search directory /foo/bar for all files suffixed ".jpg" and dump a sorted list of them into /tmp/files.jpg and this one:
    find /foo/bar -type f -print | sort -u > /tmp/files.all
    which will search the same directory, but will return a list of all (plain) files, that is, things which are not directories, devices, sockets, etc., sorted and dumped into file /tmp/files.all. (Note that the method by which find traverses filesystem trees won't yield sorted output, hence the need to pipe these through sort.)

    2. You now have (a) a list of all jpg files and (b) a list of all files. (I picked jpg arbitrarily to illustrate the process, by the way.) You can now generate a list of all files that are NOT jpg with this:
    comm -13 /tmp/files.jpg /tmp/files.all > /tmp/files.all2
    The point of this exercise is that you can now repeat steps 1-2 with .gif, .mpg, etc., as you deal with each file type and reduce the remaining list to those awaiting your attention. /tmp/files.all3, /tmp/files.all4, etc. will each be smaller and eventually, if you deal with all files, /tmp/files.allX will be zero-length. Note that not all files have suffixes, of course -- and those without will likely be the ones requiring the most manual effort. If you want to know which suffixes are most numerous, something like
    sed -e "s/.*\.//" /tmp/files.all | sort | uniq -c | sort -n
    will give you a rough idea.

    3. Now then...you'll need some tools for dealing with each file type. The first tool I'd use is stat(1), to check sizes for plausibility. Then things like jpeginfo(1), mp3val(1), tidy(1) will be some help, but of course you'll need to distinguish between "error message emitted because file is corrupt" and "error message emitted because file has minor issues...that it had BEFORE this episode". You may need to check the Ubuntu repository for tools you don't have; you may need to do some searching on the web for "Linux tool to check PDF integrity" and similar.

    4. If you have backups of any kind and can restore them, then you could try using sum(1) to compare checksums pre- and post-incident. This is a filetype-invariant method, which is good because it lets you skip the above...but bad because all it will tell you is "different", not "mildly damaged" or "horribly corrupted" or something in between.

    5. I would recommend against deleting anything at this point. Instead, move it to secondary storage, like an external drive. I don't have a specific reason for advising this, other than "many years of experience doing partially-manual, partially-automated things like this and a recognition that sometimes errors in the methodology...or fatigue introduced by the tedium of executing it...lead to mistakes".

    6. Good luck.
