Correcting ext3 File Corruption?
An anonymous reader asks: "I am looking for an ext2/ext3 expert. I have a small file (1395 bytes) that appears HUGE when running ls -l (70368744179059 bytes [yes, that's 70 terabytes]). This causes a problem because tar wants to back up all those extra bytes. We have backups of the file elsewhere, but I'm afraid to delete it. When I remove it, what is going to happen to the file system? (Kernel version is 2.4.18 on i686.)
This seems to be a pretty bad math error on the part of the file system. It's a really weird error, but it could just be a corrupted sector on the drive. Has anyone else seen this before, and does anyone have ideas as to whether such files can be recovered? Is this problem just a small glitch or an omen of an impending filesystem crash?
"Here's what the files look like on the system:

[root@secure parse]# ls -l HTMLFrameSet.class
-rw-rw-r-- 1 root devel 70368744179059 Mar 20 09:05 HTMLFrameSet.class
[root@secure parse]# wc HTMLFrameSet.class
15 58 1395 HTMLFrameSet.class

...and the error message from tar:

tar: HTMLFrameSet.class: File shrank by 70368744169331 bytes; padding with zeros

No wonder my backups didn't finish! :-)"
Re:that one is EASY to fix... (Score:1)
Re:that one is EASY to fix... (Score:2)
I read on the interweb that that's how you're supposed to do it... They wouldn't lie to me, would they?
fsck (Score:1, Informative)
And when you run "fsck"? (Score:5, Informative)
Since you appear to use tar for backups, you could also back up the affected filesystem using the exclude (-X [filename]) option first, which might be a *really* good idea. ;)
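A minimal sketch of that -X route, with made-up file names standing in for the real ones:

```shell
# Sketch: skip the suspect file during backup via tar's exclude-from option.
mkdir -p demo
printf 'bogus size'  > demo/HTMLFrameSet.class
printf 'fine'        > demo/Other.class
echo 'HTMLFrameSet.class' > skip.list
tar -cf backup.tar -X skip.list -C demo .
tar -tf backup.tar   # Other.class is listed; HTMLFrameSet.class is not
```

GNU tar matches exclude patterns against any name component by default, so the bare file name is enough here.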
Re: And when you run "fsck"? (Score:2, Informative)
You can back up the drive image too, so if the file is irreplaceable and corrupted, you can try more than one recovery method safely.
Also, to fsck
Re:And when you run "fsck"? (Score:3, Informative)
Before you try to recover.... (Score:2, Informative)
Then use that backup-file to try out whatever other posters here suggest.
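A sketch of taking that backup first. On the real machine this would be `dd` against the affected partition (the device name below is a placeholder); the demonstration here uses a scratch file standing in for the device:

```shell
# Sketch: image the whole "partition" raw before experimenting, so every
# recovery attempt is reversible. /dev/hda5 would replace fake-partition.
dd if=/dev/zero of=fake-partition bs=1k count=1024 2>/dev/null
dd if=fake-partition of=partition-backup.img bs=64k 2>/dev/null
cmp fake-partition partition-backup.img && echo 'image matches'
```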
Re:another solution (Score:2)
Re:Try accessing from another machine.... (Score:1)
problem solved?
-transiit
Re:Try accessing from another machine.... (Score:1)
-transiit
Have you contacted SCT? The Creator of Ext3? (Score:2)
dd (Score:3, Interesting)
dd if=[file] of=[new file] bs=1 count=[length]
I strongly suggest rebuilding the affected filesystem, that kinda weirdness can be indicative of deeper problems.
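A toy run of that dd recipe, with made-up file names: copy only the first N bytes (here 9) and leave the bogus tail behind.

```shell
# dd with bs=1 count=N copies exactly the first N bytes of the input
printf 'good data (then 70TB of nothing)' > suspect.bin
dd if=suspect.bin of=rescued.bin bs=1 count=9 2>/dev/null
cat rescued.bin    # prints: good data
```

For the file in question, count would be 1395, the length wc reported.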
"make test" in Perl builds used to do this.. (Score:2)
Try this:
cat
worked for me in vanilla ext2.
Should (?!?) work in ext3.
Re:"make test" in Perl builds used to do this.. (Score:2)
Sparse file? (Score:3, Informative)
-S, --sparse
handle sparse files efficiently
I'm not really familiar with them, but haven't seen any other mention here.
I know it's possible to put a file on a floppy that won't fit on your hard drive.
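A quick way to see both effects, assuming GNU tar and a filesystem that supports holes:

```shell
# Create a file with a 100 MB apparent size but almost no allocated blocks:
# seeking past end-of-file and writing one byte leaves a hole behind.
dd if=/dev/zero of=big.sparse bs=1 count=1 seek=104857599 2>/dev/null
ls -l big.sparse                 # apparent size: 104857600 bytes
tar -cSf sparse.tar big.sparse   # -S: detect and store holes efficiently
ls -l sparse.tar                 # the archive stays small
```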
Re:Sparse file? (Score:1)
cat $FILE > $NEWFILE
Does Redhat not ship with 'cp' anymore?
Re:Sparse file? (Score:2)
Of course, I have no way to test this right now, so I leave it as an exercise to the reader. (:
hex (Score:5, Insightful)
the reported size in hex is
0x400000000573
and the actual size in hex is
0x573
Looks like a single extra bit got flipped when the size was stored.
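The arithmetic checks out; 64-bit shell arithmetic is enough to verify it:

```shell
printf '%x\n' 70368744179059        # reported size in hex: 400000000573
printf '%x\n' 1395                  # actual size in hex:            573
echo $(( 70368744179059 - 1395 ))   # difference: 70368744177664
echo $(( 1 << 46 ))                 # 2^46:       70368744177664, one bit
```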
Re:hex (Score:1)
Mad props to thinking in Hex. I have a hard enough time getting by in decimal.
Deletion question (Score:3, Interesting)
I'd bet there are problems with the whole filesystem, but to continue with what he asked:
It seems to me that he should be able to rm the file without any worries, after making a good copy. Only the inode that points to the falsely enlarged file will be removed, and the data blocks won't be touched, right?
If there is other data in the misallocated blocks, that data should either have its own references, or it's already as good as deleted anyway.
This is a sparse file.... (Score:5, Informative)
This is easy to simulate by writing a small program that scribbles a few bytes at offset zero, then does an fseek out to some insanely high offset, and scribbles a few bytes there. Close the file, do an ls, and see the huge file, but then note that it only takes the space of two blocks on your filesystem. Imagine the fun you can have with this trick at parties!
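The same experiment in shell, using dd's seek in place of fseek (a 1 GiB offset here, to stay within common filesystem limits; file name is made up):

```shell
# a few bytes at offset 0, a few bytes ~1 GiB out, and a hole in between
printf 'start' > party.trick
printf 'end' | dd of=party.trick bs=1 seek=1073741824 conv=notrunc 2>/dev/null
ls -l party.trick    # apparent size: 1073741827 bytes
du -k party.trick    # allocated: a few KB at most
```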
Every UNIX file system I've ever dealt with handles this the same way.
tar and other programs should have switches to deal with sparse files correctly.
If you're concerned about what's in it, cat it to od. I believe od is smart enough to collapse zero blocks in its display. That way you can see if there is any real data at some pointer far into the file.
If this is a commercial closed-source package where you can't verify what it's doing, I'd strongly suggest leaving it alone and contacting vendor to see if this behavior is normal.
Re:This is a sparse file.... (Score:4, Interesting)
Tar does deal with sparse files correctly, and if this were one, he wouldn't be having trouble.
Re:This is a sparse file.... (Score:1)
The hard drive is probably on the way out
Well, he did say it was an IBM Laptop....go figure
Re:This is a sparse file.... (Score:2, Informative)
what could have happened ? (Score:1)
Try the mailing list (Score:5, Informative)
https://listman.redhat.com/pipermail/ext3-users/2
another ext3 question (Score:4, Funny)
I've got a ThinkPad running RH 7.3 with two ext3 partitions. Being a laptop, it has occasionally had its batteries die and been shut down improperly. Invariably, there has been a subsequent long fsck.
Isn't the whole point of ext3 so I don't have to go through this pain? This was an extremely generic installation of 7.3, why am I seeing no benefit to ext3?
Thx,
SuperID
EXT3 has failed me as well. (Score:1)
This has happened more than once too... I can't believe people actually use EXT3, and think their data is safe.
Where I work, we have machines running XFS, JFS, EXT3, and ReiserFS. EXT3 is the only filesystem we have problems with.
I especially like the 1.5-hour-long fsck runs on one machine with its 120 GB data partition.
Re:EXT3 has failed me as well. (Score:2)
Is she too cheap to let you get life insurance? Medical? Comprehensive on the car? If not, explain to her that protection from data loss, or not having to reboot after a power failure or glitch, is just a fringe benefit; the real reason for the UPS is that it protects your expensive-to-replace electronic equipment from damage due to the electrical, thermal, and mechanical shock caused by glitchy power.
You can probably convince her that you need a second one for the TV and VCR.
Re:EXT3 has failed me as well. (Score:2)
Re:EXT3 has failed me as well. (Score:1)
or perhaps your journal is screwed up, and you might need to rebuild it with -whatever command it is to rebuild the journal-
I'm also going to go along with the hypothesis that one bit on this guy's drive is likely flipped, and that he should back up everything but that file (which would also turn up any other files affected the same way), do a full fsck, and perhaps even completely reformat that partition with a bad-block check.
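The journal-rebuild command alluded to above is tune2fs, from e2fsprogs; sketched here on a scratch image file rather than a real device (on a live system the filesystem would have to be unmounted first):

```shell
# make a small ext3 filesystem in a file, then cycle its journal
dd if=/dev/zero of=scratch.img bs=1k count=16384 2>/dev/null
mke2fs -q -F -j scratch.img             # ext2 + journal = ext3
tune2fs -O '^has_journal' scratch.img   # drop the (possibly damaged) journal
tune2fs -j scratch.img                  # recreate it
e2fsck -f -p scratch.img                # full check; add -c on a real disk
```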
Re:EXT3 has failed me as well. (Score:1)
This isn't correct. The ext3 module should be in the initrd, which means it doesn't need to be statically compiled in for the initial rootfs mounting. It may be that the mkinitrd isn't adding the module as it should.
Re:EXT3 has failed me as well. (Score:1)
Re:EXT3 has failed me as well. (Score:1)
What are you talking about? It's initrd, loaded by the boot-loader, not /sbin/init.
Re:EXT3 has failed me as well. (Score:2)
Re:EXT3 has failed me as well. (Score:1)
It's not the controllers, it's not the cables. These drives all ran EXT2 just fine for months. EXT3 just can't handle the amount of data we're mashing through this machine.
Re:another ext3 question (Score:2)
Re:another ext3 question (Score:1)
I've had several experiences with power outages due to storms and id10ts blowing breakers. I've never once had an issue with EXT3; every system started right back up, no prob. (And, yes, we now have UPSes.)
Here's an idea - shutdown your machine just before the battery dies. Or call IBM and tell them they need to replace your battery...
Sure it's ext3? (Score:1)
That would certainly explain why it fscks all the time after reboots. Run 'mount' without any parameters and also check /proc/mounts (I think; I'm not in front of a Linux box right now) and see if they both say ext3.
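A quick way to compare the two views (standard Linux tools; in `mount` output the filesystem type is the fifth field, in /proc/mounts the third):

```shell
# what the mount table claims for / ...
mount | awk '$3 == "/" { print $5 }'
# ...versus what the kernel itself reports
awk '$2 == "/" { print $3 }' /proc/mounts
```

If the first says ext3 but the kernel says ext2, the journal isn't actually being used.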
Hope that helps,
Michel
Lossy compression (Score:2)
ext3 (Score:1)
Just delete it. (Score:2)
I've seen this and a WARNING (Score:1)
I've seen this. In my case, it was fixed by unmounting and mounting the filesystem again. I've also seen files that one command (like find or rm -rf) would see as a directory and another would see as a file. I don't understand how there can be differences, given that they should all be using the same C library interfaces. These have always been recoverable, however.
Also, I experienced something considerably more distressing: data corruption. After reading the benchmarks comparing ReiserFS and ext3 mounted with 'data=ordered' and 'data=writeback', I decided to try writeback mode. It seemed okay for a while, but lately, because of the heat, my computer has been shutting itself off. Once, I came back and found that after hitting the reset button, my Mozilla bookmarks were reduced to a small portion of what they ought to have been. An image I had been working on and saved had been replaced by the contents of several e-mail messages. rxvt would no longer start correctly from the KDE panel, even though its properties looked okay; I re-added the button and it started correctly. There were other things awry too, and probably things I haven't found yet.
I was using the "official" kernel from Red Hat for 7.3, 2.4.18-5. In summary, DO NOT USE data=writeback for now.
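The journaling mode is chosen at mount time; an /etc/fstab entry pinning the safer default, data=ordered, might look like this (device and mount point are placeholders):

```
/dev/hda5   /home   ext3   defaults,data=ordered   1 2
```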
Re:I've seen this and a WARNING (Score:1)
I should add that this is a SCSI drive, not a funky IDE drive with a non-disableable (!!) write cache.
Re:I've seen this and a WARNING (Score:1)
Um, yes, that's what Writeback does. From the mount(8) manpage:
Data ordering is not preserved - data may be written into the main file system after its metadata has been committed to the journal. This is rumoured to be the highest-throughput option. It guarantees internal file system integrity, however it can allow old data to appear in files after a crash and journal recovery.
BTW, I've had the same thing happen to me on Reiserfs.
Re:I've seen this and a WARNING (Score:1)
Yes, I know. The thing was, though, that much of this data should have already been committed--the image I saved 10 minutes or so before I left, which means it should have been flushed from the cache. I can understand volatile data like my bookmarks being lost, but not the image file.
You've got it backed up - (Score:1)
Your next step is to blow the disk away and restore.
By the time you get a coherent answer from us, you'd be back up and running.
Alternatively, if you bought the retail version of RedHat you could call them, or there's always the free newsgroups and messageboards. Give them a shot.
Same type of thing happened to me... (Score:1)
Instead of dd if=/tmp/imagefile.img of=/dev/fd0 bs=1440k,
I did dd if=/tmp/imagefile.img of=/dev/hda bs=1440k
Whoops. After restoring my MBR and partition table, I still had to deal with the fact that I overwrote the first 1438KB of my root filesystem with effectively random data.
e2fsck -y
The way I finally fixed it was by running debugfs (not tune2fs) and removing the file by hand. It's fairly straightforward, since debugfs has an interface similar to file navigation from a shell prompt (ls, cd, etc.). Just navigate to the target directory and remove the file (ls shows the inode associated with it). You probably want to run e2fsck one more time to be sure.
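The interactive filesystem debugger that ships with e2fsprogs is debugfs; its -R flag runs a single command non-interactively. Sketched here on a scratch image instead of a live disk (all names are made up):

```shell
# build a throwaway ext3 image, plant a file, then remove it by hand
dd if=/dev/zero of=fs.img bs=1k count=16384 2>/dev/null
mke2fs -q -F -j fs.img
printf 'junk' > localfile
debugfs -w -R 'write localfile badfile' fs.img 2>/dev/null  # copy a file in
debugfs -w -R 'rm badfile' fs.img 2>/dev/null               # remove its entry
e2fsck -f -y fs.img || true  # nonzero exit just means errors were corrected
```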
Happy ending: I'm still using the filesystem that dd stomped all over and luckily lost only a handful of unimportant files.
Hope this helps...
-Fat Fingers
funny. (Score:2)
stick with a real filesystem, get a Sun, HP, IBM, or SGI and use their journaling filesystems.. you'll never want to use ext* again.
Use Ghost for backup before you touch it (Score:1)
Afterwards, you can do whatever experiments you want with it, and still be on the safe side.
Re:Use Ghost for backup before you touch it (Score:1)
have you tried.. (Score:1)