
Is ext4 Stable For Production Systems? (289 comments)

Posted by Soulskill
from the work-in-progress dept.
dr_dracula writes "Earlier this year, the ext4 filesystem was accepted into the Linux kernel. Shortly thereafter, it was discovered that some applications, such as KDE, were at risk of losing files when used on top of ext4. This was diagnosed as a rift between the design of the ext4 filesystem and the design of applications running on top of ext4. The crux of the problem was that applications were relying on ext3-specific behavior for flushing data to disk, which ext4 was not following. Recent kernel releases include patches to address these issues. My questions to the early adopters of ext4 are about whether the patches have performed as expected. What is your overall feeling about ext4? Do you think it is solid enough for most users to trust it with their data? Did you find any significant performance improvements compared to ext3? Is there any incentive to move to ext4, other than sheer curiosity?"
  • by eldavojohn (898314) * <eldavojohnNO@SPAMgmail.com> on Saturday May 30, 2009 @12:39PM (#28150149) Journal

    Is ext4 Stable For Production Systems?

    Probably.

    Is there any incentive to move to ext4, other than sheer curiosity?

    OK, so I'm guessing production = income = your ass? Let me turn your question back to you by asking, "What is driving this need to move to ext4?" Because so far, all you've told me is that you are considering risking your ass for sheer curiosity.

    I may be grossly misinformed but that is how the question sounds to me. And by "your ass" I don't mean oh-no-we-had-a-service-outage-for-five-minutes ... no, we could have a customer on the phone saying, "You mean to tell me that the modifications being made to my site for the past 24 hours are gone?!"

    If it ain't broke, don't fix it!

    I don't know about you but I'm too busy dealing with shit like this [youtube.com] to ponder new potential problems I can put into play.

    Look through this page [wikipedia.org] for a rough comparison of ext4 with other file systems. There's a better list of features for ext4 here [wikipedia.org] that will tell you why you might need to switch to it. It is backward compatible with ext3 and ext2 so moving to it may be trivial. If you're dealing with more than 32000 subdirectories or need to partition some major petabytes/exabytes then you might not have a choice. Some of these benefits are probably not worth risking your ass for, but if there's a business need that cannot be overcome any easier way then back your shit up and do rigorous testing before you go live with it. If you're using Slashdot to feel out if the majority of users scream OMGNOES so you don't waste your time doing that, then that's fine. Just don't do this if you don't have to.

    I tell you what: there's a $288 desktop computer at Dell today [hot-deals.org] that you can buy, put ext4 on it with your OS of choice and your application(s), and whipping-boy it into next century without risking anything. Where I work we have two servers in addition to our production servers. I don't think this is an uncommon scheme, so if you have a development server, throw it on there and poke it with a stick. Then move it to the testing server and let your testers grape it [youtube.com] for two weeks. Then you'll know.

    • by Joce640k (829181) on Saturday May 30, 2009 @12:50PM (#28150233) Homepage

      > If it ain't broke, don't fix it!

      This.

    • Re: (Score:3, Insightful)

      by BrokenHalo (565198)
      A shorter approach to the question:

      What do I gain by running with ext4?
      Is that gain worth the time spent changing what I've got?

      If the answer to the first question is that ext4 is cool and shiny, and the answer to the second is unknown, the OP has his answer.

      Filesystems are one thing we need to be VERY conservative about. We need to be certain that it works reliably, because we do not need to find our work disappearing out the end of our backup cycle after having discovered problems too late. (Yes, I kn
      • by Jurily (900488)

        What do I gain by running with ext4?

        And also, "What do I lose?". Ext4 is nowhere near trustworthy in my eyes. I'll probably switch about the same time I abandon KDE 3.5.

        • by DJRumpy (1345787)
          Why does everyone keep speaking about EXT4 as if it's broken? It's working exactly as designed. It's the applications that need fixing, no?
          • by Jurily (900488) <[jurily] [at] [gmail.com]> on Saturday May 30, 2009 @03:09PM (#28151213)

            It's working exactly as designed. It's the applications that need fixing, no?

            Does it matter whose fault it is when users are losing config files? It worked fine before, and now one of my basic expectations concerning Linux is broken: that no matter what happens short of hardware failure, I will not lose the files I already have. We're disappointed, and pointing fingers does not help.

            • by Ed Avis (5917) <ed@membled.com> on Saturday May 30, 2009 @03:55PM (#28151685) Homepage

              The point is that you have expressed all sorts of fear about ext4 - oh no, I'm not letting it near my production boxes - but you have not applied the same standard to the applications that trashed their config files when run on ext4. Even though, strictly speaking, it is the applications that are buggy. You should be equally enthusiastic about getting rid of KDE and any other software that trashes configuration files; otherwise it looks like you are playing favourites and blaming ext4 in order to overlook the bugs in the apps you're attached to.

            • Re: (Score:3, Insightful)

              by iYk6 (1425255)

              Does it matter whose fault it is when users are losing config files?

              Finding out where the problem lies is a pre-requisite for fixing it.

              It worked fine before, and now one of my basic expectations concerning Linux is broken: that no matter what happens short of hardware failure, I will not lose the files I already have.

              The out-of-spec-apps-saving-files-on-ext4-loses-files bug is only a problem with hardware failure.

              We're disappointed, and pointing fingers does not help.

              Well, sure, it doesn't help now. ext4 was quickly amended to behave more like ext3, and there is no reason to bitch about the past.

          • Re: (Score:3, Insightful)

            by whoever57 (658626)

            Why does everyone keep speaking about EXT4 as if it's broken? It's working exactly as designed.

            But is the design any good? If the advantage of EXT4 is better performance, how much of that performance improvement will be lost once the applications are fixed?

          • by spitzak (4019) on Saturday May 30, 2009 @10:38PM (#28154895) Homepage

            EXT4 is broken.

            Posix requires that writing a file and then renaming it to a new location is an ordered atomic operation. Say file B already exists. You write file A, then close it, then rename (mv) it to B. Another program running at the same time opens B and reads it. It will get one of these two results, and NO OTHER RESULT:

            1. It sees the old contents of B
            2. It sees what was written to A.

            EXT4 (before these patches) could result in the following result if your machine crashes and you start it again and look at B:

            3. B is empty (B may also be some partially-written version of A, but empty is the most common).

            Now it is true that Posix says that if the machine crashes, all bets are off. So yes EXT4 is being technically correct. But it would be equally technically correct if all the files on the disk were empty so this is pointless.

            EXT4 promises to make crashes recoverable. This implies to me that after you recover from a crash, you will be left in a state allowed by POSIX. This means either you get the old contents of B or the new full contents of A, and EXT4, by allowing a different result, is breaking its design and promise.
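For reference, the write-then-rename pattern this whole thread revolves around can be sketched in Python using os-level calls (the file names here are illustrative):

```python
import os

def atomic_replace(path, data):
    """Replace `path` so readers only ever see the old or the new contents.

    On a running system the rename is atomic. The pre-patch ext4 issue:
    after a crash, the rename's metadata could be on disk before the data
    blocks, leaving a zero-length file that never existed while running.
    """
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomic on POSIX: readers see old or new, never a mix
```

Adding an os.fsync(fd) before the close is the step the ext3-era apps skipped; the post-patch ext4 behavior effectively performs that flush when a rename overwrites an existing file.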

    • by stinerman (812158)

      It is backward compatible with ext3

      Not if you decide to use extents, which is a major reason why you'd want to use ext4. Per your link:

      The ext3 file system is partially forward compatible with ext4, that is, an ext4 filesystem can be mounted as an ext3 partition (using "ext3" as the filesystem type when mounting). However, if the ext4 partition uses extents (a major new feature of ext4), then the ability to mount the file system as ext3 is lost.

      But then again, if you're looking at ext4 just for extents, th

    • by identity0 (77976)

      >I may be grossly misinformed but that is how the question sounds to me.

      You are. The question is clearly asking about normal users, which is NOT uber-leet production $$$$ systems.

      > My questions to the early adopters of ext4 are about whether the patches have performed as expected. What is your overall feeling about ext4? Do you think it is solid enough for most users to trust it with their data? Did you find any significant performance improvements compared to ext3? Is there any incentive to move to ext4

  • Ye (Score:5, Funny)

    by identity0 (77976) on Saturday May 30, 2009 @12:41PM (#28150165) Journal
    I've been running ext4 on my system and everything's fi
  • Wrong question (Score:5, Insightful)

    by AmiMoJo (196126) <mojoNO@SPAMworld3.net> on Saturday May 30, 2009 @12:41PM (#28150167) Homepage

    You are asking the wrong question. Ext4 does not need fixing, the apps do.

    Are your apps patched yet?

    • Re:Wrong question (Score:5, Interesting)

      by QuoteMstr (55051) <dan.colascione@gmail.com> on Saturday May 30, 2009 @12:54PM (#28150261)

      Face it: your side lost. "fsync everywhere" is an infeasible, untenable, and useless position to take.

      fsync-on-rename creates a much better environment for application developers and users alike. The Right Thing happens by default, and I maintain that nobody actually wants the unsafe rename behavior. Allowing an application "choice" in this respect is a red herring.

      The only improvement I'd make is to flush the file involved on every rename, not just renames that happen to overwrite an existing file. Under the current scheme, an application doing the write-close-rename to replace a file will still be put in a bind if the destination file doesn't exist yet. (I.e., you can still end up with a zero-length file where no such file ever existed on a running system.)

      • Re:Wrong question (Score:5, Insightful)

        by k8to (9046) on Saturday May 30, 2009 @12:57PM (#28150279) Homepage

        There was no single loser here.

        Ext4 should handle the case gracefully, but the apps will fail on other filesystems, and they *will* be run on those filesystems, so they should fix the bugs.

        • by Jane Q. Public (1010737) on Saturday May 30, 2009 @01:41PM (#28150589)
          Huh? Buddy, this is Slashdot. There are lots of single losers here.
        • Re:Wrong question (Score:4, Informative)

          by RiotingPacifist (1228016) on Saturday May 30, 2009 @01:42PM (#28150603)

          How should the apps behave? Write-then-rename is the best way to do what they want. If you can't trust the filesystem to rename a file (and not just fail to rename it, but leave its metadata wrong so that neither the new nor the original is in the correct place), then what sort of program are you going to be able to run?

        • Re: (Score:3, Informative)

          by davecb (6526) *

          The apps don't fail on ufs.

          --dave

        • Re:Wrong question (Score:4, Interesting)

          by Rich0 (548339) on Saturday May 30, 2009 @04:44PM (#28152157) Homepage

          Define bug.

          Here is the issue - application wants to make an atomic change to a file. The application doesn't care if the file ends up in the starting state, or the final state - only that the change is atomic.

          fsync doesn't do that. Fsync guarantees that the file ends up in the final state quickly (but not atomically). Fsync also degrades system performance.

          So, the proposed application change doesn't accomplish what the app writers actually want, and it slows down the system. It does reduce the risk of data loss.

          What we really need is transaction support for files - just like we have for databases. Now, I agree that this may not be needed for all file operations (though admins should be able to turn it on by default if they want), but this is really the "right way" of handling this sort of situation.

          If anything I find myself patching apps to remove fsyncs. MythTV forces frequent fsyncs of the video stream and it can kill performance and even lead to data loss (buffer overruns - the degraded disk performance can't keep up with recorded video demand). There is no reason a recording needs to be fsynced every 30 seconds. If power goes out I'm going to lose 5 minutes of my recorded show anyway while the system comes back up - losing the previous 30 seconds of unflushed video isn't the end of the world. I'd rather have that than have dropped frames and glitches all over the place from lost video packets.

          What we need is for apps to tell the OS what they actually need, and for the OS to figure out how to deliver it. App writers shouldn't care what filesystem you're writing to and what the approved way of modifying files on that filesystem is. They certainly shouldn't care about how the write cache works. Sure, there should be an fsync option, but it should be used to sync disk writes to operations that take place in other media or over the network (such as in a transactional database). There should also be other options like atomic file operations (make the following changes to the following files atomically). Let the app figure out what its requirements are, and let the OS figure out how to deliver it.

      • Re:Wrong question (Score:5, Interesting)

        by icebike (68054) on Saturday May 30, 2009 @01:35PM (#28150531)

        Face it: your side lost. "fsync everywhere" is an infeasible, untenable, and useless position to take.

        And had it been enforced, as soon as all developers went through and added the fsync calls everywhere, it would have become necessary for filesystem maintainers to no-op fsync calls in order to regain any approximation of prior performance.

        Flushing "one file" is not always sufficient. Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed. And perhaps the higher level directory as well.
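The extra directory flush described above can be sketched like this (a minimal Python sketch assuming Linux semantics; names are illustrative):

```python
import os

def fsync_with_dir(path):
    """fsync the file, then the directory that holds its name record.

    fsync() on the file alone does not guarantee the directory entry
    has reached disk; the directory needs its own fsync.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # flush the file's data blocks
    finally:
        os.close(fd)
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)  # flush the containing directory's metadata too
    finally:
        os.close(dfd)
```

As the comment notes, a fully paranoid app would repeat the directory fsync up the tree; in practice one level is what careful daemons typically do.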

      • by TheSunborn (68004)

        But even then you might end up with a zero-byte file if your system crashes between the close and rename calls (or between write and close, or during the write, or, well, anytime after open).

        But I don't really think a zero-size file left behind after a crash is such a problem.

        But what we really need is a flag to close() (or open()) called FLUSH_ON_CLOSE that flushes the file when it's closed. There are so few situations where you would not want to do that, so maybe it should be the default, and we could add

        • Re:Wrong question (Score:5, Insightful)

          by QuoteMstr (55051) <dan.colascione@gmail.com> on Saturday May 30, 2009 @01:59PM (#28150717)

          But even then you might end up with a zero-byte file if your system crashes between the close and rename calls (or between write and close, or during the write, or, well, anytime after open).

          This statement is incorrect. Suppose you want to atomically replace the contents of file "foo". Your application will write a file "foo.tmp", then call rename("foo.tmp", "foo"). At no time on a running system does any process observe a file called "foo" that does not have either the new or the old contents, and this invariant holds true whether or not "foo", "foo.tmp", or any other file has been flushed to the disk.

          On the filesystem level, the kernel can actually write the contents of foo.tmp to disk whenever is convenient. The only constraint is that the on-disk name record for "foo" must be updated to point to the new data blocks from foo.tmp only after these data blocks have themselves been written to disk. That's the issue here: without that ordering guarantee, the kernel can write a file's name record before its data blocks. If the system crashes after the name record is written but before the data blocks are, what's observed on the recovered system is a zero-length file.

          That's the problem here: the kernel is conjuring out of thin air a zero-length file that never actually existed on a running system.

          Forcing applications to call fsync is not only an onerous burden on application developers, but it also reduces performance because it gives the filesystem less freedom than the much looser constraint on rename above.

          Bonus points for anyone who can give a realistic use case for DO_NOT_FLUSH_ON_CLOSE

          1. Application configuration files. You don't care that they hit the disk immediately, but only that when they do hit the disk, they're not corrupt
          2. /etc/mtab

          Flushing on close is the wrong thing: it far exceeds the minimum requirements that most applications actually need, which will substantially reduce performance.

      • by thsths (31372)

        Nor is fsync() what you want - you want an atomic file replace operation. Rename is atomic, and it used to work, but with delayed allocation it may happen before the file is written. So what you want is an atomic file replace operation that does not happen before the data write. Rename may not be the best option for that - a special file write mode may actually be better. In any case the issue affects both sides - kernel and user space.

        • Re:Wrong question (Score:4, Insightful)

          by QuoteMstr (55051) <dan.colascione@gmail.com> on Saturday May 30, 2009 @03:26PM (#28151379)

          Nor is fsync() what you want - you want an atomic file replace operation.

          Yes.

          Rename is atomic, and it used to work, but with delayed allocation it may happen before the file is written. So what you want is an atomic file replace operation that does not happen before the data write.

          Precisely.

          Rename may not be the best option for that - a special file write mode may actually be better. In any case the issue affects both sides - kernel and user space.

          NO, NO, NO. write, fsync, close, rename is how you spell "atomically replace this file" in terms of system calls. It does precisely the correct thing on a running system. You yourself admit that it "used to work". It has worked for decades, in fact. (Though before journaling filesystems, all bets were off after a crash.)

          That sequence of system calls is how applications tell the kernel to replace the given file. There is no useful interpretation of those system calls that doesn't involve an atomic replacement of the whole file. We don't need a separate system call: we already have the system calls. Nobody executing those system calls wants the dangerous interpretation of rename. At no time did an application developer sit down and think to himself, "I want to tell the kernel to perform an atomic rename, except when the system crashes. In that case, I want a zero-length file." Gods, no. Obviously, the application developer wanted to atomically replace the named file. Filesystems just need to honor the obvious intent of application developers.
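Spelled out as os-level calls in Python (a sketch; the file names are illustrative), that sequence is:

```python
import os

def replace_file(path, data):
    """write, fsync, close, rename: atomically replace `path`."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)  # write: fill the temporary file
        os.fsync(fd)        # fsync: data blocks reach disk before the rename
    finally:
        os.close(fd)        # close
    os.rename(tmp, path)    # rename: atomically swap in the new file
```

The ordering constraint is the whole point: with the fsync in place, no crash window exists in which the name record points at unwritten data blocks.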

    • Re: (Score:3, Insightful)

      by eldavojohn (898314) *

      You are asking the wrong question. Ext4 does not need fixing, the apps do.

      Are your apps patched yet?

      At the risk of revealing just how incredibly inept I am about file systems ... shouldn't your "apps" (and by apps I am guessing you mean applications) be calling the operating system to do anything to the file system? I mean, isn't the point of operating systems to provide APIs and the like that allow you to interface with any file system type the OS supports?

      I guess what I'm asking is whether, as a technicality, only his operating system needs to be patched and tested for it?

      Again, I d

      • Re:Wrong question (Score:5, Informative)

        by blueg3 (192743) on Saturday May 30, 2009 @01:27PM (#28150487)

        The problem is that some applications assume a behavior that is not supported by the POSIX definitions (the guarantees provided by the OS functions they're calling). However, it happens to be the behavior on existing filesystems and happens to be convenient. Now a new filesystem comes along and sticks to the POSIX definitions but does not follow this behavior. Application breaks, people complain.

        As a simplified example, imagine you create file B, then delete file A. Existing filesystems happen to do this in order, so you always have at least one of A or B. (If the system crashed partway through, you might have both A and B.) Your application fails if neither A nor B is present. POSIX doesn't require that the operations be performed in order. New filesystem comes along and sometimes does them in the reverse order, so if the system crashes at the wrong time, neither A nor B is left on the filesystem.
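The simplified example above might look like this in application code (the checkpoint-file names are hypothetical):

```python
import os

def rotate_checkpoint(old, new, data):
    """Create file B (`new`), then delete file A (`old`).

    The application assumes these reach disk in program order, so at
    least one of the two files survives a crash. POSIX does not actually
    promise that ordering, which is the gap the new filesystem exploits.
    """
    with open(new, "wb") as f:
        f.write(data)
    os.remove(old)
```

On a running system this is fine; the failure mode only appears when a crash lands between the reordered disk writes.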

        • POSIX doesn't require that the operations be performed in order.

          [eldavojohn mode]I guess[/] it doesn't forbid it either. So what's the reason, other than pure pedantry, to do them in random order?

          • Re: (Score:3, Insightful)

            by GryMor (88799)

            Performance optimization. You can get much higher write rates if you can reorder the writes to be sequential on disk, starting with whichever one the disk head can get to first.

        • Re:Wrong question (Score:5, Insightful)

          by QuoteMstr (55051) <dan.colascione@gmail.com> on Saturday May 30, 2009 @02:15PM (#28150847)

          The problem is that some applications assume a behavior that is not supported by the POSIX definitions

          POSIX is a red herring here. It covers the behavior of a running system, and makes no guarantees about atomicity or durability following a crash. After a crash and as far as POSIX goes, it's perfectly legitimate to overwrite the entire disk with hentai. Every crash recovery technique goes beyond POSIX because POSIX says nothing about crashes.

          POSIX doesn't require that the operations be performed in order

          It most certainly does! On a running system, if you rename B over A, at no point does any process on the system observe a file called "A" that does not have either the contents of the old A or the contents of B. THIS ATOMICITY IS A FUNDAMENTAL POSIX GUARANTEE.

          Filesystems should do their best to honor this guarantee (which always applies on a running system, remember) even when the system crashes. Filesystems don't have to do that according to POSIX. Instead, they should do it because it's a sane thing to do, and doesn't violate anything POSIX guarantees. POSIX is not the arbiter of what a good system should be. It's perfectly reasonable to make guarantees that go beyond POSIX, and every real-world operating system does precisely that. POSIX guarantees are necessary but insufficient for a reasonable system in 2009.

          • by AmiMoJo (196126)

            So, to bring it back round to the point, isn't the problem that apps break if this undefined behaviour isn't stuck to? That sounds like a flaw in the app.

            Can someone offer a specific example of a program that requires this behaviour for a good reason?

            • by Junta (36770)

              His point was that POSIX doesn't speak to crash behavior. As such, if a system detects a crash and zeroes the MBR and nearby blocks, it would still be POSIX compliant, but no one would plausibly be mollified by that.

              The application isn't making a complex assertion based on undocumented behavior not contained in a spec, it's making a very simple assumption that if it writes data to a file, and then calls rename when those calls complete, that those two operations will proceed in order. It proceeds in order

          • You're using a different situation than your parent post gave to try to prove him wrong.

            He used a two-operation sequence as an example, saying POSIX doesn't guarantee they'll happen in order: create B, then delete A. He said nothing about renaming B over A.

            Your example was one operation: rename B over A. Yes, this is one operation, and yes, POSIX guarantees it will happen atomically.

            Neither of you is wrong (as far as I can tell) and there's no reason both of you can't be right (since you're describing dif

            • by QuoteMstr (55051)

              Your example was one operation: rename B over A. Yes, this is one operation, and yes, POSIX guarantees it will happen atomically

              This is the operation salient to the ext4 discussion. The other operation is a distraction. (But for the record, POSIX also guarantees that they happen in order. How could it be otherwise?)

    • Re:Wrong question (Score:5, Interesting)

      by nwanua (70972) on Saturday May 30, 2009 @01:23PM (#28150457) Journal

      Wha....? Are you seriously suggesting that applications/utilities need to be patched to deal with faulty (yes, faulty) filesystem semantics? For _every_ single filesystem they might encounter? The whole point behind a filesystem layer is to present a unified view of files to the user layer regardless of physical media or driver quirks.

      The point is really that ext4 is/was broken, and IMO, any filesystem requiring patches to applications in order not to lose data is no filesystem at all. It's unbelievable (despite the technical benefits of ext4) that this would even be up for consideration.

    • Re:Wrong question (Score:5, Insightful)

      by Anonymous Coward on Saturday May 30, 2009 @01:36PM (#28150551)

      Only on Linux is it the user's fault that apps have data loss because the Linux kernel people changed filesystem semantics. At least Microsoft takes some responsibility for their mistakes :-/

      I did follow the ext4 debate. Here's my quick synopsis.

      • Linux kernel hacker discovers he can make a certain microbenchmark run 50% faster if he allows reordering of filesystem metadata writes ahead of filesystem data writes. Said hacker checks in code with a "now 50% faster!!!" message.
      • A few months later, users start discovering data corruption of KDE files. Specifically, a copy of A to A', ftruncate(A'), write(A'), rename(A' to A), host crash, causes the resulting file to contain A data and not A' data despite the well-known atomic "rename" that serves as a barrier.
      • Linux kernel hacker ignored problem as not-a-bug, since the apps didn't make use of fdatasync() / fsync() correctly, which (using Posix semantics) would have prevented data corruption. The detail to note here is that Posix doesn't actually say that rename is a write barrier for data and metadata, even though everyone would assume that it is a write barrier and ALL other filesystems have treated it as a write barrier. (And in my opinion as a professional systems programmer, this is an oversight in the Posix standard and not a desired behavior). So the linux kernel hacker is technically correct but has introduced a behavior that goes against all previous implementations.
      • Linux kernel hacker (and some Slashdot posters) attack KDE developers for being incompetent because they didn't read a sub-sub-sub clause of the Posix spec that (1) isn't mentioned in the man pages, (2) only gets read by kernel programmers anyway, and (3) is about two orders of magnitude more arcane than the average desktop app developer will ever read documentation.
      • 90% of users and 80% of programmers wonder what the hell fdatasync() and fsync() and the difference between data and metadata write barriers are, and why the default behavior is to corrupt data.
      • Linux kernel hacker promises to commit a few patches to fix the problem, so as not to break software that has worked perfectly fine for the past 10 years.
      • Those of us with experience realize that since said kernel hacker didn't believe this was a problem in the first place, the patches are as likely to be half-hearted band-aids as to actually increase data integrity guarantees. Programming has a long and proud history of making a quick fix to satisfy "management" (in this case, the Linux community) that makes one symptom go away and doesn't actually fix the underlying problem.
      • We get an Ask Slashdot asking if the problem actually got fixed, because 99% of us do not have the technical expertise to understand patches to the Linux filesystem to figure out if this actually got fixed.
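The KDE-style update sequence from the synopsis, as a Python sketch (file names illustrative):

```python
import os
import shutil

def kde_style_update(path, data):
    """copy A to A', truncate A', write A', rename A' over A.

    Without an fsync before the rename, pre-patch ext4 could commit the
    rename's metadata before the data blocks, leaving an empty file
    after a crash.
    """
    tmp = path + ".tmp"          # A'
    shutil.copy(path, tmp)       # copy of A to A'
    fd = os.open(tmp, os.O_WRONLY | os.O_TRUNC)  # ftruncate(A')
    try:
        os.write(fd, data)       # write(A')
    finally:
        os.close(fd)
    os.rename(tmp, path)         # rename(A' to A)
```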

      I do have a moral to this story. Filesystems have one cardinal, inviolable rule. DO NOT CORRUPT THE USER'S DATA. The guarantee is that if a user makes a read, the user will get back either good data OR an error (or explicit indication of no data). Google likes filesystems that lose data - but they don't ever give back corrupt search results. Ext3 can reorder writes - but defaults to a safe 5-second flush rate to keep the window of unexpected corruptions small. Ext4 ignored this rule and allows silent data corruption so that this filesystem can be the best at certain microbenchmarks, and instead of accepting responsibility, the kernel hacker in question blames everybody else.

      The greatest danger to Linux's success is not Microsoft. It's the hubris of many Linux developers, users, and advocates, who are too busy disavowing responsibility and blaming everybody else to fix real users' problems. (And yes, I'm a follower of the Raymond Chen philosophy)

      • Most insightful and informative AC. Ever.

        Where's my mod points?

      • The greatest danger to Linux's success is not Microsoft. It's the hubris of many Linux developers, users, and advocates, who are too busy disavowing responsibility and blaming everybody else to fix real user's problems

        Unlike Microsoft who takes all responsibility from any malfunction in its softw--Oh that's right the EULA crowd never does.

        Come on, no Ubuntu LTS uses ext4 by default, nor Debian stable, nor OpenBSD AFAIK.

        When you are dealing with the bleeding edge, it's normal for things to break. This is not d

        • by TCM (130219)

          Come on, no Ubuntu LTS uses ext4 by default, nor Debian stable, nor OpenBSD AFAIK.

          What has OpenBSD got to do with anything in this discussion?

          • AFAIK BSDs can run on top of any fs, including ext4, yet not by default.

            • by TCM (130219)

              No idea where you got your knowledge.

              The BSDs support Ext2. Although personally, I wouldn't do anything write-related to an Ext2 fs on BSD, let alone use it for the system itself.

              As for Ext3 or even Ext4, well, just no.

      • by Kjella (173770)

        A few months later, users start discovering data corruption of KDE files. Specifically, a copy of A to A', ftruncate(A'), write(A'), rename(A' to A), host crash, causes the resulting file to contain A data and not A' data despite the well-known atomic "rename" that serves as a barrier.

        No, it's more fucked than that. The rename has pointed A to A', but the data for A' has not been written, so you have NO data, only a zero-byte file. From a "high-level" perspective (and by high level I mean "I want to atomically replace file A with A'"), this is clearly a major WTF, but apparently not for the ext4 developers. That means there's a bigger chance of ice-skating contests in hell than of me installing ext4 on a production server.

      • Re: (Score:2, Insightful)

        by osu-neko (2604)

        At least Microsoft takes some responsibility for their mistakes

        Actually, I'll take the process you described above over what occurs at Microsoft or other closed-source shops any day. They also have their fair share of stubborn, arrogant developers with the kind of attitude displayed above. The reason you don't see the kind of detailed analysis of what happened all the time like the one above is simply that it all occurs behind closed doors. Oh, and because of that, you don't see the kind of outcry that eventually leads to patches until after the product ships, if ev

      • I tried ext4 as soon as it hit 2.6.28. I never ran into the KDE bugs, but I did notice it complaining that the filesystem was full despite many GB being free (and we're not talking about the relatively small amount reserved for root here).

        It certainly wasn't fit to be renamed from ext4dev at that stage.

      • Re:Wrong question (Score:4, Informative)

        by spitzak (4019) on Saturday May 30, 2009 @10:57PM (#28154979) Homepage

        Some corrections, although the sentiment is correct:

          copy of A to A', ftruncate(A'), write(A'), rename(A' to A), host crash, causes the resulting file to contain A data and not A'

        This is not what is wrong. If the file contained the old version of A, that would be fine; that is the expected behavior. The problem is that the file contains some partially-written version of A' (usually a zero-length version).

          Posix doesn't actually say that rename is a write barrier for data and metadata

        Actually, POSIX does say exactly that. The hole ext4 weasels through is that POSIX says "anything can happen when the machine crashes".

          apps didn't make use of fdatasync() / fsync() correctly

        The apps *were* using these calls correctly, by not calling them. They are very slow and make guarantees that have nothing to do with the desired action, which is an atomic rename.

    • Re:Wrong question (Score:5, Interesting)

      by RiotingPacifist (1228016) on Saturday May 30, 2009 @01:46PM (#28150627)

      Hmm, I think most of them are, but I'm still having problems with mv. Seriously, can we stop this bullshit? ext4 was clearly not working!
      If you can't rename a fucking file without risking its total corruption, something is badly wrong; at no point in renaming "settings-new" to "settings" should the file "settings" become unusable. What the fuck CAN kde4 do?

  • I would just wait until it becomes mainstream and all the issues are worked out; until then I'll stick with ext3
  • I moved to ext4 as soon as it became available. I haven't had any problems thus far (no data loss, etc), and the increased speed is noticeable. So - in the opinion of a very casual Linux user - I would say that yes, it's "okay." I'm not sure I'd trust it with anything super serious, though. I could be the only one without any problems, after all. As always, you should tip-toe around anything bleeding-edge.
    • by eldavojohn (898314) * <eldavojohnNO@SPAMgmail.com> on Saturday May 30, 2009 @12:48PM (#28150221) Journal

      I moved to ext4 as soon as it became available. I haven't had any problems thus far (no data loss, etc), and the increased speed is noticeable. So - in the opinion of a very casual Linux user - I would say that yes, it's "okay." I'm not sure I'd trust it with anything super serious, though. I could be the only one without any problems, after all. As always, you should tip-toe around anything bleeding-edge.

      Yeah, man, it's ok go ahead and flip your entire corporation's servers to ext4 over this weekend. A Slashdot user named buttfscking just said it is "safe enough."

      • by drinkypoo (153816)

        Speaking of users with funny names, I converted to ext4 (the hard way — create a bootable backup, then repartition) as soon as Jaunty went final. So far system stability seems to be about the same as ext3. I've hung it with a couple of effective fork bombs (shell scripts accidentally spawning themselves because I am too stupid to enter a complete path) and had to force-power-cycle with no data loss or indeed problems of any kind.

        I wouldn't have done this, however, if I didn't have a full system backup

      • Re: (Score:3, Insightful)

        by Hognoxious (631665)
        Well he said not to, but don't let the facts interfere with a choleric rant.
    • by BrokenHalo (565198) on Saturday May 30, 2009 @01:55PM (#28150699)
      I haven't had any problems thusfar (no data loss, etc)

      How do you know? Do you do md5sums on every file? Most admins I've come across don't seem to, and it could be months or years before you find out, in which case any loss might easily end up outside your backup cycle.
  • by dandaman32 (1056054) <dan@enPERIODanocms.org minus punct> on Saturday May 30, 2009 @12:53PM (#28150257)

    I'm using ext4 on an encrypted partition on my tiny X41 tablet. The hard disk is 5400RPM IIRC, so when Ubuntu decides to run fsck due to a scheduled run or an unclean shutdown after a certain bug manifests itself, I don't have to sit there for 10 minutes or more waiting for fsck to run. That for me and many other casual users is probably the biggest advantage of ext4.

    Does a laptop count as production? In the eyes of an everyday user, yes. My laptop is very much "production" IMHO, and I trust ext4 enough to not magically make all my school assignments disappear.

    Digressing a bit, I haven't seen any of the data loss either, though I use GNOME and not KDE. I do think that if an application relies on specific undocumented behavior, the application should change, not the filesystem driver. It's acceptable that the kernel developers are doing their best to get temporary workarounds into place, but the permanent solution is to fix the applications so they don't depend on undocumented behavior.

    • by Sfing_ter (99478)

      yeah, i fixed that years ago by using reiserfs.

    • Reiserfs: I've been using it for years for fast fsck, and it can handle a file rename gracefully too :O
      It's not undocumented; the problem is that KDE was using write-then-rename to get an atomic operation and guarantee the integrity of the file. Nobody expects a rename to fail (and then ext4 came along and committed the rename ahead of the data at bad times to improve performance)!

    • Re: (Score:3, Informative)

      by hackstraw (262471)

      Maybe I'm clueless, and I'll be corrected shortly, but a) didn't ext3 bring this functionality in back in 2000 or so? b) don't most distributions format their partitions with options to not run fsck periodically based on mount count or time?

      I know that on every system where I have to create a filesystem manually, I remove the counts, to prevent a quick reboot from becoming a slow reboot and a trip to the data center to babysit the thing through an fsck.

      • by Junta (36770) on Saturday May 30, 2009 @05:04PM (#28152323)

        When they went to journalling filesystems, by and large a simple mount operation turned into a mini-recovery operation, a pseudo-fsck if you will. This would even happen on read-only mounts, which to me violates the expectation that no disk data will be modified.

        JFS had one 'quirk' that I think they got right: journal replay was an fsck-level event. A filesystem with a dirty journal could only be mounted read-only, and the journal replay code was in fsck and had to be run to enable a read-write remount. There are numerous reasons why I stopped using JFS, but that is one point where I kinda agreed with their quirkiness.

    • A laptop? No, that doesn't count unless you run your production system on another laptop of the same build and make. At least where I work production is business critical systems on real kit, then we have our development environment, testing environment, and after that (in terms of importance) we have the business/office network and individual workstations (and your laptop would be somewhere after that).

      Nothing a developer did on a home system would be considered production ready without, you know, doing
      • So someone could actually use a laptop for hosting a crucial web site, but you wouldn't "consider" it to be production ready without actual testing. Hmm . . .

  • ext4 is buggy (Score:4, Interesting)

    by hamanu (23005) on Saturday May 30, 2009 @12:58PM (#28150291) Homepage

    Well, the fsck times are really fast compared to ext3, and thank god, because EVERY time I reboot it requires an fsck, complaining about group descriptor checksums. Even if I unmount my ext4 filesystem and remount it without rebooting, it gets all fscked up. I have a 3TB ext4 fs on LVM on RAID that was NOT converted from ext3, but built on brand new drives. My similar ext3 filesystem has had no such problems.

    ext4 takes about 7 minutes to fsck; ext3 took hours. I hope they fix this soon.

    • Re:ext4 is buggy (Score:5, Informative)

      by msuarezalvarez (667058) on Saturday May 30, 2009 @01:20PM (#28150439)
      Maybe you should do something about whatever the cause of the constant fsck'ing is. You do realize it is quite abnormal for a system to have errors at each remount, don't you?
      • Re:ext4 is buggy (Score:5, Insightful)

        by TCM (130219) on Saturday May 30, 2009 @02:29PM (#28150921)

        But he uses R-A-I-D! R-A-I-D magically makes data bulletproof and immune to disaster as we all know.

        Seriously, running a 3TB RAID with a buggy fs and applauding faster fsck times instead of wondering why the fs gets fucked up constantly must be the peak of idiocy.

        • Re: (Score:3, Interesting)

          by Junta (36770)

          I too had a 2TB RAID volume with ext4 and suffered the same situation. I have since reformatted as ext3, which solved my problems, but I continue to complain so that others will hear about my issue and learn.

          And before you claim my underlying IO must be flawed: a large part of my job is storage subsystem validation, and I'm quite used to isolating which layer is inducing problems, from storage controller hardware to drivers to higher OS layers, and everything I did, every test I ran, pointed to ext4 as

    • Why were you on ext3 if you needed constant fscking? There have always been better options: reiserfs, JFS, etc.

    • by StarHeart (27290) *

      This sounds like a problem I have had. It isn't every time I reboot, and it has gotten better with newer kernel versions. Mine is a 4TB ext4 filesystem on Linux software RAID5.

    • We had this problem (Score:5, Interesting)

      by xiox (66483) on Saturday May 30, 2009 @03:08PM (#28151211) Homepage

      Our 8TB RAID system would get trashed after copying data onto it (group descriptor checksums on fsck). It looks like it was an ext4 bug. They fixed it about a week or two ago, here [spinics.net]. Maybe it will get into your kernel soon. I'm not going to put ext4 on any production system for at least 6 months now, I think.

  • No (Score:3, Insightful)

    by ducomputergeek (595742) on Saturday May 30, 2009 @12:59PM (#28150295)

    We avoid anything that has less than 24 months of wide deployment unless there is some absolute pressing need to move to an unstable/untested product.

    We have test and development systems where we run latest and greatest, but generally they are used in sync with the existing system. We don't switch over until we're damn sure there aren't any unforeseen consequences. That typically means 12 months without any major hiccups and 3 months without minor ones.

  • by 3vi1 (544505) on Saturday May 30, 2009 @01:03PM (#28150325) Homepage Journal

    I was one of the people that spoke loudly when Ext4 caused 0-byte file corruption.

    While I don't entirely agree that it's just "an application issue", because apps that work fine on every other filesystem should not need to be re-written specifically for Ext4, I am pleased at the work the devs have done to work around the problems. The kernel patches have eradicated the issues I had with corruption, and the performance is still great.

    I never did official benchmarking to determine the extent, but my perception is that there's a noticeable performance increase when using Ext4 instead of Ext3.

    If I were building a production server, I may think twice and just go with Ext3... unless the app would *greatly* benefit from Ext4. However, for a desktop system, I think Ext4 is a very good choice and ready for primetime.

  • I've never used anything other than Reiser3 with Linux. Might not be the most reliable or fast, but it has other advantages.

    - Undeletion.
    - Partition resizing.
    - Readable from within Windows via YaReG [akucom.de].

    • As my sibling post said, http://www.fs-driver.org/ [fs-driver.org] is a Windows filesystem driver for ext2, and thanks to forward compatibility (as I understand it), ext3 works too. http://sourceforge.net/projects/ext2fsd [sourceforge.net] is another alternative.

      You should be warned that whenever I've used the first tool to write to the partition, I've ended up with Ubuntu fscking it on boot. But I've never noticed any problems like data corruption from using it. The second one also seems OK, although when browsing the disk from the C

  • by sirdude (578412) on Saturday May 30, 2009 @01:32PM (#28150515)

    After reading the comments on my earlier post, Delayed allocation and the zero-length file problem, as well as some of the comments on the Slashdot story as well as the Ubuntu bug, it's become very clear to me that there are a lot of myths and misplaced concerns about fsync() and how best to use it. I thought it would be appropriate to correct as many of these misunderstandings about fsync() in one comprehensive blog posting.

    http://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/ [thunk.org]

    FYI, Ts'o is the ext4 maintainer.

    • Not reassuring (Score:4, Insightful)

      by Junta (36770) on Saturday May 30, 2009 @02:05PM (#28150769)

      He presents three common cases for 'quickie' file modifications:
      -Modify-in-place. Yes, this logically cannot be expected to leave the content intact across an unexpected interruption. You ask the OS to blow away data, then send it new data; there is a logically indeterminate state in the middle where doing things in the order you specified leaves you exposed.
      -Write new file, use rename, using fsync to ensure a low exposure of data. This forces data to disk so it's coherent.
      -Write new file and then use rename without fsync:
      *This* he claims should easily be expected to corrupt the contents. I take issue with this. The fact that this occurs is because ext4 commits the rename out-of-order ahead of the data commit. I don't understand why the rename operation cannot also be delayed until after the data has been written out. I've seen several people ask 'I don't care that the change happens *now*, but I want the changes to occur in the order I specified', and thus far have seen Ts'o miss that point (intentionally or unintentionally). I have not read any explanation of why changing hardlinks should logically be an operation to jump ahead of pending data writeout. I could be missing something, but I'm not the only one with these questions.

      fsync gives a relatively expensive guarantee above and beyond what people require to behave sanely. He says it's inexpensive 'now' relative to the past. However, 'now' in this context only applies to ext4 users; on other filesystems the operation degrades performance, and fsync remains an expensive operation relative to not doing it at all.

      In terms of the general attitude of filesystems shrugging off data consistency so long as their indexes are intact, I find myself agreeing with Torvalds' comments on the debacle:
      http://thread.gmane.org/gmane.linux.kernel/811167/focus=811700 [gmane.org]

  • You should be asking this question in a more authoritative forum. The majority of Slashdot readers are likely to just regurgitate their perceived status of ext4 from the last time ext4 was mentioned on Slashdot and I know for certain that ext4 has had more testing and development since then. Try asking the ext4 development team; they're very nice, helpful people in my experience. I refer you to the #ext4 channel on irc.oftc.net and the linux-ext4 mailing list.
  • Last I checked, some patches for the delalloc empty-file problem were being merged in 2.6.30. If you want to avoid it but still want some other advantages like faster fscks, you could go with data=journal on your filesystems, which is a bit slower but also disables delayed allocation, while still having extents, barriers, and other ext4 benefits. I've been using data=journal on my /home partition without a single problem.

    It also depends a lot on what you have in 'production'. For a web server that's mostly doing reads it should be f
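For anyone wanting to try the data=journal approach described above, the mount option goes in /etc/fstab along the usual lines. This is a sketch only; the device name and mount point are placeholders for your own setup:

```
# /etc/fstab fragment - device and mount point are examples, not prescriptions
/dev/sda3  /home  ext4  defaults,data=journal  0  2
```

Note that data=journal writes all data through the journal (hence the slowdown the parent mentions), and it cannot be combined with delayed allocation, which is what sidesteps the zero-length-file problem.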

  • I have installed a system and have been getting "resize inode invalid" and "group descriptors corrupted" issues on clean reboots. fsck has yet to fail me, and IO stress tests have demonstrated no general I/O corruption other than the ext4 errors.

    On the flipside, for my applications I haven't really gained much.

  • by _LORAX_ (4790)

    It still needs more time. I have played with it under both Ubuntu and RHEL 5.3 and run into strange behavior that makes me uncomfortable.

    1) Bonnie++ throws errors, even on server-class hardware, when creating and deleting a large number of random files, even though the filesystem reports no errors and everything is operating normally. https://www.redhat.com/archives/ext4-beta-list/2009-February/msg00000.html [redhat.com]

    2) A crash of Ubuntu ended up removing *ALL* group and other permissions on a laptop drive.

  • A couple of months ago I installed Ubuntu 9.04 with ext4. Running it, I experienced data loss for the first time since I moved from ext2 to ext3 quite a few years ago now. I've just changed back to ext3, which has been rock solid for me since it first appeared in Red Hat or whatever distro it was I was using back then.

  • My two cents worth: if in doubt, don't. Wait a year for others to find the bugs.

Practical people would be more practical if they would take a little more time for dreaming. -- J. P. McEvoy
