The State of Linux IO Scheduling For the Desktop?

pinkeen writes "I've used Linux as my work & play OS for 5+ years. The one thing that constantly drives me mad is its IO scheduling. When I'm copying a large amount of data in the background, everything else slows down to a crawl while the CPU utilization stays at 1-2%. The process which does the actual copying is highly prioritized in terms of I/O. This is completely unacceptable for a desktop OS. I've heard about the efforts of Con Kolivas and his Brainfuck Scheduler, but it's unsupported now and probably incompatible with the latest kernels. Is there any way to fix this? How do you deal with this? I have a feeling that if this issue were fixed, the whole desktop would become much snappier, even if you're not doing any heavy IO in the background." Update: 10/23 22:06 GMT by T : As reader ehntoo points out in the discussion below, contrary to the submitter's impression, "Con Kolivas is still actively working on BFS, it's not unsupported. He's even got a patch for 2.6.36, which was only released on the 20th. He's also got a patchset out that I use on all my desktops which includes a bunch of tweaks for desktop use." Thanks to ehntoo, and hat tip to Bill Huey.
This discussion has been archived. No new comments can be posted.

  • It sucks I agree (Score:4, Interesting)

    by Anonymous Coward on Saturday October 23, 2010 @02:40PM (#33997908)

    This issue got so bad for me I switched to FreeBSD.

  • by Anonymous Coward on Saturday October 23, 2010 @02:49PM (#33997974)

    On an IO-intensive server this is also a real issue: 20-30% of the processors and cores get stuck at 99% iowait for hours, while the rest try to cope. Total CPU load does not go above 20%. No solution yet after months of study and experimenting. Linux is indeed really bad at IO scheduling in general, it seems.

    Now think of that situation combined with a heavy database system. A non-starter.

  • by mrjb (547783) on Saturday October 23, 2010 @02:56PM (#33998042)
    I've wondered on occasion if this problem is really only due to scheduling. After all, most of us still write our file access code more or less as follows:

    x = fopen('somefilename');
    while (!eof(x)) {
            print readln(x, 1024); /* ---- */
    }
    fclose(x);

    Point being, there's nothing that tells the marked line that the process should gracefully go to sleep while the drive is doing its thing, and there's no callback vector defined either: nothing that indicates we're dealing with non-blocking I/O. I'd like to think that our compilers have silently been improved to hide those implementation details from us, but I have no proof that this is the case. Unless the system functions use some dirty stack manipulation voodoo to extract the return address of the function and use that as a callback vector?
  • by Qzukk (229616) on Saturday October 23, 2010 @02:58PM (#33998060) Journal

    There's a bug in Chrome that usually makes it unable to paste into Slashdot's comment box once you've placed an < character in the box. (Slashdot, specifically. It does fine on all sorts of other sites with even fancier Ajax-y textareas, like the Stack Overflow sites.)

  • by man_of_mr_e (217855) on Saturday October 23, 2010 @03:06PM (#33998122)

    How does this happen? Every year it seems I read about how this problem has been fixed in the latest kernel, and then it's like those fixes mysteriously vanish.

  • by fishbowl (7759) on Saturday October 23, 2010 @03:07PM (#33998132)

    This problem is highly visible in VMs. When you have one VM doing write-heavy disk IO, the other VMs suffer.

    I don't think it's a Linux problem as much as a general problem of the compromises that must be made by any scheduling algorithm.

    What about you Linux mainframe guys? You have unbeatable IO subsystems. Do you see the same problems?

  • Switch to Deadline (Score:1, Interesting)

    by Anonymous Coward on Saturday October 23, 2010 @03:07PM (#33998146)

    I ran into the same problems and ended up switching to the "deadline" scheduler. Haven't had a single problem since. I changed it via the "elevator=deadline" on the kernel boot prompt, but you can change it on the fly for individual devices. See Configuring and Optimizing Your I/O Scheduler [devshed.com] to see how.

  • OS/2 (Score:2, Interesting)

    by picross (196866) on Saturday October 23, 2010 @03:08PM (#33998158) Homepage

    I remember using OS/2 (IBM's desktop OS) and I was always amazed that you could format a floppy and do other tasks like nothing else was going on. I never did understand why that never seemed to make it into the mainstream.

  • Wrong Question (Score:3, Interesting)

    by donscarletti (569232) on Saturday October 23, 2010 @03:12PM (#33998188)

    This is not a case of Linux IO schedulers being unsuitable for the desktop, but more a case of desktop applications being written in a horrendous way in terms of data access. The general pattern is to open a file object, load in a few hundred kilobytes, process this, then ask the operating system for more. This is a small inefficiency when the resource is doing nothing, but if the disk is actually busy, it will probably be doing something else by the time you ask it to read a little bit more. Not to mention the habit of reading through a few hundred resource files one at a time, in seemingly random order, blocking on every read, because the application programmer is too lazy to think about what resources the app is using.

    Linux has such a nice implementation of mmap, which works by letting Linux actually know ahead of time what files you are interested in and managing them itself, without the application programmer worrying his pretty little head over it. Other options are running multiple non-blocking reads at the same time and loading the right amount of data and the right files to begin with.

    The best thing about a simple CSCAN algorithm is that it gives applications what they asked for and if the application doesn't know what it wants, well, that's hardly a system issue.

  • Re:It sucks I agree (Score:5, Interesting)

    by ObsessiveMathsFreak (773371) <obsessivemathsfreak@@@eircom...net> on Saturday October 23, 2010 @03:13PM (#33998198) Homepage Journal

    This is the number one problem with all Linux installations I have ever used. The problem is most noticeable in Ubuntu where, any time one of the frequent update/tracker programs runs, the entire system will become all but unusable for several minutes.

    I don't know if it's all that related, but swap slowdown is an appalling issue as well. If a single program spikes in RAM usage, I often have to reboot the whole system as it hangs indefinitely. As I work with Octave a lot, often a script will gobble up a few hundred megs of memory and push the system into swap. Once that happens, it's often too late to do anything about it as programs simply will not respond.

  • Re:It sucks I agree (Score:4, Interesting)

    by Lord Byron II (671689) on Saturday October 23, 2010 @03:22PM (#33998266)

    That's exactly why I stopped using swap a couple of years ago. On my main machine I have 3 GB and I feel that if I reach the limit on that, then whatever program is running is probably a lost cause anyway. The next malloc/new causes the program to crash, saving the system.

  • by Lord Byron II (671689) on Saturday October 23, 2010 @03:23PM (#33998274)

    It's been a big issue for me. Go to a directory with a couple of large files (say a dvd rip) and do a "cat * > newfile". Watch your system come to a crawl.

  • Re:easy solution: (Score:1, Interesting)

    by TheTrueScotsman (1191887) on Saturday October 23, 2010 @03:25PM (#33998286)
    You can disable swapping in Windows if you have sufficient RAM. The poster raises a very good point, but it's actually more important on servers than on clients (isn't Linux dead on the desktop anyway...?).

    This is actually one of the very reasons (the other being multithreaded performance) why many of us sometimes use Windows Server 2003/2008 in preference to Linux.
  • by emergentessence (1782844) on Saturday October 23, 2010 @04:01PM (#33998544)

    I had been wondering about this myself, for some reason I was under the impression that the BFS was no longer being maintained.

    It turns out there is an up-to-date package for Ubuntu (I'm running 10.10) as well: http://launchpad.net/~chogydan/+archive/ppa [launchpad.net]

    I thought I'd try it out as the installation was much more straightforward than I'd expected.

    'uname -r' now reveals "2.6.35-22ck-generic" and, while this is just my subjective assessment, a few of the quirks I had noticed before on my own system (things getting sluggish when switching between apps or opening and closing apps while something was reading from or writing to the disk) seem to have been ironed out.

    I would love to test this in a more empirical manner, as I can now boot into either kernel to do comparisons, but I don't know of any software that would allow me to benchmark performance in a way that is sensitive to the optimizations the BFS allegedly implements.

  • by ksandom (718283) on Saturday October 23, 2010 @04:21PM (#33998738) Homepage
    Sorry dude, it looks like it's a hardware specific problem. I did that on nearly 700G of large files and then fired up the flight sim while it was still going. The only slow down was on file related activity, which is totally what you'd expect. I had it running full screen across two monitors without any drop in frame rate. AND I'm using economy hardware.
  • by daveime (1253762) on Saturday October 23, 2010 @05:20PM (#33999162)

    Wow, that's a new one?

    Perform an unnecessary fget on a file already known to be zero bytes, just so we can get the result "this fget failed because the file is zero bytes".

    while (!eof()) {
            readsomething();
    }

    is something I learnt perhaps 20 years ago, and it's never failed me yet. Why must people always try reinventing the wheel, just to end up with an octagon?

  • by grandpa-geek (981017) on Saturday October 23, 2010 @05:47PM (#33999394)

    I've encountered situations where I'm trying to do something online and a task starts up due to a cron job that builds some kind of index. The index building should happen in the background, but somehow it takes priority over what I'm doing on the desktop. Those kinds of cron jobs should be scheduled in the background by default, not take priority over what is happening on the desktop.

  • Re:easy solution: (Score:4, Interesting)

    by Ingo Molnar (206899) on Saturday October 23, 2010 @06:10PM (#33999548) Homepage

    That's great that you post your experiences with server scheduling in a topic about desktop scheduling. It's so relevant. No wait, it's not.

    The boundary between the desktop space and the server space is rather fluid, and many of the problems visible on servers are also visible on desktops - and vice versa.

    For example 'copying a large amount of data' on a server is similar to 'copying a big ISO on the desktop'. If the kernel sucks doing one then it will likely suck when doing the other as well.

    So both cases should be handled by the kernel in an excellent fashion - with an optimization/tuning focus on desktop workloads, because they are almost always the more diverse ones, and hence are generally the technically more challenging cases as well.

    Thanks,

    Ingo

  • Re:It sucks I agree (Score:5, Interesting)

    by Waffle Iron (339739) on Saturday October 23, 2010 @09:18PM (#34000754)

    MythTV added a feature a while back to work around this issue. IIRC, they now keep a handle open to video files while they delete them. This causes the kernel to not actually do the delete, then over a span of about 10 minutes MythTV repeatedly shaves chunks off the end using truncate() until the file reaches 0 bytes.

    Prior to this, the system could get really bogged down right after deleting shows. I was careful not to delete too many shows at once; I had actually seen the back end lock up after telling it to delete a bunch of shows.

  • Re:It sucks I agree (Score:2, Interesting)

    by mehemiah (971799) on Saturday October 23, 2010 @09:19PM (#34000762) Homepage Journal
    Not to be combative (I bet you're right), but which IO scheduler were you using? There are three. If you were on a desktop distro like an Ubuntu desktop variant, then since 2007 it was using CFQ; a server distro would have been using the deadline scheduler. Before 2007, I don't know which you might have been using.
  • by Ingo Molnar (206899) on Sunday October 24, 2010 @05:24AM (#34002478) Homepage


    So I know some people may read this and think "haha, funny joke", but given that most users are extremely predictable regarding what programs they use and when and how they use them (same with web browsing), shouldn't it be possible to gather user activity over time and analyze it to help improve scheduling?

    Yeah, that's certainly a possibility.

    This is also the goal of most heuristics in the kernel: to figure out a hidden piece of information that the application (and user) has not passed to the kernel explicitly.

    The problem comes when the kernel gets it wrong - the kernel and applications can easily get into a feedback loop / arms race of who knows how to trick the other one into doing what the app writer (or kernel writer) thinks is best. In such cases we get the worst of both worlds: we get the bad case and we get the cost of heuristics.

    (Heuristic and predictive systems also tend to be complex and hard to analyze: you can rarely reproduce bugs without having the exact same filesystem layout and usage pattern as the user experienced, etc.)

    What we found is that in terms of default behavior it's a bit better to keep things simple and predictable/deterministic and then give apps the way to inject extra information into the kernel. We have the fadvise/madvise calls which can be used with the POSIX_FADV_DONTNEED flag to drop cached content from the page cache.

    Heuristics and predictive techniques are done when we can be reasonably sure that we get the decisions right: for example there's a piece of fairly advanced code in the Linux page cache trying to figure out whether to pre-fetch data or not.

    The large file copy interactivity problems some have mentioned here were most likely real kernel bugs (in the filesystem, IO scheduling and VM subsystems) and were hopefully fixed in the v2.6.33 - v2.6.36 timeframe.

    If you can still reproduce any such problems then please report them to linux-kernel@vger.kernel.org so we can fix it ASAP.

    In any case, we could all be wrong about it, so if you have a good implementation of more aggressive predictive algorithms, I'm sure a lot of people would try them out - me included. We kernel developers want a better desktop just as much as you do.

  • Re:It sucks I agree (Score:5, Interesting)

    by Ingo Molnar (206899) on Sunday October 24, 2010 @06:50AM (#34002790) Homepage

    There's also the VM fix from Wu Fengguang [lkml.org], included in v2.6.36, which addresses similar "slowdown while copying large amounts of data" bugs.

    There were about a dozen kernel bugs causing similar symptoms, which we fixed over the course of several kernel releases. They were almost evenly spread out between filesystem code, the VM and the IO scheduler. And yes, I agree that it took too long to acknowledge and address them - these problems have been going on for several years. It's a serious kernel development process failure.

    If anyone here still experiences bad desktop stalls while handling big files with v2.6.36 too then we'd appreciate a quick bug report sent to linux-kernel@vger.kernel.org.

    Thanks,

    Ingo

  • by Cassini2 (956052) on Sunday October 24, 2010 @02:30PM (#34005390)

    I often note that multiple simultaneous low-priority file copies implemented as:

    ionice -c 3 rsync bigfilein directoryout

    run faster than multiple simultaneous high-priority copies implemented as:

    rsync bigfilein directoryout

    If the copies are run one at a time, the higher priority rsync runs faster. For multiple copies, often the lower priority rsyncs run faster. Also, desktop usability is much improved with the lower priority rsyncs.

    I suspect a priority inversion occurs inside the file system's write-back cache. At regular priority levels, data is not written back to disk in a timely manner. The ionice -c 3 gives the disk caches a higher priority than the rsync I/O commands, preventing the I/O commands from filling the cache and creating a priority inversion.

    The Gnome GUI in Ubuntu is particularly vulnerable to this priority inversion, as by default it does multiple copies simultaneously inside a separate window. Ubuntu usually performs better than Windows, however. Between the A-V software in Windows and the tendency to swap applications out of memory to maximize disk cache, Windows usually performs the same copy operations more slowly than Ubuntu and with less system responsiveness.
