Linux Software

How can you Reduce Disk Swapping in Linux? 24

A member of the vociferous Clan Anonymous Coward asks: "I am finding that on my 64 MB box, Linux is using the swap file even though it looks like there is plenty of RAM; most of that RAM, though, is taken up by the disk cache. For example, 4.8 MB of swap is used when the disk cache is 20 MB. What can be tuned to reduce the disk cache size? A sample from /proc/meminfo follows: total: 64569344, used: 57200640, free: 7368704, shared: 32337920, buffers: 3235840, cached: 20496384, used swap: 4837376"
  • I want to know the same thing. I -can- tell you a work-around that I sometimes use, though.

    The work-around is to use RAM disks. Simply create a RAM disk, with memory that would otherwise be free, format it as swap-space, and activate it with swapon.

    It doesn't stop the swapping, but at least the swapped pages stay in RAM rather than going out to disk.
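    The workaround above can be sketched in a few commands. This is illustrative only: it assumes root privileges, a kernel built with RAM-disk support, and the device /dev/ram0; the 16 MB size is arbitrary.

    ```shell
    # Touch 16 MB of the RAM disk so the device exists at that size
    dd if=/dev/zero of=/dev/ram0 bs=1k count=16384
    # Format the RAM disk as swap space
    mkswap /dev/ram0
    # Activate it at a higher priority than any disk-based swap
    swapon -p 5 /dev/ram0
    ```

    Note that this trades away memory the kernel could otherwise use as disk cache, which is part of why it is only a workaround.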

  • It's dynamically allocated. Nothing in use by programs is taken for the disk cache, and you may see some swapping of programs that get started but never used (innd, anyone? Or httpd on a home system that isn't on the net?).
  • I have 192 megs of RAM and my computer constantly swaps in and out of the swap file. The buffer/cache sometimes gets as big as 100+ megs. Any ideas why? Any idea how I can turn down the cache size?

    _______________________________________________
    There is no statute of limitations on stupidity.
  • As far as I am aware, this isn't really an issue.

    The amount of physical RAM free is rarely more than a few megs, as you've probably noticed (this is the "free" value in your example). Incidentally, my preferred way to determine free memory is with the free command.

    What probably caused you to go into the swap there was you ran a big program or two at some point (Netscape, and X in general can be real killers). Linux will automatically scale down the buffers and cache to practically zero if necessary. Try running some programs that eat memory and watch the memory stats some time. It will only swap if necessary. When it does swap, it only swaps dormant processes, which you'll find plenty of in most standard distributions. Typically, you'll find you've got junk like nfsd and httpd and other more bizarre daemons running that you don't really need, and these unused processes will be what has swapped.

    Because what is swapped was sleeping, it is likely to stay that way, so you only get a one-off performance hit from the swapping. When you close down your large apps, you'll find the memory which gets freed goes back to being used for cache/buffers, instead of swapping stuff back into memory. Why? Because that would be just as slow as swapping it out in the first place, and it's handy to keep that RAM free (bigger cache == better performance). If any of those processes wake up, they'll get automatically swapped back in.

    So, to answer your question, no tuning is needed. The disk cache only takes memory not needed by processes. If RAM is low, the disk cache will shrink to virtually nothing, and if still more memory is needed, sleeping processes will get swapped. If RAM then gets freed up, the cache will expand again to fill the void. If a swapped process becomes active again, it gets swapped back into memory.

    If you are really concerned, try disabling unused services on your system to free memory. This is distribution-specific; refer to your documentation. But the only time you should worry is when RAM is so low that even mostly-active processes get swapped, because then you'll get lots of disk thrashing. My guess is that if your system is running with 20-odd megs of buffers and cache (plus 7 MB free), you're quite comfortable.

    You might like to spend a little bit of time playing with top (table of processes). It lists your processes, and can be set to sort by CPU usage, Memory size, etc.

    Hope this helps.
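    As a quick sanity check on the original numbers: free + buffers + cached is roughly the memory the kernel can hand back to programs on demand. Using the byte values from the /proc/meminfo sample in the question:

    ```shell
    # free + buffers + cached, in bytes, from the question's /proc/meminfo sample
    echo "7368704 3235840 20496384" |
      awk '{ printf "%.1f MB effectively free\n", ($1 + $2 + $3) / 1048576 }'
    # → 29.7 MB effectively free
    ```

    So despite the 4.8 MB of swap in use, nearly half of the 64 MB box is available on demand.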

  • A friend of mine was saying in Linux/Apache frequently used files would automagically go into RAM. For example, a web server serving static pages would automagically have those files in RAM for serving out. Thus, increasing the speed.

    Is this a load of hooey? Could someone explain, or give a link to an explanation of, memory management in Linux?

    Thanks.
  • by heroine ( 1220 ) on Thursday November 11, 1999 @06:08AM (#1542755) Homepage
    Linux has a very interesting memory management system that isn't intuitive at first. It enlists swap space even when you have plenty of physical RAM available. The degree to which you enlist swap space depends highly on how much disk activity you're performing and less on whether your program fits in memory. So every time you access the disk the kernel makes a decision: swap out physical RAM to speed up the disk or overwrite existing cache data to speed up the program. That decision has little to do with how far your resident set has exceeded physical RAM but mainly depends on how recently you accessed your resident set.

    In fact as you approach 3/4 of your physical RAM the kernel becomes very reluctant to keep programs in memory. It's virtually impossible to use 100% of physical RAM because by that time the kernel is swapping out your resident sets with every disk access. Just try recording some uncompressed video. You'll see the video recording software fits perfectly well in RAM but after writing a few gigabytes of data the kernel has swapped out everything it can to increase disk caching. In fact your first fwrite will be very slow, as the kernel is writing physical RAM to disk to free up cache and only then writing your actual data.

    Now this scheme can get very problematic if you really need the programs that the kernel swaps out. There are parameters in the kernel source which determine the swapping threshold but the problem is more what to set those parameters to than where they are.
  • This is only tangentially related, but how do I limit the maximum size of a process? You would think (as I did) that this is what the limit command in sh/csh is for, but it doesn't seem to work on Linux 2.2.12.

    See, every now and then, xmms loses its mind and grows to hundreds of megabytes. I'd like to set things up so that as soon as it hits 20M or so, it just dumps core instead.

    But I find that neither limit datasize 1000 nor limit memoryuse 1000 causes subsequently launched processes to be unable to malloc enormous amounts of memory (I tried it with a simple malloc-bomb program; its mallocs never fail).

    Is there some kernel flag I have to set to make heap limits be obeyed or something?

  • See the other notes here regarding the Linux implementation of memory management. I'm here to talk about Apache.

    It's somewhat true. I don't know about caching static pages in RAM (it wouldn't surprise me, though), but there are modules like mod_perl that load Perl CGI scripts into memory and leave them there. The Perl is already compiled and in memory, so the next time that CGI is hit, it runs straight from memory.
  • Read. Play with /proc/sys/vm/* and see if it helps.

    JWZ's question is a good one; I'd like to know why the RSS rlimit stuff doesn't seem to work, too. The other rlimits seem to work.
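    For the curious, a sketch of poking at those knobs. The tunable names are kernel-version specific; buffermem existed on 2.2.x kernels but not necessarily elsewhere, hence the guard.

    ```shell
    # List whatever VM tunables this particular kernel exposes
    ls /proc/sys/vm
    # On 2.2.x kernels, buffermem held "min borrow max" percentages of memory
    # reserved for buffers; the guard keeps this from failing on other kernels
    if [ -f /proc/sys/vm/buffermem ]; then
        cat /proc/sys/vm/buffermem
    fi
    ```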
  • I wish I knew, too. Netscape (et al.) tends to consume all of my system's memory until the swapping is continuous. Then I can't do *anything* at the console for extended periods, all the while hearing my disk thrash itself into oblivion. Couldn't the system reserve a few K of physical memory to handle things when it gets full? Like shutting down errant apps, or at least giving them an out-of-memory error and letting the process deal with it.

    I know I can run Junkbuster and Netscape tends not to do this, but it shouldn't be allowed to take so much memory anyhow.
  • I believe what's happening is that GNU libc's malloc() often uses mmap() of /dev/zero to get large anonymous mappings. mmap() doesn't count against the memoryuse limit (RLIMIT_DATA in the code), which only covers things acquired via brk().

    It appears that RLIMIT_AS, shown in bash as 'virtual memory (kbytes)' and available via ulimit -v, will limit total virtual memory usage, including all mmap()'d objects (and I think System V shared memory, from a peek at the code). Unfortunately it appears that csh/tcsh haven't been updated to know about this new thing-to-limit.
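    A quick way to watch RLIMIT_AS bite, using bash's ulimit -v; the 20 MB cap and the use of dd as a stand-in memory hog are just for illustration.

    ```shell
    # Cap the subshell's address space at ~20 MB, then ask dd to allocate a
    # 100 MB I/O buffer; the allocation inside dd should fail, not succeed
    ( ulimit -v 20480; dd if=/dev/zero of=/dev/null bs=100M count=1 ) 2>&1 \
      || echo "allocation refused"
    ```

    Because glibc's malloc() falls back to mmap() for large requests, only an address-space limit like this reliably catches them; RLIMIT_DATA does not.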

  • In my local experience, this is quite to be expected with a 2.2.* series kernel (used in a number of recent distributions, e.g. Red Hat 6.0 onwards). 2.2.* seems to aggressively swap things out to make room for various sorts of disk cache; the same machines under 2.0.* didn't go into swap until they actually ran out of memory for real, non-cache/non-buffer pages.

    This aggressiveness appears to be basically harmless; I haven't noticed any particular performance problems due to it, although it can be somewhat disconcerting to see a machine with lots of real memory and no programs applying memory pressure sit 10 or 20 megabytes into swap.


  • I have the same problem. The way Linux manages virtual memory and disk cache _by default_ doesn't scale well with common desktop use. Not to put down the Linux developers, who know exactly what's going on, but management this dynamic, without some added intelligence, can severely hurt performance in some cases. That's my experience, at any rate.

    Technically I'd say there are a few solutions to the problem:

    Dumb:
    1) Set the size of the disk cache statically. This is the easiest solution, but it requires manual tuning of the system.

    Intelligent:
    2) Persistently record how files are used: how heavily they are used when first accessed/run, % of CPU, etc. The disk cache and VM can then do better by sorting processes, memory areas, and files by priority criteria. This can be used to pick a disk cache size closer to the average need, and to skip caching files you access once in a blue moon entirely.

    The more choices we have, the better chance of a perfect fit.

    - Steeltoe
  • Why not try swapoff ! [sti.ac.cn]

    That will stop those annoying little critters...

    You may of course have to buy a few more Mb RAM ;-)

    Seriously though, as has been said here a few times already, Linux swaps out unused program code so you get more disk buffers and, hopefully, better performance. If it is thrashing, you are either doing lots with big-memory-footprint apps, or you have some funny data/code usage patterns. The good old answer of more memory may be the right one, or you can get into some real kernel tuning.

    Cheers,
    R.
