How can you Reduce Disk Swapping in Linux?
A member of the vociferous Clan Anonymous Coward asks: "I am finding that on my 64 MB box, Linux is using the swap file even though it looks like there should be plenty of RAM free; most of it is being used by the disk cache. For example, 4.8 MB of swap is used when the disk cache is 20 MB. What can be tuned to reduce the disk cache size? Sample /proc/meminfo output follows:
total:64569344 used:57200640 free:7368704 shared:32337920 buffers:3235840 cached:20496384 used-swap:4837376
"
Good question (Score:2)
The work-around is to use RAM disks. Simply create a RAM disk, with memory that would otherwise be free, format it as swap-space, and activate it with swapon.
It doesn't stop the swapping, but at least the swap traffic stays in RAM rather than going to disk.
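For what it's worth, the steps described above look something like this (a sketch only: it requires root, and the device name and size are my assumptions, so adjust for your kernel's RAM disk setup):

```shell
# Sketch of the RAM-disk-as-swap trick (run as root; device name and
# 16 MB size are assumptions, not a recommendation).
dd if=/dev/zero of=/dev/ram0 bs=1024 count=16384   # touch 16 MB of the RAM disk
mkswap /dev/ram0                                   # format it as swap space
swapon -p 5 /dev/ram0                              # activate it at high priority
```

Giving the RAM disk a higher swap priority than your disk partitions (the -p flag) makes the kernel fill it first.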
Don't worry about it.... (Score:2)
Same Problem (Score:1)
_______________________________________________
There is no statute of limitations on stupidity.
Not a problem (Score:2)
As far as I am aware, this isn't really an issue.
The amount of physical RAM free is rarely more than a few megs, as you've probably noticed (this is the "free" value in your example). Incidentally, my preferred way to determine free memory is with the free command.
What probably caused you to go into swap was running a big program or two at some point (Netscape, and X in general, can be real killers). Linux will automatically scale the buffers and cache down to practically zero if necessary. Try running some memory-hungry programs and watch the memory stats some time: it will only swap if necessary. When it does swap, it only swaps dormant processes, which you'll find plenty of in most standard distributions. Typically you've got junk like nfsd and httpd and other more bizarre daemons running that you don't really need, and these unused processes will be what has swapped.
Because what is swapped was sleeping, it is likely to stay that way, so you only get a one-off performance hit from the swapping. When you close down your large apps, you'll find the memory which gets freed goes back to being used for cache/buffers, instead of swapping stuff back into memory. Why? Because that would be just as slow as swapping it out in the first place, and it's handy to keep that RAM free (bigger cache == better performance). If any of those processes wake up, they'll get automatically swapped back in.
So, to answer your question, no tuning is needed. The disk cache only takes memory not needed by processes. If RAM is low, the disk cache will shrink to virtually nothing, and if still more memory is needed, sleeping processes will get swapped. If RAM then gets freed up, the cache will expand again to fill the void. If a swapped process becomes active again, it gets swapped back into memory.
If you are really concerned, try disabling unused services on your system to free memory. This is relatively distribution specific; refer to your documentation. But the only time you should be worried is when RAM is so low that even mostly active processes get swapped, because then you'll get lots of disk thrashing. My guess is that if your system is running with 20-odd megs of buffers and cache (plus 7 MB free), you're quite comfortable.
You might like to spend a little bit of time playing with top (table of processes). It lists your processes, and can be set to sort by CPU usage, Memory size, etc.
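If you prefer a one-shot snapshot to top's full-screen display, ps can give a similar ranking (the flags below are from procps ps; treat this as a sketch, not the only way to do it):

```shell
# List the top memory consumers, largest resident set first.
# --sort=-rss sorts descending by resident memory (procps ps).
ps aux --sort=-rss | head -n 6
```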
Hope this helps.
Related Question (Score:1)
Is this a load of hooey? Could someone perhaps explain, or give a link to an explanation of, the memory management in Linux?
Thanks.
Linux memory management (Score:4)
In fact, as you approach 3/4 of your physical RAM, the kernel becomes very reluctant to keep programs in memory. It's virtually impossible to use 100% of physical RAM, because by that point the kernel is swapping out your resident sets with every disk access. Just try recording some uncompressed video: you'll see the recording software fits perfectly well in RAM, but after writing a few gigabytes of data the kernel has swapped out everything it can to increase disk caching. In fact, your first fwrite will be very slow, as the kernel is writing physical RAM to disk to free up cache and only then writing your actual data.
Now this scheme can get very problematic if you really need the programs that the kernel swaps out. There are parameters in the kernel source which determine the swapping threshold but the problem is more what to set those parameters to than where they are.
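On a 2.2.x kernel you don't even need to recompile to poke at some of these; several thresholds are exported under /proc/sys/vm. Names vary by kernel version, so treat the paths below as examples rather than a guaranteed interface (linux/Documentation/sysctl/vm.txt describes what your kernel actually has):

```shell
# See which VM tunables this kernel exposes.
ls /proc/sys/vm

# On 2.2.x, freepages holds the min/low/high free-page targets the
# swapper works toward; it may not exist on other kernel versions.
cat /proc/sys/vm/freepages 2>/dev/null
```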
limiting process size? (Score:2)
See, every now and then, xmms loses its mind and grows to hundreds of megabytes. I'd like to set things up so that as soon as it hits 20M or so, it just dumps core instead.
But I find that neither 'limit datasize 1000' nor 'limit memoryuse 1000' causes subsequently launched processes to be unable to malloc enormous amounts of memory (I tried it with a simple malloc-bomb program; its mallocs never fail).
Is there some kernel flag I have to set to make heap limits be obeyed or something?
Re:Related Question (Score:1)
It's somewhat true. I don't know about caching static pages in RAM (it wouldn't surprise me, though), but there are programs like mod_perl that load Perl CGI scripts into memory and leave them there. The Perl is already compiled and in memory, so the next time that CGI is hit, it runs straight from memory.
linux/Documentation/sysctl/vm.txt (Score:1)
JWZ's question is a good one; I'd like to know why the RSS rlimit stuff doesn't seem to work, too. The other rlimits seem to work.
Re:limiting process size? (Score:1)
I know I can run Junkbuster and Netscape tends not to do this, but it shouldn't be allowed to take so much memory anyhow.
Re:limiting process size? (Score:1)
I believe what's happening is that GNU libc's malloc() often uses mmap() of /dev/zero to get large anonymous mappings. mmap() doesn't count against the datasize limit (RLIMIT_DATA in the code), which only covers memory acquired via brk().
It appears that RLIMIT_AS, shown in bash as 'virtual memory (kbytes)' and available via ulimit -v, will limit total virtual memory usage, including all mmap()'d objects (and I think System V shared memory, from a peek at the code). Unfortunately it appears that csh/tcsh haven't been updated to know about this new thing-to-limit.
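A quick way to see the difference from the shell, using bash's ulimit -v (which sets RLIMIT_AS): dd's big I/O buffer comes from malloc(), which glibc satisfies with mmap() at that size, so a datasize limit misses it but an address-space cap does not. A sketch, assuming bash and GNU dd:

```shell
# Run in a subshell so the limit doesn't stick to your login shell.
(
  ulimit -v 20480    # cap total address space at ~20 MB (RLIMIT_AS)
  # dd mallocs a 100 MB transfer buffer; under the cap, the mmap()
  # behind that malloc fails and dd exits with an error.
  dd if=/dev/zero of=/dev/null bs=100M count=1 2>/dev/null \
    || echo "allocation refused"
)
```

Without the ulimit line, the same dd command completes happily, buffer and all.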
The 2.2.* kernels swap stuff out more than 2.0.* (Score:1)
In my local experience, this is quite to be expected with a 2.2.* series kernel (used in a number of recent distributions, e.g. Red Hat 6.0 onwards). 2.2.* seems to aggressively swap things out to make room for various sorts of disk cache; the same machines under 2.0.* didn't go into swap until they actually ran out of memory for real, non-cache/non-buffer pages.
This aggressiveness appears to be basically harmless; I haven't noticed any particular performance problems due to it, although it can be somewhat disconcerting to see a machine with lots of real memory and no programs applying memory pressure be 10 or 20 megabytes into swap.
Kernel ideas (Score:1)
Technically I'd say there are a few solutions to the problem:
Dumb:
1) Set the size of the disk cache statically. This is the easiest solution, but requires manual tuning of the system.
Intelligent:
2) Persistently record file usage: how much each file is used when it is first read or run, % of CPU, etc. The disk cache and VM can then operate better by sorting processes, memory areas, and files by priority criteria. This could be used to pick a disk cache size closer to the average need, and to skip caching files you access once in a blue moon entirely.
The more choices we have, the better chance of a perfect fit.
- Steeltoe
Obvious solution (Score:1)
That will stop those annoying little critters...
You may, of course, have to buy a few more MB of RAM.
Seriously, though: as has been said here a few times already, Linux swaps out unused program code so you get more disk buffers and, hopefully, better performance. If it is thrashing, you are either doing a lot with big-memory-footprint apps, or you have some unusual data/code usage patterns. The good old answer of more memory may be the right one, or get into some real kernel tuning.
Cheers,
R.