How Big Should My Swap Partition Be?
For the last 10 years, I have been asking people more knowledgeable than I, "How big should my swap be?" and the answer has always been "Just set it to twice your RAM and forget about it." In the old days, it wasn't much to think about — 128 megs of RAM means 256 megs of swap. Now that I have 4 gigs of RAM in my laptop, I find myself wondering, "Is 8 gigs of swap really necessary?" How much swap does the average desktop user really need? Does the whole "twice your RAM" rule still apply? If so, for how much longer will it likely apply? Or will it always apply? Or have I been consistently misinformed over the last 10 years?
What Has Changed? (Score:5, Informative)
'Is 8 gigs of swap really necessary?'
With a 750GB [newegg.com] hard drive selling under $100, what has changed?
... and 8GB of space is still trivial with a 750GB hard drive.
Yeah, your 256MB of space was trivial when you had a 30GB hard drive
That said, I'll forward you some common information on paging [wikipedia.org].
Linux and other Unix-like operating systems use the term "swap" to describe both the act of moving memory pages between RAM and disk, and the region of a disk the pages are stored on. It is common to use a whole partition of a hard disk for swapping. However, with the 2.6 Linux kernel, swap files are just as fast as swap partitions, although Red Hat recommends using a swap partition. The administrative flexibility of swap files outweighs that of partitions; since modern high capacity hard drives can remap physical sectors, no partition is guaranteed to be contiguous.
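For reference, on Linux you can see whether a box is swapping to a partition or a file; /proc/swaps lists every active swap area:

```shell
#!/bin/sh
# Show active swap areas on a Linux system.
# Columns: Filename, Type (partition or file), Size, Used, Priority.
cat /proc/swaps
```

`swapon -s` prints the same table on most distributions.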
I'm no expert, but the short answer is to look at your swap partition as your extended virtual memory. Saying that your swap partition should be 2x your main memory is like saying that you will never use more than 3x your main memory (in this case 12GB). While that rule of thumb is a good one, there may in fact be applications today in the graphics and processing world that require insane amounts of memory. While Firefox is probably never going to reach that critical mass (nor will most average programs), it's probable that a few years from now it will be commonplace. I know it's insane to think of, but 'ought to be enough for anybody' is not the phrase you want to throw around in the digital information world.
It's those days when I'm playing Warcraft through Wine, listening to streaming radio through Amarok, with 20 windows open behind it, idling a LAMP server for my development projects, running a vent client, some form of news aggregator, Pidgin, and an e-mail client hooked up to several POP3/IMAP accounts, that I am happy I erred on the side of a whole ton of swap space.
Re:What Has Changed? (Score:5, Interesting)
I have an Eee 901. It has 1GiB of RAM and 20GB of disk space. A swap partition on the 'twice your RAM' rule would be far from trivial.
I decided to be bold and installed Hardy with no swap partition. It seems to work just fine so far; Firefox greys out for a few seconds sometimes while loading pages, which might have to do with my reckless configuration, but on the whole it's pretty snappy.
As for my desktop PC, it has 4GiB of RAM. I followed the traditional rule when I installed on that. I don't think that swap partition has ever even been used.
Re:What Has Changed? (Score:4, Interesting)
Consider using a swap file for your setup. I'd recommend 256MB of swap with 1.25GB of RAM; apart from when I left Wireshark running for too long, I've not seen usage creep above ~100MB for long.
Here's how big (Score:5, Interesting)
I use swap only to tell me that I'm low on RAM. Basically once the machine starts using swap and getting slightly slow- it means I'm low, then I can try to shut down stuff (without it behaving otherwise strangely, or dying abruptly).
Here's how I suggest you figure out _roughly_ how much swap you need.
1) Figure out the amount of virtual memory your programs and services _allocate_ without really _using_ - call this F. Some programs allocate hundreds of MB of memory but never touch it. But note that some programs allocate lots of memory and actually do use it.
2) Figure out your drive throughput for swap access (swap in + swap out)- this is often related to random access throughput - and for a typical hard drive it could be in the order of magnitude of 10MB/sec - call this M. Note that many flash drives have pathetic random write speeds of 4MB/sec (or even less!).
3) Figure out the time you are willing to wait for stuff to swap in and out (e.g. time to get an ssh prompt) - call this T.
Swap = F + T * M.
So for example, if you have programs that allocate a total of 100MB and never use it, and your drive swap throughput is 10MB/sec and the amount of time you're willing to wait is 15 seconds.
Swap = 100MB + 15 sec * 10MB/sec = 250MB.
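If you want to play with the numbers, the formula is one line of shell; the function name is mine, and the sample values are the ones from the example above:

```shell
#!/bin/sh
# Swap = F + T * M, with F in MB, M in MB/sec, T in seconds.
swap_estimate() {
    F=$1   # MB allocated but never used
    M=$2   # swap throughput in MB/sec
    T=$3   # seconds you're willing to wait
    echo $(( F + T * M ))
}

swap_estimate 100 10 15    # prints 250 (MB), matching the example
```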
As you can see, allocating gigabytes can hurt - it can take ages to swap in and out processes that are using gigabytes of swap. You'll run out of time before you run out of swap, and when that happens somebody will do a hard shutdown of the machine - and that means ALL processes will be abnormally terminated, rather than just one.
Yes, there are cases where the offending program might not keep accessing all of that swap, but when a program misbehaves like that, you'd rather find out sooner rather than have to shutdown the whole computer (because it takes ages to respond).
Running programs from swap is best reserved for those who wish to experience the 1950s drum memory days. If you want to do retrocomputing keep in mind that memory speeds are now much faster than disk speeds, whereas in the 1950s memory speed = drum speed, and most modern programs assume modern memory speeds.
Re:Here's how big (Score:5, Informative)
So I'm a FreeBSD guy rather than a Linux guy, but I'm going to assume that Linux also supports 'limits' that define the maximum a program can utilize before it's denied access to more resources. You won't get a normal app on my FreeBSD boxes to use more than 256M of RAM; they aren't allowed. There are 2 exceptions: the PostgreSQL server on one of the machines, and the bot that connects to that database. They both deal with large datasets on a regular basis, so they are allowed to use more RAM. Now mind you, these machines are used for my personal development projects and they aren't really 'servers' in the sense that they see real load. My instances of Apache don't NEED a lot of RAM; some deployments do.
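Linux does have the same mechanism: the shell's ulimit builtin (setrlimit under the hood, persisted via /etc/security/limits.conf) caps what a process may use. A sketch mirroring the 256MB cap described above:

```shell
#!/bin/sh
# Cap address space for processes started from this shell (value in KB).
ulimit -v $((256 * 1024))   # roughly a 256MB virtual memory limit
ulimit -v                   # prints the limit now in force: 262144
```

Anything launched from that shell which tries to allocate past the cap gets an allocation failure instead of dragging the whole box into swap.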
My point is that there are other protections in place that prevent an app from 'running away' and taking a properly configured machine down.
Second, swap can be VERY useful even if you NEVER run out of RAM. The OS can swap out apps that have claimed memory but aren't actually doing anything with it, and leave that memory available for file/disk caching, which can make performance FAR better than if you kept the idling apps in memory and had less available disk cache. Some apps avoid buffering things in memory because it's both easier and usually more efficient to use the disk and let the OS manage the buffering. I've seen NT-based OSes aggressively swap out things that aren't in use just so there is more memory available for disk cache, and it makes sense, because there is a lot of crap the kernel and other apps load up that is very RARELY needed, if ever.
So while you can (and did) point out the potential pitfalls of using swap, your examples don't apply to any modern OS. I'm excluding Windows from that statement because, let's face it, it's not exactly modern at the core. Modern kernels are FAR better at deciding what to swap than you are in almost every case, just like compilers can do a far better job of optimizing applications than most developers can. Yes, some people can do better, but it's not likely you are, and certainly not the guy asking this question.
In short, if you're going to get technical about why you wouldn't want to use swap, at least use example problems that weren't solved years ago.
And for reference, you've configured your swap poorly if you do what you say.
Re:Here's how big (Score:4, Insightful)
Unfortunately, these "aggressive" memory managers are rather stupid. They will happily swap out every running program to increase the disk cache even in situations where caching makes no sense. Caching only makes sense if the underlying media is a bottleneck for performing a given task. How much of a bottleneck is your hard disk however when you are downloading and uploading files? When you're watching a movie? When you're playing MP3's? Or even, when you are serving web pages (over a link slower than your hard disk)?
In none of those situations will you get ANY benefit at all from disk caching... yet if I watch a 4 GB movie over a period of 2 hours, a lot of memory managers will decide that attempting to cache all of that data might be a good thing. Halfway through the movie, it will think that all those other running programs have been unused for an hour and can be safely swapped out in favor of caching more of that 4 GB file. The end result is that half your programs are swapped out after watching a movie, resulting in a sluggish system that is thrashing all over the swap file to restore some sanity, and all for caching data that put NO STRESS on the underlying media in the first place.
The same thing happens for idling systems left on over night, doing simple tasks like virus scanning, downloading files, rebuilding indexes, and so on. The end result is that a system feels sluggish the next day, for no tangible performance benefit.
Ask yourself, if I have 4 GB of RAM, and 500 MB worth of applications running, effectively having 3.5 GB for disk caching. How useful is it to swap out that extra 0.5 GB worth of kernel/programs for even more disk caching? Is 4 GB of disk cache so much more valuable than 3.5 GB? I highly doubt it, so to prevent stupid memory managers from swapping out my favourite programs which I left running for a reason, I just turn off swap.
Re:Here's how big (Score:4, Insightful)
This formula is ridiculous and makes no sense at all. To determine if you need swap you are far better off just figuring out if your system has enough RAM to run the programs you want to use on a daily basis. If you have that, then there's no reason to ever use swap (as that was the original reason people needed swap in the first place). As an added bonus, systems without swap cannot swap out programs in favor of increasing the disk cache, keeping everything snappy even after days of not using certain programs. So, here's my formula:
1) Do not turn on swap.
2) If there's ever any problem with memory, create a swap file (if you don't have one yet) and type swapon on a live system.
You can see when you have a memory problem by applications getting killed when they try to allocate large chunks of memory or by keeping an eye on a memory monitor. If you see most memory allocated to stuff that isn't the disk cache, then you'll need swap.
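Step 2 above is only a handful of commands; a sketch assuming root, with an arbitrary path and size:

```shell
#!/bin/sh
# Create and enable a swap file on a running system (run as root).
# The path and size here are illustrative, not canonical.
SWAPFILE=/swapfile
SIZE_MB=1024                                  # whatever covers the shortfall

dd if=/dev/zero of="$SWAPFILE" bs=1M count="$SIZE_MB"   # allocate the file
chmod 600 "$SWAPFILE"                         # swap must not be world-readable
mkswap "$SWAPFILE"                            # write the swap signature
swapon "$SWAPFILE"                            # live immediately, no reboot
```

`swapoff /swapfile` undoes it just as easily once the memory pressure passes.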
Re: (Score:3, Funny)
Re:What Has Changed? (Score:5, Informative)
There are better reasons than boldness for not using swap on an Eee. They use solid-state drives (except some 1000-series models and the 904), which are faster than mechanical devices but can be rewritten fewer times. To make sure your drives last longer, do the following [ubuntu.com]:
Sure, without swap and with tmpfs you will have less memory available, but I have an Eee 900A and I bought it as a presentation machine, possibly for some occasional work while travelling, not as a workhorse.
Re:What Has Changed? (Score:5, Insightful)
Re: (Score:3, Informative)
They use solid-state drives (except some 1000-series models and the 904), which are faster than mechanical devices but can be rewritten fewer times.
Actually, on my Eee PC 901 they're not even faster than normal drives. The stock Eee PC comes with one boot drive that seems to run at more-or-less "normal" speed, and one larger "data drive" that seems ... slow. The slowness is especially noticeable during writes. For example, when I stored my Firefox profile on the data drive, I could expect a 1-4 second pause every time I loaded up a Web page, during which the program would be unresponsive. Presumably this happened when Firefox was writing to its cache,
Re:What Has Changed? (Score:4, Informative)
Re: (Score:3, Interesting)
Re:What Has Changed? (Score:5, Insightful)
Well, that doesn't mean it isn't swapping. If faced with memory pressure, the OS can throw away file backed pages instead, such as program executable pages, and then bring them in later. Those file backed pages will be scattered all around the partitions that hold your programs, though, not concentrated in the swap partition.
It also means that buffered writes will need to be pushed to disk sooner, which reduces your disk buffering; anything that writes a lot of data will impose more pain on your system.
The bummer in all this is that you have nowhere to put anonymous pages. These are the pages associated with "malloc()" (or "new" if you prefer), as well as any other per-task writable structures such as the stack and global variables. These pages aren't backed by any file and could only go to swap. Without a swap file, they will always accumulate in RAM until unmapped, crowding out program pages and disk buffers. This includes pages that don't actually hold anything at the moment, but remain part of the process' malloc heap due to internal heap fragmentation.
So, that's where the increasing thrashiness comes from on a swapless system. If you get under enough memory pressure from anonymous pages, then it's hard to keep enough program pages and disk buffers around to make real progress. And when you do need those other kinds of pages, they're spread all over the disk so you suffer from tons of seeking penalty, unless you're on an SSD.
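If you want to see this split on a live Linux box, /proc/meminfo reports anonymous pages (which can only go to swap) separately from the file-backed page cache; a rough sketch:

```shell
#!/bin/sh
# Anonymous vs file-backed memory, plus swap totals, in MB (Linux).
awk '/^(AnonPages|Cached|SwapTotal|SwapFree):/ { printf "%-11s %6d MB\n", $1, $2/1024 }' /proc/meminfo
```

A large AnonPages figure with no swap configured is exactly the "nowhere to put them" situation described above.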
--Joe
Re:What Has Changed? (Score:5, Insightful)
If the paging algorithm does its job well and the active working set stays stable in RAM, then the bulk of the writes to the swap file are for the dead weight inactive anonymous pages. Freeing up additional RAM for disk buffers could also prevent writes on other random files if they were short lived and deleted before ever getting written. This happens more often than you might think, and is one of the motivations (but not the only one) behind deferred allocation. (The other big one is multiple files opened for streaming writes in parallel.)
So, like all things, it's a tradeoff. When you're on an SSD, if your working set fits in RAM and you don't really thrash, then by all means turn swap off. If you find yourself thrashing a little, do yourself a favor and make a small swapfile and see if that stabilizes things, since at least some of that additional activity will be writes that could go away if you had more RAM--may as well let the VM throw out some deadweight pages to make room for transient pages that might live and die in RAM. If you're oversubscribing your RAM such that you need a truly huge swapfile, consider getting more RAM, because you're likely punishing your SSD.
Re:What Has Changed? (Score:5, Insightful)
I suppose that is not really a disk but rather flash storage. Swapping to flash is not the best idea, as it could shorten the flash's lifespan. So I'd say this is probably one of those cases where no swap is the correct configuration.
Re: (Score:3, Interesting)
Having a swap partition doesn't necessarily mean having a lot of swap traffic. Often what gets placed in swap are portions of the heap that got allocated, but won't be referred to for quite some time. It gives room for other types of pages (as I mention here [slashdot.org]).
That said, if what you're doing doesn't cause a lot of thrashing when there is no swap, don't add swap on your flash SSD.
Re: (Score:3, Informative)
I'm not 100% sure, but I think firefox grays out when trying to crunch big script(s) - it might be pegging the processor, but I don't know for sure. I've been meaning to track down the issue, but have been too lazy to so far. If anyone knows the cause/fix, I'd love t
Re:What Has Changed? (Score:4, Informative)
Re:What Has Changed? (Score:4, Insightful)
Linux will use swap sometimes even if you don't fill up your RAM. It can swap out idle programs and use the recovered RAM for file caching which gives a performance boost to the file system.
Re:What Has Changed? (Score:5, Informative)
Linux will use swap sometimes even if you don't fill up your RAM. It can swap out idle programs and use the recovered RAM for file caching which gives a performance boost to the file system.
Conversely, if you have enough ram for file caching and running applications, then you will get a performance boost from disabling swap (because applications would be faster if they weren't ever swapped out).
Re:What Has Changed? (Score:4, Funny)
"Enough ram for file caching" is approximately infinite RAM
You know what I love? I love how you're not afraid to say something unbelievably stupid without irony. You should run for President.
Re: (Score:3, Interesting)
Kind of like running a LiveCD all the time.
Re:What Has Changed? (Score:5, Informative)
Base 2. Storage numbers using base 10 are for disk manufacturers that are filthy liars.
Re:What Has Changed? (Score:5, Interesting)
Perhaps you'd like to tell us whether a GB is base 2 or base 10 then.
You obviously aren't worth your SALT!
Remember kids, it DEPENDS!
Bandwidth? Base 10 -- always has been.
ROM? Base 2 -- always has been -- and traditionally in bits, not bytes.
RAM? Base 2, and bytes.
Hard disk? Base 10 in the manufacturer's specs, base 2 in the OS display. Always has been that way, always will be.
Floppies? Base 2 until you get to MB, where 1MB = 1000 base 2 KB (seriously).
Clock speeds? Base 10, always has been, always will be.
Flash? Who knows.
Isn't it great that we have such an easy, convenient system that is focused around the needs of us humans, and not the needs of the computers (who don't care in the slightest).
Re:What Has Changed? (Score:5, Informative)
Pop quiz:
Throughput: How many bits per second peak can a 14.4kbps modem move? 1.544Mbps T1 line? 10Mbit Ethernet?
Disks: How many bytes are on a 1.44MB floppy? A 2.88MB floppy? A 650MB CD-ROM?
Answers:
Throughput: 14,400. 1,544,000. 10,000,000. Hmmm... so much for base-2 throughput numbers. And yet, when you see the "kB/sec" rate in your browser download dialog, that is most likely in 1024-byte/sec quantities.
Disks: 1,474,560 (1440 * 1024, a mixture of base-10 and base-2), 2,949,120 (2880 * 1024, again a mixture), and 681,984,000 (333,000 sectors * 2048 bytes/sector; the "650MB" label comes from dividing by 1,048,576, a base-2 MB). And yet when you look at disk capacities from most computer software, it's reported as purely base-2 sizes.
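The mixed-base floppy arithmetic is easy to double-check in a shell:

```shell
#!/bin/sh
# "1.44MB" floppy = 1440 KiB: a base-10 count of base-2 kilobytes.
echo $(( 1440 * 1024 ))      # 1474560 bytes
echo $(( 2880 * 1024 ))      # 2949120 bytes
echo $(( 333000 * 2048 ))    # 681984000 bytes on a "650MB" CD-ROM
```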
So, what's consistent about this again? RAM seems to be the only thing that gets it right most of the time, though I do remember seeing plenty of adverts for Commodore 64s that listed them with 65K of memory back in the day.
And for the real brain bender: If we agree that bits should always use power-of-2 meanings and everything else should use power-of-10, what do we do when the two collide, such as when talking about areal density? (That's bits per square meter.)
That said, whoever came up with the names gibibytes, mebibytes and kibibytes must have wanted us all to sound like we have a speech impediment or something, as the pronunciation for these sounds worse than baby talk. I'll stick to saying gigabytes, megabytes and kilobytes and their understood power-of-2 meanings where it makes sense, knowing full well that it has deep flaws. It's just an unfortunate circumstance, but most of the time it thankfully doesn't matter.
--Joe
Re:What Has Changed? (Score:4, Insightful)
Yes, let's go back to the days of overlays and manual management of transferring data to and from disks and other devices. That was so much simpler.
Re: (Score:3, Informative)
Now, if you suppose that someone has 4 GB RAM and a 750 GB drive, they'd be using 8 GB for swap space, which represents about
HOWEVER, if you look at smaller drive sizes, which are still c
Errr... check your math. (Score:5, Informative)
8GB swap on a 120GB drive is 7%, not .07%. On a 200GB drive, it's 4%, not .04%, etc.
SirWired
Re:Errr... check your math. (Score:5, Funny)
It's "Verizon math"...
Just my 0.02 cents worth.
Re:What Has Changed? (Score:5, Insightful)
Re:What Has Changed? (Score:5, Insightful)
You present several arguments, but none of them are really very good.
Honestly, why does the 2x RAM guideline make any sense? Why is it that when I upgrade my 1GB to 8GB, I suddenly need 16GB of swap space, even though my old total of RAM+swap was less than half of my new RAM alone? That makes no sense. Why should I want to increase the amount of swap I'm using if I've never used half the RAM I've got in the life of the computer?
How about we practice some Engineering? I know, it's slashdot, it's a tough thing to do, but bear with me.
So you've got a computer, and you know what you do with it. Simply do what you'll do, and figure out the peak memory usage over a period of time. Add 50% or so to get a target memory value, and if your current memory exceeds that value and thus you've got more than enough to never have to hit swap, pick a small number like 256MB for your swap partition to satisfy applications which demand swap even when enough memory is available. If you don't have enough memory, then create a swap file to make up the memory shortfall.
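A crude way to measure that peak on Linux is to sample /proc/meminfo over a representative session. A sketch: the sample count, the MemTotal-minus-MemAvailable definition of "used", and the 50% headroom factor are all just the assumptions from this comment:

```shell
#!/bin/sh
# Track peak (MemTotal - MemAvailable) in MB, then add 50% headroom.
# Bump SAMPLES up to cover a real work session.
SAMPLES=${1:-3}
PEAK=0
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    USED=$(awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} END {print int((t - a) / 1024)}' /proc/meminfo)
    if [ "$USED" -gt "$PEAK" ]; then PEAK=$USED; fi
    i=$((i + 1))
    sleep 1
done
echo "peak=${PEAK}MB target=$((PEAK * 3 / 2))MB"
```

If the target comes out well under your installed RAM, a token swap partition is all you need; if it comes out over, a swap file covers the difference.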
Seriously, some of the suggestions for swap are impractical. If you're actually using 24GB of virtual memory in my hypothetical, and your hard drive can only move a few tens of MB of it per second, you're not using your computer anyway, because it's too slow.
Re: (Score:3, Informative)
Because if you've already successfully chewed through 8GB of RAM, having an extra 256MB of swap available just in case isn't likely to be a very meaningful stop-gap.
The other advantage of swap, which I've not seen anyone discuss here yet, is that unused programs and data will be flushed out of RAM to make room for more buffers and cache.
Might not seem like a big deal, either, but: Suppose you've got OpenOffice and Firefox open, but you haven't used them for awhile. They're each using a few hundred megs of
Re:What Has Changed? (Score:4, Informative)
Re: (Score:3, Interesting)
I'd agree that, relatively speaking, an 8G page space today is probably a smaller share of the disk than a 512M partition was a while ago... However, some corrections:
Technically swap space is not page space, though the distinction is being blurred quite a bit. Swap space was used to actually swap out entire processes, while page space was used to page out memory, er, pages. I'll use swap here :D
The wikipedia link is a little incorrect. In many cases a swap partition can be more efficient than a swap file at least in Linux. For one, there'
Re:What Has Changed? (Score:4, Informative)
I assume you don't hibernate; that's the only reason I have 1x my RAM in swap. Although using something like "hibernate" instead of Ubuntu's tool will happily compress it, so I could really get away with about 60-70%.
Re:What Has Changed? (Score:5, Funny)
I hate it when that happens. A helpful popup told me I ran out of CPU cycles just a few days ago, and I had to order a whole bunch online. Cost thousands! Still waiting for them to arrive.
Re:What Has Changed? (Score:5, Insightful)
Swap space does improve performance. I have a lot of services loaded, ready for someone to use them, but they are rarely used. FTP server, file server, music server, web server, and so on. Most people have at least one little-used process running.
With no swap, these idle programs permanently consume RAM and reduce the amount of RAM available to running programs and even the disk cache.
With swap, these sleepy daemons are paged out and not loaded again unless someone needs them. I get my RAM back for something I'm doing now.
Yes, I could pare down my system so it doesn't load things unless absolutely needed, but why should I have to do that manually when I could just leave them running and have them consume zero RAM?
As to "how big should swap be?", I prefer the Mac OS X solution - all free space on your drive is swap. Nothing is reserved, and you can make swap go away by completely filling your drive (but you wouldn't do that, would you?)
Re:What Has Changed? (Score:5, Insightful)
Looking at my own OS X activity monitor:
* 320MB free (i.e. in use as disk cache)
* ~320MB wired
* ~970MB active
* ~400MB inactive
* ~500MB swap used
And it's not about applications launched later; it's about applications running now, and the files they're accessing now.
What kind of OS would say "I could use more memory right now to give better disk caching... but fuck it, there's a service that hasn't been used in 6 weeks. Better let it keep that inactive program in memory and just keep reading the disk over and over again instead of caching it"
Re:What Has Changed? (Score:5, Insightful)
Why would I want that? If Windows crashes, I want it to restart and quickly, not waste my time dumping memory.
Re:What Has Changed? (Score:4, Insightful)
Some people actually want to debug something and find out *why* windows crashed.
I have customers who insist on windows servers. When they crash, the customer wants to know *why* -- dump files are handy in this case.
For a home user, however, I see your point.
Re:What Has Changed? (Score:5, Funny)
"Some people actually want to debug something and find out *why* windows crashed."
It crashed because they booted it. Next.
Re: (Score:3, Informative)
Why would I want that? If Windows crashes, I want it to restart and quickly, not waste my time dumping memory.
Agree, but having a pagefile the size of RAM in Windows is not for crashes as the parent suggests.
Having a pagefile the size of RAM in Vista for example, lets the OS do writethrough to the pagefile, so when you hibernate, it is faster, as it is a snapshot that doesn't have to be written to the HD.
I only recommend this for laptop users that want that extra second or two of speed when sending the comp
Re:What Has Changed? (Score:5, Informative)
Try it out, it's amazingly useful for debugging BSODs.
Re:What Has Changed? (Score:4, Interesting)
But then people might find out it was Realtek or Nvidia fucking up and won't be able to blame Micro-dollars-oft.
You can't spell goatse without Gates and a big O you know!
Re:separate partitions for / and /home (Score:5, Insightful)
Is there any point to separate partitions for / and /home? I mean, if you were running different file systems on each of them I could see the point.
I have gone through four different versions of Linux on my laptop: Mandrake/Mandriva -> Fedora -> Knoppix -> Ubuntu. Guess how many times I've thanked 8 lb 6 oz baby Jesus that I had the foresight to separate the two? All my data from my college days is still intact under /home.
For this simple reason, I heavily recommend it.
Re:separate partitions for / and /home (Score:4, Informative)
Guess how many times I've thanked 8 lb 6 oz baby Jesus that I had the foresight to separate the two?
My guess: At LEAST three. :-)
I have three partitions on my system:
/home stores all my stuff, /usr/local stores all the stuff I download and build from source, and / is the stuff the distribution I use (currently Slackware 12.1) gets to muck with.
When I want a new distro, I can nuke and pave / with impunity, and depending on the age of things in /usr/local, they may need to be recompiled, and that's about all I need. Every now and then, /home and /usr/local get moved to a new, bigger drive, which is a lengthy, but fairly painless process. I don't clean out; I can't justify spending hours figuring out what I can purge and what I can't when storage is so cheap. I just buy a bigger drive, and the old smaller one becomes the new /. If the old system drive fails, it's no biggie. The new one gets its critical files backed up. If I lose it, there will be some pain, but I keep the "If I lose these files, I'd rather just die" stuff burned to disk, copied to my virtual server 1000 miles away, and on my USB keychain drive.
Multiple partitions FTW.
Re: (Score:3, Insightful)
Yes. If an OS upgrade fails, then I can reinstall the OS (i.e. format the / partition) without having to move all my /home files to a backup drive first.
Re: (Score:3, Interesting)
Because I don't have a Vista reinstall CD or even a restoration partition. Dell didn't provide one. If VMWare fscks up my existing Vista install then I'd have a problem.
Re:What Has Changed? (Score:5, Informative)
Just killing processes more or less at random when the system runs low on memory is not a good idea. (I know it is not completely random, but there surely ain't any guarantee that it will make sane decisions.) What you really want is for programs to get an out-of-memory error when trying to allocate memory, so they can shut down as gracefully as possible. (It would be neat if the choice of who gets the first ENOMEM were made by the same heuristics that would otherwise pick a process to kill, but I guess that has not been implemented.)

Guaranteeing that you will never need to kill a process because you are out of memory means the kernel must not commit to more than can be backed by RAM and swap. However, since actual usage tends to be somewhat lower than what is committed, that would be a bit wasteful. This is the main reason it makes sense to have a large swap partition that is mostly unused: it provides backing for the amount you need to commit to in order to optimally use the physically available memory.

You typically wouldn't want to make use of most of that swap. So once any significant amount starts getting used, you'd want to start giving ENOMEM errors, and that should help ensure that the swap will only be used for a short time. There are a few pieces of data in virtual memory that are only used under very rare circumstances, and it is nice to have those on swap so they don't take up precious memory. So the aim is not to have zero swap in use, just some low number of pages that are really not needed in memory.
Is there any kernel out there that gets all of this right? I don't know. But at least those I know about can be tweaked to do pretty well.
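For what it's worth, Linux exposes exactly this policy as a tunable: with strict overcommit accounting the kernel refuses commitments beyond swap plus a fraction of RAM, which turns "OOM-kill later" into "ENOMEM at allocation time". An illustrative (not recommended-for-everyone) fragment:

```
# /etc/sysctl.conf fragment (illustrative values)
vm.overcommit_memory = 2   # strict accounting: allocations beyond the limit get ENOMEM
vm.overcommit_ratio = 80   # commit limit = swap + 80% of RAM
```

The default, vm.overcommit_memory = 0, is the heuristic mode that makes the OOM killer necessary in the first place.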
Re:What Has Changed? (Score:5, Informative)
For best performance, don't reduce your swap below the amount of RAM you have, unless you want to get rid of it entirely. The reason is that Linux 2.2.x and later will, when your disks are idle, preemptively copy your physical memory to swap - that way if you do run out of RAM, all Linux has to do is reuse that RAM for other things - your application's virtual memory has already been written out to disk. This can't work as well if the swap space isn't there for it.
With 2.0.x and earlier, I would have recommended you pick the amount of virtual memory you think you need, subtract the amount of physical memory you have, and set up that much swap. With 2.2.x and later, I recommend you pick the amount of virtual memory you think you need, and set up that much swap.
For what it's worth, Windows NT derivatives do the same thing.
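On Linux, how eagerly the kernel does this preemptive swapping is tunable; vm.swappiness biases the choice between dropping page cache and swapping out anonymous pages:

```shell
#!/bin/sh
# 0 means avoid swapping except under real pressure; 100 means swap eagerly.
# The default is usually 60.
cat /proc/sys/vm/swappiness
# To change it (as root): sysctl -w vm.swappiness=10
```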
Re: (Score:3, Insightful)
I am thinking of reducing the amount of swap on my primary compute server -- the reason is simple: if the machine starts using appreciable amounts of swap, it becomes so slow it is unusable. So, really, by reducing the swap, what I do is get the OOM killer to take action and kill some processes sooner. I may have an unusual situation in that when my machine is out of memory, the cause is almost certainly a process that I want killed anyway.
The problem with this is that there is no guarantee that the process(es) the kernel kills are the ones causing the problem (basically put, a process is killed when it requests a resource that is not available, and that is not necessarily the process that is hogging the resource to begin with). A better approach is to limit the resources that a given process can use with utilities such as ulimit and similar, so that it sees the resource as being unavailable sooner and is killed off instead of som
Re: (Score:3, Interesting)
>(basically put a process is killed when it
>requests a resource that is not available and that
>is not necessarily the process that is hogging the
>resource to begin with)
That's not true at all, please read oom_killer.c in the kernel source code before continuing to make statements about a piece of code you seem to never have read. (Or if you read it before, you haven't kept up to date...)
The oom_killer scores processes using a metric that takes into account usage and generally will kill the task
Re:What Has Changed? (Score:5, Interesting)
In that case, why can't I just let Windows XP or Vista manage the virtual memory size by itself? I don't see why I should need to establish a fixed size when Windows can manage it dynamically.
I have yet to see a version of Microsoft Windows that does not end up with a hopelessly fragmented swap file over time. And if you let Windows dynamically use space for swap, you're just asking for an even more hopelessly fragmented drive as it starts grabbing space anywhere it can find some to expand the swap file.
On my own Windows installs I have brought some old-school Unix (FreeBSD in particular) methodology to partitioning, and I make a partition just for swap (still 2x my total memory size). Of course in Windows you still have to partition it, but I just don't write anything to it myself, and tell Windows to only swap to that partition. Then my main partition doesn't end up terribly fragmented.
Definitely not twice... (Score:5, Informative)
The 'twice real RAM' rule originated in the early days of Windows, when Windows could not use any swap unless you had at least as much as real RAM. That's been gone for ages now - and you should actively avoid too much swap.
If you allocate, say, 8G of swap for 4G of RAM, most of the time almost all of it will go unused. If it actually /is/ used, your machine has probably spent the past hour or so frantically swapping to try to accommodate this 12G request; i.e., your system is completely unresponsive because every program is mostly swapped out. The additional swap merely delays the out-of-memory event, and in the meantime you can't control the machine.
Swap is still useful for holding data that's not part of the working set, in order to free memory for cache; but this shouldn't be very much RAM (256-512MB should be enough). It's also useful for software suspend on Linux - if you have a laptop, make it a little bit larger than physical RAM. And always have /some/ - Linux's memory manager doesn't like having none.
Oh, nonsense (Score:4, Informative)
2X RAM was the standard rule of thumb at Sun, for SunOS long before Windows was around.
If anything, Microsoft ripped it off from Sun.
Re:Oh, nonsense (Score:5, Interesting)
In early Unixes (SunOS, e.g.), the memory manager was dumb and preallocated swap space sufficient to swap your entire process out if it became necessary, and it really did want contiguous. Running out of swap was common, even if it was really never used, and the "rule" to avoid that problem was 2xRAM. Further, if you had two swap partitions, or a partition and a file, your process stayed in whatever swap it started in and did not split across both. You could be out of swap space and still have a completely empty swap file.
Memory managers have gotten smarter, mapping smarter, and now swap is only used when it really is necessary. Pages that are not dirty don't get swapped, they get reloaded from the disk they came from. Pages that are swapped are often used soon enough that they never leave the RAM buffers.
Yesterday, I had a user come to me saying he was getting an "out of memory" error from Matlab. Matlab is notorious for not garbage collecting when it needs to. His Matlab process had 800MB of resident memory, even though he said he had just 300MB of data. The kicker? Somehow, over the last couple of years, the swap file I had created to extend the 512MB swap partition had gotten lost. Dunno where it went, just not there. He had 512MB of swap, and most of that wasn't being used. Never noticed it until yesterday. His 2GB of RAM was sufficient for what he was doing.
It's a case of people who learned early just doing what they know works, telling youngsters the "rule" so they do the same thing.
Re:Oh, nonsense (Score:5, Informative)
Yes, I believe it was the BSD memory manager (possibly earlier, V7 maybe) that had the 2xRAM rule. Less and you could have issues - more was wasted disk space.
BSD was the foundation for SunOS (pre-Solaris 2.X), Ultrix, etc. so they all inherited this requirement - and from there the "requirement" became gospel on other systems.
I've actually never heard the 2xRAM in relation to Windows, but it certainly predates it. I was setting up systems with 2xRAM when Windows was a DOS app... :)
Re:Definitely not twice... (Score:5, Insightful)
Uh, report this to your vendor as a bug. No amount of swap space should cause your system's memory manager to make such lousy decisions.
And, in fact, having an "unreasonable" amount of swap can actually pay off. If your system can swap out really stale memory to disk and use the RAM to cache stuff on disk that you might actually want, you're going to see a really big performance gain.
-Peter
What Oracle Wants (Score:5, Informative)
If you were running Oracle - here is what they recommend:
RAM -> Swap Space
1 GB - 2 GB -> 1.5 times the size of RAM
2 GB - 8 GB -> Equal to the size of RAM
more than 8GB -> 0.75 times the size of RAM
I don't know if this would carry across to general computing - it seems to me if it's enough for an Oracle RDBMS server, it ought to do it for most things.
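Oracle's table above reduces to a small piece of arithmetic; here is a hypothetical shell helper (the function name is mine, and it assumes whole-GB RAM sizes, so the integer math rounds down):

```shell
# Hypothetical helper implementing Oracle's table above (RAM in whole GB).
oracle_swap_gb() {
  ram_gb=$1
  if   [ "$ram_gb" -le 2 ]; then echo $(( ram_gb * 3 / 2 ))  # 1.5x RAM
  elif [ "$ram_gb" -le 8 ]; then echo "$ram_gb"              # equal to RAM
  else                           echo $(( ram_gb * 3 / 4 ))  # 0.75x RAM
  fi
}
oracle_swap_gb 2    # prints 3
oracle_swap_gb 4    # prints 4
oracle_swap_gb 32   # prints 24
```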
Re: (Score:3, Informative)
For a server that's probably about right; on a desktop, where stability is not quite as important, I'd go with about half that.
More sensible suggestion... (Score:5, Informative)
Reading through OpenBSD's FAQ:
"The 'b' partition of your root drive automatically becomes your system swap partition. Many people follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. This rule is nonsense. On a modern system, that's a LOT of swap, most people prefer that their systems never swap. You don't want your system to ever run out of RAM+swap, but you usually would rather have enough RAM in the system so it doesn't need to swap. If you are using a flash device for disk, you probably want no swap partition at all. Use what is appropriate for your needs. If you guess wrong, you can add another swap partition in /etc/fstab or swap to a file later."
HTH.
Just test? (Score:5, Informative)
I imagine different people will need different amounts of swap space, so use a size that's right for you.
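One way to "just test" on Linux is to run your usual workload for a while and then read the kernel's own counters; a sketch (the /proc paths are standard on Linux, and `swapon --show` comes from util-linux):

```shell
# How much swap exists and how much is free, in kB:
grep -E '^Swap(Total|Free):' /proc/meminfo

# Per-device view (util-linux), and swap used by this shell:
swapon --show
awk '/^VmSwap:/ {print "this process swaps", $2, $3}' /proc/self/status
```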
Re: (Score:3, Informative)
Remember to set permissions on swapfiles, letting any user read them is not a good idea as they may end up containing sensitive information (e.g. passwords).
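A sketch of adding a Linux swap file after the fact, with the permissions locked down as suggested; the path and size are examples, and the commands need root:

```shell
# Create a 2 GB swap file, lock down permissions BEFORE writing the
# swap signature, then enable it and make it permanent.
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile                              # only root may read it
mkswap /swapfile                                 # write the swap signature
swapon /swapfile                                 # enable it now
echo '/swapfile none swap sw 0 0' >> /etc/fstab  # enable it at boot
```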
If you need crashdumps, same rule applies (Score:5, Informative)
If you're debugging your kernel or are helping people to debug your kernel, and are generating crashdumps either manually or as a result of kernel panic, you need your swap to be twice as big as the memory so it all fits comfortably (you can probably get away with X times bigger, where 1 < X < 2, but 2 is a safe number).
To my understanding that's always been the reason for the rule of thumb about doubling the memory. If you can afford the disk, go for it, because you never know when you might hit a panic and need crashdumps. If you are in a live environment and are sure you will never, ever need or even want crash dumps, and the disk space is at a premium, you can size it based on need.
Another thing to keep in mind is that as you have more ram, you have more pages, and the whole point of swap is to get pages to disk as well in case you need to free up physical ram quickly.
Re: (Score:3, Interesting)
There's other reasons for big swap, on Solaris though. (I don't know about other OSes, don't use 'em much).
One, which recently bit me in the foot, has to do with forking. The system basically pre-allocates swap against process space size (vm size, not rss), even though the pages may not actually get physically allocated. This is because the kernel wants to make sure that when you want to write to the memory, it's going to be able to allocate pages in the VM for you to write to -- remember, we're forking so
For Suspend to Disk more than actual RAM (Score:5, Insightful)
Whatever you do, remember to set up your swap partition to be as large as or larger than your RAM in order to be able to use the "suspend to disk" function in Linux. On older laptops suspend is sometimes handled by the BIOS; then you need a special partition. But nowadays Linux just suspends to your swap. And if your memory was full ...
Re: (Score:3, Informative)
For built-in suspend this is true. TuxOnIce [tuxonice.org] offers, among other things, suspend-to-file support which eliminates the need to keep gobs of swap around if you don't want to.
Forget the RAMx2 rule (Score:5, Insightful)
Forget the RAM x 2 rule. Drive capacities are way up, base RAM load is way up, but drive transfer speed isn't up very much. It doesn't really matter how much RAM you have; long before you get a gig of swap utilized, the system is going to be thrashing to the point of being unusable under any but lab conditions.
Running with no swap can cause some problems, because it does help if the system can push out blocks of memory that aren't backed by a file and also haven't been used for a while. Still, on an all-flash system with an adequate amount of RAM, running without swap is probably the right move. On a machine with a spinning disc, give it a 1GB swap and forget it.
The exception is cases where the system is doing suspend-to-disc into the swap. I don't have any Linux machines that will do suspend to disc, so don't ask me about any details.
Re:Forget the RAMx2 rule (Score:4, Informative)
Lower the amount of RAM Linux uses by raising vfs_cache_pressure above 100. This will make the kernel dedicate less RAM to caching dirents (directory entries) for quicker lookups. For instance, to cut the amount of directory caching in half, double the pressure by doing:
echo 200 > /proc/sys/vm/vfs_cache_pressure
HTH
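If you want a change like this to survive a reboot, the same knob is also exposed via sysctl; a sketch, reusing the example value of 200 (run as root):

```shell
# Apply immediately, then persist across reboots.
sysctl vm.vfs_cache_pressure=200
echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf
```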
It Depends, but at least as much as RAM (Score:5, Informative)
Re:It Depends, but at least as much as RAM (Score:4, Informative)
Yeah, hibernate actually DOES mean "use no electricity". Perhaps you mean "suspend"?
Re: (Score:3, Informative)
"Hibernate" does not mean "use no electricity."
Actually, "hibernate" is no different than "off" from an electricity use perspective. When you hibernate with Linux (or even Windows), the contents of memory are stored to disk and the machine is powered off just as if you had performed a normal shutdown procedure.
At this point, a hibernating system is using exactly as much power as a system that is off. For an ATX system this is not zero since the power supply has to provide some power for the on/off button to function as well as to power the USB ports.
/tmp on tmpfs and large swap (Score:3, Insightful)
I personally prefer to put /tmp on tmpfs, and combine with a large swap partition (much larger than 2x RAM). tmpfs is a lot faster than a regular filesystem *even if it has to hit disk*, simply because it doesn't have to care about consistency. If the machine goes down, the data disappears.
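A sketch of what that looks like as an /etc/fstab entry (the 4G cap is an example value; mode=1777 preserves the sticky bit that /tmp needs):

```
tmpfs  /tmp  tmpfs  defaults,size=4G,mode=1777  0  0
```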
0 MB (Score:3, Insightful)
On a system with adequate RAM, the primary effect of swap is to make the system bog down before it crashes when a runaway process tries to allocate a huge amount of memory.
As much as you think you'll need (Score:3, Informative)
How big should my swap be?
It really depends on what you're planning on doing with the machine. A simple firewalling machine will never need to swap. Low-traffic websites and mail servers will probably hardly ever need it.
Also, you can always add swap later if you resize another partition. It really isn't that much of an issue, so pick a value and adjust according to your needs.
What if we just got rid of paging? (Score:4, Interesting)
Maybe we should be asking "should we even bother with swap files?". I took a class where we calculated the steps it takes to get the final memory address in a paged memory system. It was something like 36 steps per address! We had PDEs, PTEs, convert this, change that. I didn't grok all the steps, but I do know there were a lot of them. And 36 steps per little itty-bitty piece of memory is a lot, even if you are a very fast CPU, when you have to do this hundreds of millions of times.
Back in the day, it made sense to convince your programs you had an extra 100 megs of RAM, because a lot of programs needed that and didn't have it in memory. Today, memory is more abundant than things we would really need it for at the non-industrial level. I don't personally have any non-industrial applications that will fill up 4 gigs of RAM. Even Vista + WoW won't take up all that.
So, and my professor suggested this, maybe the ideal swap size is ZERO. What if your operating system just operated under the concept of "If you can't fit it in 4 gigs, tough. Just wait until memory is free. I'm not even going to bother to split memory into pages, because I'm always going to use RAM, not a hard drive page. Case closed."? We could save so much overhead and complexity if we just admitted that we never need to pretend the hard drive is RAM. With 4 gigs or more of RAM, why even have a glacially slow hard drive in the mix?
Re: (Score:3, Informative)
Either your professor is an idiot, or you had no understanding what (s)he was talking about.
Swap is not the reason for paging. Memory fragmentation between programs is the reason for paging. Saying a program can't allocate memory because it has bumped into the next one, when there is a large amount free on the other side, is insanity. This is exactly like saying you can't write to a file because there is another file right after it on the hard drive. Paging has the nice side benefit of efficient swap (swapping…
Re: (Score:3, Informative)
Swapping requires virtual memory. The converse is false.
All this scary PDE, PTE and other TLB stuff is what happens when a virtual address is converted to a physical address. That has nothing to do with swapping or paging.
Now, you cannot seriously consider abandoning virtual memory and all that comes with it (inter-process protection, kernel protection from user-space errors amongst others), can you?
Been There Done That... (Score:3, Informative)
I have made hundreds of swap partitions for OS X, A/UX, Windows, Schmindows, and just about every flavor of Unix I came across.
I would advise...
For Windows, load Process Explorer, and look at the Commit Charge Peak. Nice...
Now load a browser, a word processor, and Acrobat. OR load the game you want to play.
Make the partition the size of that peak RAM + 10%.
Make the swap size the larger of the system cache or the minimum peak commit charge. (There is a brilliant trick here, but I'd have to kill you...)
System 1:
1024MB RAM.
Peak is 70%.
Swap partition is: 1916MB, 64K clusters.
Swap file size is: 512-1668MB.
Swap file size on OS partition is 2MB.
( Someone warned me about this, and I actually listened. Sure has helped when imaging drives )
More Later...( It gets trickier for smaller ram values...) I am working on a 512MB system, a 384MB system, and a 256MB system.
RAM-based hard drive (Score:4, Interesting)
Re: (Score:3, Interesting)
.. or you could just install the RAM in the machine and remove the need for swap at all. tmpfs takes care of the rest.
Re: (Score:3, Informative)
Yeah that would be a great [gigabyte.com.tw] idea.
And yes, those bad boys do raid together and they max out SATA transfer rates.
None may be a good option (Score:5, Insightful)
I've been setting up machines with no swap partitions for a few years. Swap partitions have a bad habit of collecting secure info you may have assumed was just in RAM. All modern operating systems allow you to use a file or other blank space as swap, which means you don't need a dedicated partition. There is also the issue that if you're starting to swap, where does it end? If you're swapping on a machine with 4 or 8 gigs of RAM, will an extra gig help fix whatever is broken, or just make the machine very slow until it gets around to telling the runaway program that there is no more memory? With no swap, that tends to happen much faster. The only reason I see for swap partitions is that the OS will need a place to dump debug info if it crashes, and the swap partition has traditionally been used for that.
Need More Info... (Score:5, Insightful)
I also agree that the old "2 x RAM" standard is outdated.
If you are a typical desktop user--browsing, email, games, etc--you will likely never swap. If you happen to edit photos a lot then you'll use a bit more. In these cases 4GB of swap for 4GB of RAM should be more than sufficient, if not overkill.
If you are a serious 'power' desktop user doing heavy graphics / video editing or similar heavy-duty tasks, you will likely have significantly more RAM. If you ever did swap, things would become so slow your productivity would be severely hampered.
Were you talking about a server I'd say the same thing. Your swap space on an active server (thinking database or application server) is really just there to keep you operational should some process go haywire, long enough for you to fix it. If you are regularly swapping on a server then you need to upgrade your RAM or adjust your software on it.
8GB of swap is useful (Score:3, Funny)
It lets you leak more memory for longer, this is a necessary feature for running modern software.
No reason to worry :-) (Score:3, Interesting)
The "rule of two" is due to Knuth's demonstration: "When the memory is 50% full, there is necessarily one free block at least as big as the biggest already allocated block", or something similar.
Today, I would say the swap partition is mainly useful to store the state of the computer when you put it in hibernation mode, that is, a little more than the size of your RAM if you want to be really cautious, just in case.
That being said, A GB of disk is so cheap compared to 1 GB of RAM - which is already cheap, now - that there is no problem in doubling that size for very special purposes (alternating 2 different "hot" graphic users sessions or operating systems without rebooting, for instance). Just my two cents.
RedHat EL 5 documentation says.... (Score:3, Informative)
Arguably RH is the authority on the subject... See their documentation here [redhat.com].
-m
How much Swap per RAM? (Score:3, Informative)
Oracle has very specific requirements/recommendations:
Our organization just bought 4 database servers with 32 GB of RAM each. I personally set up and installed the servers. I told the DBA:
The DBA agreed with this, and we went with 8 Gb of swap. Haven't had any problems with the server or DB applications for more than 6 months. It is the most heavily utilized server in the entire organization.
For a laptop, I would set the swap equal to or larger than the RAM, but only if you want to suspend to swap. Depending on the applications, I would say at least half the amount of RAM, up to double the amount of RAM, within reason. If you have 8 GB of RAM on a workstation, you probably do not need 16 GB of swap for everyday use.
My OS X installation... (Score:3, Informative)
has a dynamically grown swap file currently at 64 MB.
I have 8 GB of RAM and never page out even when I run dozens of memory hungry apps (photoshop, nikon capture etc).
The general rule is if you are swapping pages out when running typical apps you use daily, get more RAM.
8GB of RAM and zero swap (Score:3, Interesting)
... and it works like a champ.
32 bit (Score:3, Interesting)
Is there any point, with a 32bit OS, in having a swapfile bigger than 4 gigabytes?
I just (today) installed a new hard drive, 1 terabyte, so I moved the swapfile to that drive, but kept it the same size (2 gigabytes).
I have 3 gigabytes of RAM
It's not the size of the swap space (Score:3, Funny)
that matters. It's how you use it....
My 2 cents. (Score:4, Interesting)
Until a few months ago, I regularly answered this question for enterprise Linux customers, so I humbly submit that my anecdotal experience is marginally more informed than most here.
Memory capacity and bandwidth is improving orders of magnitude faster than disk throughput and latency, and this has been true for decades. If the workload stays the same, you should generally have a lower swap/RAM ratio on newer hardware than older hardware, because it's so much cheaper these days to add more RAM, and adding more swap can actually make your system slower when you finally start using it, because it takes much longer to page in 8 GB of data from disk than 4 GB.
The kernel virtual memory (VM) subsystem is a briar patch of carefully-tuned code which, whenever altered, almost always causes a regression for some obscure combination of hardware and software that someone somewhere cares an awful lot about. This is not due to inherent bugginess, but rather the fact that the VM is essentially in the business of predicting the future, which is mathematically impossible to always get right. As a result, developers tend to be very conservative about VM optimizations, so the VM tends not to adjust its assumptions about hardware quite as quickly as the hardware itself changes.
The upshot of all of this is that as time goes by, swap becomes more of a lifeline for worst-case memory shortages and less of an optimization to make the system behave as though it had more memory. This is not to say you should do without it completely, but the ratio tends to keep going down. For desktop use, I've been using a 1:1 ratio for a while, and honestly, that's probably too large for how I use it. Digging out of 2X swap takes *more* than twice as long as digging out of 1X swap, because you end up thrashing back through the stuff you've already paged in and out before you get to the rest. Think of the Tower of Hanoi problem as an extreme worst case. Beyond a certain point, you really want the kernel to refuse memory allocations and/or invoke the OOM-killer to kill off your misbehaving app and restore performance for the rest of the system.
Whatever you do, you shouldn't go completely swapless unless you really know what you're doing. Having just a few hundred megabytes of swap on a huge 4-socket server gives you a buffer against out-of-memory conditions that could bring down the whole system. In this extreme case, it's actually *good* that swap is slower than RAM, because it stalls userspace page dirtying while waiting for I/O, leaving the CPU free for the kernel to scan for pages that should be paged out, faster than userspace can dirty them.
If you're stuck on a small system you can't upgrade, having a high swap/RAM ratio might still make sense, but modern hardware tends to have much more and faster RAM and only slightly faster I/O.
If you've got a carefully tuned database server that's reserving much of its memory for hugepages, you should start your calculation with the amount of *swappable* RAM, which is the RAM not set aside for hugepages. So, if you've got 16 GB of RAM, and 12 GB reserved in hugepages, you only want swap proportional to 4 GB of RAM.
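That calculation in shell, using the numbers from the example (the 25% ratio is an assumption, in line with the server-oriented ratios discussed here):

```shell
# Swappable RAM = total RAM minus the hugepage reservation; then size
# swap as a fraction of that. Numbers from the example above; the 25%
# ratio is an assumed server-style choice, not a recommendation.
total_gb=16
hugepages_gb=12
ratio_pct=25
swappable_gb=$(( total_gb - hugepages_gb ))
swap_gb=$(( swappable_gb * ratio_pct / 100 ))
echo "swappable=${swappable_gb}G swap=${swap_gb}G"   # prints swappable=4G swap=1G
```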
The proportion itself is still a delicate matter. On a desktop system where you may open lots of applications, and then leave some of them idle for days while using other resource-intensive programs, it may make sense to go as high as 1x. On servers where latency is important, you probably don't want to go higher than 0.25. If you've got a batch compute system where you feed it a huge amount of work and expect it to be done when you come back several hours later, it can still make sense to have upwards of 2x as much swap as RAM. It might be sluggish to give you a login prompt, but that doesn't necessarily mean it's thrashing inefficiently if you have a fairly sequential access pattern.
If all of this confuses you, and your distribution recommends 2 GB by default at install time, odds are you'll do okay with that, at least for the near future. Once solid-state storage becomes mainstream, most of what I've said in this post will be completely obsolete.
Cool Performance Tip (Score:5, Funny)
Why does everyone put their swap on a slow hard drive? A Gentoo-running mate of mine in the pub showed me how to map the swap file into RAM: runs much faster there.
(Although suspend does not seem to work now :-(
Re:None (Score:4, Interesting)
Delaying is largely the point as I see it. If you're out of ram and it's eating into the swap, things are going to slow to a crawl and you'll know something is wrong, so you can look for, find, and kill whatever is running amok before it consumes all and triggers a panic/BSOD/etc.
Re:None (Score:5, Informative)
Well, I do occasionally need more than 2GB of RAM, without there being a memory leak. I've been running GIS programs, an IDE, a couple of RDBMSs, and then I fire up the old compression program...
Which brings me to my point. The question "how much swap do I need" is probably meaningless, even for a given amount of memory. There are people who find 2GB with no swap fine, and others, like me, who probably could get by with 2GB of RAM and maybe 512MB of swap, and others who might need more.
I think the 2x RAM rule of thumb has one virtue: excepting certain exotic kinds of systems, it's fairly safe that anybody who finds themselves needing more than that is probably feeling a world of pain that can only be fixed by getting more RAM. On the other hand, in most cases 2x RAM amounts to a trivial amount of disk. Probably most people could get by with 25% of RAM, but the value of thinking about whether that is true for you is very likely less than the cost of the disk space.
Common sense applies. If you have some kind of scientific computing device with a gazillion bytes of RAM, your swap requirements might not be related to your maximum RAM requirements at all. If you're running some kind of operating system that launches a bunch of rarely used garbage, you probably ought to think about your swap. I had awful problems with Vista until I figured out the page file Windows created had something like eight thousand fragments. I was actually better off getting rid of the page file
That's not the point of swap space. (Score:3, Informative)
The point of swap space isn't to kick in when you run out of physical memory. The point of swap space is to allow the kernel to make use the most efficient use of your RAM, by swapping out the contents of infrequently accessed memory pages, and putting that memory to better use, like caching frequently accessed disk blocks.
If you have no swap space at all, any memory pages that your processes are hardly using have to stick around in memory forever, even if you'd get better performance by swapping the contents…
Re:With a caveat... (Score:5, Interesting)
Oh dear FSM, please for the sake of everyone's sanity, NEVER LET WINDOWS GROW THE SWAPFILE! Besides the fact that it will fragment the pagefile, it will also completely lock up the computer for X amount of time... right when you need it most! ...it ALWAYS happens at a bad time.