
How Big Should My Swap Partition Be?

For the last 10 years, I have been asking people more knowledgeable than I, "How big should my swap be?" and the answer has always been "Just set it to twice your RAM and forget about it." In the old days, it wasn't much to think about — 128 megs of RAM means 256 megs of swap. Now that I have 4 gigs of RAM in my laptop, I find myself wondering, "Is 8 gigs of swap really necessary?" How much swap does the average desktop user really need? Does the whole "twice your RAM" rule still apply? If so, for how much longer will it likely apply? Or will it always apply? Or have I been consistently misinformed over the last 10 years?

This discussion has been archived. No new comments can be posted.

  • by meringuoid ( 568297 ) on Wednesday October 01, 2008 @06:49PM (#25226063)
    Yeah, your 256MB of space was trivial when you had a 30GB hard drive ... and 8GB of space is still trivial with a 750GB hard drive.

    I have an Eee 901. It has 1GiB of RAM and 20GB of disk space. A swap partition on the 'twice your RAM' rule would be far from trivial.

    I decided to be bold and installed Hardy with no swap partition. It seems to work just fine so far; Firefox greys out for a few seconds sometimes while loading pages, which might have to do with my reckless configuration, but on the whole it's pretty snappy.

    As for my desktop PC, it has 4GiB of RAM. I followed the traditional rule when I installed on that. I don't think that swap partition has ever even been used.

  • by whoever57 ( 658626 ) on Wednesday October 01, 2008 @06:51PM (#25226091) Journal

    With a 750GB [newegg.com] hard drive selling under $100, what has changed?

    The relative speeds of disks and memory have changed.

    I am thinking of reducing the amount of swap on my primary compute server -- the reason is simple: if the machine starts using appreciable amounts of swap, it becomes so slow it is unusable. So, really, by reducing the swap, what I do is get the OOM killer to take action and kill some processes sooner. I may have an unusual situation in that when my machine is out of memory, the cause is almost certainly a process that I want killed anyway.

  • Re:None (Score:4, Interesting)

    by compro01 ( 777531 ) on Wednesday October 01, 2008 @06:54PM (#25226119)

    Delaying is largely the point as I see it. If you're out of ram and it's eating into the swap, things are going to slow to a crawl and you'll know something is wrong, so you can look for, find, and kill whatever is running amok before it consumes all and triggers a panic/BSOD/etc.

  • by RiotingPacifist ( 1228016 ) on Wednesday October 01, 2008 @07:01PM (#25226229)

    Consider using a swap file for your setup. I'd recommend 256MB of swap; with 1.25GB of RAM, apart from when I left Wireshark running for too long, I've not seen it creep above ~100MB for long.

  • by Databass ( 254179 ) on Wednesday October 01, 2008 @07:02PM (#25226257)

    Maybe we should be asking "should we even bother with swap files?". I took a class where we calculated the steps it takes to get the final memory address in a paged memory system. It was something like 36 steps per address! We had PDEs, PTEs, convert this, change that. I didn't grok all the steps, but I do know there were a lot of them. And 36 steps per little itty bitty piece of memory is a lot, even for a very fast CPU, when you have to do it hundreds of millions of times. (There's a toy version of that walk after this comment.)

    Back in the day, it made sense to convince your programs you had an extra 100 megs of RAM, because a lot of programs needed that and didn't have it in memory. Today, memory is more abundant than things we would really need it for at the non-industrial level. I don't personally have any non-industrial applications that will fill up 4 gigs of RAM. Even Vista + WoW won't take up all that.

    So, and my professor suggested this, maybe the ideal swap size is ZERO. What if your operating system just operated under the concept of "If you can't fit it in 4 gigs, tough. Just wait until memory is free. I'm not even going to bother to split memory into pages because I'm always going to use RAM, not a hard drive page. Case closed." We could save so much overhead and complexity if we just admit that we never need to pretend hard drive is RAM. With 4 gigs or more of RAM, why even have a glacially slow hard drive in the mix?
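
    A toy version of the walk being counted above, in C; a sketch with made-up table sizes and a single flat page table, not how any real MMU lays things out:

        /* Two-level x86-style translation: 10 bits of page-directory index,
         * 10 bits of page-table index, 12 bits of offset within the page. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct { uint32_t frame; int present; } entry_t;

        static entry_t page_dir[1024];    /* PDEs */
        static entry_t page_table[1024];  /* one toy page table */

        static int translate(uint32_t vaddr, uint32_t *paddr)
        {
            uint32_t pde = vaddr >> 22;           /* step 1: split the address */
            uint32_t pte = (vaddr >> 12) & 0x3ff;
            uint32_t off = vaddr & 0xfff;

            if (!page_dir[pde].present)           /* step 2: walk the directory */
                return -1;                        /* page fault: maybe in swap  */
            if (!page_table[pte].present)         /* step 3: walk the table     */
                return -1;
            *paddr = (page_table[pte].frame << 12) | off;  /* step 4: combine   */
            return 0;
        }

        int main(void)
        {
            page_dir[0].present = 1;
            page_table[1].present = 1;
            page_table[1].frame = 42;             /* map page 1 -> frame 42 */

            uint32_t pa;
            if (translate(0x00001abc, &pa) == 0)
                printf("0x00001abc -> physical 0x%08x\n", pa);  /* 0x0002aabc */
            return 0;
        }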

  • RAM-based hard drive (Score:4, Interesting)

    by suck_burners_rice ( 1258684 ) on Wednesday October 01, 2008 @07:17PM (#25226403)
    I have 64 GB of RAM and my swap partition is the same size, at 64 GB; I thought 128 GB would be a little excessive. What I would really like is if someone would make an IDE interface to RAM modules and build a large amount of such RAM into the form factor of a hard disk drive. Then, you could populate this RAM-based "hard drive" with the necessary data during startup, and use it for swap and for all of your system's various "temp" folders. This would make swapping (and temp stuff) extremely fast to access, and more importantly, it would eliminate the need to encrypt your swap and/or temp partitions, as the data would simply disappear when power is removed. So when the agents (including Agent Smith) come to bust down your door, all you do is pull the plug and voila! Your secrets are safe. :-)
  • Re:Oh, nonsense (Score:5, Interesting)

    by Obfuscant ( 592200 ) on Wednesday October 01, 2008 @07:27PM (#25226501)
    Here's my understanding of the "rule".

    In early Unixes (SunOS, e.g.), the memory manager was dumb and preallocated swap space sufficient to swap your entire process out if it became necessary, and it really did want contiguous space. Running out of swap was common, even if the swap was never really used, and the "rule" to avoid that problem was 2xRAM. Further, if you had two swap partitions, or a partition and a file, your process stayed in whatever swap it started in and did not split across both. You could be out of swap space and still have a completely empty swap file.

    Memory managers have gotten smarter, mapping smarter, and now swap is only used when it really is necessary. Pages that are not dirty don't get swapped, they get reloaded from the disk they came from. Pages that are swapped are often used soon enough that they never leave the RAM buffers.

    Yesterday, I had a user come to me saying he was getting an "out of memory" error from Matlab. Matlab is notorious for not garbage collecting when it needs to. His Matlab process had 800Mb of resident memory, even though he said he had just 300Mb of data. The kicker? Somehow, over the last couple of years, the swap file I had created to extend the 512Mb swap partition had gotten lost. Dunno where it went, just not there. He had 512Mb of swap, and most of that wasn't being used. Never noticed it until yesterday. His 2Gb of RAM was sufficient for what he was doing.

    It's a case of people who learned early just doing what they know works, telling youngsters the "rule" so they do the same thing.

  • Re:With a caveat... (Score:5, Interesting)

    by Gewalt ( 1200451 ) on Wednesday October 01, 2008 @07:30PM (#25226539)

    Oh dear FSM, please, for the sake of everyone's sanity, NEVER LET WINDOWS GROW THE SWAPFILE! Besides the fact that it will fragment the pagefile, it will also completely lock up the computer for X amount of time... right when you need it most! ...it ALWAYS happens at a bad time.

  • by Anonymous Coward on Wednesday October 01, 2008 @07:31PM (#25226547)

    This is all very well as long as you don't run out of physical memory. When you do, Linux starts killing processes at random (well, sort of) until it's freed enough memory to carry on. This can be annoying.

    Disk space is cheap. Allocate a reasonable chunk of it to a swap file, and keep a bit of an eye on how much is used. If a significant amount of swap space is used, it's a good sign you haven't got enough RAM. Fortunately, RAM is also pretty cheap, so give it some more.

    How much is a "reasonable" chunk? Remember this is all easily changeable on the fly if you use swap files instead of swap partitions. So start out with, say, swap space to match physical memory. Watch it for a while (see the sketch after this comment). If it's never used, and you're short of disk space, remove it and create a smaller one. Keep watching it on and off, especially if your usage patterns change. You'll soon enough get a feel for how much you need.

    Me? I hate getting killed more than I hate running slowly, and I have spare disk space, so I have a big chunk of swap space, at least a couple of times physical memory. It's hardly ever used, but I know that if I kick off something heavy it's not going to be killed off. And if I need the disk space for a while I just get rid of the swapfile for the duration.
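
    A minimal way to keep that eye on swap, assuming Linux's /proc/meminfo and its SwapTotal:/SwapFree: lines (a sketch; easy to run from cron or a loop):

        /* Print current swap usage from /proc/meminfo (values are in kB). */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f) { perror("/proc/meminfo"); return 1; }

            char line[256];
            long total = -1, freekb = -1;
            while (fgets(line, sizeof line, f)) {
                /* non-matching lines leave the variables untouched */
                sscanf(line, "SwapTotal: %ld", &total);
                sscanf(line, "SwapFree: %ld", &freekb);
            }
            fclose(f);

            if (total < 0 || freekb < 0) { fprintf(stderr, "parse failed\n"); return 1; }
            printf("swap used: %ld kB of %ld kB\n", total - freekb, total);
            return 0;
        }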

  • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Wednesday October 01, 2008 @07:33PM (#25226565) Homepage

    .. or you could just install the RAM in the machine and remove the need for swap at all. tmpfs takes care of the rest.

  • by franois-do ( 547649 ) on Wednesday October 01, 2008 @07:41PM (#25226653) Homepage

    The "rule of two" is due to Knuth's demonstration : "When the memory is 50% full, there is necessarily one free block at least as big as the biggest already allocated block", or something similar.

    Today, I would say the swap partition is mainly useful to store the state of the computer when you put it in hibernation mode, that is a little more that the size of your RAM if you want to be really cautious, just in case.

    That being said, A GB of disk is so cheap compared to 1 GB of RAM - which is already cheap, now - that there is no problem in doubling that size for very special purposes (alternating 2 different "hot" graphic users sessions or operating systems without rebooting, for instance). Just my two cents.

  • by digitalhermit ( 113459 ) on Wednesday October 01, 2008 @07:46PM (#25226721) Homepage

    I'd agree that an 8G page space today is relatively smaller than a 512M partition was a while ago... However, some corrections:

    Technically swap space is not page space, though the distinction is being blurred quite a bit. Swap space was used to actually swap out entire processes, while page space was used to page out memory, er, pages. I'll use swap here :D

    The wikipedia link is a little incorrect. In many cases a swap partition can be more efficient than a swap file at least in Linux. For one, there's an extra overhead involved in dealing with files because of the OS filesystem layer. Perhaps there's a raw file option with a swapfile, but I'm not aware of it. In most cases (i.e., desktop and moderate server use) there's probably not a lot of difference and swapfiles are ostensibly easier to use. With LVMs it doesn't really matter.

    In some OSes the swap space is not necessarily additive. In some versions of the Linux kernel, for example, only swap space above that of physical memory was additive. This was a consequence of the virtual memory subsystem.

    In other OSes it worked similarly for performance reasons. E.g., once a page is requested from the OS and written, the OS would write both to physical memory and to the page space. If the page needed to be swapped out it was merely an update to the page table to indicate whether the page was in memory or not rather than doing a full write on every page-out. This could drastically improve performance under some circumstances but diminish it in others (thus it's usually a tunable parameter).

    Having some page space is generally good too. Modern OSes will still swap out dormant pages if doing so frees up memory for running processes, even when memory is not near capacity. This is also tunable.

    There are also reasons to run without swap (e.g., no hard disk, or a slow one). You can disable swap entirely by tuning the overcommit options of the kernel (a sketch of that follows at the end of this comment).

    Having too much swap can be bad too. In some OSes the page table size (which cannot be swapped) was dependent on the size of total memory (physical + swap). An overenthusiastic administrator might decide to really bump up the size of his page space if he saw that physical memory was close to capacity. If he happened to be on one of the boundaries (4G, 16G in some cases), then adding more swap space would actually DECREASE his available physical memory, because the size of the page table would grow.

    In other words, there's no valid "rule of thumb" :D
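
    The overcommit options mentioned above are just sysctls; a sketch that switches to strict accounting by writing the procfs file directly (equivalent to sysctl vm.overcommit_memory=2; needs root; 0 = heuristic, 1 = always overcommit, 2 = never overcommit):

        /* Set vm.overcommit_memory = 2 (strict accounting, no overcommit). */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
            if (!f) { perror("overcommit_memory (root?)"); return 1; }
            fputs("2\n", f);
            fclose(f);
            return 0;
        }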

  • by Anonymous Coward on Wednesday October 01, 2008 @07:47PM (#25226735)

    I don't think it was the/a decimal mistake. A lot of people do the math and come up with, for example, 0.04. Four percent. But since they want to express "four percent" they think they have to add the "%" to the expression, which now becomes 0.04%, or four one-hundredths of a percent.

    When you see the number of people posting everywhere who can't discern the proper uses of "then" and "than", it's not hard to see how something as esoteric (to them) as 0.04 vs. 0.04% would throw an error. Because English is so contextual and streamlined, incorrect spelling often has no effect on the concept being explained.

  • by Anonymous Coward on Wednesday October 01, 2008 @07:56PM (#25226839)

    It's those days when I'm playing Warcraft through wine, listening to streaming radio through Amarok, have 20 windows open behind it, idling a LAMP server for my development projects, running a vent client, some form of news aggregator, pidgin & an e-mail client hooked up to several POP3/IMAP accounts that I am happy I erred on the side of a whole ton of swap space.

    If you are relying on a ton of swap space to make that happen, you are hating life. If your system is constantly swapping all that in and out, you are running slower than most humans can tolerate.

  • by FooBarWidget ( 556006 ) on Wednesday October 01, 2008 @08:23PM (#25227119)

    Because I don't have a Vista reinstall CD or even a restoration partition. Dell didn't provide one. If VMWare fscks up my existing Vista install then I'd have a problem.

  • by jrothwell97 ( 968062 ) <jonathan&notroswell,com> on Wednesday October 01, 2008 @08:37PM (#25227291) Homepage Journal
    Perhaps Asus neglected to set swappiness to 0. Anyway, I have a 701 which I use as a general workhorse with no swap and a ramdisk /tmp, and it zooms with Ubuntu on it. On HDD based machines, a swapfile is a better, more flexible solution than a dedicated partition IMO. If the machine's only ever going to be used for office work (word processing, etc) I doubt a swap area will even be needed.
  • by multipartmixed ( 163409 ) on Wednesday October 01, 2008 @08:38PM (#25227301) Homepage

    There are other reasons for big swap on Solaris, though. (I don't know about other OSes, don't use 'em much.)

    One, which recently bit me in the foot, has to do with forking. The system basically pre-allocates swap against process space size (vm size, not rss), even though the pages may not actually get physically allocated. This is because the kernel wants to make sure that when you want to write to the memory, it's going to be able to allocate pages in the VM for you to write to -- remember, we're forking so we're doing copy-on-write.

    I bumped into this a few months ago retrofitting a bind-listen-accept-fork daemon to make use of a 200+ MB data structure which was populated in the parent and read in the children. Even though I had plenty of swap "free", the OS had enough of it bookmarked for use in case I wrote to it that it refused to fork all the children I needed.

    For the curious, the solution to my particular dilemma was to write my own [trivial] allocator backed by mmap; IIRC I used MAP_NORESERVE | MAP_SHARED to convince Solaris not to set aside excessive amounts of swap [excessive because I had knowledge the OS didn't] and to allow my program to fork enough times to handle the requisite load.
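
    Roughly what that looks like, sketched for Linux rather than Solaris and with made-up sizes (not the poster's actual allocator): MAP_SHARED lets forked children see the parent's pages instead of copy-on-write duplicates, and MAP_NORESERVE tells the kernel not to set swap aside for the mapping up front.

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define REGION (200UL * 1024 * 1024)   /* ~200MB, as in the story above */

        int main(void)
        {
            unsigned char *p = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                                    MAP_SHARED | MAP_ANONYMOUS | MAP_NORESERVE,
                                    -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            memset(p, 0x42, 4096);             /* parent populates the data */

            for (int i = 0; i < 8; i++) {      /* children only read it */
                if (fork() == 0) {
                    printf("child %d sees 0x%02x\n", i, p[0]);
                    _exit(0);
                }
            }
            while (wait(NULL) > 0)             /* reap the children */
                ;
            return 0;
        }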

  • eee pc here (Score:2, Interesting)

    by eric31415927 ( 861917 ) on Wednesday October 01, 2008 @09:02PM (#25227577)
    I've got 2 gigs of ram and 4 gigs of SSD on my 4G. If I followed your maths, I'd have no SSD space for my OS.

    Actually - I've got a 4 gig swap partition set up on an SD card. However, I never run any programs needing more RAM than what I've got. I suppose the purpose of my swap partition is to keep me from plugging other cards in my only SD slot.

    Well this ought to change. My swap partition is going. Thanks for making me think a little.

  • by Mr Z ( 6791 ) on Wednesday October 01, 2008 @09:03PM (#25227583) Homepage Journal

    Having a swap partition doesn't necessarily mean having a lot of swap traffic. Often what gets placed in swap are portions of the heap that got allocated, but won't be referred to for quite some time. It gives room for other types of pages (as I mention here [slashdot.org]).

    That said, if what you're doing doesn't cause a lot of thrashing when there is no swap, don't add swap on your flash SSD.

  • by djcapelis ( 587616 ) on Wednesday October 01, 2008 @09:54PM (#25228023) Homepage

    >(basically put a process is killed when it
    >requests a resource that is not available and that
    >is not necessarily the process that is hogging the
    >resource to begin with)

    That's not true at all, please read oom_killer.c in the kernel source code before continuing to make statements about a piece of code you seem to never have read. (Or if you read it before, you haven't kept up to date...)

    The oom_killer scores processes using a metric that takes into account usage, and generally it will kill the task using the most resources. It does not simply kill whichever task made the most recent request, unless you explicitly configure out the standard oom_killer.
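
    You can peek at the scores from userspace too; a minimal sketch, assuming Linux's /proc/<pid>/oom_score interface (higher means more likely to be killed):

        /* Usage: ./oomscore <pid> */
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

            char path[64];
            snprintf(path, sizeof path, "/proc/%s/oom_score", argv[1]);

            FILE *f = fopen(path, "r");
            if (!f) { perror(path); return 1; }

            long score;
            if (fscanf(f, "%ld", &score) == 1)
                printf("pid %s badness: %ld\n", argv[1], score);
            fclose(f);
            return 0;
        }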

  • by timster ( 32400 ) on Wednesday October 01, 2008 @09:56PM (#25228039)

    Perhaps you'd like to tell us whether a GB is base 2 or base 10 then.

    You obviously aren't worth your SALT!

    Remember kids, it DEPENDS!

    Bandwidth? Base 10 -- always has been.
    ROM? Base 2 -- always has been -- and traditionally in bits, not bytes.
    RAM? Base 2, and bytes.
    Hard disk? Base 10 in the manufacturer's specs, base 2 in the OS display. Always has been that way, always will be.
    Floppies? Base 2 until you get to MB, where 1MB = 1000 base 2 KB (seriously).
    Clock speeds? Base 10, always has been, always will be.
    Flash? Who knows.

    Isn't it great that we have such an easy, convenient system that is focused around the needs of us humans, and not the needs of the computers (who don't care in the slightest).

  • by Skapare ( 16644 ) on Wednesday October 01, 2008 @10:20PM (#25228227) Homepage

    ... and it works like a champ.

  • by Easy2RememberNick ( 179395 ) on Wednesday October 01, 2008 @10:45PM (#25228399)

    Kind of like running a LiveCD all the time.

  • Here's how big (Score:5, Interesting)

    by TheLink ( 130905 ) on Wednesday October 01, 2008 @10:55PM (#25228491) Journal
    It does hurt to allocate a couple of gigs of swap.

    I use swap only to tell me that I'm low on RAM. Basically, once the machine starts using swap and getting slightly slow, it means I'm low; then I can try to shut things down (without them otherwise behaving strangely, or dying abruptly).

    Here's how I suggest you figure out _roughly_ how much swap you need.

    1) Figure out the amount of Virtual Memory your programs and services _allocate_ without really _using_ - call this F. There are some programs that allocate hundreds of MB of memory but never use it. But note that there are some programs that allocate lots of memory and may use it :). If you have lots of RAM and are too lazy to guess, set F=0.

    2) Figure out your drive throughput for swap access (swap in + swap out) - this is often related to random access throughput - and for a typical hard drive it could be on the order of 10MB/sec - call this M. Note that many flash drives have pathetic random write speeds of 4MB/sec (or even less!).

    3) Figure out the time you are willing to wait for stuff to swap in and out (e.g. the time to get an ssh prompt) - call this T.

    Swap = F + T * M.

    So, for example, if you have programs that allocate a total of 100MB and never use it, your drive's swap throughput is 10MB/sec, and the amount of time you're willing to wait is 15 seconds:

    Swap = 100MB + 15 sec * 10MB/sec = 250MB. (This rule is coded up at the end of this comment.)

    As you can see, allocating gigabytes can hurt - since it'll take ages to swap in and out processes that are using gigabytes of swap. You'll run out of time before you run out of swap, and when that happens somebody will do a hard shutdown of the machine - and that means ALL processes will be abnormally terminated, rather than just one.

    Yes, there are cases where the offending program might not keep accessing all of that swap, but when a program misbehaves like that, you'd rather find out sooner than have to shut down the whole computer (because it takes ages to respond).

    Running programs from swap is best reserved for those who wish to experience the 1950s drum memory days. If you want to do retrocomputing keep in mind that memory speeds are now much faster than disk speeds, whereas in the 1950s memory speed = drum speed, and most modern programs assume modern memory speeds.
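
    The rule above, as code; the constants are the worked example from this comment, not universal values:

        #include <stdio.h>

        /* swap = F + T * M
         * F: allocated-but-unused virtual memory (MB)
         * T: acceptable wait for things to swap back in (seconds)
         * M: swap throughput of the drive (MB/sec)                */
        static double swap_mb(double f, double t, double m)
        {
            return f + t * m;
        }

        int main(void)
        {
            printf("suggested swap: %.0f MB\n", swap_mb(100, 15, 10)); /* 250 MB */
            return 0;
        }
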
  • 32 bit (Score:3, Interesting)

    by rossdee ( 243626 ) on Wednesday October 01, 2008 @10:59PM (#25228529)

    Is there any point, with a 32bit OS, in having a swapfile bigger than 4 gigabytes?

    I just (today) installed a new hard drive, 1 terabyte, so I moved the swapfile to that drive, but kept it the same size (2 gigabytes).
    I have 3 gigabytes of RAM.

  • by ozphx ( 1061292 ) on Wednesday October 01, 2008 @11:03PM (#25228569) Homepage

    But then people might find out it was Realtek or Nvidia fucking up and won't be able to blame Micro-dollars-oft.

    You can't spell goatse without Gates and a big O you know!

  • My 2 cents. (Score:4, Interesting)

    by Chris Snook ( 872473 ) on Thursday October 02, 2008 @12:05AM (#25229029)

    Until a few months ago, I regularly answered this question for enterprise Linux customers, so I humbly submit that my anecdotal experience is marginally more informed than most here.

    Memory capacity and bandwidth are improving orders of magnitude faster than disk throughput and latency, and this has been true for decades. If the workload stays the same, you should generally have a lower swap/RAM ratio on newer hardware than on older hardware, because it's so much cheaper these days to add more RAM, and adding more swap can actually make your system slower when you finally start using it, because it takes much longer to page in 8 GB of data from disk than 4 GB.

    The kernel virtual memory (VM) subsystem is a briar patch of carefully-tuned code which, whenever altered, almost always causes a regression for some obscure combination of hardware and software that someone somewhere cares an awful lot about. This is not due to inherent bugginess, but rather the fact that the VM is essentially in the business of predicting the future, which is mathematically impossible to always get right. As a result, developers tend to be very conservative about VM optimizations, so the VM tends not to adjust its assumptions about hardware quite as quickly as the hardware itself changes.

    The upshot of all of this is that as time goes by, swap becomes more of a lifeline for worst-case memory shortages and less of an optimization to make the system behave as though it had more memory. This is not to say you should do without it completely, but the ratio tends to keep going down. For desktop use, I've been using a 1:1 ratio for a while, and honestly, that's probably too large for how I use it. Digging out of 2X swap takes *more* than twice as long as digging out of 1X swap, because you end up thrashing back through the stuff you've already paged in and out before you get to the rest. Think of the Tower of Hanoi problem as an extreme worst case. Beyond a certain point, you really want the kernel to refuse memory allocations and/or invoke the OOM-killer to kill off your misbehaving app and restore performance for the rest of the system.

    Whatever you do, you shouldn't go completely swapless unless you really know what you're doing. Having just a few hundred megabytes of swap on a huge 4-socket server gives you a buffer against out-of-memory conditions that could bring down the whole system. In this extreme case, it's actually *good* that swap is slower than RAM, because it stalls userspace page dirtying while waiting for I/O, leaving the CPU free for the kernel to scan for pages that should be paged out, faster than userspace can dirty them.

    If you're stuck on a small system you can't upgrade, having a high swap/RAM ratio might still make sense, but modern hardware tends to have much more and faster RAM and only slightly faster I/O.

    If you've got a carefully tuned database server that's reserving much of its memory for hugepages, you should start your calculation with the amount of *swappable* RAM, which is the RAM not set aside for hugepages. So, if you've got 16 GB of RAM, and 12 GB reserved in hugepages, you only want swap proportional to 4 GB of RAM.

    The proportion itself is still a delicate matter. On a desktop system where you may open lots of applications, and then leave some of them idle for days while using other resource-intensive programs, it may make sense to go as high as 1x. On servers where latency is important, you probably don't want to go higher than 0.25. If you've got a batch compute system where you feed it a huge amount of work and expect it to be done when you come back several hours later, it can still make sense to have upwards of 2x as much swap as RAM. It might be sluggish to give you a login prompt, but that doesn't necessarily mean it's thrashing inefficiently if you have a fairly sequential access pattern.

    If all of this confuses you, and your distribution recommends 2 GB by default at install time, odds are you'll do okay with that, at least for the near future. Once solid-state storage becomes mainstream, most of what I've said in this post will be completely obsolete.

  • Re:Here's how big (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Thursday October 02, 2008 @06:20AM (#25230889) Journal

    I've seen NT based OSes aggressively swap out things that aren't in use just so there is more memory available for disk cache, and it makes sense cause there is a lot of crap the kernel and other apps load up that is very RARELY needed, if ever.

    This is a model that NT got indirectly from Mach, and FreeBSD got directly. The idea is that each step in the memory hierarchy is just a cache for the lower level. Your disk is the lowest level, main memory is the next one, then each of the layers in the CPU cache. If a program is waiting for data from the disk then it doesn't matter whether that data was meant to be put in memory with mmap, malloc, or read - it's still causing the program to block waiting for I/O. If you really care a lot about performance, then you want to make sure that the data you want next is in memory, and favouring malloc'd data turns out not to be such a great heuristic for this. Lots of applications leak memory, or allocate large data structures where they only rarely access more than a small part of them. The well-behaved ones (e.g. Dovecot IMAP) create these in mmap'd files, so they don't take up any swap, but others don't, and it makes sense to spill those pages out to disk and use the memory to cache data that really will be used.

  • by DavidRawling ( 864446 ) on Thursday October 02, 2008 @07:22AM (#25231111)
    As usual the information is 100% correct but not necessarily helpful. The disk is marketed as 160GB. GB is an SI unit denoting 10^9 bytes, so you should have at least 160,000,000,000 bytes (which you do). Your OS, though, reports that 160GB disk as 149GB; what it has actually measured is 149 GiB, which is the same quantity as 160 GB.
  • by swilver ( 617741 ) on Thursday October 02, 2008 @08:29AM (#25231487)
    Let's assume for a second that my harddisk is just as fast as main memory. Does it make sense to cache it? Do you think memory managers make a distinction here? I know they don't.

    So, what about the situation where my harddisk is not as fast as main memory, but I'm uploading a 4 GB file overnight at a rate of 1 MBps? Does it make sense to cache this 4 GB file? There's no hard disk stress on the system, other than some prefetching, and there's no benefit at all from caching data that is being accessed only once. Do you think it likely this file will suddenly need to be accessed at top performance, thus benefiting from caching? What if the file is always accessed sequentially and is slightly larger than the total amount of RAM in your system? Does it make sense to try and cache the whole thing, knowing that it will never fit?

    I'm afraid that memory managers aren't that smart; in fact, they're downright stupid. They do not KNOW what makes sense to cache; they just use a highly tweaked LRU-type algorithm and don't pay any attention to usage patterns or whether the underlying media would benefit from caching for a given task. So they will do stupid things, like trying to cache a 4 GB data file which was uploaded the other night, leaving you with a crippled system in the morning because the machine decided to slowly swap everything out during the night for this purpose.
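
    For what it's worth, an application can volunteer exactly that hint. A sketch using the standard posix_fadvise() call, with a made-up filename and chunk size: mark the big one-pass file as sequential, and release its pages as they're finished with.

        #define _POSIX_C_SOURCE 200112L
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("bigfile.dat", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            /* we'll read front-to-back, so say so */
            posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

            char buf[1 << 16];
            off_t done = 0;
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0) {
                /* ... send buf somewhere at 1MBps ... */
                done += n;
                /* pages we've finished with needn't crowd out the rest */
                posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
            }
            close(fd);
            return 0;
        }
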
  • by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Thursday October 02, 2008 @09:39AM (#25232131) Homepage Journal

    In that case, why can't I just let Windows XP or Vista manage the virtual memory size by itself? I don't see why I should need to establish a fixed size when Windows can manage it dynamically.

    I have yet to see a version of Microsoft Windows that does not end up with a hopelessly fragmented swap file over time. And if you let Windows dynamically use space for swap, you're just asking for an even more hopelessly fragmented drive as it starts grabbing space anywhere it can find some to expand the swap file.

    On my own Windows installs I have brought some old-school Unix (FreeBSD in particular) methodology to partitioning, and I make a partition just for swap (still 2x my total memory size). Of course, in Windows you still have to partition it, but I just don't write anything to it myself, and tell Windows to only swap to that partition. Then my main partition doesn't end up as terribly fragmented.

  • by Pogie ( 107471 ) on Thursday October 02, 2008 @05:50PM (#25239215)

    Forgive me for not reading all the posts, but I did get through those moderated high, and I'd like to clear up the reason (if not the true historical origin) of the 2x RAM rule for swap.

    The actual reason to create a swap area (through file, dedicated partitions, disks, etc) that is sized at 2x physical RAM is so that you can both:

    a) reserve swap pages equal to total ram.
    b) swap out idle process pages equal to total ram.

    Many applications reserve virtual pages in the swap area to ensure that in the event the memory manager pages the application out to swap there will be enough swap available to hold the process. Oracle is a perfect example of an app that, by default, will do a disk page reservation for portions of the SGA and PGA. So if you've got a server with 8GB of memory + 8GB swap on which you've allocated 6GB to the Oracle SGA and PGA, you'll see around 5-6GB of swap utilized once Oracle's up and running, even though no portion of the process page space is actively residing on disk.

    Now, if that's the case, then what happens when your memory manager needs to actually swap something out of physical memory to virtual (disk) pages? Well, the system has a lot less virtual page space to work with, and it's possible to encounter allocation issues if new processes are started. To avoid that problem, you add another 8GB of swap (working off the 8GB physical ram example). Now, even if you are running 8GB of programs which require a one-to-one ratio between physical pages and virtual page reservations, there is still enough swap available for the memory manager to swap ALL of the processes in physical memory out to disk and allocate physical memory to new processes.

    Now, is it ever going to happen that you actually end up needing to both reserve swap == physical ram AND swap out your system's RAM worth of processes? Not if your admins have the first clue what they are doing, no. :) But by using the 2X rule, you pretty much idiot-proof your memory manager for any application profile and any OS.

    Of course, if you are knowledgeable about your operating system and the actual usage of your server, you will almost never take this generalized approach. Modern Unix OSes and applications normally have alternative configurations that allow you to avoid swap reservation (v_pinshm, for example, in AIX 5.3 and later), as well as ways to tune the behavior of the virtual memory manager to better handle situations where the system has a low free page count. It's important to remember, though, that the consequence of allocating too little swap is the memory manager killing processes to recover free pages, and in a production environment, the process it kills will inevitably be the one resulting in you receiving a 4am wakeup call.

    Personally, I will usually run with swap space == total physical memory, and upgrade the server the minute I start seeing page outs to disk. That's probably not an option for the desktop unix crowd, but it's a good rule for any unix servers running an application that requires even the lowest level of guaranteed performance. Swapping is bad. End of story.

    Thanks for your time, and as always, I reserve the right to be wrong.

"Everyone's head is a cheap movie show." -- Jeff G. Bone

Working...