
Why Use Virtual Memory In Modern Systems? 983

Cyberhwk writes "I have a system with Windows Vista Ultimate (64-bit) installed on it, and it has 4GB of RAM. However when I've been watching system performance, my system seems to divide the work between the physical RAM and the virtual memory, so I have 2GB of data in the virtual memory and another 2GB in the physical memory. Is there a reason why my system should even be using the virtual memory anymore? I would think the computer would run better if it based everything off of RAM instead of virtual memory. Any thoughts on this matter or could you explain why the system is acting this way?"
  • by alain94040 ( 785132 ) * on Thursday December 04, 2008 @06:00PM (#25995059) Homepage

    You must be confused about virtual vs. physical memory. In modern processors there is no penalty for using virtual memory: all translation from virtual to physical address space is done inside the processor, and you won't notice the difference.

    So all the physical memory installed in your PC is used by the processor as one big pool of resources. Processes can think whatever they want and address huge memory spaces; that's all in virtual land. Virtual memory only starts impacting performance when pages are being swapped in and out, because all your processes need more resident memory than you actually have.

    Swapping means accessing the disk and freezing the requesting process until its page of memory has arrived from the disk, which takes millions of processor cycles (a lifetime from the processor's point of view). It's not so bad if you swap once, as the processor can work on other processes while waiting for the data to arrive, but if all your programs keep pushing each other out of physical memory, you get thrashing and consider yourself happy if the mouse pointer is still responsive!
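
    To see that split between virtual pages and physical frames for yourself, here is a minimal sketch. It assumes Linux/glibc, since it uses mincore(); the 64-page size and the fill value are arbitrary choices. It maps 64 pages of anonymous memory, touches only half of them, then asks the kernel which virtual pages currently have a physical frame behind them.

        #define _DEFAULT_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            long page = sysconf(_SC_PAGESIZE);
            size_t npages = 64, len = npages * (size_t)page;

            unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) return 1;

            memset(buf, 0xAB, len / 2);      /* touch only the first 32 pages */

            unsigned char vec[64];
            if (mincore(buf, len, vec) != 0) return 1;

            int resident = 0;
            for (size_t i = 0; i < npages; i++)
                resident += vec[i] & 1;

            printf("%d of %zu virtual pages are resident in physical RAM\n",
                   resident, npages);
            munmap(buf, len);
            return 0;
        }

    The untouched half of the mapping exists only as virtual address space until something actually writes to it.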

    So you may want to change the title of your post to: "why use physical memory in modern systems?". I would point you to an article I wrote on that topic in 1990, but somehow I can't find a link to it on the web :-)

    fairsoftware.net - software developers share revenue from the apps they build

    • by Anonymous Coward on Thursday December 04, 2008 @06:09PM (#25995173)

      You must be confused about virtual vs. physical memory.

      Indeed. When I read this story my knee jerk reaction was "please be gentle." And thankfully the first +5 post on this story is informative and helpful and relatively kind.

      I fear the "turn off your computer, put it in a box and mail it back to the manufacturer" hardcore hardware experts that are going to show up in 3 ... 2 ... 1 ...

      • by alain94040 ( 785132 ) * on Thursday December 04, 2008 @06:24PM (#25995447) Homepage

        Gentle answers are what 6 years in customer support teaches you.

        That, or hating everyone ;-)

        • by houstonbofh ( 602064 ) on Thursday December 04, 2008 @06:42PM (#25995655)

          Gentle answers are what 6 years in customer support teaches you.

          That, or hating everyone ;-)

          That kind of attitude really pisses me off! ;-)

          • by weighn ( 578357 ) <weighn.gmail@com> on Thursday December 04, 2008 @08:06PM (#25996763) Homepage

            That kind of attitude really pisses me off! ;-)

            yes, I detest being gently hated by patronising tech support heroes

      • by Digital Vomit ( 891734 ) on Thursday December 04, 2008 @09:05PM (#25997371) Homepage Journal

        When I read this story my knee jerk reaction was "please be gentle." And thankfully the first +5 post on this story is informative and helpful and relatively kind.

        It's a Christmas miracle!

    • by brxndxn ( 461473 ) on Thursday December 04, 2008 @06:10PM (#25995209)

      So do I really only need 640k of physical memory if I have a modern system?

      • by qw0ntum ( 831414 ) on Thursday December 04, 2008 @09:15PM (#25997469) Journal
        I know it was a joke but actually, in an oversimplified sense, yes. A main point of virtual memory in its true sense is to abstract the limitations of your amount of physical memory away from user programs, and instead present them with an effectively limitless virtual address space to work with. When the program says, "read from memory address 0x(some huge number)", the OS/memory management unit will translate that address request from a virtual page address to a physical frame via the page table. If there is no frame in memory that contains the data pointed to by the requested address, that's when you have a page fault. Then the operating system goes to disk and fetches the data you requested.

        Your performance would be abysmally slow, and it probably wouldn't work at all with modern operating systems (just a theoretical point here!), but assuming a good implementation of virtual memory you should be able to run everything just fine. Of course, if you don't have enough disk space for your address space, you'll run into problems. :)
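
        A toy, software-only model of that lookup (purely illustrative: the table contents, sizes, and addresses below are made up, and a real MMU does this in hardware with multi-level tables and a TLB):

            #include <stdio.h>

            #define NPAGES    16        /* tiny virtual address space     */
            #define PAGE_SIZE 4096
            #define NO_FRAME  -1        /* "this page is out on disk"     */

            static int page_table[NPAGES];

            /* Translate a virtual address to a physical one, or "fault". */
            static long translate(unsigned long vaddr) {
                unsigned long vpage  = vaddr / PAGE_SIZE;
                unsigned long offset = vaddr % PAGE_SIZE;

                if (vpage >= NPAGES || page_table[vpage] == NO_FRAME) {
                    printf("page fault at 0x%lx: page %lu must be fetched from disk\n",
                           vaddr, vpage);
                    return -1;          /* a real OS reads the page in and retries */
                }
                return (long)page_table[vpage] * PAGE_SIZE + (long)offset;
            }

            int main(void) {
                for (int i = 0; i < NPAGES; i++)
                    page_table[i] = NO_FRAME;          /* everything "on disk"  */
                page_table[0] = 7;                     /* a few resident pages  */
                page_table[1] = 2;
                page_table[2] = 9;

                printf("0x2100 -> phys 0x%lx\n",
                       (unsigned long)translate(0x2100));  /* hits frame 9       */
                translate(0x9000);                         /* triggers the fault */
                return 0;
            }
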
    • by TypoNAM ( 695420 ) on Thursday December 04, 2008 @06:13PM (#25995255)

      Actually, no, the author was correct in Microsoft Windows terms. This is the exact text used in System Properties -> Advanced tab under Virtual memory:
      "A paging file is an area on the hard disk that Windows uses as if it were RAM."

      You might think, well, they said paging file, not virtual memory. But click the Change button and you'll see a dialog named "Virtual Memory", in which you can specify multiple paging files on multiple drives if you want to. It defaults to a single paging file on the C:\ or boot drive. So blame Microsoft for using virtual memory and paging file interchangeably. I guess by "virtual memory" they mean the collective use of the paging files (for those situations where there's more than one paging file, just like on Linux you can have more than one swap file in use).

      Anyway, I too have seen Windows 2000 and XP just love to make heavy use of the paging file even though there is clearly enough physical memory available. Some friends of mine have even disabled the paging file completely; at first you get a warning about it, but other than that they have reported better system performance and no drawbacks since then. This is on systems with at least 3GB of RAM.

      • by TypoNAM ( 695420 ) on Thursday December 04, 2008 @06:34PM (#25995551)

        Sorry, got to correct the path to where exactly I got that quote from:
        System Properties -> Advanced -> Performance area, click Settings -> Advanced tab (on Windows XP; as for 2000, it's the default tab).

      • Re: (Score:3, Insightful)

        by Dadoo ( 899435 )

        So blame Microsoft for the confusing use of virtual memory and paging file

        I'm no Microsoft fanboy, but I don't think you can blame them for this, especially when "virtual memory" originally did mean what the OP thinks it does. I'd like to know when the definition changed.

        • by hardburn ( 141468 ) <hardburn.wumpus-cave@net> on Thursday December 04, 2008 @07:00PM (#25995893)

          It never did change. "Virtual Memory" always meant a trick the kernel and CPU do to make programs think they are accessing a different memory address than they actually are. This trick is necessary in all multitasking operating systems.

          Once you've made the jump to mapping real memory addresses to fake ones, it's easy to map the fake addresses to a swap file on the hard drive instead of actual RAM. The confusion of the terms started when naive programmers at the UI level called that swap file "virtual memory".

          • by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Thursday December 04, 2008 @08:15PM (#25996871) Homepage

            AmigaOS multitasked, and didn't use memory mapping like that...
            It had a flat memory model, and ran on processors which lacked the necessary memory management hardware.

            If you did have an upgraded cpu with MMU, then there were third party virtual memory addons.

            • by Stormie ( 708 ) on Friday December 05, 2008 @12:02AM (#25998835) Homepage

              AmigaOS multitasked, and didn't use memory mapping like that... It had a flat memory model, and ran on processors which lacked the necessary memory management hardware.

              And paid the price, in the form of one program being able to trample another's memory, or crash the whole system (hence the famous Guru Meditation).

              The Amiga was actually the first thing I thought of when I read this Ask Slashdot - you may recall the immense prejudice against virtual memory from a lot of Amiga users, who thought that virtual memory simply meant swapping to disk. They didn't realise that releasing a range of Amigas which all had MMUs (i.e. non-crippled 030+ CPUs) and a version of the OS with virtual memory would cure a number of ills completely unrelated to swapping, such as memory fragmentation and the aforementioned ability of one rogue program to bring down the system.

      • Re: (Score:3, Funny)

        by jon3k ( 691256 )
        If you get your technical information from Microsoft dialog windows, I blame YOU for being wrong. You should know better.
      • by Reziac ( 43301 ) * on Thursday December 04, 2008 @06:57PM (#25995839) Homepage Journal

        I've been running without a pagefile, in all versions of Windows, for about 10 years now -- on any machine with more than 512mb.

        The only drawback is that a few stupid Photoshop plugins whine and refuse to run, because if they don't see a pagefile, they believe there is "not enough memory" -- a holdover from the era when RAM was expensive and the pagefile was a busy place. Sometimes I think about making a very small pagefile just for them, but have never actually got around to doing it.

        • by ChrisA90278 ( 905188 ) on Thursday December 04, 2008 @08:12PM (#25996831)

          "I've been running without a pagefile, in all versions of Windows,..."

          Not really. On a modern OS, when executable code is "loaded" from disk to RAM, it isn't really loaded. What the OS does is map the file that holds the code into virtual memory. So in effect, when you run a program called "foobar.exe", you have made that file a swap file. It gets better: the OS never has to copy those pages out of RAM, because the data is already in foobar.exe. When the OS needs space it can re-use the pages without writing them to a swap file, because it knows where to get the data.

          So you are, in effect, using as many swap files as programs you are running.
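
          A minimal sketch of that idea, assuming Linux (the choice of /bin/ls as the mapped file and the explicit MADV_DONTNEED call are just illustrative): map a file read-only, invite the kernel to drop the pages, then read them again straight from the file, with no pagefile involved.

              #define _DEFAULT_SOURCE
              #include <fcntl.h>
              #include <stdio.h>
              #include <sys/mman.h>
              #include <sys/stat.h>
              #include <unistd.h>

              int main(int argc, char **argv) {
                  const char *path = argc > 1 ? argv[1] : "/bin/ls"; /* any file */
                  int fd = open(path, O_RDONLY);
                  if (fd < 0) return 1;

                  struct stat st;
                  if (fstat(fd, &st) != 0 || st.st_size == 0) return 1;

                  /* Read-only, file-backed mapping: these pages stay "clean". */
                  unsigned char *img = mmap(NULL, st.st_size, PROT_READ,
                                            MAP_PRIVATE, fd, 0);
                  if (img == MAP_FAILED) return 1;

                  printf("first byte of %s: 0x%02x\n", path, img[0]);

                  /* The kernel may drop these pages right now; the next access
                   * just faults them back in from the file -- no swap write.  */
                  madvise(img, st.st_size, MADV_DONTNEED);
                  printf("still readable after MADV_DONTNEED: 0x%02x\n", img[0]);

                  munmap(img, st.st_size);
                  close(fd);
                  return 0;
              }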

      • Can't hibernate (Score:5, Interesting)

        by anomaly ( 15035 ) <tom DOT cooper3 AT gmail DOT com> on Thursday December 04, 2008 @07:00PM (#25995881)

        Windows makes me CRAZY about this. The OS is internally configured to use an LRU algorithm to page aggressively.

        ("Technical bastards" who question my use of paging and swap interchangeably in this post can send their flames to /dev/null \Device\Null or NUL depending on OS)

        What I found when disabling paging on an XP pro system with 2GB RAM is that the system performance is explosively faster without the disk IO.

        Even an *idle* XP pro system swaps - explaining the time it takes for the system to be responsive to your request to maximize a window you have not used in a while.

        I was thrilled to have a rocket-fast system again - until I tried to hibernate my laptop. Note that the hibernation file is unrelated to the swap/paging space.

        The machine consistently would blue screen when trying to hibernate if swap/paging was disabled. Enabling swap enabled the hibernation function again. Since reboots take *FOREVER* to reload all the crap that XP needs on an enterprise-connected system - systems management, anti-virus agent, software distribution tool, and the required ram-defragger which allows XP to "stand by" when you've got more than 1GB of RAM, plus IM, etc. -

        I reboot as infrequently as possible and consider "stand by" and "hibernate" required functions. As a result, I live with XP and paging enabled, and tolerate the blasted system "unpaging" apps that have been idle a short time.

        Poo!

    • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Thursday December 04, 2008 @06:14PM (#25995273)

      I'd assume what he's asking is: in modern systems where the amount of physical RAM is considerably larger than what most people's programs in total use, why does the OS ever swap RAM out to disk?

      The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.

      Of course, OS designers are always revisiting these assumptions---it may be that for some kinds of use patterns using a smaller disk cache and swapping RAM out to disk less leads to better performance, or at least better responsiveness (if that's the goal).

      • by jandrese ( 485 ) <kensama@vt.edu> on Thursday December 04, 2008 @06:27PM (#25995481) Homepage Journal
        Man, I hated that assumption in 2000, and I hate it in XP. It's the one that means when you bring Firefox up after it has been minimized, that the OS will have to laboriously swap in all of the memory for it from disk, which takes forever when you're talking about a slow laptop hard drive. I made it a habit of switching the paging file management to "manual" and reducing the paging size down to 2mb. It makes the whole system way more responsive when you're like me and have a bunch of applications open at once and in the background, and memory is so cheap that buying a little extra so you never run out (2GB) is easy.
        • by martyros ( 588782 ) on Thursday December 04, 2008 @06:47PM (#25995719)

          The question, though, is how the reduction in disk cache size that comes from having no virtual memory to speak of affects your runtime. Rather than paying it all at once, like when you swap Firefox back in, are you taking longer to navigate directories because they have to be read in every single time? When you're using Firefox, does it take longer to check its disk cache? Are you saving 2 seconds when you switch applications by losing 60 seconds over the course of 10 minutes as you're actually using an individual application?

          Saving the 60 seconds (perhaps at the expense of the 2 seconds) is exactly what the block cache is trying to do for you. Whether it's succeeding or not, or how well, is a different question. :-)

          • by Mprx ( 82435 ) on Thursday December 04, 2008 @07:12PM (#25996057)

            It might save 60 seconds, but it's saving the wrong 60 seconds. I'm not going to notice everything being very slightly faster, but I'll notice Firefox being swapped back from disc. I only care how long something takes if I have to wait for it.

            Kernel developers seem to mostly care about benchmarks, and interactive latency is hard to benchmark. This leads to crazy things like Andrew Morton claiming to run swappiness 100 (swappiness 0 is the only acceptable value IMO if you need swap at all). I don't use swap, and with 4GB ram I never need it.

      • by DragonWriter ( 970822 ) on Thursday December 04, 2008 @06:50PM (#25995735)

        The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.

        Whether or not it works (and I'm not sure how well it does), there's something odd about swapping out RAM contents to disk so that you can mirror disk contents in RAM.

      • by Solandri ( 704621 ) on Thursday December 04, 2008 @06:59PM (#25995859)

        The problem I noticed with XP (dunno if Vista does the same) is that it doesn't seem to give running apps priority over disk cache. So if you have your browser in the background and hit a lot of files (e.g. a virus scan), the browser would get paged to disk and would take forever to bring back to the foreground.

        What would be great is a setting like, "disk cache should never exceed 256 MB unless there is free RAM". In other words, if the total memory footprint of the OS and my running apps is less than my physical RAM minus 256 MB, they will never be swapped to disk. As I start approaching the limit, the first thing to be scaled back should be disk cache. Disk cache >256 MB will not be preserved by swapping my apps to disk.

        As it is, I set XP's swapfile manually to 128 MB (any smaller and I would get frequent complaints about it being too small even though I have 3 GB of RAM). If it really needs more memory, it will override my setting and increase the swapfile size. But 99% of the time this limits the amount of apps XP can swap to disk to just 128 MB, which for me results in a much speedier system.
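
        For what it's worth, that rule fits in a few lines. This is only a sketch of the policy as described above, with the 256 MB floor as a made-up tunable; it is not how Windows actually decides:

            #include <stdbool.h>
            #include <stdio.h>

            #define MB (1024ULL * 1024ULL)

            struct plan { unsigned long long cache_bytes; bool may_page_apps; };

            /* ram = physical RAM, apps = combined footprint of OS + programs */
            static struct plan memory_plan(unsigned long long ram,
                                           unsigned long long apps) {
                const unsigned long long floor = 256 * MB;     /* tunable floor */
                struct plan p;
                p.cache_bytes   = apps < ram ? ram - apps : 0; /* cache gets leftovers  */
                p.may_page_apps = p.cache_bytes <= floor;      /* never swap apps just  */
                return p;                                      /* to grow a big cache   */
            }

            int main(void) {
                struct plan p = memory_plan(3072 * MB, 1024 * MB);
                printf("cache: %llu MB, may page apps: %s\n",
                       p.cache_bytes / MB, p.may_page_apps ? "yes" : "no");
                return 0;
            }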

        • by Trepidity ( 597 )

          One problem is that there are relatively frequent types of disk-access patterns where caching them gives little to no benefit in return for the paging out of RAM it requires. A virus scan (touching most of your files exactly once) is one canonical example. Media playback (touching a few large files in sequential block reads) is another.

          The difficult question is how to exclude these kinds of wasted caching while still retaining the benefits of caching frequently accessed files, and not introducing excessive
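
          On the application side there is already a knob for exactly this on POSIX systems: posix_fadvise(). A hedged sketch (the 1 MB chunk size and the one-pass usage pattern are illustrative, and the hints are advisory, so the exact caching effect is up to the kernel):

              #include <fcntl.h>
              #include <stdio.h>
              #include <unistd.h>

              #define CHUNK (1 << 20)            /* read 1 MB at a time */

              int main(int argc, char **argv) {
                  if (argc < 2) return 1;
                  int fd = open(argv[1], O_RDONLY);
                  if (fd < 0) return 1;

                  /* We'll read front to back exactly once (scan, playback...). */
                  posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

                  static char buf[CHUNK];
                  off_t done = 0;
                  ssize_t n;
                  while ((n = read(fd, buf, sizeof buf)) > 0) {
                      /* ...scan or play buf here... */
                      done += n;
                      /* The part we've finished with won't be reused, so its
                       * cache pages can be dropped instead of evicting others. */
                      posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
                  }
                  close(fd);
                  return 0;
              }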

      • by lgw ( 121541 ) on Thursday December 04, 2008 @07:09PM (#25996019) Journal

        The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.

        We're rapidly getting to the point where there's enough RAM for not only all the programs you're running, but all of the disk that those programs will access! Paging memory out to disk just doesn't make much sense anymore. I've run Windows with no page file since Win2000 came out, and never had a problem with that.

        My current (non-gaming) desktop box has 8GB of RAM, and cost me about $1000. I rarely use that much memory for the combined total of apps, OS footprint, and all non-streaming files (there's no point in caching streaming media files on a single-user system, beyond maybe the first block).

        I expect my next $1000 system in a few years will have 64GB of RAM, at which point there really will be no point in using a page file for anything. And with a solid-state hard drive, I'm not sure there will be any point in read caching either (though write caching will still help I guess).

    • No he doesn't (Score:4, Informative)

      by fm6 ( 162816 ) on Thursday December 04, 2008 @06:57PM (#25995837) Homepage Journal

      You must be confused about virtual vs. physical memory. In modern processors, there is no penalty for using virtual memory, all translation from virtual to physical address space is done internal to the processor and you won't notice the difference.

      Huh? That's totally wrong. If it were true, you wouldn't need any RAM.

      It's true that address translation is hard-wired in modern processors. But that just means that figuring out where the data is is as fast as for data that's already in RAM. Actually reading or writing it is only as fast as the media it's stored on. So if you have a lot of big applications running, and there isn't enough RAM for them all to be in physical memory at once, your system "thrashes", as data migrates back and forth between the two media. That's why adding RAM is very often the best way to speed up a slow system, especially if you're running Microsoft's latest bloatware. Defragging the swap disk can also be helpful.

      To answer the original question: actually, you often don't need any virtual memory. But sometimes you do. Disk space is cheap, so it makes sense to allocate a decent amount of virtual memory and just not worry about whether it's absolutely necessary.

      • Re: (Score:3, Informative)

        That's why adding RAM is very often the best way to speed up a slow system, especially if you're running Microsoft's latest bloatware.

        Not running Microsoft's latest bloatware is probably the best way to speed up a slow system if you are currently doing that.

      • Re:No he doesn't (Score:4, Informative)

        by lgw ( 121541 ) on Thursday December 04, 2008 @07:23PM (#25996211) Journal

        Note: virtual memory is not necessarily on disk. "Virtual" memory just refers to the fact that the memory address that the application uses isn't the physical memory address (and in fact there might not *be* a physical memory address this instant), nothing more.

        Defragging the swap disk can also be helpful.

        I think this is never helpful. Pagefile reads are basically random for the first block being paged in for a given app, and modern hard drives mostly claim no seek latency as long as you have 1 read queued (of course, that claim might be BS).

        For Windows, the OS doesn't *create* fragmentation of the page file over time. If there is a large contiguous space available when the pagefile is created (and there usually is, right after OS installation), Windows will use that block, and it will never fragment. Also, if your pagefile was fragmented at creation, defragging by itself won't fix it, as it doesn't move any pagefile blocks.

        I hope the same thing is true in Linux - if defragging your swap drive helps, someone has done something very wrong to begin with.

    • by Calibax ( 151875 ) * on Thursday December 04, 2008 @07:06PM (#25995979)

      No, I don't think the OP is confused.

      Back in the days of mainframes only, say before 1980 or so, all the systems I worked on (NCR, IBM and Burroughs) used the term "virtual memory" to refer to secondary memory storage on a slower device. Early on the secondary device was CRAM (Card Random Access Memory) and later it was disk.

      But the point is that Virtual Memory originally referred to main memory storage on a secondary device. Furthermore, this is still the term used for paged storage in Microsoft Windows. Check out the Properties page on the "Computer" menu item on Vista or "My Computer" icon on XP which talks about Virtual Memory when setting the size of the paging file.

      The OP is totally correct in his use of Virtual memory both by historical precedent and by current usage in Windows.

  • by Alereon ( 660683 ) * on Thursday December 04, 2008 @06:00PM (#25995061)

    Memory exists to be used. If memory is not in use, you are wasting it. The reality is that your system will operate with higher performance if unused data is paged out of RAM to disk and the newly freed memory is used for additional disk caching. Vista's memory manager is actually reasonably smart and will only page data out to disk when it really won't be used, or you experience an actual low-memory condition.

    • by etymxris ( 121288 ) on Thursday December 04, 2008 @06:12PM (#25995227)

      I've known this argument for many years; I just don't think it applies anymore. The extra disk cache doesn't really help much, and what ends up happening is that I come in to work in the morning, unlock my work XP PC, and I sit there for 30 seconds while everything gets slowly pulled off the disk. XP thought it would be wise to page all that stuff out to disk; after all, I wasn't using it. But why would I care about the performance of the PC when I'm not actually using it?

      At the very least, the amount of swap should be easily configurable like it is in Linux. I haven't actually used a swap partition in Linux for years, preferring instead to have 6 or 8gb of RAM, which is now cheap.

      • by ivan256 ( 17499 ) on Thursday December 04, 2008 @06:17PM (#25995331)

        What you're actually complaining about is that Windows did a poor job of deciding what to page out. Sure, you could "turn off swap" if you have enough memory, and you won't ever have to wait for anything to be paged in.. But your system would be faster if you had a good paging algorithm and could use unaccessed memory pages for disk cache instead.

        • Re: (Score:3, Interesting)

          by mea37 ( 1201159 )

          I think you might have awfully high expectations of the paging algorithm, if you think it's "bad" because it paged out data that wasn't being used for something like 16 hours.

          Perhaps the problem is that the cost/benefit values of "keep an app that isn't being touched in RAM" vs. "increase the available memory for disk caching", while they may be appropriate when the computer is actually being used, are not optimal for a computer left idle overnight. The idle computer has a higher-than-expected cost (in ter

      • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Thursday December 04, 2008 @06:21PM (#25995389) Homepage Journal

        At the very least, the amount of swap should be easily configurable like it is in Linux. I haven't actually used a swap partition in Linux for years, preferring instead to have 6 or 8gb of RAM, which is now cheap.

        It is, (Right-click "My Computer")->Properties, "Advanced" tab, "Settings" under Performance, "Advanced" tab, "Change" under "Virtual memory". Almost as easy as "dd if=/dev/zero of=swapfile bs=1G count=1; mkswap swapfile; swapon swapfile", spclly if u cant spel cuz u txt 2 much.

      • by SiChemist ( 575005 ) * on Thursday December 04, 2008 @06:55PM (#25995815) Homepage

        You can also adjust the "swappiness" of a computer running linux. I've set my desktop to have a swappiness of 10 (on a scale of 0 to 100, where 0 means don't swap at all). In Ubuntu, you can do sudo sysctl vm.swappiness=10 to set the swappiness until next boot, or edit /etc/sysctl.conf and add vm.swappiness=10 to the bottom of the file to make it permanent.

        The default swappiness level is 60.

    • Agreed (Score:5, Interesting)

      by Khopesh ( 112447 ) on Thursday December 04, 2008 @06:35PM (#25995575) Homepage Journal

      Linux kernel maintainer Andrew Morton sets his swappiness [kerneltrap.org] to 100 (page as much physical memory as you can, the opposite of this Ask-Slashdot's desires), which he justified in an interview (see above link) by saying:

      My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.

      Of course, there's another view, also presented at the above kerneltrap article: If you swap everything, you'll have a very long wait when returning to something you haven't touched in a while.

      If you have limited resources, milk the resources you have plenty of; workstations should have high swappiness, while laptops, which suffer in disk speed, disk capacity, and power, are probably better suited to lower swappiness. Don't go crazy, though ... swappiness = 0 is the same as running swapoff -a and will crash your programs when they need more memory than is available (as the kernel isn't written for a system without swap).

    • by hey! ( 33014 ) on Thursday December 04, 2008 @07:09PM (#25996023) Homepage Journal

      Memory exists to be used. If memory is not in use, you are wasting it.

      While I grant this statement is in a sense true, a system designer would do well to ponder the distinction between "not used" and "freely available".

      RAM that is not currently being used, but which will be required for the next operation is not "wasted"; it is being held in reserve for future use. So when you put that "unused" RAM to use, the remaining unused RAM, plus the RAM you can release quickly, has to be greater than the amount of physical RAM the user is likely to need on short notice. Guess wrong, and you've done him no favors.

      I'm not sure what benchmark you are using to say Vista's vm manager is "reasonably smart"; so far as I know, no sensible vm scheme swaps out pages if there is enough RAM to go around.

      My own experience with Vista over about eighteen months was that it is fine as long as you don't do anything out of the ordinary, but if you suddenly needed a very large chunk of virtual memory, say a GB or so, Vista would be caught flat footed with a ton of pages it needed to get onto disk. Thereafter, it apparently never had much use for those pages, because you can release the memory you asked for and allocate it again without any fuss. It's just that first time. What was worse was that apparently Vista tried to (a) grow the page file in little chunks and (b) put those little chunks in the smallest stretch of free disk it could find. I had really mediocre performance with my workloads which required swapping with only 2-3GB of RAM, and I finally discovered that the pagefile had been split into tens of thousands of fragments! Deleting the page file, then manually creating a 2GB pagefile, brought performance back up to reasonable.

      One of the lessons of this story is to beware of assuming "unused" is the same as "available", when it comes to resources. Another is not to take any drastic steps when it comes to using resources that you can't undo quickly. Another is that local optimizations don't always add up to global optimizations. Finally, don't assume too much about a user.

      If I may wax philosophical here, one thing I've observed is that most problems we have in business, or as engineers, don't come from what we don't know, or even the things we believe that aren't true. It's the things we know but don't pay attention to. A lot of that is, in my experience, fixing something in front of us that is a problem, without any thought of the other things that might be connected to it. Everybody knows that grabbing resources you don't strictly need is a bad thing, but it is a kind of shotgun optimization where you don't have to know exactly where the problem is.

  • by Anonymous Coward on Thursday December 04, 2008 @06:02PM (#25995077)
    Virtual memory [wikipedia.org] is very useful.

    Note that "virtual memory" is not just "using disk space to extend physical memory size".

    • by Dadoo ( 899435 ) on Thursday December 04, 2008 @06:38PM (#25995605) Journal

      I think I'm going to need to add a comment to that Wikipedia page. I'm not sure when the definition changed, but a long time ago (mid 80s), "virtual memory" did mean "making a program believe it had more memory than there was on the system". At least three different vendors defined it that way: Motorola, Data General, and DEC. I still have the Motorola and DG manuals that say so.

    • Some advantages (Score:5, Informative)

      by pavon ( 30274 ) on Thursday December 04, 2008 @06:57PM (#25995833)

      That page mostly talks about what virtual memory is and doesn't directly list why it is an improvement.

      Some folks have already mentioned the fact that it eliminates memory fragmentation, and that it allows mapping of files and hardware into memory without dedicating (wasting) part of the address space to those uses.

      Another reason is that you can have 2^64 bytes of total system memory, even if the individual applications are 32-bit, and can only address 2^32 bytes of memory. Since the 32-bit applications are presented a virtual address space, it doesn't matter if their pages are located above the 32-bit boundary.

      It means that per-process memory protection is enforced by the CPU paging table. Without virtual memory you would have to reimplement something like it just for memory protection.

      It means that the linker/loader don't have to patch the executable with modified address locations when it is loaded into memory.

      The above two reasons have the corollary that libraries can be shared in memory much more easily.

      And that's just off the top of my head. Virtual memory is a very, very useful thing.
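
      The per-process address space point is easy to see on any Unix. A small sketch (the variable name and values are arbitrary): parent and child print the same virtual address, yet hold different contents once the child writes to it.

          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/wait.h>
          #include <unistd.h>

          static int value = 42;

          int main(void) {
              pid_t pid = fork();
              if (pid < 0) return 1;

              if (pid == 0) {                 /* child: write to its own copy */
                  value = 1337;
                  printf("child : &value=%p value=%d\n", (void *)&value, value);
                  exit(0);
              }
              wait(NULL);                     /* parent: let the child go first */
              printf("parent: &value=%p value=%d\n", (void *)&value, value);
              return 0;
          }

      Same virtual address in both processes, two different physical pages behind it; neither process can touch the other's copy.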

  • by Ethanol-fueled ( 1125189 ) * on Thursday December 04, 2008 @06:04PM (#25995089) Homepage Journal
    Virtual memory and pagefiles still exist so that there will be persistent, recoverable storage of your browsing and search history, illegally downloaded music, and furrie porn should anybody come a-knockin after you hit the power switch.

    [/tinfoil hat]
    • Re: (Score:3, Funny)

      by eldavojohn ( 898314 ) *

      Virtual memory and pagefiles still exist so that there will be persistent, recoverable storage of your browsing and search history, illegally downloaded music, and furrie porn should anybody come a-knockin after you hit the power switch. [/tinfoil hat]

      </worrying> You're close but do you know why I only drink rain memory and grain memory, Mandrake? It's because virtual memory and pagefiles are the greatest Communist conspiracy to sap and impurify our precious computerly processes. <love the bomb>

  • by Xerolooper ( 1247258 ) on Thursday December 04, 2008 @06:05PM (#25995121)
    you could create a RAM Disk and set your page file to use that.
    Then all your virtual memory is in RAM.
    I'll leave it to someone else to explain why that isn't a good idea.
    • Re:Would it help if (Score:5, Interesting)

      by Changa_MC ( 827317 ) on Thursday December 04, 2008 @06:21PM (#25995385) Homepage Journal
      I know it's not a good idea now, but this was seriously a great trick under Win98. Win98 recognized my full 1GB of RAM, but seemed to want to swap things to disk rather than use over 256MB of RAM. So I just created a RAM disk using the second 512MB of RAM, and voila! Everything ran much faster. When everything is broken, bad ideas become good again.
  • Turn it off, then! (Score:5, Insightful)

    by Jeppe Salvesen ( 101622 ) on Thursday December 04, 2008 @06:06PM (#25995131)

    We who know what we are doing are free to take the risk of running our computers without a swapfile.

    Most people are not in a position where they can be sure that they will never run out of physical memory. Because of that, all operating systems for personal computers set up a swapfile by default: It's better for joe average computer owner to complain about a slow system than for him to lose his document when the system crashes because he filled up the physical memory (and there is no swap file to fall back on).

  • by chrylis ( 262281 ) on Thursday December 04, 2008 @06:06PM (#25995133)

    The other extreme point of view is that modern systems should only have virtual memory and, instead of having an explicit file system, treat mass storage as a level-4 cache. In fact, systems that support mmap(2) do this partially.

    The idea here is that modern memory management is actually pretty good, and that it's best to let the OS decide what to keep in RAM and what to swap out, so that issues like prefetching can be handled transparently.
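
    Here's what that looks like today with mmap(2), as a minimal POSIX sketch (the file name "scratch.dat" and the one-page size are arbitrary): the file simply is memory, and the kernel decides what stays in RAM and when dirty pages go back to storage.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("scratch.dat", O_RDWR | O_CREAT, 0644);
            if (fd < 0) return 1;
            if (ftruncate(fd, 4096) != 0) return 1;   /* give it one page */

            char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
            if (data == MAP_FAILED) return 1;

            /* No read()/write() calls anywhere: storage through assignment. */
            strcpy(data, "stored by plain memory writes");
            msync(data, 4096, MS_SYNC);               /* optional: flush now */

            printf("file now starts with: %s\n", data);
            munmap(data, 4096);
            close(fd);
            return 0;
        }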

    • Re: (Score:3, Funny)

      by Anonymous Coward

      Modern like the IBM System 38 circa 1980?

    • Multics (Score:5, Insightful)

      by neongenesis ( 549334 ) on Thursday December 04, 2008 @06:38PM (#25995597)
      One word: Multics. Way too far ahead of its time. Those who forget history will have to try to re-invent it. Badly.
      • Re: (Score:3, Informative)

        by debrain ( 29228 )

        Those who forget history will have to try to re-invent it. Badly.

        I believe that is an insightful combination of two quotes:

        - Those who forget history are doomed to repeat it. (alt. George Santayana: "Those who cannot remember the past are condemned to repeat it.")

        - "Those who don't understand UNIX are condemned to reinvent it, poorly." Henry Spencer

    • Circa 1975 through 2000 or so, the "native" (*) versions of the Pick Operating System worked exactly this way. Even the OS-level programmers - working in assembly language (!) - only saw virtual memory in the form of disk pages. When you put an address in a register, it was a reference to a disk page, not physical memory. The pages were auto-magically brought into memory at the moment needed and swapped out when not by a tiny VM-aware paging kernel. That was the only part of the system that understood that

  • File - Save (Score:4, Interesting)

    by Anonymous Coward on Thursday December 04, 2008 @06:07PM (#25995145)

    For that matter, why do we even need to explicitly "save" anymore? Why does the fact that Notepad has 2KB of text to save prevent the shutdown of an entire computer? Just save the fecking thing anywhere and get on with it! Modern software is such a disorganized mess.

    • Re:File - Save (Score:4, Insightful)

      by JSBiff ( 87824 ) on Thursday December 04, 2008 @06:39PM (#25995617) Journal

      What would you do instead of file save? Continuous save, where the file data is saved as you type? What if you decide the changes you made were a mistake? I think one of the basic premises, going a very long way back in the design of software, is that you don't immediately save changes, so that the user can make a choice whether to 'commit' the changes, or throw them away and revert back to the original state of the file. As far as I know, Notepad will only temporarily stop the shutdown of the computer, to ask you do you want to save the file - yes/no? I don't see how that is such a bad thing?

      Now, you might say that the solution for this is automatic file versioning. The problem is that if you have continuous save, you would either get a version for every single character typed, deleted, etc, or else you would get 'periodic' versions (like, a version from 30 seconds ago, a version from 30 seconds before that, etc) and pretty soon you'd have a ridiculous number of 'intermediate' versions. File versioning should, ideally, only be saving essentially 'completed' versions of the file (or at least, only such intermediate versions as the user chooses to save [because, if you are creating a large document, like a Master's Thesis, or book, you will probably not create it all in a single session, so in that case, you might have versions which don't correspond to completed 'final products', but you probably also don't want 1000 different versions either], instead of a large number of automatically created versions).

      • Re:File - Save (Score:4, Interesting)

        by he-sk ( 103163 ) on Thursday December 04, 2008 @07:26PM (#25996241)

        Explicit saving is a crutch based on limitations of early computers, when disk space was expensive. Unfortunately, people are so used to it that they think it's a good idea. Kinda like having to reboot Windows every once in a while so it doesn't slow down. (I know that it's not true anymore.)

        Think about it, when I create a document in the analog world with a pencil I don't have to save it. Every change is committed to paper.

        You're right, of course, the added value with digital documents is that I can go back to previous versions. But again, it's implemented using a crutch, namely Undo and Redo. Automatic file versioning is the obvious answer.

        Having many intermediate versions lying around is a non-problem. First of all, only deltas have to be saved with a complete version saved once in a while to minimize the chance of corruption. Secondly, just as with backups, the older the version is the less intermediate versions you need. Say one version every minute for the last hour. Then one version every hour for the last day before that. One version every day for the last week before that. And so on.

        A filesystem that supports transparent automatic versioning is such a no-brainer from a usability standpoint that I can't figure out why nobody has done it already. I guess it must be really hard.

        BTW, an explicit save can be simulated on a system with continuous saving by creating named snapshots.
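
        The thinning rule described above fits in a few lines. This is only a sketch (the bucket widths mirror the minute/hour/day example; a real implementation would also keep explicitly tagged versions):

            #include <stdio.h>
            #include <time.h>

            /* Bucket width (seconds) as a function of snapshot age (seconds). */
            static long bucket_width(long age) {
                if (age <= 3600)          return 60;          /* last hour: per minute */
                if (age <= 24 * 3600)     return 3600;        /* last day : per hour   */
                if (age <= 7 * 24 * 3600) return 24 * 3600;   /* last week: per day    */
                return 7 * 24 * 3600;                         /* older    : per week   */
            }

            /* Keep a snapshot only if no newer kept snapshot shares its bucket. */
            static int keep(time_t snap, time_t newer_kept, time_t now) {
                long w = bucket_width((long)(now - snap));
                return (now - snap) / w != (now - newer_kept) / w;
            }

            int main(void) {
                time_t now = time(NULL);
                time_t newest = now - 30;        /* saved 30 seconds ago           */
                time_t older  = now - 45;        /* same minute bucket: redundant  */
                time_t old    = now - 2 * 3600;  /* two hours ago: its own bucket  */
                printf("45s-old kept? %d\n", keep(older, newest, now));  /* 0 */
                printf("2h-old  kept? %d\n", keep(old,   newest, now));  /* 1 */
                return 0;
            }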

      • Re:File - Save (Score:4, Interesting)

        by JesseMcDonald ( 536341 ) on Thursday December 04, 2008 @07:43PM (#25996459) Homepage

        Continuous save can be made workable with some reasonable rules for discarding unneeded versions. First, keep every version the user explicitly tags, as well as the baseline for the current session (to allow reversion). For the rest, devise a heuristic combining recency and amount of change to select old, trivial versions to be discarded. The further back you go into the history, the more widely spaced the checkpoints become. This is easier for structured documents, but with proper heuristics can also be applied to e.g. plain text. Temporal grouping (sessions, breaks in typing, etc.) can provide valuable clues in this area.

        Currently most programs only have two levels of history: the saved version(s), and the transient undo buffer. There's no reason that this sharp cut-off couldn't be turned into a gradual transition.

    • Re:File - Save (Score:4, Interesting)

      by mosb1000 ( 710161 ) <mosb1000@mac.com> on Thursday December 04, 2008 @06:46PM (#25995703)
      Maybe they should have a "finalize" option, whereby you save a special, read-only file, and it saves a backup of each "finalized" version. There's really no reason you should lose what you are working on when your computer crashes. And having an unsaved file shouldn't hold up quitting applications. Just start where you left off when you resume the application.
  • by JonLatane ( 750195 ) on Thursday December 04, 2008 @06:11PM (#25995223)
    But, at least in Mac OS X, exited processes consume "inactive" memory - basically being kept in memory until they are launched again. If Vista has a similar implementation, your swapfile may just contain a bunch of pages left over from previously-running applications. Are you experiencing actual system performance problems? Concerning yourself too much with the numbers only can be a bad thing.
  • by frog_strat ( 852055 ) on Thursday December 04, 2008 @06:17PM (#25995317)
    Virtual memory is now used for little tricks, in addition to providing more memory than is physically available.

    One example is ring transitions into kernel mode which start out as exceptions. (Everyone seems to have ignored call gate, the mechanism Intel offered for ring transitions).

    Another is memory mapped pointers. It is cool to be able to increment a pointer to file backed ram and not have to care if it is in ram or not.

    Maybe the OP is onto something. Imagine writing Windows drivers without having to worry about IRQL and paging.
  • I prefer none. (Score:5, Insightful)

    by mindstrm ( 20013 ) on Thursday December 04, 2008 @06:18PM (#25995343)

    This should generate some polarized discussion.

    There are two camps of thought.

    One will insist that, no matter how much memory is currently allocated, it makes more sense to swap out whatever isn't needed in order to keep more physical RAM free. They will argue until they are blue in the face that the benefits of doing so are real.
    Essentially - your OS is clever, and it tries to pre-emptively swap things out so the memory will be available as needed.

    The other camp - and the one I subscribe to - says that as long as you have enough physical ram to do whatever you need to do - any time spent swapping is wasted time.

    I run most of my workstations (Windows) without virtual memory. Yes, on occasion, I do hit a "low on virtual memory error" - usually when something is leaky - but I prefer to get the error and have to re-start or kill something rather than have the system spend days getting progressively slower, slowly annoying me more and more, and then giving me the same error.

    This is not to say that swap is bad, or that it shouldn't be used - but I prefer the simpler approach.

    • Re:I prefer none. (Score:5, Interesting)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday December 04, 2008 @07:20PM (#25996153) Homepage Journal

      One will insist that, no matter how much memory is currently allocated, it makes more sense to swap out that which isn't needed in order to keep more free physical ram.

      Most of the people in this camp are coming from a Unix background where this is actually implemented effectively. For example, the FreeBSD machine next to my desk has 6GB of RAM, but even with about 3GB free, I'm currently about 1GB into my 16GB of swap. (Why 16? Because it's bigger than 6 but still a tiny part of my 750GB main drive.)

      FreeBSD, and I assume most other modern Unixes, will copy idle stuff from RAM to swap when it's sufficiently bored. Note that it doesn't actually delete the pages in physical memory! Instead, it just marks them as copied. If those processes suddenly become active, they're already in RAM and go on about their business. If another process suddenly needs a huge allocation, like if my site's getting Slashdotted, then it can discard the pages in RAM since they've already been copied to disk.

      That is why many Unix admins recommend swap. It helps the system effectively manage its resources without incurring a penalty, so why wouldn't you?

      It's my understanding that Windows never managed to get this working right, so a lot of MS guys probably prefer to avoid it.

  • by sdaemon ( 25357 ) on Thursday December 04, 2008 @06:22PM (#25995401)

    I can finally put my CS degree to good use, answering the same questions students would ask the TAs in basic OS and systems-level programming courses! ...except that the other comments have already answered the question. So, in true CS fashion, I will be lazy and refrain from duplicating effort ;)

    Laziness is a virtue! (And that's on-topic, because a lazy paging algorithm is a good paging algorithm).

  • by Khopesh ( 112447 ) on Thursday December 04, 2008 @06:23PM (#25995413) Homepage Journal

    I recall back in 2002 or so, a friend of mine maxed out his Windows XP system with 2GB of memory. Windows absolutely refused to turn off paging (swap), forcing him to use whatever the minimum size was. The solution? He created a RAMdisk and put the paging file there.

    On Linux (and other modern systems, perhaps now including Windows), you can turn off swap. However, the Linux kernel's memory management isn't so great at the situation you hit when you need more memory than you have, but you can't swap. Usually, the memory hog crashes as a result (thankfully, Firefox now has session restore). I might be slightly out of date on this one.

    A well-tweaked system still has swap (in nontrivial amounts), but rarely uses it. Trust me, you can afford losing the few gigabytes from your filesystem. Again in Linux, /proc/sys/vm/swappiness [kerneltrap.org] can be tweaked to a percentage reflecting how likely the system is to swap memory. Just lower it. (Though note the cons to this presented at the kerneltrap article above.) My workstation currently has an uptime of 14 days, a swappiness of 60, and 42/1427 megs of swap in use as opposed to the 1932/2026 megs of physical memory in use at the moment.

    This is summarized for Windows and Linux on Paging [wikipedia.org] at Wikipedia.

  • Good Advice (Score:4, Interesting)

    by dark_requiem ( 806308 ) on Thursday December 04, 2008 @06:27PM (#25995489)
    Okay, so we've got most of the "you can run Vista with 4GB?!" jokes out of the way (hopefully). Here's my take on the situation.

    I have Vista x64 running in a machine with 8GB physical memory, and no page file. I can do this because I'm never running enough memory-hungry processes that I will exceed 8GB allocated memory. So, while the OS may be good at deciding what gets swapped to the hard disk, in my case, there's simply no need, as everything I'm running can be contained entirely within physical memory (and for the curious, I've been running like this for a year and a half, haven't run out of memory yet).

    However, if you don't have enough physical memory to store all the processes you might be running at once, then at some point the OS will need to swap to the hard drive, or it will simply run out of memory. I'm honestly not sure exactly how Vista handles things when it runs out of memory (never been a problem, never looked into it), but it wouldn't be good (probably BSoD, crash crash crash). I can tell you from personal experience that I regularly exceed 4GB memory usage (transcoding a DVD while playing a BluRay movie while ...). With your configuration, that's when you'd start to crash.

    Long story short, with just 4GB, I would leave the swap file as is. Really, you should only disable the swap file if you know based on careful observation that your memory usage never exceeds the size of your installed physical memory. If you're comfortable with the risks involved, and you know your system and usage habits well, then go for it. Otherwise, leave it be.
  • by fermion ( 181285 ) on Thursday December 04, 2008 @06:34PM (#25995561) Homepage Journal
    In college we ran the AT&T Unix PC for a year or so. Apple also used this memory scheme. IIRC, the physical memory is first used for the kernel and system processes. However much these processes take, that memory becomes more or less unavailable to the user. In your case, since you have 4GB physical and 2GB used, this may mean that Vista is using 2GB for the system, if all memory is used.

    What is left over is the physical memory needed by the system. It seems like the OS preferred a fixed amount of memory, so it would just set up a fixed space on the hard disk. So, even if all you have is 1 MB of available memory, the system would set up, say, 10MB, and that is what would be used. The pages that are being used will be stored in physical RAM, while everything would be stored on the HD.

    If page management is working correctly, this should be transparent to the user. The management software or hardware will predict which pages are needed and transfer those pages to RAM. One issue I had was that available memory was not hard disk plus available physical RAM, but was limited by the available hard disk space.

    So, it seems to me that virtual paged memory is still useful because, with multiple applications loaded, memory can be a mess, and with big fast hard drives it should not be an issue. I don't know how Vista works, but it seems that *nix works very hard to ensure that the pages that are needed are loaded into physical memory and page faults do not occur. In this case, where virtual memory equals available physical memory, it would seem that since only physical memory is being used, there would be no performance hit from virtual memory. It is only there in case an application is run that needs more memory. It is nice that we do not get those pesky memory errors we got in the very old days.

  • by PolygamousRanchKid ( 1290638 ) on Thursday December 04, 2008 @07:01PM (#25995913)

    If it's there, and you can see it . . . it's real.

    If you can see it, but it's not there . . . it's virtual.

    If you can't see it, and it's not there . . . it's gone.

  • by jellomizer ( 103300 ) on Thursday December 04, 2008 @07:06PM (#25995985)

    Yes, the OP is right: if you don't page to disk and run everything from RAM, you will be faster. However, good paging helps keep things from getting slower or breaking when you really need the extra horsepower, and you probably won't even notice it.

    First, we've got the 80/20 rule, where 20% of the data is used 80% of the time. So a large chunk of data will rarely be used: not being read or written, just kinda sitting there. You might as well page it to disk so you have more space free.

    Next, suppose you get a big memory request: say you open a VM system that needs gigabytes of memory. Say it needs 3 GB and you only have 2 GB free. That means before your app can run, you have to wait for 1 GB of data to be dumped to the disk. Compare that with a good paging algorithm, which would already have that 1 GB paged out, so you can fill the RAM with the VM for faster access; then, depending on the paging algorithm, pieces will slowly get paged back to disk, letting you run, say, another 512MB load on your system without having the system dump that 512MB of data first. If you didn't page at all, you would be stuck, as you don't have the RAM to run the application. Or, with a poor paging algorithm, the system will spend a lot of time paging data out until it gets enough free to operate.

    Drive space is relatively cheap if you are going to run some RAM-intensive apps. With good paging you can get by with about half as much RAM, saving money.

    Most systems have more RAM than ever, but the apps use more RAM than ever too. (This isn't necessarily bloat.) Let's say your app does a lot of square roots. Compare the time it takes to compute, say, 1,000,000 square roots on the fly versus dumping precalculated square-root values into memory and doing a quick lookup of the answer. That way you get faster calculation time at the cost of RAM.
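
    The square-root example as a tiny sketch of that RAM-for-CPU trade (the 1,000,000-entry table is just the figure from the comment; link with -lm):

        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N 1000000                 /* 1,000,000 precomputed values */

        int main(void) {
            double *table = malloc(N * sizeof *table);   /* roughly 8 MB of RAM */
            if (!table) return 1;

            for (int i = 0; i < N; i++)   /* pay the computation cost once */
                table[i] = sqrt((double)i);

            /* Later, "computing" a square root is just a memory read. */
            int x = 123456;
            printf("sqrt(%d) = %f (table) vs %f (recomputed)\n",
                   x, table[x], sqrt((double)x));

            free(table);
            return 0;
        }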

  • by Lord Byron II ( 671689 ) on Thursday December 04, 2008 @07:17PM (#25996123)
    There has been this ridiculous notion floating around recently that swap space and paging files are relics and need to be eliminated. You can safely eliminate them only so long as you're 100% confident you'll never use more RAM than you actually have. But there are lots and lots of memory hogging applications - video editors, image editors, scientific applications, etc. And when you consider that a web browser can eat up to 300MB of RAM, it shouldn't be hard to imagine a multitasking user running out by using too many little programs.
  • by bored ( 40072 ) on Thursday December 04, 2008 @07:27PM (#25996245)

    I note a lot of people are insisting that "virtual memory" refers to the virtual address space given to an execution context, and what the author really means is "paging".

    The funny thing is that these are traditionally poorly defined/understood terms which are gaining a hard consensus for their meanings due to some recent OS books, and poor comp-sci education which insists on a particular definition. Everyone is faulting M$ for using the term incorrectly, even though the original mac OS and other OS's used the term in the same way. Wikipedia defines it one way and then goes on to give historical systems which don't really adhere to the definition. For example, the B5000 (considered the first commercial machine with virtual memory) didn't even have "contiguous" working memory as required by the wikipedia definition. It had what would be more specifically called multiple variable-sized segments which could be individually swapped. Again, the mac OS evolved from a single-process model to multiprocess, in the same address space (look up mac switcher), and implemented "virtual memory" on a system without an MMU by swapping the allocated pieces of memory to disk if they weren't currently locked (in use) and reallocating the memory. Aka, they had "virtual memory" in a single fragmented address space.

    The other example is people using "paging" to describe the act of swapping portions of memory to disk, misunderstanding that paging is more about splitting an address space or segment up into fixed pieces for translation to physical addresses, and that disk swapping of pages isn't required for paging. Aka, your system is still "paging" if you disable swapping.

    Even the term swapping is unclear, because the need to differentiate between swapping pages and swapping whole processes (or even segments) resulted in people avoiding the term swapping to describe systems which were swapping pages instead of segments/regions/processes. These systems were generally called "demand paged" or something similar to indicate that they didn't need to swap a complete process or dataset (see DOSSHELL).

    So give the guy a break; in many ways he is just as correct, if not more so.
