Why Use Virtual Memory In Modern Systems?

Cyberhwk writes "I have a system with Windows Vista Ultimate (64-bit) installed on it, and it has 4GB of RAM. However, when I've been watching system performance, my system seems to divide the work between the physical RAM and the virtual memory, so I have 2GB of data in the virtual memory and another 2GB in the physical memory. Is there a reason why my system should even be using virtual memory anymore? I would think the computer would run better if it based everything off of RAM instead of virtual memory. Any thoughts on this matter, or could you explain why the system is acting this way?"
  • by alain94040 ( 785132 ) * on Thursday December 04, 2008 @06:00PM (#25995059) Homepage

    You must be confused about virtual vs. physical memory. In modern processors there is no penalty for using virtual memory: all translation from virtual to physical address space is done inside the processor, and you won't notice the difference.

    So all the physical memory installed in your PC is used by the processor as one big pool of resources. Processes can think whatever they want and address huge memory spaces; that's all in virtual land. Virtual memory only starts impacting performance when pages are being swapped in and out, because all your processes together need more resident memory than you actually have.

    Swapping means accessing the disk and freezing the requesting process until its page of memory has arrived from the disk, which takes millions of processor cycles (a lifetime from the processor's point of view). It's not so bad if you swap once, as the processor can work on other processes while waiting for the data to arrive, but if all your programs keep pushing each other out of physical memory, you get thrashing, and consider yourself happy if the mouse pointer is still responsive!

    So you may want to change the title of your post to: "why use physical memory in modern systems?". I would point you to an article I wrote on that topic in 1990, but somehow I can't find a link to it on the web :-)

    fairsoftware.net - software developers share revenue from the apps they build

  • by Anonymous Coward on Thursday December 04, 2008 @06:02PM (#25995077)
    Virtual memory [wikipedia.org] is very useful.

    Note that "virtual memory" is not just "using disk space to extend physical memory size".

  • by nleaf ( 953206 ) on Thursday December 04, 2008 @06:08PM (#25995151)
    The problem is whatever Vista is using for page replacement, not virtual memory itself. Let's say you're using 2 GB of physical memory, and then start up some memory-intensive program that uses another 2 GB. You're done; if you run out of space now, the OS can do nothing about it and is forced to do something drastic, like start killing off processes.

    The question, then, is why Vista is making such full use of virtual memory. It is probably trying to proactively move little-used pages out of physical memory to make space for new pages. It's an attempt at optimization; after all, if you're not using that data, why let it take up valuable physical memory?
  • by El Lobo ( 994537 ) on Thursday December 04, 2008 @06:08PM (#25995153)
    Absolutely not true. You can install and run Vista on a computer with 1GB of RAM and no page file, and run applications, so it doesn't reserve 1GB for itself; your myth is busted. Vista's memory manager will use as much memory as it can (free memory is a waste, so it would rather use it than watch it sit empty), but as soon as a process needs memory it will give it back.
  • by JonLatane ( 750195 ) on Thursday December 04, 2008 @06:11PM (#25995223)
    But, at least in Mac OS X, exited processes consume "inactive" memory, basically being kept in memory until they are launched again. If Vista has a similar implementation, your swap file may just contain a bunch of pages left over from previously running applications. Are you experiencing actual system performance problems? Concerning yourself too much with the numbers alone can be a bad thing.
  • Winner (Score:2, Informative)

    by Anonymous Coward on Thursday December 04, 2008 @06:20PM (#25995373)

    Each process has its own address space. This is necessary and useful for several reasons.
    1) You can get memory at any virtual address you request. This is necessary because .exe files usually contain no relocation section and thus always need to be mapped at the same virtual address.
    2) Processes are well separated, and on a 64-bit system each 32-bit process can get a memory space that is addressable with a 32-bit pointer, while the whole system has more memory than that.
    3) Executable code (mainly .dlls) can be mapped copy-on-write, using the physical RAM only once even if it is mapped into several processes. Once a process modifies the executable, it gets its own copy of the modified page, and the virtual address does not change.
    And if you are talking about paged memory, it is still useful even if you have enough RAM, because you can reserve memory for future use. To be able to satisfy a commit of that memory at a later time, the OS reserves that space in the swap file. As there is no data to write to the HDD, you get no performance hit.
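
    As a minimal sketch of that reserve-then-commit pattern, using the documented Win32 calls (the sizes here are arbitrary illustration, not anything from the poster's setup):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Reserve 256 MB of address space: no RAM or pagefile space is
               committed yet, only virtual addresses are set aside. */
            SIZE_T size = 256 * 1024 * 1024;
            void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
            if (!base) { fprintf(stderr, "reserve failed\n"); return 1; }

            /* Later, commit the first page; the OS now guarantees backing
               store (RAM or pagefile) exists for it. */
            void *page = VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE);
            if (!page) { fprintf(stderr, "commit failed\n"); return 1; }

            ((char *)page)[0] = 42;   /* first touch assigns a physical frame */

            VirtualFree(base, 0, MEM_RELEASE);
            return 0;
        }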

  • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Thursday December 04, 2008 @06:21PM (#25995389) Homepage Journal

    At the very least, the amount of swap should be easily configurable, like it is in Linux. I haven't actually used a swap partition in Linux for years, preferring instead to have 6 or 8GB of RAM, which is now cheap.

    It is: (Right-click "My Computer")->Properties, "Advanced" tab, "Settings" under Performance, "Advanced" tab, "Change" under "Virtual memory". Almost as easy as "dd if=/dev/zero of=swapfile bs=1G count=1; mkswap swapfile; swapon swapfile", spclly if u cant spel cuz u txt 2 much.

  • by Khopesh ( 112447 ) on Thursday December 04, 2008 @06:23PM (#25995413) Homepage Journal

    I recall back in 2002 or so, a friend of mine maxed out his Windows XP system with 2GB of memory. Windows absolutely refused to turn off paging (swap), forcing him to whatever the minimum size was. The solution? He created a RAM disk and put the paging file there.

    On Linux (and other modern systems, perhaps now including Windows), you can turn off swap. However, the Linux kernel's memory management isn't so great in the situation you hit when you need more memory than you have but can't swap. Usually, the memory hog crashes as a result (thankfully, Firefox now has session restore). I might be slightly out of date on this one.

    A well-tweaked system still has swap (in nontrivial amounts) but rarely uses it. Trust me, you can afford to lose the few gigabytes from your filesystem. Again on Linux, /proc/sys/vm/swappiness [kerneltrap.org] can be tweaked to a percentage reflecting how likely the system is to swap memory; just lower it (though note the cons to this presented in the kerneltrap article above). My workstation currently has an uptime of 14 days, a swappiness of 60, and 42/1427 megs of swap in use, as opposed to the 1932/2026 megs of physical memory in use at the moment.

    This is summarized for Windows and Linux on Paging [wikipedia.org] at Wikipedia.

  • by clarkn0va ( 807617 ) <apt,get&gmail,com> on Thursday December 04, 2008 @06:23PM (#25995423) Homepage

    thinks "Virtual Memory" is the same thing as paging...

    Mac Classic (OS 8 for sure) used the term "Virtual Memory" the same way Windows today uses "Page File" or unix uses "swap", so you can at least understand why some people might be confused by this.

    db

  • by jandrese ( 485 ) <kensama@vt.edu> on Thursday December 04, 2008 @06:27PM (#25995481) Homepage Journal
    Man, I hated that assumption in 2000, and I hate it in XP. It's the one that means that when you bring Firefox up after it has been minimized, the OS has to laboriously swap all of its memory back in from disk, which takes forever when you're talking about a slow laptop hard drive. I made it a habit of switching the paging file management to "manual" and reducing the paging size down to 2MB. It makes the whole system way more responsive when you're like me and have a bunch of applications open at once and in the background, and memory is so cheap that buying a little extra so you never run out (2GB) is easy.
  • by amliebsch ( 724858 ) on Thursday December 04, 2008 @06:28PM (#25995499) Journal
    Are you sure? I was under the impression that Vista aggressively pages out when it thinks it can do so without impacting system performance, but will still keep the page in RAM. That way the RAM can be quickly freed and used for other processes if needed (i.e., large programs can start quickly), and the disk page can be ignored and overwritten (never helping, but never hurting much either) if it is ultimately never used.
  • by clarkn0va ( 807617 ) <apt,get&gmail,com> on Thursday December 04, 2008 @06:28PM (#25995501) Homepage

    Vista reserves 1 GB to itself, so your system will only ever have 3 GB available for processes.

    Not exactly. 32-bit OSes won't normally report more than about 3.2 GiB of system RAM, as a 32-bit OS can only address 4 GiB (PAE/himem aside), and the upper addresses are reserved for hardware.

    64-bit OSes (even Vista) will use and report RAM to much higher upper limits.

    Or something like that.

  • by Daniel Weis ( 1209058 ) on Thursday December 04, 2008 @06:30PM (#25995521)
    Wrong. I routinely run my laptop in this configuration. No issues.
  • by TypoNAM ( 695420 ) on Thursday December 04, 2008 @06:34PM (#25995551)

    Sorry, got to correct the path to where exactly I got that quote from:
    System Properties -> Advanced -> Performance area, click Settings -> Advanced tab (on Windows XP; on 2000 it's the default tab).

  • by d3vi1 ( 710592 ) on Thursday December 04, 2008 @06:34PM (#25995555)

    I think he is referring to the userspace/kernelspace split in Windows NT. On 32-bit Windows XP, by default, userspace (ring 3) gets at most 2GB of the virtual address space, and kernel space gets the rest (some of it paged and some of it not). On systems with more than 3GB of RAM (a lot by 2002 standards), it was kind of pointless to reserve that much for kernel space, so they added a boot.ini flag that changed the split to _AT_MOST_ 3GB for userspace and the rest for kernel space.
    In Vista the 3G/1G split is the default. Actually, on a system with 4GB of RAM running in 32-bit mode, you can't use all of it even if you try (in Windows XP), because right under the 4GB limit you have the PCI memory address mappings, which can be as large as 512MB for a common video card with half a gig of RAM. Add to that the RAID controllers and the other hardware, and you have about 800MB of RAM unused because it can't be addressed, as its address space is used by the installed devices.
    I think that http://support.microsoft.com/kb/823440/ [microsoft.com] and http://support.microsoft.com/kb/171793/ [microsoft.com] describe what I'm talking about pretty clearly.

  • by fermion ( 181285 ) on Thursday December 04, 2008 @06:34PM (#25995561) Homepage Journal
    In college we ran the AT&T UNIX PC for a year or so. Apple also used this memory scheme. IIRC, the physical memory is first used for the kernel and system processes; however much those processes take, that memory becomes more or less unavailable to the user. In your case, since you have 4GB physical and 2GB used, this may mean that Vista is using 2GB for the system, if all memory is used.

    What is left over is the physical memory available to the user. It seems like the OS preferred a fixed amount of memory, so it would just set up fixed space on the hard disk. So even if all you had was 1MB of available memory, the system would set up, say, 10MB, and that is what would be used. The pages currently in use would be stored in physical RAM, while everything would be stored on the HD.

    If page management is working correctly, this should be transparent to the user. The management software or hardware will predict which pages are needed and transfer those pages to RAM. One issue I had was that available memory was not hard disk plus available physical RAM, but was limited by the available hard disk space.

    So it seems to me that virtual paged memory is still useful: with multiple applications loaded, memory can be a mess, and with big, fast hard drives it should not be an issue. I don't know how Vista works, but it seems that *nix works very hard to ensure that the pages that are needed are loaded into physical memory and page faults do not occur. In this case, where virtual memory equals available physical memory, it would seem that since only physical memory is being used, there would be no performance hit from virtual memory; it is only there in case an application is run that needs more memory. It is nice that we do not get those pesky memory errors we got in the very old days.

  • by hezekiah957 ( 1219288 ) on Thursday December 04, 2008 @06:38PM (#25995595)
    Guess what. I AM running Vista on such a setup. Your move, AC.
  • by Dadoo ( 899435 ) on Thursday December 04, 2008 @06:38PM (#25995605) Journal

    I think I'm going to need to add a comment to that Wikipedia page. I'm not sure when the definition changed, but a long time ago (mid 80s), "virtual memory" did mean "making a program believe it had more memory than there was on the system". At least three different vendors defined it that way: Motorola, Data General, and DEC. I still have the Motorola and DG manuals that say so.

  • by maxume ( 22995 ) on Thursday December 04, 2008 @06:53PM (#25995779)

    RAM is cheap. If you are swapping, buy more RAM.

    Computers are cheap. If adding more RAM doesn't fix the swapping on your current computer, buy a new computer that can take more RAM.

  • by SiChemist ( 575005 ) * on Thursday December 04, 2008 @06:55PM (#25995815) Homepage

    You can also adjust the "swappiness" of a computer running Linux. I've set my desktop to have a swappiness of 10 (on a scale of 0 to 100, where 0 means don't swap at all). In Ubuntu, you can run sudo sysctl vm.swappiness=10 to set the swappiness until the next boot, or edit /etc/sysctl.conf and add vm.swappiness=10 to the bottom of the file to make it permanent.

    The default swappiness level is 60.

  • by Cramer ( 69040 ) on Thursday December 04, 2008 @06:55PM (#25995819) Homepage

    if you decide to hibernate

    If I decide anything, it will have to wake up and pull everything back off disk before it can do anything. If your power settings lead to hibernation after some (long) idle time, then it doesn't f'ing matter whether it already pushed everything to disk hours earlier.

    it helps with crash recovery

    BULL! When the system restarts, it does not care one bit what's in the pagefile. All the work you had open and unsaved is still gone. All of the filesystem delayed writes will be completed after just a few minutes of idle time, so pushing everything to the pagefile is nothing but a waste of time. And if you're on a laptop, your system very likely had to spin the drive back up to write all that crap.

  • Some advantages (Score:5, Informative)

    by pavon ( 30274 ) on Thursday December 04, 2008 @06:57PM (#25995833)

    That page mostly talks about what virtual memory is and doesn't directly list why it is an improvement.

    Some folks have already mentioned the fact that it eliminates memory fragmentation, and that it allows mapping of files and hardware into memory without dedicating (wasting) part of the address space to those uses.

    Another reason is that you can have 2^64 bytes of total system memory, even if the individual applications are 32-bit and can only address 2^32 bytes of memory. Since the 32-bit applications are presented with a virtual address space, it doesn't matter if their pages are located above the 32-bit boundary.

    It means that per-process memory protection is enforced by the CPU's paging hardware. Without virtual memory you would have to reimplement something like it just for memory protection.

    It means that the linker/loader doesn't have to patch the executable with modified address locations when it is loaded into memory.

    The above two reasons have the corollary that libraries can be shared in memory much more easily.

    And that's just off the top of my head. Virtual memory is a very, very useful thing.
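
    To make the per-process address space and copy-on-write points above concrete, a minimal POSIX sketch (assumes a Unix-like system; the variable names are illustrative). Parent and child print the same virtual address but see different values, because each process has its own mapping behind that address:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int value = 1;

        int main(void)
        {
            pid_t pid = fork();
            if (pid == 0) {          /* child: same virtual address for `value`... */
                value = 2;           /* ...but copy-on-write gives it its own page */
                printf("child:  &value=%p value=%d\n", (void *)&value, value);
                return 0;
            }
            wait(NULL);              /* let the child finish first */
            printf("parent: &value=%p value=%d\n", (void *)&value, value);
            return 0;
        }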

  • No he doesn't (Score:4, Informative)

    by fm6 ( 162816 ) on Thursday December 04, 2008 @06:57PM (#25995837) Homepage Journal

    You must be confused about virtual vs. physical memory. In modern processors there is no penalty for using virtual memory: all translation from virtual to physical address space is done inside the processor, and you won't notice the difference.

    Huh? That's totally wrong. If it were true, you wouldn't need any RAM.

    It's true that address translation is hard-wired in modern processors. But that just means that figuring out where the data lives is as fast as for data that's already in RAM. Actually reading or writing it is only as fast as the media it's stored on. So if you have a lot of big applications running and there isn't enough RAM for them all to be resident at once, your system "thrashes" as data migrates back and forth between the two media. That's why adding RAM is very often the best way to speed up a slow system, especially if you're running Microsoft's latest bloatware. Defragging the swap disk can also be helpful.

    To answer the original question: actually, you often don't need any virtual memory. But sometimes you do. Disk space is cheap, so it makes sense to allocate a decent amount of virtual memory and just not worry about whether it's absolutely necessary.

  • Re:No he doesn't (Score:3, Informative)

    by DragonWriter ( 970822 ) on Thursday December 04, 2008 @07:00PM (#25995885)

    That's why adding RAM is very often the best way to speed up a slow system, especially if you're running Microsoft's latest bloatware.

    Not running Microsoft's latest bloatware is probably the best way to speed up a slow system if you are currently doing that.

  • by hardburn ( 141468 ) <hardburn@wumpus-ca[ ]net ['ve.' in gap]> on Thursday December 04, 2008 @07:00PM (#25995893)

    It never did change. "Virtual Memory" always meant a trick the kernel and CPU do to make programs think they are accessing a different memory address than they actually are. This trick is necessary in all multitasking operating systems.

    Once you've made the jump to mapping real memory addresses to fake ones, it's easy to map the fake addresses to a swap file on the hard drive instead of actual RAM. The confusion of the terms started when naive programmers at the UI level called that swap file "virtual memory".

  • by thatseattleguy ( 897282 ) on Thursday December 04, 2008 @07:01PM (#25995909) Homepage

    Circa 1975 through 2000 or so, the "native" (*) versions of the Pick Operating System worked exactly this way. Even the OS-level programmers, working in assembly language (!), only saw virtual memory in the form of disk pages. When you put an address in a register, it was a reference to a disk page, not physical memory. The pages were auto-magically brought into memory at the moment they were needed and swapped out when not, by a tiny VM-aware paging kernel. That was the only part of the system that understood that there was anything besides disk storage in the entire system.

    Might sound inefficient, but I remember the first Pick system I worked on supported 14-16 simultaneous online users using only 64KB of physical memory (and it was core - REAL core - at that point).

    Now get off my lawn, kids.

    -TSG-

    (*) "Native" meaning "with no other OS involved underneath, bare Pick on the metal". These types (Reality, Ultimate, CDI Series/1, etc) were mostly gone by Y2K in favor of more modern systems with underlying host OS (Un*x or NT) handling the hardware and Pick riding on top as a database/programming environment (Universe, Unidata, AP / D3). Though I'm sure there are still hundreds if not thousands of the old systems chugging away in back rooms of warehouses and the like to this day.

  • Re:Why use swap? (Score:3, Informative)

    by croddy ( 659025 ) * on Thursday December 04, 2008 @07:02PM (#25995917)

    Because you don't just use RAM to hold your processes; you use it for caching frequently accessed data from your (comparatively slower) hard disks as well. Thus, there is never any such thing as "enough" RAM, that is, until you have enough primary storage to equal the sum of all the secondary storage you'll use in a single computing session PLUS the amount of memory needed by all your running processes.

    But nobody has that much memory. It's a waste of money. So, we trust the OS to swap out inactive pages and then fill the remaining space with disk caches. Then we spec our systems with as much primary storage as we need to contain actively used memory from processes as well as a healthy disk cache for the persistent data we're working on. With modern memory management, we get substantially higher disk access performance as well as a system that's affordable because it doesn't contain a terabyte of solid-state memory that can't even remember anything through a reboot.

  • by jellomizer ( 103300 ) on Thursday December 04, 2008 @07:06PM (#25995985)

    Yes, the OP is right that if you never page to disk and run everything from RAM, you will be faster. However, good paging helps keep things from getting slower (or failing outright) when you really need the extra horsepower, and you probably won't even notice it.

    First, there's the 80/20 rule, where 20% of the data is used 80% of the time. So a large chunk of data will rarely be used, neither read nor written, just sitting there. You might as well page it to disk so you have more space free.

    Next, suppose you get a big memory request, say you open a VM system that needs gigabytes of memory: 3GB, when you only have 2GB free. That means before your app can run, you have to wait for 1GB of data to be dumped to disk. Compare that with a good paging algorithm, which would already have that 1GB paged out, so you can fill the RAM with the VM for faster access; then, depending on the algorithm, pieces slowly get paged back out, letting you run, say, another 512MB load without the system having to dump that 512MB of data first. If you didn't page at all, you would be stuck, since you don't have the RAM to run the application, and a poor paging algorithm will spend ages paging data out until it frees enough to operate.

    Drive space is relatively cheap. If you are going to run some RAM-intensive apps, good paging can let you get by with about half as much RAM, saving money.

    Most systems have more RAM than ever, but the apps use more RAM than ever too (this isn't necessarily bloat). Let's say your app does a lot of square roots. Compare the time it takes to compute, say, 1,000,000 square roots on the fly versus keeping precalculated square-root values in memory and doing a quick lookup of the answer: you get faster calculation time at the cost of RAM.
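
    A toy sketch of that square-root trade-off (the table size and names are arbitrary; compile with -lm):

        #include <math.h>
        #include <stdio.h>

        #define TABLE_SIZE 1000000

        static double sqrt_table[TABLE_SIZE];   /* ~8 MB of RAM traded for speed */

        int main(void)
        {
            /* pay the computation cost once up front */
            for (int i = 0; i < TABLE_SIZE; i++)
                sqrt_table[i] = sqrt((double)i);

            /* every later "calculation" is now a single memory read; this is
               only a win while the table stays RAM-resident rather than
               getting paged out */
            printf("sqrt(123456) ~= %f\n", sqrt_table[123456]);
            return 0;
        }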

  • Re:No he doesn't (Score:4, Informative)

    by lgw ( 121541 ) on Thursday December 04, 2008 @07:23PM (#25996211) Journal

    Note: virtual memory is not necessarily on disk. "Virtual" memory just refers to the fact that the memory address that the application uses isn't the physical memory address (and in fact there might not *be* a physical memory address this instant), nothing more.

    Defragging the swap disk can also be helpful.

    I think this is never helpful. Pagefile reads are basically random for the first block being paged in for a given app, and modern hard drives mostly claim no seek latency as long as you have 1 read queued (of course, that claim might be BS).

    For Windows, the OS doesn't *create* fragmentation of the page file over time. If there is a large contiguous space available when the pagefile is created (and there usually is, right after OS installation), Windows will use that block, and it will never fragment. Also, if your pagefile was fragmented at creation, defragging by itself won't fix it, as it doesn't move any pagefile blocks.

    I hope the same thing is true in Linux - if defragging your swap drive helps, someone has done something very wrong to begin with.

  • by bored ( 40072 ) on Thursday December 04, 2008 @07:27PM (#25996245)

    I note a lot of people are insisting that "virtual memory" refers to the virtual address space given to an execution context, and that what the author really means is "paging".

    The funny thing is that these are traditionally poorly defined/understood terms which are gaining a hard consensus on their meanings due to some recent OS books, and poor comp-sci education which insists on a particular definition. Everyone is faulting M$ for using the term incorrectly, even though the original Mac OS and other OSes used the term in the same way. Wikipedia defines it one way and then goes on to give historical systems which don't really adhere to the definition. For example, the B5000 (considered the first commercial machine with virtual memory) didn't even have "contiguous" working memory as required by the Wikipedia definition. It had what would more specifically be called multiple variable-sized segments, which could be individually swapped. Again, the Mac OS evolved from a single-process model to multiprocess, in the same address space (look up Mac Switcher), and implemented "virtual memory" on a system without an MMU by swapping the allocated pieces of memory to disk if they weren't currently locked (in use) and reallocating the memory. In other words, they had "virtual memory" in a single fragmented address space.

    The other example is that people use "paging" to describe the act of swapping portions of memory to disk, misunderstanding that paging is more about splitting an address space or segment up into fixed pieces for address translation to physical, and that disk swapping of pages isn't required for paging. That is, your system is still "paging" even if you disable swapping.

    Even the term swapping is unclear, because the need to differentiate between swapping pages and swapping whole processes (or even segments) resulted in people avoiding the term swapping for systems which were swapping pages instead of segments/regions/processes. These systems were generally called "demand paged" or something similar, to indicate that they didn't need to swap a complete process or dataset (see DOSSHELL).

    So, give the guy a break; in many ways he is just as correct, if not more so.

  • Re:Multics (Score:3, Informative)

    by debrain ( 29228 ) on Thursday December 04, 2008 @07:32PM (#25996317) Journal

    Those who forget history will have to try to re-invent it. Badly.

    I believe that is an insightful combination of two quotes:

    - Those who forget history are doomed to repeat it. (alt. George Santayana: "Those who cannot remember the past are condemned to repeat it.")

    - "Those who don't understand UNIX are condemned to reinvent it, poorly." Henry Spencer

  • by klapaucjusz ( 1167407 ) on Thursday December 04, 2008 @07:51PM (#25996575) Homepage

    On Linux (and other modern systems, perhaps now including Windows), you can turn off swap. However, the Linux kernel's memory management isn't so great in the situation you hit when you need more memory than you have but can't swap. Usually, the memory hog crashes as a result

    Oh my... you want the full answer on this one?

    Modern operating systems don't actually allocate the memory that a user-space application asks for. When the user-space application calls brk, malloc or VirtualAlloc, the call always succeeds. The memory is actually allocated lazily, the first time the application touches it. If the system runs out of memory at that point, it is too late to return an error code from the memory allocation call, so the application will most likely crash with SIGSEGV. (An alternate, somewhat suboptimal, semantics is to crash the whole system.)

    Obviously, not everyone is happy with this situation. Some proprietary Unices (notably AIX) deal with it by sending warnings to a process before crashing it with SIGSEGV. Under Linux, you can tune this behaviour with the vm.overcommit_memory and vm.overcommit_ratio sysctls, which tune how willingly the system will return success for memory allocations for memory it doesn't actually have. While tuning these values is something of a black art, when done correctly, it can prevent crashes of properly written applications without impairing functionality.
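
    A minimal sketch of that lazy-allocation behaviour, assuming a 64-bit Linux box with default overcommit settings (run with care: under real memory pressure, the touch loop is where the OOM killer strikes, and it may kill this process or another):

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            size_t size = (size_t)8 * 1024 * 1024 * 1024;  /* 8 GB, maybe more than you have */

            char *p = malloc(size);   /* with overcommit on, this usually "succeeds" */
            if (!p) {
                puts("allocation refused up front (e.g. vm.overcommit_memory=2)");
                return 1;
            }

            /* Touch one byte per page: only now does the kernel assign real
               frames to the allocation. */
            for (size_t i = 0; i < size; i += 4096)
                p[i] = 1;

            puts("survived: the system really had the memory to give");
            free(p);
            return 0;
        }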

  • Re:Can't hibernate (Score:5, Informative)

    by ZosX ( 517789 ) <zosxavius@nOSpAm.gmail.com> on Thursday December 04, 2008 @07:51PM (#25996577) Homepage

    Uh. You do realize that blocks of RAM are not written contiguously, right? You won't find it any different on Linux or Mac OS or any operating system for that matter. You also realize that the access time of RAM is effectively zero, right? Yeah, the AC was right: nothing in the KB article about RAM fragmentation. That program is also one of those create-"free"-RAM programs that I despise so much. These kinds of utilities might be marginally useful on a very resource-bound system, but I can hardly see the use for this crap. Even if RAM were somehow "defragmented", how could that possibly make it any faster? The bottleneck isn't in accessing the addresses. An OS keeps a running tab of what is stored where; as soon as it makes the request for the data, it's coming off the RAM as fast as the FSB will let it pass through. The reason defragmenting is effective on hard drives is that a hard drive has a physical dimension, where the heads take actual time to move to the desired location. RAM has no moving parts and hence extremely low latency, which is measured in nanoseconds, versus the milliseconds used to measure latency in hard drives.

    I smell snake oil here. That is, unless you have some real science to back up the benefits of RAM "defragmenting".

  • by Anonymous Coward on Thursday December 04, 2008 @07:52PM (#25996591)
    Any process is free to lock memory so that it will never be paged out. It's not something that is reserved for Microsoft products.
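
    On POSIX systems the call for this is mlock() (VirtualLock() is the Windows counterpart); a minimal sketch, assuming an unprivileged process whose RLIMIT_MEMLOCK allows 1MB:

        #include <sys/mman.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            size_t len = 1024 * 1024;   /* 1 MB of latency-critical data */
            void *buf = malloc(len);
            if (!buf) return 1;

            /* Pin the region: the kernel will not page these bytes out to
               swap. Can fail if RLIMIT_MEMLOCK is too low. */
            if (mlock(buf, len) != 0) {
                perror("mlock");
                free(buf);
                return 1;
            }

            /* ... work with buf, guaranteed resident ... */

            munlock(buf, len);
            free(buf);
            return 0;
        }
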
  • by nmb3000 ( 741169 ) on Thursday December 04, 2008 @08:01PM (#25996705) Journal

    Either he/she thinks "Virtual Memory" is the same thing as paging

    Physical memory, virtual memory, address space, and paging files are some of the most misunderstood things your average computer "expert" deals with. When it comes to Windows, probably few can explain why only 3GB of 4GB of physical RAM shows up on a 32-bit system. Probably even fewer can define the difference between "virtual memory" and "paging file".

    I highly recommend any Windows user or administrator read Mark Russinovich's latest blog entry, Pushing the Limits of Windows: Virtual Memory [technet.com]. It goes over all of these things, describes the difference between virtual memory and committed memory, and explains why it really is important to have a paging file, even on a system with 8GB of physical RAM. It should be required reading for any Windows admin.

  • by ChrisA90278 ( 905188 ) on Thursday December 04, 2008 @08:12PM (#25996831)

    "I've been running without a pagefile, in all versions of Windows,..."

    Not really. On a modern OS, when executable code is "loaded" from disk to RAM, it isn't really loaded. What the OS does is map the file that holds the code into virtual memory. So in effect, when you run a program called "foobar.exe", you have made that file a swap file. It gets better: the OS never has to copy those pages out to a swap file, because the data is already in foobar.exe. When the OS needs space, it can reuse the pages without writing them out, since it knows where to get the data back.

    So you are, in effect, using as many swap files as you have running programs.
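
    A minimal sketch of that kind of file-backed mapping, using POSIX mmap() (the filename here is hypothetical; MapViewOfFile is the rough Windows equivalent):

        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>

        int main(void)
        {
            int fd = open("foobar", O_RDONLY);   /* hypothetical file name */
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return 1; }

            /* Map the file read-only: pages fault in from the file on first
               access, and under memory pressure the kernel can simply drop
               them, because the file itself is the backing store. No swap
               space is ever consumed for these pages. */
            char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            printf("first byte: 0x%02x\n", (unsigned char)data[0]);

            munmap(data, st.st_size);
            close(fd);
            return 0;
        }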

  • Re:Can't hibernate (Score:3, Informative)

    by ZosX ( 517789 ) <zosxavius@nOSpAm.gmail.com> on Thursday December 04, 2008 @08:13PM (#25996847) Homepage

    It's all snake oil, man. First of all, nothing is even allocated directly to physical RAM; it all goes through the virtual memory pool, IIRC. The best a program like this can do is make sure the blocks are contiguous, but I truly fail to see the benefit of that. If it ain't broke.....

  • by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Thursday December 04, 2008 @08:15PM (#25996871) Homepage

    AmigaOS multitasked, and didn't use memory mapping like that...
    It had a flat memory model, and ran on processors which lacked the necessary memory management hardware.

    If you did have an upgraded cpu with MMU, then there were third party virtual memory addons.

  • Re:Can't hibernate (Score:3, Informative)

    by ZosX ( 517789 ) <zosxavius@nOSpAm.gmail.com> on Thursday December 04, 2008 @08:15PM (#25996877) Homepage

    Also note the lack of benchmarks on the website of the software in question. They only make vague references to what the program actually does, and then they talk a lot about all the bonus stuff it comes with (new task manager, etc). Snaaaaaaake oil.

  • by Tony Hoyle ( 11698 ) * <tmh@nodomain.org> on Thursday December 04, 2008 @08:25PM (#25996963) Homepage

    4GB on OS X here, and I've got 0 bytes in swap even after having both WoW and Firefox up for an hour or two.

    Windows is very aggressive in what it puts into swap.. if you come to a machine after it's been idle for a few hours, it runs like a dog while it swaps everything back.. that's a royal pain at work where we're doing database work - if MSSQL gets paged out, it takes a *long* time to recover to a running state. I tend to issue a create database and go and make my morning cup of tea.

  • by uhmmmm ( 512629 ) <.uhmmmm. .at. .gmail.com.> on Thursday December 04, 2008 @08:25PM (#25996973) Homepage

    What could be done, and I believe Linux does this, is write the pages out to swap space but, as long as there's no reason not to, keep the copies in RAM as well.

    If you need the RAM for disk cache, the RAM copy can be dropped, since it already exists on disk. If the program needs to access the memory, it's already there and doesn't need to be swapped back in.

    If the claim that Windows swaps out everything when the computer is left idle overnight is correct, then it is indeed a suboptimal paging algorithm. An idle computer has no reason to remove things from RAM, as there's nothing it needs the space for.

  • by belg4mit ( 152620 ) on Thursday December 04, 2008 @08:38PM (#25997121) Homepage

    >The confusion of the terms started when naive programmers at the UI level called that swap file "virtual memory".

    No, the confusion started before that, when someone thought it was a good idea to apply the term virtual to something real. In colloquial English, virtual is synonymous with pseudo. Obviously a swap file is "pseudo memory" and not real RAM. A more accurate term for "virtual memory" would be "remapped" or even (though less informative) "aliased."

    On a related note, it wouldn't surprise me if swap files played into the luser tendency to call disks "memory."

  • by blincoln ( 592401 ) on Thursday December 04, 2008 @09:03PM (#25997351) Homepage Journal

    In colloquial English, virtual is synonymous with pseudo.

    That's how it's being used here too. A program thinks it's reading and writing a contiguous block of memory at a particular address. That contiguous block of memory doesn't really exist, and if it's being paged, then there isn't even physical memory being accessed.
    Would you argue that VMWare's name is a misnomer just because real electronic computer hardware is involved and not some bizarre substitute like hydraulic logic?

  • by qw0ntum ( 831414 ) on Thursday December 04, 2008 @09:15PM (#25997469) Journal
    I know it was a joke, but actually, in an oversimplified sense, yes. A main point of virtual memory in its true sense is to abstract the limitations of your amount of physical memory away from user programs, and instead present them with an effectively limitless virtual address space to work with. When the program says, "read from memory address 0x(some huge number)", the OS/memory management unit translates that address request from a virtual page address to a physical frame via the page table. If there is no frame in memory that contains the data pointed to by the requested address, that's when you have a page fault. Then the operating system goes to disk and fetches the data you requested.

    Your performance would be abysmally slow, and it probably wouldn't work at all with modern operating systems (just a theoretical point here!), but assuming a good implementation of virtual memory, you should be able to run everything just fine. Of course, if you don't have enough disk space for your address space, you'll run into problems. :)
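
    A toy model of that translate-or-fault logic (everything here is simplified and invented for illustration; real page tables are multi-level structures walked by the MMU in hardware):

        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SIZE 4096u
        #define NUM_PAGES 1024

        static int frame_of[NUM_PAGES];  /* virtual page number -> physical frame */
        static int present[NUM_PAGES];   /* zero-initialised: nothing resident yet */

        static uint32_t translate(uint32_t vaddr)
        {
            uint32_t vpn    = vaddr / PAGE_SIZE;
            uint32_t offset = vaddr % PAGE_SIZE;
            if (!present[vpn]) {
                /* page fault: a real OS would read the page in from disk here,
                   possibly evicting a victim frame first */
                printf("page fault on virtual page %u\n", vpn);
                frame_of[vpn] = (int)vpn;  /* pretend it landed in some frame */
                present[vpn]  = 1;
            }
            return (uint32_t)frame_of[vpn] * PAGE_SIZE + offset;
        }

        int main(void)
        {
            printf("-> physical 0x%x\n", translate(0x2a10)); /* faults, then maps */
            printf("-> physical 0x%x\n", translate(0x2a14)); /* same page: no fault */
            return 0;
        }
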
  • Re:Can't hibernate (Score:4, Informative)

    by thebigmacd ( 545973 ) on Thursday December 04, 2008 @09:27PM (#25997561)

    There was never any implication that MS has anything to do with the disk drive business; SpaceLifeForm said sales of Windows would be helped.

    If the drive fails in 2 years instead of 5, the owner is likely going to go out and buy a new PC three years earlier than they need to, instead of getting the drive replaced; this generally means another sale of Windows.

  • by mikael_j ( 106439 ) on Thursday December 04, 2008 @09:31PM (#25997607)

    Clearly someone has never done tech support, and I don't mean "helped friends/relatives fix something" I mean in the trenches, taking calls all day long, every day.

    Trust me on this one, there are lots of really stupid people out there, and sadly tech support is a great place to find out that being intelligent and friendly don't help when you're faced with some guy with a "fancy" last name, an e-mail address that indicates that he is a partner at a well-known law firm and serious entitlement issues ("I WANT THIS FIXED NOW YOU GOD DAMN MOTHERFUCKING HIGH SCHOOL DROPOUT LOSER PUNK DO YOU HAVE ANY IDEA HOW MUCH MONEY I'M LOSING EVERY HOUR THAT MY (residential $15/month DSL) BROADBAND ISN'T WORKING I'M GONNA FILE A FUCKING LAWSUIT I WANT A FUCKING SUPERVISOR RIGHT GOD DAMN NOW YOU SHITHEAD LAZY KNOW-NOTHIN....", well you get the point). Also, when there's an outage these are the people who make you aware of the outage before the NOC calls to tell you about it because within 30 seconds of their DSL going down there's going to be about 50 of these people waiting to yell at you for the DSLAM getting destroyed by a direct lightning strike (and yeah, I've had to deal with something like 50% of the idiots who called about that particular outage demanding to speak to a supervisor because they felt I wasn't doing my job when I explained that it would take several days to repair the building the DSLAM was housed in before a replacement DSLAM could be installed. Also, this is the kind of person who works as a lawyer while somehow being unaware of the term "force majeure").

    To sum it up: There are lots of stupid assholes out there, it's not just plain stupidity due to genetic factors, there's also the issue of people who simply choose to stay uneducated about even the most basic computer skills (while relying on their computer to do their job) like understanding the difference between "a program" and "a website" or how to find the start menu in WinXP/Vista...

    /Mikael

  • by Desert Raven ( 52125 ) on Thursday December 04, 2008 @10:07PM (#25997895)

    Been there, feel your pain. But trust me, there's worse.

    If you REALLY want to lose faith in humanity, work in emergency services. I spent 6 years in medicine, and two years in law enforcement. You truly get to meet the scrapings of the gene pool that way.

  • by Reziac ( 43301 ) * on Thursday December 04, 2008 @10:11PM (#25997927) Homepage Journal

    That's a unique perspective :)

    But I suppose technically correct, unless you load the entire image into memory, frex on a system with no hard disk, or with the current live CDs.

    Back in the early '90s, a friend's dad had such a system in his gov't job, as a security measure -- it had a gig of RAM (then worth somewhere over $25,000) and NO hard disk. He loaded the OS and apps from tape every time he used the machine.

  • by ykliu ( 1276508 ) on Thursday December 04, 2008 @11:17PM (#25998475)

    Here are two great posts by the famous Mark Russinovich; they may give some insight:

    - http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx [technet.com]
    - http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx [technet.com]

  • by Mr. Slippery ( 47854 ) <tms&infamous,net> on Thursday December 04, 2008 @11:46PM (#25998705) Homepage

    I spent 6 years in medicine, and two years in law enforcement. You truly get to meet the scrapings of the gene pool that way.

    Hmmm. Judging by what I've been seeing lately, I'm not sure if you're referring to suspects or to your law enforcement co-workers...

    Perhaps I'm unfairly projecting the situation in Baltimore. A friend of mine was recently beaten and arrested by Baltimore cops for participating in a nonviolent animal rights demonstration, and a few weeks later a friend of hers, an acquaintance of mine, was arrested for trying to film Baltimore cops threatening and harassing her boyfriend. I've seen Baltimore cops arrive on the scene of a disturbance, grab a guy who wasn't involved and slam him against the wall, assault another guy who was only involved peripherally, let the instigators go, and completely ignore a witness who attempted to tell them what was going on.

    No wonder that Baltimore citizens have no trust in their police force.

    I know there's at least one intelligent and decent human being serving on the BCPD, but the environment seems to be corroding his soul.

    I hope higher standards were in place where you served.

  • Re:Can't hibernate (Score:3, Informative)

    by aurispector ( 530273 ) on Thursday December 04, 2008 @11:51PM (#25998753)

    I thought the same thing when I saw the ".ro". RAM is so goddamn fast these days that I can't believe there's any benefit at all to "defragging" it. However, regarding paging: the systems we use have design roots that date back decades. Back in the day, RAM was prohibitively expensive and had to be treated like a valuable, limited resource, and the OS was designed to deal with that fact. We are now stuck with paging systems that are archaic compared to the present state of RAM prices. The fact that we can often get away with turning off the entire paging system tells us something about this. Paging might still make sense in certain situations, but how many commonly used applications truly need several gigabytes of RAM in order to function properly?

  • Re:Can't hibernate (Score:1, Informative)

    by Anonymous Coward on Thursday December 04, 2008 @11:51PM (#25998761)

    The designs of your CPU's cache and translation look-aside buffer are optimized for locality of reference. RAM is much slower than CPU cache, and TLB misses cause extra cycles wasted translating virtual addresses into physical ones. So, contiguous RAM is a Good Thing.

    Amiga OS 4.0 defragments RAM when the system is idle. Surely, this isn't simply snake oil.

  • by Stormie ( 708 ) on Friday December 05, 2008 @12:02AM (#25998835) Homepage

    AmigaOS multitasked, and didn't use memory mapping like that... It had a flat memory model, and ran on processors which lacked the necessary memory management hardware.

    And paid the price, in the form of one program being able to trample another's memory, or crash the whole system (hence the famous Guru Meditation).

    The Amiga was actually the first thing I thought of when I read this Ask Slashdot - you may recall the immense prejudice against virtual memory from a lot of Amiga users, who thought that virtual memory simply meant swapping to disk. They didn't realise that releasing a range of Amigas which all had MMUs (i.e. non-crippled 030+ CPUs) and a version of the OS with virtual memory would cure a number of ills completely unrelated to swapping, such as memory fragmentation and the aforementioned ability of one rogue program to bring down the system.

  • by Fallingcow ( 213461 ) on Friday December 05, 2008 @12:04AM (#25998851) Homepage

    Just haven't gotten the wifi to work yet (1 week).

    Have you tried NDISWrapper?

    You can install it through Synaptic, the graphical package installation program.

    You'll need a Windows XP driver (some others might work, too, but XP is the best one to try first, in my experience) for your wireless card, and it needs to be in a zip file or similar (that is, not a .exe installer, since you need access to the installation files).

    Unzip the drivers to a folder. Make sure there's at least one .inf file in it, and if there's more than one, figure out which one looks like it's for your card (sometimes they put drivers for several different ones in a single archive).

    Open a console. You will be typing just two commands:

    sudo ndiswrapper -i /path/to/the/driver.inf
    sudo ndiswrapper -m

    The first command installs the driver, the second sets it to start at boot.

    If it still doesn't work after a reboot, make sure you've got the right driver, and maybe try one for another version of Windows. Some just won't work period, but many (most?) will.

    You can look at the man page for ndiswrapper [die.net] if you need more info.

    If you need extra info on your wireless card to help you find the Windows driver, try the command "lspci" at the command line; your card should be somewhere in the resulting list of hardware.

  • Re:OT: (Score:3, Informative)

    by Tacvek ( 948259 ) on Friday December 05, 2008 @12:13AM (#25998901) Journal

    For a short period of time, it was a really useful technique on LiveCDs if there was no swap partition on disk, or if you wanted to preserve the on-disk swap partition for forensic analysis. I'm fairly certain things have been much improved since then, to the point where this does not really buy you anything. I was never aware of the technical reason for it to be true, but it clearly was some sort of inefficiency in the use of "plain" RAM that the swap system did not suffer from.

  • Re:Can't hibernate (Score:1, Informative)

    by Anonymous Coward on Friday December 05, 2008 @12:19AM (#25998935)

    Actually, there is a significant delay when reading from RAM; if there weren't, we would have no need for L1/L2 caches. When the CPU needs to access data that is not in cache, it must ask the memory controller for it, and that is nowhere near as fast as the CPU. The good thing is, the CPU can ask the memory controller for a large block of contiguous RAM at one time and get the whole chunk back in only slightly longer than a single address would take.

    This is where memory fragmentation comes into play. If an application needs to access a large number of variables at once, it can grab one large block from the memory controller, but if they are scattered all over the place, each address must be fetched individually.

    Feel free to test this: create a RAM disk that can store a multi-gig database and do some heavy queries (full table scans) against it. There will be zero disk I/O, but the CPU will spend 25-50% of its time in an I/O wait state. Of course, Windows tries to page everything, so it won't be quite as accurate, but you'll get the idea.

  • by twizmer ( 1206952 ) on Friday December 05, 2008 @01:14AM (#25999205)
    No, no, he had to pat himself on the back about his degree, and his brilliant laziness. Definitely true CS fashion.
  • Re:Agreed (Score:3, Informative)

    by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Friday December 05, 2008 @01:40AM (#25999365) Homepage

    Don't forget laptops. I get better battery life and system performance with a lower swappiness; it doesn't spin the drive up as often. I use 20 to good effect (especially with 4GB of RAM).

  • by Zan Lynx ( 87672 ) on Friday December 05, 2008 @02:37AM (#25999665) Homepage

    The answer to why /PAE and /3GB are not defaults is 3rd party drivers.

    Yes, believe it or not, there are Windows drivers that believe user-space pointers and kernel-space pointers will always and forever be in the same 2GB memory spaces they have always been in.

  • Life without paging (Score:3, Informative)

    by Animats ( 122034 ) on Friday December 05, 2008 @02:40AM (#25999679) Homepage

    QNX, the real-time operating system, does not page to disk. (Not for most processes, anyway. Read on.) The GUI has a bar in the lower right hand corner which shows how much memory is in use; if memory fills up, processes have requests for more memory rejected.

    The big advantage of this is consistent performance. Hard real time works. There are no pauses waiting for the disk. This is what you want for entertainment applications, like video players, audio players, and games.

    It's possible to set up paging for specific programs, though. GCC has paging, so that huge programs can be compiled, somewhat slowly, on smaller systems. By default, though, paging is not used. Provided that applications aren't bloatware, this works quite well.

    It's something to think about for Linux. Should programs be paged by default? Maybe that era is over.

  • by rdebath ( 884132 ) on Friday December 05, 2008 @03:10AM (#25999817)

    If you consider the "REAL" memory to be the stuff that's normally labeled "cache"...

    For example, this Athlon 64 has 1MB of physical memory on the CPU, divided into "pages" (cache lines) of 64 bytes. If your working set is less than a megabyte, you will be running entirely from this memory and running very, very fast.

    As soon as you start 'paging out' to the so-called "main memory", your performance takes a dive.

    The only differences between this and the level normally mentioned are speed, size, and the fact that this level is usually controlled completely by the hardware.

  • by Sits ( 117492 ) on Friday December 05, 2008 @04:28AM (#26000177) Homepage Journal

    If you have caches smaller than your real RAM, the order in which you access memory really CAN make a difference, because cache is many times faster than regular RAM and will try to do things like speculative readahead. If what you are working with is already in the cache by the time you request it, you won't stall for as long.

    If you are forever causing the cache to be flushed and refilled with different contents (perhaps because you are doing a large number of random memory accesses and the cache's readahead is getting your future accesses wrong, so it has to be turned off), then performance will by comparison be slower than for a sequential memory access workload.

    The above is of course a gross simplification (and doesn't apply if what you are reading fits entirely within the cache and is already there). If you have the technical chops, you can read more about how access order can affect speed in Ulrich Drepper's What Every Programmer Should Know About Memory [lwn.net] on LWN [lwn.net].
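
    A classic way to see this effect for yourself; a sketch assuming the array fits in RAM (64MB here) but not in cache, with timings that will of course vary by machine:

        #include <stdio.h>
        #include <time.h>

        #define N 4096

        static int a[N][N];   /* 64 MB; shrink if your machine is small */

        int main(void)
        {
            clock_t t0 = clock();
            for (int i = 0; i < N; i++)        /* row-major: walks memory        */
                for (int j = 0; j < N; j++)    /* sequentially, so cache lines   */
                    a[i][j]++;                 /* and readahead help             */

            clock_t t1 = clock();
            for (int j = 0; j < N; j++)        /* column-major: jumps N*sizeof(int) */
                for (int i = 0; i < N; i++)    /* bytes per access, defeating      */
                    a[i][j]++;                 /* the cache                        */

            clock_t t2 = clock();
            printf("row-major:    %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
            printf("column-major: %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }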

  • by Spad ( 470073 ) <slashdot.spad@co@uk> on Friday December 05, 2008 @06:18AM (#26000749) Homepage

    HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management

    DisablePagingExecutive

    0 = Drivers and system code can be paged to disk as needed.

    1 = Drivers and system code must remain in physical memory.
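
    For what it's worth, one way to set that value programmatically with the documented Win32 registry API, rather than regedit; a sketch only, assuming administrator rights (and a reboot for it to take effect):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HKEY key;
            DWORD value = 1;   /* keep drivers and system code resident */

            if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                    "System\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
                    0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
                fprintf(stderr, "could not open key (run as administrator?)\n");
                return 1;
            }
            if (RegSetValueEx(key, "DisablePagingExecutive", 0, REG_DWORD,
                    (const BYTE *)&value, sizeof(value)) != ERROR_SUCCESS) {
                fprintf(stderr, "could not set value\n");
                RegCloseKey(key);
                return 1;
            }
            RegCloseKey(key);
            puts("DisablePagingExecutive set to 1; reboot for it to take effect");
            return 0;
        }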

  • by TheThiefMaster ( 992038 ) on Friday December 05, 2008 @07:25AM (#26001133)

    Modern CPUs support virtual memory in hardware, meaning it causes no slowdown at all (unless the referenced page isn't in RAM, of course).
    AMD's prototypes even support the nested paging layout used by virtual machines.

  • by qw0ntum ( 831414 ) on Friday December 05, 2008 @08:45AM (#26001507) Journal
    You mean to say "page file" rather than "virtual memory". Calling it "virtual memory" is something of a misnomer: virtual memory is a scheme for hiding the details of physical memory from user programs and providing protection between programs; it can also be used to implement shared memory. Go read the textbook from your OS course for more info.

    Assuming you mean "page file", the question is: will you ever use more than 1024GB worth of memory at any one time? Chances are you won't, so you'd probably be fine, assuming no developers of any software you are running made bad assumptions about your configuration. And if you ever used more than that, you'd hit an out-of-memory error when you went over, which the page file would have prevented. Potentially, you'd have a lot of old "junk" in memory that you probably don't need, which could have been swapped out to disk.

    You have to remember that if your machine is using 32-bit addresses, the virtual address space for each process is 4GB: conceivably, each process can address up to that amount of memory to hold its code, stack, and data (execution data). For the vast, vast majority of programs all of these are on the order of megabytes (even Firefox, sometimes!), but theoretically it would be possible for every one of your processes to request 4GB.
  • by hesiod ( 111176 ) on Friday December 05, 2008 @11:16AM (#26002887)

    he/she (wait, this is /.) thinks "Virtual Memory" is the same thing as paging

    I will refer you to Windows XP's "Performance Options", section "Virtual Memory", which states: "A paging file is an area on the hard disk that Windows uses as if it were RAM." That would probably cause this "confusion".

  • Re:Can't hibernate (Score:4, Informative)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Friday December 05, 2008 @11:29AM (#26003033) Homepage Journal

    Does this happen in USA a lot? if a light in the fridge goes out, do you buy a new one? when a tire is blown out, do you buy a new car?

    No, but it's a little different with electronics in general. First, assume here that we're talking about out-of-warranty items so that the owner is responsible for costs. Each town used to have several TV repair shops, but it came to be that it wasn't any cheaper to fix one than to replace it. The same with a clock radio; while you might be able to find someone qualified to troubleshoot and fix it, it'd probably be cheaper just to buy a new one. Well, a lot of people lump computers into the same category. If the hard drive (or CPU or RAM or video card) fails, then they figure it might be cheaper to buy a new one than to replace the bad parts.

    Honestly, they're probably right. Suppose you're Joe Sixpack with a busted Dell and take it to Best Buy so their experts [1] can check it out. They quote you $147 for a new 60GB hard drive [2] plus $75 in labor. You're looking at $200+ to fix a two-year-old PC. Being the frugal type, you check out dell.com and see that you can buy a brand new one for $279 that's faster, has more storage, and has that Mojave thing so you can view photos. I won't really hold it against you for spending an extra $50 to get a new, better computer with a full warranty [3].

    [1] Work with me here.
    [2] You could get a 750GB drive for the same price, but your computer was "designed for a 60GB drive", and they're hard to get now. Luckily for you, they were able to find one in the warehouse.
    [3] ...which will run out the week before the embedded graphics chipset overheats.

  • answer (Score:1, Informative)

    by Anonymous Coward on Friday December 05, 2008 @12:29PM (#26003823)

    Here's exactly what you're looking for:

    http://www.tomshardware.com/reviews/vista-workshop,1775-6.html

  • by bugnuts ( 94678 ) on Friday December 05, 2008 @01:06PM (#26004311) Journal

    "why use physical memory in modern systems"

    Obviously, all computers use physical memory... duh :)
    The question should be "why swap memory to disk in modern systems?"

    The answer is that pretty much ALL performance-based systems (such as everything in the top 500 [top500.org] supercomputers) do not page. It is a performance-versus-convenience issue. Swapping is a huge convenience for most users: it allows large programs to load and run without monopolizing limited resources such as physical memory. If you run few memory-hog applications, then little swapping will be done, and it will hardly affect you.

    Nevertheless, equating virtual memory with page swapping on the front page of a geek journal was plainly wrong and should've been addressed.

  • by hardburn ( 141468 ) <hardburn@wumpus-ca[ ]net ['ve.' in gap]> on Friday December 05, 2008 @01:54PM (#26004883)

    Using up the extra power gets us programmers who can worry about solving the actual problem, rather than hand-optimizing their assembler to execute in less than 50 clock cycles. We may not be technically accomplishing more computer work per clock cycle, but we're definitely accomplishing more programmer work per wall-clock hour.

  • by FredFredrickson ( 1177871 ) * on Friday December 05, 2008 @02:56PM (#26005705) Homepage Journal
    I'd say problem-solving skill, at the basic level, is a very accurate indication of overall intelligence.

    Being able to read, memorize, and recite stupid facts from a book does not require intelligence. Being able to read, understand, and utilize concepts: that's intelligence.

    Same goes for computers.
