



Why Use Virtual Memory In Modern Systems?
Cyberhwk writes "I have a system with Windows Vista Ultimate (64-bit) installed on it, and it has 4GB of RAM. However, when I've been watching system performance, my system seems to divide the work between the physical RAM and the virtual memory, so I have 2GB of data in the virtual memory and another 2GB in the physical memory. Is there a reason why my system should even be using the virtual memory anymore? I would think the computer would run better if it based everything off of RAM instead of virtual memory. Any thoughts on this matter, or could you explain why the system is acting this way?"
You mean physical memory right :-) (Score:5, Informative)
You must be confused about virtual vs. physical memory. In modern processors there is no penalty for using virtual memory: all translation from virtual to physical address space is done in hardware inside the processor, and you won't notice the difference.
So all the physical memory installed in your PC is used by the processor as one big pool of resources. Processes can think whatever they want and address huge memory spaces; that's all in virtual land. Virtual memory only starts impacting performance when pages are being swapped in and out, because all your processes together need more resident memory than you actually have.
Swapping means accessing the disk and freezing the requesting process until its page of memory has arrived from the disk, which takes millions of processor cycles (a lifetime from the processor's point of view). It's not so bad if you swap once in a while, as the processor can work on other processes while waiting for the data to arrive, but if all your programs keep pushing each other out of physical memory, you get thrashing, and you can consider yourself happy if the mouse pointer is still responsive!
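On a Linux box you can actually watch this happening; a minimal sketch using vmstat (a standard tool; only the sampling interval below is my choice): the si and so columns are pages swapped in and out per second, and sustained non-zero values there while the disk is grinding are the classic sign of thrashing.
# Report memory and swap activity every 2 seconds, 5 samples
vmstat 2 5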
So you may want to change the title of your post to: "why use physical memory in modern systems?". I would point you to an article I wrote on that topic in 1990, but somehow I can't find a link to it on the web :-)
fairsoftware.net - software developers share revenue from the apps they build
Re:You mean physical memory right :-) (Score:5, Insightful)
You must be confused about virtual vs. physical memory.
Indeed. When I read this story my knee jerk reaction was "please be gentle." And thankfully the first +5 post on this story is informative and helpful and relatively kind.
I fear the "turn off your computer, put it in a box and mail it back to the manufacturer" hardcore hardware experts that are going to show up in 3 ... 2 ... 1 ...
Re:You mean physical memory right :-) (Score:5, Funny)
Gentle answers is what 6 years in customer support teaches you.
That, or hating everyone ;-)
Re:You mean physical memory right :-) (Score:5, Funny)
Gentle answers is what 6 years in customer support teaches you.
That, or hating everyone ;-)
That kind of attitude really pisses me off! ;-)
Re:You mean physical memory right :-) (Score:5, Funny)
That kind of attitude really pisses me off! ;-)
yes, I detest being gently hated by patronising tech support heroes
Re:You mean physical memory right :-) (Score:5, Insightful)
You do realize 99% of the human population is dumber than headless chickens, right?
Most people are not incredibly knowledgeable about computers. There's a big difference. Pretty much everyone is very good at something. That's why some people get paid to sell merchandise, design hardware, repair engines, cook food, synthesize chemicals, or perform surgery, and others get paid to solve computer problems.
Re:You mean physical memory right :-) (Score:5, Informative)
Clearly someone has never done tech support, and I don't mean "helped friends/relatives fix something" I mean in the trenches, taking calls all day long, every day.
Trust me on this one, there are lots of really stupid people out there, and sadly tech support is a great place to find out that being intelligent and friendly don't help when you're faced with some guy with a "fancy" last name, an e-mail address that indicates that he is a partner at a well-known law firm and serious entitlement issues ("I WANT THIS FIXED NOW YOU GOD DAMN MOTHERFUCKING HIGH SCHOOL DROPOUT LOSER PUNK DO YOU HAVE ANY IDEA HOW MUCH MONEY I'M LOSING EVERY HOUR THAT MY (residential $15/month DSL) BROADBAND ISN'T WORKING I'M GONNA FILE A FUCKING LAWSUIT I WANT A FUCKING SUPERVISOR RIGHT GOD DAMN NOW YOU SHITHEAD LAZY KNOW-NOTHIN....", well you get the point). Also, when there's an outage these are the people who make you aware of the outage before the NOC calls to tell you about it because within 30 seconds of their DSL going down there's going to be about 50 of these people waiting to yell at you for the DSLAM getting destroyed by a direct lightning strike (and yeah, I've had to deal with something like 50% of the idiots who called about that particular outage demanding to speak to a supervisor because they felt I wasn't doing my job when I explained that it would take several days to repair the building the DSLAM was housed in before a replacement DSLAM could be installed. Also, this is the kind of person who works as a lawyer while somehow being unaware of the term "force majeure").
To sum it up: there are lots of stupid assholes out there, and it's not just plain stupidity due to genetic factors; there's also the issue of people who simply choose to stay uneducated about even the most basic computer skills (while relying on their computer to do their job), like understanding the difference between "a program" and "a website" or how to find the Start menu in WinXP/Vista...
/Mikael
Re:You mean physical memory right :-) (Score:5, Funny)
Hey, they gave you a gun and ammo to narrow the gene pool, what happened?
Re:You mean physical memory right :-) (Score:4, Funny)
hmm, maybe all IT people should be allowed to hand out pills and carry guns.
Re:You mean physical memory right :-) (Score:5, Insightful)
I'm fine with folks not knowing about computers. That's cool. The thing that annoys me, though, is that they're /proud/ of it. It's like it's a badge of honor! Any sort of discussion about computer issues will always bring up some yahoo who says "Oh, I don't know a /thing/ about that! hur hur, in my day, all we had was pen and paper..." etc etc etc. The fact is, knowledge - basic knowledge - of computers is only going to get more important. Hiding your head under a rock isn't going to magically make it go away.
And it's not the age thing, either - I've got a friend who is in his 70s, and his knowledge of technical things is way up there - he's a pure Linux guy, uses Myth to serve TV content all around the house, and is a very active member of the local Unix club. Some people just don't seem to want to learn the basics.
Re:You mean physical memory right :-) (Score:5, Insightful)
Actually, that may be politically correct, but it's not true.
Sure, everyone has strong and weak sides, but nevertheless, the tendency is for some people to know a lot about a huge array of topics, and for other people to be pretty unknowledgeable about pretty much everything.
That nobody can specialize in everything is, however, true.
You do need one surgeon, and a different cryptographer, true. But still, the odds are that either of them will know more about the basics of the other's work than a random person you ask on the street.
Re:... everyone is very good at something. (Score:5, Insightful)
Ummm... no. There are a statistically significant number of humans who aren't notably good at anything. I have unhappily encountered too many of them, both in and out of tech work. This is akin to actually believing that "all men are created equal" merely because it would be really really neat and make you feel all warm-and-fuzzy inside if it were true.
Even if your pollyanna perspective were true, being competent at some task doesn't directly equate with an absence of dumbassery. There are numerous species of "dumb" creatures that can be trained to memorize some task and then mimic (repeat) it perfectly ad nauseam... including H. sapiens. An ability to memorize and mimic doesn't equate directly with intelligence. It's a precursor, a prerequisite, perhaps, but not the Real McCoy.
A shocking number of humans, including many regarded as "average" by testing standards, never actually reach a state of true intelligence. Too many of them are profoundly ignorant and quite determined to remain that way.
Re:You mean physical memory right :-) (Score:5, Insightful)
Never going to happen.
When cars were first sold to the public, if you bought one you'd damned well better know how to fix it yourself. Fast forward to now. Plenty of idiots still buy cars and are completely fucked when it comes time to do something as minor as changing the oil or spark plugs. </gratuitous car analogy>
That's around a hundred years of people refusing to learn really simple shit. What makes you think it will be different with computers over a shorter timespan?
Re:You mean physical memory right :-) (Score:4, Insightful)
Uh, no. When cars were FIRST sold to the public, if you bought one you could afford to pay one of your servants to maintain it.
Besides, that's still a bad analogy, because it's not that most people couldn't change the oil or spark plugs on a car, it's just that it's too much of a pain in the ass for people to do it. I could teach anyone how to do it in theory. You just follow a few simple steps. But it's much easier to simply pay a guy 25 bucks every couple of months than have to crawl under the car, muck around with dirty oil, figure out where to dispose of the old stuff, and so on. Given that, there's not really any real need for me to know how to do it, any more than I need to know how to perform surgery or cook escargot. Although in point of fact I do know how to change the oil on my car (having changed the oil on numerous motorcycles purely for the fun of it), I see no reason to call anyone who doesn't have a clue how to do it an idiot.
Computers are getting simpler. They are getting to the point where it makes sense to learn how to use them and how to fix them when something minor goes wrong. This is the standard level for computer literacy. A better car analogy would be to observe that when cars were first sold to the public, they were complicated to operate, difficult to start, and not many people saw the use of them. Over time, however, they became simpler and simpler, to the point where it is reasonably expected that any given adult will be able to drive a car. This is what is increasingly happening with computers. Some from the older generations will learn to adapt to the new technology, and some will not. But within our lifetimes, computer competence will be expected of people, especially when computers have become simple and ubiquitous. To an extent, this is already the case. However, the general expectation is not that anybody could write software (i.e. design a car part) or be able to fix computers that have suffered a serious malfunction (i.e. replace the cooling system). It's not even that most people are expected to be able to handle routine maintenance on their own, hence the need for automatic software updates--you don't need to understand the details, just that you need to do it every so often. Just like changing your car's oil.
Re:You mean physical memory right :-) (Score:5, Funny)
It's a Christmas miracle!
Don't let Frank hear you say that. (Score:4, Funny)
It's a FESTIVUS miracle.
You insensitive clod.
Where's my aluminum pole?
Re:You mean physical memory right :-) (Score:5, Funny)
So do I really only need 640k of physical memory if I have a modern system?
Re:You mean physical memory right :-) (Score:5, Informative)
Your performance would be abysmally slow, and it probably wouldn't work at all with modern operating systems (just a theoretical point here!), but assuming a good implementation of virtual memory you should be able to run everything just fine. Of course, if you don't have enough disk space for your address space, you'll run into problems.
Re:You mean physical memory right :-) (Score:5, Insightful)
Actually, no, the author was correct in Microsoft Windows' terms. This is the exact text used in System Properties -> Advanced tab under Virtual memory:
"A paging file is an area on the hard disk that Windows uses as if it were RAM."
You might think, well, they said paging file, not virtual memory - but click the Change button and you'll see a dialog pop up named "Virtual Memory", in which you can specify multiple paging files on multiple drives if you want to. It defaults to a single paging file on the C:\ or boot drive. So blame Microsoft for the confusing use of virtual memory and paging file as interchangeable terms. I guess by "virtual memory" they mean the collective use of the paging files (for those situations where there's more than one paging file in use, just like on Linux you can have more than one swap file in use).
Anyway, I too have seen Windows 2000 and XP make heavy use of the paging file even though there is clearly enough physical memory available. Some friends of mine have even disabled the paging file completely; at first you get a warning about it, but other than that they have reported better system performance and no drawbacks since then. This is on systems with at least 3GB of RAM.
Re:You mean physical memory right :-) (Score:5, Informative)
Sorry, got to correct the path to where exactly I got that quote from:
System Properties -> Advanced -> Performance area, click Settings -> Advanced tab (on Windows XP; on 2000 it's the default tab).
Re: (Score:3, Insightful)
So blame Microsoft for the confusing use of virtual memory and paging file
I'm no Microsoft fanboy, but I don't think you can blame them for this, especially when "virtual memory" originally did mean what the OP thinks it does. I'd like to know when the definition changed.
Re:You mean physical memory right :-) (Score:5, Informative)
It never did change. "Virtual Memory" always meant a trick the kernel and CPU do to make programs think they are accessing a different memory address than they actually are. This trick is necessary in all multitasking operating systems.
Once you've made the jump to mapping real memory addresses to fake ones, it's easy to map the fake addresses to a swap file on the hard drive instead of actual RAM. The confusion of the terms started when naive programmers at the UI level called that swap file "virtual memory".
Re:You mean physical memory right :-) (Score:4, Informative)
AmigaOS multitasked, and didn't use memory mapping like that...
It had a flat memory model, and ran on processors which lacked the necessary memory management hardware.
If you did have an upgraded cpu with MMU, then there were third party virtual memory addons.
Re:You mean physical memory right :-) (Score:4, Informative)
And paid the price, in the form of one program being able to trample another's memory, or crash the whole system (hence the famous Guru Meditation).
The Amiga was actually the first thing I thought of when I read this Ask Slashdot - you may recall the immense prejudice against virtual memory from a lot of Amiga users, who thought that virtual memory simply meant swapping to disk. They didn't realise that releasing a range of Amigas which all had MMUs (i.e. non-crippled 030+ CPUs) and a version of the OS with virtual memory would cure a number of ills completely unrelated to swapping, such as memory fragmentation and the aforementioned ability of one rogue program to bring down the system.
Re:You mean physical memory right :-) (Score:5, Interesting)
I've been running without a pagefile, in all versions of Windows, for about 10 years now -- on any machine with more than 512mb.
The only drawback is that a few stupid Photoshop plugins whine and refuse to run, because if they don't see a pagefile, they believe there is "not enough memory" -- a holdover from the era when RAM was expensive and the pagefile was a busy place. Sometimes I think about making a very small pagefile just for them, but have never actually got around to doing it.
Re:You mean physical memory right :-) (Score:5, Informative)
"I've been running without a pagefile, in all versions of Windows,..."
Not really. On a modern OS, when executable code is "loaded" from disk to RAM, it isn't really loaded. What the OS does is map the file that holds the code into virtual memory. So in effect, when you run a program called "foobar.exe", you have made that file a swap file. It gets better. The OS never has to copy those pages out of RAM, because the data is already in foobar.exe. When the OS needs space it can re-use the pages without needing to write them to a swap file, because it knows where to get the data back.
So you are, in effect, using as many swap files as programs you are running.
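For a rough Linux-side illustration of the same idea (not the poster's Windows box), you can list a process's file-backed mappings and see that the executable and shared libraries are exactly that kind of "free" backing store; a minimal sketch using the shell's own PID:
# Show the memory map of the current shell; the Mapping column names the backing file
pmap -x $$
# Roughly the same view straight from the kernel
grep -E '\.so|bash' /proc/$$/maps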
OT: (Score:5, Funny)
Perhaps set up a small ramdisk and pf to that?
Re:You mean physical memory right :-) (Score:5, Informative)
Have you tried NDISWrapper?
You can install it through Synaptic, the graphical package installation program.
You'll need a Windows XP driver (some others might work, too, but XP is the best one to try first, in my experience) for your wireless card, and it needs to be in a zip file or similar (that is, not a .exe installer, since you need access to the installation files).
Unzip the drivers to a folder. Make sure there's at least one .inf file in it, and if there's more than one, figure out which one looks like it's for your card (sometimes they put drivers for several different cards in a single archive).
Open a console. You will be typing just two commands:
sudo ndiswrapper -i /path/to/the/driver.inf
sudo ndiswrapper -m
The first command installs the driver, the second sets it to start at boot.
If it still doesn't work after a reboot, make sure you've got the right driver, and maybe try one for another version of Windows. Some just won't work period, but many (most?) will.
You can look at the man page for ndiswrapper [die.net] if you need more info.
If you need extra info on your wireless card to help you find the Windows driver, try the command "lspci" at the command line, your card should be somewhere on the resulting list of hardware.
Re:I just disabled it on my ASUS Eee PC (Score:4, Informative)
HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management
DisablePagingExecutive
0 = Drivers and system code can be paged to disk as needed.
1 = Drivers and system code must remain in physical memory.
Can't hibernate (Score:5, Interesting)
Windows makes me CRAZY about this. The OS is internally configured to use an LRU algorithm to aggressively page.
("Technical bastards" who question my use of paging and swap interchangeably in this post can send their flames to /dev/null \Device\Null or NUL depending on OS)
What I found when disabling paging on an XP Pro system with 2GB RAM is that system performance is explosively faster without the disk IO.
Even an *idle* XP pro system swaps - explaining the time it takes for the system to be responsive to your request to maximize a window you have not used in a while.
I was thrilled to have a rocket-fast system again - until I tried to hibernate my laptop. Note that the hibernation file is unrelated to the swap/paging space.
The machine consistently would blue screen when trying to hibernate if swap/paging was disabled. Enabling swap enabled the hibernation function again. Since reboots take *FOREVER* to reload all the crap that XP needs on an enterprise-connected system - systems management, anti-virus agent, software distribution tool, and the required RAM-defragger which allows XP to "stand by" when you've got more than 1GB of RAM, plus IM, etc.,
I reboot as infrequently as possible and consider "stand by" and "hibernate" required functions. As a result, I live with XP and paging enabled, and tolerate the blasted system "unpaging" apps that have been idle a short time.
Poo!
Re:Can't hibernate (Score:5, Informative)
Uh. You do realize that blocks of RAM are not written contiguously, right? You won't find it any different on Linux or MacOS or any operating system for that matter. You also realize that the access time of RAM is effectively zero, right? Yeah, the AC was right. Nothing in the KB article about RAM fragmentation. That program is also one of those "create free RAM" programs that I despise so much. These kinds of utilities might be marginally useful on a very resource-bound system, but I can hardly see the use for this crap. Even if RAM were to be somehow "defragmented", how could it possibly make anything faster? The bottleneck isn't in accessing the addresses. An OS keeps a running tab of what is stored where. As soon as it makes the request for the data, it's coming off the RAM as fast as the FSB will let it pass through. The reason defragmenting is effective on hard drives is that a hard drive has a physical dimension, where the heads take actual time to move to the desired location. RAM has no moving parts and hence extremely low latency, measured in nanoseconds versus the milliseconds used to measure latency in hard drives.
I smell snake oil here. That is, unless you have some real science to back up the benefits of RAM "defragmenting".
Depends on whether you have different types of RAM (Score:4, Informative)
If you have caches smaller than your real RAM, the order in which you access memory really CAN make a difference, because cache is many times faster than regular RAM and will try to do things like speculative readahead. If what you are working with is already in the cache by the time you request it, then you won't stall for as long.
If you are forever causing the cache to be flushed and refilled with different contents (perhaps because you are generating a large number of random memory accesses and the cache's readahead is getting your future accesses wrong, so it has to be turned off), then performance will by comparison be slower than for a sequential memory access workload.
The above is of course a gross simplification (and doesn't apply if what you are reading fits entirely within cache and is already there). If you have the technical chops you can read more about how order of access can have an impact on speed in Ulrich Drepper's What Every Programmer Should Know About Memory [lwn.net] on LWN [lwn.net].
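If you want to measure this for a particular program on Linux, valgrind's cachegrind tool simulates the cache hierarchy and reports hits and misses; a minimal sketch, where ./my_program is just a stand-in name for whatever you want to profile:
# Simulate the CPU caches and count reads, writes and misses for one run
valgrind --tool=cachegrind ./my_program
# Break the counts down per source line (assumes only one cachegrind.out.* file is present)
cg_annotate cachegrind.out.*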
Re:Can't hibernate (Score:5, Funny)
I have had good experience with Fast defrag freeware from http://www.amsn.ro/ [www.amsn.ro]
Ah, the joy of running closed-source system-level software of dubious necessity from a tiny shop in a Warsaw Pact country. Was it recommended by the new Nigerian friend that you're helping transfer money?
Re:Can't hibernate (Score:4, Informative)
There was never any implication that MS has anything to do with the disk drive business; SpaceLifeForm said sales of Windows would be helped.
If the drive fails in 2 years instead of 5, the owner is likely going to go out and buy a new PC three years earlier than they need to, instead of getting the drive replaced; this generally means another sale of Windows.
Re:Can't hibernate (Score:4, Insightful)
Does this happen in the USA a lot? If a light in the fridge goes out, do you buy a new fridge? When a tire blows out, do you buy a new car?
Gee, and then some people wonder why Americans consume 50% of the global resources...
Re:Can't hibernate (Score:4, Funny)
Your post pissed me off! I'm buying a new computer.
Re:Can't hibernate (Score:4, Informative)
Does this happen in the USA a lot? If a light in the fridge goes out, do you buy a new fridge? When a tire blows out, do you buy a new car?
No, but it's a little different with electronics in general. First, assume here that we're talking about out-of-warranty items so that the owner is responsible for costs. Each town used to have several TV repair shops, but it came to be that it wasn't any cheaper to fix one than to replace it. The same with a clock radio; while you might be able to find someone qualified to troubleshoot and fix it, it'd probably be cheaper just to buy a new one. Well, a lot of people lump computers into the same category. If the hard drive (or CPU or RAM or video card) fails, then they figure it might be cheaper to buy a new one than to replace the bad parts.
Honestly, they're probably right. Suppose you're Joe Sixpack with a busted Dell and take it to Best Buy so their experts [1] can check it out. They quote you $147 for a new 60GB hard drive [2] plus $75 in labor. You're looking at $200+ to fix a two-year-old PC. Being the frugal type, you check out dell.com and see that you can buy a brand new one for $279 that's faster, has more storage, and has that Mojave thing so you can view photos. I won't really hold it against you for spending an extra $50 to get a new, better computer with a full warranty [3].
[1] Work with me here.
[2] You could get a 750GB drive for the same price, but your computer was "designed for a 60GB drive", and they're hard to get now. Luckily for you, they were able to find one in the warehouse.
[3] ...which will run out the week before the embedded graphics chipset overheats.
Re:Can't hibernate (Score:4, Insightful)
You'd have thought after all this time they could've corrected one of the most annoying "features", which stops me from using Windows for any length of time. It certainly appears that after X amount of inactivity (however that may be classified) stuff just gets swapped out even if you have enough physical memory!
Considering the way I normally work is to have many applications open - perhaps an IDE, a handful of terminals, a web browser, an e-mail client - then spend X amount of time with one application, then switch to another (test/deploy/whatever), then maybe check e-mail & web, by the time I get around to switching to my next task the previous applications have at least partially been swapped to disk.
When I was using Windows at work, by the end of each day I was getting so incensed by it that it'd be hands in the air and muffled swearing whenever it happened - a total productivity killer.
Let's just say I'm back in Linux & Solaris land now. I have almost the same set of applications open with no problems - and that's on top of running my testing environment on the same machine.
rephrasing his question charitably... (Score:5, Interesting)
I'd assume what he's asking is: in modern systems where the amount of physical RAM is considerably larger than what most people's programs in total use, why does the OS ever swap RAM out to disk?
The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.
Of course, OS designers are always revisiting these assumptions---it may be that for some kinds of use patterns using a smaller disk cache and swapping RAM out to disk less leads to better performance, or at least better responsiveness (if that's the goal).
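You can see that tradeoff directly on a Linux system; a minimal sketch using free (the exact column names vary between versions of the tool): the cached figure is RAM being lent to the disk cache, and the Swap line shows how much has been paged out to make room for it.
# Show memory and swap usage in megabytes
free -m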
Re:rephrasing his question charitably... (Score:5, Interesting)
The question, though, is how the reduction in disk cache size that results from having no virtual memory to speak of affects your runtime. Rather than seeing it all at once, like when you swap Firefox back in, are you taking longer to navigate directories because the system has to read them in every single time? And when you're using Firefox, does it take longer to check its disk cache? Are you saving 2 seconds when you switch applications by losing 60 seconds over the course of 10 minutes as you're actually using an individual application?
Saving the 60 seconds (perhaps at the expense of the 2 seconds) is exactly what the block cache is trying to do for you. Whether it's succeeding or not, or how well, is a different question. :-)
Re:rephrasing his question charitably... (Score:5, Interesting)
It might save 60 seconds, but it's saving the wrong 60 seconds. I'm not going to notice everything being very slightly faster, but I'll notice Firefox being swapped back from disc. I only care how long something takes if I have to wait for it.
Kernel developers seem to mostly care about benchmarks, and interactive latency is hard to benchmark. This leads to crazy things like Andrew Morton claiming to run swappiness 100 (swappiness 0 is the only acceptable value IMO if you need swap at all). I don't use swap, and with 4GB of RAM I never need it.
Re:rephrasing his question charitably... (Score:5, Insightful)
Whether or not it works (and I'm not sure how well it does), there's something odd about swapping out RAM contents to disk so that you can mirror disk contents in RAM.
Re:rephrasing his question charitably... (Score:5, Interesting)
The problem I noticed with XP (dunno if Vista does the same) is that it doesn't seem to give running apps priority over disk cache. So if you have your browser in the background and hit a lot of files (e.g. a virus scan), the browser would get paged to disk and would take forever to bring back to the foreground.
What would be great is a setting like, "disk cache should never exceed 256 MB unless there is free RAM". In other words, if the total memory footprint of the OS and my running apps is less than my physical RAM minus 256 MB, they will never be swapped to disk. As I start approaching the limit, the first thing to be scaled back should be disk cache. Disk cache >256 MB will not be preserved by swapping my apps to disk.
As it is, I set XP's swapfile manually to 128 MB (any smaller and I would get frequent complaints about it being too small even though I have 3 GB of RAM). If it really needs more memory, it will override my setting and increase the swapfile size. But 99% of the time this limits the amount of apps XP can swap to disk to just 128 MB, which for me results in a much speedier system.
I do agree with that (Score:3, Insightful)
One problem is that there are relatively frequent types of disk-access patterns where caching them gives little to no benefit in return for the paging out of RAM it requires. A virus scan (touching most of your files exactly once) is one canonical example. Media playback (touching a few large files in sequential block reads) is another.
The difficult question is how to exclude these kinds of wasted caching while still retaining the benefits of caching frequently accessed files, and not introducing excessive
Re:rephrasing his question charitably... (Score:5, Interesting)
The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.
We're rapidly getting to the point where there's enough RAM for not only all the programs you're running, but all of the disk that those programs will access! Paging memory out to disk just doesn't make much sense anymore. I've run Windows with no page file since Win2000 came out, and never had a problem with that.
My current (non-gaming) desktop box has 8GB of RAM, and cost me about $1000. I rarely use that much memory for the combined total of apps, OS footprint, and all non-streaming files (there's no point in caching streaming media files on a single-user system, beyond maybe the first block).
I expect my next $1000 system in a few years will have 64GB of RAM, at which point there really will be no point in using a page file for anything. And with a solid-state hard drive, I'm not sure there will be any point in read caching either (though write caching will still help I guess).
Re:rephrasing his question charitably... (Score:4, Interesting)
As an admin for a video editing shop, we turned off swap long ago. The programs we use know how much ram and how much disk ("cache") to use already, and they don't want anyone getting in their way.
Especially not swapping, which thrashes the seek time.
No he doesn't (Score:4, Informative)
You must be confused about virtual vs. physical memory. In modern processors there is no penalty for using virtual memory: all translation from virtual to physical address space is done in hardware inside the processor, and you won't notice the difference.
Huh? That's totally wrong. If it were true, you wouldn't need any RAM.
It's true that address translation is hard-wired into modern processors. But that just means that figuring out where the data is is as fast as for data that's already in RAM. Actually reading or writing it is only as fast as the medium it's stored on. So if you have a lot of big applications running, and there isn't enough RAM for them all to be in physical memory at once, your system "thrashes" as data migrates back and forth between the two media. That's why adding RAM is very often the best way to speed up a slow system, especially if you're running Microsoft's latest bloatware. Defragging the swap disk can also be helpful.
To answer the original question: actually, you often don't need any virtual memory. But sometimes you do. Disk space is cheap, so it makes sense to allocate a decent amount of virtual memory and just not worry about whether it's absolutely necessary.
Re: (Score:3, Informative)
Not running Microsoft's latest bloatware is probably the best way to speed up a slow system if you are currently doing that.
Re:No he doesn't (Score:4, Informative)
Note: virtual memory is not necessarily on disk. "Virtual" memory just refers to the fact that the memory address that the application uses isn't the physical memory address (and in fact there might not *be* a physical memory address this instant), nothing more.
Defragging the swap disk can also be helpful.
I think this is never helpful. Pagefile reads are basically random for the first block being paged in for a given app, and modern hard drives mostly claim no seek latency as long as you have 1 read queued (of course, that claim might be BS).
For Windows, the OS doesn't *create* fragmentation of the page file over time. If there is a large contiguous space available when the pagefile is created (and there usually is, right after OS installation), Windows will use that block, and it will never fragment. Also, if your pagefile was fragmented at creation, defragging by itself won't fix it, as it doesn't move any pagefile blocks.
I hope the same thing is true in Linux - if defragging your swap drive helps, someone has done something very wrong to begin with.
Re:You mean physical memory right :-) (Score:5, Insightful)
No, I don't think the OP is confused.
Back in the days of mainframes only, say before 1980 or so, all the systems I worked on (NCR, IBM and Burroughs) used the term "virtual memory" to refer to secondary memory storage on a slower device. Early on the secondary device was CRAM (Card Random Access Memory) and later it was disk.
But the point is that Virtual Memory originally referred to main memory storage on a secondary device. Furthermore, this is still the term used for paged storage in Microsoft Windows. Check out the Properties page on the "Computer" menu item on Vista or "My Computer" icon on XP which talks about Virtual Memory when setting the size of the paging file.
The OP is totally correct in his use of Virtual memory both by historical precedent and by current usage in Windows.
Virtual Memory v Paging (Score:5, Informative)
thinks "Virtual Memory" is the same thing as paging...
Mac Classic (OS 8 for sure) used the term "Virtual Memory" the same way Windows today uses "Page File" or unix uses "swap", so you can at least understand why some people might be confused by this.
db
Re:You mean physical memory right :-) (Score:5, Informative)
Either he/she thinks "Virtual Memory" is the same thing as paging
Physical memory, virtual memory, address space, and paging files are some of the most misunderstood things your average computer "expert" deals with. When it comes to Windows, few can probably explain why only 3GB of 4GB physical RAM shows up on a 32-bit system. Fewer even can probably define the difference between "virtual memory" and "paging file".
I highly recommend any Windows users or administrators read Mark Russinovich's latest blog entry Pushing the Limits of Windows: Virtual Memory [technet.com] . It goes over all these things and describes the difference between virtual memory, committed memory, and why it really is important to have a paging file, even on that system with 8GB of physical RAM. Should be required reading for any Windows admin.
Memory exists to be used (Score:5, Insightful)
Memory exists to be used. If memory is not in use, you are wasting it. The reality is that your system will operate with higher performance if unused data is paged out of RAM to disk and the newly freed memory is used for additional disk caching. Vista's memory manager is actually reasonably smart and will only page data out to disk when it really won't be used, or you experience an actual low-memory condition.
Re:Memory exists to be used (Score:5, Interesting)
I've known this argument for many years; I just don't think it applies anymore. The extra disk cache doesn't really help much, and what ends up happening is that I come in to work in the morning, unlock my work XP PC, and sit there for 30 seconds while everything gets slowly pulled off the disk. XP thought it would be wise to page all that stuff out to disk; after all, I wasn't using it. But why would I care about the performance of the PC when I'm not actually using it?
At the very least, the amount of swap should be easily configurable like it is in Linux. I haven't actually used a swap partition in Linux for years, preferring instead to have 6 or 8GB of RAM, which is now cheap.
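For what it's worth, running a Linux box without swap is just a couple of commands; a minimal sketch, assuming any swap partitions are listed in /etc/fstab:
# Turn off all swap devices for the current session
sudo swapoff -a
# Make it permanent by commenting out the swap line(s) in /etc/fstab, then verify:
cat /proc/swaps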
Re:Memory exists to be used (Score:4, Insightful)
What you're actually complaining about is that Windows did a poor job of deciding what to page out. Sure, you could "turn off swap" if you have enough memory, and you won't ever have to wait for anything to be paged in. But your system would be faster if you had a good paging algorithm and could use unaccessed memory pages for disk cache instead.
Re: (Score:3, Interesting)
I think you might have awfully high expectations of the paging algorithm, if you think it's "bad" because it paged out data that wasn't being used for something like 16 hours.
Perhaps the problem is that the cost/benefit values of "keep an app that isn't being touched in RAM" vs. "increase the available memory for disk caching", while they may be appropriate when the computer is actually being used, are not optimal for a computer left idle overnight. The idle computer has a higher-than-expected cost (in ter
Re:Memory exists to be used (Score:5, Informative)
At the very least, the amount of swap should be easily configurable like it is in Linux. I haven't actually used a swap partition in Linux for years, preferring instead to have 6 or 8GB of RAM, which is now cheap.
It is: (Right-click "My Computer")->Properties, "Advanced" tab, "Settings" under Performance, "Advanced" tab, "Change" under "Virtual memory". Almost as easy as "dd if=/dev/zero of=swapfile bs=1G count=1; mkswap swapfile; swapon swapfile", spclly if u cant spel cuz u txt 2 much.
Re:Memory exists to be used (Score:5, Informative)
You can also adjust the "swappiness" of a computer running Linux. I've set my desktop to have a swappiness of 10 (on a scale of 0 to 100, where lower values tell the kernel to avoid swapping). In Ubuntu, you can do sudo sysctl vm.swappiness=10 to set the swappiness until the next boot, or edit /etc/sysctl.conf and add vm.swappiness=10 to the bottom of the file to make it permanent.
The default swappiness level is 60.
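To check what a machine is currently running with, the value is exposed under /proc; either of these works:
# Print the current swappiness setting
cat /proc/sys/vm/swappiness
sysctl vm.swappiness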
Agreed (Score:5, Interesting)
Linux kernel maintainer Andrew Morton sets his swappiness [kerneltrap.org] to 100 (page as much physical memory as you can, the opposite of this Ask-Slashdot's desires), which he justified in an interview (see above link) by saying:
My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.
Of course, there's another view, also presented at the above kerneltrap article: If you swap everything, you'll have a very long wait when returning to something you haven't touched in a while.
If you have limited resources, milk the resources you have plenty of; workstations should have high swappiness, while laptops, which suffer in disk speed, disk capacity, and power, are probably better suited to lower swappiness. Don't go crazy, though ... swappiness = 0 only tells the kernel to avoid swapping as much as possible; it is not the same as running swapoff -a, and if you run with no swap at all, programs can simply be killed when they need more memory than is available.
Re:Memory exists to be used (Score:4, Interesting)
Memory exists to be used. If memory is not in use, you are wasting it.
While I grant this statement is in a sense true, a system designer would do well to ponder the distinction between "not used" and "freely available".
RAM that is not currently being used, but which will be required for the next operation is not "wasted"; it is being held in reserve for future use. So when you put that "unused" RAM to use, the remaining unused RAM, plus the RAM you can release quickly, has to be greater than the amount of physical RAM the user is likely to need on short notice. Guess wrong, and you've done him no favors.
I'm not sure what benchmark you are using to say Vista's VM manager is "reasonably smart"; so far as I know, no sensible VM scheme swaps out pages if there is enough RAM to go around.
My own experience with Vista over about eighteen months was that it is fine as long as you don't do anything out of the ordinary, but if you suddenly needed a very large chunk of virtual memory, say a GB or so, Vista would be caught flat footed with a ton of pages it needed to get onto disk. Thereafter, it apparently never had much use for those pages, because you can release the memory you asked for and allocate it again without any fuss. It's just that first time. What was worse was that apparently Vista tried to (a) grow the page file in little chunks and (b) put those little chunks in the smallest stretch of free disk it could find. I had really mediocre performance with my workloads which required swapping with only 2-3GB of RAM, and I finally discovered that the pagefile had been split into tens of thousands of fragments! Deleting the page file, then manually creating a 2GB pagefile, brought performance back up to reasonable.
One of the lessons of this story is to beware of assuming "unused" is the same as "available", when it comes to resources. Another is not to take any drastic steps when it comes to using resources that you can't undo quickly. Another is that local optimizations don't always add up to global optimizations. Finally, don't assume too much about a user.
If I may wax philosophical here, one thing I've observed is that most problems we have in business, or as engineers, don't come from what we don't know, or even the things we believe that aren't true. It's the things we know but don't pay attention to. A lot of that is, in my experience, fixing something in front of us that is a problem, without any thought of the other things that might be connected to it. Everybody knows that grabbing resources you don't strictly need is a bad thing, but it is a kind of shotgun optimization where you don't have to know exactly where the problem is.
Virtual Memory or Paging (Score:4, Informative)
Re:Virtual Memory or Paging (Score:4, Informative)
I think I'm going to need to add a comment to that Wikipedia page. I'm not sure when the definition changed, but a long time ago (mid 80s), "virtual memory" did mean "making a program believe it had more memory than there was on the system". At least three different vendors defined it that way: Motorola, Data General, and DEC. I still have the Motorola and DG manuals that say so.
Some advantages (Score:5, Informative)
That page mostly talks about what virtual memory is and doesn't directly list why it is an improvement.
Some folks have already mentioned the fact that it eliminates memory fragmentation, and that it allows mapping of files and hardware into memory without dedicating (wasting) part of the address space to those uses.
Another reason is that you can have 2^64 bytes of total system memory, even if the individual applications are 32-bit, and can only address 2^32 bytes of memory. Since the 32-bit applications are presented a virtual address space, it doesn't matter if their pages are located above the 32-bit boundary.
It means that per-process memory protection is enforced by the CPU's paging hardware. Without virtual memory you would have to reimplement something like it just for memory protection.
It means that the linker/loader don't have to patch the executable with modified address locations when it is loaded into memory.
The above two reasons have the corollary that libraries can be shared in memory much more easily.
And that's just off the top of my head. Virtual memory is a very, very useful thing.
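One quick way to see the virtual/physical split in action on a Linux or Unix box is to compare a process's virtual size with its resident set; a minimal sketch using ps against the current shell (VSZ and RSS are the usual procps column names):
# VSZ is the virtual address space in KB, RSS is how much of it is actually resident in RAM
ps -o pid,vsz,rss,comm -p $$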
The real answer is (Score:5, Funny)
Virtual memory and pagefiles still exist so that there will be persistent, recoverable storage of your browsing and search history, illegally downloaded music, and furrie porn should anybody come a-knockin after you hit the power switch. [/tinfoil hat]
Re: (Score:3, Funny)
Virtual memory and pagefiles still exist so that there will be persistent, recoverable storage of your browsing and search history, illegally downloaded music, and furrie porn should anybody come a-knockin after you hit the power switch. [/tinfoil hat]
</worrying> You're close but do you know why I only drink rain memory and grain memory, Mandrake? It's because virtual memory and pagefiles are the greatest Communist conspiracy to sap and impurify our precious computerly processes. <love the bomb>
Would it help if (Score:5, Funny)
Then all your virtual memory is in RAM.
I'll leave it to someone else to explain why that isn't a good idea.
Turn it off, then! (Score:5, Insightful)
We who know what we are doing are free to take the risk of running our computers without a swapfile.
Most people are not in a position where they can be sure that they will never run out of physical memory. Because of that, all operating systems for personal computers set up a swapfile by default: it's better for Joe Average computer owner to complain about a slow system than for him to lose his document when the system crashes because he filled up the physical memory (and there is no swap file to fall back on).
Re: (Score:3, Interesting)
You can infer from the OP what he was talking about. Oh dammit!
Why use a file system? (Score:5, Interesting)
The other extreme point of view is that modern systems should only have virtual memory and, instead of having an explicit file system, treat mass storage as a level-4 cache. In fact, systems that support mmap(2) do this partially.
The idea here is that modern memory management is actually pretty good, and that it's best to let the OS decide what to keep in RAM and what to swap out, so that issues like prefetching can be handled transparently.
Re: (Score:3, Funny)
Modern like the IBM System/38, circa 1980?
Multics (Score:5, Insightful)
Re: (Score:3, Informative)
Those who forget history will have to try to re-invent it. Badly.
I believe that is an insightful combination of two quotes:
- Those who forget history are doomed to repeat it. (alt. George Santayana: "Those who cannot remember the past are condemned to repeat it.")
- "Those who don't understand UNIX are condemned to reinvent it, poorly." Henry Spencer
Ditto the original versions of the Pick OS (Score:3, Informative)
Circa 1975 through 2000 or so, the "native" (*) versions of the Pick Operating System worked exactly this way. Even the OS-level programmers - working in assembly language (!) - only saw virtual memory in the form of disk pages. When you put an address in a register, it was a reference to a disk page, not physical memory. The pages were auto-magically brought into memory at the moment needed and swapped out when not by a tiny VM-aware paging kernel. That was the only part of the system that understood that
File - Save (Score:4, Interesting)
For that matter, why do we even need to explicitly "save" anymore? Why does the fact that Notepad has 2KB of text to save prevent the shutdown of an entire computer? Just save the fecking thing anywhere and get on with it! Modern software is such a disorganized mess.
Re:File - Save (Score:4, Insightful)
What would you do instead of file save? Continuous save, where the file data is saved as you type? What if you decide the changes you made were a mistake? I think one of the basic premises, going a very long way back in the design of software, is that you don't immediately save changes, so that the user can make a choice whether to 'commit' the changes, or throw them away and revert back to the original state of the file. As far as I know, Notepad will only temporarily stop the shutdown of the computer, to ask you do you want to save the file - yes/no? I don't see how that is such a bad thing?
Now, you might say that the solution for this is automatic file versioning. The problem is that if you have continuous save, you would either get a version for every single character typed, deleted, etc, or else you would get 'periodic' versions (like, a version from 30 seconds ago, a version from 30 seconds before that, etc) and pretty soon you'd have a ridiculous number of 'intermediate' versions. File versioning should, ideally, only be saving essentially 'completed' versions of the file (or at least, only such intermediate versions as the user chooses to save [because, if you are creating a large document, like a Master's Thesis, or book, you will probably not create it all in a single session, so in that case, you might have versions which don't correspond to completed 'final products', but you probably also don't want 1000 different versions either], instead of a large number of automatically created versions).
Re:File - Save (Score:4, Interesting)
Explicit saving is a crutch based on limitations of early computers, when disk space was expensive. Unfortunately, people are so used to it that they think it's a good idea. Kinda like having to reboot Windows every once in a while so it doesn't slow down. (I know that's not true anymore.)
Think about it, when I create a document in the analog world with a pencil I don't have to save it. Every change is committed to paper.
You're right, of course, the added value with digital documents is that I can go back to previous versions. But again, it's implemented using a crutch, namely Undo and Redo. Automatic file versioning is the obvious answer.
Having many intermediate versions lying around is a non-problem. First of all, only deltas have to be saved, with a complete version saved once in a while to minimize the chance of corruption. Secondly, just as with backups, the older the version is, the fewer intermediate versions you need. Say one version every minute for the last hour. Then one version every hour for the last day before that. One version every day for the last week before that. And so on.
A filesystem that supports transparent automatic versioning is such a no-brainer from a usability standpoint that I can't figure out why nobody has done it already. I guess it must be really hard.
BTW, an explicit save can be simulated on a system with continuous saving by creating named snapshots.
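For what it's worth, copy-on-write filesystems already get you part of the way there, since a named snapshot is effectively an explicit save point; a minimal sketch using ZFS, with tank/docs standing in for whatever dataset holds your documents:
# Take a cheap, named, read-only snapshot of the current state
zfs snapshot tank/docs@before-big-edit
# List snapshots, and roll back if the edits turn out to be a mistake
zfs list -t snapshot
zfs rollback tank/docs@before-big-edit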
Re:File - Save (Score:4, Interesting)
Continuous save can be made workable with some reasonable rules for discarding unneeded versions. First, keep every version the user explicitly tags, as well as the baseline for the current session (to allow reversion). For the rest, devise a heuristic combining recency and amount of change to select old, trivial versions to be discarded. The further back you go into the history, the more widely spaced the checkpoints become. This is easier for structured documents, but with proper heuristics can also be applied to e.g. plain text. Temporal grouping (sessions, breaks in typing, etc.) can provide valuable clues in this area.
Currently most programs only have two levels of history: the saved version(s), and the transient undo buffer. There's no reason that this sharp cut-off couldn't be turned into a gradual transition.
More than just memory management (Score:4, Interesting)
One example is ring transitions into kernel mode, which start out as exceptions. (Everyone seems to have ignored call gates, the mechanism Intel offered for ring transitions.)
Another is memory-mapped pointers. It is cool to be able to increment a pointer into file-backed RAM and not have to care whether the page is in RAM or not.
Maybe the OP is onto something. Imagine writing Windows drivers without having to worry about IRQL and paging.
I prefer none. (Score:5, Insightful)
This should generate some polarized discussion.
There are two camps of thought.
One will insist that, no matter how much memory is currently allocated, it makes more sense to swap out that which isn't needed in order to keep more free physical ram. They will argue until they are blue in the face that the benefits of doing so are worth it.
Essentially - your OS is clever, and it tries to pre-emptively swap things out so the memory will be available as needed.
The other camp - and the one I subscribe to - says that as long as you have enough physical ram to do whatever you need to do - any time spent swapping is wasted time.
I run most of my workstations (Windows) without virtual memory. Yes, on occasion, I do hit a "low on virtual memory error" - usually when something is leaky - but I prefer to get the error and have to re-start or kill something rather than have the system spend days getting progressively slower, slowly annoying me more and more, and then giving me the same error.
This is not to say that swap is bad, or that it shouldn't be used - but I prefer the simpler approach.
Re:I prefer none. (Score:5, Interesting)
One will insist that, no matter how much memory is currently allocated, it makes more sense to swap out that which isn't needed in order to keep more free physical ram.
Most of the people in this camp are coming from a Unix background where this is actually implemented effectively. For example, the FreeBSD machine next to my desk has 6GB of RAM, but even with about 3GB free, I'm currently about 1GB into my 16GB of swap. (Why 16? Because it's bigger than 6 but still a tiny part of my 750GB main drive.)
FreeBSD, and I assume most other modern Unixes, will copy idle stuff from RAM to swap when it's sufficiently bored. Note that it doesn't actually delete the pages in physical memory! Instead, it just marks them as copied. If those processes suddenly become active, they're already in RAM and go on about their business. If another process suddenly needs a huge allocation, like if my site's getting Slashdotted, then it can discard the pages in RAM since they've already been copied to disk.
That is why many Unix admins recommend swap. It helps the system effectively manage its resources without incurring a penalty, so why wouldn't you?
It's my understanding that Windows never managed to get this working right, so a lot of MS guys probably prefer to avoid it.
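If you want to check this on a FreeBSD box of your own, the stock tools show it directly; a minimal sketch (top's memory header also gives the Active/Inactive/Wired/Cache/Free breakdown interactively):
# Show the configured swap devices and how much of each is actually in use
swapinfo -h    # -h prints human-readable sizes; drop the flag on very old releases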
Finally! A use for my CS degree! (Score:4, Funny)
I can finally put my CS degree to good use, answering the same questions students would ask the TAs in basic OS and systems-level programming courses! ...except that the other comments have already answered the question. So, in true CS fashion, I will be lazy and refrain from duplicating effort ;)
Laziness is a virtue! (And that's on-topic, because a lazy paging algorithm is a good paging algorithm).
Swap is expected, so without it, you crash. (Score:5, Informative)
I recall back in 2002 or so, a friend of mine maxed out his Windows XP system with 2GB of memory. Windows absolutely refused to turn off paging (swap), forcing him to whatever the minimum size was. The solution? He created a RAMdisk and put the paging file there.
On Linux (and other modern systems, perhaps now including Windows), you can turn off swap. However, the Linux kernel's memory management isn't so great in the situation you hit when you need more memory than you have but can't swap. Usually, the memory hog gets killed (by the kernel's OOM killer) as a result (thankfully, Firefox now has session restore). I might be slightly out of date on this one.
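If you suspect that's what happened to a process on a Linux box, the kernel logs it; a minimal sketch for checking (the exact message wording varies by kernel version):
# Look for OOM-killer activity in the kernel log
dmesg | grep -i -E 'out of memory|oom'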
A well-tweaked system still has swap (in nontrivial amounts), but rarely uses it. Trust me, you can afford losing the few gigabytes from your filesystem. Again in Linux, /proc/sys/vm/swappiness [kerneltrap.org] can be tweaked to a percentage reflecting how likely the system is to swap memory. Just lower it. (Though note the cons to this presented at the kerneltrap article above.) My workstation currently has an uptime of 14 days, a swappiness of 60, and 42/1427 megs of swap in use as opposed to the 1932/2026 megs of physical memory in use at the moment.
This is summarized for Windows and Linux on Paging [wikipedia.org] at Wikipedia.
Good Advice (Score:4, Interesting)
I have Vista x64 running on a machine with 8GB of physical memory and no page file. I can do this because I'm never running enough memory-hungry processes to exceed 8GB of allocated memory. So, while the OS may be good at deciding what gets swapped to the hard disk, in my case there's simply no need, as everything I'm running fits entirely within physical memory (and for the curious, I've been running like this for a year and a half and haven't run out of memory yet).
However, if you don't have enough physical memory to store all the processes you might be running at once, then at some point the OS will need to swap to the hard drive, or it will simply run out of memory. I'm honestly not sure exactly how Vista handles things when it runs out of memory (never been a problem, never looked into it), but it wouldn't be good (probably BSoD, crash crash crash). I can tell you from personal experience that I regularly exceed 4GB memory usage (transcoding a DVD while playing a BluRay movie while
Long story short, with just 4GB, I would leave the swap file as is. Really, you should only disable the swap file if you know based on careful observation that your memory usage never exceeds the size of your installed physical memory. If you're comfortable with the risks involved, and you know your system and usage habits well, then go for it. Otherwise, leave it be.
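If you want to make that careful observation on Windows without staring at Task Manager all day, the physical and commit numbers are one Win32 call away (GlobalMemoryStatusEx). A rough sketch in Python via ctypes - the struct is the documented one, the formatting and thresholds are up to you:

    import ctypes

    # Rough sketch: query physical RAM and commit (page file backed) limits on Windows.
    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    print(f"physical: {stat.ullAvailPhys / 2**30:.1f} / {stat.ullTotalPhys / 2**30:.1f} GiB free")
    print(f"commit:   {stat.ullAvailPageFile / 2**30:.1f} / {stat.ullTotalPageFile / 2**30:.1f} GiB free")

If the commit numbers get anywhere near the physical total during your normal workload, disabling the page file is asking for trouble.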
just recalling how this works (Score:4, Informative)
What is left over is the physical memory needed by the system. As I recall, the OS preferred a fixed amount of virtual memory, so it would just set up a fixed amount of space on the hard disk. So even if all you have is 1 MB of available memory, the system would set up, say, 10MB, and that is what would be used. The pages currently in use are kept in physical RAM, while the whole set is backed by the hard disk.
If page management is working correctly, this should be transparent to the user. The management software or hardware predicts which pages will be needed and transfers those pages to RAM. One issue I ran into was that available memory was not hard disk plus available physical RAM, but was limited by the available hard disk space.
So it seems to me that virtual paged memory is still useful: with multiple applications loaded, memory can be a mess, and with big, fast hard drives it should not be an issue. I don't know how Vista works, but it seems that *nix works very hard to ensure that the pages that are needed are loaded into physical memory and page faults do not occur. In the case where virtual memory in use equals available physical memory, it would seem that since only physical memory is actually being touched, there is no performance hit from virtual memory; it is only there in case an application is run that needs more memory. It is nice that we do not get those pesky memory errors we got in the very old days.
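To make the "pages" part concrete: translation is just an integer split plus a table lookup. A toy sketch (Python, with a made-up 4 KB page size and a made-up page table - nothing OS-specific):

    # Toy sketch of paged address translation (4 KB pages, made-up numbers).
    PAGE_SIZE = 4096

    # Hypothetical page table: virtual page number -> physical frame number.
    page_table = {0: 7, 1: 3, 2: 12}   # a page not listed here would fault

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)   # split address into page + offset
        if vpn not in page_table:
            raise RuntimeError(f"page fault on virtual page {vpn}")
        return page_table[vpn] * PAGE_SIZE + offset

    print(hex(translate(0x1ABC)))   # virtual page 1 -> frame 3 -> 0x3ABC

The real hardware does this in the MMU (with a TLB to cache recent lookups), and a fault hands control to the OS, which may or may not need to go to disk.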
Easy way to remember real vs. virtual . . . (Score:5, Funny)
If it's there, and you can see it . . . it's real.
If you can see it, but it's not there . . . it's virtual.
If you can't see it, and it's not there . . . it's gone.
Running with averages. (Score:3, Informative)
Yes, the OP is right: if you don't page to disk and run everything from RAM, you will be faster. However, good paging keeps things from getting slower or failing outright when you really need the extra horsepower, and you probably wouldn't even notice it.
First, there's the 80/20 rule: 20% of the data is used 80% of the time. So a large chunk of data will rarely be used - neither read nor written, just kinda sitting there. You might as well page it to disk so you have more space free.
Next, suppose you get a request for a big chunk of memory - say you start a VM that needs 3GB and you only have 2GB free. Before your app can run, you have to wait for 1GB of data to be dumped to disk. With a good paging algorithm, that 1GB would already be paged out, so you can fill the RAM with the VM right away; then, depending on the algorithm, pieces slowly get paged back out, letting you run, say, another 512MB load without the system having to dump that 512MB first. If you didn't page at all, you would be stuck, since you don't have the RAM to run the application, and a poor paging algorithm will spend so much time paging data that it barely frees enough to operate.
Drive space is relatively cheap. If you are going to run some RAM-intensive apps, good paging can let you get by with about half as much RAM, saving money.
Most systems have more RAM than ever, but the apps use more RAM than ever too. (This isn't necessarily bloat.) Let's say your app does a lot of square roots: compare the time it takes to compute, say, 1,000,000 square roots on the fly versus keeping the precalculated square root values in memory and doing a quick lookup of the answer. That way you get faster calculation at the cost of RAM.
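That trade is just memoization - spend RAM to skip repeated computation. A minimal sketch (Python, made-up workload; whether the cache actually wins depends on how expensive the function is - sqrt is nearly free on modern CPUs, but the same pattern pays off for costly functions):

    import math
    from functools import lru_cache

    # Recompute every time: cheap on RAM, pays the math.sqrt cost on every call.
    def sqrt_direct(n):
        return math.sqrt(n)

    # Cache the answers: repeated calls cost a dictionary probe instead of a sqrt,
    # at the price of keeping up to a million results resident in memory.
    @lru_cache(maxsize=1_000_000)
    def sqrt_cached(n):
        return math.sqrt(n)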
The idea is ridiculous (Score:3, Insightful)
virtual address space, virtual memory, swapping... (Score:5, Informative)
I note a lot of people are insisting that "virtual memory" refers to the virtual address space given to an execution context, and that what the author really means is "paging".
The funny thing is that these are traditionally poorly defined and poorly understood terms which are only now gaining a hard consensus on meaning, due to some recent OS books and a comp-sci education that insists on a particular definition. Everyone is faulting M$ for using the term incorrectly, even though the original Mac OS and other OSes used the term in the same way. Wikipedia defines it one way and then goes on to list historical systems which don't really adhere to the definition. For example, the B5000 (considered the first commercial machine with virtual memory) didn't even have "contiguous" working memory as required by the Wikipedia definition; it had what would more specifically be called multiple variable-sized segments, which could be individually swapped. Again, the Mac OS evolved from a single-process model to multiprocess in the same address space (look up Mac Switcher) and implemented "virtual memory" on a system without an MMU by swapping allocated pieces of memory to disk when they weren't currently locked (in use) and reallocating the memory. In other words, they had "virtual memory" in a single fragmented address space.
The other example: people use "paging" to describe the act of swapping portions of memory to disk, misunderstanding that paging is really about splitting an address space or segment into fixed-size pieces for translation to physical addresses, and that swapping pages to disk isn't required for paging. In other words, your system is still "paging" even if you disable swapping (there's a quick way to see this below).
Even the term swapping is unclear, because the need to differentiate between swapping pages and swapping whole processes (or even segments) led people to avoid the term "swapping" for systems that swapped pages instead of segments/regions/processes. These systems were generally called "demand paged" or something similar, to indicate that they didn't need to swap out a complete process or dataset (see DOSSHELL).
So, give the guy a break; in many ways he is just as correct, if not more so.
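By the way, you can actually watch that paging-isn't-swapping distinction: even with swap disabled, touching freshly allocated memory racks up page faults. A quick sketch (Python on a Unix-like system, using the standard resource module; the 64 MiB buffer size is arbitrary):

    import resource

    # Minor faults = the kernel wiring up a page with no disk I/O involved;
    # they happen whether or not you have any swap configured at all.
    before = resource.getrusage(resource.RUSAGE_SELF)
    buf = bytearray(64 * 1024 * 1024)        # 64 MiB, demand-paged in as it's zeroed
    for i in range(0, len(buf), 4096):       # touch one byte per page
        buf[i] = 1
    after = resource.getrusage(resource.RUSAGE_SELF)

    print("minor faults:", after.ru_minflt - before.ru_minflt)
    print("major faults:", after.ru_majflt - before.ru_majflt)   # faults that hit the disk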
Re:Vista reserves 1 GB (Score:5, Informative)
I think he is referring to the userspace/kernelspace split in Windows NT. On 32-bit Windows XP, by default, userspace (ring 3) gets at most 2 GB of each process's 4 GB virtual address space, and kernel space gets the rest (some of it paged and some of it not). On systems with lots of RAM (a lot by 2002 standards), reserving that much address space for the kernel was kinda pointless for big user-mode apps, so they added a boot.ini flag (/3GB) that changed the split to _AT_MOST_ 3 GB for userspace and the rest for kernel space.
In Vista the split can be adjusted the same way (the knob just moved from boot.ini to BCDEdit's increaseuserva setting). Also, on a system with 4GB of RAM running in 32-bit mode, you can't use all of it even if you try (in Windows XP), because right under the 4GB limit sit the PCI memory address mappings, which can be as large as 512MB for a common video card with half a gig of RAM. Add to that the RAID controllers and the other hardware, and you end up with about 800MB of RAM unused because it can't be addressed; its address space is taken by the installed devices.
I think that http://support.microsoft.com/kb/823440/ [microsoft.com] and http://support.microsoft.com/kb/171793/ [microsoft.com] should describe what I'm talking about pretty clearly.
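If you want to see the split from inside a 32-bit process, lpMaximumApplicationAddress from GetSystemInfo tells you where userspace ends. A rough sketch in Python via ctypes (the struct uses the legacy dwOemId view of the leading union, which is enough for this demo):

    import ctypes
    from ctypes import wintypes

    # Rough sketch: ask Windows where the user-mode address space ends.
    class SYSTEM_INFO(ctypes.Structure):
        _fields_ = [
            ("dwOemId", wintypes.DWORD),
            ("dwPageSize", wintypes.DWORD),
            ("lpMinimumApplicationAddress", ctypes.c_void_p),
            ("lpMaximumApplicationAddress", ctypes.c_void_p),
            ("dwActiveProcessorMask", ctypes.c_size_t),
            ("dwNumberOfProcessors", wintypes.DWORD),
            ("dwProcessorType", wintypes.DWORD),
            ("dwAllocationGranularity", wintypes.DWORD),
            ("wProcessorLevel", wintypes.WORD),
            ("wProcessorRevision", wintypes.WORD),
        ]

    info = SYSTEM_INFO()
    ctypes.windll.kernel32.GetSystemInfo(ctypes.byref(info))
    # Roughly 0x7FFEFFFF with the default 2 GB split on 32-bit,
    # roughly 0xBFFEFFFF when booted with /3GB (or increaseuserva).
    print("top of userspace:", hex(info.lpMaximumApplicationAddress))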
Re: (Score:3, Informative)
Vista reserves 1 GB to itself, so your system will only ever have 3 GB available for processes.
Not exactly. 32-bit OSes won't normally report more than about 3.2 GiB of system RAM, as a 32-bit OS can only address 4 GiB (PAE/himem aside), and the upper addresses are reserved for hardware.
64-bit OSes (even Vista) will use and report RAM to much higher upper limits.
Or something like that.
Re:To misquote (Score:4)
Re: (Score:3, Informative)
Because you don't just use RAM to hold your processes; you use it for caching frequently accessed data from your (comparatively slower) hard disks as well. Thus, there is never any such thing as "enough" RAM, that is, until you have enough primary storage to equal the sum of all the secondary storage you'll use in a single computing session PLUS the amount of memory needed by all your running processes.
But nobody has that much memory. It's a waste of money. So, we trust the OS to swap out inactive pages and
Re:Only 4 GB? (Score:5, Funny)
The Kessel Run is obviously a surviving salesman problem.
The traveling salesman is selling zombie survival kits at the onset of the zombie apocalypse. He must sell $X worth of kits to afford his choppa ticket, and return to the evac zone. The evac choppa is waiting for him (or does continuous runs), so time is not an issue, and he can make long-winded sales pitches in safe houses.
Distance traveled is an issue, because the horde is everywhere, and the best strategy is to minimize exposure and avoid detection.
Quickness (acceleration, agility) is an issue because it helps you avoid detection, and when detected, you need to escape or hide quickly.
Speed (top speed of your van) is an issue because you often need to make a beeline to the nearest safe house, or to the evac zone once you have met your quota.
A surviving salesman is rated on his total distance traveled. A lower distance is indicative of a better salesman, and a better vehicle. Being able to zoom through the most dangerous areas will shorten your trip (path length) due to the increased demand and reduced supply of zombie survival kits in said areas.
For the Millennium Falcon, the above applies with a few differences. Han Solo and Chewbacca are hiding from the Empire, not the zombie horde. Instead of selling survival kits, they're smuggling contraband. Instead of running to safe houses, they're running off to Mos Eisley or other fringe/pirate-friendly planets the Empire doesn't have (complete) control over. The money gained isn't for a choppa ticket, but for the general livelihood of Han and Chewbacca.