Why Use Virtual Memory In Modern Systems?

Cyberhwk writes "I have a system with Windows Vista Ultimate (64-bit) installed on it, and it has 4GB of RAM. However, when I've been watching system performance, the system seems to divide the work between physical RAM and virtual memory, so I have 2GB of data in virtual memory and another 2GB in physical memory. Is there a reason why my system should even be using virtual memory anymore? I would think the computer would run better if it based everything off RAM instead of virtual memory. Any thoughts on this matter, or could you explain why the system is acting this way?"
  • by chrylis ( 262281 ) on Thursday December 04, 2008 @06:06PM (#25995133)

    The other extreme point of view is that modern systems should only have virtual memory and, instead of having an explicit file system, treat mass storage as a level-4 cache. In fact, systems that support mmap(2) do this partially.

    The idea here is that modern memory management is actually pretty good, and that it's best to let the OS decide what to keep in RAM and what to swap out, so that issues like prefetching can be handled transparently.
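
    A minimal sketch of that idea using POSIX mmap(2) (the file name "bigfile.dat" is just a stand-in): the program walks a file through an ordinary pointer and lets the kernel decide which pages are resident in RAM and which stay on disk.

        /* Sketch: map a file and walk it through an ordinary pointer.
           The kernel faults pages in from disk (and can evict them)
           transparently -- the program never "loads" or "saves" anything. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "bigfile.dat";      /* illustrative file name */
            int fd = open(path, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                    MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            /* Whether each page is currently in RAM or still on disk is
               the kernel's problem; we just increment a pointer. */
            unsigned long sum = 0;
            for (off_t i = 0; i < st.st_size; i++)
                sum += p[i];
            printf("checksum: %lu\n", sum);

            munmap(p, st.st_size);
            close(fd);
            return 0;
        }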

  • File - Save (Score:4, Interesting)

    by Anonymous Coward on Thursday December 04, 2008 @06:07PM (#25995145)

    For that matter, why do we even need to explicitly "save" anymore? Why does the fact that Notepad has 2KB of text to save prevent the shutdown of an entire computer? Just save the fecking thing anywhere and get on with it! Modern software is such a disorganized mess.

  • by etymxris ( 121288 ) on Thursday December 04, 2008 @06:12PM (#25995227)

    I've known this argument for many years; I just don't think it applies anymore. The extra disk cache doesn't really help much, and what ends up happening is that I come in to work in the morning, unlock my work XP PC, and sit there for 30 seconds while everything gets slowly pulled off the disk. XP thought it would be wise to page all that stuff out to disk; after all, I wasn't using it. But why would I care about the performance of the PC when I'm not actually using it?

    At the very least, the amount of swap should be easily configurable, like it is in Linux. I haven't actually used a swap partition in Linux for years, preferring instead to have 6 or 8GB of RAM, which is now cheap.

  • by Xerolooper ( 1247258 ) on Thursday December 04, 2008 @06:12PM (#25995235)
    Urg... must... not... feed... trolls...
    You can infer from the OP what he was talking about. Oh dammit!
  • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Thursday December 04, 2008 @06:14PM (#25995273)

    I'd assume what he's asking is: in modern systems where the amount of physical RAM is considerably larger than what most people's programs in total use, why does the OS ever swap RAM out to disk?

    The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.

    Of course, OS designers are always revisiting these assumptions---it may be that for some kinds of use patterns using a smaller disk cache and swapping RAM out to disk less leads to better performance, or at least better responsiveness (if that's the goal).

  • by frog_strat ( 852055 ) on Thursday December 04, 2008 @06:17PM (#25995317)
    Virtual memory is now used for little tricks, in addition to providing more memory than is physically available.

    One example is ring transitions into kernel mode, which start out as exceptions. (Everyone seems to have ignored call gates, the mechanism Intel offered for ring transitions.)

    Another is memory-mapped pointers. It is cool to be able to increment a pointer into file-backed RAM and not have to care whether it is in RAM or not.

    Maybe the OP is onto something. Imagine writing Windows drivers without having to worry about IRQL and paging.
  • Re:Would it help if (Score:5, Interesting)

    by Changa_MC ( 827317 ) on Thursday December 04, 2008 @06:21PM (#25995385) Homepage Journal
    I know it's not a good idea now, but this was seriously a great trick under Win98. Win98 recognized my full 1GB of RAM, but seemed to want to swap things to disk rather than use over 256MB of RAM. So I just created a RAM disk using the second 512MB of RAM, and voila! Everything ran much faster. When everything is broken, bad ideas become good again.
  • by mindstrm ( 20013 ) on Thursday December 04, 2008 @06:26PM (#25995457)

    Because currently, modern systems leak. A cold re-start puts things back into a fresh state - and we need that.

    Modern memory management is fantastic - but I'll still argue that my workstations work better and smoother with swap disabled than with it enabled - which is telling.

  • Good Advice (Score:4, Interesting)

    by dark_requiem ( 806308 ) on Thursday December 04, 2008 @06:27PM (#25995489)
    Okay, so we've got most of the "you can run Vista with 4GB?!" jokes out of the way (hopefully). Here's my take on the situation.

    I have Vista x64 running in a machine with 8GB physical memory, and no page file. I can do this because I'm never running enough memory-hungry processes that I will exceed 8GB allocated memory. So, while the OS may be good at deciding what gets swapped to the hard disk, in my case, there's simply no need, as everything I'm running can be contained entirely within physical memory (and for the curious, I've been running like this for a year and a half, haven't run out of memory yet).

    However, if you don't have enough physical memory to store all the processes you might be running at once, then at some point the OS will need to swap to the hard drive, or it will simply run out of memory. I'm honestly not sure exactly how Vista handles things when it runs out of memory (never been a problem, never looked into it), but it wouldn't be good (probably BSoD, crash crash crash). I can tell you from personal experience that I regularly exceed 4GB memory usage (transcoding a DVD while playing a BluRay movie while ...). With your configuration, that's when you'd start to crash.

    Long story short, with just 4GB, I would leave the swap file as is. Really, you should only disable the swap file if you know based on careful observation that your memory usage never exceeds the size of your installed physical memory. If you're comfortable with the risks involved, and you know your system and usage habits well, then go for it. Otherwise, leave it be.
  • by mea37 ( 1201159 ) on Thursday December 04, 2008 @06:29PM (#25995509)

    I think you might have awfully high expectations of the paging algorithm, if you think it's "bad" because it paged out data that wasn't being used for something like 16 hours.

    Perhaps the problem is that the cost/benefit values of "keep an app that isn't being touched in RAM" vs. "increase the available memory for disk caching", while they may be appropriate when the computer is actually being used, are not optimal for a computer left idle overnight. The idle computer has a higher-than-expected cost (in terms of user experience) associated with paging the idle app, and a lower-than-expected benefit to increasing the cache size.

  • Agreed (Score:5, Interesting)

    by Khopesh ( 112447 ) on Thursday December 04, 2008 @06:35PM (#25995575) Homepage Journal

    Linux kernel maintainer Andrew Morton sets his swappiness [kerneltrap.org] to 100 (page out as much physical memory as you can, the opposite of what this Ask Slashdot is after), which he justified in an interview (see above link) by saying:

    My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.

    Of course, there's another view, also presented at the above kerneltrap article: If you swap everything, you'll have a very long wait when returning to something you haven't touched in a while.

    If you have limited resources, milk the resources you have plenty of; workstations should have high swappiness, while laptops, which suffer in disk speed, disk capacity, and power, are probably better suited to lower swappiness. Don't go crazy, though ... swappiness = 0 is the same as running swapoff -a and will crash your programs when they need more memory than is available (as the kernel isn't written for a system without swap).
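
    For the curious, swappiness is just an integer the kernel exposes through procfs; here is a rough sketch of reading and (as root) setting it, roughly what `sysctl vm.swappiness=N` does under the hood -- the exact effect of the knob varies by kernel version.

        /* Sketch: read and (as root) change vm.swappiness via /proc.
           The value is advisory tuning for the VM, not a hard limit. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            const char *path = "/proc/sys/vm/swappiness";
            FILE *f = fopen(path, "r");
            if (!f) { perror("fopen"); return 1; }
            int current = 0;
            fscanf(f, "%d", &current);
            fclose(f);
            printf("current swappiness: %d\n", current);

            if (argc > 1) {              /* e.g. ./swappiness 100  (Morton's setting) */
                f = fopen(path, "w");
                if (!f) { perror("fopen for write (are you root?)"); return 1; }
                fprintf(f, "%d\n", atoi(argv[1]));
                fclose(f);
            }
            return 0;
        }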

  • Re:File - Save (Score:4, Interesting)

    by mosb1000 ( 710161 ) <mosb1000@mac.com> on Thursday December 04, 2008 @06:46PM (#25995703)
    Maybe they should have a "finalize" option, whereby you save a special, read-only file, and it saves a backup of each "finalized" version. There's really no reason you should lose what you are working on when your computer crashes. And having an unsaved file shouldn't hold up quitting applications. Just start where you left off when you resume the application.
  • by martyros ( 588782 ) on Thursday December 04, 2008 @06:47PM (#25995719)

    The question, though, is how the reduction in disk cache size that comes from having no virtual memory to speak of affects your runtime. Rather than seeing it all at once, like when you swap Firefox back in, are you taking longer to navigate directories because they have to be read in every single time? And when you're using Firefox, does it take longer to check its disk cache? Are you saving 2 seconds when you switch applications by losing 60 seconds over the course of 10 minutes as you're actually using an individual application?

    Saving the 60 seconds (perhaps at the expense of the 2 seconds) is exactly what the block cache is trying to do for you. Whether it's succeeding or not, or how well, is a different question. :-)

  • by Reziac ( 43301 ) * on Thursday December 04, 2008 @06:57PM (#25995839) Homepage Journal

    I've been running without a pagefile, in all versions of Windows, for about 10 years now -- on any machine with more than 512MB.

    The only drawback is that a few stupid Photoshop plugins whine and refuse to run, because if they don't see a pagefile, they believe there is "not enough memory" -- a holdover from the era when RAM was expensive and the pagefile was a busy place. Sometimes I think about making a very small pagefile just for them, but have never actually got around to doing it.

  • by Solandri ( 704621 ) on Thursday December 04, 2008 @06:59PM (#25995859)

    The problem I noticed with XP (dunno if Vista does the same) is that it doesn't seem to give running apps priority over disk cache. So if you have your browser in the background and hit a lot of files (e.g. during a virus scan), the browser gets paged to disk and takes forever to bring back to the foreground.

    What would be great is a setting like, "disk cache should never exceed 256 MB unless there is free RAM". In other words, if the total memory footprint of the OS and my running apps is less than my physical RAM minus 256 MB, they will never be swapped to disk. As I start approaching the limit, the first thing to be scaled back should be disk cache. Disk cache >256 MB will not be preserved by swapping my apps to disk.

    As it is, I set XP's swapfile manually to 128 MB (any smaller and I would get frequent complaints about it being too small, even though I have 3 GB of RAM). If it really needs more memory, it will override my setting and increase the swapfile size. But 99% of the time this limits how much XP can swap to disk to just 128 MB, which for me results in a much speedier system.

  • Can't hibernate (Score:5, Interesting)

    by anomaly ( 15035 ) <tom DOT cooper3 AT gmail DOT com> on Thursday December 04, 2008 @07:00PM (#25995881)

    Windows makes me CRAZY about this. The OS is internally configured to use an LRU algorithm to aggressively page.

    ("Technical bastards" who question my use of paging and swap interchangeably in this post can send their flames to /dev/null \Device\Null or NUL depending on OS)

    What I found when disabling paging on an XP pro system with 2GB RAM is that the system performance is explosively faster without the disk IO.

    Even an *idle* XP pro system swaps - explaining the time it takes for the system to be responsive to your request to maximize a window you have not used in a while.

    I was thrilled to have a rocket-fast system again - until I tried to hibernate my laptop. Note that the hibernation file is unrelated to the swap/paging space.

    The machine consistently would blue screen when trying to hibernate if swap/paging was disabled; enabling swap brought the hibernation function back. Reboots take *FOREVER* to reload all the crap that XP needs on an enterprise-connected system - systems management, anti-virus agent, software distribution tool, the required RAM defragger which allows XP to "stand by" when you've got more than 1GB of RAM, plus IM, etc.

    So I reboot as infrequently as possible and consider "stand by" and "hibernate" required functions. As a result, I live with XP and paging enabled, and tolerate the blasted system "unpaging" apps that have been idle a short time.

    Poo!

  • by lgw ( 121541 ) on Thursday December 04, 2008 @07:09PM (#25996019) Journal

    The answer is basically to free up RAM for disk cache, based on a belief (sometimes backed up by benchmarks) that for typical use patterns, the performance hit of sometimes having to swap RAM back into physical memory is outweighed by the performance gain of a large disk cache.

    We're rapidly getting to the point where there's enough RAM for not only all the programs you're running, but all of the disk that those programs will access! Paging memory out to disk just doesn't make much sense anymore. I've run Windows with no page file since Win2000 came out, and never had a problem with that.

    My current (non-gaming) desktop box has 8GB of RAM, and cost me about $1000. I rarely use that much memory for the combined total of apps, OS footprint, and all non-streaming files (there's no point in caching streaming media files on a single-user system, beyond maybe the first block).

    I expect my next $1000 system in a few years will have 64GB of RAM, at which point there really will be no point in using a page file for anything. And with a solid-state hard drive, I'm not sure there will be any point in read caching either (though write caching will still help I guess).

  • by hey! ( 33014 ) on Thursday December 04, 2008 @07:09PM (#25996023) Homepage Journal

    Memory exists to be used. If memory is not in use, you are wasting it.

    While I grant this statement is in a sense true, a system designer would do well to ponder the distinction between "not used" and "freely available".

    RAM that is not currently being used, but which will be required for the next operation is not "wasted"; it is being held in reserve for future use. So when you put that "unused" RAM to use, the remaining unused RAM, plus the RAM you can release quickly, has to be greater than the amount of physical RAM the user is likely to need on short notice. Guess wrong, and you've done him no favors.

    I'm not sure what benchmark you are using to say Vista's vm manager is "reasonably smart"; so far as I know, no sensible vm scheme swaps out pages if there is enough RAM to go around.

    My own experience with Vista over about eighteen months was that it is fine as long as you don't do anything out of the ordinary, but if you suddenly needed a very large chunk of virtual memory, say a GB or so, Vista would be caught flat footed with a ton of pages it needed to get onto disk. Thereafter, it apparently never had much use for those pages, because you can release the memory you asked for and allocate it again without any fuss. It's just that first time. What was worse was that apparently Vista tried to (a) grow the page file in little chunks and (b) put those little chunks in the smallest stretch of free disk it could find. I had really mediocre performance with my workloads which required swapping with only 2-3GB of RAM, and I finally discovered that the pagefile had been split into tens of thousands of fragments! Deleting the page file, then manually creating a 2GB pagefile, brought performance back up to reasonable.

    One of the lessons of this story is to beware of assuming "unused" is the same as "available", when it comes to resources. Another is not to take any drastic steps when it comes to using resources that you can't undo quickly. Another is that local optimizations don't always add up to global optimizations. Finally, don't assume too much about a user.

    If I may wax philosophical here, one thing I've observed is that most problems we have in business, or as engineers, don't come from what we don't know, or even from the things we believe that aren't true. It's the things we know but don't pay attention to. A lot of that is, in my experience, fixing something in front of us that is a problem, without any thought of the other things that might be connected to it. Everybody knows that grabbing resources you don't strictly need is a bad thing, but it is a kind of shotgun optimization where you don't have to know exactly where the problem is.

  • by Mprx ( 82435 ) on Thursday December 04, 2008 @07:12PM (#25996057)

    It might save 60 seconds, but it's saving the wrong 60 seconds. I'm not going to notice everything being very slightly faster, but I'll notice Firefox being swapped back from disc. I only care how long something takes if I have to wait for it.

    Kernel developers seem to mostly care about benchmarks, and interactive latency is hard to benchmark. This leads to crazy things like Andrew Morton claiming to run swappiness 100 (swappiness 0 is the only acceptable value IMO if you need swap at all). I don't use swap, and with 4GB of RAM I never need it.

  • Re:I prefer none. (Score:5, Interesting)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday December 04, 2008 @07:20PM (#25996153) Homepage Journal

    One will insist that, no matter how much memory is currently allocated, it makes more sense to swap out that which isn't needed in order to keep more free physical ram.

    Most of the people in this camp are coming from a Unix background where this is actually implemented effectively. For example, the FreeBSD machine next to my desk has 6GB of RAM, but even with about 3GB free, I'm currently about 1GB into my 16GB of swap. (Why 16? Because it's bigger than 6 but still a tiny part of my 750GB main drive.)

    FreeBSD, and I assume most other modern Unixes, will copy idle stuff from RAM to swap when it's sufficiently bored. Note that it doesn't actually delete the pages in physical memory! Instead, it just marks them as copied. If those processes suddenly become active, they're already in RAM and go on about their business. If another process suddenly needs a huge allocation, like if my site's getting Slashdotted, then it can discard the pages in RAM since they've already been copied to disk.

    That is why many Unix admins recommend swap. It helps the system effectively manage its resources without incurring a penalty, so why wouldn't you?

    It's my understanding that Windows never managed to get this working right, so a lot of MS guys probably prefer to avoid it.

  • by Anonymous Coward on Thursday December 04, 2008 @07:23PM (#25996193)

    Vista's memory manager is actually reasonably smart

    We must be using different Vista implementations. I have VU64 with 2GB of memory on a dual core, and I had better performance in OS/2 with 16MB on a 386/20. As far as I can tell, Vista takes the least-used stuff and keeps it in RAM while paging out executables that are running, so switching to another application is abysmally slow: 30+ seconds to get the lock menu from Ctrl-Alt-Del, or the fingerprint scanner taking a similar amount of time before it will wake up and recognize a swipe at login. Just now, coming back to edit from preview, I had to wait over 15 seconds to get the Firefox right-click spellcheck up that I used less than 60 seconds ago! What the heck is it doing that these frequently accessed tasks have such low priority, aside from looking for copyright infringement on my unsupported sound card (both the Lenovo integrated chipset and an add-in card that claimed Vista compatibility)? Vista paging/caching/drivers have a LONG way to go to even be useful, let alone smart.

  • Re:File - Save (Score:4, Interesting)

    by he-sk ( 103163 ) on Thursday December 04, 2008 @07:26PM (#25996241)

    Explicit saving is a crutch based on limitations of early computers when disk space was expensive. Unfortunately, people are so used to it that they think it's a good idea. Kinda like having to reboot Windows every so often so it doesn't slow down. (I know that it's not true anymore.)

    Think about it, when I create a document in the analog world with a pencil I don't have to save it. Every change is committed to paper.

    You're right, of course, the added value with digital documents is that I can go back to previous versions. But again, it's implemented using a crutch, namely Undo and Redo. Automatic file versioning is the obvious answer.

    Having many intermediate versions lying around is a non-problem. First of all, only deltas have to be saved, with a complete version saved once in a while to minimize the chance of corruption. Secondly, just as with backups, the older the version is, the fewer intermediate versions you need. Say one version every minute for the last hour. Then one version every hour for the day before that. One version every day for the week before that. And so on.

    A filesystem that supports transparent automatic versioning is such a no-brainer from a usability standpoint that I can't figure out why nobody has done it already. I guess it must be really hard.

    BTW, an explicit save can be simulated on a system with continuous saving by creating named snapshots.
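
    A rough sketch of the thinning schedule suggested above (the bucket widths are just the ones from this comment; a real versioning filesystem would tune them): a version survives only if it is the newest one in its age bucket, and buckets grow coarser with age.

        /* Sketch: thin out automatic versions so that older history is
           kept at coarser granularity.  A version survives if it is the
           newest one in its time bucket. */
        #include <stdio.h>
        #include <time.h>

        /* Bucket width as a function of a version's age, in seconds. */
        static time_t bucket_width(time_t age)
        {
            if (age < 3600)          return 60;         /* last hour: one per minute */
            if (age < 24 * 3600)     return 3600;       /* last day:  one per hour   */
            if (age < 7 * 24 * 3600) return 24 * 3600;  /* last week: one per day    */
            return 7 * 24 * 3600;                       /* older:     one per week   */
        }

        /* versions[] is sorted newest-first; sets keep[i] = 1 for survivors. */
        static void thin(const time_t *versions, int n, time_t now, int *keep)
        {
            time_t last_bucket = -1;
            for (int i = 0; i < n; i++) {
                time_t age = now - versions[i];
                time_t bucket = versions[i] / bucket_width(age);
                keep[i] = (bucket != last_bucket);  /* newest hit in each bucket wins */
                if (keep[i])
                    last_bucket = bucket;
            }
        }

        int main(void)
        {
            time_t now = time(NULL);
            /* hypothetical history: one version every 10 seconds for 3 hours */
            enum { N = 1080 };
            time_t v[N];
            int keep[N];
            for (int i = 0; i < N; i++)
                v[i] = now - (time_t)i * 10;

            thin(v, N, now, keep);

            int kept = 0;
            for (int i = 0; i < N; i++)
                kept += keep[i];
            printf("kept %d of %d versions\n", kept, N);
            return 0;
        }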

  • Re:File - Save (Score:4, Interesting)

    by JesseMcDonald ( 536341 ) on Thursday December 04, 2008 @07:43PM (#25996459) Homepage

    Continuous save can be made workable with some reasonable rules for discarding unneeded versions. First, keep every version the user explicitly tags, as well as the baseline for the current session (to allow reversion). For the rest, devise a heuristic combining recency and amount of change to select old, trivial versions to be discarded. The further back you go into the history, the more widely spaced the checkpoints become. This is easier for structured documents, but with proper heuristics can also be applied to e.g. plain text. Temporal grouping (sessions, breaks in typing, etc.) can provide valuable clues in this area.

    Currently most programs only have two levels of history: the saved version(s), and the transient undo buffer. There's no reason that this sharp cut-off couldn't be turned into a gradual transition.

  • Re:Good Advice (Score:3, Interesting)

    by obeythefist ( 719316 ) on Thursday December 04, 2008 @08:06PM (#25996771) Journal

    From what I've seen under WinXP, it will most likely crash some apps, panic slightly, throw the user overboard, then create a pagefile on some random disk and go back to sucking its thumb.

  • by dch24 ( 904899 ) on Thursday December 04, 2008 @08:59PM (#25997325) Journal
    You obviously don't fit those requirements. Real Photoshop, FCP, and Avid users aren't concerned about swap; they're concerned about disk I/O speeds, and they don't want Windows swapping things to disk.

    As an admin for a video editing shop, we turned off swap long ago. The programs we use already know how much RAM and how much disk ("cache") to use, and they don't want anyone getting in their way.

    Especially not swapping, which thrashes the seek time.
  • by phr1 ( 211689 ) on Thursday December 04, 2008 @09:12PM (#25997425)
    The window system is written as if it were just like any other compute task, to be scheduled as the OS sees fit (maybe with bumped-up priority, but that's not enough to make sure the right thing happens). It instead has to be scheduled as a realtime task with guaranteed bounds on the response time for low-overhead user operations, which means locking stuff in RAM even at the cost of more swapping on less interactive tasks. That also means turning on realtime kernel support in systems that don't already have it active. I've thought for a while that the Linux window system should be rewritten by game programmers, who tend to have some clue about how to identify the parts of an interactive program that have to be responsive, and to make those parts actually be responsive.
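
    The plumbing for that already exists on Linux, for what it's worth; a rough sketch of "lock it in RAM and schedule it as realtime" for a single process (needs root or the right capabilities, and SCHED_FIFO can starve everything else if misused):

        /* Sketch: pin a latency-sensitive process in RAM and give it a
           realtime scheduling class, so neither paging nor an overloaded
           CPU scheduler can stall its response to the user. */
        #include <sched.h>
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Lock all current and future pages into physical memory:
               no page of this process can be swapped out. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall (need CAP_IPC_LOCK / root?)");
                return 1;
            }

            /* Move to the SCHED_FIFO realtime class at a modest priority. */
            struct sched_param sp = { .sched_priority = 10 };
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
                perror("sched_setscheduler (need CAP_SYS_NICE / root?)");
                return 1;
            }

            puts("running locked in RAM with realtime priority");
            /* ... event loop for the latency-critical work would go here ... */
            return 0;
        }
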
  • by Reziac ( 43301 ) * on Thursday December 04, 2008 @10:41PM (#25998217) Homepage Journal

    My understanding is that Windows uses the pagefile primarily for single-use and rarely-used stuff, a great deal of which is read only at boot-time or when programs start up. So it can indeed "fill up" even when there's plenty of RAM left over. And you don't really want rarely-used data cluttering RAM if your RAM supply is limited.

    BUT... "rarely used" is a relative term. There is a registry setting (which I don't know off the top of my head, though I'm sure a search would find it; see the sketch at the end of this comment) that forces Windows to keep all that "rarely used" OS-related stuff in RAM instead of paged out, and I gather this setting improves performance immensely -- for the same reasons as killing the pagefile entirely does (reads from disk are painfully slow compared to reads from RAM).

    As to the slow boot -- did you have the pagefile set to a permanent size? And preferably on its own partition so it can't become fragmented? A temporary pagefile that has to be written at boot time, and fragmented all to hell (which does nothing for stability) because the disk is fragmented, is about the slowest/worst way to do it.

    First thing I do on any machine that's going to have a pagefile is give the nasty thing its own partition, which the user is forbidden to use for anything else. If for some reason a separate partition is impossible, I kill the pagefile and all tempfiles, defrag, then reset the pagefile at a fixed permanent size, so it won't refragment.
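
    The setting alluded to above is most likely the DisablePagingExecutive value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management; setting it to 1 keeps kernel-mode code and drivers resident instead of letting them be paged out. Normally you'd just flip it in regedit; here is a hedged Win32 sketch of the same thing (requires admin rights and a reboot to take effect):

        /* Sketch (hedged): set DisablePagingExecutive=1 so kernel-mode code
           and drivers stay resident in RAM rather than being paged out. */
        #include <stdio.h>
        #include <windows.h>

        int main(void)
        {
            HKEY key;
            DWORD value = 1;
            LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                    "SYSTEM\\CurrentControlSet\\Control\\"
                                    "Session Manager\\Memory Management",
                                    0, KEY_SET_VALUE, &key);
            if (rc != ERROR_SUCCESS) {
                fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
                return 1;
            }
            rc = RegSetValueExA(key, "DisablePagingExecutive", 0, REG_DWORD,
                                (const BYTE *)&value, sizeof(value));
            if (rc != ERROR_SUCCESS)
                fprintf(stderr, "RegSetValueExA failed: %ld\n", rc);
            RegCloseKey(key);
            return rc == ERROR_SUCCESS ? 0 : 1;
        }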

  • by Anonymous Coward on Thursday December 04, 2008 @10:56PM (#25998337)

    What am I missing?

    If you have 4GB RAM, and 4GB pagefile, how are you better off than if you have 8GB RAM and no pagefile?

    Either way, you run into a brick wall when you hit 8GB.

    If I start to run out of memory, you say add 4GB of swap; but why not add 4GB RAM?

    That said, there can be definite performance benefits of swapping unused pages to disk to free the memory for caching purposes, and that's a valid argument; but I've never understood the "because you'll run out" argument.

  • by Eskarel ( 565631 ) on Friday December 05, 2008 @01:15AM (#25999213)
    Yeah, and get the sack because you hung up on 90% of your callers.
    People treat tech support like shit. Sometimes it's because they're assholes and sometimes it's because the policies at the place they're calling mean you can't do anything else.
    I treated Dell tech support people like dirt not because I wanted to, but because it was the only way to bypass the "I'm paid to not send a tech out" policy Dell had put in place. I didn't want their help, or their support knowledge, I wanted them to send a tech out to replace the part that had broken, or to send me a new part so I can do it myself, I'd already done all the diagnostics.
  • by rjstegbauer ( 845926 ) on Friday December 05, 2008 @09:01AM (#26001625)

    We should all be glad that those days are far behind us.

    I would disagree.

    For the time that there was a 640K limit, software designers for just about *every* application had to worry about size and performance.

    Now, with 4GB of memory, 500GB disks, and 3GHz dual-core processors, *anyone* can write an application that works without worrying about efficiency.

    I kinda wish we would hit another brick wall like that so designers have to actually architect what they are building.

    I think it would be good for the software engineering discipline.

    Randy

  • by Bozovision ( 107228 ) on Friday December 05, 2008 @09:01AM (#26001635) Homepage

    I went to a talk given at Microsoft Research, here in Cambridge, UK, a year or two ago, the theme of which was the forthcoming changes we can expect to see in operating systems.

    One of the issues that was discussed was the use of virtual memory/swapping - the technique was invented in Cambridge I think. The idea behind virtualising resources is to be able to share resources amongst competing programs. But in a world of 8GB RAM, the point was made that RAM is no longer a limited resource which needs sharing, and consequently, except for when you are running programs like simulations which need vaaaast amounts of RAM to run, virtual memory isn't needed.

    The speaker said that Microsoft had done some experimentation with turning virtual memory off on computers with large amounts of memory, but that it hadn't gone well. One problem is that some programs are written with the assumption that virtual memory is present and will be needed, so they explicitly swap pages in and out. These programs die. Unfortunately, at the moment Windows is one of these.

    So, good idea in principle on a modern system running a set number of tasks, but not possible at the moment in practice.

    Jeff

  • by Scoth ( 879800 ) on Friday December 05, 2008 @10:38AM (#26002503)

    Ugh, you bring back bad memories for me. I was working DSL support at Earthlink when 9/11 happened, and there was a lot of telco equipment in/around the WTC. I had people who lived in NYC on the phone while the buildings were still burning, wanting to know why their DSL was down and when it'd be back up. A couple even complained that they had trouble getting through. Completely shocked me.
