javester asks: "Has anybody given any thought to compiling high-availability metrics for the different OSs on a non-clustered system? Is 'high-availability Windows' an oxymoron? Can you even get close to the five 9s (99.999%, which is about 5 minutes of downtime a year) on a typical stand-alone Windows 2000 Server running IIS 5 with the typical patch-it-up, three-finger-salute routine? If you are just serving up a web site on a plain-vanilla Windows box, how highly available can it get? By my calculations, with the typical reboot cycle being 3 minutes, and with a security patch requiring a reboot released on a weekly basis, a stand-alone Windows box loses about 156 minutes a year just applying patches! So it can never get past the third 9! In *nix environments, reboots are not required as often (except for kernel changes - how APT!), since you can recycle the appropriate daemon without restarting. But really, has anybody made a formal study?" Now wouldn't this make an interesting college project?
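The arithmetic behind the question can be sketched in a few lines; the figures (3 minutes per reboot, one patch reboot per week) are the questioner's assumptions, not measured data:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year

def availability(downtime_minutes: float) -> float:
    """Fraction of the year the system is up, given total downtime."""
    return 1 - downtime_minutes / MINUTES_PER_YEAR

# Five nines (99.999%) leaves a budget of only ~5.26 minutes per year.
five_nines_budget = MINUTES_PER_YEAR * (1 - 0.99999)

# The questioner's scenario: 52 weekly patch reboots at 3 minutes each.
patch_downtime = 52 * 3  # 156 minutes

print(f"five-nines budget: {five_nines_budget:.2f} min/year")
print(f"availability with patching: {availability(patch_downtime):.6%}")
```

With 156 minutes of downtime the box sits at roughly 99.97% - comfortably above three nines (which allows about 526 minutes a year) but well short of four nines (about 53 minutes), which matches the claim that it "can never get past the third 9."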