How Do I Provide a Workstation To Last 15 Years?

An anonymous reader writes "My father is a veterinarian with a small private practice. He runs all his patient/client/financial administration on two simple workstations, linked with a network cable. The administration application is a simple DOS application backed by a database. The current systems, a 66 MHz Pentium and a 486, both with 8 MB of RAM and 500 MB of hard drive space, are getting a bit long in the tooth. The 500 MB hard drives are filling up, and the installed software (Windows 95) is getting a bit flaky at times. My father has asked me to think about replacing the current setup. I do know a lot about computers, but my father would really like the new setup to last 10-15 years, just like the current one has. I just don't know where to begin thinking about that kind of system lifetime. Do I buy, or build myself? How many spare parts should I keep in reserve? What will fail first, and how many years down the line will that happen?"

  • by MouseR ( 3264 ) on Sunday April 05, 2009 @04:41PM (#27468057) Homepage

    Or just get quality components to begin with.

    At the office, I'm still running a 350 MHz PowerMac G4 (the bugger is 10 years old) as a server.

    All original components. None have failed. The system still has its original bleeding-edge 320 MB of RAM and runs Mac OS X Tiger.

    It was given a 40 GB Barracuda drive that had been sitting on a shelf for years and had never been used.

    We use this machine as a lowest-common-denominator software test platform for a product in development, as a distributed networked compiler farm node, and as a backup server for another, more important machine (it backs up the backup machine's main OS, not its files).

    MS can argue all it wants about Apple making merely "aesthetic" machines; they actually use good components. The current Xserve hardware is another case in point.

  • by Anonymous Coward on Sunday April 05, 2009 @04:45PM (#27468093)

    Most of the failures of my home machines in the last six years have been fans and/or the power supplies housing them. (sometimes hard to tell what died first) With six desktop class machines running in the house, I've only had one drive failure, but I've replaced four power supplies and several frozen case fans. These aren't gaming rigs, just basic surf/email/homework boxes.

    That said, with the price of used off-lease gear on eBay and elsewhere these days, you could pick up machines that would run rings around the existing systems for under $300.

  • by amcdiarmid ( 856796 ) <amcdiarm.gmail@com> on Sunday April 05, 2009 @04:51PM (#27468123) Journal

    Mod parent up.

    You need DOS and Win95 compatibility: you can virtualize the existing system onto new hardware, which makes it portable and easy to back up (as a virtual machine).

    As an aside, I always thought Win95 was a dog. You may wish to check to see if XP compatibility mode will work, or check (ha ha) to see if WINE will work. (Actually, trying the application set with WINE is not a bad idea - it should be compatible with Windows 95 by now.)

    Remember, it could be worse: I have a friend who deals with a vet who has an old Xenix system - they buy parts off eBay in bulk ;)

  • by Shivani1141 ( 996696 ) on Sunday April 05, 2009 @04:58PM (#27468199)
    Not so with flash drives. I was looking for an MTBF equivalent for flash drives and, not finding one, I started looking into the maximum number of writes, and found an article describing a sort of half-life figure for flash drives. For the drive I have installed in my media center (for quiet), an OCZ model, I found that I'd have to be writing to the drive at maximum speed 24/7 for 18 years before the available capacity of the drive decreased by half. They're quite long-lived, if the maximum-writes-per-sector figures are to be believed.
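
    If you want to sanity-check that kind of claim, the usual back-of-the-envelope endurance estimate is just capacity times rated erase cycles divided by sustained write speed, assuming perfect wear levelling. A rough sketch - the capacity, cycle count and write speed below are illustrative assumptions, not the specs of my drive:

        # Back-of-the-envelope SSD write endurance, assuming perfect wear levelling.
        # All figures are illustrative assumptions, not the specs of any real drive.
        capacity_bytes = 64 * 10**9       # assume a 64 GB drive
        erase_cycles   = 2_000_000        # assume SLC-class rated erase cycles
        write_rate     = 80 * 10**6       # assume 80 MB/s of sustained writes, 24/7

        total_writable = capacity_bytes * erase_cycles   # bytes written before wear-out
        years          = total_writable / write_rate / (365 * 24 * 3600)
        print(f"~{years:.0f} years of non-stop writing before wear-out")

    With smaller drives or lower cycle ratings the same formula gives much smaller numbers, so the answer depends heavily on which figures you plug in.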
  • by mariushm ( 1022195 ) on Sunday April 05, 2009 @05:11PM (#27468337)

    Build a computer with a processor that has very low frequency, something like an AMD Sempron LE-1300.

    It runs at 2.3 GHz by default, but you should be able to lower it to something like 1 GHz or even less, which improves compatibility with DOS (if there are any incompatibilities at all) and also means the computer can run without a fan on the processor.

    You can solve the power supply fan problem by buying a passively cooled power supply.

    You could also get an SSD, or maybe a cheap Flash-to-IDE/SATA adapter and a 1 or 2 GB CompactFlash card, for DOS.

    Or you could simply use a virtual machine, or even DOSBox (if you don't need complex printing functions).

  • by prefec2 ( 875483 ) on Sunday April 05, 2009 @05:13PM (#27468357)

    The electrolytic capacitors on the main board are also a typical part to fail. The hotter the system, the shorter their lifetime, so a cool motherboard and a cool system are required.

  • by mysidia ( 191772 ) on Sunday April 05, 2009 @05:15PM (#27468371)

    Keep in mind that it may be prudent to pick less-reliable hardware that should still last 4 or 5 years (most likely), over slightly more-reliable hardware, WHEN the price difference makes it more cost-effective to ANTICIPATE replacement.

    Even the most reliable components may be expected to fail in 5 years.

    I think he's been very fortunate that his setup has lasted 15 years. On average, a computer lasts about 5 years before some hardware failure occurs. To be honest, in many cases newer hard drives have been less reliable and have not lasted as long.

    The higher data density results in more failures, not fewer. More bits (at essentially the same defect rate) means it's much more probable that a larger drive has at least one defective sector.

    Power supplies can fail within 1 year or 10. It's random, so there can be no guarantee that the setup will last 15 years without any hardware replacement. (Even using the hardware he has right now, something could have failed in 1 year. A drive could go completely bad tomorrow.)

    So get a very decent power supply, preferably one that is efficient at the anticipated load (which you should calculate for the chosen hardware), but can handle a lot more.

    Using SSDs in a RAID 1 array would improve reliability. Choose ones with a decent cache and wear levelling; provided your app is reasonable, they should last 50 years at a typical use level, and more likely the RAM dies first.

    Unfortunately, suitable SSDs of any reasonable size are also highly expensive. The cheaper ones don't have the few gigabytes or so of battery-backed RAM cache that would be necessary for high speed - which, come to think of it, may itself be a reliability risk, since most types of rechargeable batteries don't last 15 years.

    But I expect you don't need high speed for a small veterinary database, so the most inexpensive SLC or MLC may be just what the doctor ordered.

    Another possible application for flash is simply to boot off of it, and then use an ordinary mechanical hard drive for storing your data. This way, mechanical wear is not introduced when you boot your OS, and writes are rarely required.

    However, Windows XP (or Vista) is not suitable for this, as it likes to write to its own boot media. A Linux-based kiosk with a mysql-backed database app of some sort could work great there.

    Make sure you get a lot more space than you need: try to fit everything within 5 or 6 GB, and use a 50 GB drive, so you can have an "active" partition and a "backup" partition.

    Minimize mechanical wear on your drives by getting enough memory to run the workstations without a swapfile or pagefile. i.e. get 1GB or 2GB (a workstation that can use ECC memory is better, as you reduce the small possibility of silent data corruption), and make sure you disable all paging/swapping features within your OS.

    Use the most reliable drives available for a reasonable cost; these are probably NOT 1 TB 7200 RPM drives, but more likely 30 GB 5000 RPM drives that come with a 3 year or 5 year warranty.

    Have each workstation back up the other, so there are always two copies of the database. This is in addition to daily backups to external media stored offsite.
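
    Something like the following, run as a nightly scheduled task, would cover the cross-backup plus a dated copy for offsite rotation. It's only a sketch: the paths, the share name and the database filename are made-up placeholders, not anything from the actual setup.

        # Nightly backup: mirror the practice database to the other workstation
        # and keep a dated copy on a removable drive for offsite rotation.
        # All paths and names below are hypothetical placeholders.
        import datetime
        import pathlib
        import shutil

        db_file     = pathlib.Path(r"C:\vetapp\practice.db")   # hypothetical database file
        peer_share  = pathlib.Path(r"\\WORKSTATION2\backup")   # the other workstation's share
        offsite_dir = pathlib.Path(r"E:\offsite")              # removable USB drive

        stamp = datetime.date.today().isoformat()
        shutil.copy2(db_file, peer_share / db_file.name)
        shutil.copy2(db_file, offsite_dir / f"{db_file.stem}-{stamp}{db_file.suffix}")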

    Unless you are using a UNIX/Linux OS with a journalled filesystem (or something like ZFS), it's pretty much a given that you will need an OS recovery at least once.

    Each workstation should have two drives, with a "working" partition and a backup partition that you manually refresh every few months. Even better if they are separate physical disks (but again, more expensive).

    Reliability will be maximized if you use a UNIX- or Linux-based application, minimize unnecessary reads and writes to your mechanical media, and minimize unnecessary load (and therefore heat) on your hardware.

    In any case, the usernames logging into the worksta

  • by ThePhilips ( 752041 ) on Sunday April 05, 2009 @05:16PM (#27468375) Homepage Journal

    Any chances that you still have the link(s)?

    Because my reading of Anand's research [anandtech.com] tells me that in active, non-stop use an SSD would fail in about the same time as normal laptop 1.8"/2.5" hard drives - 1-1.5 years. The limit on the number of rewrite cycles is high (~100k), yet is quite easy to reach.

  • No way... (Score:1, Informative)

    by Anonymous Coward on Sunday April 05, 2009 @05:22PM (#27468439)

    Wow... your advice might be somewhat interesting in the general sense, but I think it totally misses the point for this particular question.

    The guy has a legacy DOS app he wants to keep running. It runs fine on a 486 computer. Redesigning this system to be a quadcore server with new webapps etc is crazy! (not to mention, web technologies are about the LEAST stable area of computer development at the moment). The guy is running a 486 with a 500mb harddisk, and you want him to upgrade to a quadcore server with tons of memory and room for multiple disk expansions? I just don't agree with this at all..

    I also doubt that in 15 years SATA drives will no longer be available. Hell, I doubt that IDE drives will be that hard to find in 15 years, though I could be totally wrong on that one...

    The solution--IMHO of course--is what many others have suggested: virtual machines. If you need two computers (as the poster mentioned), and REALLY do not want to worry about any changes in setup for 10-15 years, then buy three or four modest desktops (and I would highly, highly recommend a backup server that does regular backups and lives offsite... this can run Linux or whatever floats your boat). Design the system so that if one computer dies you can swap in another painlessly (restore from backup, unplug old, plug in new, done). Beyond that, UPS UPS UPS. IMHO, even with reliable component brands, getting a computer to last 15 years is still a crapshoot. I've done it with crap generic brands, while good brands have had unexpected failures. Plan for eventual hardware failures and you won't be disappointed :)

    I've been in a similar situation maintaining servers for a small family business. Depending on your age now, you may have a lot of time and flexibility to help your parents, but if you end up moving elsewhere - college, grad school, new job, whatever - you want the system to be simple and to take as little of your time as possible! It's best for all involved. (Also consider VNC/Remote Desktop/SSH/etc. to allow you to help out remotely.)

  • by Vadim Makarov ( 529622 ) <makarov@vad1.com> on Sunday April 05, 2009 @05:23PM (#27468455) Homepage
    Get well-designed fans [noctua.at]? Might not be worth the trouble for computers, but we get them for self-built scientific equipment with potentially long lab life.
  • by Rix ( 54095 ) on Sunday April 05, 2009 @05:39PM (#27468577)

    With your father's data, that is, rather than Windows 95 just barfing over the drive.

    I pray this system isn't connected to the internet in any way, because if it is it must have hundreds of worms crawling around in it. Windows 95 is of course terrible for this, but any system you plan to keep running unmanaged for 15 years should be kept far from any network and physically secure.

    I really can't imagine a single veterinary practice generating gigabytes of administrivia, nor can I imagine some slapdash DOS application that does generate gigabytes of superfluous data being able to index it once it grows to that size.

    Check to see how much he's really using. If it's small enough, move it over to a flash disk and run the application with DOSEMU or in a VM. Build a system with cleanable/replaceable air filters over the fans, and train your father to back up his data. (If he hasn't had a hard drive fail in 15 years of use, he's damn lucky.)

  • by cbiltcliffe ( 186293 ) on Sunday April 05, 2009 @05:48PM (#27468641) Homepage Journal

    Well, since the old machines are 15 years old, and still running fine, my suggestion would be this:

    Don't buy a new one. Don't build a new one. If you must have a backup computer, find another old machine that's in somebody's basement, garage, or otherwise not being used.
    Build a low-power machine (Celeron, Sempron, whatever) with quality parts (3 year warranty, at least), with RAID, install Linux on it, and use it as storage for the database.
    Pull the cover from the old machines, take an air compressor to them to clean them out, then replace all the fans. Again, use high quality parts.
    Format and reinstall Windows on both, so the flakiness goes away. Install all updates, and the customer/patient management database, and configure it all to access the data on the server.
    Then, pull the drive, and use something like Clonezilla on a laptop with a USB-IDE adapter to take an image of the drive and save it on the server.
    Now you've got a couple of clean machines, with fresh software, redundancy for the data, and nobody has to deal with a change as drastic as Win95 to Vista.
    If a drive fails, you've got an image of the software preconfigured.

    After you've done this, keep an eye out for old drives in the 1-5GB range. Try to get at least 3 or 4 that work well, so you've got spares for when one fails.

    Barring a power surge or something like that, a drive failure is the most likely failure in anything this old, as the hardware is just too low-powered to generate enough heat to cause many other problems.
    And if you need them, I've got a couple of AT type power supplies kicking around that work fine.

    Also, make sure a proper backup is done of the data on the server. If he's got Internet access, encrypt it (GnuPG with a strong password or key) and send it to a gmail account, or something like that. Otherwise, a removable or USB drive that he can take offsite.
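
    A hypothetical sketch of that encrypt-and-ship step, just shelling out to the gpg command line from Python (the archive path and passphrase file are placeholders, and GnuPG 2.x may additionally want --pinentry-mode loopback for unattended use):

        # Encrypt the nightly backup archive with a symmetric passphrase before it
        # leaves the building. All paths below are hypothetical placeholders.
        import datetime
        import subprocess

        backup_file = "/srv/backups/vetdata.tar.gz"
        out_file = f"{backup_file}.{datetime.date.today().isoformat()}.gpg"

        subprocess.run(
            [
                "gpg", "--batch", "--yes",
                "--symmetric", "--cipher-algo", "AES256",
                "--passphrase-file", "/root/backup-passphrase.txt",  # keep a copy of this offline
                "--output", out_file,
                backup_file,
            ],
            check=True,
        )
        # out_file can now be mailed to the offsite mailbox or copied to the USB drive.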

  • My list, FWIW (Score:1, Informative)

    by Anonymous Coward on Sunday April 05, 2009 @06:05PM (#27468817)

    *Hard drive. It was a high-quality drive, but it failed after 7 years. Good hard drives may last longer, but there is a strong stochastic component to drive longevity. You can't guarantee 15 years. No way.
    *Fan on video card. I didn't replace it because the video card didn't get hot, not even when playing 3D intensive games.
    *Mouse. 5 years (I bought it later). The hand-on-mouse detection sensor failed because of shoddy construction, so you may get a better lifetime, but if you're wise you'll prioritise ergonomics instead.
    *CD burner. 8 years, software failure. Generic drivers didn't work with the burner, and the official drivers failed after an OS update. The manufacturer is belly up, so I bought a new drive.
    *Keyboard. 8 years, but I wasn't very nice to it. On inspection, the cause was probably loss of conductivity in a signal line due to corrosion. But I discovered the design of the keyboard was particularly vulnerable to this, so a good keyboard will last longer. Or just be nice to it in the first place. I also have a working keyboard from the '80s, but it isn't very comfortable and it makes enough noise to drive you around the bend and down the sewer.
    I think you should tell him that a) his system only lasted so long because he got lucky and b) that since you can't guarantee such a long lifetime, you won't design for it because you would be making a promise you can't keep.

  • by Dahamma ( 304068 ) on Sunday April 05, 2009 @06:23PM (#27468991)

    My father is also a veterinarian with a private practice... I don't know enough about the exact details of his software but can give you the high level, as well as issues he has had, etc.

    First, he has gone through a few (2 or 3 not sure) completely different systems (hardware and software) in the last ~20 years of having a "computerized" practice.

    When they got the first system the practice was much smaller - 3 vets in the partnership and a handful of employees. Over time it has grown to employ another 3 full time vets and a much larger staff. So that's question 1: it may be small now, but do you expect it to grow? 2 networked workstations won't be enough if he may have 20+ employees in the future, and deciding on something today (hardware and software) that at least supports upgrades will go a long way to prevent having to redo the whole thing later.

    Question 2 is related to the nature of his practice. Is it a relatively low-tech, rural practice or is he planning on modernizing/keeping up with technology? Back in the 70's the most high-tech equipment in most practices was the x-ray machine. Since then, my dad's practice has added an ultrasound, laparoscope, and most recently a digital x-ray that allows inexpensive, near instant access to results (without having to develop, etc) as well as convenient storage, display on a number of terminals in exam rooms, even convenient consults from remote specialists. That's in addition to all of the other benefits that come with professional veterinary software packages, like integration with outside labs to get faster test results, tracking of inventory and reordering, etc.

    Question 3: how much does he care about his data/computer systems? If they go down, is it a minor inconvenience or a crippling liability? If the latter, do you really want to build something for him with off-the-shelf parts and no support? Are you available for 24/7 support if something goes wrong? My dad's practice has 24/7 1-hour business support (from IBM? or something similar). If a system goes down, an HDD dies, the network is flaky, etc., they will have someone there in less than an hour to replace hardware, diagnose issues, restore backups, etc. Sure, that service costs money, but it has been necessary several times over the past couple of decades and saved their ass when it happened. On the other hand, if your father is basically using the machines for payroll, inventory, and bookkeeping, he might be OK with a simple backup system and your help when something goes wrong...

    Anyway, I know my dad's practice now has a central server (I think just standard workstation HW with RAID and nightly backups?), a few terminals (I believe all Windows-based, since that's what the veterinary SW runs on), and most recently a medical grade monitor and high-res video card for x-ray display, along with a couple of WiFi laptops they use in exam rooms to show x-rays, look up histories, data entry, etc. All of it comes with 24/7 HW and SW support, which for their type of usage (and the fact they don't want or need a full time IT employee) I'd consider a must have...

    Anyway, hope that helped. But to summarize I'd rank the goals as (not counting cost, which of course needs to be factored in depending on personal situation):

    1) minimize downtime/lost revenue
    2) allow modernization/support for new technologies as necessary
    3) scalable if/when the practice grows in the future

    What I would most definitely NOT worry about is the latest fancy hardware. If he's still surviving on a 486 with 8MB RAM, any reasonable modern HW will be cheap and more than enough. By all means go for reliability over performance, especially if you are doing it yourself. If buying HW/SW/support from a professional company, they will make sure the HW is adequate and reliable (since it costs THEM much more in the long run if it isn't).

  • by Shivani1141 ( 996696 ) on Sunday April 05, 2009 @06:34PM (#27469085)
    Ah, found one of my original articles; oddly, the one corroborating it 404s now. Please read it and draw your own conclusions. http://www.storagesearch.com/ssdmyths-endurance.html [storagesearch.com]
  • by nemesisrocks ( 1464705 ) on Sunday April 05, 2009 @06:53PM (#27469265) Homepage

    Unfortunately, the mere act of cleaning out the dust in the machine might spell its demise.

    I can't even count the number of times I've thought "hey, there's a lot of dust in that machine, let's clean it", and the machine refused to power back on afterwards...

  • by Glonoinha ( 587375 ) on Sunday April 05, 2009 @07:31PM (#27469563) Journal

    Have you *ever* seen an HDD survive 80 years? Nope. (Ask any SAN admin for references.)

    That's not how MTBF works. It's an aggregate across the entire enterprise. Let's say you populate your infrastructure with 1,000 2.5" SSDs with an MTBF of 1,000,000 hours. In theory, you can assume that you're going to have one drive fail every 1,000 hours (or roughly one failure every 6 weeks, or roughly 9 failed drives each year).
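
    The same arithmetic in a couple of lines, if you want to play with the numbers (these are just the fleet size and MTBF from the example above):

        # Expected failure rate across a fleet of drives, given a per-drive MTBF.
        fleet_size = 1000          # drives in service
        mtbf_hours = 1_000_000     # quoted per-drive MTBF

        hours_between_failures = mtbf_hours / fleet_size             # ~1,000 h, about 6 weeks
        failures_per_year      = fleet_size * 365 * 24 / mtbf_hours  # ~9 drives a year

        print(f"one failure every ~{hours_between_failures:.0f} hours, "
              f"~{failures_per_year:.1f} failed drives per year")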

  • by Anonymous Coward on Sunday April 05, 2009 @07:59PM (#27469785)

    Modern flash drives do block rotation at the hardware level.

  • A few comments. (Score:4, Informative)

    by drolli ( 522659 ) on Sunday April 05, 2009 @08:33PM (#27470043) Journal

    The things I have seen fail are HDs, power supplies (heat because of jammed fans), cheap capacitors (on not-so-cheap mainboards), and monitors.

    1) Keep the power low, so ventilation and heat are not issues.

    2) Use SSDs (the power stays low, and there's no obvious reason for them to fail).

    3) Use a RAID of SSDs (they haven't been out long enough to know how often they fail in practice).

    4) Buy a few more HDs/SSDs of the same type, just in case.

    5) If you don't manage to build a system without fans, dust will be the biggest problem. Keeping the place clean can help.

    6) Even at the risk of being modded down: if DOS did the job for the last 15 years, think about FreeDOS, or DOSEMU running on top of a Linux kernel.

    7) Buy a high quality power supply and mainboard (not a very new one).

    8) Make a Virtual Workstation.

  • by Anonymous Coward on Sunday April 05, 2009 @09:49PM (#27470749)

    Flash ages in a peculiar way: A flash cell is naturally in either 0 or 1 state (let's assume 1). You can flip a bit to 0 and it doesn't age the chip at all. You can not flip it back to 1 though. That "erase" step can only be applied to a whole block of flash cells and this step does age the chip. That's why flash memory durability is specified as a number of "erase cycles".

    You can work with that to radically reduce the number of erase cycles needed to keep a logical-physical mapping. For example, you can add mappings to the end of a list (no erase cycle needed since you're just flipping some bits from 1 to 0) and mark outdated mappings by setting all bits to zero (again, no erase cycle needed). The controller just needs to look for the last mapping which isn't all 1s to find the end of the list and ignore mappings which are all 0s (they've been "deleted"). It can keep a sorted copy of the mappings in internal RAM to increase the lookup performance. Only when the list is about to overflow its allocated memory do you need to erase it and write a condensed version back. (You would use two lists and mark the latest with an incrementing serial number when it has been written completely, to avoid losing the mapping when the power goes out while you compact the list.)

    Actual wear leveling algorithms are proprietary, but you see that wear leveling is certainly possible and does not necessarily need other, more durable memory to work.
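
    If it helps make that concrete, here's a toy Python model of the append-only mapping trick described above. It deliberately skips the two-list / serial-number part that protects against power loss, and uses tiny 16-bit "entries" purely for illustration:

        # Toy model of the append-only logical->physical mapping described above.
        # Slots start erased (all 1 bits); appending a mapping only clears bits, so
        # it costs no erase cycle. "Deleting" sets a slot to all zeros, also free.
        ERASED, DELETED = 0xFFFF, 0x0000
        ENTRIES = 8                       # tiny mapping area for illustration

        flash = [ERASED] * ENTRIES
        erase_count = 0

        def add_mapping(logical, physical):
            """Append a mapping; compact (one real erase) only when the list is full."""
            global flash, erase_count
            entry = (logical << 8) | physical        # pack two 8-bit values
            for i, slot in enumerate(flash):
                if slot == ERASED:
                    flash[i] = entry                 # only 1 -> 0 bit flips: free
                    return
            live = [s for s in flash if s not in (ERASED, DELETED)]
            erase_count += 1                         # the only real erase cycle
            flash = live + [ERASED] * (ENTRIES - len(live))
            add_mapping(logical, physical)

        def remove_mapping(logical):
            for i, slot in enumerate(flash):
                if slot not in (ERASED, DELETED) and (slot >> 8) == logical:
                    flash[i] = DELETED               # again only 1 -> 0 flips: free

        for n in range(20):                          # remap logical page 3 twenty times
            remove_mapping(3)
            add_mapping(3, n)
        print("erase cycles used:", erase_count)     # 2 here, versus 20 done naively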

  • by Anonymous Coward on Sunday April 05, 2009 @09:56PM (#27470785)

    The leveling score for each block is written within the block and not in a separate "index" area.

  • by fractoid ( 1076465 ) on Sunday April 05, 2009 @10:40PM (#27471121) Homepage
    True, but I think the vet in question wants the actual box to last 15 years, so the tips above are useful too.

    I'd just add that you should try to stay away from anything with electrolytic capacitors on it. They're usually the first thing to go - these days some motherboards are advertised as "no electro caps".
  • by David Jao ( 2759 ) <djao@dominia.org> on Sunday April 05, 2009 @11:04PM (#27471335) Homepage

    Any chances that you still have the link(s)?

    Because my reading of Anand's research [anandtech.com] tells me that in active, non-stop use an SSD would fail in about the same time as normal laptop 1.8"/2.5" hard drives - 1-1.5 years. The limit on the number of rewrite cycles is high (~100k), yet is quite easy to reach.

    The article you cite does not contain the 1-1.5 years figure anywhere. How did you get that number? For what it's worth, I've been using solid state drives in both my laptops for more than a year now, with no problems whatsoever.

    Another very important point which often gets ignored is that a solid state drive failure is far more benign than a spinning platter drive failure. When a solid state drive fails, you lose the ability to write data, but you can still read data. On the other hand, failure of a spinning platter drive means that you can't read your data anymore, at least not without sending it to a very expensive data recovery firm.

  • by packeteer ( 566398 ) <packeteer AT subdimension DOT com> on Sunday April 05, 2009 @11:23PM (#27471491)

    You are right that fans and power supplies usually go first. I personally use a somewhat overpowered power supply so it runs cooler and more stably.

    My more general concern is that planning for a 15-year lifespan might be the wrong way of looking at this. It is much better to have a flexible upgrade and repair plan than to try to force something to last far longer than it was intended to. Make no mistake: consumer hardware is not intended to last 15 years.

    I would much rather look for something like software or a database that can be upgraded smoothly in the future.

  • by tsm_sf ( 545316 ) on Sunday April 05, 2009 @11:48PM (#27471685) Journal
    Assuming that you have to rewrite his software, make it all web based (even if it runs off of one machine as a server without the Internet) and forget about it. Keep it as basic and generic as possible and then the hardware will never matter.

    ...and then pick a host or service that will be around in a decade or two.
  • by Sj0 ( 472011 ) on Monday April 06, 2009 @12:05AM (#27471817) Journal

    I'd add an online UPS as a component to help prevent premature power supply failure. It rectifies the incoming power at all times and generates a new, clean sine wave. That gets rid of transients and makes your power supply far more reliable once you're past infant mortality.

    My full solution would be a fanless rig, with RAID 1 for full redundancy of disks so if a hard disk fails, it doesn't take your data with it, and weekly backups to DAT tape stored off-site. Then I'd use a pair of power supplies, using a diode to prevent power from one from getting into the other, and a zener diode or 78 series linear regulators to ensure a failing supply can't overpower any one line. Then, from my little power circuit, the two power supplies would feed the one motherboard, which would be underclocked at reduced voltage. It would have the highest possible amount of RAM in it, because that would reduce the writes to the hard drives.

    That should be reasonably reliable.

  • by fedorowp ( 894507 ) <fedorowp@yaho[ ]om ['o.c' in gap]> on Monday April 06, 2009 @03:13AM (#27472797)

    I have experience building workstations and servers that last. Nearly all of the ones I've built for customers are still functional more than 10 years after first install.

    Experience counts so I suggest you use a system builder with a similar track-record.

    The more powerful the system, the more challenges in building it to last. Many of the items on the check-list below need to be balanced against the needs of the customer, including noise, environmental conditions, performance aspects, and frequently budget.

    Check-list for Building a Computer that Lasts

    • Minimize expansion hardware. Expansion slot connectors sometimes oxidize so the less plug-in hardware the better. This includes on-board video, serial-ports if needed, etc.
    • Use a high-end board from a quality manufacturer. High-end boards tend to have powerful CPU voltage regulators and are designed to support lots of memory, which reduces memory controller issues as the board ages. They also tend to be the boards preferred by early adopters, which manufacturers are probably more thorough in validating. My current preference is for Asus as they have the highest-end consumer boards which support ECC for AMD CPUs. Make sure not to overtighten the mounting screws.
    • One or two identical memory modules. When memory modules are mismatched, or there are more than two unbuffered modules, you're more likely to run into trouble as the memory controller ages. Use memory approved by the motherboard manufacturer. ECC is recommended.
    • A great power supply. An oversized PC Power & Cooling power-supply is the best choice for environments that can handle a fan and noise isn't an issue. That said, quiet is very important in many situations, and PC Power & Cooling's Silencer models certainly aren't silent under load. For those situations I use an oversized Zalman heat-pipe cooled power supply I install a Noctua fan into. With that setup you don't hear a sound from the cooling fan and the power supply runs extremely cool.
    • Hard drive redundancy. RAID-1 or RAID-10 is the only way to go for normal systems. A quality true hardware RAID controller for Windows, and software RAID for Linux. A hot spare is recommended. When using software RAID, if you need to be sure the machine will boot with a HD failure, use hardware RAID for the boot volume. A rather neat low-cost way I'm doing that for the next Linux server I'm building is using an Addonics dual CF adapter that has hardware RAID in it.
    • Plenty of cooling with quality fans. No sleeve-bearing fans, and if the speed of any fans is reduced to control noise, make sure they can start from every rotational position.
    • Use quality HDs and install them correctly. For the past several years Western Digital's high-end hard drives have had a perfect track record for me. The most important thing to remember when installing a HD is absolutely, positively, don't over-tighten the mounting screws. Plenty of clean power, good cooling, and eliminating any vibration being transferred to them is important. Mount them as low in the case as possible to help keep them cool, and leave space between drives. If you use Seagate drives, server class is a must. In the last server I built, I did a RAID-1 between an Intel X25-E SSD and mechanical HDs so all the eggs aren't in one brand/type of basket.
    • Good power protection. I've never had a computer that was plugged into a metal-case Tripp Lite surge protector damaged by lightning. Also protect the cable, DSL, and modem connections, and any non-fiber runs that go outside the building. Make sure you protect all network equipment too. Plug an APC Smart-UPS into the Tripp Lite and you have total protection. No other brand or model of UPS has held up as well in the long term. Dedicated circuits are the icing on the cake, but with the Tripp Lite + APC Smart-UPS combination, as long as the outlet is wired correctly, no matter how bad the power is the computer has always worked fine for me.
    • P
  • Don't Buy Spares! (Score:3, Informative)

    by supernova_hq ( 1014429 ) on Monday April 06, 2009 @03:49AM (#27472969)
    If I have learned anything about computer maintenance it is the following:
    • Don't buy spare parts! (except ram, it increases in value...)
    • Moving parts == bad (but SSDs don't last very long yet)
    • Don't rely on windows for long-term projects!

    RAM is the ONLY thing that appreciates over time. Don't buy spare parts for anything else, unless it's something that will become obsolete - but that's a dumb thing to build a long-term machine around anyway...

    If you can get him off Windows (or any closed-source software, for that matter), DO IT! You will always have the source code to linux+gnome+firefox+apache+mysql, but Windows will probably not be available for more than 10 years.

  • by doodleboy ( 263186 ) on Monday April 06, 2009 @07:40AM (#27474127)

    My full solution would be a fanless rig, with RAID 1 for full redundancy of disks so if a hard disk fails, it doesn't take your data with it, and weekly backups to DAT tape stored off-site. Then I'd use a pair of power supplies, using a diode to prevent power from one from getting into the other, and a zener diode or 78 series linear regulators to ensure a failing supply can't overpower any one line. Then, from my little power circuit, the two power supplies would feed the one motherboard, which would be underclocked at reduced voltage. It would have the highest possible amount of RAM in it, because that would reduce the writes to the hard drives.

    On the software side, I would consider hosting the DOS app on linux using an emulator such as dosemu or dosbox. The OP's dad would have an environment very similar to what he's using now. I would probably use Debian stable for both boxes, which has very long release cycles and is very stable.

    With linux comes the option to replace the DAT tapes with an off-site rsync over ssh. If the main box dies, you'd be able to just swap in the backup box in a couple of minutes. If the data set isn't very large the mirror will complete in a couple of seconds. It's very easy to do:

    Create a RSA public/private key pair: ssh-keygen -t rsa, press enter at the password prompts.

    Copy the public key to the remote box: ssh-copy-id -i ~/.ssh/id_rsa.pub remotebox.

    Have a nightly cron job to push the files: rsync -ave ssh --delete /localfiles/ remotebox:/localfiles.

    For bonus points you could even throw in snapshots [mikerubel.org].

    I'm backing up hundreds of partitions this way at work, each with snapshots going back a month. Tapes are slow, unreliable and expensive. I would not use them for any purpose.

  • by Sj0 ( 472011 ) on Monday April 06, 2009 @11:37AM (#27476675) Journal

    I'm the head of the Reliability Centred Maintenance program at the industrial plant I work at. In RCM, we look at the dominant failure modes, and prescribe a maintenance program to mitigate the risk, or reduce the frequency.

    In this case, "I want this computer to last for 15 years" implicitly means they don't want to do scheduled maintenance. They want it to sit there and run, like the previous machine. They don't want a PC in the way you or I think of a PC; they want an appliance that just works. That being the case, we need to look at reliability-centred design, rather than maintenance.

    So what are the dominant failure modes for a PC? Clogged fans, failed power supply, hard disk failure. If you don't experience these failures, odds are your computer will run indefinitely.

    The first problem can be solved with a machine that doesn't have any fans. Design your machine so convection currents carry the heat out the top of the case. This will mean you'll never have a fan failure.

    The second problem can be solved with two methods: First, redundant(fanless) power supplies. Second, an online UPS to prevent dirty power from damaging the machine. I might actually just use an industrial deep cycle 12V battery with a pair of inverters, and a 12V smart battery charger on the AC side. It's dirty, but it's functional. Your charger should last 15 years, your battery should last 20, your inverters should last indefinitely and are redundant. With these two solutions in place, I wouldn't expect a total system failure for 25 years. If the charger fails, you should have more than enough time running a 50W fanless PC and 50W lcd monitor to schedule replacement of the charger.

    That leaves the hard drive as the only remaining failure mode. Hard drives aren't going to last 15 years. I had a hard drive from 1989 that lived to see the new millennium, but it's dead today. Along the way, many of its contemporaries decided to die. The only solution is to mitigate the consequences of failure with redundancy, so the drive can be replaced. A CompactFlash drive might be a good option, but the standard itself is only 15 years old today, so it's difficult to say whether such a solution would work. With this solution, you would probably need to replace a drive every 7 years, but it could be done during a scheduled outage, outside of office hours.
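
    To put a rough number on that: with a constant annual failure rate (a common simplification), the chance of one drive making it through 15 years without replacement looks something like this. The AFR figures below are only illustrative assumptions, not measurements:

        # Chance of a single drive surviving 15 years under a constant annual
        # failure rate (the AFR values are illustrative assumptions only).
        years = 15
        for afr in (0.02, 0.05):                       # assume 2% and 5% annual failure rates
            survival = (1 - afr) ** years
            print(f"AFR {afr:.0%}: ~{survival:.0%} chance one drive survives {years} years")

    Either way, the sensible plan is redundancy plus scheduled replacement, not hoping a single drive beats the odds.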

    If you're serious about reliability, leaving it to luck is a good way to be negatively surprised. I've worked with too many failed PCs in the past few weeks to believe you can just build it and forget it.

  • by Anonymous Coward on Tuesday April 07, 2009 @06:56AM (#27486979)

    I fix PCs in various locations and the dirtiest I have -ever- seen was one in a vet's. Absolutely full of dust and fluff - no wonder it had slowed to a crawl. A major requirement is an air filter - one of the cheapo air conditioning filters is all that's required; just vacuum it off every so often. A passively cooled system works best with a chimney to promote convection currents.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...