Can My Desktop Make It in the Big Leagues?

bionic-john wonders: "I work in an environment where the dollar is more than almighty (who doesn't?). One of my cost-saving plans is to use desktop computers as servers. They cost much less, the parts are readily available and/or interchangeable - as opposed to waiting for overnight proprietary or obscure parts from a vendor - and so on. I understand that servers have redundancy on disk and power, but this can be emulated for a fraction of the cost as well. Is there a performance difference between a desktop and a server with the same specs? Chipsets are chipsets, motherboards are motherboards, and memory is memory -- is there something special about a server other than looking at the rack of blades and feeling special?"
  • by mind21_98 ( 18647 ) on Saturday October 09, 2004 @09:59PM (#10483084) Homepage Journal
    If you don't have much space to spare, I would go with rackmount servers anyway. Some also provide remote administration capability separate from the OS, meaning you can reboot it and such.
    • by innosent ( 618233 ) <jmdorityNO@SPAMgmail.com> on Sunday October 10, 2004 @12:56AM (#10483819)
      Besides that, if you don't have a specific vendor that you are required to order from, you can often find rackmount "server" machines for a fraction of the cost of an IBM, HP, or Dell. We use several 1U and 3U servers where I work that we purchased from 8anet [8anet.com]. Aside from the cases (check out the Chenbro ones, very nice hot-swap features) and power supplies being more expensive, the motherboards having better management features (go with Supermicro; they have very nice network monitoring utilities for things like fan speed, power, and temps, as well as expansion for hardware-based monitoring), and the fact that you will probably want registered DRAM, there is no real difference between a server and a common tower workstation. All of those features which add to the price (hot-swap drives, redundant power supplies, high-end motherboards, and registered memory) are features that are really, REALLY worth it when you are talking about machines that must be available when you need them.

      I believe we paid around $4500 for our 3U P4 2.8GHz 2GB RAM 2.4TB SATA RAID-5 NAS machine with N+1 redundant power supplies, about the same for our 3U Dual Xeon 2.8GHz 4GB RAM 52GB (6 15K rpm 18GB drives total) SCSI U320 RAID-10 database machines with N+1 redundant power supplies, and our 1U P4 2.8GHz 2GB RAM 80GB SATA RAID-1 web servers each run around $1400 (no redundant power supplies). Point is, there ARE other options; you don't have to use low-end hardware just because you can't afford IBM. Besides, why pay for servers from IBM, HP, or Dell, when you can buy two of the same-caliber machine for the same amount of money or less? With two machines, you can do things like load balancing, increasing performance and adding redundancy at the same time.
      • Oh, yeah, and if you're looking to save money, blades are DEFINITELY not what you want. Blades are not meant to be cheap, they're meant to save space, for when space is worth more than money. Even in that case, though, I would look into the half-depth 1U rackmounts before going to blades, unless you're talking about maintaining a very large number of machines, since they are much cheaper (case and power supply $389 from 8anet, can fit 2 in each unit).
  • Just be careful to check and clean the power supply now and then, and you should be fine.

    In my experience, "servers" are merely designed with rack-mountable boxes as opposed to floor-sitting boxes.

    Bob-
    • What's a server? (Score:5, Informative)

      by fm6 ( 162816 ) on Saturday October 09, 2004 @10:37PM (#10483275) Homepage Journal
      There are floor-sitting servers too. I own a PowerEdge 1400SC [sentim.com.tr]. Of course, I sort of defeat my own argument by using it as a workstation.

      What makes this guy a server? I'm no expert, but here's what I see:

      • Lots of RAM. Came pre-configured with 1GB, and could handle many times that.
      • There's only two 32-bit PCI slots, but four 64-bit slots. Handy if you want to add RAID or Fibre support, a nuisance if you want the more ordinary kind of add-in.
      • No built-in sound card.
      • No AGP interface. Instead, there's a basic 4MB video interface on the motherboard.
      • Massive fans.
      Anyway, bionic-john is correct in thinking that a workstation will do as a server, provided only that you don't demand more of it than it's designed to do. (Which is always a question anyway.) I work for a hosting/colocation provider, and I see all kinds of stuff pressed into service as servers: cheap white boxes, Sun and Apple workstations, even an X-Box or two. Ultimately, all computers are interchangeable. Specialized computers are just a matter of convenience and cost-effectiveness.
      • "I work for a hosting/colocation provider, and I see all kinds of stuff pressed into service as servers: cheap white boxes, Sun and Apple workstations, even an X-Box or two."

        So how cheap is this CoLo anyway? Most of the CoLos I have looked at are priced high enough that using an X-Box would just be silly.
        I'd love to find CoLo shelf space priced for someone like me who wants to put, say, 100GB online but expects fairly low traffic.

        (in the NYC area would be nice as well)

        • So how cheap is this CoLo anyway? Most of the CoLos I have looked at are priced high enough that using an X-Box would just be silly. I'd love to find CoLo shelf space priced for someone like me who wants to put, say, 100GB online but expects fairly low traffic.

          Why not just use DSL/cable? That'd be cheaper than a colo anyway. And keep in mind that installing Linux on an XBox is also pretty silly, but that doesn't stop anyone ...
          • Because most of the time cable/dsl has absolutely shitty upload speeds. And it usually violates the TOS.

            FYI check coloco.com or go with a cheapy dedicated company (valueweb.com)
            • Well, he specified low load ... and so I'm assuming he wants to archive something for his personal use (insert pr0n joke here). I use my Comcast connection for that; from what I've heard, both from friends who also do that and (unofficially, obviously) from Comcast tech support, they have a "don't ask, don't tell, don't abuse" unofficial policy. As long as you don't use a lot of bandwidth, and don't shove it in front of their noses, they'll leave you alone.
    • by flonker ( 526111 ) on Sunday October 10, 2004 @12:32AM (#10483731)

      Buy quality parts, and everything should be OK. Don't expect a $300 emachine to last out the year.

      A few tips:
      • RAID 0 (mirror) your hard drives. You will have hard drives fail.
      • Buy decent drives. I've had bad experiences with Quantum and Maxtor, and I've been happy with Seagate and Western Digital, but YMMV.
      • Go overkill on cooling. Fans fail more often than hard drives, and a dead fan will heat up the case, and can take out the hard drives.
      • That said, watch the fan on your power supply. They go out frequently.
      • Check the fans on your CPU too. They also go out frequently.
      • Be sure to buy a good NIC. A bad NIC might cause strange problems that are nearly impossible to diagnose.
      • Buy a cheap video card. You won't be playing Doom 3 on the server.
      • Backup to a USB hard drive.
      • If you don't need a UPS, make sure you at least have a surge suppressor. On phone lines too, if you use them.
      • Servers have more RAM than desktop systems for a reason. Without knowing specifics, it's difficult to tell if you need more RAM, but bear that in mind. Web servers might cache .asp files. File servers don't need much RAM. Mail servers with antispam/antivirus stuff use quite a bit of RAM and CPU. Database servers cache everything and are CPU hungry.
      • Dual CPUs are a godsend. Sometimes an application will peg the CPU. This often makes the server appear to be hung. If you have two CPUs, only one CPU locks up, and usually the process eventually finishes, and you won't even notice.
      • Rackmounts exist for a reason. They save a lot of space. Rackmount cases are a little more expensive, but they can be worth the money. YMMV.

      That said, dual CPUs and rackmount cases are a luxury, and if cost is that important, you can skip them. And make sure there is a process in place to check on the health of the server. Even waving your hand behind the box once a week to check how hot the PSU exhaust is can save the business a lot of headache. (Hint: if no air is blowing, replace the PSU, and check the HDDs to make sure they're both still working.)

      Also, be wary of Dell. They use non-standard power supplies, so if your PSU goes out, you can't hop down to the local computer store and buy a replacement.
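
      Following up on the "check on the health of the server" advice above, here is a minimal sketch of the kind of weekly check that could be run from cron, assuming a Linux box with smartmontools installed; the drive list below is only an example and would need adjusting.

        #!/usr/bin/env python
        # Minimal weekly health check: ask smartctl (smartmontools) for each
        # drive's overall SMART health and flag anything that isn't PASSED.
        # The drive list is an example; adjust it for your own box.
        import subprocess

        DRIVES = ["/dev/sda", "/dev/sdb"]

        for drive in DRIVES:
            result = subprocess.run(["smartctl", "-H", drive],
                                    capture_output=True, text=True)
            if "PASSED" in result.stdout:
                print("%s: OK" % drive)
            else:
                print("%s: CHECK ME" % drive)
                print(result.stdout)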

      • by Anonymous Coward on Sunday October 10, 2004 @12:36AM (#10483746)
        RAID 0 is striping. RAID 1 is mirroring.
      • Most of the time a $300 crapper will last at least a couple of years, too.

        Just treat 'em like they could break down any day and you're set.

      • by tverbeek ( 457094 ) on Sunday October 10, 2004 @10:06AM (#10485679) Homepage
        Backup to a USB hard drive.

        I assume USB is to make it removable, but for that to do any good, you need to actually remove it, which means having at least one other USB drive to swap in when the one is off-site. If the budget doesn't allow for that, and you're just going to leave the backup there on top of the server all the time, then save yourself some money and mount an IDE drive in the case, and take advantage of the better speed to get daily backups done more effectively. Alternatively, do on-site daily backups across the network to an old machine otherwise destined for recycling but with a new large hard drive; that'll give you better disaster recovery ability if the main server dies and takes its drives with it.

        If you don't need a UPS, make sure you at least have a surge supressor.

        Please ignore that comment. You do need a UPS. Skimp on the specs and buy whatever's on sale with rebates at Best Buy this week if you must, but any machine you're going to call a "server" needs at least a few minutes of battery power to protect its data from sudden power outages and its electronics from power slumps.
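
        A bare-bones sketch of the backup-across-the-network option mentioned above, assuming rsync over ssh to a spare machine; the host name and paths below are purely illustrative placeholders.

          #!/usr/bin/env python
          # Nightly push of a few directories to a spare box over ssh, run from
          # cron. "oldbox" and the paths are placeholders, not real names.
          import subprocess

          SOURCES = ["/etc", "/home", "/var/www"]
          DEST = "backup@oldbox:/backups/server1/"

          for src in SOURCES:
              subprocess.run(["rsync", "-a", "--delete", "-e", "ssh", src, DEST],
                             check=True)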

        • Just don't think you're getting power filtering in a UPS; unless it's an "online" model and costs thousands of dollars, you're NOT.

          I bought an APC RS1500 ($400 CDN, 1500VA), thinking it'd do power filtering. Well, it does, except that it doesn't do any filtering within a roughly 35V window, if I recall correctly.

          According to APC, on 120V power, it has to go above 138V before it tries to filter by cutting voltage by 12% (not dynamic, it just cuts whatever it gets by 12%). If it drops below 98V, it just boosts it
          • You can get a true online UPS in the $500 USD range, but typical equipment isn't going to be so sensitive as to require that. A line-interactive UPS is fine for a machine if you didn't put in a cheap power supply.

            (Though the range on the APC is pretty damn wide. If memory serves, Tripp Lite, Best Power, and a couple of other manufacturers are much less permissive.)
            • Problem is, they don't advertise that info (the range) beforehand; I only got it via their tech support.

              If the info had been published beforehand, I might certainly have purchased a different UPS. I thought that by buying APC I was getting the best; it might be the best quality, but it's certainly not the best performance. So it looks like my PSU is going to be doing most of the filtering... I've got dirty power here that has killed a whole bevy of PSUs, including the Antec TruePower Gold 330w that I had be
        • The USB is there mainly to make it removable. However, it serves the secondary purpose of protecting the HDD in case the PSU goes mad and zaps everything, or your fans fail on a Friday and your internal HDDs get cooked over the weekend. When a HDD heats up, the platters expand, and the heads can lose their alignment information. Having the HDD outside the case protects it, oddly enough.

          As far as offsite backup is concerned, ideally have three USB disks, so that you always have one copy offsite. And ha
      • by Fweeky ( 41046 ) on Sunday October 10, 2004 @11:45AM (#10486120) Homepage
        "RAID 0 (mirror) your hard drives. You will have hard drives fail."

        RAID-1, you mean; RAID-0 is striping (hence 0 redundancy). And yes, anything even vaguely important should be on a RAID array in addition to backups. RAID doesn't help much when your controller freaks out or you hit a fs or user error.
        "Buy decent drives."

        Unless you're willing to trade off warranty, latency and quality against sequential transfer rate and storage, this means go SCSI.
        "Go overkill on cooling."

        Buy decent fans (twin ball bearing or so?) and monitor them. If noise isn't a concern, this might be a good application for Delta's more extreme fans :)
        "Check the fans on your CPU too. They also go out frequently."

        On a 1U rackmount, your case fans will most likely be your CPU fans too. Pair of Opterons? Fit passive heatsinks and a bunch of 15kRPM case fans, should be sorted.
        "Backup to a USB hard drive."

        Do they make those in 64GB versions now? No? I'll just use another RAID array then, thanks.
        "File servers don't need much RAM."

        Depends what your files are and how you're accessing them; do you want to have to hit disk for every access? With a lot of clients (which is kind of the point with a file server), a lot of memory is practically a requirement.
        "Dual CPUs are a godsend. Sometimes an application will peg the CPU. This often makes the server appear to be hung."

        A good kernel should avoid this, and HTT can help, but when you can get a well kitted-out 1U dual 1.4GHz PIII for under £500, why not?
        "Also, be wary of Dell. They use non-standard power supplies, so if your PSU goes out, you can't hop down to the local computer store and buy a replacement."

        My local computer store doesn't sell 1U PSU's. Dell do however support redundant ones; I'll take that over downtime while I replace a single one, however cheap/available.
        • by duck_prime ( 585628 ) on Monday October 11, 2004 @01:41PM (#10495003)
          Fweeky writes (emphasis added):
          Buy decent fans (twin ball bearing or so?)
          and monitor them. If noise isn't a concern, this might be a good application for Delta's more extreme fans :)
          This is perhaps the most important piece of advice I've seen yet. We use (pretty) cheap Dell servers, which have the lovely characteristic that the CPU, disk, fan (!), power supply, etc. ad nauseam all give back status via SNMP query. This can be routed into free/cheap monitoring software (think Nagios), so you don't have to physically check the machines all the time. You'll get a nice email telling you that box 13 is getting hot and needs help. This sort of thing is especially important if you have row upon row of el-cheapo servers load-balanced; if you don't have good monitoring, servers will quietly fail and all you'll see is incremental degradation of service performance. This is good from a five-nines point of view, but you need that extra nudge to actually fix the problem.

          I can't speak to other brands of machine, because we only have Dells, but insist on proper monitorability.
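
          For anyone without Nagios handy, a toy sketch of the same idea, shelling out to net-snmp's snmpget; the host, OID, and threshold below are placeholders (look up the real temperature OID for your own hardware), so treat this as an illustration rather than a working monitor.

            #!/usr/bin/env python
            # Toy poller in the spirit of the SNMP-to-Nagios setup described above.
            # Shells out to net-snmp's snmpget. The host, OID, and limit below are
            # placeholders only; find the real temperature OID for your boxes.
            import subprocess

            HOSTS = ["box13.example.com"]
            TEMP_OID = "1.3.6.1.4.1.99999.1.1"   # placeholder OID, not a real one
            LIMIT_C = 45

            for host in HOSTS:
                out = subprocess.run(
                    ["snmpget", "-v2c", "-c", "public", "-Ovq", host, TEMP_OID],
                    capture_output=True, text=True).stdout.strip()
                try:
                    if int(out) > LIMIT_C:
                        print("%s is getting hot: %s C" % (host, out))
                except ValueError:
                    print("%s: no usable answer from the SNMP agent" % host)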
  • by Rahga ( 13479 ) on Saturday October 09, 2004 @10:00PM (#10483094) Journal
    - Disks fail. When you stick a server in a rack and leave it running for 5 or 6 years (unlike your average /.'ers desktop which probably gets a shake-up far more often), you won't regret being able to hot-swap a failed drive on your RAID array with a spare.

    - Power supplies fail... To be honest, this isn't nearly as big a deal in the hot-swap arena as the hard drives. However, having 2 power supplies in a server machine means that things are significantly less bad when or if one of them happens to fail.

    - Vendor commitment. From those old Compaq Proliants to the new Dell Poweredge machines, they were built to be stuffed in a rack and left untouched (unless something fails... see above). They'll come with hardware that those vendors usually stake their reputation on or even had a hand in building. Even the management software isn't always bad....
    • Disks fail. When you stick a server in a rack and leave it running for 5 or 6 years (unlike your average /.'ers desktop which probably gets a shake-up far more often), you won't regret being able to hot-swap a failed drive on your RAID array with a spare.

      Right. With desktop hardware you won't be able to hot-swap, you'll have to suffer 60 seconds of downtime for a reboot.

      I had to do this the other day. Here's the process:

      1. Since the machine had a RAID config with a hot spare in place, as soon as the
  • Yes and No. (Score:4, Insightful)

    by Limburgher ( 523006 ) on Saturday October 09, 2004 @10:01PM (#10483097) Homepage Journal
    While there is something to be said for the "Server-Grade" hardware, and rack mountability at that, there is no good reason why intelligently chosen and configured "Desktop" hardware can't perform as well. The key is to recognize the limitations of various components, such as being aware of SCSI vs. IDE specs, and the fact that standard PCI slots prevent total saturation of a gigabit NIC. If you choose your parts wisely, and with your goals in mind, you can save quite a bit of money without sacrificing performance or reliability, while maintaining vendor neutrality.
    • Re:Yes and No. (Score:4, Insightful)

      by cgenman ( 325138 ) on Saturday October 09, 2004 @10:29PM (#10483237) Homepage
      Agreed. I think the best thing is to recognize that cheap consumer-grade equipment is garbage, and that generally the more expensive consumer-grade equipment is less error-prone. Of course this is on a component-by-component basis. A 50 dollar NIC from 3com is solidly built, but any hard drive you buy has to be expected to fail... flagrantly... at exactly the wrong time. It's best to invest in an expensive motherboard and an expensive NIC. PSUs vary quite wildly with no respect for cost, so do some research on Tom's Hardware or one of the other hardware research sites out there before you buy anything. Some of the best PSUs out there are only 30 bucks... Of course, if a motherboard supports two PSUs, go for it. I've never had RAM fail in any capacity, so I can't say spending there will net great rewards, but certainly take care with your cooling system, from buying the best processor cooler, to cooling it with a good hydrodynamic fan with failure alarm, and always, always adding more air circulation than necessary.

      There really are consumer parts aimed at the PC server environment, but nothing should be considered drop-in. Do some research on individual components, and good luck!

      (P.S. all of our servers are basically desktops)

      • Re:Yes and No. (Score:2, Interesting)

        by Urgoll ( 364 )
        I've never had RAM fail in any capacity, so I can't say spending there will net great rewards

        I have, and having the system say "memory error corrected" is so much better than random lockups and faulty operation. Get ECC memory if you value reliability and correctness.

        The problem I've found is that while it's possible to setup a workstation as a server and get good performance and reliability, it's so much work to research and build that it's often more cost-effective to just buy server grade hardware. Whe

        • Wouldn't you put nearly the same effort into researching your server hardware as well?

          unless you are comparing building a server out of workstation parts VS buying a server from dell. that would hardly be a fair comparison.
        • When you start counting your own cost as part of the system, a server no longer seems expensive.

          Except that management will often count the tech guy's time as something that's already paid for. Even if they recognise that time spent doing A means time can't be spent doing B, their quarterly payroll budget will come out the same, so the cost of the tech guy's time on this project is perceived as Zero. (And if it means repurposing desktop gear the company already owns instead of buying new server gear,

  • Redundant parts (Score:2, Insightful)

    by Fubar420 ( 701126 )
    It's only relevant if uptime is key, but with desktops, you generally won't have:
    Redundant power supplies
    Redundant disks
    Hardware RAID (other than 0/1)

    If that's not important to you, then by all means go for it
  • You're a DUMBASS! (Score:2, Insightful)

    by Anonymous Coward
    In the long run, it comes down to standardization and serviceability. If you've got maybe 2-4 servers, go for it. Otherwise, you're in for TONS of headaches. Desktop lines are changed CONSTANTLY, and you'll find yourself always trying to get a part for something that is discontinued.

    Still, you can do it. But I stand by the statement you're a dumbass.
    • Mod the parent up. If you have a dozen machines working as servers, you're just going to create major headaches for yourself when you discover four years from now that you have six different motherboards, four types of memory, eight video cards, and three different hard drive busses.

      Every time something breaks, you'll spend an hour trying to figure out the specs on the broken part in the broken machine. Every time you upgrade, you'll spend a whole day just trying to figure out what parts you have in all
      • I'm guessing that if you bought a VA Linux box, you're going to have a hard time calling someone up, reading them your serial number, and getting a duplicate RAID controller FED EX'd to you.

        Well... From experience with a couple of VA's boxes, they were built from off-the-shelf components. Adaptec controllers and whatnot... So a quick call to Adaptec, and boom! You've got yourself a new duplicate.

        And didn't a company go into supporting VA's servers?

        • So a quick call to Adaptec, and boom! you've got yourself a new duplicate.

          Adaptec probably isn't going to have an exact duplicate of some controller model XK-888 they made four years ago lying around. Dell and Sun are going to have an exact duplicate of the Adaptec controller they put in server model SM-444.

          That's (part of) the reason Dell and Sun have a markup.
          • On second thought, I take back what I said about server support from Dell [gripe2ed.com].

            I'll repeat what I said, though. People who buy servers from Digital, Compaq, HP, IBM, Dell, and Sun expect that they will be able to get exact duplicate replacement parts two, three, four years later (or even longer). When they can't get them, they have a legitimate gripe.

            People who buy off-the-shelf components do not have the same expectation.
    • Re:You're a DUMBASS! (Score:3, Interesting)

      by magefile ( 776388 )
      I work for a small company that runs low-traffic sites for maybe 25 clients (employees can manage their healthcare from these sites) that are all on desktop G5s & eMacs. We have 2 eMacs for DNS, one for mail, 2 G5s as database servers, 2 as HTML servers (that the databases route through) and one as an internal fileserver (through which everything is backed up to one of 6 Lacie drives that are moved offsite weekly).

      It's worked fine for us, and we have only had the servers go down twice in eight years -
  • mmm... server (Score:2, Interesting)

    by serialhex ( 780586 )
    redundancy... speed... noise... that coolness factor you get when you say 'yeah, i'm running quad 64-bit opterons with 4 gigs of ram each. yeah that's right, this bitch has 16 gigs of ram, what you got?' and umm... well that's about it. if your needs don't require you to have dual true gigabit nics, dual or quad processors, a scsi raid array and a space heater/pink noise generator, then get yourself some decent computers with the basics. servers are usually built with better parts (i don't know for sure bu
  • YES (Score:4, Informative)

    by Will Sargent ( 2751 ) on Saturday October 09, 2004 @10:16PM (#10483173) Homepage
    Yes, there are real differences between server equipment and desktop equipment. Most desktop components are built to be fast, cheap, and unreliable. They can and will flake if left on for long enough and subjected to server-grade levels of abuse.

    More details here [tersesystems.com].

  • by Tamerlan ( 817217 ) on Saturday October 09, 2004 @10:22PM (#10483208) Homepage
    Bits of experience from my days administering a heterogeneous network of desktops-as-servers in an ISV shop (disclaimer: I am a professional software developer; I did administration because I was the most knowledgeable OS geek). Several reasons why you don't want desktops to be servers:
    • Power supplies. Believe me, PSUs DO fail. And the more hosts you have, the higher the probability of failure, even if you keep a stock of PSUs in the closet. It still takes you about 20 minutes to get a desktop/server up and running again, and a night failure is far worse.
    • Rack mounting is not a vendor trick to charge you more money. If you have more than a trivial infrastructure, wiring on desktops and "floor-tops" is going to be your favourite nightmare.
    • SCSI and SCSI RAIDs are just a waste of money on a desktop, but they are a must-have for intense, parallel access by many users to their homes, mailboxes, whatever, on a server.
    • Not last and not least: having someone working on a server is probably the most stupid idea in the whole of IT. Whatever OS you use, believe me, users will find a way to devour 98% of CPU time and 99% of memory. That leaves for server applications.. well .. do the math :)
    There are many other things; I just came up with whatever came into my mind right now.
    • SCSI and SCSI raids are just a waste of money on a desktop...
      I'm not an expert, but I've worked with developers who've insisted on having SCSI instead of IDE. The theory is that building a really large product requires a lot of sustained disk I/O -- which is what SCSI is good at.
    • If you don't need so much disk space that you need drives in an external enclosure, there is basically no reason to use SCSI. IDE RAID is much cheaper and just as fast, if you get a good enough controller. These days the primary benefit of SCSI is the ability to use external devices, or dual-attach RAIDs, which cannot be accomplished using IDE.

      I eagerly await the day when it becomes practical to use clusters to implement this stuff. If you had a clustering implementation of samba and/or nfsd, a

  • by Detritus ( 11846 ) on Saturday October 09, 2004 @10:25PM (#10483222) Homepage
    There is a major difference between servers and desktops in design philosophy and support. Servers are designed for stability and reliability, not for benchmarks or to demo the latest cutting edge technology. They are tested much more thoroughly and have better support from the manufacturer.

    I have an "obsolete" low-end server that I use for running FreeBSD. It has SMP, ECC RAM, SCSI disks, a boring but very reliable chipset, extensive documentation, diagnostics software, and a high-quality case and power supply. It is also tested and certified to run all of the popular server operating systems. The manufacturer support is excellent. The video card would suck for a modern desktop, but who cares. It never crashes, it just works. If it does break, I can get parts and service.

    • I have an "obsolete" low-end server that I use for running FreeBSD. It has SMP, ECC RAM, SCSI disks, a boring but very reliable chipset ...

      Exactly! Ebay is your friend here -- you can get an old ~1GHz ProLiant or IBM server for about $500-$600, which is probably cheaper than a "desktop" box. You may need to expand the box, but the memory and old SCSI drives are also dirt cheap. These boxes will be 100% rock-solid with Windows/Linux/BSD.

      Most server use (fileserving, SMTP/IMAP email, etc) does not requir
  • Really depends on how "BIG" you mean.

    Clustering? Then yes, PCs might be a way to go, but you trade manpower time for uptime per box.

    If it's a single-app box and you are not hot-swappable, you won't be able to make it to a maintenance window for repairs.

    Blades are good, but not the end all, normally you have dozens of blades around a big beefy database, and the database box and disc storage are the expensive beasts. Also licensing is a factor, support, lots of things.

    Not all setups are the same, you shou
  • speak of the devil (Score:2, Informative)

    by bionic-john ( 731679 )
    Well - it turns out that one of my white box servers crapped out on me moments after this article! I do not feel bad, nor do I feel like it should have been a 'server' quality machine. The machine was in fact a 1996 PII; it may have even been a Cyrix. $200 and a couple of hours later, I rolled out a new PIII-1000... the downside was working on a Saturday.

    The load that these machines take is not much more than what that PII could handle (in fact I think that box handled everything great other than its nightly da
    • You're not going to work at this job forever. The person who replaces you is going to curse at the dozen or so file/print servers running a hybrid linux distro. But, that's your manager's problem, not yours. If he or she has decided to save the money now and deal with the headaches later, then let it go.

      And no, you don't want to switch PSU's every year. The failure rate on most reasonably made parts probably goes down over time. Give me twenty brand new, untested power supplies, and I'll bet cold hard
  • I guess it all depends on your environment and the availability you want to provide. There are reasons that servers have redundant "everything". The primary reason, is for availability. If you work in an environment that can easily deal with a few hours of downtime at ANY given time, then I guess you may be able to pull this off.

    If you work at google.com, or some other high-availability company, please resign ASAP.

    Seriously - it depends on your budget. If your budget is _that_ shoestring, you should p
    • Google hardware (Score:4, Insightful)

      by r00t ( 33219 ) on Saturday October 09, 2004 @11:28PM (#10483521) Journal
      For years, Google was a giant pile of dirt-cheap no-name PCs. Each one had two IDE drives and a single Celeron CPU. Failure? Oh yeah, but it didn't matter at all. The software would just drop the broken box out of the cluster. Nobody would even bother to fix the PCs as they died! It was cheaper to just replace the whole cluster whenever too many of the boxes were dead.

      Now Google is large enough to get a good deal on custom-built rack-mount hardware. It's still IDE and cheapo consumer CPUs, of course. Assuming that your server needs are a bit less than Google's, this option won't be available cheap for you.
      • This works great for google because they have a stateless HTTP-based application.

        Joe LAN Admin is usually dealing with fileserver and database applications that use long-lasting connections and lots of server state. (Even many HTTP apps make heavy use of server-side sessions.) There simply aren't cheap fail-over solutions for these apps. So it makes a lot more sense to buy a box that can maintain the uptime by itself.
      • Nobody would even bother to fix the PCs as they died! It was cheaper to just replace the whole cluster whenever too many of the boxes were dead.

        IIRC, they still do this. And they leave the dead boxes where they are; they figure once most of the computers in a specific section of their server rooms have died, they'll pitch 'em all and reuse the space. Until then, it's cheaper to buy more commodity boxes and pop 'em into more office/server space.
    • Google (Score:3, Informative)

      by John Murray ( 149 )
      I've seen some of the hardware Google uses, and it's not fancy name-brand redundant-everything servers. In fact their setups might shock some IT traditionalists. They seem to use standard motherboards mounted on open shelves (no case), with a PSU and an IDE hard drive.

      From what I've read about Google, their philosophy is that it's better to have a number of redundant servers than one critical server.

    • If you're young and new to the industry, take this advice: the big-wigs won't be as impressed by saving $200 on a server as they would by a $400 MORE EXPENSIVE server staying alive for ONLY 1 HOUR longer.

      That may be true where you work, but there are plenty of places where a real, there-on-the-books savings this quarter outweighs a hypothetical, this-could-happen savings some day down the line. Especially if they don't consider computers to be mission-critical zero-downtime equipment (and in many organis

  • by the eric conspiracy ( 20178 ) on Saturday October 09, 2004 @11:02PM (#10483403)

    The correct answer to the question is: what is the value of downtime to you? Often a few hours of being offline dwarfs the savings possible from this approach.

    There is no question you will have more downtime with desktop hardware - it is just not engineered with 24/365 in mind. You can add in a few extra fans and make sure you don't have any proprietary parts like Dell and HP throw into their desktops, but in the long run you WILL have more downtime.

  • My amateur opinion (Score:4, Informative)

    by dtfinch ( 661405 ) * on Saturday October 09, 2004 @11:06PM (#10483426) Journal
    It's not officially a Good Idea, but is fine for some environments.

    Just take into account that server and desktop hardware are designed with different goals in mind. Server hardware is meant for 100% uptime, even in the case of most hardware failures, and for good scalability under high loads, while desktop hardware aims to give you the best bang for your buck, understanding that your data is typically much less valuable.

    I'm guessing you'll be using IDE drives.

    Some of the more expensive (usually SCSI) hard disks and controllers have a battery-backed cache that can ensure that your writes are preserved in the event of a power loss. The lack of this requires you to sacrifice a great deal of write performance if you wish to ensure integrity. The sacrifice is a bit less if the hard disk preserves write order, which ensures integrity to the extent that the filesystem is capable, though you'll still lose data. Combining a desktop UPS with a desktop server, set up to power down safely before the UPS runs out and come back up afterwards, is sometimes enough to let you sleep some nights.

    The MTBF (mean time between failures) ratings for hard drives intended for desktop and server use are calculated differently. For servers, a consistent high load is assumed. For desktops, a low load and lots of sleep time are assumed. So a 1-million-hour server HD might be equivalent to a 2-million-hour desktop HD, and most desktop HDs are rated at something like 300,000 hours.

    Also, MTBF is not an estimate of how long a hard disk will last, just the chances of a fairly new drive going out unexpectedly. Say they tested new hard disks for 500 hours to weed out the duds, then took 1000 of the survivors and tested them for another 1000 hours, and 4 went dead; they could claim an MTBF of 1000*1000/4 = 250,000 hours, AFAIK. But you can be sure most of them won't last that long; that's almost 30 years at full load. It's like saying that if 4 kids in 1000 die between ages 5 and 15, you can claim humans have a mean time between failures of 10*1000/4 = 2,500 years. The real estimated lifetime of a hard disk may be roughly proportional to how long the manufacturer is willing to warranty it for. Hard disks intended for server use tend to be warrantied for much longer.
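
    To make that arithmetic explicit, here is the same back-of-the-envelope calculation, using the made-up numbers from the paragraph above:

      # 1000 drives survive burn-in, run another 1000 hours, 4 die in that
      # window (the comment's made-up numbers, not real failure data).
      drives = 1000
      test_hours = 1000
      failures = 4

      mtbf_hours = drives * test_hours / failures
      print(mtbf_hours)                  # 250000.0 hours
      print(mtbf_hours / (24 * 365.0))   # ~28.5 years, clearly not a lifetime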

    If you use a desktop, max out the RAM to minimize disk use and schedule very regular incremental backups, as full backups will also greatly increase disk use. A desktop server will last the longest if it almost only touches the hard disk to perform necessary writes. And be aware that cheap desktops have a high lemon rate.

    If you buy a Dell PowerEdge 400sc, their cheapest line of servers, you're actually getting low-end desktop hardware in an easy-access case for about the same price as their similar desktops, plus integrated gigabit. So using a desktop as a server isn't too horrible, if it's not vital.

    A good RAID-5 file server with SCSI drives, plenty of ECC RAM, and a redundant power supply can live almost forever without maintenance. They've been accidentally sealed behind walls without anyone noticing until many years later.
    • Last time I had anything to do with hardware, MTBF for hard drives had an unwritten disclaimer : Mean Time Between Failures assuming you replace the hard drives with new (identical) hard drives every year (or at least on a regular schedule.)

      Granted it has been a while, and nobody actually does that, and I can't find any supporting documentation on the matter ... but I believe that MTBF carries that stipulation. The theory being that if you replace the drives before they hit the other side of the bathtub c
  • Reliability (Score:5, Insightful)

    by alienw ( 585907 ) <alienw.slashdotNO@SPAMgmail.com> on Saturday October 09, 2004 @11:17PM (#10483466)
    Reliability is the only difference between a desktop and a server system. If you can tolerate an outage every few weeks, go ahead and use desktops. If you need 100% uptime, get a real server, it will pay for itself many times over.

    What if a hard drive dies? In a server, you pull it out, pop in a new one, and the RAID array fixes itself. The users don't notice a thing. In a desktop machine, you have to turn it off, unplug everything, open the case, unscrew the screws, unplug the cables, remove the drive, put in the new drive, put everything back together, restore the array manually, and hope you didn't lose some data. And all while you do this, the server is down and nobody can do anything.

    Just keep one thing in mind. If you pay too much, nothing will happen. If you get a crappy system, you will get fired.
    • Re:Reliability (Score:3, Interesting)

      by Vellmont ( 569020 )

      If you can tolerate an outage every few weeks, go ahead and use desktops.

      That has to be one of the most flat-out wrong statements I've heard this month. I've had several desktops working as servers over the years, and for the most part they all work flawlessly. I had one machine start getting very flaky after 3 years of constant uptime, one where the hard drive failed (the HD was probably 6 years old), and one where a PS failed (probably about 6 years as well). With the exception of old age, the deskto
      • I've had several desktops working as servers over the years, and for the most part they all work flawlessly.

        "For the most part" is the key thing here. They'll work fine if your expectations aren't too high. If you can tolerate a dead hard drive/power supply/network card bringing down a server, desktops are fine.

        If the cost to the company of a 5-hour server outage is less than the cost of a real server, go ahead and use desktops. However, most places, even small businesses, lose thousands of dollars
    • One of Sun's mottos is Reliability, Availability, Serviceability. You've covered one there, and Availability generally follows on from that.

      As for serviceability, hot-plug on most components is what really sets server-class hardware apart from desktops. Even if you're not 24/7, being able to remove a failed hard disk at any time means you don't have to schedule an outage, which may introduce more errors (hard disks tend to fail more often when they're stopped/started). On a decent server, you can replace a

      • Most PC desktop motherboards over $100 come with PATA and/or SATA RAID onboard - my system has both in addition to a dual-channel UDMA133 controller onboard, for a total support capacity of 12 devices. Mind you, I wouldn't use the SATA RAID unless I was forced to, because it's a Silicon Image product; I'm using the PATA RAID with a two-drive stripe; it's an ITE8212. However, if I were serious about serving, I would only use onboard for a software-mirrored RAID of my OS, and for optical drives, and I'd be pu
  • by salesgeek ( 263995 ) on Saturday October 09, 2004 @11:43PM (#10483576) Homepage
    Most of the posts have been reliability yada yada...

    Here are the real differences:

    Chipsets are different - and focus on throughput.
    RAM accuracy (yes... there is a difference)
    Built in pre-failure diagnostics
    Redundancy
    Hot-swappable components

    When you look at pressing desktops into server use, analyze the cost of downtime. Let's say you have a sales team hooked to your server - 8 users. Server is down 1 hour. Sales are $8,000/day. You lose 1/8 of your sales for the day. You just lost $1K in revenue plus your time spent fixing. This happens 10 times... you can see where the desktop gets expensive.
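
    Spelling out that back-of-the-envelope math (the figures are the example above; an 8-hour selling day is assumed):

      # The example above: $8,000/day in sales, an 8-hour selling day, one
      # hour of downtime, repeated 10 times over the machine's life.
      daily_sales = 8000.0
      hours_per_day = 8
      outage_hours = 1
      outages = 10

      loss_per_outage = daily_sales * outage_hours / hours_per_day
      print(loss_per_outage)             # 1000.0, the $1K per incident
      print(loss_per_outage * outages)   # 10000.0, before counting repair time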

  • If you like cutting corners then I'm sure your software will die before your hardware will.

    Some ingenuity can help cover the gap. One idea I've had for powerful remote troubleshooting and repair: get video cards with TV out, run them into an RF modulator, and take the coax into a computer doing video serving. This will allow you to see what exactly the computer is doing when it's not responding.
    I don't know if conventional UPS serial control lets you power the computers off and on, but relay boards can be wired
  • Why? (Score:2, Informative)

    by huber ( 723453 )
    We just bought five new 2U Dell PowerEdge 2850s for $2K each! That included two 2.8GHz Intel Xeons, three 36GB Seagate SCSI 10K RPM drives (it can hold 6 total) with a 256MB RAID controller, dual power supplies, dual gigabit Ethernet, and no OS installed. That's the price you just paid for a decent workstation. It's a bad idea.
  • by jackb_guppy ( 204733 ) on Saturday October 09, 2004 @11:54PM (#10483616)
    There are differences, but most people don't really look.

    Most cheap desktop motherboards have built-in video using "shared memory" - this is actually taken from main memory and is a constant interruption to the CPU doing what it needs to do.

    There's the bandwidth of the PCI bus, and ACPI forcing all cards to use the same interrupt, adding to the overhead of the OS sorting out the conflicts and ordering. This can also lead to lockups or frozen I/O - I've seen it using a 100M NIC with a 100M disk controller.

    Multiple processors - and I am not talking about the CPUs! Server-level parts mostly have intelligent controllers (i.e. their own co-processors). This way the main CPU can get work done and not worry about reading a disk drive.

    Now: does every server have to be built to server standards? NO.

    An old desktop box makes a great firewall, print server or even departmental web server. The key here: if it goes down, how fast can it be replaced? With a firewall, do not build one. Build two; the second just needs to boot and be plugged in. Same for a print server or small localized web server.

    But if you are crunching data - a database server, for example - buy a real server. I like the IBM x440: it maxes out at 16 CPUs (built in sets of 4), with data buses 256 bits wide, not the 32 or 64 of most motherboards, and hot-swappable 64-bit PCI-X slots - it maxes out at something like 100 of them. Throw on VMware's ESX and make a pile of "little white boxes", all virtual.

    You also noted RAID cards for IDE. Be sure they are intelligent (with co-processors), or the CPU is doing all the work.

    In the end, to me the real difference between desktop / server-class / true servers is CPU loading: how much of the "housekeeping" the CPU must perform.

    On a desktop machine, the CPU does it all. It watches every byte that goes into and out of a disk drive or network card. It gives up time to let the video share its memory. This all takes away from the base function of running an app. At one point a few years ago, the average machine was using up to 40% of its processing just to keep the screen updated.

    Server-class machines have helper processors to offload the CPU. Adding these to a desktop box starts the transformation into a server - except it is still missing the hot-swappable everything that a true server needs.

    I have built machines with this in mind for years - my current home machine is a dual PPro 200, with high-end SCSI and high-end video (for the time, PCI bus); working a large database and using a database design tool, it outperforms the 3GHz P4 I have at the office, with its IDE and shared video. Parts do make a difference.

    True server machines are built differently, PERIOD. Look at the x440 from IBM, look at the top-end machines from Dell and HP/Compaq, and you will see the difference.

    Yes, they do sell servers that are really desktops in disguise. The Dell 400SC small server is the same case and motherboard as the Dell 800 desktop series. The differences: ECC memory, and a front cover that hides the 2 USB slots and sound ports on the front. Also, you can get this for less than a matching desktop configuration. I got one for my wife's desktop.

    Lastly clustering...

    Clustering, to me, is the same as RAID is to disk drives: lots of cheap servers sharing the load, acting as a single larger machine. So all of this may be for naught.

  • Rackspace is usually at a premium. Desktop servers don't stack well and each year they are made in different sizes. Sometimes half an inch more width can be a problem if you need to swap one.

    Reliability. PC computers and components just aren't made for a 24/7 vibration-ridden environment. Their MTBF is probably not considered a significant design factor, as people just reboot their machine if something goes wrong.

    Open the case of an IBM or Dell rackmount server and prepare to be impressed. The design

  • BYOB (Score:4, Interesting)

    by adolf ( 21054 ) <flodadolf@gmail.com> on Sunday October 10, 2004 @12:41AM (#10483768) Journal
    I don't know how the rest of the world does it, and I don't really care.

    The mail server where I work used to consist of a 733MHz Celeron, branded E-Machines. It was a disused desktop machine from Joe Random (Joe, of course, has a shiny new Dell on his desk to replace it). Complete with a $3 PCI RTL8139 NIC, it was the epitome of cheap.

    If any part failed, including the 175-Watt PSU, the machine would die completely.

    It'd been that way since I started with the company.

    I mentioned it to a higher-up, who happens to be a rather important salesman of moderate technical inclination, and whose sales depend primarily on reliable email.

    He insisted that I do something about it, and so I began doing so.

    I fought with the RAID adapter in a Proliant that we had spare before I realized why people generally loathe binary drivers under Linux. I looked for another way to connect the hard drives, but the box only had one(!) real IDE channel, and it was consumed by a pair of CD-ROM drives.

    I sat and fathomed that for a while: Big server box, stout steel construction, Serverworks chipset, ECC RAM, huge cooling, 64-bit PCI, one P4 Xeon and room for a second. Unsupportable hardware RAID. One bloody IDE channel. No SCSI. The sound of nonsensical madness was deafening.

    So I just built one. I had a few priorities, like redundant PSU cooling, Pentium 4 (I'm an AMD fanboy, but thermal throttling is your friend, even if the chip is vastly overpriced), redundant storage, good IO performance, and the ability to replace any (or every) part with something that can be sourced locally within an hour or so. Oh, and it has to be cheap.

    I also made a list of non-priorities: Don't need a lot of number-crunching ability, don't need redundant PSUs, don't care about multiple CPUs.

    "Who makes server mainboards," I asked myself. I answered myself with "Tyan."

    I've never read anything but good stuff about Tyan. So I got one of their P4 boards. Not a "server" board, but one of their lesser (single-CPU) models which were hopefully developed by the same engineers. Two channels of SATA RAID, four DIMM slots, very few other built-in goodies, except for two additional PATA ports.

    It supports dual-channel ECC RAM, so I picked up a couple of quarter-gig sticks of that. Could've gotten more, but remember, this is a -budget- server. (It seldom swaps, and when it does, the disks are fast enough to make it a non-issue.)

    Also picked up a couple of Western Digital 80GB SATA drives, because Moving Parts Are Important, MMkay?, and at the time they were the only ones still offering a 5-year warranty. This machine is supposed to live longer than that before it is outgrown.

    And for good measure, I included a Pioneer DVD-R for offline backups. I hate tapes.

    I tossed it all in the cheapest black case I could find (newegg, $24, shipped). I threw away the included PSU and replaced it with a big Antec Truepower.

    Killed the hardware RAID in favor of Linux's software RAID1. I have no intentions of ever marrying a computer's software to something as general and failure-prone as a modern motherboard - out-of-the-box RAID is a great way to fuck yourself at disaster-recovery time.

    It runs Gentoo, and it filters and tosses mail at something like twenty times the rate of the old E-Machines consumerbox (which had buried itself in backlogged mail a few times).

    We've got redundancy of cooling and storage, we've got a graceful fail-safe on the CPU fan, and we've got a disaster plan that includes being able to find parts from the mom-and-pop shop down the street, or mounting the SATA drives in that wretched Proliant with a PCI controller, or (at worst) setting up the Proliant's DVD-ROM and one of its 80gig drives as master/slave and restoring from DVD-R.

    I'm pleased with it. It was cheap. It went together slicker than greased shit. I don't think it's going to fail anytime soon, but if it does, at least I don't have to worry abou
    • It supports dual-channel ECC RAM, so I picked up a couple of quarter-gig sticks of that. Could've gotten more, but remember, this is a -budget- server.

      ECC is one of the most important things you can do.

      And for good measure, I included a Pioneer DVD-R for offline backups. I hate tapes.

      Daily rsync to another machine and a tape drive for monthly full-system backups. Considering the lifespan that CD-R's have shown, I don't expect that DVD-R's will ultimately be much better.

      Killed the hardware RAID in f
      • I rely on software RAID. It's not for monetary reasons, either - I know that real RAID controllers aren't expensive anymore. Software RAID just offers one fewer part to fail (does anyone here remember KISS?), and will not tie me down to using any specific type of hardware.

        I've not heard of any problems with software RAID that could not be attributed to user error. And, having made some of those errors myself, I think I'm now qualified to operate it. I've heard numerous horror stories about RAID cards g
        • I can remove one of the software-RAID1 SATA drives (hot, cold, whatever), plug it into any other SATA system, and it'll boot and run. I cannot do this with hardware RAID unless I have the same make and model controller on-hand.

          But you can, in most instances, run the drive without the controller if it fails. I've had this happen -- boot the rescue kernel, edit the fstab, fsck, and off you go....

          Then when the replacement controller comes in, you rebuild the array.


          I'm not at all hip to waiting two years
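
          If you do go the Linux software-RAID route discussed above, the array state shows up in /proc/mdstat, so a crude watchdog can be as small as the sketch below (run it from cron and mail yourself the output). It is only a rough heuristic, not a substitute for mdadm's own monitor mode.

            #!/usr/bin/env python
            # Crude degraded-array check for Linux md software RAID. /proc/mdstat
            # shows member state like [UU]; an underscore (e.g. [U_]) means a
            # missing or failed member. Rough heuristic only; mdadm --monitor
            # is the proper tool for ongoing alerting.
            with open("/proc/mdstat") as f:
                mdstat = f.read()

            if "_" in mdstat:
                print("WARNING: an md array looks degraded:")
                print(mdstat)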
  • The line is blurred. A high end PC can easily outclass a low end server.

    What is usually the difference is form factor, quality of hardware, cooling, and type of hardware.

    A server room is usually cramped, so the smaller the case the better. Or at least a case that doesn't need open areas all around it. Those 19-inch racks ain't just there to look cool. It is just more efficient than stacking PC towers.

    Not all motherboards/HDs/fans/etc. are equal. Almost all can run 24/7 if you're lucky, but being under full load

  • by Malor ( 3658 ) * on Sunday October 10, 2004 @01:33AM (#10484033) Journal
    This is how most IT departments start, and it's a normal process of evolution.

    In the beginning, there isn't much money available, so most places cobble together 'servers' from spare desktop components, and throw them up in a closet somewhere. That generally works okay, and the company realizes that they like having servers, so over time, the installation grows.

    As it gets bigger, the lower reliability of desktop components will start to become apparent; servers will go down, hard drives will fail. It's just statistics; given enough samples, the lower quality of the cheaper components will start to make itself felt.

    Gradually, as IT departments grow, they tend to migrate towards better and better hardware. The really big outfits tend to use Dell and Compaq. Compaq in particular sells very, very expensive machines, which are very well engineered and hardly ever break. But you pay through the NOSE for this kind of service.

    So how do you know how much to spend on your servers? When you gain the ability to numerically measure how much it costs you when they fail. When your department and company mature to the point that you can accurately measure costs of downtime, then with management's decision on acceptable risk levels, you'll have a pretty good idea of what you should be spending on servers. Many big companies find that the cost of downtime is appalling, when they actually are able to measure it, and that the cost of even very expensive servers is minimal in comparison, so they buy the best stuff they can find.

    But until you can measure it, IMO you're fine with desktop components, as long as you buy GOOD ONES. Don't skimp on your drives, and make sure you have good cooling for them. Buy server cases; you can get good ones for a couple hundred bucks that will hold a billion drives, and then make sure to buy good cooling; you may want the boxes that mount 3.5" disks in 5.25" slots, with fans and hotswappability. I usually buy PC Power and Cooling power supplies for servers; even the Silencers are fairly loud, but they are very robust and well-built. Many of them are dual supplies in one box, which improves reliability even more. That's a lot of fans in each machine, so you may want to pick up a spare or two with each machine you buy. (Tape them inside the case).

    And the noise level, particularly once you get a number of them, will be high... but think of it as the sound of reliability and you won't mind it too much. Also note that when you get past a few machines, or if you spend a lot of time in server rooms, you should wear ear protection. I have worked in big colo facilities that were absolutely deafening, to the point that things sounded muffled when I left. That kind of noise DOES DO DAMAGE, and you want earplugs.

    Make sure you understand exactly what onboard network chipset you are buying: you most likely want an Nforce3 or an Intel, um, 865 or better, I think it is... where the network card is directly on the northbridge, so you can get the true gigabit speeds. When they are on the Southbridge, and look like they are PCI devices, you can't run gigabit full out. And never buy a motherboard that uses Realtek 8139 networking, they are garbage. They make the CPU work way too hard, and are NOT good for server machines.

    What you will end up with is a whole room full of Frankenclones, but if you've been smart and spent your money on good stuff, it'll be almost as reliable as the Dell/HP/Compaq/IBM clusters for a tiny fraction of the price. And you'll be able to get replacement parts anywhere. But you probably WON'T have spare parts on hand to fix things, unless you've been unusually clever in your design, because each new generation of machines will be different from the last, and you won't be able to use the same replacement parts interchangeably.

    Someday, when you find out what downtime costs you, the extra cost of the big label servers may suddenly look wonderful ... or it may not. I have seen a couple of
    • Compaq in particular sells very, very expensive machines, which are very well engineered and hardly ever break.

      You must be living on another planet or something. I worked for a company that did a web project for Creative, to develop a music store to be called "MuVo". They scrapped the website (which was very good) over not wanting to pay the All Music Guide for their content; they allegedly thought they would get to use our license to it, a notion they were explicitly disabused of (but not disabused enough,

  • Doesn't work (Score:3, Insightful)

    by Kris_J ( 10111 ) * on Sunday October 10, 2004 @01:56AM (#10484101) Homepage Journal
    By the time you've bought a desktop with all the high performance, high reliability options you'd need for a server, you've bought a server.
  • Until last year, we had a very good run using pretty standard machines as Linux web and file servers that were accessed constantly over a LAN. The only things that needed replacement were hard disks (so ensure you perform nightly backups to another machine on the LAN), and the occasional birthday present of extra RAM or bigger hard disks.

    This year we noticed Dell had very good rates for renting their rack servers, so we grabbed a couple, and will upgrade them on an 18-24 month basis. The affordability of
  • Nothing. (Score:2, Insightful)

    by J2000_ca ( 677619 )
    I have a PII which is as much a server as a quad Xeon (it works fine as a web server; no downtime in the past year due to parts (only had it for a year)).
  • Mix 'n' match (Score:5, Insightful)

    by Basje ( 26968 ) <bas@bloemsaat.org> on Sunday October 10, 2004 @05:10AM (#10484747) Homepage
    There is no distinct line between server hardware and desktop hardware. A lower-end server is easily built from decent desktop components. The bottom line is: buy good hardware.

    Don't skimp on the hard drives; go for reliable ones. SATA Raptors are as reliable as many SCSI drives, and go in any modern desktop. RAID-5 them. RAID-5 in software isn't much of a CPU hog on modern machines. RAID-5 in hardware is faster, but more expensive. Fit to budget.

    Hotplugging SATA is not really supported (tested) in Linux, but expect it to mature. When a drive fails at this moment, downtime is unavoidable. In the near future, expect this to improve.

    As for the mobo, memory, network, and case: get quality stuff, but don't go overboard. Onboard VGA is fine for your purposes: it will act as a server.

    Depending on your needs, backup media need to be considered. Put DVD burners in the server. Back up often. When you need more storage, portable hard drives are great. You need more than one.

    Most important: (stress-)test your equipment before putting it to use. Most broken hardware is broken from the beginning; hardware failing later is much less likely. The biggest difference between so-called server hardware and desktop hardware is the amount of checking it gets before it leaves the factory. So do that yourself.
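
    In the spirit of that stress-test advice, here is a toy burn-in sketch. It only exercises the write path on a scratch file (the read-back may well come from cache), so treat it as an illustration and reach for memtest86 and badblocks -w for real burn-in; the path and pass count below are arbitrary choices.

      #!/usr/bin/env python
      # Toy burn-in: repeatedly write a pseudo-random block to a scratch file,
      # sync it to disk, read it back, and verify. Illustration only; the
      # read-back may be served from the page cache, so use badblocks -w and
      # memtest86 for serious burn-in. SCRATCH is a throwaway example path.
      import os

      SCRATCH = "/tmp/burnin.dat"
      BLOCK = os.urandom(1 << 20)   # 1 MiB of random data
      PASSES = 1000

      for i in range(PASSES):
          with open(SCRATCH, "wb") as f:
              f.write(BLOCK)
              f.flush()
              os.fsync(f.fileno())
          with open(SCRATCH, "rb") as f:
              if f.read() != BLOCK:
                  raise SystemExit("verify failed on pass %d" % i)
      print("burn-in passed")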
  • by slasher999 ( 513533 ) on Sunday October 10, 2004 @09:48AM (#10485594)
    You are not considering the actual cost of owning and running these machines, only the initial cost of the hardware. If you learn how to do a proper analysis of the costs associated with each machine over a 3-5 year period, the typical server lifespan, you will find that purchasing an entry-level server will be far less expensive. Better memory (ECC), server chipsets (Intel 7xxx vs. Intel 865, for example), and a chassis designed to provide adequate airflow for a server are a bargain compared to downtime while you fix your Dimension "server" every couple of months.

    You can do a 1U P4 3.0 with mirrored Enterprise quality SATA disks and 1GB of ECC RAM for well under $2000. Take a look at the Intel SR1325TP1-E server platform. It's the server chassis with proper cooling with an Intel TP1 board installed. The board has dual onboard nics and the chassis has about five fans. Very nice, and runs $500. Add the CPU for about $200, memory, and disks (SATA, CD, floppy) and you are done.
  • The simple answer is the following:
    1) How much downtime can you afford due to lack of hot-swap, etc.?

    2) Can the desktop box do the job? If you're trying to do some massive process, the answer might be no.

    Let's face it - there are a LOT of "Mom and Pop" shops where if the server goes down for half a day, it's not a major problem (heck, I've worked at software shops like this - just keep working on what you already have out). Other places, you're down for 5 minutes (or even 60 seconds) and the phones will be ringing (wh
  • This comment is aimed at "production use" -- for "test/development" (non production) machines, please disregard.

    While an HP/Compaq "Proliant DL380" at around $5,000 with a 2nd CPU, redundant fans, RAID hard drives, etc. is a _lot_ more expensive than a $1,000 white box with a couple of IDE drives with software RAID, it tends to be worth it. At least in my situation.

    I've used white box servers in the past, and they are fine while they work. Once something goes wrong you're sort of on your own to track down
  • by ReidMaynard ( 161608 ) on Monday October 11, 2004 @05:40PM (#10497403) Homepage
    Reid Maynard wonders: "I work in an environment where the dollar is more than almighty (who doesn't?). One of my cost savings plans is to replace the desktop computer with an abacus. They cost much less, the parts are readily available and/or interchangeable - as opposed to waiting for overnight proprietary or obscure parts from a vendor, and so on. -- is there something special about a desktop computer other than looking at the blinking lights and feeling special?"
