Can My Desktop Make It in the Big Leagues?
bionic-john wonders: "I work in an environment where the dollar is more than almighty (who doesn't?). One of my cost-saving plans is to use desktop computers as servers. They cost much less, the parts are readily available and/or interchangeable - as opposed to waiting overnight for proprietary or obscure parts from a vendor - and so on. I understand that servers have redundancy on disk and power - but this can be emulated for a fraction of the cost as well. Is there a performance difference between a desktop and a server with the same specs? Chipsets are chipsets, motherboards are motherboards, and memory is memory -- is there something special about a server other than looking at the rack of blades and feeling special?"
What about the space? (Score:5, Insightful)
Re:What about the space? (Score:4, Informative)
I believe we paid around $4500 for our 3U P4 2.8GHz 2GB RAM 2.4TB SATA RAID-5 NAS machine with N+1 redundant power supplies, about the same for our 3U dual Xeon 2.8GHz 4GB RAM 52GB (six 15K RPM 18GB SCSI U320 drives in RAID-10) database machines with N+1 redundant power supplies, and our 1U P4 2.8GHz 2GB RAM 80GB SATA RAID-1 web servers each run around $1400 (no redundant power supplies). Point is, there ARE other options; you don't have to use low-end hardware just because you can't afford IBM. Besides, why pay for servers from IBM, HP, or Dell when you can buy two machines of the same caliber for the same amount of money or less? With two machines, you can do things like load balancing, increasing performance and adding redundancy at the same time.
Re:What about the space? (Score:3, Informative)
Power supply and air circulation (Score:2)
In my experience, "servers" are often merely built as rack-mountable boxes as opposed to floor-sitting boxes.
Bob-
What's a server? (Score:5, Informative)
What makes this guy a server? I'm no expert, but here's what I see:
Re:What's a server? (Score:2)
So how cheap is this CoLo anyway? Most of the CoLos I have looked at are priced high enough that using an Xbox would just be silly.
I'd love to find CoLo shelf space priced for someone like me who wants to put, say, 100 gigs online but expects fairly low traffic.
(in the NYC area would be nice as well)
Re:What's a server? (Score:2)
Why not just use DSL/cable? That'd be cheaper than a colo anyway. And keep in mind that installing Linux on an Xbox is also pretty silly, but that doesn't stop anyone.
Re:What's a server? (Score:2)
FYI check coloco.com or go with a cheapy dedicated company (valueweb.com)
Re:What's a server? (Score:3)
Re:Power supply and air circulation (Score:5, Informative)
Buy quality parts, and everything should be OK. Don't expect a $300 emachine to last out the year.
A few tips: dual CPUs and rackmount cases are a luxury, and if cost is that important, you can skip them. Make sure there is a process in place to check on the health of the server. Even waving your hand behind the box once a week to check how hot the PSU exhaust is can save the business a lot of headache. (Hint: if no air is blowing, replace the PSU, and check the HDDs to make sure they're both still working.)
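If you'd rather automate that weekly check than rely on hand-waving, disk health is easy to poll. A minimal Python sketch, assuming smartmontools is installed; the device paths are examples, not a prescription:

```python
#!/usr/bin/env python3
"""Weekly health-check sketch: poll SMART status on each disk."""
import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]  # hypothetical device paths

for disk in DISKS:
    # `smartctl -H` prints an overall pass/fail health assessment.
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    if "PASSED" in result.stdout:
        print(f"{disk}: OK")
    else:
        # A failing (or vanished) drive is exactly what you want to
        # catch before the second half of a RAID-1 pair dies too.
        print(f"{disk}: CHECK ME\n{result.stdout}")
```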
Also, be wary of Dell. They use non-standard power supplies, so if your PSU goes out, you can't hop down to the local computer store and buy a replacement.
Re:Power supply and air circulation (Score:4, Informative)
Re:Power supply and air circulation (Score:2)
Just treat 'em like they could break down any day and you're set.
Re:Power supply and air circulation (Score:4, Informative)
I assume USB is to make it removable, but for that to do any good, you need to actually remove it, which means having at least one other USB drive to swap in while the other is off-site. If the budget doesn't allow for that, and you're just going to leave the backup there on top of the server all the time, then save yourself some money and mount an IDE drive in the case, and take advantage of the better speed to get daily backups done more effectively. Alternatively, do on-site daily backups across the network to an old machine otherwise destined for recycling but with a new large hard drive; that'll give you better disaster recovery ability if the main server dies and takes its drives with it.
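The rotation itself is simple enough to script. A hedged sketch of the nightly job in Python, assuming rsync is installed; the source path and the two mount points for the swapped USB disks are made up for the example:

```python
#!/usr/bin/env python3
"""Nightly backup to whichever of two swapped USB disks is mounted,
so the other can live off-site."""
import os
import subprocess
import sys

SOURCE = "/srv/data/"                            # trailing slash: copy contents
CANDIDATES = ["/mnt/backup-a", "/mnt/backup-b"]  # the two USB disks

# Find whichever backup disk happens to be plugged in and mounted.
target = next((m for m in CANDIDATES if os.path.ismount(m)), None)
if target is None:
    sys.exit("no backup disk mounted -- did last week's swap happen?")

# -a preserves permissions/times, --delete mirrors removals.
subprocess.run(["rsync", "-a", "--delete", SOURCE, target], check=True)
print(f"backed up {SOURCE} to {target}")
```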
If you don't need a UPS, make sure you at least have a surge suppressor.
Please ignore that comment. You do need a UPS. Skimp on the specs and buy whatever's on sale with rebates at Best Buy this week if you must, but any machine you're going to call a "server" needs at least a few minutes of battery power to protect its data from sudden power outages and its electronics from power slumps.
Re:Power supply and air circulation (Score:3, Interesting)
I bought an APC RS1500 ($400 CDN, 1500VA), thinking it'd do power filtering. Well, it does, except that it doesn't do any power filtering within a ~35V range, if I recall correctly.
According to APC, on 120V power, it has to go above 138V before it tries to filter by cutting voltage by 12% (not dynamic, it just cuts whatever it gets by 12%). If it drops below 98V, it just boosts it.
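To make the complaint concrete, here's a toy Python model of the behavior as described above. The 12% trim and the 98V/138V thresholds come from the post; the boost ratio is my assumption, since the post doesn't give one:

```python
def avr_output(v_in: float) -> float:
    """Model of the AVR behavior the poster describes (not APC's spec
    sheet): above 138 V the unit trims input by a fixed 12%; below
    98 V it boosts; anywhere in between it just passes power through."""
    if v_in > 138.0:
        return v_in * 0.88   # fixed 12% cut, not regulation to 120 V
    if v_in < 98.0:
        return v_in * 1.12   # boost ratio assumed, post doesn't say
    return v_in              # the wide pass-through band complained about

for v in (90, 100, 120, 135, 140):
    print(f"{v} V in -> {avr_output(v):.1f} V out")
```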
Re:Power supply and air circulation (Score:2)
(Though the range on the APC is pretty damn wide. If memory serves Tripp Lite, Best Power, and a couple other manufacturers are much less permissive.)
Re:Power supply and air circulation (Score:2)
If the info had been published beforehand, I might certainly have purchased a different UPS. I thought that by buying APC I was getting the best; it might be the best quality, but it's certainly not the best performance. So it looks like my PSU is going to be doing most of the filtering... I've got dirty power here that has killed a whole bevy of PSUs, including the Antec TruePower Gold 330w that I had be
Re:Power supply and air circulation (Score:2)
As far as offsite backup is concerned, ideally have three USB disks, so that you always have one copy offsite. And ha
Re:Power supply and air circulation (Score:5, Informative)
RAID-1, you mean; RAID-0 is striping (hence 0 redundancy). And yes, anything even vaguely important should be on a RAID array in addition to backups. RAID doesn't help much when your controller freaks out or you hit a fs or user error.
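A toy sketch of that striping-vs-mirroring difference, for illustration only: lose one disk of a two-disk array and see what's still recoverable.

```python
"""Why RAID-0 has zero redundancy and RAID-1 survives a disk loss."""

data_blocks = ["b0", "b1", "b2", "b3"]

# RAID-0: alternate blocks across two disks -- no disk holds everything.
raid0 = {0: data_blocks[0::2], 1: data_blocks[1::2]}

# RAID-1: every block written to both disks.
raid1 = {0: list(data_blocks), 1: list(data_blocks)}

def survives_disk_loss(array: dict, dead_disk: int) -> bool:
    remaining = {b for d, blocks in array.items()
                 if d != dead_disk for b in blocks}
    return remaining == set(data_blocks)

print("RAID-0 survives losing disk 0:", survives_disk_loss(raid0, 0))  # False
print("RAID-1 survives losing disk 0:", survives_disk_loss(raid1, 0))  # True
```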
Unless you're willing to trade off warranty, latency and quality against sequential transfer rate and storage, this means go SCSI.
Buy decent fans (twin ball bearing or so?) and monitor them. If noise isn't a concern, this might be a good application for Delta's more extreme fans.
On a 1U rackmount, your case fans will most likely be your CPU fans too. Pair of Opterons? Fit passive heatsinks and a bunch of 15kRPM case fans, should be sorted.
Do they make those in 64GB versions now? No? I'll just use another RAID array then, thanks.
Depends what your files are and how you're accessing them; do you want to have to hit disk for every access? With a lot of clients (which is kind of the point with a file server), a lot of memory is practically a requirement.
A good kernel should avoid this, and HTT can help, but when you can get a well kitted-out 1U dual 1.4GHz PIII for under £500, why not?
My local computer store doesn't sell 1U PSU's. Dell do however support redundant ones; I'll take that over downtime while I replace a single one, however cheap/available.
Re:Power supply and air circulation (Score:2)
300GB, 16MB cache, 7200RPM, USB 2.0+Firewire external hard drive [maxtor.com]
Re:Power supply and air circulation (Score:4, Informative)
I can't speak to other brands of machine, because we only have Dells, but insist on proper monitorability.
Re:You're full of it. (Score:2)
Re:You're full of it. (Score:2)
Yep, there are differences... (Score:5, Informative)
- Power supplies fail... To be honest, this isn't nearly as big a deal in the hot-swap arena as the hard drives. However, having 2 power supplies in a server machine means that things are significantly less bad when or if one of them happens to fail.
- Vendor commitment. From those old Compaq Proliants to the new Dell Poweredge machines, servers are built to be stuffed in a rack and left untouched (unless something fails... see above). They'll come with hardware that those vendors usually stake their reputation on or even had a hand in building. Even the management software isn't always bad...
Re:Yep, there are differences... (Score:3, Interesting)
Disks fail. When you stick a server in a rack and leave it running for 5 or 6 years (unlike your average /.'ers desktop which probably gets a shake-up far more often), you won't regret being able to hot-swap a failed drive on your RAID array with a spare.
Right. With desktop hardware you won't be able to hot-swap, you'll have to suffer 60 seconds of downtime for a reboot.
I had to do this the other day. Here's the process:
Yes and No. (Score:4, Insightful)
Re:Yes and No. (Score:4, Insightful)
There really are consumer parts aimed at the PC server environment, but nothing should be considered drop-in. Do some research on individual components, and good luck!
(P.S. all of our servers are basically desktops)
Re:Yes and No. (Score:2, Interesting)
I have, and having the system say "memory error corrected" is so much better than random lockups and faulty operation. Get ECC memory if you value reliability and correctness.
The problem I've found is that while it's possible to setup a workstation as a server and get good performance and reliability, it's so much work to research and build that it's often more cost-effective to just buy server grade hardware. Whe
Re:Yes and No. (Score:2)
Unless you are comparing building a server out of workstation parts vs. buying a server from Dell; that would hardly be a fair comparison.
Re:Yes and No. (Score:2)
Except that management will often count the tech guy's time as something that's already paid for. Even if they recognise that time spent doing A means time can't be spent doing B, their quarterly payroll budget will come out the same, so the cost of the tech guy's time on this project is perceived as Zero. (And if it means repurposing desktop gear the company already owns instead of buying new server gear,
Redundant parts (Score:2, Insightful)
Redundant power supplies
Redundant disks
Hardware raid (other than 0/1)
If that's not important to you, then by all means go for it
You're a DUMBASS! (Score:2, Insightful)
Still, you can do it. But I stand by the statement: you're a dumbass.
Re:You're a DUMBASS! (Score:2)
Every time something breaks, you'll spend an hour trying to figure out the specs on the broken part in the broken machine. Every time you upgrade, you'll spend a whole day just trying to figure out what parts you have in all
Re:You're a DUMBASS! (Score:2)
Well... from experience with a couple of VA's boxes, they were built from off-the-shelf components. Adaptec controllers and whatnot... so a quick call to Adaptec, and boom! You've got yourself a new duplicate.
And didn't a company go into supporting VA's servers?
Re:You're a DUMBASS! (Score:2)
Adaptec probably isn't going to have an exact duplicate of some controller model XK-888 they made four years ago lying around. Dell and Sun are going to have an exact duplicate of the Adaptec controller they put in server model SM-444.
That's (part of) the reason Dell and Sun have a markup.
Re:You're a DUMBASS! (Score:2)
I'll repeat what I said, though. People who buy servers from Digital, Compaq, HP, IBM, Dell, and Sun expect that they will be able to get exact duplicate replacement parts two, three, four years later (or even longer). When they can't get them, they have a legitimate gripe.
People who buy off-the-shelf components do not have the same expectation.
Re:You're a DUMBASS! (Score:3, Interesting)
It's worked fine for us, and we have only had the servers go down twice in eight years -
mmm... server (Score:2, Interesting)
Re:mmm... server (Score:5, Funny)
If they are bragging about their computers, I'm guessing neither.
YES (Score:4, Informative)
More details here [tersesystems.com].
Desktops tend to be horrible servers (Score:5, Informative)
Re:Desktops tend to be horrible servers (Score:2)
Re:Desktops tend to be horrible servers (Score:2)
If you don't need so much disk space that you need drives in an external enclosure, there is basically no reason to use SCSI. IDE RAID is much cheaper and just as fast, if you get a good enough controller. These days the primary benefits of SCSI are the ability to use external devices, or dual-attach RAIDs, which cannot be accomplished using IDE.
I eagerly await the day when it becomes practical to use clusters to implement this stuff. If you had a clustering implementation of samba and/or nfsd, a
Can I have your job after you get fired? (Score:4, Interesting)
I have an "obsolete" low-end server that I use for running FreeBSD. It has SMP, ECC RAM, SCSI disks, a boring but very reliable chipset, extensive documentation, diagnostics software, and a high-quality case and power supply. It is also tested and certified to run all of the popular server operating systems. The manufacturer support is excellent. The video card would suck for a modern desktop, but who cares. It never crashes, it just works. If it does break, I can get parts and service.
Re:Can I have your job after you get fired? (Score:2)
Exactly! eBay is your friend here -- you can get an old ~1GHz Proliant or IBM server for about $500-$600, which is probably cheaper than a "desktop" box. You may need to expand the box, but the memory and old SCSI drives are also dirt cheap. These boxes will be 100% rock-solid with Windows/Linux/BSD.
Most server use (fileserving, SMTP/IMAP email, etc) does not requir
Re:Can I have your job after you get fired? (Score:2)
If you look at the motherboard in my server, everywhere they made a choice about chips, they went with something that had a proven track-record of stability and reliability.
One of the reasons that I bought it was that I couldn't find a desktop with the features that I wanted.
Integration Testing (Score:3, Interesting)
A coworker did something similar to what you are talking about. While it did save cash up-front, he spent a huge amount
Big Leagues, how big? (Score:2)
Clustering? Then yes, PCs might be a way to go, but you trade manpower time for uptime per box.
If it's a single-app box and you are not hot-swappable, you won't be able to hold out until a maintenance window for repairs.
Blades are good, but not the end-all; normally you have dozens of blades around a big beefy database, and the database box and disk storage are the expensive beasts. Also licensing is a factor, support, lots of things.
Not all setups are the same, you shou
Re:Big Leagues, how big? (Score:2)
13 - file/print servers (linux)
1 - webmail (linux)
1 - SMTP relay/mail gateway (linux)
1 - web (win2k)
2 - DNS (linux)
1 - Terminal Server 2K
1 - Voicemail (windows)
Humm, sounds like a normal small office environment. PCs would be the best cost-wise. I'd create a standard win2k/linux ghost image to save you time and make restores easier.
I'd buy a used voicemail setup off eBay and ditch that headache; not worth the time or cost. Most opensource packages can't handle a very large v
Re:Big Leagues, how big? (Score:2)
Dell is blowing out their entry level server to make room for the next generation (which isn't nearly as nice, IMHO.)
Left hand side has a link to their entry level 400sc for $323 shipped to your door.
P4 2.8GHz w/ HT
128M ECC RAM
48x CD / keyboard / mouse
40G IDE, upgrade to 80G for $20, or to 160G for $60.
Intel Gigabit NIC
PCI 8M ATI video card, system has an AGP slot if needed.
One year onsite warranty.
Systems run cool and silent as delivered (I have two in my hom
speak of the devil (Score:2, Informative)
The load that these machines take is not much more than what that PII could handle (in fact I think that box handled everything great other than its nightly da
Re:speak of the devil (Score:2)
And no, you don't want to switch PSU's every year. The failure rate on most reasonably made parts probably goes down over time. Give me twenty brand new, untested power supplies, and I'll bet cold hard
Are you kidding? (Score:2)
If you work at google.com, or some other high-availability company, please resign ASAP.
Seriously - it depends on your budget. If your budget is _that_ shoestring, you should p
Google hardware (Score:4, Insightful)
no-name PCs. Each one had two IDE drives and a single Celeron CPU. Failure? Oh yeah, but it didn't matter at all. The software would just drop the broken box out of the cluster. Nobody would even bother to fix the PCs as they died! It was cheaper to just replace the whole cluster whenever too many of the boxes were dead.
Now Google is large enough to get a good deal on custom-built rack-mount hardware. It's still IDE and cheapo consumer CPUs of course. Assuming that your server needs are a bit less than Google's, this option won't be available cheap for you.
Re:Google hardware (Score:3, Informative)
Joe LAN Admin is usually dealing with fileserver and database applications that use long-lasting connections and lots of server state. (Even many HTTP apps make heavy use of server-side sessions.) There simply aren't cheap fail-over solutions for these apps. So it makes a lot more sense to buy a box that can maintain the uptime by itself.
Re:Google hardware (Score:2)
IIRC, they still do this. And they leave the dead boxes where they are; they figure once most of the computers in a specific section of their server rooms have died, they'll pitch 'em all and reuse the space. Until then, it's cheaper to buy more commodity boxes and pop 'em into more office/server space.
Google (Score:3, Informative)
From what I've read about Google, their philosophy is that it's better to have a number of redundant servers than one critical server.
Re:Google (Score:2, Informative)
Re:Are you kidding? (Score:2)
That may be true where you work, but there are plenty of places where a real, there-on-the-books savings this quarter outweighs a hypothetical, this-could-happen savings some day down the line. Especially if they don't consider computers to be mission-critical zero-downtime equipment (and in many organis
Desktops as Servers (Score:3, Insightful)
The correct answer to the question is: what is the value of downtime to you? Often a few hours of being offline dwarfs the savings possible from this approach.
There is no question you will have more downtime with desktop hardware - it is just not engineered with 24/7/365 in mind. You can add a few extra fans and make sure you don't have any proprietary parts like Dell and HP throw into their desktops, but in the long run you WILL have more downtime.
My amateur opinion (Score:4, Informative)
Just take into account that server and desktop hardware are designed with different goals in mind. Server hardware is meant for 100% uptime, even in the case of most hardware failures, and for good scalability under high loads, while desktop hardware aims to give you the best bang for your buck, on the understanding that your data is typically much less valuable.
I'm guessing you'll be using IDE drives.
Some of the more expensive (usually SCSI) hard disks and controllers have a battery-backed cache that can ensure that your writes are preserved in the event of a power loss. Without this, you have to sacrifice a great deal of write performance if you wish to ensure integrity. The sacrifice is a bit less if the hard disk preserves write order, which ensures integrity to the extent that the filesystem is capable, though you'll still lose data. Combining a desktop UPS with a desktop server, set up to power down safely before the UPS runs out and come back up afterwards, is sometimes enough to let you sleep some nights.
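For a feel of that trade-off, here's a minimal Python sketch: calling fsync() after each write buys integrity at the cost of waiting on the physical disk, which is exactly the wait a battery-backed cache lets you skip. The path and payload are just examples:

```python
import os

def durable_append(path: str, payload: bytes) -> None:
    """Append and force the data to the platter before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, payload)
        os.fsync(fd)   # without this, a power cut can eat the write
    finally:
        os.close(fd)

durable_append("/tmp/journal.log", b"order 1234 committed\n")
```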
The mtbf (mean time between failure) ratings for hard drives intended for desktop and server use are calculated differently. For servers, a consistent high load is assumed. For desktops, a low load and lots of sleep time are assumed. So a 1 million hour server HD might be equivalent to a 2 million hour desktop HD, and most desktop HD's are rated at like 300000 hours.
Also, MTBF is not an estimate of how long a hard disk will last, just of the chances of a fairly new drive going out unexpectedly. Say they tested new hard disks for 500 hours to weed out the duds, then took 1000 of the survivors and tested them for another 1000 hours, and 4 went dead; they could claim an MTBF of 1000*1000/4 = 250,000 hours, AFAIK. But you can be sure most of them won't last that long; that's almost 30 years at full load. It's like saying that if 4 kids in 1000 die between ages 5 and 15, humans have a mean time between failure of 10*1000/4 = 2500 years. The real estimated lifetime of a hard disk may be roughly proportional to how long the manufacturer is willing to warranty it for. Hard disks intended for server use tend to be warrantied for much longer.
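The arithmetic from that paragraph, spelled out:

```python
# MTBF from a population test: total drive-hours / failures.
drives, test_hours, failures = 1000, 1000, 4
mtbf_hours = drives * test_hours / failures
print(mtbf_hours)               # 250000.0 hours
print(mtbf_hours / (24 * 365))  # ~28.5 "years" -- clearly not a lifetime

# The human analogy: 4 kids per 1000 dying between ages 5 and 15.
people, years_observed, deaths = 1000, 10, 4
print(people * years_observed / deaths)  # 2500 "years" MTBF for humans
```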
If you use a desktop, max out the RAM to minimize disk use and schedule very regular incremental backups, as full backups will also greatly increase disk use. A desktop server will last the longest if it almost only touches the hard disk to perform necessary writes. And be aware that cheap desktops have a high lemon rate.
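The incremental idea is simple enough to sketch: copy only files modified since the last run, so the disks stay idle the rest of the time. Paths here are illustrative, and a real job would handle deletions and permissions too:

```python
"""Sketch of a timestamp-based incremental backup."""
import os
import shutil

SRC, DST, STAMP = "/srv/data", "/mnt/backup", "/var/lib/backup.stamp"

last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0

for root, _dirs, files in os.walk(SRC):
    for name in files:
        src_path = os.path.join(root, name)
        if os.path.getmtime(src_path) > last_run:   # changed since last run
            rel = os.path.relpath(src_path, SRC)
            dst_path = os.path.join(DST, rel)
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            shutil.copy2(src_path, dst_path)        # copy2 keeps mtimes

# Touch the stamp so the next run only sees newer changes.
with open(STAMP, "w"):
    pass
```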
If you buy a Dell PowerEdge 400sc, their cheapest line of servers, you're actually getting low-end desktop hardware in an easy-access case for about the same price as their similar desktops, plus integrated gigabit. So using a desktop as a server isn't too horrible, if it's not vital.
A good RAID-5 file server with SCSI drives, plenty of ECC RAM, and a redundant power supply can live almost forever without maintenance. They've been accidentally sealed behind walls without anyone noticing until many years later.
Re:My amateur opinion (Score:2)
Granted it has been a while, and nobody actually does that, and I can't find any supporting documentation on the matter
Reliability (Score:5, Insightful)
What if a hard drive dies? In a server, you pull it out, pop in a new one, and the RAID array fixes itself. The users don't notice a thing. In a desktop machine, you have to turn it off, unplug everything, open the case, unscrew the screws, unplug the cables, remove the drive, put in the new drive, put everything back together, restore the array manually, and hope you didn't lose some data. And all while you do this, the server is down and nobody can do anything.
Just keep one thing in mind. If you pay too much, nothing will happen. If you get a crappy system, you will get fired.
Re:Reliability (Score:3, Interesting)
If you can tolerate an outage every few weeks, go ahead and use desktops.
That has to be one of the most flat-out wrong statements I've heard this month. I've had several desktops working as servers over the years, and for the most part they all work flawlessly. I had one machine start getting very flaky after 3 years constant uptime, one where the hard drive failed (the HD was probably 6 years old), and one where a PS failed (probably about 6 years as well). With the exception of old age, the deskto
Re:Reliability (Score:2)
"For the most part" is the key thing here. They'll work fine if your expectations aren't too high. If you can tolerate a dead hard drive/power supply/network card bringing down a server, desktops are fine.
If the cost to the company of a 5-hour server outage is less than the cost of a real server, go ahead and use desktops. However, most places, even small businesses, lose thousands of dollars
Re:Reliability (Score:2)
As for serviceability, hot-plug on most components is what really sets server-class hardware apart from desktops. Even if you're not 24/7, being able to remove a failed hard disk at any time means you don't have to schedule an outage, which may introduce more errors (hard disks tend to fail more often when they're stopped/started). On a decent server, you can replace a
Re:Reliability (Score:2)
Re:Reliability (Score:2)
What if a hard drive dies? What if network card dies? What if the power supply gives up the ghost? You have to power the box down and manually replace the offending component, while your business grinds to a halt. If you are lucky, y
Re:Reliability (Score:2)
If you have a 1/year failure rate per machine and 20 machines, that's a failure about every 18 days, distributed evenly if you are lucky (and several in one week if you are not lucky).
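Spelled out:

```python
# 20 machines x 1 failure per machine-year = 20 failures/year fleet-wide.
machines, per_machine_per_year = 20, 1.0
failures_per_year = machines * per_machine_per_year
print(365 / failures_per_year)   # ~18.25 days between failures on average
```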
That said, almost ALL of the system failures I have seen in the past 15 years point to 'insuf
Re:Reliability (Score:2)
Differences between servers and desktops (Score:3, Insightful)
Here are the real differences:
Chipsets are different - and focus on throughput.
RAM accuracy (yes... there is a difference)
Built in pre-failure diagnostics
Redundancy
Hot-swappable components
When you look at pressing desktops into server use, analyze the cost of downtime. Let's say you have a sales team hooked to your server - 8 users, working an 8-hour day. The server is down 1 hour. Sales are $8,000/day. You lose 1/8 of your sales for the day. You just lost $1K in revenue plus your time spent fixing. This happens 10 times... you can see where the desktop gets expensive.
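As arithmetic (using the 8-hour day that the reply below questions):

```python
daily_sales    = 8_000   # dollars per day
business_hours = 8       # assumed working day
outage_hours   = 1
incidents      = 10

loss_per_outage = daily_sales * outage_hours / business_hours
print(loss_per_outage)              # 1000.0 -> the $1K per incident
print(loss_per_outage * incidents)  # 10000.0, before counting repair time
```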
Re:Differences between servers and desktops (Score:2)
You're assuming that business hours are only 8 hours.
Go for it. (Score:2)
Some ingenuity can help cover the gap. One idea I've had for powerful remote troubleshooting and repair: get video cards with TV-out, feed them into an RF modulator, and take the coax into a computer doing video serving. This will allow you to see exactly what the computer is doing when it's not responding.
I don't know if conventional UPS serial lets you power the computers off and on, but relay boards can be wired
Why? (Score:2, Informative)
There are differences... (Score:4, Interesting)
Most cheap desktop motherboards have built-in video using "shared memory" - this is actually taken from main memory and is a constant interruption to the CPU to do what needs to be done.
Bandwidth of the PCI bus, and ACPI forcing all cards to use the same interrupt, add to the overhead of the OS sorting out the conflicts and ordering. This can also lead to lockups or frozen I/O - I know from using a 100M NIC with a 100M disk controller.
Multiple processors - and I am not talking about the CPUs! Most server-level parts have intelligent controllers (i.e. their own co-processors). This way the main CPU can get work done and not worry about reading a disk drive.
Now: does every server have to be built to server standards? NO.
An old desktop box makes a great firewall, print server or even departmental webserver. The key here: if it goes down, how fast can it be replaced? With a firewall, do not build one. Build two; the second just needs to boot and be plugged in. Same for a print server or small localized webserver.
But if you are crunching data - a database server for example - buy a real server. I like the IBM x440: it maxes out at 16 CPUs (built in sets of 4), with data buses 256 bits wide, not the 32 or 64 of most motherboards. PCI-X slots are 64 bits wide with hot-swappable cards, and it maxes out at something like 100 of them. Throw on VMware's ESX and make a pile of "little white boxes", all virtual.
You also asked about RAID cards for IDE. Be sure they are intelligent (co-processors) or the CPU is doing all the work.
In the end, to me the real difference between Desktop / Server Class / Server machines is CPU loading: how much of the "housekeeping" the CPU must perform.
On a desktop machine, the CPU does it all. It watches every byte that goes into and out of a disk drive or network card. It gives up time to allow the video to share its memory. This all takes away from the base function of running an app. At one point a few years ago, the average machine was using up to 40% of its processing just to keep the screen updated.
Server-class machines have helper processors to offload the CPU. Adding these to a desktop box starts the transformation into a server - though it's still missing what a true server needs: hot-swappable everything.
I have built machines with this in mind for years. My current home machine is a dual PPro 200 with high-end SCSI and high-end video (for the time, PCI bus); working a large database with a database design tool, it outperforms the 3GHz P4 I have at the office with IDE and shared video. Parts do make a difference.
True server machines are built differently, PERIOD. Look at the x440 from IBM, look at the top-end machines from Dell and HP/Compaq, and you will see the difference.
Yes, they do sell servers that are really desktops in disguise. The Dell 400SC small server is the same case and motherboard as the Dell 800 desktop series. The differences: ECC memory, and a front cover that hides the 2 USB slots and sound ports on the front. Also, you can get this for less than the matching desktop configuration. I got one for my wife's desktop.
Lastly clustering...
Clustering, to me, is to servers what RAID is to disk drives: lots of cheap servers sharing the load, acting as a single larger machine. So all of this may be for naught.
few reasons (Score:2)
Rackspace is usually at a premium. Desktop servers don't stack well and each year they are made in different sizes. Sometimes half an inch more width can be a problem if you need to swap one.
Reliability. PC computers and components just aren't made for a 24/7 vibration-ridden environment. Their MTBF is probably not considered a significant design factor, as people just reboot their machine if something goes wrong.
Open the case of an IBM or Dell rackmount server and prepare to be impressed. The design
BYOB (Score:4, Interesting)
The mail server where I work used to consist of a 733MHz Celeron, branded E-Machines. It was a disused desktop machine from Joe Random (Joe, of course, has a shiny new Dell on his desk to replace it). Complete with a $3 PCI RTL8139 NIC, it was the epitome of cheap.
If any part failed, including the 175-Watt PSU, the machine would die completely.
It'd been that way since I started with the company.
I mentioned it to a higher-up, who happens to be a rather important salesman of moderate technical inclination, and whose sales depend primarily on reliable email.
He insisted that I do something about it, and so I began doing so.
I fought with the RAID adapter in a Proliant that we had spare before I realized why people generally loathe binary drivers under Linux. I looked for another way to connect the hard drives, but the box only had one(!) real IDE channel, and it was consumed by a pair of CD-ROM drives.
I sat and fathomed that for a while: big server box, stout steel construction, Serverworks chipset, ECC RAM, huge cooling, 64-bit PCI, one P4 Xeon and room for a second. Unsupportable hardware RAID. One bloody IDE channel. No SCSI. The sound of nonsensical madness was deafening.
So I just built one. I had a few priorities, like redundant PSU cooling, Pentium 4 (I'm an AMD fanboy, but thermal throttling is your friend, even if the chip is vastly overpriced), redundant storage, good IO performance, and the ability to replace any (or every) part with something that can be sourced locally within an hour or so. Oh, and it has to be cheap.
I also made a list of non-priorities: Don't need a lot of number-crunching ability, don't need redundant PSUs, don't care about multiple CPUs.
"Who makes server mainboards," I asked myself. I answered myself with "Tyan."
I've never read anything but good stuff about Tyan. So I got one of their P4 boards. Not a "server" board, but one of their lesser (single-CPU) models which were hopefully developed by the same engineers. Two channels of SATA RAID, four DIMM slots, very few other built-in goodies, except for two additional PATA ports.
It supports dual-channel ECC RAM, so I picked up a couple of quarter-gig sticks of that. Could've gotten more, but remember, this is a -budget- server. (It seldom swaps, and when it does, the disks are fast enough to make it a non-issue.)
Also picked up a couple of Western Digital 80GB SATA drives, because Moving Parts Are Important, MMkay?, and at the time they were the only ones still offering a 5-year warranty. This machine is supposed to live longer than that before it is outgrown.
And for good measure, I included a Pioneer DVD-R for offline backups. I hate tapes.
I tossed it all in the cheapest black case I could find (newegg, $24, shipped). I threw away the included PSU and replaced it with a big Antec Truepower.
Killed the hardware RAID in favor of Linux's software RAID1. I have no intentions of ever marrying a computer's software to something as general and failure-prone as a modern motherboard - out-of-the-box RAID is a great way to fuck yourself at disaster-recovery time.
It runs Gentoo, and filters and tosses mail at something like twenty times the rate of the old E-Machines consumerbox (which had buried itself in backlogged mail a few times).
We've got redundancy of cooling and storage, we've got a graceful fail-safe on the CPU fan, and we've got a disaster plan that includes being able to find parts from the mom-and-pop shop down the street, or mounting the SATA drives in that wretched Proliant with a PCI controller, or (at worst) setting up the Proliant's DVD-ROM and one of its 80gig drives as master/slave and restoring from DVD-R.
I'm pleased with it. It was cheap. It went together slicker than greased shit. I don't think it's going to fail anytime soon, but if it does, at least I don't have to worry abou
Re:BYOB (Score:2)
ECC is one of the most important things you can do.
And for good measure, I included a Pioneer DVD-R for offline backups. I hate tapes.
Daily rsync to another machine and a tape drive for monthly full-system backups. Considering the lifespan that CD-R's have shown, I don't expect that DVD-R's will ultimately be much better.
Killed the hardware RAID in f
Re:BYOB (Score:2)
I've not heard of any problems with software RAID that could not be attributed to user error. And, having made some of those errors myself, I think I'm now qualified to operate it. I've heard numerous horror stories about RAID cards g
Re:BYOB (Score:2)
But you can, in most instances, run the drive without the controller if it fails. I've had this happen -- boot the rescue kernel, edit the fstab, fsck, and off you go....
Then when the replacement controller comes in, you rebuild the array.
I'm not at all hip to waiting two years
Re:BYOB (Score:2)
'Sides, CPUs caught up with hard drives a -long- time ago. The performance hit of software RAID-1 cost me perhaps $10 in CPU power. Whooptie shit.
2. Clearly. Now, tell me, how would this be any different if I had purchased a Proliant ML330 for more money and less function? Remember, I said nothing
An answer from someone who's done it all (Score:2)
The difference is usually form factor, quality of hardware, cooling, and type of hardware.
A server room is usually cramped, so the smaller the case the better. Or at least a case that doesn't need open areas all around it. Those 19-inch racks ain't just there to look cool. It is just more efficient than stacking PC towers.
Not all motherboards/hds/fans/etc are equal. Almost all can run 24/7 if you're lucky but being under full load
part of the evolution of an IT department (Score:5, Insightful)
In the beginning, there isn't much money available, so most places cobble together 'servers' from spare desktop components, and throw them up in a closet somewhere. That generally works okay, and the company realizes that they like having servers, so over time, the installation grows.
As it gets bigger, the lower reliability of desktop components will start to become apparent; servers will go down, hard drives will fail. It's just statistics; given enough samples, the lower quality of the cheaper components will start to make itself felt.
Gradually, as IT departments grow, they tend to migrate towards better and better hardware. The really big outfits tend to use Dell and Compaq. Compaq in particular sells very, very expensive machines, which are very well engineered and hardly ever break. But you pay through the NOSE for this kind of service.
So how do you know how much to spend on your servers? When you gain the ability to numerically measure how much it costs you when they fail. When your department and company mature to the point that you can accurately measure costs of downtime, then with management's decision on acceptable risk levels, you'll have a pretty good idea of what you should be spending on servers. Many big companies find that the cost of downtime is appalling, when they actually are able to measure it, and that the cost of even very expensive servers is minimal in comparison, so they buy the best stuff they can find.
But until you can measure it, IMO you're fine with desktop components, as long as you buy GOOD ONES. Don't skimp on your drives, and make sure you have good cooling for them. Buy server cases; you can get good ones for a couple hundred bucks that will hold a billion drives, and then make sure to buy good cooling; you may want the boxes that mount 3.5" disks in 5.25" slots, with fans and hot-swappability. I usually buy PC Power and Cooling power supplies for servers; even the Silencers are fairly loud, but they are very robust and well-built. Many of them are dual supplies in one box, which improves reliability even more.
That's a lot of fans in each machine, so you may want to pick up a spare or two with each machine you buy. (Tape them inside the case.) And the noise level, particularly once you get a number of them, will be high... but think of it as the sound of reliability and you won't mind it too much.
Also note that when you get past a few machines, or if you spend a lot of time in server rooms, you should wear ear protection. I have worked in big colo facilities that were absolutely deafening, to the point that things sounded muffled when I left. That kind of noise DOES do damage, and you want earplugs.
Make sure you understand exactly what onboard network chipset you are buying: you most likely want an Nforce3 or an Intel, um, 865 or better, I think it is... where the network card is directly on the northbridge, so you can get the true gigabit speeds. When they are on the Southbridge, and look like they are PCI devices, you can't run gigabit full out. And never buy a motherboard that uses Realtek 8139 networking, they are garbage. They make the CPU work way too hard, and are NOT good for server machines.
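The back-of-envelope numbers behind that claim, for the classic 32-bit/33 MHz PCI case (textbook theoretical maxima; the shared bus also carries every other PCI device's traffic):

```python
# Why a PCI-attached NIC can't sustain gigabit.
pci_bandwidth = 4 * 33.33e6   # 32-bit bus @ 33 MHz: ~133 MB/s, shared
gige_payload  = 1e9 / 8       # gigabit Ethernet line rate: 125 MB/s

print(f"PCI bus:  {pci_bandwidth / 1e6:.0f} MB/s theoretical, shared")
print(f"Gigabit:  {gige_payload / 1e6:.0f} MB/s each direction")
# 125 of ~133 MB/s leaves almost nothing for disks and other devices on
# the same bus, and full-duplex gigabit (2 x 125 MB/s) can't fit at all.
```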
What you will end up with is a whole room full of Frankenclones, but if you've been smart and spent your money on good stuff, it'll be almost as reliable as the Dell/HP/Compaq/IBM clusters for a tiny fraction of the price. And you'll be able to get replacement parts anywhere. But you probably WON'T have spare parts on hand to fix things, unless you've been unusually clever in your design, because each new generation of machines will be different from the last, and you won't be able to use the same replacement parts interchangeably.
Someday, when you find out what downtime costs you, the extra cost of the big label servers may suddenly look wonderful
Re:part of the evolution of an IT department (Score:2)
You must be living on another planet or something. I worked for a company that did a web project for Creative, to develop a music store to be called "MuVo". They scrapped the website (which was very good) over not wanting to pay the All Music Guide for their content; they allegedly thought they would get to use our license to it, a notion they were explicitly disabused of (but not abused enough,
Doesn't work (Score:3, Insightful)
Leased servers, and "desktops" (Score:2, Interesting)
This year we noticed Dell had very good rates for renting their rack servers, so we grabbed a couple, and will upgrade them on an 18-24 month basis. The affordability of
Nothing. (Score:2, Insightful)
Mix 'n' match (Score:5, Insightful)
Don't skimp on the hard drives; go for reliable ones. SATA Raptors are as reliable as many SCSI drives, and go in any modern desktop. RAID5 them. RAID5 in software isn't much of a CPU hog on modern machines. RAID5 in hardware is faster, but more expensive. Fit to budget.
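The reason software RAID5 is cheap on a modern CPU: parity is just XOR across the data blocks in each stripe. A toy sketch of the recovery math:

```python
"""RAID-5 parity: XOR the stripe; XOR the survivors to rebuild."""

def parity(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three disks
p = parity(stripe)                     # parity block on the fourth

# Lose any one data block; XOR of the survivors plus parity rebuilds it.
rebuilt = parity([stripe[0], stripe[2], p])
assert rebuilt == stripe[1]
print("rebuilt:", rebuilt)
```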
Hotplugging SATA is not really supported (tested) in Linux, but expect it to mature. When a drive fails at this moment, downtime is unavoidable. In the near future, expect this to improve.
As for the mobo, memory, network, and case: get quality stuff, but don't go overboard. Onboard VGA is fine for your purposes: it will act as a server.
Depending on your needs, backup media need to be considered. Put DVD burners in the server. Backup often. When you need more storage, portable harddrives are great. You need more than one.
Most important: (stress-)test your equipment before putting it to use. Most broken hardware is broken from the beginning; failing hardware is much less likely. The biggest difference between so-called server hardware and desktop hardware is the amount of checking it gets before it leaves the factory. So do that yourself.
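A minimal burn-in sketch along those lines: hammer the disk with write/read/verify passes so infant-mortality failures show up before the box goes into service. A real tool would bypass the page cache (O_DIRECT) and run much longer; the size, pass count, and path here are just examples:

```python
"""Write/read/verify burn-in sketch."""
import hashlib
import os

TEST_FILE = "/tmp/burnin.bin"
CHUNK = 1 << 20          # 1 MiB per pass
PASSES = 4               # crank this way up for a real overnight burn-in

for n in range(PASSES):
    data = os.urandom(CHUNK)
    digest = hashlib.sha256(data).hexdigest()

    with open(TEST_FILE, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())     # force it through to the disk

    with open(TEST_FILE, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != digest:
            raise SystemExit(f"verify FAILED on pass {n} -- RMA this drive")

print("all passes verified")
os.remove(TEST_FILE)
```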
Penny wise, pound foolish as the saying goes (Score:4, Informative)
You can do a 1U P4 3.0 with mirrored Enterprise quality SATA disks and 1GB of ECC RAM for well under $2000. Take a look at the Intel SR1325TP1-E server platform. It's the server chassis with proper cooling with an Intel TP1 board installed. The board has dual onboard nics and the chassis has about five fans. Very nice, and runs $500. Add the CPU for about $200, memory, and disks (SATA, CD, floppy) and you are done.
All depends (Score:2)
1) How much downtime can you afford due to lack of hot-swap, etc.?
2) Can the desktop box do the job? If you're trying to do some massive process, the answer might be no.
Let's face it - there are a LOT of "Mom and Pop" shops where if the server goes down for half a day, it's not a major problem (heck, I've worked at software shops like this - just keep working on what you already have out). Other places, you're down for 5 minutes (or even 60 seconds) and the phones will be ringing (wh
Beware mean-time-to-repair (Score:2)
While an HP/Compaq "Proliant DL380" at around $5,000 with a 2nd CPU, redundant fans, RAID hard drives, etc. is a _lot_ more expensive than a $1,000 white box with a couple of IDE drives with software RAID, it tends to be worth it. At least in my situation.
I've used white box servers in the past, and they are fine while they work. Once something goes wrong you're sort of on your own to track down
Can My Abacus Make It in the Big Leagues? (Score:3, Funny)
Re:oh far out! (Score:2)
New, good-quality fans and filters are a good investment though. If you skimp on them, you'll only end up having to clean the dust out of the case and replace the fans, possibly after the damage has been done.
Re:oh far out! (Score:4, Insightful)
In fact, the low-cost "servers" you would get from Dell aren't much more than consumer-grade parts specifically configured to be run as servers. The cheapest ones come with IDE and Celerons / Pentium 4s.
When it comes to hardware, you should only buy what you need and enough redundancy to keep running through the installation of the next level of redundancy. Computers depreciate faster than any other expense you could have; they aren't drill presses or factory automation.
Simple economics: if two "servers" cost $1500 each, and you can get "PCs" for $750 each, you can either get twice as many boxes or save half the cost--which can help you move to better equipment as the budget allows.
Re:oh far out! (Score:2)
3 days for $1500, = $500 per day.
365 * $500 = $182,500 per year IT labor budget.
If the equivalent of one full-time six-figure person spends three MORE days configuring redundant desktops instead of redundant servers, either it's a horribly big network for that one person (if each replacement takes two hours--which is a lot--that's twelve replacements per year!) or they're incompetent.
Please don't tell that to my systems.... (Score:3, Informative)
Re:Please don't tell that to my systems.... (Score:2)
Re:Please don't tell that to my systems.... (Score:2)
A gigabit NIC alone won't help much on existing 100Mbit networks, but for another $180 or so you can get a 24-port 100Mbit switch with 2 gigabit ports for servers or additional switches, eliminating the bottlenecks from simultaneous users or from having to chain switches together over 100Mbit.
Re:Please don't tell that to my systems.... (Score:2)
Re:Please don't tell that to my systems.... (Score:3, Informative)
Their Linux driver support hasn't been too good for me, for any of their non-CPU products. Imagine setting up a file server on gigabit and getting 20KB/s when you try to upload. Struggled with that for a while. It was related to using the 2.6 kernel; 2.4 works fine, but the problem was Intel-specific. And on the desktop, with their integrated i845 video, using OpenGL will crash the system after a couple minutes. I'm
Re:Memory is memory (Score:2)