Linux Clustering Hardware?
Kanagawa asks: "The last few years have seen a slew of new Linux clustering and blade-server hardware solutions; they're being offered by the likes of HP, IBM, and smaller companies like Penguin Computing. We've been using the HP gear for a while with mixed results and have decided to re-evaluate other solutions. We can't help but notice that the Google gear in our co-lo appears to be off-the-shelf motherboards screwed to aluminum shelves. So, it's got us curious. What have Slashdot's famed readers found to be reliable and cost-effective for clustering? Do you prefer blade-server form factors, white-box rack-mount units, or high-end multi-CPU servers? And, most importantly, what do you look for when making a choice?"
Depends (Score:3, Insightful)
(a) Your cost budget
(b) Your work requirement: a search engine is different from a weather forecast center.
(c) Cost of ownership, which includes maintenance, etc.
Dual Core Opteron Blades (Score:5, Insightful)
- Price
- Software availability
- Power consumption
- Density
Which brand depends on what your company is comfortable with. Some companies want the backing of IBM, Sun, or HP; others will be quite satisfied with in-house-built blades. These days it's quite easy to build your own blade: some motherboard makers (Tyan, for example) take care of almost all of the components and complexity. But then again, maybe the PHBs at your gig will run for the hills if you so much as mention the word "motherboard."
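To make those criteria a bit more concrete, here's a rough back-of-the-envelope sketch in Python for weighing price, power, and density against each other. Every number in it is a made-up placeholder; plug in your actual quotes and your co-lo's real power and rack-space rates.

# Rough 3-year cost-per-node comparison. All figures below are
# hypothetical placeholders -- substitute real vendor quotes and your
# colo's actual power and rack-space pricing.

KWH_PRICE = 0.10          # $/kWh, hypothetical
RACK_UNIT_PRICE = 20.0    # $ per U per month, hypothetical
HOURS_3YR = 3 * 365 * 24  # hours in three years

def three_year_cost(purchase, watts, rack_units):
    """Purchase price plus power plus rack space over three years."""
    power = watts / 1000.0 * HOURS_3YR * KWH_PRICE
    space = rack_units * RACK_UNIT_PRICE * 36  # 36 months
    return purchase + power + space

# Hypothetical candidates: a 1U white-box dual Opteron vs. a blade slot
# (the blade price includes its share of the chassis).
options = {
    "1U white box": three_year_cost(purchase=2500, watts=350, rack_units=1.0),
    "blade slot":   three_year_cost(purchase=4000, watts=250, rack_units=0.7),
}

for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print("%s: $%.0f over 3 years" % (name, cost))

Software availability and the PHB-comfort factor don't fit in a formula, but at least the power and density part of the decision stops being hand-waving.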
Read the Google paper! (Score:5, Insightful)
It's a great article; I strongly suggest you read it properly and do what they said they did: evaluate your needs against what's available.
Cheap isn't always the way to go (Score:3, Insightful)
Not everyone can be Google (Score:3, Insightful)
However, it takes a special kind of team to manage that sort of hardware. You have to deal with a high failure rate, you have to be extra careful to avoid static problems, and you've got to really think through how you're going to wire everything.
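If you do go the off-the-shelf route, automate the babysitting. Here's a minimal liveness-sweep sketch in Python; the hostnames are hypothetical, so swap in your own node list:

#!/usr/bin/env python3
# Minimal liveness sweep for a commodity cluster. The node names are
# hypothetical; replace them with your own list or pull it from DNS.
import subprocess

NODES = ["node%03d" % i for i in range(1, 81)]   # node001 .. node080

def alive(host):
    """One ICMP ping with a short timeout; True if the node answers."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

dead = [host for host in NODES if not alive(host)]
print("%d/%d nodes down:" % (len(dead), len(NODES)), ", ".join(dead) or "none")

Run it from cron and you at least know which shelves need a visit before the users do.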
On the other hand, if you get something like an IBM BladeCenter, you have a very similar solution that may cost a little more but is significantly more reliable. More importantly, blades are just as robust as a normal server: you don't have to worry about your PHB not grounding himself properly when he goes to look at what you've set up.
I expect blades are going to be the standard form factor in the future. It just makes sense to centralize all of the cooling, power, and IO devices.
It's not that the MegaRAID cards are bad... (Score:4, Insightful)
They're only designed to hook up to Dell disk arrays and tape drives; everything else can shove it (from their point of view).
Do yourself a favor: skip 'em and just buy the cards straight from LSI.
Re:No one size fits all answer but here is mine :) (Score:3, Insightful)
Re:Depends (Score:2, Insightful)
Re:We do nice house built clusters (Score:2, Insightful)
Re:Check out Xserve (Score:3, Insightful)
Re:XServe (Score:1, Insightful)
Let us compare, shall we?
Sun, HP, and IBM are all perfectly willing to commit to response times under 4 hours. Not 4 business hours, not 365.24 days per year except... 365.24 days per year, period.
Does Apple do that? No.
Let's take sysadmin certification. Is it like Sun's, HP's, and IBM's AIX certs, which stay good for the OS version you take them on? Or is it like Microsoft's, where it expires? Is it one test or set of tests... or is it à la carte? Hmmm, it's like Microsoft's.
Nope, Apple isn't serious about the enterprise, at least not where Unix is concerned.
By the way, for the original poster: Here's a nickel, sonny; get yourself a real computer. http://www.sun.com/servers/highend/