Linux Clustering Hardware?
Kanagawa asks: "The last few years have seen a slew of new Linux clustering and blade-server hardware solutions; they're being offered by the likes of HP, IBM, and smaller companies like Penguin Computing. We've been using the HP gear for a while with mixed results and have decided to re-evaluate other solutions. We can't help but notice that the Google gear in our co-lo appears to be off-the-shelf motherboards screwed to aluminum shelves. So, it's making us curious. What have Slashdot's famed readers found to be reliable and cost effective for clustering? Do you prefer blade server forms, white-box rack mount units, or high-end multi-CPU servers? And, most importantly, what do you look for when making a choice?"
Depends (Score:3, Insightful)
(a) Your cost budget
(b) Your work requirement: A search engine is different from a weather forecasting center.
(c) Cost of ownership, which includes maintenance, etc.
Re:Depends (Score:2, Insightful)
Re:Depends (Score:2)
Congratulations!
You get the honor of the second post AND mod points for insightful, and you didn't add any useful information.
The grandparent was correct (and insightful, for the smart reader).
There's no correct answer to a generic question like that.
Gates Will Not Be Happy (Score:1, Offtopic)
when all his three-CPU XBoxes become Linux clusters!
Bwahahahahaha!!!
Re:Gates Will Not Be Happy (Score:2)
Re:Gates Will Not Be Happy (Score:2)
And how was the grandparent post 'Offtopic'?
Re:Gates Will Not Be Happy (Score:2)
Re:Gates Will Not Be Happy (Score:2)
Excuse me, but how are they going to copy Linux code running on it, when Linux cluster code has been around for years?
If they could, they would have done it already.
And the increase in unit sales wouldn't be enough to pay for Gates's shoe shines. How many Linux buffs running clusters do you think there are?
As for free advertisement, it's far MORE of a free advertisement for how Linux can run on anything and turn it into something useful.
Dead Popes - sig? (Score:1)
Dual Opteron 1U rack units.... (Score:5, Interesting)
Re:Dual Opteron 1U rack units.... (Score:5, Informative)
Re:Dual Opteron 1U rack units.... (Score:3, Informative)
Here [dell.com] is a Dell PowerEdge 750 1U server with a P4 2.8GHz HyperThreaded, 256M and an 80G SATA (with room to add another drive and up to 4G of memory) for $499 shipped to your door. Yeah, I'm a Dell fanboy, but even if I weren't I would still see a pretty good price point in that box.
Note that this specific box is pretty low end and could use some upgrades, but it is a complete machine ready to run, e
Re:Dual Opteron 1U rack units.... (Score:2)
Re:Dual Opteron 1U rack units.... (Score:2)
Sounds nice, but to upgrade that 2.8 GHz even to 3.0 GHz, Dell rips you off another $299, and so on. A 60% price increase for a 7% CPU speed increase sounds similar to what airlines are doing with their pricing. So, not a company I would trust with my business.
Re:Dual Opteron 1U rack units.... (Score:2)
Getting the 3.0 is an option, but it's a dumb option - perhaps intended to influence the purchase decisions towards the 2.8, again to clear the remaining stock. I was comparing a specific box and configuration
Dealing with the heat (Score:2)
Here you go. (Score:2)
http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=80&page=4 [acmqueue.org]
And some suppliers of efficient servers.
http://www.transmeta.com/success/server.html [transmeta.com]
Having dealt with HP, I would be inclined to look at one of the other vendors.
Re:Dual Opteron 1U rack units.... (Score:3, Informative)
(not to mention that the dual-core Opterons actually consume less power than some of the early steppings of the single-core ones)
Re:Dual Opteron 1U rack units.... (Score:3, Informative)
Re:Dual Opteron 1U rack units.... (Score:2)
Ammasso (Score:4, Interesting)
Re:Ammasso (Score:2)
Re:Ammasso (Score:1)
Re:Ammasso (Score:2)
Check out Xserve (Score:5, Informative)
Probably the most interesting news lately for OS X in HPC is the inclusion of Xgrid with Tiger. Xgrid is a low-end job manager that comes built into Tiger Client. Tiger Server can then control up to 128 nodes in a folding@home style of job management. I've seen a lot of interest from customers in using this instead of tools like Sun Grid Engine for small clusters.
You can find some good technical info on running clustered code on OS X here [sourceforge.net].
The advantage of the Xserve is that it runs cooler and uses less power than either Itanium or Xeon, and it's usually better than Opteron, depending on the system. In my experience almost all C or Fortran code comes straight over from Linux and runs fine on OS X with minimal tweaking. The disadvantage is that you only have one choice: a dual-CPU 1U box - no blades, no 8-CPU boxes, just the one server model. So if your clustered app needs lots of CPU power, it might not be a good fit. For most sci-tech apps, though, it works fine.
If you're against OS X but still like the Xserve, Yellow Dog makes an HPC-specific [terrasoftsolutions.com] Linux distro for the Xserve.
Re:Check out Xserve (Score:2)
Re:Check out Xserve (Score:3, Informative)
Re:Check out Xserve (Score:2)
Next thing you'll see some post saying "I am a lawyer, actually."
Re:Check out Xserve (Score:2)
Re:Check out Xserve (Score:3, Informative)
Actually, the cheapest way to go would be to buy the 250GB ADM, apple part number M9356G/A, and pull out the 250GB drive and use it somewhere else (or just use that drive). Service parts tend to be pricey (like everyone else).
Re:Check out Xserve (Score:2)
Re:Check out Xserve (Score:3, Insightful)
Re:Check out Xserve (Score:2)
Re:Check out Xserve (Score:2)
So, fair enough with the marketing comment. I do technical pre-sales for Apple, so I've really touched and used this stuff. If you do have any techie questions, fire away.
Dual Core Opteron Blades (Score:5, Insightful)
- Price
- Software availability
- Power consumption
- Density
Brand depends on what your company is comfortable with. Some companies will want the backing of IBM, Sun or HP. Others will be quite satisfied with in-house built blades. These days it's quite easy to build your own blade; some motherboard makers (Tyan, for example) take care of almost all the components and complexity. But again, maybe the PHBs at your gig will run for the hills if you so much as mention the word "motherboard".
Re:Dual Core Opteron Blades (Score:2)
Re:Dual Core Opteron Blades (Score:3, Interesting)
Re:Dual Core Opteron Blades (Score:2)
Re:Dual Core Opteron Blades (Score:2)
In terms of using boards on open trays -- why not? As long as you don't have airflow problems
Re:Dual Core Opteron Blades (Score:2)
Pictures? (Score:1, Interesting)
Re:Pictures? (Score:3, Funny)
IBM zSeries (Score:1, Interesting)
If you need this, you need it bad.
Re:IBM zSeries (Score:2)
Conversely, if you don't need it (because you've already decided on a cluster), then you really don't need it.
Re:IBM zSeries (Score:2)
So what did you get for your $2 million? Two things: incredible reliability and amazing I/O bandwidth. You can fully saturate all 32 processors for weeks at a time if you wish, with uptime measured in *years*, 24x7. And that's gr
Re:IBM zSeries (Score:2)
Read the Google paper ! (Score:5, Insightful)
It's a great article; I strongly suggest you read it properly and do what they said they did - evaluate need against what's available.
well... (Score:5, Informative)
we're hoping that upgrading to OpenSSI 1.9 (which uses a 2.6 kernel instead of the 2.4 kernel in the current stable release) will show better disk performance... but... yeah.
It's not that the MegaRAID cards are bad... (Score:4, Insightful)
It's only designed to hook up with Dell disc arrays and tape drives and everything else can shove it (from their point of view).
Do yourself a favor and skip 'em and just buy the cards straight from LSI.
nes beowulf cluster (Score:2, Funny)
Re:nes beowulf cluster (Score:4, Funny)
Re:nes beowulf cluster (Score:2)
Total Overkill (Score:5, Funny)
That would be typical of a prima donna company like Google that's floating in cash from their IPO.
Around here, we don't waste money on fancy designer metals like aluminum. Salvaged wooden shipping pallets work just fine for us; they're free. And screws!? No need to waste resources on high-end fasteners when you can pick up surplus baling wire for less than a penny per foot. A couple of loops of wire and a few twists are all you need to assemble a working server.
The dotcom days are over. There's no reason to throw money around like there's no tomorrow.
No one size fits all answer but here is mine :) (Score:5, Informative)
I build Linux and Apple clusters for biotech, pharma and academic clients. I mention this because clusters designed for life-science work tend to have different architecture priorities than, say, clusters used for CFD or weather prediction.
I've used *many* different platforms to address different requirements, scale out plans and physical/environmental constraints.
The best whitebox vendor that I have used is Rackable Systems (http://www.rackable.com/ [rackable.com]). They truly understand cooling and airflow issues, have great 1U half-depth chassis that let you get near-blade density with inexpensive mass-market server mainboards, and they have great DC power distribution kit for larger deployments.
For general purpose 1U "pizza box" style rackmounts I tend to use the Sun V20z's when Opterons are called for but IBM and HP both have great dual-Xeon and dual-AMD 1U platforms. For me the Sun Opterons have tended to have the best price/performance numbers from a "big name" vendor.
Two years ago I was building tons of clusters out of Dell hardware. Now nobody I know is even considering Dell. For me they are no longer on the radar -- their endless pretend games with "considering" AMD-based solutions are getting tired, and until they start shipping some Opteron-based products they are not going to be a player of any significant merit.
The best blade systems I have seen are no longer made -- they were the systems from RLX.
What you need to understand about blade servers is that the biggest real savings you get with the added price comes from the reduction in administrative burden and ease of operation. The physical form factor and environmental savings are nice but often not as important as the operational/admin/IT savings.
Because of this, people evaluating blade systems should place a huge priority on the quality of the management, monitoring and provisioning software provided by the blade vendor. This is why RLX blades were better than any other vendor even big players like HP, IBM and Dell.
That said though, the quality of whitebox blade systems is usually pretty bad -- especially concerning how they handle cooling and airflow. I've seen one bad deployment where the blade rack needed 12-inch ducting brought into the base just to force enough cool air into the rack to keep the mainboards from tripping their emergency temp shutdown probes. If forced to choose a blade solution, I'd grade first on the quality of the management software and then on the quality of the vendor. I am very comfortable purchasing 1U rackmounts from whitebox vendors, but I'd probably not purchase a blade system from one. Interestingly enough, I just got a Penguin blade chassis installed and will be playing with it next week to see how it does.
If you don't have a datacenter, special air conditioning or a dedicated IT staff, then I highly recommend checking out Orion Multisystems. They sell 12-node desktop and 96-node deskside clusters that ship from the factory fully integrated, and best of all they run off a single 110V electrical circuit. They may not win on pure performance when going head to head against dedicated 1U servers, but Orion by far wins the prize for "most compute power you can squeeze out of a single electrical outlet..."
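Rough numbers, in case anyone wants to sanity-check the single-outlet point -- this little Python sketch uses my own guesses (a 15A/110V circuit derated to 80% for continuous load, ~60W per low-power node vs ~250W per 1U server), not Orion's published specs:

    # Back-of-envelope: how much compute fits on one standard 110V outlet.
    # All figures below are assumptions for illustration, not measurements.
    CIRCUIT_VOLTS = 110
    CIRCUIT_AMPS = 15
    DERATING = 0.8  # keep continuous draw at 80% of the breaker rating

    budget_watts = CIRCUIT_VOLTS * CIRCUIT_AMPS * DERATING

    for label, watts_per_node in [("low-power cluster node", 60),
                                  ("typical dual-CPU 1U server", 250)]:
        nodes = int(budget_watts // watts_per_node)
        print(f"{label:28s}: ~{nodes} boxes on one {CIRCUIT_AMPS}A/{CIRCUIT_VOLTS}V outlet"
              f" ({budget_watts:.0f}W budget)")

The exact wattages are debatable, but the point stands: when you don't have a datacenter, watts-per-outlet is the budget that matters.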
I've written a lot about clustering for bioinformatics and life science. All of my work can be seen online here: http://bioteam.net/dag/ [bioteam.net] -- apologies for the plug but I figure this is pretty darn on-topic.
-chris
Re:Thanks for the info (Score:2)
If I had mod points, this is definitely where I'd put one of them.
Re:No one size fits all answer but here is mine :) (Score:2)
Re:No one size fits all answer but here is mine :) (Score:3, Informative)
Depends on the specific code to meet your criteria of "twice as fast"...some apps will be more than twice as fast; some will be slightly faster, equal or in some cases slower.
For more general use cases (at least in my field) I can give a qualified answer of "dual Opteron currently represents the best price/performance ratio for small SMP (2-4 CPUs)".
I've also seen cheaper pricing from Sun than what you mentioned. You are right though in that there is a price difference between xeon vs opteron - whenever I
Re:No one size fits all answer but here is mine :) (Score:3, Informative)
Re:No one size fits all answer but here is mine :) (Score:3, Insightful)
Cheap isn't always the way to go (Score:3, Insightful)
Re:Cheap isn't always the way to go (Score:2)
Actually, the monitoring and swapout/repair of hardware is the same whether it's cheap hardware or expensive hardware. I've had a fair number of hardware failures on an IBM cluster. Once monitoring catches it and notifies me, I get the new hardware in place and use xCAT and Kickstart to push everything back out to the replaced node(s). The most tim
Re:Cheap isn't always the way to go (Score:2)
I've seen people say "Hey, I can make a cluster out of my 16 old P3-400MHz computers, it's cheap." You can, sure, but a single 3GHz dual-Xeon will both be faster and save you time, electricity, and space.
Cheap upfront may lead to far higher support costs.
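To put some (made-up but plausible) numbers on the electricity part, here's a quick Python back-of-envelope -- the per-box wattages and the $0.10/kWh rate are assumptions, not measurements:

    # Rough electricity comparison: 16 old P3-400 boxes vs one dual 3GHz Xeon 1U.
    KWH_PRICE = 0.10          # dollars per kWh, assumed
    HOURS_PER_YEAR = 24 * 365

    systems = [("16x P3-400 desktops", 16 * 120),   # ~120W per aging desktop, guessed
               ("1x dual 3GHz Xeon 1U", 400)]       # ~400W under load, guessed

    for name, watts in systems:
        yearly = watts / 1000.0 * HOURS_PER_YEAR * KWH_PRICE
        print(f"{name:22s} {watts:5d}W  -> ~${yearly:,.0f}/year in electricity")

After a year or two the "free" old hardware has cost you more in power alone than the new box.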
Re:Cheap isn't always the way to go (Score:2)
I would agree that in many cases for clusters, you want high-quality new computers. Why deal with a bunch of used P3-class computers when a cluster of AMD64 computers could get more done with 1/4 the units?
However, if someone wants an inexpensive computer to browse the internet and do email, I do recommend getting a used
Re:Cheap isn't always the way to go (Score:2)
It depends on 2 things, your application and how much software development you can do. Google trades off a lot of software development against the lower hardware costs, and their application is well suited to the piles of cheap stuff as they mostly just want lots of RAM. Other applications are not so suitable and some people are severely constrained in the amount of development time they have (and need the application to run now, not when a new redundant version has been tested in several months time).
I do
Hardware? (Score:1)
What are people's experiences with OpenSSI vs OpenMosix?
Obviously (Score:5, Funny)
Re:Obviously (Score:3, Funny)
HP, instead of calling its big boxes Superdomes, should have used Thunderdome!
Re:Obviously (Score:2)
Put two (or more) halfdomes together, and you get a Superdome? Makes sense to me.
Re:Obviously (Score:4, Funny)
Facts:
1. Blades are servers.
2. Blades fight ALL the time.
3. The purpose of the blade is to flip out and kill people.
Testimonial:
Blades can kill anyone they want! Blades cut off heads ALL the time and don't even think twice about it. These guys are so crazy and awesome that they flip out ALL the time. I heard that there was this blade who was eating at a diner. And when some dude dropped a spoon the blade killed the whole town. My friend Mark said that he saw a blade totally uppercut some kid just because the kid opened a window.
And that's what I call REAL Ultimate Power!!!!!!!!!!!!!!!!!!
Re:Obviously (Score:2)
It really depends on use. (Score:2)
We run a very large website. We have a 1U dual Athlon MP box for administration and log processing, and a 2U dual Xeon box with a 6-disk RAID 10 solution for Apache + MySQL. It's a great s
Mobos on Ikea shelves (Score:4, Interesting)
Not everyone can be Google (Score:3, Insightful)
However, it takes a special kind of person to manage that kind of hardware. You have to deal with a high rate of failure, you have to be extra careful to avoid static problems, and you've got to really think through how you're going to wire things.
On the other hand, if you get something like an IBM BladeCenter, you have a very similar solution that may cost a little more but is significantly more reliable. More importantly, blades are just as robust as a normal server. You don't have to worry about your PHB not grounding himself properly when he goes to look at what you've set up.
I expect blades are going to be the standard form factor in the future. It just makes sense to centralize all of the cooling, power, and IO devices.
To be honest, I think *everyone* will be Google (Score:2)
Anyway, the magic formula for the system to use is MIPS per Watt, or MFLOPS per Watt. The power requirements and heat produced by high-density computers are a real problem to deal with.
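If you want to play with the formula, here's a trivial Python sketch -- the MFLOPS and wattage figures are invented purely to show how the ranking works, not benchmarks of any real box:

    # Rank candidate systems by performance per watt (numbers are placeholders).
    candidates = {
        "pile of old desktops": {"mflops": 6400,  "watts": 1900},
        "dual-Xeon 1U":         {"mflops": 12000, "watts": 400},
        "low-power blade node": {"mflops": 4000,  "watts": 60},
    }

    for name, c in sorted(candidates.items(),
                          key=lambda kv: kv[1]["mflops"] / kv[1]["watts"],
                          reverse=True):
        print(f"{name:22s} {c['mflops'] / c['watts']:6.1f} MFLOPS/W")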
SunFire Servers (Score:5, Informative)
http://www.sun.com/servers/entry/v20z/index.jsp [sun.com]
http://www.sun.com/servers/entry/v40z/index.jsp [sun.com]
They're the entry-level servers from Sun, so they have great support. They're on the WHQL List, so Windows XP, 2003 Server and the forthcoming 64-bit versions all run fine.
They also run Linux quite well, and as if that wasn't enough, they all scream along with Solaris installed.
The v20z is a one- or two-way Opteron box in a 1RU case. The v40z is a two- or four-CPU box that is available with single- or dual-core Opterons.
Plus, they're one of the cheapest, if not the cheapest, Tier 1 Opteron servers on the market.
Re:SunFire Servers (Score:3, Informative)
Besides that, we also have several clusters of midsized towers from a local company,
Re:SunFire Servers (Score:2)
Great story re w/ build your own IBM cluster (Score:4, Interesting)
Clarification.. (Score:2, Funny)
It Depends... (Score:4, Informative)
Answering this... It Depends...
What is your cluster's tolerance to failure? If a node can fail, then you have the option of buying a lot of cheap hardware and replacing as necessary. This is the way that most big web farms work.
What are your cluster's machine requirements? Do you have heavy I/O? Does cache memory matter? Do you need a beefy FSB and 64G of RAM per node? You may find that spending $3000/node ends up being cheaper than buying three $1000 nodes, because the $3000 node is capable of processing more per unit time than the three $1000 units are.
What is your power/rack cost constraint? Google is an invalid comparison simply because of their size. They boomed when a lot of people were busting and co-los were hungry for business. I'd bet they got their space, power, and pipe for a song. You are not Google, and I doubt you have a similar deal. Thus, you may find that there is a middle ground where it is better to get a more powerful machine to use less rack space/power.
In the end, you have to optimize between these three variables. You'll probably find that the solution, for you specifically, is going to be unique. For example, you may find that: node failure is acceptable since the software will recover, power/rack costs are sufficiently high that you have to limit yourself, and CPU power with a good cache is crucial, but I/O isn't. This means getting a cheaper Athlon-based motherboard with so-so I/O and cheesy video is a good choice, since it frees your budget for a fast CPU. Combine with the cheapest power supply the systems can tolerate and PXE boot, and you have your ideal system.
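If it helps, here's a toy Python model of that trade-off -- one $3000 "fat" node vs three $1000 "thin" nodes, with purchase price amortized over three years plus assumed rack and power costs. Every number here is a placeholder, not a benchmark or a quote:

    RACK_COST_PER_U_YEAR = 300   # co-lo charge per U per year, assumed
    KWH_PRICE = 0.10             # dollars per kWh, assumed
    HOURS_PER_YEAR = 24 * 365
    AMORTIZE_YEARS = 3

    def yearly_cost(unit_price, units, watts_each, u_each):
        power = units * watts_each / 1000.0 * HOURS_PER_YEAR * KWH_PRICE
        rack = units * u_each * RACK_COST_PER_U_YEAR
        return unit_price * units / AMORTIZE_YEARS + power + rack

    # name: (unit price, units, relative throughput per unit, watts each, U each)
    options = {
        "1x $3000 fat node":   (3000, 1, 2.5, 350, 1),
        "3x $1000 thin nodes": (1000, 3, 1.0, 250, 1),
    }

    for name, (price, units, perf, watts, u) in options.items():
        cost = yearly_cost(price, units, watts, u)
        throughput = perf * units
        print(f"{name:22s} throughput={throughput:4.1f}  ~${cost:,.0f}/yr"
              f"  -> {throughput / cost * 1000:.2f} throughput per $1000/yr")

Swap in measured throughput numbers and your co-lo's real rates and the right answer usually falls out pretty quickly.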
Best of luck.
Classic problem: Fast, Cheap, Good. Pick Two. (Score:3, Interesting)
For sheer processor density, if you need complete servers, the IBM BladeCenter servers offer the most "Bang" (Fast), and they are fairly reliable and compact (Good). They are not cheap. They do have better density than the HP Blades. WETA Digital (Peter Jackson's FX company) uses them.
That will get you two server processors, two server-class IDE drives, two GigE ports, and all peripherals (power, KVM, CD, management, GigE switches, SAN switches if you want, etc.) per one-half of a rack unit. This is well over twice the density of pizza-box units when you count external peripherals like the networking switch, KVM, etc.
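To make the density claim concrete, here's a rough Python calculation for a 42U rack. The 14-blades-per-7U chassis figure matches the half-U-per-server math above; the ~4U I knock off the pizza-box rack for switches, KVM and PDUs is my own guess:

    RACK_U = 42

    # BladeCenter: 14 dual-CPU blades per 7U chassis, switches/KVM integrated.
    chassis_per_rack = RACK_U // 7
    blade_servers = chassis_per_rack * 14

    # 1U pizza boxes: assume ~4U per rack lost to GigE switches, KVM, PDUs, etc.
    pizza_servers = RACK_U - 4

    print(f"Blade servers per 42U rack: {blade_servers}")
    print(f"1U servers per 42U rack:    {pizza_servers}")
    print(f"Density ratio:              {blade_servers / pizza_servers:.1f}x")

In practice you'll probably hit power and cooling limits before you fill either rack, but the ratio is about right.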
Google's setup is Fast and Cheap, but their hardware reliability is quite lousy. However, their clustering setup is specifically designed around expected hardware failure.
(As a side note, Google no longer uses bare boards for their basic nodes. They use fairly small and slow nodes with a LOT of RAM from some company I can't remember. They look kind of like over-sized hard drives.)
If you need crap-loads of raw computing power in a relatively compact, power-efficient chassis (1024 processors/rack), IBM's Blue Gene simply cannot be beat. This is Capital-F Fast and Capital-G Good, but you certainly can't afford one. (While it provides more cycles per watt and dollar than any other setup, it isn't exactly as simple as a Beowulf cluster.) And you would still need to buy pesky things like large GigE switches and storage. Check out the current issue of the IBM Journal of Research and Development on IBM's website (or your local university library) for all sorts of juicy details.
[Yes, I am an IBM shill]
So realistically, you really need to look at your application. If it can tolerate failure of any individual node on a regular basis, get the cheapest stuff you can find that will fit in your space and CPU requirements. If node reliability is important, but space is not, 1U servers from any of the three major vendors (or Apple, if that is your thing) will do the job just fine. If you need reliability and space, then honestly IBM's BladeCenter boxen are the best, as long as they fit your application. (I am not just speaking as an IBM'er here... they really are the best blades out there.)
SirWired
Good Thread (Score:2)
Cluster + InfiniBand = Big NUMA (Score:2)
These guys make a hypervisor that sits on top of InfiniBand between multiple blades and turns them into one big shared-memory Linux NUMA system, up to around 32 CPUs and probably even bigger eventually. Plus they can dynamically move CPUs and memory in and out of each virtual machine, split the group into multiple smaller virtual machines, etc.
I have no connection to them, just saw them at one of the east coast linux tradeshows. I think their hypervisor will eventually be superced
I highly recommend Penguin Computing (Score:3, Informative)
You may find this interesting (Score:2)
Re:Personally (Score:1)
Re:Personally (Score:1)
Re:XServe (Score:5, Interesting)
Let me know when they stop trying to force their iPod updater (you know, the one that breaks Real's compatibility DRM software) onto my servers. No matter how many times you put that update in the "Never update this" category, it shows back up the next time you run Software Update. Until they stop trying to play childish games on my production servers, I'll not consider them ready for the enterprise.
Re:XServe (Score:4, Interesting)
Re:XServe (Score:2)
So, I'm not sure why all the hate. The software update server is bundled with Tiger Server, which you can get for as little as $500.
Re:XServe (Score:2)
And if they then ignore the customer's choice, they aren't ready for the enterprise either.
But, for the record, you can uncheck updates you don't want in YOU just fine without Red Carpet.
Re:XServe (Score:2)
Re:XServe (Score:4, Interesting)
Why would a system configured as a fileserver have that software on it to begin with? Is Apple's apt tool so bad that it tries to patch software that hasn't been installed?
Re:XServe (Score:2)
Most likely this is some idiotic setting in the default update settings (although it takes some digging to find it).
Re:XServe (Score:2)
And to answer Guido's question, it's not a matter of fixing dependencies for iTunes - on desktop systems [which do have iTunes installed] you can upgrade iTunes just fine if you tell it not to install the iPod Updater, but it will keep trying to install it again and again and again.
Re:XServe (Score:2)
For example, when you update iTunes, you must accept retroactive licensing changes to all of the music that you have purchased.
Re:XServe (Score:2)
Re:XServe (Stay clear) (Score:2)
Not recommended.
Re:We do nice house built clusters (Score:2, Insightful)
Re:We do nice house built clusters (Score:2)
Guess it says something about legacy BIOS if you really need to have a video system in a machine that will never have a monitor/keyboard attached in normal operation. The other detail that really irritates when it comes to off-the-shelf PC equipment: I can't count how many times I've seen a BIOS POST screen sitting there saying 'keyboard error, keyboard not attached - hit F6 to