Linux Clustering Hardware?
Kanagawa asks: "The last few years have seen a slew of new Linux clustering and blade-server hardware solutions; they're being offered by the likes of HP, IBM, and smaller companies like Penguin Computing. We've been using the HP gear for a while with mixed results and have decided to re-evaluate other solutions. We can't help but notice that the Google gear in our co-lo appears to be off-the-shelf motherboards screwed to aluminum shelves. So, it's making us curious. What have Slashdot's famed readers found to be reliable and cost-effective for clustering? Do you prefer blade-server form factors, white-box rackmount units, or high-end multi-CPU servers? And, most importantly, what do you look for when making a choice?"
Check out Xserve (Score:5, Informative)
Probably the most interesting recent news for OS X in HPC is the inclusion of Xgrid with Tiger. Xgrid is a low-end job manager that comes built into Tiger Client. Tiger Server can then control up to 128 nodes in a Folding@home-style job-management fashion. I've seen a lot of interest from customers in using it instead of tools like Sun Grid Engine for small clusters.
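For a feel of what driving Xgrid looks like, here's a minimal sketch that wraps the xgrid command-line client shipped with Tiger from Python. The controller hostname, password, and job command are placeholder assumptions, and the exact flags are from memory and may differ on your release.

import subprocess

# Placeholder controller details -- assumptions for illustration,
# substitute your own controller and password.
CONTROLLER = "xgrid-controller.example.com"
PASSWORD = "secret"

def submit(cmd, *args):
    """Submit a job to the Xgrid controller; the output includes
    the job identifier assigned by the controller."""
    return subprocess.run(
        ["xgrid", "-h", CONTROLLER, "-p", PASSWORD,
         "-job", "submit", cmd, *args],
        capture_output=True, text=True, check=True,
    ).stdout

def results(job_id):
    """Fetch the stdout/stderr of a finished job by its identifier."""
    return subprocess.run(
        ["xgrid", "-h", CONTROLLER, "-p", PASSWORD,
         "-job", "results", "-id", str(job_id)],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    # Trivial demo job: run cal(1) on whatever agent picks it up.
    print(submit("/usr/bin/cal", "2005"))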
You can find some good technical info on running clustered code on OS X here [sourceforge.net].
The advantage of the Xserve is that it runs cooler and uses less power than either Itanium or Xeon, and it's usually better than Opteron, depending on the system. In my experience almost all C or Fortran code ports straight over from Linux to OS X with minimal tweaking. The disadvantage is that you have only one choice: a dual-CPU 1U box. No blades, no 8-CPU boxes, just the one server model. So if your clustered app needs lots of CPUs in a single box, it might not be a good fit. For most sci-tech apps, though, it works fine.
If you're against OS X but still like the Xserve, Yellow Dog makes an HPC-specific [terrasoftsolutions.com] Linux distro for the Xserve.
well... (Score:5, Informative)
We're hoping that upgrading to OpenSSI 1.9 (which uses the 2.6 kernel instead of the 2.4 kernel in the current stable release) will show better disk performance... but... yeah.
Re:Dual Opteron 1U rack units.... (Score:3, Informative)
(Not to mention that the dual-core Opterons actually consume less power than some of the early steppings of the single-core ones.)
No one size fits all answer but here is mine :) (Score:5, Informative)
I build Linux and Apple clusters for biotech, pharma, and academic clients. I mention this because clusters designed for life-science work tend to have different architecture priorities than, say, clusters used for CFD or weather prediction.
I've used *many* different platforms to address different requirements, scale-out plans, and physical/environmental constraints.
The best whitebox vendor I have used is Rackable Systems (http://www.rackable.com/ [rackable.com]). They truly understand cooling and airflow issues, have excellent 1U half-depth chassis that let you get near-blade density with inexpensive mass-market server mainboards, and they have great DC power-distribution kit for larger deployments.
For general-purpose 1U "pizza box" rackmounts I tend to use the Sun V20z when Opterons are called for, but IBM and HP both have great dual-Xeon and dual-AMD 1U platforms. For me the Sun Opterons have tended to have the best price/performance numbers from a "big name" vendor.
Two years ago I was building tons of clusters out of Dell hardware. Now nobody I know is even considering Dell; they are no longer on my radar. Their endless pretend games of "considering" AMD-based solutions are getting tired, and until they start shipping some Opteron-based products they're not going to be a player of any significant merit.
The best blade systems I have seen are no longer made -- they were the systems from RLX.
What you need to understand about blade servers is that the biggest real savings you get for the added price come from the reduction in administrative burden and ease of operation. The physical form-factor and environmental savings are nice but often not as important as the operational/admin/IT savings.
Because of this, people evaluating blade systems should place a huge priority on the quality of the management, monitoring, and provisioning software provided by the blade vendor. This is why RLX blades were better than those of any other vendor, even big players like HP, IBM, and Dell.
That said, the quality of whitebox blade systems is usually pretty bad, especially in how they handle cooling and airflow. I've seen one bad deployment where the blade rack needed 12-inch ducting brought into the base just to force enough cool air into the rack to keep the mainboards from tripping their emergency temperature-shutdown probes. If forced to choose a blade solution, I'd grade first on the quality of the management software and then on the quality of the vendor. I am very comfortable purchasing 1U rackmounts from whitebox vendors, but I'd probably not purchase a blade system from one. Interestingly enough, I just got a Penguin blade chassis installed and will be playing with it next week to see how it does.
If you don't have a datacenter, special air conditioning, or a dedicated IT staff, then I highly recommend checking out Orion Multisystems. They sell 12-node desktop and 96-node deskside clusters that ship from the factory fully integrated, and best of all they run off a single 110V outlet. They may not win on pure performance head to head against dedicated 1U servers, but Orion by far wins the prize for "most compute power you can squeeze out of a single electrical outlet."
I've written a lot about clustering for bioinformatics and life science. All of my work can be seen online here: http://bioteam.net/dag/ [bioteam.net] -- apologies for the plug but I figure this is pretty darn on-topic.
-chris
Re:Check out Xserve (Score:3, Informative)
Actually, the cheapest way to go would be to buy the 250GB ADM (Apple Drive Module), Apple part number M9356G/A, pull out the 250GB drive, and use it somewhere else (or just use that drive). Service parts tend to be pricey (like everyone else's).
Re:No one size fits all answer but here is mine :) (Score:3, Informative)
Whether something meets your criterion of "twice as fast" depends on the specific code: some apps will be more than twice as fast; some will be only slightly faster, equal, or in some cases slower.
For more general use cases (at least in my field) I can give a qualified answer of "dual Opteron currently represents the best price/performance ratio for small SMP (2-4 CPUs)".
I've also seen cheaper pricing from Sun than what you mentioned. You're right, though, that there is a price difference between Xeon and Opteron; whenever I consider a more expensive alternative, I try to have fresh application benchmark data handy to back up the justification.
-Chris
SunFire Servers (Score:5, Informative)
http://www.sun.com/servers/entry/v20z/index.jsp [sun.com]
http://www.sun.com/servers/entry/v40z/index.jsp [sun.com]
They're the entry-level servers from Sun, so they have great support. They're on the WHQL list, so Windows XP, Windows Server 2003, and the forthcoming 64-bit versions all run fine.
They also run Linux quite well, and as if that wasn't enough, they all scream along with Solaris installed.
The V20z is a one- or two-way Opteron box in a 1RU case; the V40z is a two- or four-CPU box that is available with single- or dual-core Opterons.
Plus, they're one of the cheapest, if not the cheapest, Tier 1 Opteron servers on the market.
Re:Cheap isnt always the way to go (Score:1, Informative)
Short argument: Google chose the cheapest way; they're happy with it and don't plan to change their minds.
Long argument: what you don't realize is that cheap hardware is so DAMN cheap that you CAN afford to employ the manpower required to maintain your cluster. In other words: instead of spending a lot of money on expensive hardware, buy the cheapest gear (where the price/performance ratio is optimal), and then spend a fraction of the money you saved to employ technicians who will maintain/replace defective hardware. Moreover, with such cheap hardware you don't even care about warranty or support; you could trash a dead box directly without even bothering to contact the manufacturer for replacement parts! Nowadays $300 is all you need to get a mobo + AMD Sempron 2800 + 512 MB RAM + 80 GB hard disk + 100 Mbps NIC + PSU. So why spend $3000 on a dual-Opteron server when you could buy 10, TEN, fscking white boxes for the same price? The only reason would be that the software cannot be parallelized, or that you have constraints on the space occupied by your clusters (that's why I said "in *most* cases cheap is the way to go"). But even in those cases you could perhaps spend the money to rent more racks/space and still come out ahead buying white boxes.
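To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python. The prices come from the comment above, but the relative-performance factor is a made-up assumption, not benchmark data.

# Back-of-the-envelope: one "big" server vs. many white boxes.
# Prices from the comment above; the performance factor is invented.

BIG_SERVER_COST = 3000        # dual-Opteron 1U server
WHITE_BOX_COST = 300          # Sempron mobo + RAM + disk + NIC + PSU
BIG_SERVER_RELATIVE_PERF = 3  # assume one big box ~= 3 white boxes

budget = 3000
white_boxes = budget // WHITE_BOX_COST  # 10 boxes for the same money

print(f"{white_boxes} white boxes ~= {white_boxes}x one white box")
print(f"1 big server ~= {BIG_SERVER_RELATIVE_PERF}x one white box")

# The catch: the white-box advantage only materializes if the workload
# parallelizes well and you have the rack space and power for 10 nodes.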
Re:SunFire Servers (Score:3, Informative)
Besides that, we also have several clusters of mid-sized towers from a local company, which have the benefit of being reusable as desktop machines when the cluster gets replaced.
It Depends... (Score:4, Informative)
Answering this... It Depends...
What is your cluster's tolerance for failure? If a node can fail, then you have the option of buying a lot of cheap hardware and replacing it as necessary. This is the way most big web farms work.
What are your per-node machine requirements? Do you have heavy I/O? Does cache memory matter? Do you need a beefy FSB and 64GB of RAM per node? You may find that spending $3000/node ends up being cheaper than buying three $1000 nodes, because the $3000 node can process more per unit time than the three $1000 units combined.
What is your power/rack cost constraint? Google is an invalid comparison simply because of their size. They boomed when a lot of other companies were busting and co-los were hungry for business; I'd bet they got their space, power, and pipe for a song. You are not Google, and I doubt you have a similar deal. Thus, you may find there is a middle ground where it is better to get a more powerful machine and use less rack space and power.
In the end, you have to optimize across these three variables, and you'll probably find that the solution, for you specifically, is unique. For example, you may find that node failure is acceptable since the software will recover, that power/rack costs are high enough that you have to limit yourself, and that CPU power with a good cache is crucial but I/O isn't. In that case a cheaper Athlon-based motherboard with so-so I/O and cheesy video is a good choice, since it frees your budget for a fast CPU. Combine that with the cheapest power supply the systems can tolerate and PXE boot, and you have your ideal system.
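As a sketch of that optimization, the snippet below ranks candidate nodes by throughput per dollar once power and rack space are folded into each node's lifetime cost. Every figure is an invented placeholder, not a measurement.

# Rank candidate cluster nodes by throughput per total cost of ownership.
# All figures are invented placeholders to illustrate the trade-off.

NODES = [
    # (name, purchase $, watts, rack units, relative throughput)
    ("cheap white box", 1000, 200, 1, 1.0),
    ("big dual-CPU box", 3000, 350, 1, 3.5),
]

POWER_COST_PER_WATT = 4.0  # $ per watt over the machine's lifetime
RACK_COST_PER_U = 500.0    # $ per rack unit over the same period

def total_cost(price, watts, units):
    """Lifetime cost: purchase price plus power and rack charges."""
    return price + watts * POWER_COST_PER_WATT + units * RACK_COST_PER_U

for name, price, watts, units, perf in NODES:
    cost = total_cost(price, watts, units)
    print(f"{name}: {perf / cost:.5f} throughput per dollar")

# With these numbers the $3000 box wins; flip the power and rack
# constants and the cheap box wins instead -- which is the point:
# the answer is specific to your facility and workload.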
Best of luck.
Re:Dual Opteron 1U rack units.... (Score:3, Informative)
Here [dell.com] is a Dell PowerEdge 750 1U server with a Hyper-Threaded 2.8GHz P4, 256MB of RAM, and an 80GB SATA drive (with room to add another drive and up to 4GB of memory) for $499 shipped to your door. Yeah, I'm a Dell fanboy, but even if I weren't I would still see a pretty good price point in that box.
Note that this specific box is pretty low-end and could use some upgrades, but it is a complete machine ready to run, especially if you want to go cheap.