How Well Does Windows Cluster?
cascadefx asks: "I work for a mid-sized midwestern university. One of our departments has started up a small Beowulf cluster research project that it hopes to grow over time. At the moment, the thing is incredibly weak... but it is running on old hardware and is basically used for dog and pony shows to get more funding and, hopefully, donations of higher-end systems. It runs Linux and works; it is just not anything to write home about. Here's the problem: my understanding is that an MS rep asked what it would take to get them to switch to a Microsoft cluster. Is this possible? Are there MS clusters that do what Beowulf clusters are capable of? I thought MS clusters were for load balancing, not computation... which is the hoped-for goal of this project. Can the Slashdot crowd offer some advice? If there are MS clusters, comparisons of the capabilities would be welcome." One only has to go as far as Microsoft's site to see its current attempt at clustering, but what is the real story? Have any of you had a chance to pit a Linux Beowulf cluster against one from Microsoft? How did they compare?
Licensing (Score:3, Informative)
Software costs for 100 Linux machines are close to nil.
Software costs for 100 Windows machines probably won't be.
Granted, I haven't read the licensing on the MS Clustering link, but if it's like anything else, you'll need a license of some kind on every machine.
Re:Licensing (Score:5, Informative)
From reading the MS site, it looks like licensing is based on the EULA of the software being used, so if you are using Win2K Pro you have to have a copy of Win2K Pro for each machine, etc.
Re:Licensing (Score:5, Funny)
the TCO!!!!!!!!!!!!!
you know how expensive a CS student is!!!! oh my god, how can they afford the astronomical cost of having 5 or 6 of them on one project.
don't you know that if you move to windows for all your research project clustering needs, you only need a chimp... and since educating a chimp is much cheaper than educating 6 bright young men, your university will save a considerable amount of money... especially when you lay off all those expensive profs and hire an animal trainer.
Re:Licensing (Score:4, Funny)
You have to pay someone to clean the cage, and that person alone is going to get paid more than any CS student, probably more than 2 of them.
Never mind that chimps demand a much higher standard of living than students do.
-Steve
Re:Licensing (Score:5, Funny)
Re:Licensing (Score:4, Informative)
In the end, it all comes down to your software: if you develop a highly scalable, almost share-nothing algorithm, Linux clustering is the way to go. For fail-over on Linux you have the HA Linux project; once more, open source!
Re:Licensing/Reliability (Score:5, Interesting)
I can only hope MS's poor performance will make them switch.
That's easy (Score:3, Funny)
Re:That's easy/ Wish I could. (Score:3, Interesting)
even easie: (Score:3, Troll)
:)
hawk
Re:Licensing (Score:3, Funny)
Why is a hammer called "an American screwdriver"? (Score:3, Insightful)
Actually, it does loosen the screw. The rellies on the farm use a hammer quite effectively as a screwdriver (both ways) and spanner where appropriate. It just doesn't do the screw in question much good...
I've used a hammer myself to gut a dead hard drive for the magnets, when I didn't have a small enough star driver. I just flattened the top of the bolt out to tinfoil thickness and pulled it straight through the metal cover. The technique with screws is different: some light taps can loosen them in their substrate (typically wood or sheet metal) enough to winkle out by hand. Using some CRC/WD-40 often helps as well.
Re:Licensing (Score:5, Insightful)
> BSODs down to near nil and others can't but it is always the OS's fault. Hmmmm.
I find this interesting because I have seen it too.
In my experience a well run Windows system by a person with real clue can last a while and be pretty blue screen free. The same is true for a system run by an idiot who got it all installed right and hardly does anything with the box, just plays some specific game or uses Word or something.
However, when you start installing software and doing different things, they can get real flaky real fast. Not just in reliability either... users shit all over the box!
I saw someone turn on their computer... it came up... and the desktop was just littered with icons... full. They never manage their stuff; they just keep all that crap that every little software package installs... Is it just me, or are companies that make Windows software extremely arrogant? I'd say MAYBE 1% of the software I use is something important enough that I want a special icon for it on my desktop... but every piece of Windows software seems to think it's that special.
That's my little rant... the unmanageability is why I don't use it. I installed Debian GNU/Linux on this box 2.5 years ago, have installed and uninstalled software over and over... and it never gets unstable.
In a cluster, where software isn't being installed and uninstalled, Windows will probably be just fine. Though frankly, I'd rather have a bunch of Unix boxen with tools like cfengine to manage such things.
-Steve
Re:Licensing (Score:5, Insightful)
Every man-hour spent reading, auditing, and managing licenses is a man-hour that is not applied to real work (he says, posting to /. from his desk at work ;-). Every hour the compute nodes sit idle while licensing is sorted out is a 4.17% performance hit for that day.
All those licenses cost money, which means fewer CPUs. If a compute node costs $400 and licensing is $100/node, each node costs 25% more, so a fixed budget buys 20% fewer nodes. This is indistinguishable from a free OS that has a 20% performance flaw.
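The budget arithmetic can be checked with a quick script. The $400 node and $100 license figures come from this comment; the overall budget is a hypothetical round number:

```shell
#!/bin/sh
# Back-of-envelope check of the licensing argument above.
BUDGET=10000      # hypothetical fixed hardware budget
NODE=400          # cost of one compute node (figure from the comment)
LICENSE=100       # per-node OS license (figure from the comment)

free_nodes=$((BUDGET / NODE))               # nodes affordable on a free OS
paid_nodes=$((BUDGET / (NODE + LICENSE)))   # nodes affordable once licensed
pct_fewer=$(( (free_nodes - paid_nodes) * 100 / free_nodes ))

echo "free OS:     $free_nodes nodes"
echo "licensed OS: $paid_nodes nodes (${pct_fewer}% fewer)"
```

For these figures: 25 nodes on the free OS versus 20 licensed nodes, i.e. 20% fewer.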
Then there's risk. The software mafia aren't going to audit a Linux cluster, sapping administrative time, and perhaps cease-and-desisting it offline. Linux cluster admins are never going to go to jail because they threw another machine online for the hell of it. Linus Torvalds will never sue a Linux cluster operator into oblivion to make an example of them. These are all possibilities with a proprietary product, and all-too-likely with a notorious lawyer-pit like Microsoft.
Fault tolerance (Score:2, Insightful)
Anyhow, imagine how much you'd be paying in software licensing for a large cluster. For a university project, this just doesn't seem to make sense.
Capt. Obvious. (Score:4, Funny)
Ah, so this is a typical Ask Slashdot then?
--saint
money, for one thing (Score:3, Insightful)
So unless they're willing to give you their OS for free, why would you even consider it? Suddenly your supercomputer cluster would cost like a real supercomputer... then you could have just bought a real supercomputer!
Re:money, for one thing (Score:3, Interesting)
Point? (Score:5, Interesting)
1. Not having to buy a license for each machine
2. Having an infinitely configurable system (meaning that you can load as much or as little of the OS and libraries as you want/need)
3. The use of high quality, low/no cost development tools.
It seems to me as though running a cluster on 2k would possibly be easier (point and click) but less efficient.
Re:Point? (Score:5, Interesting)
Here's the problem: my understanding is that an MS rep asked what it would take to get them to switch to a Microsoft cluster. Is this possible?
What would it take to switch? Well, you go to the rep and ask for X P4 1.8 GHz desktops, X licenses of Win2K, and Y trips to the Microsoft Clustering Training Junket in sunny Bermuda.
What does MS get in return? A department of trained CS students and professors who, when someone asks about distributed computations, will respond "Microsoft" instead of "Linux". And when those students enter the real world and the PHB (who wants MS anyway) asks about clustering the answer will be "Microsoft".
Remember, Linux earns mindshare, but Microsoft buys it... and it is almost always easier to buy someone's loyalty than to earn it.
Re:Point? (Score:5, Informative)
"Q. How does a Windows-based supercluster compare with one running UNIX or Linux?
A. In short, there's very little substantive difference, but owners of existing UNIX-based solutions will face changes that will cause them some work and discomfort (less for users than for their current administrators and support staff). These are offset in part by lower costs of ownership (technical skills required), breadth of applications and support tools, vendor support options, and commonality with the constantly improving desktop environment.
From a hardware perspective, there's very little difference seen by the application. In the past, UNIX-based hardware environments had better floating-point performance, but that's been offset in the last few years by Moore's Law curves for large-volume products that have advanced faster than specialty products have, as well as the price and support cost differentials between these vendors' products.
From a software perspective, Windows is a markedly different environment, designed with priorities set by a much different market segment than traditional science and engineering. Windows NT® and now Windows 2000 were designed to meet the needs of those ISVs building products for businesses that are unable or unwilling to dedicate their best people to support their infrastructure (versus focusing on building solutions for their business mission), as well as the needs of a hardware community that required continuous integration of new devices and components."
There's a name for that... (Score:3, Funny)
I believe the military have a term for this.
It's called a Cluster F**K.
Take a look at (Score:5, Informative)
Re:Take a look at (Score:5, Funny)
I believe you're correct... (Score:3, Interesting)
New Microsoft Product!! (Score:3, Funny)
savvy, their new Cluster Product will be called:
The Cluster Bomb!
department title said it all... (Score:4, Funny)
too late
Re:department title said it all... (Score:2, Insightful)
so what do you call it? (Score:5, Funny)
:)
hawk
Nope (Score:3, Funny)
Re:so what do you call it? (Score:4, Funny)
Didn't make sense to me, but the sys admins certainly were adamant
Re:so what do you call it? (Score:3, Funny)
Except for the spelling error, that'd just about sum it up, eh?
Here's the deal: (Score:5, Informative)
Simply put, it works well (though cost is often an issue, given the price of enterprise hardware), but it is not the same clustering you see with the Unices. E-mail me at my account if you have more specific questions.
My intent is not to start or participate in a flame war, but the term clustering simply implies different things on different OSes.
mod up the parent (Score:3, Insightful)
There are plenty of resources on the net that provide specific details about building clusters and how to optimize their performance. Don't forget that applications need to be rewritten to make them friendly to distributed/parallel processing.
Re:Here's the deal: (Score:5, Insightful)
Except your post is factually incorrect. MSCS is a POS -- to say it works "well" is true if you mean "well... it works.... kinda."
It basically just enables multi-initiator support for SCSI chains (so a chain can be connected to 2+ hosts), allows more memory for large applications (if the application is written correctly to use it) and (this is the main feature) allows services to fail-over from one host to the other.
This is where MSCS should be good, but it just isn't. Basically, imagine you have 2 NT servers. A is running services, and B isn't running any services except the basics. Do a NET STOP on all the services on A, wait for it to completely finish, and then, and only then, do a NET START on those same services on B. Visualize in your mind how long that would take, and then double it. If anything goes wrong, like a service won't stop (imagine that) or a service can't start due to a dependency, it throws a monkey wrench into the whole works.
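A toy sketch of that serialized stop-then-start sequence (the service names are placeholders, and the echoes stand in for blocking NET STOP/NET START calls):

```shell
#!/bin/sh
# Toy model only: every service on node A must stop, in order, before
# any service may start on node B. One hung stop stalls the whole failover.
SERVICES="database transaction-coordinator file-share"

failover() {
    for s in $SERVICES; do
        echo "A: stopping $s"     # stands in for a blocking NET STOP on A
    done
    for s in $SERVICES; do
        echo "B: starting $s"     # NET START on B, only after ALL stops finish
    done
}

failover
```

The point of the sketch is the strict serialization: nothing in the second loop can begin until the first loop has run to completion.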
Also, the cluster's disks can only be used by one node at a time, and while it would have been trivial for Microsoft to expose each disk to both hosts at all times (by automatically mounting the disk on the "other" node over the network), they just didn't bother.
It's also got a lot of setup caveats. Read the entire manual very carefully and take notes before you even purchase hardware. Then go online and read all the addenda and known issues. A good understanding of NT is not enough -- MSCS is a different build (compile) of NT than the Workstation/Server version. She is a woman who has serious issues, some of which can't be fixed by you.
And then there's the blue screens. And the 7 hour installation procedure. And the way you are strongly cautioned from deleting or changing some MSCS settings after being set, with loving MS-style advice that a reinstall is your best bet.
However, for just plain applications, it's OK. Anything you can run from the command line proper can be put in the cluster and will fail over. So if you're one of the majority of Acrobat Distiller users who installs it in a manner that violates the EULA, i.e. on NT, polling the "In" folder of a network share, MSCS can fail over Distiller VERY FAST (it's not a service, so no delays). However, with a little brains and a little ActiveState Perl (or Cygwin, I suppose) you could hack together a work-alike using DFS + rsync and save a lot of money.
Kudos to your post for not trying to engender a flame war. But you kinda imply that MSCS is worth the exorbitant price tag, and it just isn't, for what little it does and the problems and extra headache it brings with it. I'm not flaming you, just spreading the word:
DON'T BUY MSCS -- IT SUCKS. IF THEY GIVE IT TO YOU FOR FREE, SEND IT BACK OR GIVE IT TO SOMEONE YOU DISLIKE.
Back on topic, what MS may try and sell you is something based on the Microsoft Message Queue and the Microsoft Transaction Server. Those are more BackOffice-variety PHB-entitled products that really don't do much except provide an API for sending guaranteed IPC and doing transactions, even for VB monkeys who don't really understand what that means but think it sounds just plain awesome. Free with the option pack.
This is part of that Microsoft program to divert "wins" from Linux to Microsoft at all costs, especially from IBM. So the sales rep probably doesn't have a clue what your cluster really does, what you want it for, or what MS products it would actually take to build a knockoff. They may have an anti-Beowulf team cooking something up right now, and guess what, pal?! They're hoping your administration will take the bait of free hardware and licenses, and you'll end up beta-testing a 0.1a version of some bizarro-Beowulf for MS. What a deal!!!
Good luck. I'd stick to your guns and insist on using something already proven to work for your goals, like Beowulf or AppleSeed.
Re:Here's the deal: (Score:3, Insightful)
In case you haven't noticed, the 'Ask Slashdot' sections are for answering the original submitter's question, but they also provide a wealth of information to other readers. My post was intended to be informative, but then again, YMMV.
Re:, but what the hell (Score:3, Informative)
I used to have some WLBS (Windows Load Balancing Services) systems (NT4's idea of load balancing cluster).
They worked, more or less, most of the time (about 4 reboots/day on average I think). The problem was, the thing was IMPOSSIBLE to debug and troubleshoot, for the simple reason that it was impossible to know where the problem was. WLBS did terrible layer 2 trickery to route requests around, and as a result it didn't work well with anything more complex than a hub.
Luckily it's now gone and not missed.
Disclaimer: the opinions here expressed are of course my own and do not necessarily reflect any organization's
Beowulf (Score:5, Interesting)
http://www.windowsclusters.org/projects.htm gives a list of current Windows clusters.
Finally, are you out of your tiny little mind? I wonder why M$ is so keen to help. There is no such thing as a free lunch, especially from M$.
Re:Beowulf (Score:4, Interesting)
Why do they do this? Simple: it's a long-term marketing trick (and a cheap writeoff.) Train the students with Windows, Office, Visual Studio, MSSQL Server, IIS, et bloody cetera, and that's what they'll know when they get out into the working world. Companies that already use M$ shit will have an easier time hiring new people. Companies that are deciding on new systems will have people in their IT dept. who say, "Well, I don't know anything about Linux/Solaris/gcc/Apache/whatever, but I know all about NT and VC++ and IIS," and may well make multimillion-dollar purchasing decisions on that basis. It's not hard to figure out.
Re:Beowulf (Score:3, Insightful)
If I were going to run a cluster that needed to take advantage of computational power, I would go Linux. However, my choice would be based on the fact that up to this point I still have not seen enough documented proof to support the theory that Microsoft vs. Linux clusters is even a battle. From my current knowledge I would have to deduce that they currently have their different uses, even though the linked article above says that Microsoft clusters are capable of computational collaboration. Again, as many have already stated, cost is always a factor when dealing with Microsoft, and you have to take it into consideration.
I really will need to study the articles in the link above more closely. Many thanks for publishing it; this is the first thing I have read to support Microsoft's capability of computational collaboration within a cluster environment.
Remember my little Penguins do not be so quick to judge any OS even Microsoft's. Microsoft may not be cheap, it may be filled with bugs, and it may not always be the most secure. But it does serve its uses in the world, for now.
"It runs Linux and works" - 'nuff said? (Score:5, Insightful)
Make him convince you that the time and cost of the switch is going to gain you something.
Does your current setup not do what you need it to do?
MS Cluster is not the same (Score:5, Informative)
We run an MS cluster here. A VERY big app... so big, I am loath to name figures, because that would identify to MS just who is talking here...
But we use MS clustering for our web app. Our setup is that we have a database server with 4 procs and a growing array of web servers with 1 proc each, all of which use disk space on a SAN. W2K clustering manages the load balancing as well as allocating disk space out of the SAN to virtual partitions as needed. The original poster is correct; MS clustering is for load balancing, not computation. I have seen many times that Microsoft sales reps don't have a clue about what they're trying to sell; they're just told from on high to replace Linux with Microsoft wherever they can. I think this is clearly a case of that.
My advice? Ask the sales rep to demonstrate how MS clustering will solve a common comp-sci problem with more MIPS than each box alone has. Point out that you're not running a web server or any such service on these boxes, but that they're for raw computation. Even better, see if he'll let you talk to a technician on how W2K clustering can meet your 'unique' (at least to MS) needs.
Now, for everyone else... Don't get me wrong. W2K clustering is a great technology for building highly performant, highly reliable, highly scalable applications quickly and easily. But it scales in the direction of millions of users, not millions of computations.
Re:MS Cluster is not the same (Score:5, Informative)
MS Computational Clustering [microsoft.com]
Re:MS Cluster is not the same (Score:5, Informative)
A note for any and all interested in this: it's a technical preview, which any other company would call a pre-beta or an alpha release. The only way anyone sane would use this in a production system would be as an Early Adoption Partner...
Re:MS Cluster is not the same (Score:3, Informative)
This is a great idea. ScaLAPACK benchmarks are a popular choice. Also think about what you are really getting for your money (license fees). I work with a modest Beowulf (~50 CPUs) using Linux, and I have no doubt that it would be technically possible to use Windows... but you would spend a lot of time installing kludgy ports of Unix tools: Cygnus wintools, PBS, rsh, Perl, etc. At the very least, the two most popular message passing libraries (MPI and PVM) both rely on rsh.
All the tools that make a Beowulf what it is are free software; there is really NO added value in running them on Windows.
Windows 2000 Advanced Server (Score:2, Informative)
Distributed computing for Windows has been around for a while, though; Seti@home has been doing it for years.
BTW: MS Slashdotted (Score:4, Funny)
Re:BTW: MS Slashdotted (Score:5, Funny)
MS AppCenter server (Score:2, Interesting)
However, there is a server solution I saw demoed at an MS DPS I attended, called Application Center [microsoft.com]. It allows you to manage your cluster and distribute workloads throughout the cluster.
Now, I'm not sure if you NEED this to take advantage of Windows 2000 clustering. The last time I worked with a MS cluster was under NT 4 and it was failover only. The load balancing was "faked" by a router that would just alternate which server the request was sent to.
(insert "yeah but MS is evil" comment here)
(insert "yeah but Linux Beowulf clusters cost less" comment here)
(insert "yeah but who wants to have to reboot your cluster all the time" comment here)
(insert "I wish the sigs were longer because that's a really good quote by Richard Feynman" comment here)
what?? (Score:5, Insightful)
First, the rep needs to prove that $199.00 per node in software fees provides major benefits over the Linux cluster. How many Windows clusters can he list for you to call and ask about? References -- ones where you can call and talk to the guys running/maintaining them. Show where Microsoft provided increased profits or savings over an open alternative.
If they can't give you a dollar amount that shows increased profits or major savings, then be sure to tell the rep that he shouldn't let the door hit him in the ass on the way out. It isn't MS versus Open anymore in today's economy... it's what can get it done and save me money or give me more profits... and this is what makes Open solutions win... Microsoft can't give savings, and the performance difference isn't enough to give profits that would more than overcome the added expense of Microsoft.
Get real numbers; talk to real people running real clusters on all platforms. If you have real numbers, then you can make solid decisions.
Windows Clustering (Score:3, Interesting)
Haven't seen the reported "BSOD round table" where one machine crashes, shortly followed by another and another. The problem we have seen is that a single machine BSODs, and the other machines in the cluster don't realize it's down.
If you're already in the MS camp, it will work; if you're not, I would look at other solutions. I think they will be more cost effective.
Re:Windows Clustering (Score:2)
should be "If you're already in the MS camp, it will work; if you're not, I would look at other solutions. I think they will be more cost effective."
Seen in list of software included... (Score:2, Interesting)
* PLAPACK package (open source software)
heh.
-JT
Well, with Condor... (Score:2, Interesting)
I've been looking at this a lot myself now, as I'm also building a cluster for use in a computational bio lab at Florida State. It certainly seems that Linux is the only way to go right now. In case anyone cares, my cluster right now is 16 nodes of:
Tyan S2460 with 2 Athlon MP1800+ processors per node
1 gig PC2100 RAM per node
20 gig 7200 RPM Maxtor HD
3Com Gigabit over copper Ethernet
low-end cheapass video and floppy, etc.
All in these really nice rack cases, with a big black 2001 monolith-esque rolling rack to shove it all around in. It cost just about $26,000 to build so far, but the plans are to expand it to as many as 512 nodes within the next year or so. Whee!
MS Technology Preview? (Score:2)
Seems to me that historically, MS rushes a v1.0 product out to stem the tide of a competing product and then spends the next couple of releases getting a "real" product out the door.
I have zero experience with unix clustering but would be suspicious of the MS offering until it has a chance to mature.
Clustered MPEG encoding with TMPGenc? (Score:2, Interesting)
That said, with the three computers I have at my place (a p3 desktop, a celeron I use as a low grade server, and my p3 notebook) I'd love to be able to set up a cluster for encoding. Such operations will be the killer app for clustered systems IMHO.
Poke around at IU (Score:2)
Stability issues (Score:5, Informative)
In terms of performance, Windows kernels have pretty good latency compared to 2.2.x Linux kernels, so running a full-screen DOS app might give very good performance, but there's a lot of overhead munching into your RAM, which is likely to be an expensive premium on older hardware.
Lastly, with Windows, I've never heard of channel bonding for ethernet (3 100TX cards ~= 1 gigabit), nor of diskless booting. These can be really necessary for large clusters to keep maintenance down and performance up without buying higher-end equipment.
channel bonding (Score:3, Informative)
however, I would take issue with your assertion that 3 100mbit cards are roughly equal to a gigabit card. while it's true that something like 4 100mbit cards will give you close to the real performance of a gigabit card when used on a low end PC, there is much to be gained by using actual gigabit (use of giant frames, better latencies, etc.)
if you're going to build a cluster, and you actually have a budget, you're going to buy decent yet cheap server boxes. these will most likely include 64bit PCI slots, and there lies your motivation for gigabit. the performance there is unparalleled when using a real wirespeed switch, without using faster technologies of a proprietary nature.
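For reference, a rough sketch of the Linux-side channel bonding being discussed (round-robin across three 100TX NICs). The interface names and address are placeholders; this needs root and a kernel built with bonding support, so treat it as an illustration, not a recipe:

```shell
# Sketch: aggregate three 100TX NICs into one logical interface.
modprobe bonding mode=0 miimon=100              # mode 0 = round-robin striping
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2                  # enslave the three NICs
```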
my 2 cents.
Don't know if this answers your question... (Score:5, Funny)
Limits seem to be the key (Score:3, Interesting)
While I haven't been near a Microsoft Cluster in a while, I do remember a couple of things that really stand out about them:
The number of systems able to be part of the cluster is severely limited. At the time, it was limited to 2, but I'm pretty sure that has increased to a somewhat larger single digit number.
The number of applications available to run on the cluster is just as severely limited. Again at the time, there were exactly zero applications, but I know that there is at least one (Exchange) now.
Given the limitations of what uses you can put an MS cluster to, I wouldn't bother with it in the first place.
this is an excellent idea (Score:2)
Windows clustering a la Microsoft (Score:2, Interesting)
I followed the link to Microsoft's clustering solution. Another link took me to a free evaluation page - the package includes:
All for $7.95 shipping and handling!
May be a cheap way to get a few Win2K licenses?
Sales rep? (Score:2)
Keep in mind, the sales rep will not have your best interest in mind, just your money.
Yes, I know there are good reps out there, but I have become quite jaded with them on the whole, especially since I used to be that presales engineer telling clients that what the rep just said was:
a) not feasible
b) harder than described
c) not worth the money invested.
anyhow, my $.0002
First Hand Info (Score:4, Informative)
Notes from experience:
1) Clustering with Windows requires one of the following OS setups: Win2K Server WITH MS Application Center, OR Win2K Advanced Server. (Similarly with the XP platform.)
2) OS licenses will therefore run between $1000-2000 _per machine_!
3) If you need Application Center, which you likely will, you're talking (if I remember correctly) about another $1K per machine.
4) Of course MS is just getting into this so don't expect it to be easy, well documented or stable.
Finishing Notes:
Obviously, Linux would be mucho cheaper
Easiest, and still cheaper than MS would be the Plug-n-Play Mac solution!
About that Mac solution..... (Score:4, Informative)
Yellow Dog Linux sells a cute little piece of hardware designed for clustering around PPC. Very cute... maybe the best balance of cost-effective and easy in terms of clustering that I've seen.
http://www.terrasoftsolutions.com/products/briQ
-jef
No command line (Score:4, Insightful)
We're primarily using the Beowulf for computations which are "embarrassingly parallel" - in other words, tasks for which it is trivial to partition the input into 16 equal-sized pieces, give one to each node, and then collect the results and paste them together. For example, multiplying incredibly huge matrices and brute-force keyspace searching are embarrassingly parallel.
For us, the primary advantage of running Linux on the Beowulf is that most of the time we don't need to write custom software to speed up a calculation. We just write a shell script that rsh's into each box and runs a program with slightly different command-line parameters on each one.
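A minimal sketch of that shell-script approach. The node names and the `crunch` worker command are hypothetical stand-ins for whatever your cluster actually runs:

```shell
#!/bin/sh
# Fan the same program out to every node, each with a different slice index.
NODES="node01 node02 node03 node04"

fan_out() {                 # $1 = remote-shell command (rsh, ssh, ...)
    remote=$1
    i=0
    for n in $NODES; do
        # every node runs the same program on a different slice of the input
        "$remote" "$n" "crunch --slice $i --of 4" &
        i=$((i + 1))
    done
    wait                    # block until every node's job has finished
}

# On the real cluster:  fan_out rsh
# then collect the per-node outputs and paste them back together.
```

That's the whole trick for embarrassingly parallel jobs: no MPI, no custom software, just a loop and `wait`.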
Obviously for some computational problems it's worth using MPI to have the processes communicate with each other, or load-balancing software so that we can run lots of smaller, but different-sized jobs, and these techniques would probably work equally well whether you're running Windows or Linux.
But for experimentation and prototyping, and quickly distributing easy problems, I think there's an incredible advantage to having a command line. (Of course you could install Cygwin on all of the Windows boxen...but why?)
Tell them it would take a lack of common sense (Score:4, Funny)
If there's one place where Linux excels and Microsoft needs to be kept out of with armed guards, concertina wire, and rabid dogs, it's the computing research centers in higher education. Scraping by to live and make postgraduate tuition can suck, but having to fight for grant money that only lines the pockets of the richest man on the planet just so you can do your thesis is adding way too much insult to injury. For the sake of future scholars, show this sales weasel the door with the help of your foot.
Asking the wrong questions.... (Score:3, Insightful)
The question here that isn't being asked is about the application. Sure, you have a cluster. But just what is it doing? What numbers are you crunching with all those gigaflops? To take the Beowulf idea out of the realm of geek bragging rights into actual useful production takes an application, and you can bet that most are custom-designed in house.
Very little of the OS itself is involved in the real applications that make Beowulfs useful and money-making. Take a look at your intended application and see what its requirements are. If you are writing it in house, tell the MS rep to take a leap, since you won't have to worry about 100+ MS licenses, Visual Studio licenses, or whatever else. If your intended application requires an MS OS underneath, hold out on the rep until he agrees to a dramatically reduced price on the software. But worrying about the OS in a cluster before looking at the application is counterproductive.
The OS doesn't matter - tools do (Score:5, Informative)
For a computational cluster, the OS itself shouldn't really matter. What matters is, do you have the tools you need, and does the environment allow you to work with the cluster in a flexible way.
For a typical computational cluster, what determines the performance will be the quality of your application. Only if you pick an OS with some extremely poor basic functionality (like horribly slow networking) will the OS have an impact on performance.
People optimize how their application is parallelized (e.g. how well it scales to more nodes). The OS doesn't matter in this regard. They optimize how well the core computational routines perform (like optimizing an equation solver for the current CPU architecture) - again, the OS doesn't matter.
So, in this light, you might as well run your cluster on Windows instead of Linux, or MacOS, or even DOS with a TCP/IP stack (if you don't need more than 640K).
However, there's a lot more to cluster computing than just pressing "start". You need to look at how your software performs. You need to debug software on multiple nodes concurrently. You need to do all kinds of things that require your environment and your tools to let you work on any node of the cluster, flexibly, as if that node were the box under your desk.
And this is why people don't run MS clusters. Windows does not have proper tools for software development (*real* software development, like Fortran and C - VBScript hasn't really made its way into anything resembling high performance, and God forbid it ever should).
Furthermore, you cannot work with 10 Windows boxes concurrently as if they were all sitting under your desk. Yes, I know terminal services exist, and they're nice if you're a system administrator, but they are *far* from being usable for running debuggers and tracing tools on a larger number of nodes, interactively and concurrently.
Last but not least, there are no proper debugging and tracing tools for Windows. Yes, they have a debugger, and third-party vendors have debuggers too. But anyone who's been through the drill on Linux (using strace and the like) knows what real tracing tools can do.
So sure - for a dog&pony show, Windows will perform similarly to any other networked OS with regard to computational clusters. But for real-world use? No, you need tools to work.
Ask for modifiable code and no injurous NDAs (Score:5, Insightful)
Simple... ask for :
Re:Ask for modifiable code and no injurous NDAs (Score:3, Interesting)
2. This is a given, except when closed source would be revealed publicly (which is also a given).
3. The very idea is ridiculous.
4. Pretty much a given as well (free, that is).
Balancing versus Distributing. (Score:4, Insightful)
If the server holds the data and you have the potential for a lot of clients making requests (thus I/O and bandwidth - a P2P crunching system, to name a popular example), I don't see why you'd want to switch to Microsoft if you got it working on Linux. You'll need very good knowledge of Microsoft server products (or someone hired who has it) if you want to move to anything more than a standalone server. Also, the last time I checked with M$ for that solution, because I wanted a safer domain and maximum uptime, everything was doubled for two machines. I thought it would be a bit cheaper than that, but heck, for the price of Advanced Server vs. the standalone with 25 users, you can get an extra tape drive and a cheap RAID 1 to mirror your critical drives (on a small business server).
So since you mention that you WISH for donations and want raw computing power, instead of buying MS licenses, concentrate on the goal you're trying to achieve: distributed crunching power with scalable servers. Basically, you'll need HARDWARE to crunch. (I still don't get why you'd NEED servers to run number crunching; workstations can do the same and transmit to a server, as I was saying before.) Check what you have, check what you need, design around that, and do a cost analysis, since it seems to be very critical in your case.
There are some cases where you'll want MS servers; here at work I've set up an MS server to have fewer configuration and troubleshooting issues with my win2k Pro machines (at least I know that when something screws up it's MS-related for sure).
Inside the brain of the ms salesman (Score:3, Funny)
...Execute search MS - terms: cluster
Results: Microsoft Clustering, formerly known as Wolfpack.
...Execute talk: Yes... MS does clustering. What would it take to convince you to use ours?
I think if I was in the customer's position, I'd agree to it just to shove it back in their face when I ask how it distributes the computing load etc...of course that would be
blah blah blah computing load blah blah
...Execute search MS - terms: computing load
Fuzzy Logic Results: Microsoft Clustering, formerly known as Wolfpack. Use for load balancing.
I just think that its funny... (Score:5, Funny)
Just come out and say it.
What's your app? (Score:5, Insightful)
Beowulf clusters get built to support your application, not the other way around. Your choice of hardware and OS will depend on the parallel nature of your code. Do you need myrinet, or can you get away with fast ethernet? Will your code even compile under win32? Do the supporting libraries (PVM/MPI/BLAS whatever) run under win32? What about the queuing system?
How are you going to manage the cluster? You need automation, even for small clusters. How easy is it to add a new user, apply a patch or change a bios setting on your cluster without having to plug a keyboard and monitor into each node? What about central logging? How about automated OS installs when you add another 100 nodes when you get your funding?
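The automation questions above are exactly where scripting earns its keep. A minimal sketch of "run this on every node" (the hostnames, and the idea of wrapping ssh, are illustrative assumptions - real clusters would use proper key-based auth and a tool built for the job):

```python
import subprocess

# Hypothetical node hostnames; a real cluster would read these
# from its node database or a generated hosts file.
NODES = [f"node{i:03d}" for i in range(1, 5)]

def run_everywhere(command, dry_run=False):
    """Run a shell command on every node via ssh.

    With dry_run=True, return the command lines that would be run
    instead of executing them (handy for sanity-checking)."""
    results = {}
    for node in NODES:
        argv = ["ssh", node, command]
        if dry_run:
            results[node] = " ".join(argv)
        else:
            results[node] = subprocess.run(argv).returncode
    return results

# e.g. run_everywhere("useradd newgrad") or run_everywhere("uptime")
```

The point isn't this particular script - it's that on a Unix cluster the whole loop is a few lines, whereas anything that requires clicking through a GUI per node stops scaling at about a dozen machines.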
Oh. Benchmark, benchmark, benchmark. That means your code, running your datasets, on your hardware and OS. Not vendor-supplied numbers. If you have a serious hardware vendor, you should be able to wrangle demo machines off them. Try before you buy.
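For "benchmark your own code" the harness can be trivial - what matters is that the workload and the box are yours. A minimal best-of-N timing sketch (the `sum` workload is just a stand-in for your real kernel):

```python
import time

def benchmark(fn, *args, repeats=5):
    """Best-of-N wall-clock time for fn(*args).

    Taking the minimum over several runs filters out scheduling noise;
    run this on the actual target hardware and OS, not a desktop."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload; substitute the solver or kernel you actually care about.
elapsed = benchmark(sum, range(100_000))
```

Run the same harness over the same dataset on both candidate setups and compare - that number, not a vendor slide, is what decides the purchase.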
Here's what to tell them. (Score:5, Interesting)
So, take the MS reps through the operation, tell them the capabilities. Ask them if they can meet or exceed them. If they say "Yes", you're either not using the real capabilities of your Linux machines, or they're lying.
steve
use Application Server, not Clustering (Score:5, Informative)
Introducing Windows 2000 Clustering Technologies [microsoft.com]
Application Center home page [microsoft.com]
Component Load Balancing [microsoft.com]
Windows 2000 Clustering (kinda) explained... (Score:3, Informative)
A few points:
You can set up any application or service to cluster & fail over if required, as long as:
Active/Active mode is more complicated, meaning instances of an application running on different nodes, all accessing the same data on disk. Only certain applications can do this successfully, e.g. Oracle, which does so by using a custom file system and effectively bypassing the Windows Cluster Service. Windows & most apps will normally throw a fit if there are clashing file requests from multiple nodes, since Windows caches file tables in memory and can thus lose track of the real situation on disk (bad news). I've seen it BSOD in such cases.
Can you imagine... (Score:3, Funny)
What it would take (Score:5, Funny)
You've got a golden opportunity here! Microsoft does it your way or they don't get the sale.
Let them know the nature of a cluster in a research project. Nodes will be swapped in and out. New ones will be added. Different OSs will be used. So tell them you want a copy of Windows for each potential node, licensed to the University and not to any individual node. Tell them you need full rights to install, reinstall, and uninstall any particular copy on any particular node. Tell them you will not accept any terms restricting the cluster to Windows only.
If you really want to play hardball, tell them you don't even want licenses, but bonafide user-owned copies of Windows subject only to the provisions of copyright. In other words, you don't want to be subject to any EULA. Then you'll discover how much Microsoft wants your cluster to be a Windows cluster.
Pooch for the Macintosh (Score:3, Informative)
This would allow you to use the Macs (OS X UNIXy goodness too) individually as personal workstations (for writing, graphics, computation, surfing the web) while at the same time using them in clusters for compute-intensive work. This makes for a doubly productive machine, and a much cheaper one, since more work can be accomplished with it than by simply using it as a dedicated node.
Mac clusters are easy peasy to set up (even junior high students are doing it), as the one-page instructions and AppleScript-ability should indicate. They're also pretty damn fast, given the built-in Gigabit Ethernet of G4s and AltiVec (if taken advantage of, as in Apple's version of BLAST).
Finally, the other item of interest: you can use any Mac you have - G3s and G4s of any model and speed - since you don't have to balance everything as on typical clusters, where all of your hardware has to be exactly alike. Your cluster can even include the iMac on the secretary's desk!
Simply... (Score:5, Interesting)
Then, point out the scads of Beowulf clusters and Linux/Unix based systems.
Finally, inform the rep and your management that you've chosen to use the more cost effective, higher performance and standardized choice...Unix.
If management resists further, do a cost analysis. That'll convince them.
299,792,458 m/s... not just a good idea, it's the law!
I can always count on Ask Slashdot... (Score:5, Interesting)
Right now a coworker and I are looking at pricing and configuring a fault-tolerant cluster for a client who runs Windows 2000 and Exchange 2000. They're a bit paranoid, so they've decided they want a cluster. We've tried to educate them on exactly what a Microsoft cluster can and can't do, but it's difficult to pin down exactly what they want (basically an entire network exactly like Microsoft's own, but for $1000).
Pricing on a two system cluster is around $50,000. Buying two copies of Exchange and Windows Advanced Server will total $20,000. Then there's the hardware costs. For our client, they've specifically requested this, so they're ready to pay.
My question to Whamo is: are they really taking the Microsoft rep seriously? If they have to pay software costs for their new cluster, that's going to mean one of two things: either buying fewer CPUs to add to the cluster, or not doing the project at all, because the software alone will put them over budget. With Advanced Server running somewhere around $4000, that's a lot per machine when Linux costs at most $5 to burn a CD after downloading it via the university's T1/T3/etc. Whamo says "it is running on old hardware and is basically used for dog and pony shows to get more funding and hopefully donations of higher-end systems" and to me that is your answer. If you can't afford the hardware, you can't afford to buy Microsoft's software...
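Putting rough numbers on the trade-off above makes the point starkly. Using the comment's figures (~$4000 per node for Advanced Server vs. ~$5 to burn a Linux CD) and a made-up $1500 placeholder for a donated-grade node:

```python
# Rough numbers from the comment above: ~$4000/node for Advanced Server
# vs. ~$5 of media for Linux. The $1500 node price is a hypothetical
# placeholder, not a quote.
def extra_nodes(n_nodes, win_license=4000, linux_media=5, node_hw=1500):
    """How many additional nodes the license savings would buy outright."""
    savings = n_nodes * (win_license - linux_media)
    return savings // node_hw

print(extra_nodes(100))  # nodes the license budget buys instead
```

On a 100-node cluster the license line item alone is on the order of a couple hundred extra machines' worth of hardware - which is exactly the "can't afford the software if you can't afford the hardware" argument in numbers.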
Also, there's MOSIX [mosix.com] as well, but I don't have much experience with MOSIX and thus cannot comment on it.
MSCS Vs. Beowulf = Apples vs. Oranges (Score:3, Informative)
Shameless plug (Score:3, Informative)
We have been very successful in Windows clustering efforts and offer a professional MPI implementation for windows platforms. Give us a shot I am sure we could set up an evaluation of some sort.
That said, we have the following self-kudos:
CORNELL THEORY CENTER'S VELOCITY CLUSTER MAKES THE TOP 500 LIST (June 16, 2000)
"Our relationship with MPI Software Technology, Inc. has been extremely valuable," says Cornell Theory Center associate director for systems Dave Lifka. "Good job scheduling, resource management, and reliable MPI are the primary pieces of any high performance computing environment. MSTI has made the extra effort to make sure MPI/Pro and Cluster CoNTroller are ready for a production quality environment. The utilization and stability of the AC3 systems is directly related to the quality of their software."
World's Largest NT Cluster Goes Live (August 25, 1999)
The Advanced Cluster Consortium (AC3), which includes Cornell University, Intel, Microsoft, Dell, Giganet, and MPI Software Technology, Inc., announced on August 12, 1999, that it had completed the installation of a 256-processor high-performance computer cluster using Windows NT 4.0. AC3's cluster bests a University of Illinois 192-processor NT cluster, which Windows NT Magazine covered in June 1999.
As you can see we've been at it a while!
Microsoft says... "little substantive difference" (Score:5, Funny)
I tried to read between the lines so we can get the "real" picture... my comments are in italics, and delimited with brackets.
Q. How does a Windows-based supercluster compare with one running UNIX or Linux?
A. In short, there's very little substantive difference [ except you have to pay for our software, and it's not cheap ], but owners of existing UNIX-based solutions will face changes that will cause them some work and discomfort (less for users than for their current administrators and support staff) [ because when the server blue screens in the middle of the night who gets called? ]. These are offset in part by lower costs of ownership (technical skills required) [ because incompetent Windows admins are a dime a dozen ], breadth of applications and support tools [ expenses ], vendor support options [ additional expenses ], and commonality with the constantly improving desktop environment [ which is completely useless for a (headless) server ].
From a hardware perspective, there's very little difference seen by the application. In the past, UNIX-based hardware environments had better floating-point performance [ and still does ], but that's been offset in the last few years by Moore's Law curves for large-volume products that have advanced faster than specialty products have [ now you can throw more hardware at the problem for the same price ], as well as the price and support cost differentials between these vendors' products.
From a software perspective, Windows is a markedly different environment [ hopefully if you agree with this statement you will agree (and believe) our other statements ], designed with priorities set by a much different market segment than traditional science and engineering [ we're trying to shoehorn our product into a market it doesn't belong ]. Windows NT® and now Windows 2000 were designed to meet the needs of those ISVs building products for businesses that are unable or unwilling to dedicate their best people [ incompetent employees/amoebas ] to support their infrastructure (versus focusing on building solutions for their business mission) [ because supporting infrastructure should not be that hard ], as well as the needs of a hardware community that required continuous integration of new devices and components [ such as digital camera support for your database server ].
[ we hope that you've become completely confused by this, please telephone your local Microsoft sales office and we will "explain" things to you... please have your credit card ready ]
MS clustering = bad mmkay? (Score:4, Informative)
2 x HP Netservers, both dual PII Xeon, 1GB RAM, and a small RAID shelf with 8x 9GB disks. Both NT4 installs with the correct patch levels.
One machine ran Oracle, the other IIS; these were clustered so that one would take over the tasks of the other should there be a problem.
Problems:
1) Crashing (daily at least)
2) Slow (astonishingly poor, disk defrags once a week helped this)
3) Sometimes one host would freeze, and the other wouldn't actually notice
4) Often a shutdown of one node would move the services across, but upon rejoining the cluster - the node with both services would refuse to give one back.
5) Often, IIS would stop talking, and neither node would actually realise.
The attempted solutions:
1) Replaced CPUs, memory, disks, eventually nodes
2) Reinstalled clustering software, eventually total clean installs of operating system and applications
3) Support from Microsoft, and Oracle, and HP who made the (certified) kit. Oracle+HP both pointed the finger at the OS, Microsoft simply failed to help, when we got any response from them at all.
4) (this helped) I used one of the spare HP9000 servers to monitor them remotely by trying test transactions - it alerted people when they fucked up.
I think the above says it all really. Standard software on correct hardware - it just didn't work properly. Microsoft can stick their clustering "technologies" where the sun don't shine.
MS parallel tools (Score:5, Informative)
COM+ and Queueing Components. AppCenter.
The way it works is this. You write a COM+ component that is transactionally queuing aware. Each component takes a work unit in, processes it, and then sends the result of the transaction to the queueing components for reassembly or re-issue (if a node fails to submit a result, for example, good for checkpointing).
You can use normal Windows 2000 Professional boxes for the worker bees, and use a few Windows 2000 Server boxes to co-ordinate the issuing of jobs and control, and munging the result sets coming back in.
If you need to submit a wide variety of jobs, the COM+ components will obviously be changing regularly, so it'd be a good idea to go with AppCenter so that you can treat a bunch of machines as a single whole. This allows you to upgrade or deploy an app to literally thousands of machines in a few mouse clicks and a few seconds. AppCenter also has pretty good resource management, something that might be necessary if multiple jobs are running at the same time.
The cool thing is the development environment is really friendly and you can make COM+ components pretty easily and test them locally (for the n=1 case) before deploying to the farm.
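The queued work-unit pattern described above (dispatch units, collect results, re-issue anything a failed node never returned) isn't tied to COM+; stripped of the Windows plumbing it's a plain work queue with retries. A minimal single-process sketch with stand-in names (no actual COM+ or AppCenter APIs):

```python
from queue import Queue

def run_jobs(work_units, process, max_retries=3):
    """Dispatch work units and re-issue failed ones.

    `process` stands in for whatever a worker node would do with a
    unit; on failure the unit goes back on the queue, which is the
    checkpointing/re-issue behaviour described above."""
    pending = Queue()
    for unit in work_units:
        pending.put((unit, 0))
    results = {}
    while not pending.empty():
        unit, tries = pending.get()
        try:
            results[unit] = process(unit)
        except Exception:
            if tries + 1 < max_retries:
                pending.put((unit, tries + 1))  # re-issue the work unit
    return results
```

The COM+/queuing-components version adds transactional delivery and distribution across machines, but the control flow the comment describes is essentially this loop.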
There are also specialist MP libraries for the Win32 platform, such as PVM or MPI (WMPI). These have the benefit of re-using the knowledge and APIs that users might already be familiar with - one of the biggest jobs when a place converts from one supercomputer to another is rejigging and reoptimizing the code for the new architecture.
BSOD!! (Score:4, Funny)
when they Blue Screen?????
A Cluster Bomb!!!!
Re:BSOD!! (Score:2, Interesting)
when they Blue Screen?????
A Cluster Fuck?
(if you didn't know what it meant, then you wouldn't be offended)
Re:first post - no way (Score:4, Informative)
From Microsoft's site: "The Computational Clustering Technical Preview (CCTP) toolkit is used for creating and evaluating computational clusters built on the Windows® 2000 operating system."
Obviously, they are now attempting to compete with projects like Beowulf. It's probably all part of the M$ aggressive stance on Linux (and other competitors). The real question is: has anybody downloaded this kit and played with it? It's just a technology preview, so how mature is it in comparison to Beowulf or other clustering technologies?
Re:first post - no way (Score:4, Interesting)
The AC3 folks at cornell [cornell.edu] have done quite a bit with these windows clusters. I guess the parallel Matlab is pretty nifty, but there's no reason any of this stuff couldn't be done on a more mature platform.
Personally, my biggest turnoff is the fact that you need KVM switches wired up to each node...well that and the overhead of running the bloatware that is win2k. Compared to a 256 node headless linux cluster we built this just sucks. Hard.
GUI overhead - The Functionality Bloat (Score:3, Insightful)
Remember that one of Microsoft's contentions in the anti-trust trial is that it cannot unbundle Internet Explorer from Windows - that the system is so interdependent that no elements can be left out and still function.
So they cannot compete on price, since all other things being equal a Windows machine must have a video graphics card.
They cannot compete on performance, since all other things being equal a Windows machine must spend resources on storing and running the GUI.
Yesterday, I was showing a very happy WindowsXP owner (who also happens to be a somewhat savvy computer consultant with Unix and Linux experience) the beauty of Debian's apt and dselect packages. He was so happy with the granularity of not installing anything that he doesn't want, that I gave him my Debian 2.2r4 CD. (I'm running Woody anyway)
Bob-
Re:Custom (Score:3, Insightful)
when was the last time you used VB for engineering, computationally heavy tasks?
Christ - almost all this shit is still written in Fortran... Fortran 77, I believe, and not even 90.
try to change the value of 5 in VB, go ahead... i dare ya.
Re:not a good place to ask (Score:2, Insightful)
hardware currently running the Linux cluster.
Compare results.
Re:BSOD (Score:2, Funny)
You're so close, but wrong vendor.. (Score:2)
No worrying about Software licenses, AND "Professional" support from IBM for both hardware and software.
Re:You're running on old hardware right? (Score:3, Interesting)
/Brian
Re:You're running on old hardware right? (Score:3, Insightful)
IMHO that is a good tradeoff. Running X on a PC with a decent amount of memory and processing power (basically 64MB+, 200MHz+) is not going to put any significant load on the machine. Similarly, the average Windows machine can easily handle both the GUI and server processes. If you are experiencing performance problems with your server processes because of the GUI overhead, any responsible sysadmin would upgrade the hardware, because getting that close to the performance limit of your hardware is bound to cause you trouble anyway (a minor increase in server load would be enough).
Don't get me wrong, I love Linux and have used it on old hardware and found it served my needs perfectly. However, you really need to know your stuff to get it up and running. When it comes to configuring things, Windows is easy when it can be and just as hard as Unix when it needs to be. Basically, for simple server stuff you can get IIS up and running relatively easily. The default setup for Apache, on the other hand, is pretty usable out of the box, but as soon as you need to tweak it even slightly you are on your own. For professionals it doesn't matter; they have the time and the need to get familiar with whatever they configure. This type of sysadmin is knowledgeable and expensive, and you are unlikely to find one in small organizations. Instead you will find loads of inexperienced script kiddies who terrorize their users with major fuckups.

If I sound frustrated, it's because our local sysadmin (Linux) just screwed up our mailserver (a SuSE box and some ancient Solaris machine) and I'm expecting some important mails. It's not the first time, and I'm afraid there's more downtime ahead.
For the casual admin who just needs to get an unfamiliar service up and running with no fuss the windows way of doing things is simply easier. The overhead of a GUI is irrelevant in any business case you can come up with (business cases also include licensing, sysadmin salaries, hw cost, training cost, etc.).