Microsoft

How Well Does Windows Cluster? 665

cascadefx asks: "I work for a mid-sized midwestern university. One of our departments has started up a small Beowulf cluster research project that it hopes to grow over time. At the moment, the thing is incredibly weak... but it is running on old hardware and is basically used for dog and pony shows to get more funding and, hopefully, donations of higher-end systems. It runs Linux and works; it is just not anything to write home about. Here's the problem: my understanding is that an MS rep asked what it would take to get them to switch to a Microsoft cluster. Is this possible? Are there MS clusters that do what Beowulf clusters are capable of? I thought MS clusters were for load balancing, not computation... which is the hoped-for goal of this project. Can the Slashdot crowd offer some advice? If there are MS clusters, comparisons of the capabilities would be welcome." One only has to go as far as Microsoft's site to see its current attempt at clustering, but what is the real story? Have any of you had a chance to pit a Linux Beowulf cluster against one from Microsoft? How did they compare?
  • Licensing (Score:3, Informative)

    by CodeMonky ( 10675 ) on Thursday February 21, 2002 @01:34PM (#3045600) Homepage
    Licensing would seem to be the first thing that comes to mind.
    Software costs for 100 Linux machines are close to nil.
    Software costs for 100 Windows machines probably won't be.
    Granted, I haven't read the licensing on the MS Clustering link, but if it's like anything else you'll need a license of some kind on every machine.

    • Re:Licensing (Score:5, Informative)

      by CodeMonky ( 10675 ) on Thursday February 21, 2002 @01:37PM (#3045635) Homepage
      Followup:
      From reading the MS site, it looks like licensing is based on the EULA of the software being used, so if you are using Win2K Pro you have to have a copy of Win2K Pro for each machine, etc., etc.
      • by the_2nd_coming ( 444906 ) on Thursday February 21, 2002 @01:43PM (#3045693) Homepage
        oh, but remember,

        the TCO!!!!!!!!!!!!! :-p

        you know how expensive a CS student is!!!! oh my god, how can they afford the astronomical cost of having 5 or 6 of them on one project.

        don't you know that if you move to Windows for all your research project clustering needs, you only need a chimp... and since educating a chimp is much cheaper than educating 6 bright young men, your university will save a considerable amount of money... especially when you lay off all those expensive profs and hire an animal trainer.
      • Re:Licensing (Score:4, Informative)

        by maitas ( 98290 ) on Thursday February 21, 2002 @03:32PM (#3046665) Homepage
        For raw MPP numeric processing, W2K is too damn slow. You can boot Linux in 4MB of RAM and less than 64MB of disk, then load just the libraries you need and nothing else, and you will have a pretty decent system. Try thinning W2K down and you will have a huge problem there. You can use Sun's Grid Engine for Linux (http://www.sun.com/software/gridware/gridengine_project.html) and, best of all, it's open source!
        In the end, it all comes down to your software: if you develop a highly scalable, almost share-nothing algorithm, Linux clustering is the way to go. For failover on Linux you have the Linux-HA project; once more, open source!
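
        To make "share nothing" concrete, here is a minimal MPI example in C; a sketch of my own for illustration (the toy sum is made up, not from the post above). Each rank works on its own slice of the index range, and the only communication in the whole run is a single MPI_Reduce at the end.

          /* sharenothing.c -- a minimal share-nothing MPI computation. */
          /* Compile with mpicc, run with mpirun across the cluster.    */
          #include <stdio.h>
          #include <mpi.h>

          int main(int argc, char **argv)
          {
              int rank, size;
              long i, n = 100000000L;           /* total amount of work */
              double local = 0.0, total = 0.0;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);

              /* each rank takes every size-th term: no shared state at all */
              for (i = rank; i < n; i += size)
                  local += 1.0 / (double)(i + 1);

              /* the one and only communication step */
              MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                         0, MPI_COMM_WORLD);

              if (rank == 0)
                  printf("sum = %.6f\n", total);
              MPI_Finalize();
              return 0;
          }
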
    • by MrWinkey ( 454317 ) on Thursday February 21, 2002 @01:43PM (#3045705) Homepage
      My managers will only buy Windows products, as they have a site license with MS. They are looking into Linux a little bit because Terminal Server with load balancing does not actually load balance, and the clustered computers do not talk to each other: the profiles on the 3 clustered servers do not update each other at all. Even so, this was much better than the last attempt, when my boss used an IBM preconfigured box and the whole cluster got a BSOD and corrupted a drive, losing data for 3 days. People were not happy.

      I can only hope MS's poor performance will make them switch.

      • Persuade one of your mates to sell them a site license for Linux. If that doesn't work, find some pro-Linux company and offer them some easy cash.
        • It's a .gov, so they have to use some contract we already have with IBM hardware and the MS site license. The director of my department is a big MS fan (even after he upgraded to XP on his laptop and corrupted the drive). I hope to be moving to a different department though, where I can possibly run Linux on my desktop PC.
    • even easier: (Score:3, Troll)

      by hawk ( 1151 )
      They asked a direct question, give them a direct answer: the source code . . .


      :)


      hawk

  • Fault tolerance (Score:2, Insightful)

    by Simpler ( 558434 )
    Just make sure your distributed computing can handle you having to reboot boxes every now and again.

    Anyhow, imagine how much you'd be paying in software licensing for a large cluster. For a university project, this just doesn't seem to make sense.

  • by saintlupus ( 227599 ) on Thursday February 21, 2002 @01:34PM (#3045607)
    One only has to go as far as Microsoft's site to see

    Ah, so this is a typical Ask Slashdot then?

    --saint
  • by ashultz ( 141393 ) on Thursday February 21, 2002 @01:34PM (#3045609)
    To start, it would take a few hundred dollars per box to put W2000 on it, since they presumably don't want you to just copy their evaluation version.

    So unless they're willing to give you their OS for free, why would you even consider it? Suddenly your supercomputer cluster would cost like a real supercomputer... then you could have just bought a real supercomputer!
    • Microsoft usually will give you the OS for free if you're a decent-sized business, especially if you're considering going Linux instead. (And yes, it has happened to me, unfortunately.) It's kind of like that crack dealer telling you the first hit is free.
  • Point? (Score:5, Interesting)

    by nate1138 ( 325593 ) on Thursday February 21, 2002 @01:36PM (#3045617)
    It seems to me that part of the beauty of a linux cluster is

    1. Not having to buy a license for each machine

    2. Having an infinitely configurable system (meaning that you can load as much or as little of the OS and libraries as you want/need)

    3. The use of high quality, low/no cost development tools.

    It seems to me as though running a cluster on 2k would possibly be easier (point and click) but less efficient.
    • Re:Point? (Score:5, Interesting)

      by SnapShot ( 171582 ) on Thursday February 21, 2002 @02:44PM (#3046237)
      Here's the real question:

      Here's the problem: my understanding is that an MS rep asked what it would take to get them to switch to a Microsoft cluster. Is this possible?

      What would it take to switch? Well, you go to the rep and ask for X P4 1.8 GHz desktops, X licenses of Win2K, and Y trips to the Microsoft Clustering Training Junket in sunny Bermuda.

      What does MS get in return? A department of trained CS students and professors who, when someone asks about distributed computations, will respond "Microsoft" instead of "Linux". And when those students enter the real world and the PHB (who wants MS anyway) asks about clustering the answer will be "Microsoft".

      Remember, Linux earns mindshare, but Microsoft buys it... and it is almost always easier to buy someone's loyalty than to earn it.

    • Re:Point? (Score:5, Informative)

      by Red Avenger ( 197064 ) on Thursday February 21, 2002 @03:33PM (#3046677)
      Microsoft answers your question here [microsoft.com].

      "Q. How does a Windows-based supercluster compare with one running UNIX or Linux?

      A. In short, there's very little substantive difference, but owners of existing UNIX-based solutions will face changes that will cause them some work and discomfort (less for users than for their current administrators and support staff). These are offset in part by lower costs of ownership (technical skills required), breadth of applications and support tools, vendor support options, and commonality with the constantly improving desktop environment.

      From a hardware perspective, there's very little difference seen by the application. In the past, UNIX-based hardware environments had better floating-point performance, but that's been offset in the last few years by Moore's Law curves for large-volume products that have advanced faster than specialty products have, as well as the price and support cost differentials between these vendors' products.

      From a software perspective, Windows is a markedly different environment, designed with priorities set by a much different market segment than traditional science and engineering. Windows NT® and now Windows 2000 were designed to meet the needs of those ISVs building products for businesses that are unable or unwilling to dedicate their best people to support their infrastructure (versus focusing on building solutions for their business mission), as well as the needs of a hardware community that required continuous integration of new devices and components."

  • by Anonymous Coward on Thursday February 21, 2002 @01:36PM (#3045625)
    Replace a Linux Cluster with Microsoft?

    I believe the military have a term for this.

    It's called a Cluster F**K.
  • Take a look at (Score:5, Informative)

    by wiredog ( 43288 ) on Thursday February 21, 2002 @01:36PM (#3045627) Journal
    Windows Clusters [windowsclusters.org].
  • by powerlinekid ( 442532 ) on Thursday February 21, 2002 @01:37PM (#3045633)
    From what I understand from reading Win2K Advanced Server's help section on Windows clustering, it is mostly for stability, kind of like a massive mirrored RAID system. I really don't see any performance advantage if you're looking for supercomputer speeds, unless you measure performance by uptime. As a side note, what were you using for clustering? I'm currently building a cluster using MOSIX for my school [newpaltz.edu] and it seems to be going nicely. I'm just curious as to what gives the best speed performance on the Linux end.
  • by isotope23 ( 210590 ) on Thursday February 21, 2002 @01:38PM (#3045639) Homepage Journal
    With their vaunted stability, and marketing
    savvy, their new Cluster Product will be called:

    The Cluster Bomb!

  • by Em Emalb ( 452530 ) <ememalb AT gmail DOT com> on Thursday February 21, 2002 @01:38PM (#3045649) Homepage Journal
    "from the please...no-more-beowulf-jokes dept."

    too late
  • Here's the deal: (Score:5, Informative)

    by Null_Packet ( 15946 ) <nullpacket@NosPAM.doscher.net> on Thursday February 21, 2002 @01:39PM (#3045652)
    MCS (Microsoft Cluster Services) is designed for load balancing and fault tolerance, whereas Beowulf clusters (AFAIK) are more about distributing processing load for performance gains (massive threading). MCS works quite well, especially on Fibre Channel and brand-name hardware such as Dells and Compaqs.

    Simply put, it works well (though cost is often an issue, given the price of enterprise hardware), but it is not the same clustering you see with the Unices. E-mail me at my account if you have more specific questions.

    My intent is not to start or participate in a flame war, but the term clustering simply implies different things on different OSes.
    • mod up the parent (Score:3, Insightful)

      by f00zbll ( 526151 )
      The post makes good points. Clustering means a lot of different things, and clustering for failover can differ drastically depending on the actual implementation. Clustering MS Exchange is different from clustering a stateful application or transaction server. Perhaps the original post should have been more precise and given a better idea of the intended use.


      There are plenty of resources on the net that provide specific details about building clusters and how to optimize performance. Don't forget that applications need to be rewritten to make them friendly to distributed/parallel processing.

    • by iankerickson ( 116267 ) on Thursday February 21, 2002 @03:17PM (#3046528) Homepage
      MCS works quite well, especially well on Fibre Channel and Brand Name Hardware such as Dells and Compaqs.

      Except your post is factually incorrect. MSCS is a POS -- to say it works "well" is true if you mean "well... it works.... kinda."

      It basically just enables multi-initiator support for SCSI chains (so a chain can be connected to 2+ hosts), allows more memory for large applications (if the application is written correctly to use it) and (this is the main feature) allows services to fail-over from one host to the other.

      This is where MSCS should be good, but it just isn't. Basically, imagine you have 2 NT servers. A is running services, and B isn't running any services except the basics. Do a NET STOP on all the services on A, wait for it to completely finish, and then, and only then, do a NET START on those same services on B. Visualize in your mind how long that would take, and then double it. If anything goes wrong, like a service that won't stop (imagine that) or a service that can't start due to a dependency, it throws a monkey wrench into the whole works.
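
      For the morbidly curious, that NET STOP / NET START dance boils down to something like the following Win32 service-control sequence. This is a sketch of my own, not anything MSCS actually ships: the node names and the "MyAppSvc" service are hypothetical, and error handling is omitted. The polling loop is the part you double in your head.

        /* manual_failover.c -- stop a service on node A, then start it on
           node B, the long way around. Compile with a Win32 C compiler. */
        #include <windows.h>

        static void move_service(const char *from, const char *to,
                                 const char *name)
        {
            SC_HANDLE scm_a = OpenSCManagerA(from, NULL, SC_MANAGER_CONNECT);
            SC_HANDLE scm_b = OpenSCManagerA(to,   NULL, SC_MANAGER_CONNECT);
            SC_HANDLE svc_a = OpenServiceA(scm_a, name,
                                           SERVICE_STOP | SERVICE_QUERY_STATUS);
            SC_HANDLE svc_b = OpenServiceA(scm_b, name, SERVICE_START);
            SERVICE_STATUS st;

            ControlService(svc_a, SERVICE_CONTROL_STOP, &st);   /* NET STOP */

            /* nothing may start on B until A has completely stopped */
            while (QueryServiceStatus(svc_a, &st) &&
                   st.dwCurrentState != SERVICE_STOPPED)
                Sleep(1000);

            StartServiceA(svc_b, 0, NULL);                      /* NET START */

            CloseServiceHandle(svc_a); CloseServiceHandle(svc_b);
            CloseServiceHandle(scm_a); CloseServiceHandle(scm_b);
        }

        int main(void)
        {
            /* hypothetical node names; repeat per service, in dependency order */
            move_service("\\\\NODEA", "\\\\NODEB", "MyAppSvc");
            return 0;
        }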

      Also, the cluster's disks can only be used by one node at a time, and while it would have been trivial for Microsoft to expose each disk to both hosts at all times (by automatically mounting the disk on the "other" node over the network), they just didn't bother.

      It's also got a lot of setup caveats. Read the entire manual very carefully and take notes before you even purchase hardware. Then go online and read all the addenda and known issues. A good understanding of NT is not enough: MSCS is a different build (compile) of NT than the Workstation/Server version. She is a woman who has serious issues, some of which can't be fixed by you.

      And then there's the blue screens. And the 7-hour installation procedure. And the way you are strongly cautioned against deleting or changing some MSCS settings once they're set, with loving MS-style advice that a reinstall is your best bet.

      However, for just plain applications, it's OK. Anything you can run from the command line proper can be put in the cluster and will fail over. So if you're one of the majority of Acrobat Distiller users who installs it in a manner that violates the EULA, i.e. on NT polling the "In" folder of a network share, MSCS can fail over Distiller VERY FAST (it's not a service, so no delays). However, with a little brains and a little ActiveState Perl (or Cygwin, I suppose) you could hack together a work-alike using DFS + rsync and save a lot of money.

      Kudos to your post for not trying to engender a flame war. But you kinda imply that MSCS is worth the exorbitant price tag, and it just isn't, for what little it does and the problems and extra headache it brings with it. I'm not flaming you, just spreading the word:

      DON'T BUY MSCS -- IT SUCKS. IF THEY GIVE IT TO YOU FOR FREE, SEND IT BACK OR GIVE IT TO SOMEONE YOU DISLIKE.

      Back on topic, what MS may try and sell you is something based on the Microsoft Message Queue and the Microsoft Transaction Server. Those are more BackOffice-variety PHB-entitled products that really don't do much except provide an API for sending guaranteed IPC and doing transactions, even for VB monkeys who don't really understand what that means but think it sounds just plain awesome. Free with the option pack.

      This is part of that Microsoft program to divert "wins" from Linux to Microsoft at all costs, especially from IBM. So the sales rep probably doesn't have a clue what your cluster really does, what you want it for, or what MS products it would actually take to build a knockoff. They may have an anti-Beowulf team cooking something up right now, and guess what, pal?! They're hoping your administration will take the bait of free hardware and licenses, and you'll end up beta-testing a 0.1a version of some bizarro-Beowulf for MS. What a deal!!!

      Good luck. I'd stick to your guns and insist on using something already proven to work for your goals, like Beowulf or AppleSeed.

  • Beowulf (Score:5, Interesting)

    by Usquebaugh ( 230216 ) on Thursday February 21, 2002 @01:39PM (#3045666)
    A beowulf cluster is not limited to Linux, it could run on top of any OS. I believe NASA did the original design work to be OS agnostic.

    http://www.windowsclusters.org/projects.htm gives a list of current Windows clusters.

    Finally, are you out of your tiny little mind? I wonder why M$ is so keen to help. There is no such thing as a free lunch, especially from M$.

    • Re:Beowulf (Score:4, Interesting)

      by Daniel Dvorkin ( 106857 ) on Thursday February 21, 2002 @01:54PM (#3045805) Homepage Journal
      M$ does donations and low-cost setups for schools all the time. (Usually it's software, not hardware, of course, since software actually costs them next to nothing to produce. They recently gave a "$500,000" software donation to my school that, based on the number of CDs and software boxes, probably cost them something in the neighborhood of $25 -- but it's still a half-million-dollar tax writeoff.) Actually, plenty of other software companies do too, though I'm not sure anyone else is as aggressive about it as M$.

      Why do they do this? Simple: it's a long-term marketing trick (and a cheap writeoff.) Train the students with Windows, Office, Visual Studio, MSSQL Server, IIS, et bloody cetera, and that's what they'll know when they get out into the working world. Companies that already use M$ shit will have an easier time hiring new people. Companies that are deciding on new systems will have people in their IT dept. who say, "Well, I don't know anything about Linux/Solaris/gcc/Apache/whatever, but I know all about NT and VC++ and IIS," and may well make multimillion-dollar purchasing decisions on that basis. It's not hard to figure out.
    • Re:Beowulf (Score:3, Insightful)

      by $nyper ( 83319 )
      Actually, the best place to test Microsoft theories or proofs of concept would be a traditionally black or minority college/university. Since the early-to-mid 90s (as far back as I can remember), Microsoft has donated software to these institutions free of charge. No licensing fees whatsoever. It is too bad that these smaller universities cannot pump out some serious testing of these types of things. With licensing no longer a factor, we could all then see the true technical and performance results instead of our many biased Linux opinions. (Including my own.)

      If I were going to run a cluster that needed to take advantage of computational power, I would go Linux. However, my choice would be based on the fact that up to this point I still have not seen enough documented proof to support the theory that Microsoft vs. Linux clustering is even a battle. From my current knowledge I would have to deduce that they currently have their different uses, even though the linked article above says that Microsoft clusters are capable of computational collaboration. Again, as many have already stated, cost is always a factor when dealing with Microsoft, and you have to take it into consideration.

      I really will need to study the articles in the link above more closely. Many thanks for publishing it; this is the first thing I have read to support Microsoft's capability for computational collaboration within a cluster environment.

      Remember, my little penguins, do not be so quick to judge any OS, even Microsoft's. Microsoft may not be cheap, it may be filled with bugs, and it may not always be the most secure. But it does serve its uses in the world, for now. ;)

  • by Booker ( 6173 ) on Thursday February 21, 2002 @01:41PM (#3045678) Homepage
    Ok, you have a solution in place. It works. Some sales guy wants you to change your solution that works.

    Make him convince you that the time and cost of the switch is going to gain you something.

    Does your current setup not do what you need it to do?
  • by merlin_jim ( 302773 ) <{James.McCracken} {at} {stratapult.com}> on Thursday February 21, 2002 @01:42PM (#3045685)
    Hello,

    We run an MS cluster here. VERY big app... so big, I am loath to name figures, because that would identify to MS just who is talking here...

    But we use MS clustering for our web app. Our setup is that we have a database server with 4 procs and a growing array of web servers with 1 proc each, all of which use disk space on a SAN. W2K clustering manages the load balancing as well as allocating disk space out of the SAN to virtual partitions as needed. The original poster is correct; MS clustering is for load balancing, not computation. I have seen many times that Microsoft sales reps don't have a clue what they're trying to sell; they're just told from on high to replace Linux with Microsoft wherever they can. I think this is clearly a case of that.

    My advice? Ask the sales rep to demonstrate how MS clustering will solve a common comp-sci problem with more MIPS than each box alone has. Point out that you're not running a web server or any such service on these boxes, but that they're for raw computation. Even better, see if he'll let you talk to a technician about how W2K clustering can meet your 'unique' (at least to MS) needs.

    Now, for everyone else... Don't get me wrong. W2K clustering is a great technology for building highly performant, highly reliable, highly scalable applications quickly and easily. But it scales in the direction of millions of users, not millions of computations.
    • by crimoid ( 27373 ) on Thursday February 21, 2002 @01:56PM (#3045823)
      Apparently you (and most everyone else) didn't take the time to even look at the link provided. Microsoft DOES have computational clustering, not just "traditional" clustering.

      MS Computational Clustering [microsoft.com]
    • by merlin_jim ( 302773 ) <{James.McCracken} {at} {stratapult.com}> on Thursday February 21, 2002 @02:28PM (#3046092)
      I must now put on the traditional monkey hat of shame, for the naysayers are quite correct. There are TWO Microsoft products called clustering. One is used by Windows 2000 Advanced Server to do load balancing, and is, in fact, split into two parts, the first called Clustering, the second Network Load Balancing... see this page [microsoft.com], which includes the statement "Both [of the Windows 2000 Advanced Server] Clustering technologies are backwards compatible with their Windows NT Server 4.0 predecessors". The other is High Performance Clustering (HPC), in its current form called the Computational Clustering Technical Preview (CCTP), which I am certain has nothing to do with the previous Clustering technology... I doubt it was available for Windows NT 4.0, among other things (thus the Technical Preview status).

      Notes for any and all interested in this; it's a technical preview, which any other company would call a pre-Beta or an Alpha release. The only way anyone sane would use this in a production system would be as an Early Adoption Partner...
    • My advice? Ask the sales rep to demonstrate how MS clustering will solve a common comp-sci problem

      This is a great idea. ScaLAPACK benchmarks are a popular choice. Also think about what you are really getting for your money (license fees). I work with a modest Beowulf (~50 CPUs) using Linux, and I have no doubt that it would be technically possible to use Windows... but you would spend a lot of time installing kludgy ports of Unix tools: Cygnus wintools, PBS, rsh, Perl, etc. At the very least, the two most popular message-passing libraries (MPI and PVM) both rely on rsh.

      All the tools that make a Beowulf what it is are free software; there is really NO added value in running them on Windows.
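
      As an aside, the message-passing layer being argued about here is quite thin. A minimal MPI point-to-point exchange in C looks like the sketch below; this is my own illustration (the payload is meaningless), but sends and receives of this shape are what the libraries run in the processes they start over rsh on each node.

        /* pingpong.c -- rank 0 bounces a message off rank 1. */
        #include <stdio.h>
        #include <string.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            char buf[64];
            MPI_Status st;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (size >= 2) {                 /* needs at least two ranks */
                if (rank == 0) {
                    strcpy(buf, "ping");
                    MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
                    printf("rank 0 got \"%s\" back\n", buf);
                } else if (rank == 1) {
                    MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                    strcpy(buf, "pong");
                    MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            MPI_Finalize();
            return 0;
        }
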
  • Clustering for Windows requires Windows 2000 Advanced Server and a great deal of patching, and with old hardware you are out of luck trying to run Windows 2000 Advanced Server.


    Distributed computing for Windows has been around for a while, though; Seti@home has been doing it for years.
  • by datastew ( 529152 ) on Thursday February 21, 2002 @01:43PM (#3045699)
    Looks like Microsoft is busy being slashdotted.
  • MS AppCenter server (Score:2, Interesting)

    by Twister002 ( 537605 )
    Chances are the MS rep didn't understand MS clustering. He just knew that you had a Beowulf cluster and he wanted to sell you MS software, so he figured he'd sell you an MS cluster, regardless of whether or not it would do what a Beowulf cluster could do.

    However, there is a server solution I saw demoed at an MS DPS I attended called Application Center [microsoft.com]. It allows you to manage your cluster and distribute workloads throughout the cluster.

    Now, I'm not sure if you NEED this to take advantage of Windows 2000 clustering. The last time I worked with an MS cluster was under NT 4, and it was failover only; the load balancing was "faked" by a router that would just alternate which server the request was sent to.

    (insert "yeah but MS is evil" comment here)
    (insert "yeah but Linux Beowulf clusters cost less" comment here)
    (insert "yeah but who wants to have to reboot your cluster all the time" comment here)
    (insert "I wish the sigs were longer because that's a really good quote by Richard Feynman" comment here)
  • what?? (Score:5, Insightful)

    by Lumpy ( 12016 ) on Thursday February 21, 2002 @01:43PM (#3045706) Homepage
    an MS rep asked what it would take to get them to switch to a Microsoft cluster.

    First, the rep needs to prove that $199.00 per node in software fees provides major benefits over the Linux cluster. How many Windows clusters can he list for you to call and ask about? References: ones you can call so you can talk to the guys running/maintaining them. Show where Microsoft provided increased profits or savings over an open alternative.

    If they can't give you a dollar amount that shows increased profits or major savings, then be sure to tell the rep that he shouldn't let the door hit him in the ass on the way out. It isn't MS versus Open anymore in today's economy... it's what can get it done and save me money or give me more profits... and this is what makes open solutions win: Microsoft can't deliver savings, and the performance difference isn't enough to give profits that will more than overcome the added expense of Microsoft.

    Get real numbers; talk to real people running real clusters on all platforms. If you have real numbers, then you can make solid decisions.
  • Windows Clustering (Score:3, Interesting)

    by cluge ( 114877 ) on Thursday February 21, 2002 @01:44PM (#3045708) Homepage
    Windows clustering works as advertised for the most part, but is expensive. Some exceptions involve heavily loaded machines pulling from Fibre Channel arrays and NAS; both kinds of network-attached devices seem to have some problems. Driver issues? Don't know.

    We haven't seen the reported "BSOD round table," where one machine crashes, shortly followed by another and another. The problem we have seen is that a single machine BSODs and the other machines in the cluster don't realize it's down.

    If your already in the MS camp, it will work, it look at other solutions. I think they will be more cost effective.

    • If your already in the MS camp, it will work, it look at other solutions. I think they will be more cost effective.

      should be "If your already in the MS camp, it will work, if you are not I would look at other solutions. I think they will be more cost effective."
  • ...with M$'s "Computational Clustering Technical Preview":

    * PLAPACK package (open source software)

    heh.

    -JT
  • You can do a Windows cluster thing, but it's still not as good even as Condor for Unix. All in all, I'd say to tell them to go screw themselves unless they want to give you money for a LOT more hardware as well as software, to make up for the fact that you're not going to be able to do as much with it. If MS wants to be taken seriously as a hardcore number-crunching OS, the bastards can EARN it instead of trying to bribe academics.

    I've been looking at this a lot myself now, as I'm also building a cluster for use in a computational bio lab at Florida State. It certainly seems that Linux is the only way to go right now. In case anyone cares, my cluster right now is 16 nodes of:

    Tyan S2460 with 2 Athlon MP1800+ processors per node
    1 gig PC2100 RAM per node
    20 gig 7200 RPM Maxtor HD
    3Com Gigabit over copper Ethernet
    low-end cheapass video and floppy, etc.
    All in these really nice rack cases, with a big black 2001 monolith-esque rolling rack to shove it all around in. It cost just about $26,000 to build so far, but the plans are to expand it to as many as 512 nodes within the next year or so. Whee!
  • How good is any MS product in its v1.0 release?

    Seems to me that historically, MS rushes a v1.0 product out to stem the tide of a competing product and then spends the next couple of releases getting a "real" product out the door.

    I have zero experience with unix clustering but would be suspicious of the MS offering until it has a chance to mature.
  • I remember reading somewhere that everybody's favorite MPEG encoder (TMPGenc) supported a distributed model for encoding.

    That said, with the three computers I have at my place (a p3 desktop, a celeron I use as a low grade server, and my p3 notebook) I'd love to be able to set up a cluster for encoding. Such operations will be the killer app for clustered systems IMHO.

  • I believe Indiana University has two hardware-similar clusters, one running Unix and one running some flavor of DOS. I don't have the URL, but it shouldn't be hard to find.
  • Stability issues (Score:5, Informative)

    by The Panther! ( 448321 ) <panther@austin.YEATSrr.com minus poet> on Thursday February 21, 2002 @01:45PM (#3045726) Homepage
    At my last job, we had a COW (Cluster of Workstations) running all sorts of operating systems. Except Windows. Why? Because they won't run in a production environment for more than a few days without freezing or crashing, and the system administrators refused to babysit them. With Windows 2000, I've had my home machine run for upwards of 28 days without a reboot, but only if all the video drivers are stable and the machine is not doing too much at any given point (say, burning CDs while watching movies and keeping my net connection above 200k/s). But so help you if a driver freezes: there's no way to reset it. Your hardware will play into your decision as much as the operating system, I believe, due to stable driver support.

    In terms of performance, Windows kernels have pretty good latency compared to 2.2.x Linux kernels, so running a full-screen DOS app might give very good performance, but there's a lot of overhead munching into your RAM, which is likely to be an expensive premium on older hardware.

    Lastly, with Windows I've never heard of channel bonding for Ethernet (3 100TX cards ~= 1 gigabit), nor of diskless booting. These can be really necessary for large clusters to keep maintenance down and performance up without buying higher-end equipment.

    • channel bonding (Score:3, Informative)

      by No-op ( 19111 )
      Pretty much all of the Intel server cards, as well as several of the desktop cards, support channel bonding. All Compaq server NICs support this as well, and it works great.

      However, I would take issue with your assertion that 3 100Mbit cards are roughly equal to a gigabit card. While it's true that something like 4 100Mbit cards will give you close to the real performance of a gigabit card when used in a low-end PC, there is much to be gained by using actual gigabit (use of jumbo frames, better latencies, etc.).

      If you're going to build a cluster, and you actually have a budget, you're going to buy decent yet cheap server boxes. These will most likely include 64-bit PCI slots, and there lies your motivation for gigabit. The performance there is unparalleled when using a real wirespeed switch, without resorting to faster proprietary technologies.

      My 2 cents.
  • but based on personal experience, Windows ME is pretty much a cluster.
  • by marian ( 127443 ) on Thursday February 21, 2002 @01:47PM (#3045739)

    While I haven't been near a Microsoft Cluster in a while, I do remember a couple of things that really stand out about them:

    The number of systems able to be part of the cluster is severely limited. At the time, it was limited to 2, but I'm pretty sure that has increased to a somewhat larger single digit number.

    The number of applications available to run on the cluster is just as severely limited. Again at the time, there were exactly zero applications, but I know that there is at least one (Exchange) now.


    Given the limitations of what uses you can put an MS cluster to, I wouldn't bother with it in the first place.

  • Not only can you do all of your research on a Windows cluster, you can also consolidate all of your major security holes into one small area. It makes tracking down the problem children on your network much easier.
  • Funny...
    I followed the link to Microsoft's clustering solution. Another link took me to a free evaluation page - the package includes:
    • Microsoft Windows 2000 Professional evaluation version
    • Microsoft Windows 2000 Server evaluation version
    • Microsoft Visual C++® 6.0 Standard Edition
    • MPI Pro 1.6 from MPI Software Technology, Inc.
    • Cluster CoNTroller 1.0.1 from MPI Software Technology, Inc.
    • Visual Fortran 6.5 Standard (Trial Version) from Compaq
    • Math Kernel Libraries 5.0 from Intel
    • Computational Cluster Monitor from Cornell Theory Center
    • PLAPACK package (open source software)

    All for $7.95 shipping and handling!

    May be a cheap way to get a few Win2K licenses? ;)
  • Your sales rep is going to try to do just that: sell. However, if he has a decent presales engineer you can talk with, give him the details. For the most part, these engineers aren't the rabid money grubbers you see on the sales side. So, explain your issue to him, see what he has to say, and go with that. I seriously doubt the sales guy knows the difference between true clustering and fault tolerance, which is pretty much what MS's clustering services are.

    Keep in mind, the sales rep will not have your best interest in mind, just your money.

    Yes, I know there are good reps out there, but I have become quite jaded with them on the whole, especially since I used to be that presales engineer telling clients that what the rep just said was:

    a) not feasible
    b) harder than described
    c) not worth the money invested.

    anyhow, my $.0002

  • First Hand Info (Score:4, Informative)

    by GeckoX ( 259575 ) on Thursday February 21, 2002 @01:51PM (#3045769)
    We researched MS Clustering very extensively. We're already an MS shop and even still it was cost prohibitive.

    Notes from experience:

    1) Clustering with Windows requires one of the following OS setups: Win2K Server WITH MS Application Center, OR Win2K Advanced Server. (Similarly with the XP platform.)

    2) OS licenses will therefore run between $1000-2000 _per machine_!

    3) If you need Application Center, which you likely will, you're talking (if I remember correctly) about another $1K per machine.

    4) Of course MS is just getting into this so don't expect it to be easy, well documented or stable.

    Finishing Notes:

    Obviously, Linux would be mucho cheaper

    Easiest, and still cheaper than MS, would be the plug-and-play Mac solution!
  • No command line (Score:4, Insightful)

    by Dominic_Mazzoni ( 125164 ) on Thursday February 21, 2002 @01:56PM (#3045818) Homepage
    I've been lucky enough to have access to a Beowulf in my current job, and the way we use it, Windows just wouldn't fly, because there isn't a powerful command line.

    We're primarily using the Beowulf for computations which are "embarrassingly parallel" - in other words, tasks for which it is trivial to partition the input into 16 equal-sized pieces, give one to each node, and then collect the results and paste them together. For example, multiplying incredibly huge matrices and brute-force keyspace searching are embarrassingly parallel.

    For us, the primary advantage of running Linux on the Beowulf is that most of the time we don't need to write custom software to speed up a calculation. We just write a shell script that rsh's into each box and runs a program with slightly different command-line parameters on each one.

    Obviously for some computational problems it's worth using MPI to have the processes communicate with each other, or load-balancing software so that we can run lots of smaller, but different-sized jobs, and these techniques would probably work equally well whether you're running Windows or Linux.

    But for experimentation and prototyping, and quickly distributing easy problems, I think there's an incredible advantage to having a command line. (Of course you could install Cygwin on all of the Windows boxen...but why?)
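
    To illustrate that pattern (my sketch, with a made-up numeric kernel): the worker is just a plain C program that takes its slice of the input on the command line, so the driving shell script can rsh a different slice onto each node and paste the printed results back together.

      /* worker.c -- process rows [start, end) and print a partial result. */
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          long start, end, k;
          double acc = 0.0;

          if (argc != 3) {
              fprintf(stderr, "usage: %s start end\n", argv[0]);
              return 1;
          }
          start = atol(argv[1]);
          end   = atol(argv[2]);

          /* stand-in for the real per-slice computation */
          for (k = start; k < end; k++)
              acc += (double)k * (double)k;

          /* one line per node; the collecting script pastes these together */
          printf("%ld %ld %.6f\n", start, end, acc);
          return 0;
      }
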
  • by bADlOGIN ( 133391 ) on Thursday February 21, 2002 @01:56PM (#3045822) Homepage
    Those Linux boxen can run just fine w/o video cards, keyboards, or mice connected to them. Can the same be said of any 'Doze variant? Of course, the licensing cost is the devil's bargain to be wary of here. Even if they got some nice PR deal, if it's a RESEARCH operation at a university and someone might want to SHARE the fruits of the RESEARCH, anyone else who wanted to verify or extend the work with the clustering software would also have to run 'Doze. Is M$ going to step up and offer the same deal (or better) to every other member of the research community who wants to contribute, analyze, or validate and expand the work? I didn't think so.

    If there's one place where Linux excels and Microsoft needs to be kept out of with armed guards, concertina wire, and rabid dogs, it's the computing research centers in higher education. Scraping by to live and make postgraduate tuition can suck, but having to fight for grant money that only lines the pockets of the richest man on the planet just so you can do your thesis is adding way too much insult to injury. For the sake of future scholars, show this sales weasel the door with the help of your foot.

  • by Toodles ( 60042 ) on Thursday February 21, 2002 @01:58PM (#3045834) Homepage
    Everyone talks about setting up Beowulf clusters. It's pretty easy to set them up; just make sure there is a lot of usable bandwidth between the systems.

    The question that isn't being asked here is about the application. Sure, you have a cluster. But just what is it doing? What numbers are you crunching with all those gigaflops? To take the Beowulf idea out of the realm of geek bragging rights into actual useful production takes an application, and you can bet that most are custom designed in house.

    Very little of the OS itself is involved in the real applications that make Beowulfs useful and money-making. Take a look at your intended application and see what its requirements are. If you are writing it in house, tell the MS rep to take a leap, since you won't have to worry about 100+ MS licenses, Visual Studio licenses, or whatever else. If your intended application requires an MS OS underneath, hold out on the rep until he agrees to a dramatically reduced price on the software. But worrying about the OS in a cluster before looking at the application is counterproductive.
  • by Oestergaard ( 3005 ) on Thursday February 21, 2002 @01:59PM (#3045851) Homepage

    For a computational cluster, the OS itself shouldn't really matter. What matters is, do you have the tools you need, and does the environment allow you to work with the cluster in a flexible way.

    For a typical computational cluster, what determines the performance will be the quality of your application. Only if you pick an OS with some extremely poor basic functionality (like horribly slow networking) will the OS have an impact on performance.

    People optimize how their application is parallelized (e.g. how well it scales to more nodes); the OS doesn't matter in this regard. They optimize how well the simple computational routines perform (like optimizing an equation solver for the current CPU architecture); again, the OS doesn't matter.

    So, in this light, you might as well run your cluster on Windows instead of Linux, or MacOS, or even DOS with a TCP/IP stack (if you don't need more than 640K ;)

    However, there's a lot more to cluster computing than just pressing "start". You need to look at how your software performs. You need to debug software on multiple nodes concurrently. You need to do all kinds of things that require that your environment and your tools let you work on any node of the cluster, flexibly, as if that node was the box under your desk.

    And this is why people don't run MS clusters. Windows does not have proper tools for software development (*real* software development, like Fortran and C; VBScript hasn't really made its way into anything resembling high performance, and God forbid it ever does).

    Furthermore, you cannot work with 10 Windows boxes concurrently as if they were all sitting under your desk. Yes, I know terminal services exist, and they're nice if you're a system administrator, but they are *far* from being usable for running debuggers and tracing tools on a larger number of nodes, interactively and concurrently.

    Last but not least, there are no proper debugging and tracing tools for Windows. Yes, they have a debugger, and third-party vendors have debuggers too. But anyone who's been through the drill on Linux (using strace, wc -l /proc/[pid]/maps, ...) and needed the same flexibility on Windows knows that there is a world of difference between what vendors can put in a GUI and what you can do when you have a system that was built for developers, by developers.

    So sure, for a dog & pony show Windows will perform similarly to any other networked OS with regard to computational clusters. But for real-world use? No. You need tools to work.

  • by WeeGadget ( 26046 ) <Slashdot@NOsPaM.Weesner.org> on Thursday February 21, 2002 @02:01PM (#3045863)
    ...what it would take to get them to switch to a Microsoft cluster.

    Simple... ask for :

    1. Modifiable source code... essential for University level research.
    2. Blanket permission to publish research methods and results, including code.
    3. No NDAs that could limit a student's job opportunities. (i.e. "No Compete" clauses, etc.)
    4. Free or low cost would be nice :o)
    Jonathan Weesner
  • by tcc ( 140386 ) on Thursday February 21, 2002 @02:01PM (#3045872) Homepage Journal
    If you want to do some kind of render farming or number crunching across a network, why would you need *MANY* copies of Win2K Server? I might have missed a point, but Win2K Datacenter is about load balancing things like bandwidth management, IO requests, and uptime if one of the machines fails, etc...

    That matters if the server holds the data and you have the potential for a lot of clients making requests (thus I/O and bandwidth, like a P2P crunching system, to name a popular example). Even in that example, I don't see why you'd want to switch to Microsoft if you got it to work on Linux; you'll need very good knowledge of Microsoft server products (or to hire someone who has it) if you want to move to anything more than a standalone server. Also, last time I checked with M$ on that solution, because I wanted a safer domain and maximum uptime, everything was doubled for 2 machines. I thought it would be a bit cheaper than that, but heck, for the price of Advanced Server vs. the standalone version with 25 users, you can get an extra tape drive and a cheap RAID1 to mirror your critical drives (on a small business server).

    Since you mention that you WISH to get donations, and you want raw computing power, instead of buying MS licenses concentrate on the goal you are trying to achieve: distributed crunching power with scalable servers. Basically, you'll need HARDWARE to crunch. (I still don't get why you'd NEED a server OS to run number crunching; workstations can do the same and report to a server, as I was saying before.) Check what you have, check what you need, design around that, and do a cost analysis, since it seems to be very critical in your case.

    There are some cases where you'll want MS servers. Here at work I've set up an MS server to have fewer configuration and troubleshooting issues with my Win2K Pro machines (at least I know that when something screws up, it's MS-related for sure :) ), but in your case I'd say stick with what you've got unless you get a buttload of funding and a very good reason to move to Win2K (which I don't really see), because a datacenter plus an admin will cost you in the six digits to maintain and license.
  • by Sabalon ( 1684 ) on Thursday February 21, 2002 @02:06PM (#3045908)
    blah blah blah blah Linux cluster blah blah blah.

    ...Execute search MS - terms: cluster
    Results: Microsoft Clustering, formerly known as Wolfpack.

    ...Execute talk: Yes... MS does clustering. What would it take to convince you to use ours?

    I think if I was in the customer's position, I'd agree to it just to shove it back in their face when I ask how it distributes the computing load etc... of course that would be

    blah blah blah computing load blah blah
    ...Execute search MS - terms: computing load
    Fuzzy Logic Results: Microsoft Clustering, formerly known as Wolfpack. Use for load balancing.
  • by acoustix ( 123925 ) on Thursday February 21, 2002 @02:07PM (#3045924)
    that he says he "works for a mid-sized mid-western university" when his handle has a link to a Ball State University email address.

    Just come out and say it.
  • What's your app? (Score:5, Insightful)

    by gcoates ( 31407 ) on Thursday February 21, 2002 @02:09PM (#3045941)
    IAABA. (I am a beowulf admin).

    Beowulf clusters get built to support your application, not the other way around. Your choice of hardware and OS will depend on the parallel nature of your code. Do you need myrinet, or can you get away with fast ethernet? Will your code even compile under win32? Do the supporting libraries (PVM/MPI/BLAS whatever) run under win32? What about the queuing system?

    How are you going to manage the cluster? You need automation, even for small clusters. How easy is it to add a new user, apply a patch or change a bios setting on your cluster without having to plug a keyboard and monitor into each node? What about central logging? How about automated OS installs when you add another 100 nodes when you get your funding?

    Oh, and benchmark, benchmark, benchmark. That means your code, running your datasets, on your hardware and OS, not vendor-supplied numbers. If you have a serious hardware vendor, you should be able to wrangle demo machines off them. Try before you buy.

  • by NerveGas ( 168686 ) on Thursday February 21, 2002 @02:12PM (#3045974)
    Years ago, I worked at an ISP that ran partly on Solaris, mostly on Linux. A few MS reps came in to try to get us to switch to NT. We let them go through their routine, then walked them around the operations room, telling them the capabilities of what we had and asking if NT could match them. The response was repeatedly "no." When we pressed them on a few issues, they gave in rather easily. When we asked them why you couldn't bind another IP to an Ethernet card under NT without a reboot, they admitted "lazy programming."

    So, take the MS reps through the operation, tell them the capabilities. Ask them if they can meet or exceed them. If they say "Yes", you're either not using the real capabilities of your Linux machines, or they're lying.

    steve
  • by spongman ( 182339 ) on Thursday February 21, 2002 @02:19PM (#3046029)
    Microsoft has a few types of clustering:
    1. Failover clustering. This is an OS service that servers like SQL Server and Exchange plug into that allows Active/Passive or Active/Active clustering over a shared SCSI/Fibre bus. In theory you could write your app to use this service but I think it would be overkill.
    2. Network Load Balancing. This is just a software version of the standard kinds of NLB found in cisco boxes.
    3. Component Load Balancing. This is the most suitable. It's provided by Application Center and it allows you to deploy COM+ objects on a cluster of machines and have the calls distributed according to the load on those machines. You can control the threading and lifetime of the objects and view the status of the machines pretty easily using the Application Center MMC plugin (or SNMP, I believe). You'd have to wrap the computational part of your application into one or more COM objects. Once you've done that then you can create and call those objects in the cluster as if it were one machine - the clustering is transparent to the client application. I played around with AC a bit when it was in beta for a project that I was working on. We didn't go with it in the end because the design of our application ended up not requiring it (we just went with hardware load balancing), but it seemed like pretty cool technology - if you're into the whole COM thing. It has a really cool rolling deployment feature where you can redeploy your components (and/or IIS application if you have one) to your cluster incrementally while it's still running.
    Here are some links to docs on MS's site:

    Introducing Windows 2000 Clustering Technologies [microsoft.com]
    Application Center home page [microsoft.com]
    Component Load Balancing [microsoft.com]

  • by stereoroid ( 234317 ) on Thursday February 21, 2002 @02:20PM (#3046035) Homepage Journal

    A few points:

    • It's only available with Advanced Server, which means extra cost.
    • Nearly all applications & services (daemons) will be running on one node at a time. If they are set up correctly under Cluster Administrator, they still run on one node at a time, except that they can fail over.
    • A Cluster Group is the unit that runs on one node at a time and fails over, so it will contain applications and the resources those applications need.
    • During a failover, resources in a cluster group are taken offline by order of dependency (unless the node crashed!), and brought back online also by dependency. So, if an application depends on a disk, the application goes offline before the disk, but the disk comes online before the application (logical).
    • Multiple groups run on multiple servers at any time, so if you spread them out, machines aren't sitting idle.

    You can set up any application or service to cluster & fail over if required, as long as:

    • It stores all its live working data on shared storage,
    • You correctly place it in a logical cluster group that includes the resources your app needs, and specify those dependencies (e.g. my app needs to use the disk and IP address in Cluster Group X, so it must be in Cluster Group X), and
    • You can specify what Registry keys (if any) need to migrate between nodes.

    Active/Active mode is more complicated, meaning instances of an application running on different nodes, all accessing the same data on disk. Only certain applications can do this successfully, e.g. Oracle, which does so by using a custom file system and effectively bypassing the Windows Cluster Service. Windows & most apps will normally throw a fit if there are clashing file requests from multiple nodes, since Windows caches file tables in memory and can thus lose track of the real situation on disk (bad news). I've seen it BSOD in such cases.

  • by TeknoHog ( 164938 ) on Thursday February 21, 2002 @02:28PM (#3046083) Homepage Journal
    this article without the obligatory Beowulf comments?
  • by Arandir ( 19206 ) on Thursday February 21, 2002 @02:32PM (#3046114) Homepage Journal
    my understanding is that an MS rep asked what it would take to get them to switch to a Microsoft cluster.

    You've got a golden opportunity here! Microsoft does it your way or they don't get the sale.

    Let them know the nature of a cluster in a research project. Nodes will be swapped in and out. New ones will be added. Different OSs will be used. So tell them you want a copy of Windows for each potential node, licensed to the University and not to any individual node. Tell them you need full rights to install, reinstall, and uninstall any particular copy on any particular node. Tell them you will not accept any terms restricting the cluster to Windows only.

    If you really want to play hardball, tell them you don't even want licenses, but bona fide user-owned copies of Windows subject only to the provisions of copyright. In other words, you don't want to be subject to any EULA. Then you'll discover how much Microsoft wants your cluster to be a Windows cluster.
  • by BWJones ( 18351 ) on Thursday February 21, 2002 @02:38PM (#3046173) Homepage Journal
    Try looking at Pooch from Dean Dauger. http://www.daugerresearch.com/pooch/whatis.html

    This would allow you to use the Macs (OSX UNIXY goodness too) individually as personal workstations (for writing, graphics, computation, surfing the web) while at the same time using them in clusters for compute intensive work. This makes for a doubly productive machine and one that is much cheaper as more work can be accomplished with it than simply using it as a dedicated node.

    Mac clusters are easy peasy to set up (even junior high students are doing it), as the one-page instructions and the AppleScript-ability should indicate. They're also pretty damn fast, given the built-in Gigabit Ethernet of G4s and AltiVec (if taken advantage of, as in Apple's version of BLAST).

    Finally, the other item of interest: you can use any Mac you have, G3s or G4s of any model and speed, as you do not have to balance everything like on typical clusters, where all of your hardware has to be exactly alike. Your cluster can even include the iMac on the secretary's desk!
  • Simply... (Score:5, Interesting)

    by Glock27 ( 446276 ) on Thursday February 21, 2002 @02:38PM (#3046175)
    ask the Microsoft rep to point out how many machines on the Top 500 Supercomputers List [top500.org] are running Microsoft operating systems.

    Then, point out the scads of Beowulf clusters and Linux/Unix based systems.

    Finally, inform the rep and your management that you've chosen the more cost-effective, higher-performance, standardized choice... Unix.

    If management resists further, do a cost analysis. That'll convince them.

    299,792,458 m/s... not just a good idea, it's the law!

  • by doorbot.com ( 184378 ) on Thursday February 21, 2002 @02:38PM (#3046176) Journal
    ...to ask a question that I wanted to ask as well. Granted, this topic seems a little strange, considering the Linux cluster is in place, and it seems like the kind of question that encourages a Microsoft vs. Linux world domination showdown for grandmaster of the universe. It also shows limited business sense on the part of the poster (why change something that works well when you can't afford a replacement?).

    Right now a coworker and I are looking at pricing and configuring a fault-tolerant cluster for a client who runs Windows 2000 and Exchange 2000. They're a bit paranoid, so they've decided they want a cluster. We've tried to educate them on exactly what a Microsoft cluster can and can't do, but it's difficult to understand exactly what they want (basically an entire network exactly like Microsoft's own, but for $1000).

    Pricing on a two system cluster is around $50,000. Buying two copies of Exchange and Windows Advanced Server will total $20,000. Then there's the hardware costs. For our client, they've specifically requested this, so they're ready to pay.

    My question to Whamo is: are they really taking the Microsoft rep seriously? If they have to pay software costs for their new cluster, that's going to mean one of two things: either buying fewer CPUs to add to the cluster, or not doing the project at all, because the software alone will put them over budget. With Advanced Server running somewhere around $4000, that's a lot per machine when Linux costs at most $5 to burn a CD after downloading it via the university's T1/T3/etc. Whamo says "it is running on old hardware and is basically used for dog and pony shows to get more funding and hopefully donations of higher-end systems," and to me that is your answer. If you can't afford the hardware, you can't afford to buy Microsoft's software...

    There's also MOSIX [mosix.com], but I don't have much experience with it and so can't comment on it.
  • by Nickodemus ( 529872 ) on Thursday February 21, 2002 @03:44PM (#3046760)
    Microsoft Cluster Service is designed for one thing: high availability (little or no downtime, plus load balancing). Beowulf clustering is designed for one thing: parallel processing (data analysis / number crunching). They are two different types of clustering, so the debate on cost is largely beside the point.

    For the record, though: while Linux is as capable of high-availability clustering as Microsoft's offering, it costs next to nothing. With Microsoft you have to buy an Advanced Server license for each cluster node, and then licenses for each application as well -- and for cluster-aware Microsoft apps that means the Enterprise editions. Advanced Server costs in the $4000 range; SQL 2000 Enterprise Edition costs in the range of $11,000 per node. If you are backending a website with a SQL cluster, just for SQL you are looking at around $20,000 per processor.

    If you are looking for a cluster to be online 24x7, go with Microsoft (and pay the additional money for support). If you are looking to predict weather patterns, analyse ocean currents, or predict the lottery, use Red Hat and Beowulf (and pay the additional money for support).
  • Shameless plug (Score:3, Informative)

    by Leimy ( 6717 ) on Thursday February 21, 2002 @04:12PM (#3046935)
    Since you asked, come visit MPI Software Technology Inc. [mpi-softtech.com]

    We have been very successful in Windows clustering efforts and offer a professional MPI implementation for Windows platforms. Give us a shot -- I'm sure we could set up an evaluation of some sort.

    That said, we have the following self-kudos:

    CORNELL THEORY CENTER'S VELOCITY CLUSTER MAKES THE TOP 500 LIST (June 16, 2000)
    "Our relationship with MPI Software Technology, Inc. has been extremely valuable," says Cornell Theory Center associate director for systems Dave Lifka. "Good job scheduling, resource management, and reliable MPI are the primary pieces of any high performance computing environment. MSTI has made the extra effort to make sure MPI/Pro and Cluster CoNTroller are ready for a production quality environment. The utilization and stability the AC3 systems is directly related to the quality of their software."

    World's Largest NT Cluster Goes Live (August 25, 1999)
    The Advanced Cluster Consortium (AC3), which includes Cornell University, Intel, Microsoft, Dell, Giganet, and MPI Software Technology, Inc., announced on August 12, 1999, that it had completed the installation of a 256-processor high-performance computer cluster using Windows NT 4.0. AC3's cluster bests a University of Illinois 192-processor NT cluster, which Windows NT Magazine covered in June 1999.

    As you can see we've been at it a while! :)
  • by doorbot.com ( 184378 ) on Thursday February 21, 2002 @04:17PM (#3046988) Journal
    (From http://www.microsoft.com/windows2000/hpc/faq.asp [microsoft.com])

    I tried to read between the lines so we can get the "real" picture... my comments are in italics, and delimited with brackets.

    Q. How does a Windows-based supercluster compare with one running UNIX or Linux?

    A. In short, there's very little substantive difference [ except you have to pay for our software, and it's not cheap ], but owners of existing UNIX-based solutions will face changes that will cause them some work and discomfort (less for users than for their current administrators and support staff) [ because when the server blue screens in the middle of the night who gets called? ]. These are offset in part by lower costs of ownership (technical skills required) [ because incompetent Windows admins are a dime a dozen ], breadth of applications and support tools [ expenses ], vendor support options [ additional expenses ], and commonality with the constantly improving desktop environment [ which is completely useless for a (headless) server ].

    From a hardware perspective, there's very little difference seen by the application. In the past, UNIX-based hardware environments had better floating-point performance [ and still do ], but that's been offset in the last few years by Moore's Law curves for large-volume products that have advanced faster than specialty products have [ now you can throw more hardware at the problem for the same price ], as well as the price and support cost differentials between these vendors' products.

    From a software perspective, Windows is a markedly different environment [ hopefully if you agree with this statement you will agree (and believe) our other statements ], designed with priorities set by a much different market segment than traditional science and engineering [ we're trying to shoehorn our product into a market it doesn't belong ]. Windows NT® and now Windows 2000 were designed to meet the needs of those ISVs building products for businesses that are unable or unwilling to dedicate their best people [ incompetent employees/amoebas ] to support their infrastructure (versus focusing on building solutions for their business mission) [ because supporting infrastructure should not be that hard ], as well as the needs of a hardware community that required continuous integration of new devices and components [ such as digital camera support for your database server ].

    [ we hope that you've become completely confused by this, please telephone your local Microsoft sales office and we will "explain" things to you... please have your credit card ready ]
  • by JamesGreenhalgh ( 181365 ) on Thursday February 21, 2002 @04:35PM (#3047130)
    Having seen first hand how poorly the following setup ran, I'd say steer clear of Microsoft until they admit that reboots are not normal:

    2 x HP NetServers, both dual Pentium II Xeon with 1 GB RAM, and a small RAID shelf with 8 x 9 GB disks. Both were NT4 installs at the correct patch levels.

    One machine ran Oracle, the other IIS; they were clustered so that each would take over the other's task should there be a problem.

    Problems:
    1) Crashing (daily at least)
    2) Slow (astonishingly poor, disk defrags once a week helped this)
    3) Sometimes one host would freeze, and the other wouldn't actually notice
    4) Often a shutdown of one node would move the services across, but upon rejoining the cluster, the node with both services would refuse to give one back.
    5) Often, IIS would stop talking, and neither node would actually realise.

    The attempted solutions:

    1) Replaced CPUs, memory, disks, eventually nodes
    2) Reinstalled clustering software, eventually total clean installs of operating system and applications
    3) Support calls to Microsoft, Oracle, and HP, who made the (certified) kit. Oracle and HP both pointed the finger at the OS; Microsoft simply failed to help, when we got any response from them at all.
    4) (this helped) I used one of the spare HP9000 servers to monitor them remotely by trying test transactions (roughly the sketch below) - it alerted people when they fucked up.
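
    For what it's worth, the probe doesn't need to be fancy. Conceptually it was something like this minimal C sketch -- a bare HTTP poke at IIS; the IP, port, and alerting step here are placeholders, and our real probe ran proper test transactions against both nodes rather than just a GET:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        /* Return 0 if ip:port answers an HTTP GET with a 200, else -1. */
        static int check_http(const char *ip, int port)
        {
            struct sockaddr_in addr;
            char buf[512];
            int fd, n;

            fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            addr.sin_addr.s_addr = inet_addr(ip);
            if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                close(fd);
                return -1;
            }
            /* A trivial "test transaction": fetch the front page. */
            if (send(fd, "GET / HTTP/1.0\r\n\r\n", 18, 0) < 0) {
                close(fd);
                return -1;
            }
            n = recv(fd, buf, sizeof buf - 1, 0);
            close(fd);
            if (n <= 0)
                return -1;
            buf[n] = '\0';
            /* Healthy only if the status line says 200. */
            return strstr(buf, " 200 ") ? 0 : -1;
        }

        int main(int argc, char **argv)
        {
            const char *ip = argc > 1 ? argv[1] : "10.0.0.1";  /* placeholder */
            if (check_http(ip, 80) != 0) {
                fprintf(stderr, "%s: IIS not answering - page someone\n", ip);
                return 1;   /* a cron wrapper would do the actual alerting */
            }
            return 0;
        }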

    I think the above says it all really. Standard software on correct hardware - it just didn't work properly. Microsoft can stick their clustering "technologies" where the sun don't shine.
  • MS parallel tools (Score:5, Informative)

    by ajv ( 4061 ) on Thursday February 21, 2002 @08:02PM (#3048446) Homepage
    First, let's rule out the wrong tools: Beowulf is an architecture for massively parallel computation, so we can eliminate Microsoft's two best-known offerings, which are HA tools. Microsoft Cluster Service is two- or four-node high availability, similar to HA Linux's efforts. NLBS is a software version of a hardware load balancer, similar to Cisco's Local Director, and only really good for web farms. So what does MS provide to do the kind of thing Beowulf does?

    COM+ and Queued Components, plus AppCenter.

    The way it works is this: you write a COM+ component that is transactionally queuing-aware. Each component takes a work unit in, processes it, and then sends the result of the transaction to the queued components for reassembly or re-issue (if a node fails to submit a result, for example -- good for checkpointing).

    You can use ordinary Windows 2000 Professional boxes for the worker bees, and a few Windows 2000 Server boxes to coordinate the issuing and control of jobs and to munge the result sets coming back in.

    If you need to submit a wide variety of jobs, the COM+ components will obviously be changing regularly, so it'd be a good idea to go to AppCenter so that you can treat a bunch of machines as a single whole. This allows you to upgrade or deploy an app to literally thousands of machines with a few mouse clicks, in a few seconds. AppCenter also has pretty good resource management, something that might be necessary if multiple jobs are running at the same time.

    The cool thing is the development environment is really friendly and you can make COM+ components pretty easily and test them locally (for the n=1 case) before deploying to the farm.

    There are also specialist message-passing libraries for the Win32 platform, such as PVM and MPI (WMPI). These have the benefit of reusing the knowledge and APIs that users may already be familiar with -- one of the biggest costs when a site converts from one supercomputer to another is rejigging and re-optimizing the code for the new architecture.
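
    To make that work-unit pattern concrete, here's a minimal self-scheduling master/worker sketch in C against the standard MPI API -- the same pattern runs under WMPI, MPI/Pro, or MPICH. The do_work function is a toy stand-in for a real computation:

        #include <mpi.h>
        #include <stdio.h>

        #define TAG_WORK 0
        #define TAG_STOP 1

        /* Toy work unit: square a number.  A real job would do
           something expensive here. */
        static double do_work(double x) { return x * x; }

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int nunits = 100;

            if (rank == 0) {            /* master: issue and collect */
                double total = 0.0;
                int next = 0, active = 0;

                /* Prime every worker with one unit. */
                for (int w = 1; w < size && next < nunits; w++) {
                    double unit = (double)next++;
                    MPI_Send(&unit, 1, MPI_DOUBLE, w, TAG_WORK, MPI_COMM_WORLD);
                    active++;
                }
                /* Collect results; re-issue work to idle workers until done. */
                while (active > 0) {
                    double result;
                    MPI_Status st;
                    MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD, &st);
                    total += result;
                    if (next < nunits) {
                        double unit = (double)next++;
                        MPI_Send(&unit, 1, MPI_DOUBLE, st.MPI_SOURCE, TAG_WORK,
                                 MPI_COMM_WORLD);
                    } else {
                        active--;
                    }
                }
                /* Shut the workers down. */
                for (int w = 1; w < size; w++) {
                    double dummy = 0.0;
                    MPI_Send(&dummy, 1, MPI_DOUBLE, w, TAG_STOP, MPI_COMM_WORLD);
                }
                printf("sum of squares 0..%d = %.0f\n", nunits - 1, total);
            } else {                    /* worker: loop until told to stop */
                for (;;) {
                    double unit;
                    MPI_Status st;
                    MPI_Recv(&unit, 1, MPI_DOUBLE, 0, MPI_ANY_TAG,
                             MPI_COMM_WORLD, &st);
                    if (st.MPI_TAG == TAG_STOP)
                        break;
                    double result = do_work(unit);
                    MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
                }
            }
            MPI_Finalize();
            return 0;
        }

    With an MPI implementation installed, something like "mpicc master_worker.c -o mw" followed by "mpirun -np 8 mw" runs it (exact command names vary by implementation). Rank 0 hands out units and re-issues work as results come back, which is exactly the kind of checkpoint/re-issue flow described above.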

One man's constant is another man's variable. -- A.J. Perlis
