
Messaging vs. RPC

darrint asks: "I'm about to write yet another application with parts on different boxes and OS's and languages. Some of my server apps need to be fault tolerant and/or support load balancing. I've worked so far with CORBA and have also looked at the features of XML-RPC and Ensemble. I see two different approaches: remote procedure calls and messaging. Can anyone enlighten me as to the less obvious consequences of choosing one approach over the other? I'm particularly interested in how the approaches support fault tolerance."
  • I don't have much experience with this aside from working with distributed agent frameworks. Many of them are written in Java and usually implement a naming service on top of RMI, but they also offer the CORBA approach. All of this is seamless to the programmer. As I understood it, RMI (or RPC) has less overhead.
  • by bluGill ( 862 ) on Monday March 05, 2001 @05:35AM (#384597)

    As far as I can see, they are different schemes for doing the same thing. That is, with messaging you have to parse your own messages and then call the correct function, while with RPC the correct function is called for you.

    RPC tends to be a little slower than custom-written parsers, but it makes up for that by being easier to write to and having more features (error handling especially). You can of course get all those advantages with custom-written message passing; it just requires more work.

    Message passing also works well when you have a lot of data to pass between two hosts that both have a byte order different from network order, if your RPC protocol translates everything to network order. (Sun RPC did by default; others don't.)

    In general RPC is quick and easy, and normally good enough. Message passing is harder, but you have more control. If you need every last ounce of performance, message passing will get it, but you have to do all the dirty work yourself, so expect to spend a lot longer on it. RPC will take care of a lot of hard problems, including many that you don't anticipate having.
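The "parse your own messages and then call the correct function" part can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the post: the `get_temp` service, the `op`/`args` wire format, and the `HANDLERS` table are all invented; an RPC stub generator would produce the parsing and dispatch for you.

```python
# Minimal hand-rolled message dispatch (hypothetical wire format):
# with messaging you parse the message and pick the function yourself;
# an RPC stub would do both steps behind the scenes.
import json

def get_temp(city):
    # Invented example service.
    return {"city": city, "temp_c": 21}

HANDLERS = {"get_temp": get_temp}  # message name -> function

def handle_message(raw):
    msg = json.loads(raw)                 # parse your own wire format
    handler = HANDLERS[msg["op"]]         # find the correct function
    return handler(*msg.get("args", []))  # call it manually

reply = handle_message('{"op": "get_temp", "args": ["Oslo"]}')
print(reply)  # -> {'city': 'Oslo', 'temp_c': 21}
```

Every new message type means extending both the parser and the dispatch table by hand, which is the "more work" the comment refers to.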

  • I think someone pointed out the queueing nature of messaging as opposed to RPC, but let me point it out here. There are multiple ways of doing RPC, some of which encapsulate your parameters in some kind of abstract notion of a "message" or "packet" (such as XML-RPC). However, they all have one thing in common: they're synchronous protocols at their core. You make a function call, the parameters are marshalled on the client, transmitted to the server, unmarshalled, the function is executed, and the result is marshalled and unmarshalled on the way back. This all happens in one thread of execution on the client; it seems like you called a local function.

    Messaging, on the other hand, is typically asynchronous. You prepare a message, and then pass it off to your client-based message handler. Then, at some point in the future, it's guaranteed to be executed. You might say you want an asynchronous notification of when it executes, you might be able to poll at a later time for the result, or you might just trust that the system is going to handle the situation appropriately. But the moment the parameters are marshalled on the client, the client can and will continue with its work.

    It's a fundamentally different model for many cases, and sometimes is NOT appropriate for application logic. And at its core, you can implement RPC over some kind of messaging system (in which case the client libraries will just not return from the "enqueue" call until they've gotten the "executed" event back from the server), but that's not what people use it for.

    There are a couple of things which messaging then gives you which conventional RPC does not:

    • You can very easily scale out your number of servers with messaging. While with XML-RPC or SOAP you could in theory just stick a load balancer in front of your server farm and scale out that way, enterprise messaging is more designed for the case where you have multiple servers handling multiple clients, and enterprise systems will make this very easy to handle.
    • You can pass the same message to multiple servers, thus allowing you to update multiple systems at the same time if it's an update type of message.
    • Messaging typically involves transactional semantics. Once you marshall the parameters on the client, the underlying system will guarantee that the message gets delivered, assuming the system doesn't explode (for example, if the server happens to be down, when it comes up the system will deliver your message, but if it never comes up the system will just hold your message forever unless you have an expiration).
    • Enterprise messaging systems are designed to integrate very well with transactional environments, while RPC is not. (If you're doing a transactional call with RPC where the transaction began on the client, who owns the transaction now? That's a big question with RPC. For most messaging systems, it's clear that it's all just one transaction.)

    In short, the primary difference is that RPC typically connotes synchronous execution, while messaging is typically asynchronous execution.
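The synchronous/asynchronous contrast above can be sketched with Python's standard `queue` and `threading` modules. This is a toy illustration of the two call styles, not a real RPC or messaging stack; `do_work` and the in-process queue stand in for the server and the delivery system.

```python
# Sketch of the two call styles: a blocking "RPC" call vs. an
# enqueue-and-continue "message" handed to a background worker.
import queue
import threading

def do_work(x):
    # Stand-in for the remote function.
    return x * 2

# "RPC" style: the caller blocks until the result comes back.
def rpc_call(x):
    return do_work(x)  # one thread, call/return, looks like a local call

# "Messaging" style: the caller enqueues and keeps going;
# a worker drains the queue at some later point.
outbox = queue.Queue()
results = []

def worker():
    while True:
        item = outbox.get()
        if item is None:  # sentinel to shut the worker down
            break
        results.append(do_work(item))

t = threading.Thread(target=worker, daemon=True)
t.start()

print(rpc_call(21))  # blocks, prints 42 immediately

outbox.put(21)       # fire-and-forget; the client continues its work
outbox.put(None)
t.join()
print(results)       # the worker processed the message later: [42]
```

A real enterprise messaging system adds what this sketch lacks: persistence of the queue, delivery guarantees across crashes, and transactional enqueue/dequeue.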

  • by K-Man ( 4117 ) on Monday March 05, 2001 @10:53AM (#384599)
    RPC, CORBA, etc. and messaging are obviously related, but in some sense it's like comparing sockets and UDP. I was working on this conundrum a year or two ago, and eventually I decided to build a model with separate layers similar to OSI.

    You have several goals that you want to serve: reliable delivery, clean interface, type checking, load balancing, crash recovery, extensibility, and so on. If you can lay out the layers so that each goal is handled at some level, without requiring much intervention at a higher level, then your architecture will work.

    IIRC, this is what I came up with:

    object level (e.g. CORBA, database, web "shopping cart"):
      instance/state management (new, destroy, etc.)
      persistent connections/sessions/transactions/data
      type definition and inheritance
      interface (method) definition
    call level (e.g. RPC):
      marshalling (based on interface definition)
      call/return functionality
      object exception delivery (bad params, etc.)
    messaging level (e.g. UDP, message queues):
      reliable one-way delivery
      performance monitoring (queue sizes, etc.)
      network exceptions (e.g. unreachable host)
      queue management - restarting, rehosting, etc.

    This breaks the system into layers where each layer has a definite scope: the messaging level only cares about the life cycle of each message; the call level only cares about the duration of one call/return round trip (based on a single interface); and the object level worries about more persistent things and their lifecycles.
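The three layers above can be sketched as separate Python classes. Everything here is invented for illustration (the class names, the JSON wire format, the in-memory "queue" standing in for reliable delivery); the point is only that each layer exposes a narrow interface to the one above it.

```python
# Hypothetical sketch of the three-layer model: messaging, call, object.
import json

class MessagingLayer:
    """Cares only about delivering opaque one-way messages."""
    def __init__(self):
        self.queue = []          # stand-in for a reliable delivery channel
    def send(self, payload):
        self.queue.append(payload)
    def receive(self):
        return self.queue.pop(0)

class CallLayer:
    """Cares about one call/return round trip: marshalling and dispatch."""
    def __init__(self, transport, target):
        self.transport = transport
        self.target = target
    def call(self, method, *args):
        self.transport.send(json.dumps({"m": method, "a": args}))  # marshal
        msg = json.loads(self.transport.receive())                 # unmarshal
        return getattr(self.target, msg["m"])(*msg["a"])           # dispatch

class Cart:
    """Object level: persistent state behind a defined interface."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
        return len(self.items)

cart = Cart()
rpc = CallLayer(MessagingLayer(), cart)
print(rpc.call("add", "book"))  # -> 1
print(rpc.call("add", "pen"))   # -> 2; state persists across calls
```

Swapping `MessagingLayer` for a real queue (or UDP plus retries) would not touch the call or object levels, which is the payoff of the layering.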

  • Excuse my poor English. Sorry, but I strongly disagree with your model. Moreover, I believe this whole middleware layer is a misconception. Why?

    1) More to learn, more code that is hard to debug, less performance, less stability, and bulky admin apps just to know what's happening. ORBs are no obvious advantage to plug in, because they need stubs (IDL, another bad language) and lack support for your proprietary needs; add poorer system uptime and high maintenance costs (a consequence of apps broken into client and server pieces) to keep your client software and client data synchronized and updated, etc.

    2) Have you ever seen a serious study (not vendor propaganda) demonstrating that middleware is cheaper than old dinosaur mainframe deployment? I have been searching for more than a year, and the only numbers I've found (from the banking industry) state that client/server tech costs 1.5 times more. I spent about 6 months developing CORBA apps (Delphi + VisiBroker + NT) and the results were not what we expected. In fact we had stability problems, poor exception handling, etc.; in the end we solved it much better with an old-fashioned socket app.

    3) The reality, if you look at the polls, is that less than 10% of programmers are using these tools.

    4) The development cycle is diverted from its main goal, which is implementing the real, straightforward, efficient solution, in favor of some theoretical abstract model. There are several hundred functions in badly designed foundation classes, and libraries keep adding more! So you must select what is really useful, and you will keep very few, but the time you spend diving through them is lost.

    The alternative: a NEW MODEL, two-level terminal/server.

    1) Apps, instead of being broken into pieces, must be gathered on the server side (nothing new; that's the way it was done before client/server).

    2) Standardization of messaging (XML?) and protocol data management to deal with ultra-thin clients (terminal devices). The same scheme would be used for host-to-host interaction.

    3) A server front end performs message mapping and service scheduling, load balancing, and takes care of connections, queueing, sessions, and protocols. Services are absolutely isolated from the outside world. This server front end (I have implemented one) is surprisingly simple, can do almost anything you need, and introduces very low overhead into the overall process. My benchmarks showed it can deliver 2000 transactions per second on a modest Linux box (PIII, 64MB RAM).

    4) Client apps obviously run on the server side; the client device becomes an ultra-thin client (a sort of more generalized Internet browser) that runs the specific 'applet' (maybe Java, or another language with lower resource consumption, such as interpreted OCaml), so software updates are automatic and there is no distributed data to synchronize. Multiple hardware platforms could easily be supported, ranging from instruments, telecom devices, POS terminals, Palms, and cellular phones to PC-based terminals.

    5) This architecture is much simpler and cheaper for implementing stable, wide-ranging solutions. With standardized ultra-thin clients you can concentrate all development effort in the right place: THE SERVER. No proprietary software on the client (terminal) side.

    I hope that in a few years the client/server model and all its associated middleware will be finished. The real battle is on the server side. Clients are TERMINALS.

    Arturo Borquez
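A minimal version of the "server front end" the comment describes can be sketched with Python's standard `socketserver` module. Everything here is an assumption for illustration: the one-JSON-message-per-line wire format, the `service`/`data` field names, and the toy services; the commenter's actual implementation is not described in enough detail to reproduce.

```python
# Hypothetical sketch of a message-mapping front end: it accepts one
# JSON message per line from a thin client, maps it to a service, and
# writes back the result. Wire format and service names are invented.
import json
import socketserver

SERVICES = {
    "echo": lambda data: data,
    "upper": lambda data: data.upper(),
}  # services stay isolated behind the front end

class FrontEnd(socketserver.StreamRequestHandler):
    def handle(self):
        msg = json.loads(self.rfile.readline())          # parse the message
        result = SERVICES[msg["service"]](msg["data"])   # map to a service
        reply = json.dumps({"ok": True, "result": result})
        self.wfile.write(reply.encode() + b"\n")

# To run: socketserver.TCPServer(("0.0.0.0", 9000), FrontEnd).serve_forever()
```

A production front end would add the scheduling, load balancing, and session handling the comment mentions, but the core dispatch loop really is this small.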
  • Don't misunderstand me, I wasn't defending monolithic ORB's, or over-upholstered development environments. In fact, the original goal of my work was to break down the ORB idea into smaller, separate, manageable pieces, and assess how well our legacy stuff (a high-volume website) was fitting into the puzzle. The target architecture that seemed most practical is like your "New Model"; the layers use XML to exchange data, clients are thin (http), and manual coordination of interfaces is sufficient for most uses.

    The "object layer" is a design abstraction, and I honestly haven't implemented an ORB ever. Many of the services needed to manage complicated object lifecycles can be supplied by simpler tools, like transaction managers, or cookies. The trick is to know what functionality you want, and compare alternatives.
