Is Client/Server Really Dead? 54
the-empty-string asks: "Technology fads come and go, but sometimes they do leave behind real systems supporting real business processes. There was a time when 'client/server' was all the rage, and today there are thousands of such systems still in use, happily serving HR departments, providing inventory management, or tracking complex production processes. These days, after 'reusable components', 'three-tier', 'J2EE', and other resume-enhancing keywords, the magic phrase is 'Web services'. Consequently, many companies think they must scrape their existing client/server applications in order to 'move them to the Web'. While the advantages of exposing functionality to the outside world are beyond debate, does this mean perfectly good and working applications must be abandoned only because they are client/server, or do they still have a useful role to play? Also, what is the migration strategy you would recommend to your boss or your customer, when these systems have to be replaced no matter what?"
The client's someone else's problem (Score:3, Insightful)
With a browser-based solution, you can assume the client end will work properly across different versions, as long as the platform has a suitable web browser.
You then just need to make sure the server works properly and generates standards-based output that any browser can render. As long as you can do that, you've eliminated a number of client headaches.
Re:The client's someone else's problem (Score:2, Interesting)
And then the browser may also be used for general Internet access, so it requires a quick mandatory update because of the latest security hole. Your only real hope is to differentiate the version of the browser used for application access (and thus QAed) from the version used for general-purpose work.
Re:The client's someone else's problem (Score:2)
Re:The client's someone else's problem (Score:2)
Re:The client's someone else's problem (Score:1)
heh
the idea that this kind of corner cutting now will just lead to more work later when IE 5.5 goes away for good just doesn't seem to occur to anyone...
Re:The client's someone else's problem (Score:2)
This is exactly the kind of relatively dumb form-filling exercise that is ideal for the web. I have seen some lovely solutions, even freeware ones, where most of the work is done in PHP back on the server and the browser is largely version-agnostic.
Incidentally, the paranoia we went through to test our stuff is justified. Some idiot calling the API directly managed to swap price and quantity (they switched the checks off as well). If the price is 5000 and the quantity is 10, selling 5000 at 10 is going to do bad things (and wipe about 100 million Euros off the market).
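The fix for that kind of accident is server-side sanity checking that no client, browser or direct API caller, can switch off. A minimal sketch, with entirely hypothetical names (`Order`, `validate_order`), of a plausibility check that would catch a swapped price/quantity pair:

```python
# Hypothetical sketch: server-side sanity checks on an incoming order,
# so a caller hitting the API directly can't silently swap price and
# quantity. Names and thresholds are illustrative, not from the exchange.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    price: float    # price per unit
    quantity: int   # number of units

def validate_order(order: Order, last_trade_price: float) -> None:
    """Raise ValueError for orders that are clearly implausible."""
    if order.quantity <= 0:
        raise ValueError("quantity must be positive")
    if order.price <= 0:
        raise ValueError("price must be positive")
    # A price more than 10x (or under a tenth of) the last traded price
    # is almost certainly a fat-finger or a swapped price/quantity pair.
    if not (last_trade_price / 10 <= order.price <= last_trade_price * 10):
        raise ValueError(
            f"price {order.price} implausible vs last trade {last_trade_price}"
        )
```

With a check like this on the server, the "sell 5000 at 10" order above is rejected no matter what the client sends, which is the whole argument for keeping validation out of the (replaceable) client tier.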
This is such a stupid question... (Score:4, Insightful)
I think this question really just shows the limitation of thinking in terms of technology "brand names" - i.e. client/server, web services, etc. Because what are web services? Well, they are services provided by a server to a client over the web (HTTP). Sure, they all conform to a certain general format, and they are platform-independent, but is this so different from the multitude of 3270-based VM applications that so many places used (and still use)? There is still a workstation client, and there is still a central server and database repository.
In fact, as we get to more complicated web pages (XForms, etc.), and more streamlined client machines, we are ironically regressing back toward the days of 3270 dumb terminals that connected en masse to big 360 and 370 mainframes. Because is there really that much difference between a company Intranet today and their VM system a decade ago? Aside from usability and flexibility, not a whole hell of a lot.
So I argue that client/server hasn't gone anywhere. Yeah, perhaps traditional client/server applications (which lack in flexibility and are more costly - in terms of development and platform requirements - to deploy) are going by the wayside, but they are being replaced by new client/server systems that use a web client rather than a custom built DOS/Windows/UNIX/VM application.
So, I reiterate: get your head out of your buzzword-infested ass, and look at technology for what it is, not just the name that people attach to it. Until individual workstations are powerful and reliable enough, and the network between them is fast and flexible enough, and some other system for keeping data permission-based comes along, we're going to need servers. And with servers will come clients, in whatever form they arrive.
Re:This is such a stupid question... (Score:4, Informative)
I agree with the main idea of your post, but you've made an error here which obscures its correctness. The 3270 is not a dumb terminal; in fact, this is what distinguishes it from the VT series and the ANSI X3.64 standard.
The 3270 is a smart terminal with support for forms-based input. The server specifies the types and locations of the fields required, and the terminal draws them, accepts input, and does basic verification, batch-submitting the entire form when complete. Typing lag, therefore, doesn't exist (this fact saved me from going completely insane when Office Depot couldn't keep its network running and throughput dropped to ~300bps).
So yes, HTML viewers and 3270 terminals are very much alike, and share many features, drawbacks, and programming issues.
Re:This is such a stupid question... (Score:2)
Re:This is such a stupid question... (Score:1)
I'd like to see something like this. Perhaps something with a 512MB flash drive, a bunch of RAM, and USB, VGA, and audio ports, running Mozilla?
I liked the JavaStation...
Re:This is such a stupid question... (Score:3, Insightful)
>between a company Intranet today and their VM
>system a decade ago?
Yeah, the mainframe was a lot more reliable.
Web not good for high performance (Score:3, Insightful)
Some exchange members may effectively put a web server at tier 3 (Member interconnectivity server) and a browser on tier 4, but that is their problem not the exchange's.
If we were writing the application again, maybe we would move non-performance-critical stuff off to the web. Already the front-end is written in Java, but a mixed architecture would be very complicated (how do you support a web server tied intimately to the exchange interconnection software?).
The main win that a web-based solution would bring is that it would make a client much easier to update. At the moment, the exchange has to coordinate the rolling out of new releases over 550 organisations with many thousands of workstations. Painful, eh?
And then on the browser side, which do you support? If you work at the sharp end, there are many incompatibility issues. If you roll out your own tested/debugged browser, you may as well roll out a dedicated client.
For use as dedicated application clients, most browsers are awfully fat (big and slow, with unwanted functionality), and the extra baggage they bring makes them harder to support when some bozo calls up to say that their trades are being garbled or lost.
Re:Web not good for high performance (Score:2)
If you want a fat client, you use tools like Flash, Applets or ActiveX so that your code is running in a standardized container. You also get automated deployment.
Re:Web not good for high performance (Score:3, Insightful)
You usually can count on some compatibility, but this is where working at the sharp end is an issue. Most things will work fine, but a few critical things will give you a headache.
Re:Web not good for high performance (Score:2)
Re:Web not good for high performance (Score:2)
Hmm (Score:1)
I don't understand how scraping them would help anything.
Re:Hmm (Score:2)
That's the same thing. (Score:3, Insightful)
A better question would be whether proprietary client-server solutions that require proprietary clients are doomed.
The web services model has a client already installed in most OSes and runs on most platforms, so it has a clear advantage.
Re:That's the same thing. (Score:2)
Once you add something like Java, or embed browser code into some other program, yeah, the distinction evaporates.
Re:That's the same thing. (Score:4, Insightful)
It does?
Web services != web browsing.
The only connection, aside from a convenient and hypeworthy similarity in the name, is that both use HTTP as the protocol.
As others have noted, web services are just another version of client/server, which happen to use things like HTTP for the communications.
Nothing new in the world (Score:4, Insightful)
For the most part, client/server was basically taking an application and putting the database on a central server, using ODBC, the Oracle client or whatever to transport SQL over the network. It's the logical progression when you're used to writing VB applications using an Access database and you want to make your existing model work for lots of users.
What we're now seeing, though, is that splitting an application at the point where the business logic talks to the database is one of the worst places to do it, architecturally speaking. You finish up with your business logic bound to the presentation in a way that makes it difficult to reuse, and eventually you have multiple implementations of the same piece of business logic with a single database underneath it, and a whole bunch of inconsistency.
So J2EE, web services and so on are really about trying to make it easy for developers to split their applications into chunks somewhere between the presentation and the business logic, rather than between the business logic and the database.
But this new technology doesn't necessarily mean that people will understand the architecture any better. Just wait long enough and someone will miss the point completely and produce a piece of software that repackages ODBC, JDBC or something similar as a web service.
Are client/server applications dead? No. But you do need to have a migration path that allows you to extract the business logic as distinct software components so that you can re-use them in new applications. Whether you use web services or J2EE to help you do that is of little consequence. You could just write good libraries instead, as long as you get the conceptual architecture right.
A good migration path is one which will allow you to incrementally move business logic from your existing client/server front-end into a reusable middle-tier. Don't try to rip complex applications out and replace them wholesale - it's much more risky.
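The incremental path above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`apply_discount`, `fat_client_checkout`, `web_checkout_handler`), not anyone's actual system: the business rule lives in one reusable module, and both the legacy fat client and a new web front-end call it, instead of each embedding its own copy.

```python
# Hypothetical sketch of extracting business logic into a reusable
# middle-tier module that both front-ends share.

def apply_discount(order_total: float, customer_years: int) -> float:
    """Business rule: customers of 5+ years get 5% off orders over 100."""
    if customer_years >= 5 and order_total > 100:
        return round(order_total * 0.95, 2)
    return order_total

# The existing client/server front-end calls the shared rule directly:
def fat_client_checkout(total: float, years: int) -> float:
    return apply_discount(total, years)

# A new web front-end (or web service endpoint) calls the same code,
# so there is exactly one implementation of the rule to maintain:
def web_checkout_handler(params: dict) -> dict:
    total = apply_discount(float(params["total"]), int(params["years"]))
    return {"total": total}
```

The point is that the rule is moved once, tested once, and the old front-end keeps working while the new one is built alongside it.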
Web based applications ARE client/server (Score:1)
I feel for you (Score:4, Insightful)
You're lost in a blizzard of buzzwords with no meaning, and acronyms for acronyms of buzzwords with no meaning. You need a few weeks' vacation in a log cabin in the woods of Ohio, with some deep technical books that they don't sell at Barnes & Noble, to get your feet grounded again. Your question is irrelevant. The real question you should be asking is, "Why would I ponder such a question?"
Re:I feel for you (Score:1, Redundant)
You need a vacation in a log cabin... (Score:5, Funny)
There is a small mailbox here.
>_
Go further. (Score:2, Insightful)
Heck, go further. Go on vacation and REALLY get your head out of the clouds.
Take a REAL vacation with family & friends. No technical books, no computers, no shop talk; just cabin life, hiking, reading novels, fishing, playing, drinking, eating, sex, whatever.
Get far, far outside the box, where you can see the big picture and gain a real perspective on your life/project/job/whatever.
Then come back, with the perspective of an outsider, but still with enough knowledge to make an informed decision.
You might be surprised by the ideas that come into your head after a real vacation.
Re:I feel for you (Score:2)
Flamebait? Moron mods at it again.
Isn't Client/Server the point of Web Services? (Score:4, Insightful)
web services give anything the ability to call them: 'client' programs, or web page generators. the idea is to allow a single backend that multiple front ends can utilize.
so no, i don't think client/server is dead. i think it is actually now becoming client/web service.
Nothing is beyond debate! (Score:3, Insightful)
Um... No. The advantages of exposing systems currently implemented in some in-house client-server way to the outside world are far from proven in most cases.
Exposing things to the outside world guarantees only one thing in itself: you will be subject to more security vulnerabilities than you ever had before.
No, SQL is alive and well (Score:4, Insightful)
--Mike--
Re:No, SQL is alive and well (Score:2)
SQL is a language, not a protocol. You are probably thinking of SQL*Net and TNS, both of which happily run on TCP/IP networks (and AppleTalk and IPX/SPX, and DECnet and plenty more).
Uhh, No. (Score:3, Funny)
User interfaces suffer in move to web (Score:4, Insightful)
Where applications need to be broadly distributed, extranet and internet sites for example, HTTP/HTML can be appropriate. But for internal applications, for performance, convenience, sophistication of the UI, a compiled application running on your local desktop -and accessing a central server for shared data- is still gonna be best, imo.
Re:User interfaces suffer in move to web (Score:2)
Further, ActiveX, Java, Macromedia Flash, etc. were all developed to help provide a feature-rich UI environment in a web server housed application.
The point is, you have many options. It's not just web versus client/server.
web services are client/server (Score:1)
The term 'web services' still describes software with a 'client-server' architecture. The distinction is that the web services client is the common web browser.
Clients can now be built using common open standards, instead of platform-specific native applications.
Many older 'client-server' systems can be 'resurrected' by developing a web user interface using web protocols: HTTP/S plus HTML, Java applets, and/or ActiveX.
Still a few problems with HTML/Javascript/Web (Score:2, Insightful)
Not to say that the web does not have its place; it most assuredly does. It's just not quite ready to completely replace dedicated clients yet.
Trashing useful apps for buzzwords sake? (Score:2, Interesting)
Depends on where you work and how much influence you have. If management where I work had a say, yeah, we would get rid of our client/server apps that work perfectly well and go to the newest buzzword-compliant kid on the block. (Mind you, we would also go without a pilot or any load testing, because that's all unnecessary you see, according to management.) I still think that you use the best tool for the job, regardless. The end user doesn't give a crap how he gets his info, as long as it's fast and reliable. If you can fit into that model, then more power to a client/server app.
Of course, they also don't give a crap about scalability, but that's because they don't have to deal with it either.
Where are all the jerks on Slashdot today? (Score:2, Funny)
They are mostly on-topic.
There is a relatively low occurrence of lame wisecracks
Most of the posters seem to have read the content of the article before posting. If this keeps up, it threatens the very existence of the /. culture!
Buzzword Bullshit (Score:3, Interesting)
Certainly, there are a variety of ways to distribute an application once you have real compute power at the human end of things, but they're all variations on a distributed-processing theme, conveniently placing a "tier" at an architectural point with clean, simple, and few interactions with components at another tier. Hence three-tier and n-tier applications: they're just particular distributed "sweet spots" particularly appropriate for certain large classes of applications.
In the old days, if we even considered such distributed systems (requiring a PC on a desk, instead of just a terminal), we coded all the protocols, marshalling schemes, and so forth ourselves -- it was an in-house thing. So, yeah, all such distributed applications look the same, from that perspective, the way that all large C or C++ applications "look the same" if well-designed.
But, as particular distributed mechanisms meet large "sweet spots", such as a web browser/server split, they're pushed as the definitive "answer" to man-machine interaction.
And, just as quickly, or shortly thereafter, the shortcomings of a particular popular distributed architecture split become apparent, and the next "model" is pushed as the definitive "answer". The fact that it is tuned to a different class of problems is generally lost on those flogging it, and on those buying it. The rest of us just kind of look at it and say, "yeah, so?" -- after filtering through the buzzword muck.
Now it is true, that, as client-server models (and all of these are just that, really) mature and are pushed into use, they are strained to the limits of their scalability, and two-tier architectures give way to three-tier architectures, and so on. So the new tunings offered by the "latest" architectural split does serve to solve new problems, but that in no way invalidates the fact that the "old" architectures did a splendid job of serving to solve the "old" problems.
The frustrating thing, from the perspective of someone like me, having worked with all sorts of distributed systems for close to 20 years, is the notion that one such architecture is sufficiently different from another that the "old skills" are now obsolete: client-server techniques do not transfer to three-tier architectures do not transfer to, what's the buzzword?, oh, yeah, "Web Services".
While the interfaces are new, and the implementations of the various components need to be picked up, a seasoned architect will look at them and say, "yup, that should scale the way we need". However, there is this notion that this cannot happen quickly, and new "experts" need to be hired to replace the "old" experts.
Funny, most places don't get rid of their "if statement" C++ programmers when they need "while" statement-, or golly gee, "function call"-programmers. Understanding of standards particular to a given architectural model may be important, but it's such a small part of rolling out a working system, that any competent software engineer can deal with them.
Don't make the mistake a former employer of mine made: contracting out some servlet code to "Java experts"... that had no clue about threading issues because, while they "learned Java", they knew nothing about multitasking issues. Hint: those that understand the latter generally adapt to a different language faster than those that know a single language adapt to other than basic computer concepts. And so it is with buzzwords designed to obscure the obvious.
A couple of points... (Score:3, Insightful)
N-tier development means you have a minimum of three tiers: Client, Logic and Database. The Logic can be broken into other parts, such as business or data-access layers. N-tier development does not imply a web server. It can use one, but you can also build a "fat client" that talks to the same middle tier.
One of the advantages of n-tier development is the abstraction between UI and logic. This allows for somewhat more rapid development on larger projects as you can divide the work up between several groups.
But the chief advantage is one of security. In order for a client/server app to work, the user needs direct access to the database. With n-tier you can authenticate/authorize at the application layer, which gives you finer granularity of control than at the table layer. With the client/server model you need to grant the user rights to an entire table, but with n-tier your app server has rights to the table, while the user may only have rights to the individual records that they should be able to see.
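That row-level filtering at the app tier can be sketched in a few lines. This is a toy illustration, with made-up names (`RECORDS`, `fetch_records`) standing in for the app server's data access layer; the app server holds the table rights, and each user sees only the rows they're entitled to:

```python
# Hypothetical sketch of app-tier, row-level authorization: the app
# server owns the whole table, and filters rows per user, instead of
# granting every user direct SELECT rights on the table itself.

RECORDS = [
    {"id": 1, "owner": "alice", "salary": 50000},
    {"id": 2, "owner": "bob",   "salary": 60000},
]

def fetch_records(user: str, is_manager: bool = False) -> list:
    """Return only the rows this user is entitled to see."""
    if is_manager:
        return list(RECORDS)  # managers may see the entire table
    # everyone else sees only records they own
    return [r for r in RECORDS if r["owner"] == user]
```

A table-level GRANT can't express "alice sees only alice's rows"; the application tier can, which is the finer granularity the comment describes.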
In a larger enterprise you also run into issues where, once people have direct access to the database, they start running ad-hoc queries from reporting apps. It's nice to be flexible and convenient this way, but in larger environments, if those queries are not tuned they can consume a large amount of resources. So instead you keep the users out of the OLTP environment and replicate the data to a data-warehousing environment that they can use for reporting. While you could possibly handle this with training, you're betting that every employee is smart enough to understand. Better to just design the system to prevent potential issues.
Anyway, there are a lot of advantages to this model, and there are books written on the topic. I've just pointed out a few obvious ones based on my experience.
"Client/server" will never die (Score:2, Insightful)
Honestly... (Score:1, Flamebait)
Here's an analogy...
Just as TCP is built on top of IP, "n-tier" is built on top of client/server. No matter how many "tiers" you have, there's always one tier that is the client (aka the user interface) and the rest are servers. It doesn't matter that one server might do X and another Y... they are still servers communicating with clients, and communicating with each other.
You make me laugh (Score:3, Funny)
Ahh the newbies.... (Score:2)
Now web services are being touted as the silver bullet. It's all bunk and marketing. Not to say web services don't have a place, they do, they just ain't a complete solution.
The sooner we come full circle and get back to terminals attached to the central server the better we'll all be.
But gotta go I've got to debug this AS/400<->NT<->CE database integrity problem, then I can get started on the CE gui lock up and lastly solve the backing up of two central servers and many remote PCs. Client server keeps me employed.
The question you should be posing is 'why do users always throw out systems?' I have no answer to this. The lemming syndrome springs to mind, as does sheer stupidity.