Programming Technology

AJAX Applications vs Server Load?

Posted by Cliff
from the functionality-and-used-clock-cycles-directly-proportional dept.
Squink asks: "I've got the fun job of having to recode a medium sized (500-1000 users) community site from the ground up. For this project, gratuitous use of XMLHttpRequest appears to be in order. However, with all of the hyperbole surrounding AJAX, I've not been able to find any useful information regarding server load [Apache + MySQL] when using some of the more useful AJAX applications, such as autocomplete. Is this really a non-issue, or are people neglecting to discuss this for fear of popping the Web 2.0 bubble?"
  • by Anonymous Coward on Monday December 05, 2005 @07:56PM (#14189705)
    After doing quite a bit of AJAX type work for my employer, that's the best advice I can give you. The most common things will be queried the most often, so caching is the key. If you're using PHP and MySQL, use something like eAccelerator for PHP (less important) and MySQL's query cache (most important!) properly tuned. And remember, not everything AJAX has to query a database.
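
A rough sketch of the caching advice above, with made-up names: a tiny TTL cache sitting in front of a query function, standing in for MySQL's query cache or memcached. This is illustrative, not a real driver integration.

```javascript
// Hypothetical sketch: a TTL cache wrapped around a query function.
// Only a cache miss actually hits the (fake) database.
function makeCachedQuery(runQuery, ttlMs) {
  const cache = new Map(); // key -> { value, expires }
  return function (key) {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // cache hit
    const value = runQuery(key);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage: count how often the "database" is actually queried.
let hits = 0;
const fakeQuery = (sql) => { hits++; return `rows for ${sql}`; };
const cached = makeCachedQuery(fakeQuery, 60000);
cached("SELECT 1");
cached("SELECT 1"); // served from cache; no second hit
```

The point matches the comment: the most common things are queried most often, so they are exactly the things a cache absorbs.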
    • by captainclever (568610) <rj@@@audioscrobbler...com> on Monday December 05, 2005 @08:05PM (#14189767) Homepage
      And Memcached [danga.com]!
    • The most common things will be queried the most often

      I'm sorry, what?

    • by Anonymous Coward
      Uhm, you should never give blanket advice like that. This is the simplest brute-force way to optimize an app:

      STEP 1: develop a set of benchmarks.

      STEP 2: adjust something

      STEP 3: see if it improves your benchmarks. If not, roll it back. REPEAT STEP 2

      How can you possibly improve your app if you can't even tell when you've improved it? PHP accelerators may or may not help (actually I would recommend AVOIDING PHP because of the difficulty in dealing with persistent compiled PHP code). The MySQL query cache may or may not help…
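
The three steps above can be sketched as a minimal harness; the workloads and names here are made up, and a real setup would benchmark actual page handlers rather than toy functions.

```javascript
// STEP 1: a benchmark — time a function over N iterations (milliseconds).
function bench(fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6;
}

// STEP 2/3: measure the baseline, measure the adjusted version,
// and keep the change only if it actually improves the number.
const baseline = bench(() => JSON.stringify({ a: 1, b: 2 }), 10000);
const candidate = bench(() => JSON.stringify({ a: 1, b: 2 }), 10000);
const keepChange = candidate <= baseline; // otherwise roll it back
```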
      • Better still than benchmarks by far would be code profiling, which can tell you exactly where your code is spending the most of its time, rather than just suggesting the general area as benchmarks do.
      • The poster stated his claim based on experience.
        This experience holds valid for most web applications (AJAX or not), as anyone who has worked on any large web application can attest. Creative use of caching has shown time and again to be the most effective way to reduce server load. (For some reason, spitting out a byte array is faster than calling a database and building a document with the results.)

        I'm curious how "you can use benchmarks to *identify* which parts of the app are the slowest". This could be done by
        • For example, if you're pulling site news or software changelog entries from a table, and those entries don't get updated very often (like less than twice a day), then it certainly makes sense to cache that data. As, even if the news/changelog that appears to the user is slightly out of date, the cache would update itself within a short period of time and any new info would then appear. Nobody loses anything just because new news doesn't immediately show up on a site. But your server does save a lot.
        • Profiling is just a form of benchmarking specifically intended to identify parts of code. When the code path taken is well understood and various parts can be exercised by changing parameters passed in, normal benchmarking techniques can easily be used to gather the same data as "proper" profiling.
  • by /ASCII (86998) on Monday December 05, 2005 @08:04PM (#14189764) Homepage
    I've been toying around a bit with AJAX, and it really depends on what you are doing. Autocomplete should ideally be implemented using an indexed table of common words, or something like that, since if it does anything complex, it will be dog slow because of the large number of transactions. Also, client-side caching is good to make sure the amount of network traffic doesn't get out of hand. You can do some cool things with very little JavaScript, like my English to Elvish interactive translator [no-ip.org].
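
The client-side caching idea above might look like this sketch: remember results per prefix so repeated keystrokes never re-query. `fetchSuggestions` is a made-up stand-in for the real XMLHttpRequest round trip.

```javascript
// Sketch: per-prefix cache for autocomplete results.
const suggestionCache = new Map();
let requests = 0;

function fetchSuggestions(prefix) { // pretend server round trip
  requests++;
  return ["apple", "apricot", "banana"].filter((w) => w.startsWith(prefix));
}

function suggest(prefix) {
  if (!suggestionCache.has(prefix)) {
    suggestionCache.set(prefix, fetchSuggestions(prefix));
  }
  return suggestionCache.get(prefix);
}

suggest("ap"); // network hit
suggest("ap"); // cached; no extra request
```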

    Other AJAX concepts actually make things faster. I've been implementing a forum that never reloads. When you write an entry and press the submit button, an XmlHTTP request is sent containing the new post and the id of the last received post. The reply contains all new posts, which are then appended to the innerHTML of the content div-tag. Less CPU-time is spent regenerating virtually identical pages over and over, and less data is sent over the network.
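
The server-side half of that "never reloads" forum can be sketched as: given the id of the last post the client already has, return only what's newer. The post data and function names here are hypothetical.

```javascript
// Sketch: incremental delivery of forum posts.
const posts = [
  { id: 1, body: "first" },
  { id: 2, body: "second" },
  { id: 3, body: "third" },
];

// Return only posts newer than the client's last received id.
function postsAfter(lastSeenId) {
  return posts.filter((p) => p.id > lastSeenId);
}

// The client would append these to the content div's innerHTML.
const fresh = postsAfter(1); // posts 2 and 3 only
```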
    • I've been implementing a forum that never reloads.

      I have one of those. I also have a server admin that never responds. Not good.

    • Yes, I'm doing the same thing right now [pre-computing results].
      Built a dictionary of commonly searched-for terms; those are the only ones which appear in the autocomplete. Cache the list, change the AJAX call so it references a static list (basically a two-layer alphanumeric hashing structure), but it pulls the file statically.

      The only problem is that all the lame-o AJAX frameworks cropping up don't offer that type of flexibility without being considered bloatware. I did finally manage to implement it using…
    • I've been implementing a forum that never reloads.

      Quite interesting, but I'd have my reservations with an all-AJAX forum. IMHO forums shouldn't break the REST behaviour of the traditional web model in very many places. I want the navigation buttons to work, and I want to be able to bookmark the URL and feel confident that when I visit that URL it will contain the posting that was there before. Yes, there are ways around this in AJAX, but to fix what it inherently breaks takes more effort than I think should…
  • Depends (Score:5, Insightful)

    by Bogtha (906264) on Monday December 05, 2005 @08:10PM (#14189800)

    There isn't any useful information out there because it all depends on what you are doing.

    Take a typical web application for filling in forms. One part of the form requires you to pick from a list of things, but the list depends on something you entered elsewhere in the form. In this instance, you might put the choice on a subsequent page. That's one extra page load, and needless complication for the end user. Or, you can save the form state, let them pick from the list, and bring them back to the form. That's two extra page views and saving state. Or, you can use AJAX, and populate the list dynamically once the necessary information has been filled in. That's no extra page views, but a (usually smaller) JSON or XML load.

    In this instance, using AJAX will usually reduce server load. On the other hand, something like Google Suggest will probably increase page load. Without knowing your application and its common use patterns, it's impossible to say. Even using the exact same feature in two different applications can vary - autocomplete can reduce server load when it reduces the overall number of searches, but that's dependent upon the types of searches people are doing, how often they make mistakes, how usual it is for people to search for the same thing, and so on.
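
The dependent-list case described above can be sketched as follows; the data and endpoint shape are made up. The point is that the response is a small JSON payload instead of one or two extra full page views.

```javascript
// Sketch: options for one form field depend on another field's value.
const modelsByMake = {
  ford: ["Focus", "Fiesta"],
  toyota: ["Corolla", "Camry"],
};

// Stand-in for the server endpoint the XMLHttpRequest would hit:
// it returns only the small JSON list, not a whole rendered page.
function optionsFor(make) {
  return JSON.stringify(modelsByMake[make] || []);
}

const payload = optionsFor("ford"); // tiny response vs. a full page load
```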

  • by Albanach (527650) on Monday December 05, 2005 @08:16PM (#14189848) Homepage
    Another suggestion is only to auto-complete after 0.5 seconds with no typing; that way, rather than auto-completing s sl sla slas slash slashd slashdo, the user who knew exactly what they wanted doesn't load down your server with spurious requests.
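
That "wait half a second after the last keystroke" idea is a classic debounce; a minimal sketch:

```javascript
// Sketch: debounce — only the last call within the delay window fires.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Typing "slash" quickly fires the handler once, not five times.
let queries = [];
const search = debounce((q) => queries.push(q), 500);
for (const q of ["s", "sl", "sla", "slas", "slash"]) search(q);
```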
  • Really... (Score:2, Interesting)

    by wetfeetl33t (935949)

    For this project, gratuitous use of XMLHttpRequest appears to be in order.

    All this hyperbole surrounding AJAX is just that - hyperbole. I dunno exactly what your requirements are, but the first thing you can do to ensure that AJAX XML requests don't bog down your system is to decide whether you really need all that fancy AJAX stuff. It's neat stuff, but the majority of web apps can still be done the conventional way, without wasting time on AJAX code.
    • All this hyperbole surrounding AJAX is just that - hyperbole.


      It is futile to claim that there is no important difference between Google Maps and the mapping web applications that preceded it, because the difference is screamingly obvious.
      • Re:Really... (Score:2, Insightful)

        by houseofzeus (836938)

        For every website/application you can name that was made fantastic by the use of AJAX, it is possible to list at least ten that didn't need it and only have it to try to cash in on the latest fad.

        The GP's point was a valid one, it is important that people sit down and work out whether AJAX will actually benefit their application.

        Despite all the crap being spouted about AJAX it is NOT some magic wand that works for every given situation.

          For every website/application you can name that was made fantastic by the use of AJAX, it is possible to list at least ten that didn't need it and only have it to try to cash in on the latest fad.

          Prove you are not engaging in your own hyperbole. Name ten web applications that are using AJAX that don't need to.

          The GP's point was a valid one, it is important that people sit down and work out whether AJAX will actually benefit their application.

          I love the smell of stagnation in the morning!

          Despite all the crap being spouted…

          • 'Prove you are not engaging in your own hyperbole. Name ten web applications that are using AJAX that don't need to.'

            You made my task easier by using the word 'need'. Since we had the following kinds of services before the tools of AJAX became widely used (specifically the first A), we can assume that the following don't *need* AJAX:

            • Gmail Google maps Digg

            I'll leave it at that because your response (if there is one) will no doubt be 'oh, but I meant name ten applications that use AJAX and it doesn't…'

            • Also, as you can see, I'm a pro with the preview button and the list tags ;).
              You made my task easier by using the word 'need'. Since we had the following kinds of services before the tools of AJAX became widely used (specifically the first A), we can assume that the following don't *need* AJAX:

              * Gmail Google maps Digg

              So it is reasonable to conclude from this list that the rule "sit down and decide whether your application needs AJAX" would have forbidden AJAX to Gmail, Google maps, and Digg.

              What a lossy rule.

              • Not really. Of the three examples Google maps is the only one that uses AJAX in a manner that provides major benefits over a traditional implementation.

                My point remains that developers should be asking themselves why they need to use AJAX and whether it will provide any real benefit. Often the answer is not only no, but also that an AJAX implementation might actually detract from the application (be it in performance or user experience).

                As a developer, arguing that this question isn't important because

                • Or they should just create the application in the "legacy" way, then check if there is any area of the current application that could use AJAX (or other advanced Javascript technique) to improve usability, comfort or response time (user-wise) and layer it on top of the existing and working application.

                  This is the principle behind the Progressive Enhancement philosophy, and it allows your application to work fine in any and every context, be it your local nerd's text browser, your mom's Internet Explorer, y

                • Re:Really... (Score:5, Insightful)

                  by kingkade (584184) on Tuesday December 06, 2005 @07:04PM (#14197991)
                  I think we're all saying the same thing here: try to see if AJAX (ugh, I feel dirty) makes sense in your webapp. Hard to see sometimes (see below).

                  Of the three examples Google maps is the only one that uses AJAX in a manner that provides major benefits over a traditional implementation

                  Now hold on. Gmail is another perfect example of how AJAX can help. Say I have an inbox with 50 emails and I want to trash, archive, or otherwise do something to one message/thread that would involve it being removed from my view, the rest of the inbox (not to mention all the other peripheral UI elements) shouldn't change. In the old way we'd re-request all this tremendous information (say ~95% of the UI) that didn't even change! And this is even more obvious when you remember that each seemingly tiny, simple piece of the UI (say a message line) may use a bunch of HTML (not to mention scripts, css, etc) behind the scenes to make it look/feel a certain way. In the AJAX version we'd just have to add some scripting to remove that DOM element from the page and send a simple, say 0.5KB, HTTP message like "[...]deleteMsg.do?msgid=x[...]" to the server. You still have to suffer the TCP round trip latency (but less so), but the difference can be dramatic, no?
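
The inbox example above, sketched with hypothetical data and endpoint names: deleting one message removes a client-side element and sends only a tiny request, rather than re-fetching the ~95% of the UI that didn't change.

```javascript
// Sketch: client-side delete — mutate the local view, send a small request.
let inbox = [
  { id: 101, subject: "hello" },
  { id: 102, subject: "spam" },
];

function deleteMessage(msgId) {
  inbox = inbox.filter((m) => m.id !== msgId); // remove the "DOM element"
  return `/deleteMsg.do?msgid=${msgId}`;       // ~0.5KB request, not a page
}

const request = deleteMessage(102);
```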
                    It seems to vary. Often Gmail will still find the need to display its custom red loading bar in the corner of the screen, and you have to pause and wait for it.

                    Sure, this wait time is nowhere near as long as I remember say Hotmail traditionally was, but on most connections it is quite possible that Gmail might actually be as slow as or slower than a 'clean' old traditional implementation.

                    Importantly, the keyword in my statement was 'major'. It's definitely arguable that Gmail provides benefits over the traditional…

                • Re:Really... (Score:3, Insightful)

                  by petard (117521) *

                  Not really. Of the three examples Google maps is the only one that uses AJAX in a manner that provides major benefits over a traditional implementation.

                  Have you used gmail? Have you used any other web mail applications? I've used gmail, hotmail (the old, pre-ajax one), yahoo mail (the old, pre-ajax one), hushmail, squirrel mail, open webmail and outlook web access all fairly heavily at one time or another. gmail and OWA really stand out from that crowd. gmail uses AJAX, and OWA is conceptually very similar…

              • If you're suggesting that programmers just use Ajax "because they feel like it" without examining the 1) value and 2) repercussions, then this programmer will have to absolutely disagree with that approach. On the other hand, web programmers should certainly explore possibilities with Ajax and not reject it out of hand.
        • ... cash in on the latest fad

          Speaking of fads, I note that as of 02:06 EST, www.houseofzeus.com fails to validate as XHTML 1.0 Transitional.
          • Exactly. I didn't cash in on the 'zomg i can put this validation clicky thingo on my page and be so coolzzz!111' fad. What's your point? :-P
            • What's the point of serving invalid XHTML? Why should a site falsely claim to be XHTML? What is the motivation for using bandwidth for an erroneous DOCTYPE statement instead of simply omitting it, if it is not simply a faddish impulse?
                Have a look at the bottom of the page; neither the blog software nor the template in use were created by me. I haven't changed the doctype from its original state and I don't intend to.

                If bad doctypes and abuse of standards really gets to you that much I suggest you stop using computers now.

          • Even those of us who strive generally to achieve standards compliance will often put site features ahead of compliance, leaving compliance as a leftover task. Compliance is not a simple thing. Anyone who has been a web programmer for any length of time knows the difficulty of balancing features versus standards compliance, not to mention the ease of making little mistakes that fail the standards testing tools.
            • Even those of us who strive generally to achieve standards compliance will often put site features ahead of compliance
              What site feature is enabled by failing to escape ampersands?
              • What site feature is gained by escaping them? Run the validator over Digg or Slashdot and you will see the same errors (because of Slashdot's referrer block on w3c.org you will need to use the save as html option of your browser).

                While we're at it, you mightn't have noticed, but the ampersands weren't what blocked the page from validating. Under XHTML 1.0 Transitional that only generates a warning.

                The error is because my chosen blogging software doesn't shove an alt tag on when it converts [img][/img] to proper…

                • What site feature is gained by escaping them?

                  Reliability. Sure, you can get away with ?foo=x&bar=y nine times out of ten, but then you use something like ?foo=x&bar=y&trade=z and links start breaking in some software but not others.

                  Escaping isn't there just for the hell of it. It has a purpose. When you ignore that purpose and hope all software that will encounter your code will compensate for your mistake, you are bound to have things break some of the time.

                  While we're at it, yo

                  • Reliability. Sure, you can get away with ?foo=x&bar=y nine times out of ten, but then you use something like ?foo=x&bar=y&trade=z and links start breaking in some software but not others.

                    They weren't in a URL. They were in body text, hence no breaking links.

                    'The rules for escaping ampersands are uniform across all variants of XHTML. Not sure why you think the rules vary between Transitional and Strict; the differences between those two document types are structural in nature, not syntactical.'

                  • The rules for escaping ampersands are uniform across all variants of XHTML. Not sure why you think the rules vary between Transitional and Strict; the differences between those two document types are structural in nature, not syntactical.

                    For interest's sake, I went and shoved an unescaped ampersand back into the body text of the page and ran it through the validator again to check I wasn't seeing things.

                    http://www.houseofzeus.com/filez/fails.jpg [houseofzeus.com]

                    As it stands I never said anything to suggest the XHTML variants handle it differently.

                    • As it stands I never said anything to suggest the XHTML variants handle it differently.

                      When you said that "Under XHTML 1.0 Transitional that only generates a warning." you seemed to be implying that it being just a warning had something to do with it being XHTML Transitional. I guess I read too much into that sentence.

                      I just stated that MY page is marked as transitional and that it only generates a warning when ampersands aren't escaped in body text.

                      How is any of that NOT true?

                      While it's

                    • Fair enough, but the fact of the matter is the validator has been doing that for a bloody long time now. If the W3C can't work out how to validate it properly I doubt I have anything to worry about re. rendering in various user-agents for quite some time :-P.
              • What site feature is enabled by failing to escape ampersands?

                What personality feature is enabled by failing to escape uppity snidery?
  • ...but then you had to go and say "Web 2.0" and I dissolved [theregister.co.uk] into fits [theregister.co.uk] of laughter [wordpress.com].
  • by natmsincome.com (528791) <adinobro@gmail.com> on Monday December 05, 2005 @09:57PM (#14190445) Homepage
    Just remember: it's not half a request, it's a full request. The easiest way to think about it is to imagine that instead of using AJAX you reload the page.

    Now that isn't quite true, as you only reload part of the page.

    The common example is Google Suggest. Instead of a list of searches, let's try a list of products. Say you use AJAX against a database of 1000 products and you have 5 users hitting the database. If you just did a SELECT on each keystroke it would be really bad: at least 5 database hits per second. In the old environment it would have been 1 hit every second (assuming it took 5 seconds to fill in the form). So in this case you've increased your database load by more than 5 times (if you used LIKE instead of = in the SQL).

    To get around this you have a number of options. Here are some of the ideas I've seen:
    1. Add columns to the product table holding the first 1, 2, 3, and 4 characters, and index those columns. This means you can use = instead of LIKE, which is faster.
    2. Hard-code the products in an array.
    3. Use hash files. E.g. create 10 hash files for different lengths, then check the length of the input, load the correct hash file, and look up the key.

    The basic concept is to do some kind of work up front to decrease the overhead of each call, because you'll end up having a lot more requests.
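
Ideas 1 and 3 above amount to precomputing a prefix index so each autocomplete call is a cheap exact lookup instead of a scan. A sketch with an illustrative product list:

```javascript
// Sketch: build a prefix -> products index up front (prefixes of length 1-4).
const products = ["anvil", "antenna", "bolt", "bracket"];

function buildPrefixIndex(words, maxLen) {
  const index = new Map();
  for (const w of words) {
    for (let n = 1; n <= Math.min(maxLen, w.length); n++) {
      const key = w.slice(0, n);
      if (!index.has(key)) index.set(key, []);
      index.get(key).push(w);
    }
  }
  return index;
}

const index = buildPrefixIndex(products, 4);
const matches = index.get("an") || []; // exact key lookup, no LIKE scan
```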
    • Even in MySQL, LIKE without a wildcard prefix, e.g.
          where name like 'ab%'

      still properly uses the index and is efficient; and if it weren't, you could do:
          where name > 'ab ' and name < 'ab}' or something

      Sam
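
The range trick above works because a prefix LIKE is equivalent to a half-open range on the sorted column, which is what lets an index be used. A sketch demonstrating the equivalence in plain JavaScript (the names are made up):

```javascript
// name LIKE 'ab%'
function likePrefix(name, prefix) {
  return name.startsWith(prefix);
}

// WHERE name >= 'ab' AND name < 'ac' (bump the last character of the prefix)
function rangePredicate(name, prefix) {
  const upper =
    prefix.slice(0, -1) +
    String.fromCharCode(prefix.charCodeAt(prefix.length - 1) + 1);
  return name >= prefix && name < upper;
}

const names = ["abacus", "abbey", "acorn", "aardvark"];
const viaLike = names.filter((n) => likePrefix(n, "ab"));
const viaRange = names.filter((n) => rangePredicate(n, "ab"));
```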
    • Isn't hardcoding arrays, using flat files for product lists etc a *huge* step backwards?
      I'm thinking when the server starts up maybe initialize some arrays or temp files that get updated when the database gets updated (though this seems to violate DRY and could cause concurrency issues).
    • Ouch.

      What you are suggesting is not good for the database at all!

      A decent database (even MySQL) can do LIKE matches on indexes.

      The trick, as others have said, is to use caching. Lots of caching.

      You may even want to put a Squid (or Apache 2.2) in front of your web server to catch these little requests and keep them away from your database.
  • by b4k3d b34nz (900066) on Monday December 05, 2005 @10:01PM (#14190466)

    It's going to be tempting to use a lot of AJAX, especially if it sounds fun. In reality though, you should be considering user experience, since this is a community site. Don't use an AJAX call where someone might expect a page refresh.

    With that said, it's best to try to cache frequently accessed items in memory (regardless of whether you're doing AJAX calls). ASP.NET does a good job of this--I don't know what you're programming in, but definitely find out how to cache so that you don't have to read the database all the time. This reduced our database server load from 55% to 45% upon implementation (it's separate from the web server).

    To specifically answer your question, the thing that's fast about AJAX is mostly perceived. Yes, you'll reduce calls, but at the sacrifice of having to code things twice: once for users with JS, once for those without. Use it in places where it's senseless to reload an entire page. For example, opening a nested menu. Searches that aren't done by keyword are good as well. As has been said above, delay a server request until the user is done typing so that you can reduce calls. Remember, it's still a hit on your server, it just doesn't have to get all the rest of the crap on the page.

    To reduce bandwidth, use JSON instead of XML, and only pass the headers that you need to into the AJAX call. To reduce server strain, cache frequently accessed database calls/results. Also, other non-AJAX javascript can help reduce calls, such as switching between "tabs" with some display:none action instead of reloading a page.

    The answer is not gratuitous AJAX, the answer is thinking through how people will most commonly use your site, and making those parts easiest (so users don't have to redo things, therefore wasting your server capacity/bandwidth). Take things that shouldn't have to refresh the page and make them work using javascript, AJAX or not. Depending on how crappily things are coded now, you should see between a 15 and 35% reduction in server load and database calls.

    • Don't use an AJAX call where someone might expect a page refresh.

      Why not?

      I expect to fail a course and die a smelly virgin, but if I pass the final and get laid, I won't complain.

      With that said, it's best to try to cache frequently accessed items in memory (regardless of whether you're doing AJAX calls). ASP.NET does a good job of this

      Sorry, I don't think that's where you should be focussing. Cache frequently accessed items in client memory. In the javascript. Oh, and he's probably not using ASP.NET if h
      • I think since the browser parses XML for you, it might be faster than using javascript code to create/interpret JSON.

        If you think that the browser parses XML for you, it would be coherent to think that the browser parses and interprets JavaScript for you.

        As you will finally process the retrieved data in the JavaScript interpreter, the only comparison that applies is:
        • JSON: let the JavaScript parser extract the data to JavaScript data
        • XML:
          1. tell the XML parser to load the XML document and represent it in memory with the
        • Why does everybody advocating JSON skip a few steps when describing how it is processed?

          JSON is data supplied in the form of Javascript instructions. Using it goes like this:

          1. Retrieve the resource from the server
          2. Parse the Javascript into a set of instructions that can be executed
          3. Execute those instructions (which loads the data into Javascript objects)
          4. Execute the function that uses that data

          You neglected to mention the fact that JSON information doesn't magically find its way into your code, it
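
The four steps above, sketched with made-up data. (In 2005 step 2-3 typically meant eval()-ing the response; `JSON.parse` is the modern, safer equivalent used here, and it still has to parse the text before your code sees objects.)

```javascript
// Step 1: the text retrieved from the server.
const responseText = '{"posts":[{"id":1,"body":"hi"}]}';

// Steps 2-3: parse/execute the JSON into JavaScript objects.
const data = JSON.parse(responseText);

// Step 4: the function that actually uses the data.
function render(d) {
  return d.posts.map((p) => p.body).join(",");
}

const out = render(data);
```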

          • It's not clear at all that one is faster than the other; in fact it really depends on how fast the DOM access is, which varies wildly between browsers and depends on exactly what it is you are doing.

            "Fast" and "DOM Access" should never belong to the same phrase. Ever. Not without a negation somewhere anyway.

      • Don't use an AJAX call where someone might expect a page refresh.

        Why not?

        I expect to fail a course and die a smelly virgin, but if I pass the final and get laid, I won't complain.


        How about the fact that it breaks the back/forward buttons and means you cannot add the page to favourites/bookmarks? Small issues for some, but it could be damn irritating for people who bookmark a page only to have the bookmark resolve back to the original page.

        With that said, it's best to try to cache frequently accessed
        • I think he really means that "that's what people expect" is not the same as "that's the most usable." If it were, we'd abandon a lot of UI improvements - tabbed browsing or multiple desktops, for a couple of examples. Sure, sometimes consistency translates to usability - but innovation is a way of saying that the status quo is not good enough.

          A basic search function that can quickly and efficiently (the focus of this conversation) return relevant results without needing to refresh the browser can be a huge
      • Sorry, I don't think that's where you should be focussing. Cache frequently accessed items in client memory. In the javascript.

        That's quite possibly the stupidest idea I've heard today. How the heck do you expect anyone to get data in the first place? It's not going to magically appear in the client's cache. I sure hope you mean doing it both ways, if anything, although I don't see any benefit to caching anything on the client-side...if you're working with data, chances are good it's going to be entered

          This is a horrible idea. Repeat after me. This is a horrible idea. The only, only, ONLY exception is if you know exactly who's going to be using your application or web site--for example, in the case of an intranet app that's restricted to a small number of users. Don't give me any crap about what a small percentage of people don't have Javascript enabled. It's still 10 million folks or more.

          Likewise, don't give us any crap about 10 million people having JavaScript disabled unless you can give us some verification…
            The following URL has some information from a counter service that seems to have aggregated their data. http://www.thecounter.com/stats/2005/September/javas.php [thecounter.com]. I admit, this data is skewed because the service is probably not tracking users by IP (which is flawed anyway). However, if they're getting that many people with JS disabled and they have a pretty small slice of the pie, then it's obvious there are plenty more. The percentage is probably pretty accurate in either case.

            If you're not using AJAX in

      • Or maybe drop support for those without [JavaScript]?

        Not advisable. That's like dropping support for Opera and Safari users. A significant subset of your users will have scripting turned off. If you care about getting the widest possible audience for your site, in most cases, you will not want to shortchange functionality for anyone (perhaps except those using very old browser versions), but instead just present things differently for those with JS on and those with it off.
        • That's like dropping support for Opera and Safari users.

          Is that supposed to be an argument for, or against?

          In the real world, who cares about Opera and Safari users? The real world where until about 3 months ago, few people even cared about Firefox users.

      While I think JSON is awesome, I think XML is faster overall. If you're using mod_gzip, the bandwidth difference between XML and JSON is negligible, and I think since the browser parses XML for you, it might be faster than using javascript code to create/interpret JSON.

        The only place where speed matters like this is the server. Client side, the difference is a one-time occurrence, and it's a matter of microseconds. Compared to download time, processing time is negligible. Compared to the speed at which the user interacts with…
  • by uradu (10768) on Monday December 05, 2005 @10:09PM (#14190505)
    The trick in minimizing server traffic is to come up with the right remote data granularity--i.e. don't fetch too much or too little data on each trip. At one extreme you'd fetch essentially your entire database in a single call and keep it around on the client, wasting both its memory and the bandwidth to get data that will mostly go unused. At the other extreme you simulate traditional APIs, which typically get you what you want in very piecemeal fashion, requiring one function call to get this bit of data, which is required by the next function, which in turn returns a struct required by a third function, and so on until you finally have what you really want.

    The happy medium is somewhere in between. Come up with functions that return just the right amount of data, including sufficient contextual data to not require another call. For a contacts-type app you would provide functions to read and write an entire user record at a time, as well as a function to obtain a list of users, with all the required display columns, in a single call. You will generally find it more efficient in bandwidth and client-side processing to tailor the remote functions towards the UI that needs them, fetching or uploading just the required data for a particular application screen or view. Once you have a decent remote function architecture you will no doubt have considerably less server traffic, since practically only raw data makes the trip anymore.
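
The contacts-type example above might look like this sketch; field names are illustrative. One call returns the list with exactly the columns the list view needs, another returns one whole record for the detail view.

```javascript
// Hypothetical user "table".
const users = [
  { id: 1, name: "Ann", email: "ann@example.com", notes: "long blob..." },
  { id: 2, name: "Bob", email: "bob@example.com", notes: "long blob..." },
];

// Tailored to the list view: id + name only, in a single round trip.
function listUsersForDisplay() {
  return users.map(({ id, name }) => ({ id, name }));
}

// Tailored to the detail view: one whole record at a time.
function getUser(id) {
  return users.find((u) => u.id === id) || null;
}

const listing = listUsersForDisplay(); // no heavy columns shipped
```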
  • could be less load (Score:3, Insightful)

    by josepha48 (13953) on Monday December 05, 2005 @10:23PM (#14190581) Journal
    you could experience less of a load, as you will not be re-serving the entire page each time you "refresh".

    An example is an application that displays a list of cities in a state, after a user selects the state. (1) If you send ALL the data to the client at once, it's a large file transfer and takes a long time. This produces a heavy load all at once. (2) If you coded it to refresh the whole page after the selection then it is a smaller initial load, but on the 'refresh' you are sending the whole page plus the new data. (3) If you use AJAX, you only have to send the initial small request (not the heavy load) and then the second request for the part of the page that needs updating.

    Between 2 & 3, #3 is better because it reduces the second hit on the server and network as it does not have to resend parts of the page that have already been sent. Between 1 and 3 you actually will have more hits on the server but #3 will result in less data being sent across the network.

    The biggest problem with #2 is that sometimes refreshing a whole page ( onchange ) confuses users. Yes, this may sound weird, but I have had people tell me this.

    The biggest problem with #3 is that if the server request fails, you must code for it; if you don't, the user may not know what happened. Also, how do you handle a retry on something like this?
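One hedged sketch of that failure/retry problem: wrap the partial-page update in a retry loop and surface the final failure to the user instead of failing silently. `sendRequest` and `onFailure` here are stand-ins (assumptions), with `sendRequest` returning a Promise the way a thin XMLHttpRequest wrapper might.

```javascript
// Retry a partial-page update; after maxAttempts failures, notify the
// user via onFailure rather than leaving the page silently stale.
async function updateWithRetry(sendRequest, maxAttempts, onFailure) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await sendRequest();
    } catch (err) {
      if (attempt === maxAttempts) {
        onFailure(err); // e.g. show "update failed, click to retry"
        throw err;
      }
      // otherwise loop and try again
    }
  }
}
```

The key point is that the failure path is designed up front, which a full page reload gives you for free and AJAX does not.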

  • by davecb (6526) * <davec-b@rogers.com> on Monday December 05, 2005 @11:18PM (#14190813) Homepage Journal
    You can do a surprising amount with nothing but the response times of each kind of transaction.

    Make some simple test scripts using something like wget, and capture the response time with PasTmon or ethereal-and-a-script, one test for each transaction type, while at the same time measuring cpu, memory and disk IO/s.

    At loads that wget or a human user will generate, 1/response time equals the load at 100% utilization of the application (not 100% cpu!), so if the average RT is 0.10 seconds, 100% utilization will happen at 10 requests per second (TPS).

    For each transaction type, compute the CPU, Memory, Disk I/Os and network I/Os for 100% application utilization. That becomes the data for your sizing spreadsheet.

    If you stay below 100% load when doing your planning, you'll not get into ranges where your performance will dive into the toilet (:-))

    --dave
    This is from a longer talk for TLUG next spring
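The arithmetic above can be sketched in a few lines. This assumes the single-outstanding-request regime the parent describes, where 1/response-time bounds throughput at 100% application utilization; the per-request cost figures are illustrative assumptions for a sizing spreadsheet, not measurements.

```javascript
// With one outstanding request, throughput at 100% application
// utilization is 1 / average response time (in requests per second).
function maxThroughput(avgResponseTimeSeconds) {
  return 1 / avgResponseTimeSeconds;
}

// Resource consumed per second at that load, e.g. CPU-seconds per
// second (0.2 here would mean ~20% of one CPU).
function resourceAtSaturation(perRequestCost, avgResponseTimeSeconds) {
  return perRequestCost * maxThroughput(avgResponseTimeSeconds);
}
```

So a transaction averaging 0.10 s of response time saturates at 10 TPS, and if it costs 0.02 CPU-seconds per request, that transaction type alone budgets 20% of a CPU at saturation.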

    • At loads that wget or a human user will generate, 1/response time equals the load at 100% utilization of the application (not 100% cpu!), so if the average RT is 0.10 seconds, 100% utilization will happen at 10 requests per second (TPS).

      That's just wrong. You mixed up bandwidth and latency.

      • It's all response time (RT). Latency (L) is the part of RT before the first byte shows up back at the client, whereas bandwidth is bytes/(RT-L).

        This is from Raj Jain's "The Art of Computer Systems Performance Analysis", Chapter 33 (Operational Laws).

        --dave

    • Hi Dave - tried to find a way to contact you outside of this, but have been unsuccessful to date - apologies all around if I'm using the wrong communication channel...

      Your post confirmed a hunch I've had for a while - I've been trying to figure out a way to measure CPU, I/O, RAM and other resources on our web servers to get reasonable application benchmarking data - I've been told it's next to impossible, and I've had the hardest time finding any info on this type of benchmarking on the net (beyond gutfe
  • by photon317 (208409) on Monday December 05, 2005 @11:50PM (#14190954)

    I'm in the latter stages now of my first serious professional project using AJAX-style methods. In my experience so far, it can go either way in terms of server load versus a traditional page-by-page design. It all depends on exactly what you do with it.

    For example, autocompletions definitely raise server load as compared to a search field with no autocompletion. Using a proper autocomplete widget with proper timeout support (like the Prototype/Scriptaculous stuff) is a smart thing to do - I've seen home-rolled designs that re-checked the autocomplete on every keystroke, which can bombard a server under the hands of a fast typist and a long search string. But even with a good autocomplete widget, the load will go up compared to not having it. That's the trade-off. You've added new functionality and convenience for the user, and it comes at a cost. Many AJAX enhancement techniques will raise server load in this manner, but generally you get something good in return. If the load gets too bad, you may have to reconsider what's more important to you - some of those new features, or the cost of buying bigger hardware to support them.

    On the flip-side, proper dynamic loading of content can save you considerable processing time and bandwidth in many cases. Rather than loading 1,000 records to the screen in a big batch, or paging through them 20 at a time with full page reloads for every chunk - have AJAX code step through only the records a user is interested in without reloading the containing page - big win. Or perhaps your page contains 8 statistical graphs of realtime activities in PNG format (the PNGs are dynamically generated from database data on the server side). The data behind each graph might potentially update as often as every 15 seconds, but more normally goes several minutes without changing. You can code some AJAX-style scripting into the page to do a quick remote call every 15 seconds to query the database's timestamps to see if any PNGs would have changed since they were last loaded, and then replace only those that need updating, only when they need to be updated. Huge savings versus sucking raw data out of the database, processing it into a PNG graph, and sending that over the network every 15 seconds as the whole page refreshes just in case anything changed.
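The timestamp-polling idea might look something like this pure helper (the shape is an assumption: maps from graph id to epoch milliseconds); only the ids it returns would trigger a PNG re-fetch on the next 15-second tick.

```javascript
// Given what the client last loaded and the server's current
// modification times, return only the graph ids worth re-fetching.
function staleGraphs(lastLoaded, serverTimestamps) {
  return Object.keys(serverTimestamps)
    .filter(id => (lastLoaded[id] || 0) < serverTimestamps[id]);
}
```

Graphs the client has never seen (no entry in `lastLoaded`) count as stale, and unchanged graphs cost only the tiny timestamp query, never a regenerated PNG.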
    • The PNG example isn't really anything to do with AJAX. It is just a case of proper use of the "If-Modified-Since" header. But you're right in general: a well designed AJAX app should cause less server load than a traditional page-by-page app with the same functionality. It is only the extra functionality that is not available without AJAX that should be adding any load.
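The If-Modified-Since logic mentioned above reduces to a small decision, sketched here framework-agnostically (the function name is hypothetical; both timestamps are assumed to be epoch milliseconds):

```javascript
// Decide whether to send a full response (200) or tell the client
// its cached copy is still fresh (304 Not Modified).
function conditionalStatus(ifModifiedSince, resourceMtime) {
  if (ifModifiedSince !== undefined && resourceMtime <= ifModifiedSince) {
    return 304; // client copy is at least as new: send headers only
  }
  return 200; // no conditional header, or resource changed: send body
}
```

A 304 costs a round trip but no body, which is where the savings for the PNG case come from even without any scripting.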
  • I'm not the first person to have this idea, but this brings up the question: do we need to define a new tier, e.g. something like a presentation services tier? I think so. I'm not going to go into it because it's pretty self-explanatory; but honestly I don't think Ajax is going away, and if you are going to make asynchronous calls in this fashion, you need hardware to back it up.
    • I'm not the first person to have this idea, but this brings up the question: do we need to define a new tier e.g. something like a presentation services tier?

      You mean like the Presentation Layer of the OSI model? The one below Application (second from the top)?

        -Charles
  • by jgardn (539054)
    You're going to want to separate out your web server if you are going to face any real load. A good mod_perl implementation with a PostgreSQL (or even MySQL) can give you the kind of dynamic speeds you need. Since the AJAX queries usually translate to a single call, you can probably get much more performance than the older style where each page had to make several queries.

    To serve up the webpage, I think you should go with a static HTTP server if you can. If you can't I would use a different server because
  • Cash cash and money (Score:5, Informative)

    by tod_miller (792541) on Tuesday December 06, 2005 @03:52AM (#14191844) Journal
    Don't mention web2.0 it is utter stupidity.

    People talked about RSS web server loads versus advertising revenue about 2 years ago on slashdot, so I hardly think people are that stupid.

    Also, if every page is (at best - which I doubt in your case) 50Kb - and the AJAX traffic for each call is 500 bytes - decide if that AJAX call saved an entire page refresh (from your site, a page is probably 120Kb; with ads, customer pages can be 200Kb..)

    So, initial download even at worst (or best) would be 50Kb, each call 500 bytes, so you can see the percentage of overhead is small, and if this call SAVED a refresh then you have saved 49.5p which is good for half a pint on Fridays between 12 and 2 at the little willow on hidge street.

    good day.
    • Web 2.0 is made of ... 600 million unwanted opinions in realtime
      Paul Moore
      Web 2.0 is made of ... emergent blook juice
      Ian Nisbet
      Web 2.0 is made entirely of pretentious self serving morons.
      Max Irwin
      Web 2.0 is made of ...Magic pixie dust (a.k.a . Tim O'Reilly's dandruff)
      Jeramey Crawford

      - and a load of other things, see http://www.theregister.co.uk/2005/11/11/web_two_point_naught_answers/ [theregister.co.uk]
      • And like 'podcasting' has a lot of twats fighting over who thought of the grand scheme (while ordinary people were making mp3's and letting people download them without the need for twatish words or syndicated xml), people will fight over who was the one who needs all the attention over web 2.0

        I like the one about pretentious self-serving morons and 600 million unwanted opinions.

        Web0.002 is like the web, only with a lower signal:noise ratio.

        Does anyone find the fucktarded way ingaydget puts every fucking ke
        • mod parent up (Score:4, Interesting)

          by samjam (256347) on Tuesday December 06, 2005 @06:01AM (#14192220) Homepage Journal
          It's those same idiots who spend all their time talking about how the web has failed to deliver its promises, and in reality are just trying to figure a way they can get all the money by patenting stuff folk have been doing for years.

          Because they are so dumb it all looks non-obvious to them; 1 click ordering is so dumb nobody bothered doing it, but hey- the customer (dolt) likes it, so as Amazon were the first senseless idiots to actually do it they get to patent it!

          Sam
        • Does anyone find the fucktarded way ingaydget puts every fucking keyword to a link back to its search engine in every story a bit uber-google-gay?

          Nothing beats a little intelligent, well thought out criticism.
  • by Nurgled (63197) on Tuesday December 06, 2005 @08:53AM (#14192657)

    This isn't directly related to your question, but it's something that most people experimenting with "AJAX" seem to be overlooking. It's too easy to fall into the trap of using XMLHttpRequest to do everything just because you can, but by doing that you are restricting yourself to the small set of browsers that actually support this stuff. This doesn't include many of the phone/PDA browsers that are becoming more common.

    Also worth noting is that changing the DOM can cause confusion to users of aural browsers or screen readers. In some cases this doesn't cause a major problem; for example, if you have a form page where choosing your country then changes the content of the "select region" box that follows, the user will probably be progressing through the form in a logical order anyway and so the change, just as in a visual browser, won't be noticeable. However, having a comment form which submits the comment using XMLHttpRequest and then plonks some stuff into the DOM will probably not translate too well to non-visual rendering, as the user would have to backtrack to "see" the change.

    Of course, depending on the application this may not matter. Google Maps doesn't need to worry about non-visual browsers because maps are inherently visual. (though that doesn't actually use AJAX anyway!) Google Maps would be useful on a PDA, however. I'm not saying "avoid AJAX at all costs!" but please do bear in mind these issues when deciding where best to employ it. Most of the time it really isn't necessary.

  • by Pascarello (909061) on Tuesday December 06, 2005 @10:32AM (#14193125)

    There are a lot of good points posted in here. Caching on the client and on the server are two big things for a good application that is using the XHR. A good database design is also key if you do not want to use "like", which slows down the search. Ajax In Action [manning.com], as discussed on Slashdot here [slashdot.org], covers this in chapter 10, where the project shows how to limit postbacks from an auto-suggest by using the client side efficiently. The basic idea examines the results returned: if the count is under a certain number, it uses JavaScript regular expressions to trim down the dataset instead of hitting the server. Plus there is a limit on the number of results returned, so it speeds up response time.

    One thing I cannot get through people's minds enough when I do my talks is that Ajax is not going to be a "client-based app" on the web. The main reason is network traffic getting in the way of your request. Imagine a dial-up user in India with your server sitting in the United States. The request is going to have to travel to the other side of the world and back with the slow speeds of dial-up. Testing on your localhost is going to look great until you get on an outdated shared server hosting multiple applications with a full network load. Yes, we are talking small requests pinging the server, but 1000 users with a 10-letter word could mean death if you designed the system badly!

    I love XHR, cough Ajax, but you need to look at what you are dealing with. The design of an XHR app can kill you if you do not think it out fully.

    My 2 cents,
    Eric Pascarello
    Coauthor of: Ajax In Action
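A rough sketch of the chapter-10 idea as paraphrased above (the threshold and function names are assumptions for illustration, not the book's actual code): when the cached result set for a shorter prefix is small enough, refine it locally with a regular expression instead of re-querying the server on every keystroke.

```javascript
// Assumed tuning knob: above this many cached rows, go back to the server.
const LOCAL_FILTER_THRESHOLD = 50;

// Narrow a cached suggestion list client-side; returns null when the
// cache is too large and the caller should issue a real server query.
function refineSuggestions(cachedResults, typedPrefix) {
  if (cachedResults.length > LOCAL_FILTER_THRESHOLD) {
    return null;
  }
  // Escape regex metacharacters so user input is matched literally.
  const escaped = typedPrefix.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const re = new RegExp("^" + escaped, "i");
  return cachedResults.filter(name => re.test(name));
}
```

Each extra letter the user types can then be served from memory, so only the first few keystrokes of a word ever hit the database.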

    • Imagine a dial up user in India with your server sitting in the United States. The request is going to have to travel to the other side of the world and back with the slow speeds of dial-up.

      Ummmm...no, it is not. UUCP is dead, thank God. It is going to travel from the dial-up user's computer to his local ISP at the slow speed. After that, it gets routed on the rest of the Internet. Last I checked, that isn't using dial-up connections. The last link is going to be back to the user at dial-up speeds but
      • I thought it was common knowledge that the slowness is only to the ISP, guess I should have stated that.

        The whole point is you need to realize that there is a difference between dialup, DSL, Cable, and localhost that a lot of developers tend to forget. I have seen people ask why it was so fast in development and sluggish in production. That is why I brought up the point.

        Some people think that slapping an XHR on a page is going to be a beam of light from the skies to end all of their troubles. Ends up it can
      • Worse than dialup is 56K frame relay WANs...
    • But...has anyone looked into how this exact same scenario would affect asp.net, which essentially does the same thing that AJAX does (server-side components that spit JavaScript code to the client, which executes that JavaScript to communicate bidirectionally with the server, essentially "out of band" of the original HTML stream)?
    • There is a lot to be said for some crazy stuff, like serving from a 33MHz 486; kinda gives a feel for a normal server getting really hammered.
  • by Anonymous Coward on Tuesday December 06, 2005 @11:26AM (#14193492)
    I am not quite sure what the question is, but I am fairly confident that AJAX is not the answer. IMO AJAX is a freak of nat..., computer science.
    AJAX's reliance on ECMAScript seems like a shaky foundation at best. I imagine debugging ECMAScript can be quite clunky, and even if tool support might solve this problem at some point, there is no guarantee that browsers will interpret ECMAScript the same way; it seems like an embrace-and-extend waiting to happen.
    I will not venture too far into the dynamically vs. statically typed language discussion other than stating that personally I prefer strongly typed languages.
    I get the impression AJAX is a quick-and-dirty solution to a problem that requires something more advanced.
    It seems like AJAX is an attempt to overcome the shortcomings of thin clients using the technology that had the widest market penetration, without considering whether the technology was the appropriate tool for the job.
    I am afraid that we will have to live with AJAX for a long time. A tragedy similar to VHS's victory over Betamax, where an inferior technology beat a superior one.
    I wonder if something like a next-generation X-Server browser plugin or a thick-client Java framework might not have been better suited for the job. I can't help but feel like AJAX is somehow trying to force a round peg through a square hole.
  • If you are worried about load, take the time to think about what really requires a round trip to the server, and what can more easily be done by populating some data on the webpage and then use that directly with javascript. My organization recently paid a certain overblown web design consultanting firm which botched a certain popular humor news site's website a lot of money, and they wanted us to use AJAX to autocomplete all of our forms - without making the simple connection that we only have about 100-20
  • Interesting. This isn't a 100% new idea about AJAX, but it's pretty darn close. I have only seen squeaks and squawks from a few people who are in the Web server business, but not much else:

    http://www.devx.com/asp/Article/29617 [devx.com]
    http://www.port80software.com/200ok/archive/2005/04/29/393.aspx [port80software.com]

    My opinion I would think folks haven't done enough apps to know what is what and the only people saying much are going to be the Web2.0 folks themselves (unlikely to own up to it quite yet) or the few folks like these sitti

  • http://www.mortbay.com/MB/log/gregw/?permalink=ScalingConnections.html [mortbay.com]

    Basically, the server that is used to handle X number of customers making a request every 2-3 minutes will get a multiple of that, because the requests are coming in much more frequently.

    You will need to tune the server for much higher throughput (more listeners/threads/workers) to deal with AJAX.
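A back-of-envelope sketch of why: polling multiplies the request rate, and worker pools have to be sized for the new rate. The numbers below are illustrative assumptions, not measurements.

```javascript
// Steady-state request rate from a population of polling clients.
// 1000 users refreshing a full page every ~150 s is ~6.7 req/s;
// the same users polling an AJAX endpoint every 15 s is ~66.7 req/s,
// a 10x increase the worker/thread pool must absorb.
function requestsPerSecond(users, secondsBetweenRequests) {
  return users / secondsBetweenRequests;
}
```

The individual requests are smaller, but listener/thread counts are driven by arrival rate and concurrency, not payload size.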

[Crash programs] fail because they are based on the theory that, with nine women pregnant, you can get a baby a month. -- Wernher von Braun

Working...