
AJAX Applications vs Server Load?

Squink asks: "I've got the fun job of having to recode a medium-sized (500-1000 users) community site from the ground up. For this project, gratuitous use of XMLHttpRequest appears to be in order. However, with all of the hyperbole surrounding AJAX, I've not been able to find any useful information regarding server load [Apache + MySQL] when using some of the more useful AJAX applications, such as autocomplete. Is this really a non-issue, or are people neglecting to discuss this for fear of popping the Web 2.0 bubble?"
  • by /ASCII ( 86998 ) on Monday December 05, 2005 @08:04PM (#14189764) Homepage
    I've been toying around a bit with AJAX, and it really depends on what you are doing. Autocomplete should ideally be implemented against an indexed table of common words, or something like that; if each lookup does anything complex it will be dog slow because of the large number of requests. Also, client-side caching is good for making sure the amount of network traffic doesn't get out of hand. You can do some cool things with very little JavaScript, like my English-to-Elvish interactive translator [no-ip.org].

    Other AJAX concepts actually make things faster. I've been implementing a forum that never reloads. When you write an entry and press the submit button, an XmlHTTP request is sent containing the new post and the id of the last received post. The reply contains all new posts, which are then appended to the innerHTML of the content div tag. Less CPU time is spent regenerating virtually identical pages over and over, and less data is sent over the network.
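
    For concreteness, here is a minimal client-side sketch of that approach. The endpoint, parameter names and the X-Last-Post-Id response header are illustrative assumptions, not the actual code:

        // Sketch: submit a post and pull down anything newer, without a reload.
        var lastPostId = 0;

        function submitPost(text) {
            var req = new XMLHttpRequest();
            req.open("POST", "/forum/post.php", true);
            req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            req.onreadystatechange = function () {
                if (req.readyState === 4 && req.status === 200) {
                    // Append only the new posts; the rest of the page is untouched.
                    document.getElementById("content").innerHTML += req.responseText;
                    lastPostId = parseInt(req.getResponseHeader("X-Last-Post-Id"), 10);
                }
            };
            req.send("body=" + encodeURIComponent(text) + "&since=" + lastPostId);
        }

    The server only has to render posts newer than the id it was handed, which is where the CPU and bandwidth savings come from.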
  • Really... (Score:2, Interesting)

    by wetfeetl33t ( 935949 ) on Monday December 05, 2005 @08:49PM (#14190079)

    For this project, gratuitous use of XMLHttpRequest appears to be in order.

    All this hyperbole surrounding AJAX is just that - hyperbole. I dunno exactly what your requirements are, but the first thing you can do to ensure that AJAX requests don't bog down your system is to decide whether you really need all that fancy AJAX stuff. It's neat stuff, but the majority of web apps can still be done the conventional way, without wasting time on AJAX code.
  • by natmsincome.com ( 528791 ) <adinobro@gmail.com> on Monday December 05, 2005 @09:57PM (#14190445) Homepage
    Just remember: it's not half a request, it's a full request. The easiest way to think about it is to imagine that instead of using AJAX you reload the page.

    Now that isn't quite true, as you only reload part of the page.

    The common example is Google Suggest. Instead of a list of searches, let's try a list of products. Say you use AJAX against a database of 1000 products and you have 5 users hitting the database. If you just did a SELECT on every keystroke it would be really bad: at least 5 database hits per second. In the old environment it would have been 1 hit every second (assuming it took 5 seconds to fill in the form). So in this case you've increased your database load by more than 5 times (even more so if you used LIKE instead of = in the SQL).

    To get around this you have a number of options. Here are some of the ideas I've seen:
    1. Add columns to the product table holding the first 1, 2, 3 and 4 characters of the name, and index them. This means you can use = instead of LIKE, which is faster.
    2. Hard-code the products in an array.
    3. Use hash files, e.g. create 10 hash files for different prefix lengths, then check the length of the input, load the correct hash file and look up the key (see the sketch below).

    The basic concept is to do some kind of work up front to decrease the overhead of each call, because you'll end up handling a lot more requests.
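
    A rough client-side sketch of option 3, assuming the precomputed tables are published as static JSON files keyed by lowercased prefix (the file names and layout are illustrative):

        var prefixCache = {};   // one lookup table per prefix length, fetched on demand

        function suggest(input, callback) {
            if (!input) { callback([]); return; }
            var len = Math.min(input.length, 4);            // only lengths 1-4 are precomputed
            var key = input.substring(0, len).toLowerCase();

            function lookup(table) {
                callback(table[key] || []);                 // exact key lookup, no LIKE needed
            }

            if (prefixCache[len]) {                         // cached: no server hit at all
                lookup(prefixCache[len]);
                return;
            }
            var req = new XMLHttpRequest();
            req.open("GET", "/complete/len" + len + ".json", true);
            req.onreadystatechange = function () {
                if (req.readyState === 4 && req.status === 200) {
                    prefixCache[len] = JSON.parse(req.responseText);
                    lookup(prefixCache[len]);
                }
            };
            req.send(null);
        }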
  • by uradu ( 10768 ) on Monday December 05, 2005 @10:09PM (#14190505)
    The trick in minimizing server traffic is to come up with the right remote data granularity, i.e. don't fetch too much or too little data on each trip. At one extreme you'd fetch essentially your entire database in a single call and keep it around on the client, wasting both its memory and the bandwidth to get data that will mostly go unused. At the other extreme you simulate traditional APIs, which typically get you what you want in very piecemeal fashion, requiring one function call to get this bit of data, which is required by the next function, which in turn returns a struct required by a third function, and so on until you finally have what you really want.

    The happy medium is somewhere in between. Come up with functions that return just the right amount of data, including sufficient contextual data to not require another call. For a contacts-type app you would provide functions to read and write an entire user record at a time, as well as a function to obtain, in a single call, a list of users with all the columns required to display them. You will generally find it more efficient, in both bandwidth and client-side processing, to tailor the remote functions to the UI that needs them, fetching or uploading just the required data for a particular application screen or view. Once you have a decent remote function architecture you will no doubt have considerably less server traffic, since practically only raw data makes the trip anymore.
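
    As an illustration of that granularity, a contacts-style client might need only one call per screen. The endpoint paths and JSON shape here are assumptions:

        function fetchJson(url, callback) {
            var req = new XMLHttpRequest();
            req.open("GET", url, true);
            req.onreadystatechange = function () {
                if (req.readyState === 4 && req.status === 200) {
                    callback(JSON.parse(req.responseText));
                }
            };
            req.send(null);
        }

        // List view: one request returns every column the list screen displays,
        // so no follow-up call per row is needed.
        function loadContactList(callback) {
            fetchJson("/api/contacts?view=list", callback);
        }

        // Detail view: one request returns the whole record for the edit screen.
        function loadContact(id, callback) {
            fetchJson("/api/contacts/" + encodeURIComponent(id), callback);
        }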
  • by davecb ( 6526 ) * <davecb@spamcop.net> on Monday December 05, 2005 @11:18PM (#14190813) Homepage Journal
    You can do a surprising amount with nothing but the response times of each kind of transaction.

    Make some simple test scripts using something like wget, and capture the response time with PasTmon or Ethereal-and-a-script, one test for each transaction type, while at the same time measuring CPU, memory and disk I/Os.

    At the loads that wget or a human user will generate, 1/response time equals the load at 100% utilization of the application (not 100% CPU!), so if the average RT is 0.10 seconds, 100% utilization will happen at 10 requests per second (TPS).

    For each transaction type, compute the CPU, memory, disk I/Os and network I/Os at 100% application utilization. That becomes the data for your sizing spreadsheet.

    If you stay below 100% load when doing your planning, you'll not get into ranges where your performance will dive into the toilet (:-))

    --dave
    This is from a longer talk for TLUG next spring
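
    If you would rather measure from the browser than with wget, a crude variant of the same idea is to time each AJAX call and take 1/average response time as the throughput at 100% application utilization (an illustration only, not the TLUG material):

        var samples = [];        // response times in milliseconds

        function timedGet(url, callback) {
            var start = new Date().getTime();
            var req = new XMLHttpRequest();
            req.open("GET", url, true);
            req.onreadystatechange = function () {
                if (req.readyState === 4) {
                    samples.push(new Date().getTime() - start);
                    callback(req);
                }
            };
            req.send(null);
        }

        function maxThroughput() {
            var sum = 0;
            for (var i = 0; i < samples.length; i++) sum += samples[i];
            var avgSeconds = (sum / samples.length) / 1000;
            return 1 / avgSeconds;   // e.g. an average RT of 0.10 s gives ~10 TPS
        }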

  • by photon317 ( 208409 ) on Monday December 05, 2005 @11:50PM (#14190954)

    I'm in the latter stages now of my first serious professional project using AJAX-style methods. In my experience so far, it can go either way in terms of server load versus a traditional page-by-page design. It all depends on exactly what you do with it.

    For example, autocompletion definitely raises server load compared to a search field with no autocompletion. Using an autocomplete widget with proper timeout support (like the Prototype/Scriptaculous stuff) is a smart thing to do - I've seen home-rolled designs that re-checked the autocomplete on every keystroke, which can bombard a server under the hands of a fast typist and a long search string. But even with a good autocomplete widget, the load will go up compared to not having it. That's the trade-off: you've added new functionality and convenience for the user, and it comes at a cost. Many AJAX enhancement techniques will raise server load in this manner, but generally you get something good in return. If the load gets too bad, you may have to reconsider what's more important to you - some of those new features, or the cost of buying bigger hardware to support them.
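
    For anyone rolling their own, the timeout idea amounts to debouncing keystrokes: wait until the user has paused before hitting the server. A bare-bones sketch (the /autocomplete endpoint is an assumption):

        var pending = null;

        function onSearchKeyUp(field) {
            if (pending) clearTimeout(pending);              // restart the timer on every keystroke
            pending = setTimeout(function () {
                var req = new XMLHttpRequest();
                req.open("GET", "/autocomplete?q=" + encodeURIComponent(field.value), true);
                req.onreadystatechange = function () {
                    if (req.readyState === 4 && req.status === 200) {
                        document.getElementById("suggestions").innerHTML = req.responseText;
                    }
                };
                req.send(null);
            }, 300);                                         // only fire after 300 ms of silence
        }
        // Usage: <input onkeyup="onSearchKeyUp(this)"> plus a <div id="suggestions">.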

    On the flip side, proper dynamic loading of content can save you considerable processing time and bandwidth in many cases. Rather than loading 1,000 records to the screen in a big batch, or paging through them 20 at a time with full page reloads for every chunk, have AJAX code step through only the records a user is interested in without reloading the containing page - big win. Or perhaps your page contains 8 statistical graphs of realtime activity in PNG format (the PNGs are dynamically generated from database data on the server side). The data behind each graph might update as often as every 15 seconds, but more normally goes several minutes without changing. You can code some AJAX-style scripting into the page to do a quick remote call every 15 seconds that queries the database's timestamps to see whether any PNGs would have changed since they were last loaded, and then replace only those that need updating, only when they need to be updated. Huge savings versus sucking raw data out of the database, processing it into a PNG graph, and sending it over the network every 15 seconds as the whole page refreshes just in case anything changed.
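
    A sketch of that graph-refresh pattern might look like the following; the /graphs/changed endpoint and its comma-separated reply are assumptions for illustration:

        var lastCheck = 0;       // unix timestamp of the last poll

        function pollGraphs() {
            var req = new XMLHttpRequest();
            req.open("GET", "/graphs/changed?since=" + lastCheck, true);
            req.onreadystatechange = function () {
                if (req.readyState === 4 && req.status === 200) {
                    lastCheck = Math.floor(new Date().getTime() / 1000);
                    var changed = req.responseText ? req.responseText.split(",") : [];
                    for (var i = 0; i < changed.length; i++) {
                        var img = document.getElementById("graph-" + changed[i]);
                        // Cache-busting parameter so the browser refetches only this PNG.
                        if (img) img.src = "/graphs/" + changed[i] + ".png?t=" + lastCheck;
                    }
                }
            };
            req.send(null);
        }

        setInterval(pollGraphs, 15000);    // the quick check runs every 15 seconds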
  • mod parent up (Score:4, Interesting)

    by samjam ( 256347 ) on Tuesday December 06, 2005 @06:01AM (#14192220) Homepage Journal
    It's those same idiots who spend all their time talking about how the web has failed to deliver its promises, and in reality are just trying to figure a way they can get all the money by patenting stuff folk have been doing for years.

    Because they are so dumb it all looks non-obvious to them; 1-click ordering is so dumb nobody bothered doing it, but hey, the customer (dolt) likes it, so as Amazon were the first senseless idiots to actually do it they get to patent it!

    Sam
