
Are Web Pages Getting Larger?

An anonymous reader asks: "I work for a large multinational in a remote part of the world. Our connectivity to the outside world (the Internet as well as company communications) is all done via a single E1 line - that's 2Mbps. Thousands of users. The company keeps access pretty well screwed down for security reasons, and because our link to the outside world costs almost $300K/year! Our growing problem is Internet traffic. While policing of non-business use is very active, Internet traffic continues to grow. I'm becoming convinced that one of our problems is that average web page size is growing. As more of the world enjoys broadband access, I think web developers have less reason to limit the size of their web pages. Large images, Flash animations and other size-increasing content seem increasingly common. Am I right? Can anyone point to a recent study that would support my theory, and help me convince my management that we just plain need more bandwidth?"
This discussion has been archived. No new comments can be posted.

  • the answer is... (Score:4, Interesting)

    by yagu ( 721525 ) * <{yayagu} {at} {gmail.com}> on Wednesday December 07, 2005 @11:42PM (#14207686) Journal

    I think the answer to your question lies within the technology itself, and the obvious answer is "yes", web pages are getting larger. Consider that:

    • processor speeds have increased nearly 80x since the early days of the web.
    • disk and memory capacities have increased even more dramatically than processor speeds.
    • average bandwidth (more optical networks, more 100M LANs, etc.) has increased dramatically.
    • the features and functionality of browsers have expanded dramatically (sorry, not going to go into the laundry list here).
    • the number of web sites has grown exponentially.
    • the ways of finding these sites have become easier, if not just plain simple.

    So, yes, the web universe is "expanding" in very nearly every dimension. To your specific question, will you need to petition for more bandwidth? Undoubtedly. And, I can't imagine it isn't doable at today's rates. It sounds like a balky bureaucracy, not a question of need. Good luck.

    I think maybe the better question to ask is what has happened to the general psyche of the average employee, and how do you address it? If I had to guess (see, I'm not proving anything with this post!) I'd guess the technology has easily stepped up to the task of underpinning the network use, but people still have not learned how to modulate and attenuate the siren that is the internet. (Maybe that would help decelerate your need to upgrade and expand bandwidth.)

    • the number of web sites has grown exponentially

      I think this is the biggest factor causing more and more bandwidth usage, and your best angle for convincing management to buy more. It is true that extra computing power lets people get away with writing bigger web pages, but also the number of web sites is quickly growing, as is the number of people who use them and the number of things they use them for. I'm sure there are hundreds of studies and surveys that show this. It's only a small exaggeration to

    • by n3hat ( 472145 )
      Install Opera and browse with the loading of graphics off by default. Right-click and download only the graphics that look useful. This saves time and bandwidth by avoiding the downloading of adverts and other gratuitous graphics. I did this all the time when I was on dialup with a 28.8 Kbit/sec modem and a 25MHz CPU -- the text downloaded quickly and I would download only 1 image in 20. I recommend Opera to anyone who's on a skinny pipe.
      • uhhhhhh, how do you "download only the graphics that look useful" if you haven't downloaded them yet and therefore can't see them?

        • You use ALT tags and context.

          If you're browsing Slashdot, for instance, you know that there aren't going to be any graphics on the page that you really need to see. It's all text, the only graphics are ads. So wasted bandwidth, essentially.

          However if you were reading some howto article or news story, that referenced a graphic, then you could click on it and download.

          It's not very hard; you just wait until you either need to see something because it's referenced in the text, or because the page doesn't make
  • Caching. (Score:5, Interesting)

    by slashkitty ( 21637 ) on Wednesday December 07, 2005 @11:49PM (#14207715) Homepage
    You could add a local caching proxy server and/or set browsers to cache longer to reduce bandwidth. Have you done an analysis on how much of the traffic is people just pulling up the same pages?
    • Re:Caching. (Score:4, Interesting)

      by duffbeer703 ( 177751 ) on Thursday December 08, 2005 @12:23AM (#14207883)
      I'm sure that a company that can afford a $300k E1 circuit in some third world shithole knows about caching proxy servers.
      • In that case, maybe they also know how to read the logs and compare total traffic and average-bytes-transferred-per-page-view to figures from 12 months ago. If they saved that info, of course.
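
        A rough sketch of that kind of log analysis, assuming the proxy writes Squid's default "native" access.log layout (timestamp, elapsed, client, code/status, bytes, method, URL, ident, hierarchy, type) -- the field positions would need adjusting for any custom logformat:

        #!/usr/bin/env python3
        # Summarise a Squid access.log: total bytes, average bytes per HTML page
        # view, and how many requests were repeats of a URL already fetched
        # (i.e. traffic a cache could have absorbed).
        import sys
        from collections import Counter

        total_bytes = 0
        html_bytes = 0
        html_views = 0
        url_counts = Counter()

        with open(sys.argv[1] if len(sys.argv) > 1 else "access.log") as log:
            for line in log:
                fields = line.split()
                if len(fields) < 10:
                    continue                      # skip malformed lines
                size = int(fields[4])
                method, url, ctype = fields[5], fields[6], fields[9]
                total_bytes += size
                url_counts[url] += 1
                if method == "GET" and ctype.startswith("text/html"):
                    html_bytes += size
                    html_views += 1

        repeats = sum(n - 1 for n in url_counts.values() if n > 1)
        print("total transferred:       %.1f MB" % (total_bytes / 1048576.0))
        print("HTML page views:         %d" % html_views)
        if html_views:
            print("avg bytes per HTML view: %d" % (html_bytes // html_views))
        print("repeat requests (cache candidates): %d" % repeats)

        Run it against this month's log and a log from a year ago and you have exactly the comparison being asked for.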

    • We set up a Squid cache on an old workstation. We were pulling about 10GB/month with our 2Mbps cable modem. Not a huge amount, but after installing the cache, and running our 75 users through it, we took it down to 3GB a month. Just part of being good Net citizens.

      • We set up a Squid cache on an old workstation. We were pulling about 10GB/month with our 2Mbps cable modem. Not a huge amount, but after installing the cache, and running our 75 users through it, we took it down to 3GB a month. Just part of being good Net citizens.

        I would hope they already have a local proxy server if they are serving "thousands" of users with a 2 megabit link. It would probably be a small expense to set up a remote proxy server at a colo facility somewhere and point their local proxy t

      • Just out of curiosity, does anyone know of an easy-to-setup Squid distro? Something that runs maybe as a live CD, with minimal or easy configuration? (Like Smoothwall, basically...)

        There are a few places I know of that are really dying for web cache servers, because even without looking at the logs I can tell that they are probably spending GB a month downloading the same banner ads and Google splashscreen to every user on the network. It's a situation where there are hundreds of users and they're all basic
  • Yes, it's like how many software developers write lazy code these days since memory is cheap. Bandwidth is cheap now so many web developers are writing lazy pages.

    Or it could be the dreaded Skype sucking up all your bandwidth as it has done at my company.
    • It's not just about laziness. They are including things they wouldn't have included when bandwidth cost more. They know people will probably be browsing with a quick connection, so they include more images, and the ones they include are of higher resolution and quality. Webpages are more of a multimedia experience than they used to be.
      And then there is laziness as well...
  • one word... (Score:3, Funny)

    by Emperor Palpatine ( 30639 ) on Thursday December 08, 2005 @12:03AM (#14207770) Homepage
    Lynx
  • by dtfinch ( 661405 ) * on Thursday December 08, 2005 @12:06AM (#14207787) Journal
    A website and all of its pages can be expected to grow over its lifetime, but a lot of newer sites are a lot smaller than previous generations. The wide adoption of CSS, and all the user-friendliness tech evangelism emphasizing simplicity over noise, has been paying off for those who listen. There are still a lot of sites, such as web forums, where the attitude seems to be to have really complex themes with almost no CSS and let mod_gzip/deflate deal with the task of making it small.
  • Ouch. I assume it's the European counterpart to the T1; T1s in the US are a lot cheaper, like 2% of that cost. I suppose it really comes down to the "remote part of the world" part.

    But yes, I wouldn't be surprised if web pages are getting larger. On average, web pages are a lot more complex than in the past, with lots of links and sections and borders done with lots of little images. There are more ways to post images now, and probably less care taken in compressing them, rather than compressing them carefully so as not to ruin the
    • by jerde ( 23294 )
      A T1 (or E1) in a downtown metro area == cheap.

      A T1 out in, say, Montana in the US? NOT cheap.

      It all depends where you are.
    • I assume it's the European counterpart to the T1; T1s in the US are a lot cheaper, like 2% of that cost.

      There's no need to guess when you can check. A quick gander at wikipedia [wikipedia.org] or google define [google.co.uk] says that an E1 offers 2.048 Mbit/s of bandwidth whereas a T1 offers 1.544 Mbit/s. However, your cost point is taken, since an E1 obviously isn't 50 times the speed of a T1...

    • An E1 data circuit via a satellite channel to Africa or the Middle East will run about US$125k to US$200k/year, in satellite costs, uplink and downlink station maintenance, and the actual internet connection in Europe or NYC.

      Compressors, TCP (packet shaping) optimisers, proxy caches, DNS/email caching, webvertising blocks, QoS and aggressive firewall rules are pretty much a given for any kind of expensive satellite connection. On the luser end, to really make use of the web they can set their browsers to not
  • by mattwarden ( 699984 ) on Thursday December 08, 2005 @12:07AM (#14207796)

    You need to change policy, not spend more money. Change the cache settings on clients. Insert caching proxy servers. Make sure mail, DNS, etc. is local. Et cetera. You should find a solution that does not have a linear (at best) relationship with the number of users.

    Web pages are getting larger. It might be what is causing your increase in utilization, but to me it's hard to believe (although if your users are viewing a lot of embedded videos, that's another story). And, if it's hard for me to believe, it's probably going to be hard for your PHB to believe, too.

    • Right. FlashBlock, AdBlock, AdZapper, all can be helpful.

      But $300K/yr for 2Mbits? They should just get a satellite Internet connection and pay $100/mo for a server on terra firma at a data center somewhere.

      I can't think of a scenario where you could justify that kind of cost. That's $12.50 per kilobit per month. A multiplex of V.34 modems would be about 30 times cheaper.
  • by Alpha27 ( 211269 ) on Thursday December 08, 2005 @12:27AM (#14207906)
    "Duh! Here's more content"

    With the broadband market now including a minimum of 25% of home users, and maybe up to 40% (though I haven't looked at those numbers in some time), that alone would be a contributing factor to the fact that yes, web pages are getting bigger.

    One way to see proof of this is using the wayback machine.
    http://www.waybackmachine.org/ [waybackmachine.org]

    I took a quick sampling of the NYTimes homepage and noticed that the page size has increased by a few kilobytes per year: from 56K in 2001, to 67K in 2003, to 83K in 2005. That's not even counting images. They've added more ad banners since the old days. If you do a Google search, I'm sure you will find stuff.

    Ad banners have increased in size and complexity over time. Streaming content is another addition, as well as more services running over the network.

    You probably have a number of contributing factors eating into your bandwidth, in addition to web pages.
    - Unless you have an internal instant messaging environment, you may have many people chatting away on external services that use your bandwidth.
    - Email for personal use. Jokes, funny attachments, and worms clogging up things.

    Here are a couple of suggestions to try and improve traffic:
    - block services that shouldn't be run at the office, like streaming music content.
    - block websites that you can see have an impact on traffic and that you believe users should not be visiting, e.g. QuickTime movies.
    - block your daytrading slacker coworkers.
    - block ad servers entirely! This should drastically improve your situation, and be the easiest to implement (see the sketch after this list).
    - switch to an internal instant messaging service, if you haven't done so already.
    - disable unnecessary services.
    - ensure that you have an Internet policy that prohibits users from using their work computers for personal use.
    - cache often used content.
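
    A minimal sketch of the hosts-file approach to the "block ad servers" item above (the domain list here is purely illustrative -- substitute whatever blocklist you trust). Redirect the output into the hosts file on the clients, or feed the same list to your internal DNS:

    #!/usr/bin/env python3
    # Emit hosts-file entries that null-route ad servers, so requests for
    # banners die locally instead of crossing the expensive link.
    AD_DOMAINS = [
        "ads.example.com",        # placeholder entries -- use a real blocklist
        "banners.example.net",
        "doubleclick.net",
    ]

    for domain in AD_DOMAINS:
        print("0.0.0.0\t%s" % domain)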

    • There may be more unique arrangements of bits to collect, but as for content, most of the Internet's rather slim. Mostly it's personal websites, livejournal and its analogues, sales, and aggregators. On the side, you get some regular news sites.

      I've found some informative and interesting personal websites, including nuwen.net and a fair bit about amateur linguistics. Other than that, if you want more content, the best place to look is universities.

      It's not more content, generally, just more media.
    • You need a smart gateway. Your E1's border router, or a gateway immediately behind it, needs traffic shaping and queueing [lartc.org]. Pretty much any circuit anywhere needs traffic queueing. Either side of your E1 could probably benefit from a compressed virtual circuit such as maybe a VPN. Compress all traffic that way. If you locally host your web servers, you can use a reverse proxy [apacheweek.com] that includes mod_gzip and other stuff to strip whitespace from their content. You can also control your users' behaviors with c
    • Here are a couple of suggestions to try and improve traffic:...

      A quick way of doing this might be to block all (or most) Flash presentations. It is my experience that any site that heavily employs Flash displays tends to be very thin on content, despite the fact that they usually chew through a fair amount of bandwidth.

  • wow (Score:2, Insightful)

    this is one of the dumbest /. stories ever

    I mean wtf....are they getting bigger? you mean...you mean as more and more people attach databases? and...and...java? etc...

    I mean really...how stupid is this

  • Welcome to the club.

    It isn't just users on low-bandwidth, high-cost links who are suffering this onslaught of dazzling HTML eye candy.

    Blackberries and Java-enabled cell phones also face the same problem. But there is a way, as other postings have shown -- a web proxy cache.

    One can defer the loading of images by substituting a like-sized null (one-pixel) image placeholder. These images, if the end user desires, can be retrieved selectively. This is the BIGGEST bandwidth saver. The other is filtering of ad content.

    Otherwise, it
  • by pizza_milkshake ( 580452 ) on Thursday December 08, 2005 @01:22AM (#14208153)
    according to archive.org/waybackmachine:
    html size (doesn't include images/dependencies)
            slashdot.org   yahoo.com   microsoft.com
    1996         -             7k         11k
    1997         -             9k          -
    1998        23k           10k         20k
    1999        35k           10k         20k
    2000        36k           12k         17k
    2001        41k           16k         21k
    2002        39k           17k         28k
    2003        39k           32k         31k
    2004        51k           33k         38k
    Today       19k           14k         22k

    the trend has certainly been up, but lately big sites' main pages seem to be slimming down, due to CSS as well as a tendency to store style and javascript in separate file
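
    If you want to reproduce numbers like these, here's a rough sketch. It assumes the Wayback Machine answers URLs of the form web.archive.org/web/<year>/<site> by redirecting to the nearest snapshot; the archive also injects its own toolbar markup into archived pages, so the figures are only approximate:

    #!/usr/bin/env python3
    # Fetch archived copies of a front page and report the size of the raw
    # HTML (images and other dependencies not included).
    import urllib.request

    SITE = "http://slashdot.org/"
    YEARS = ["1999", "2001", "2003", "2005"]

    for year in YEARS:
        url = "http://web.archive.org/web/%s/%s" % (year, SITE)
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                html = resp.read()
                print("%s  %3d kB  (%s)" % (year, len(html) // 1024, resp.geturl()))
        except Exception as exc:
            print("%s  failed: %s" % (year, exc))
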
    • In addition, it's worthwhile to note that today microsoft.com has ~100k of images; the other 2 are very image-light. Also, the big sites that get millions of hits per day seem better at slimming down content, while many sites that get fewer hits seem oblivious to the fact that their 200k .bmp logo is a waste of bandwidth. Also, large files such as flash and video are more prevalent these days.
    • A few sites, such as CNN, seriously slimmed down with CSS/HTML tricks (colored regions instead of image files) after 9/11, when most of these news sites were brought to their knees by the bandwidth overload. Sites that used to be over 400K brought themselves down to 300K or so...

      Since then, there has been a slight tick back up, but some lessons were learned.
    • This measure isn't going to be very accurate anymore. No longer are designers lumping everything into one HTML page. Now they are importing multiple css files, js files, ad banners, etc. Often the js/flash code pulls in even more content. Measuring the size of the HTML is only measuring a small piece of the actual bandwidth.

      That being said, I wonder if there is any way to get a truly accurate idea of how much content is being transferred per page visit, after all client-side code is executed. Perhaps a fire
      • Couldn't you do this better with a packet sniffer? Some sort of script that monitored the packet stream and told you exactly how many packets had gone in and out to a specific IP? (You'd have to reset the count every time you wanted to measure a different page at the same site...)

        But that would give you the "true transfer" involved in loading a particular webpage.
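
        A sniffer is the most accurate way, but you can get a rough approximation without one: fetch the HTML, then fetch everything it directly references and add up the bytes. A sketch along those lines (it ignores anything pulled in by JavaScript or Flash at runtime, so it still undercounts):

        #!/usr/bin/env python3
        # Rough "true transfer" estimate for one page: fetch the HTML, then the
        # images, scripts and stylesheets it references, and sum the bytes.
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        import urllib.request

        class ResourceCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.resources = set()
            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag in ("img", "script") and attrs.get("src"):
                    self.resources.add(attrs["src"])
                if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
                    self.resources.add(attrs["href"])

        def fetch(url):
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()

        page_url = "http://slashdot.org/"
        html = fetch(page_url)
        parser = ResourceCollector()
        parser.feed(html.decode("iso-8859-1", errors="replace"))

        total = len(html)
        for ref in parser.resources:
            try:
                total += len(fetch(urljoin(page_url, ref)))
            except Exception:
                pass          # broken or off-site reference; skip it

        print("HTML alone: %d kB; with %d referenced resources: %d kB"
              % (len(html) // 1024, len(parser.resources), total // 1024))
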
  • hmmm (Score:3, Informative)

    by way2trivial ( 601132 ) on Thursday December 08, 2005 @01:29AM (#14208179) Homepage Journal
    http://www.google.com/search?hl=en&lr=&safe=off&q=average+web+page+size [google.com]

    has some good results
    http://www.sims.berkeley.edu/research/projects/how-much-info/ [berkeley.edu]
    has info from 2000 and a link to the same info from 2003

    specific internet 2000
    http://www.sims.berkeley.edu/research/projects/how-much-info/internet.html [berkeley.edu]
    and 2003
    http://www.sims.berkeley.edu/research/projects/how-much-info-2003/internet.htm [berkeley.edu]

    It's worth noting that these types of statistics can take a year or more to compile...

  • With low bandwidth connections, the quality of the experience of accessing information over the web is lower, and so people access less.

    It is true that broadband has allowed pages to be bigger than before, but trends in UI design have also resulted in more text and fewer images, and with mod_gzip a mostly text/css site ends up being a lot lighter than its image/table predecessor.

    I suspect that if you decreased the quality of the surfing experience by introducing about 1.5 seconds of latency into each web re
  • by paulbiz ( 585489 ) on Thursday December 08, 2005 @01:40AM (#14208237) Journal
    Web developers (and programmers in general) don't care about optimizing anymore, they just want it to be done so they can get paid. Worrying about such trivial things as a few kbytes or making valid and accessible HTML is asking too much of them.

    From a web-designer standpoint, a lot of size can be reduced without altering the content.

    Are you serving up nicely formatted HTML with indentations? That's wasteful. Strip whitespace and carriage returns.

    Are you using HTML comments? Why? Does the customer really need to see them? Do you need to waste that bandwidth? Delete them or use comments in your server-side scripting language of choice.

    Are you using GIFs where PNGs would be smaller? Or PNGs where GIFs would be smaller?

    Have you optimized your PNGs [sourceforge.net], JPEGs [freshmeat.net] and GIFs? (I can't recall a GIF optimizer offhand, but there are plenty of non-destructive ones.)

    A 50x50 JPEG preview of an item does not need embedded comments, thumbnails, or EXIF data.

    If you must use animated GIFs, be sure they are optimized and not full-frame!

    Are you using pictures of words, when actual stylized text could convey the same message?

    Are you using inline JavaScript or CSS, rather than calling it from a cacheable external file?

    Are you using PDF, Flash or Java when it's not ABSOLUTELY necessary?

    From a user's standpoint, the best solution, short of getting more bandwidth: use less bandwidth. Turn off image loading or use a text-based browser. Don't browse the web as much. If you have a choice of sites to use, use the one that is smallest. Use a proxy. blah blah.
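
    To put numbers on the whitespace and comments point, here is a deliberately naive sketch of a minifying pass: it strips HTML comments and collapses whitespace between tags. It is not safe for pages with <pre>, <textarea>, whitespace-sensitive inline scripts, or IE conditional comments, so treat it as a measuring stick rather than a production filter:

    #!/usr/bin/env python3
    # Naive HTML shrinker: drop comments, collapse whitespace, report savings.
    import re
    import sys

    html = sys.stdin.read()
    out = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)   # HTML comments
    out = re.sub(r">\s+<", "> <", out)                       # whitespace between tags
    out = re.sub(r"[ \t]{2,}", " ", out)                     # runs of spaces and tabs

    sys.stdout.write(out)
    sys.stderr.write("before: %d bytes, after: %d bytes\n" % (len(html), len(out)))
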
  • Easy solution. Have your company rent the nearest apartment to your office. Then have the local telco install one of those 100Mbps super-fast, uncapped broadband lines. You know, the ones you Europeans say you pay 20 euro a month for while laughing at us Americans who are stuck with slower DSL and cable. Run network cable from the apartment to the building. Or use microwave. Problem solved, and for far less than $300k/year.

    Anyway, I'm just kidding... or am I?

  • Use MRTG (Score:3, Informative)

    by Matt Perry ( 793115 ) <perry DOT matt54 AT yahoo DOT com> on Thursday December 08, 2005 @01:57AM (#14208305)
    Can anyone point to a recent study that would support my theory, and help me convince my management that we just plain need more bandwidth?
    The only study you need is a report from MRTG [ee.ethz.ch]. Configure it and have it start graphing your network utilization for your E1. After a week or two you'll have several pretty graphs that can show your management exactly how saturated your connection is. Also, look at installing a caching proxy, such as Squid [squid-cache.org].
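
    If installing MRTG has to wait for approvals, a poor man's substitute that samples the interface byte counters directly will at least put numbers in front of management. This sketch is Linux-specific (it reads /proc/net/dev); IFACE is whichever interface faces the E1 router:

    #!/usr/bin/env python3
    # Poor man's MRTG: sample one interface's byte counters and print the
    # average throughput over each interval. MRTG does the same kind of
    # sampling (usually via SNMP) and turns it into graphs.
    import time

    IFACE = "eth0"       # interface facing the E1 router
    INTERVAL = 60        # seconds between samples

    def read_bytes(iface):
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])   # rx_bytes, tx_bytes
        raise RuntimeError("interface %s not found" % iface)

    rx0, tx0 = read_bytes(IFACE)
    while True:
        time.sleep(INTERVAL)
        rx1, tx1 = read_bytes(IFACE)
        rx_kbps = (rx1 - rx0) * 8 / 1000.0 / INTERVAL
        tx_kbps = (tx1 - tx0) * 8 / 1000.0 / INTERVAL
        print("%s  in: %7.1f kbit/s  out: %7.1f kbit/s"
              % (time.strftime("%H:%M:%S"), rx_kbps, tx_kbps))
        rx0, tx0 = rx1, tx1
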
  • For one, you can use a good proxy to cache webpages that are constantly brought back up. For two, you can use it to block out animated GIFs and flash game/ads that take so much bandwidth. I hope this helps.
  • You're an employee there. You're part of the company. You should be working to SAVE them money.

    So just asking for $$$ for bandwidth is failing them, in a way. If things aren't working out AT ALL right now and you have to fix it, they might pay. But if you're making do with things, they won't slap down another $100K.

    So try other ideas like giving them quotas. They'll learn to fit their browsing in the quotas. Cache the pages, try to disable some things like maybe .exe files, flash, jar files etc. I wouldn't say yo
  • They are getting bigger and bigger and bigger...
  • Sure web pages are getting larger but telecoms costs are decreasing too (at least in my part of the world), albeit not in direct proportion.
    Invest in a proxy adblocker, such as http://www.privoxy.org/ [privoxy.org]. This is easy to justify to management; in my experience it really doesn't interfere with legitimate web usage.
    Create a group policy for Outlook that disables the ability to change the automatic picture downloading setting in Outlook 2003.
    Adjust your users' Exchange delivery restrictions to something sensible. Does
  • by DrSkwid ( 118965 ) on Thursday December 08, 2005 @06:41AM (#14209140) Journal
    The non-adoption of mod_gzip (and whatever IIS *should* use) is also prevalent.
    Ticking the compression box used to crash IIS, but these days it actually works -- not that you'd notice:

    Response Headers - http://www.microsoft.com/ [microsoft.com]

    Date: Thu, 08 Dec 2005 10:30:40 GMT
    Content-Length: 23186
    Content-Type: text/html; charset=utf-8
    Cache-Control: private
    Server: Microsoft-IIS/6.0
    P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
    X-Powered-By: ASP.NET
    X-AspNet-Version: 2.0.50727

    200 OK

    Response Headers - http://slashdot.org/ [slashdot.org]

    Transfer-Encoding: chunked
    Date: Thu, 08 Dec 2005 10:40:11 GMT
    Content-Type: text/html; charset=iso-8859-1
    Cache-Control: no-cache
    Server: Apache/1.3.33 (Unix) mod_gzip/1.3.26.1a mod_perl/1.29
    SLASH_LOG_DATA: mainpage
    X-Powered-By: Slash 2.005000090
    X-Fry: Where's Captain Bender? Off catastrophizing some other planet?
    Pragma: no-cache
    Vary: User-Agent,Accept-Encoding
    Content-Encoding: gzip

    200 OK
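
    A quick way to check any server for this is sketched below: request the page twice, once advertising gzip support and once not, and compare the byte counts. urllib does not decompress the body for you, so len() here is what actually crossed the wire:

    #!/usr/bin/env python3
    # Compare what a server sends with and without "Accept-Encoding: gzip"
    # to see whether it compresses its HTML (mod_gzip, mod_deflate, IIS, ...).
    import urllib.request

    def body_size(url, accept_gzip):
        req = urllib.request.Request(url)
        if accept_gzip:
            req.add_header("Accept-Encoding", "gzip")
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read()
            encoding = resp.headers.get("Content-Encoding", "none")
        return len(body), encoding

    for url in ("http://slashdot.org/", "http://www.microsoft.com/"):
        plain, _ = body_size(url, accept_gzip=False)
        gz, enc = body_size(url, accept_gzip=True)
        print("%-30s plain: %6d bytes   gzip: %6d bytes (Content-Encoding: %s)"
              % (url, plain, gz, enc))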

  • First, get strong management backing. If you can't get it, just back off and forget the whole thing.

    Then:
    Option A: Fascist
    All outbound non-SSL web traffic (ports 80-83, 8000-8999) goes through a transparent proxy.
    All outbound SSL web traffic must go via a non-transparent proxy.
    All users get their own PCs and IP addresses.
    Users don't share PCs where possible.
    No other outbound or inbound traffic allowed to client networks.
    DNS, mail via the corporate servers.
    Then go through the web logs and get the top 10 users and the top 10 sites v
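
    For that last step, a sketch of the top-10 report, again assuming Squid's default native access.log layout (client is the 3rd whitespace-separated field, bytes the 5th, URL the 7th):

    #!/usr/bin/env python3
    # Rank proxy clients and destination sites by bytes transferred.
    import sys
    from collections import Counter
    from urllib.parse import urlsplit

    by_client = Counter()
    by_site = Counter()

    with open(sys.argv[1] if len(sys.argv) > 1 else "access.log") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue
            client, size, url = fields[2], int(fields[4]), fields[6]
            host = urlsplit(url).hostname or url   # CONNECT entries have no scheme
            by_client[client] += size
            by_site[host] += size

    print("Top 10 clients by bytes:")
    for client, size in by_client.most_common(10):
        print("  %-20s %8.1f MB" % (client, size / 1048576.0))
    print("Top 10 sites by bytes:")
    for host, size in by_site.most_common(10):
        print("  %-30s %8.1f MB" % (host, size / 1048576.0))
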
  • As others have suggested, you need to have local servers for caching web traffic, dns, etc. You can also block content such as advertisements and flash files.

    However, assuming that has all failed, you can still set up an extranet proxy. The advantage of this is that the machine in your extranet can be run at much lower monthly rates in an area of cheap bandwidth (such as the US), and it can COMPRESS the data before it hits your own pipe.

    Essentially, use the following steps:
    1. Purchase a dedicated server with
  • Sure, web pages get more and more crap, but there are still solutions. I saw the first post mentioned caches, but you can also block all ads, Flash, and embedded video; maybe there exist proxies which remove unnecessary stuff, and so on.

    Or you could move to Sweden or South Korea ;)
  • 1) Transparent caching via Squid
    2) Packet shaping to throttle or block certain protocols (kill off P2P from your corporate network, for example)
    3) Offsite proxying to compress text (HTML, JS, CSS, etc.) to save bandwidth. You might also want to think about recompressing JPEG/GIF/etc. offsite to save bandwidth at the sacrifice of quality
    4) REQUIRE FLASHBLOCK ON ALL COMPUTERS. Seriously, the good thing about FlashBlock is that it doesn't prevent Flash outright; Flash content only loads if you click on it. The benef
  • ... and software expands to fill all available space. So I think that answers the question of whether or not Web pages are getting bigger.

    The other problem is that the value of software is inversely proportional to the quantity of its output. So in that respect, while Web pages are getting bigger they are simultaneously becoming less useful.
