
Apache Bandwidth Limiting?

Posted by Cliff
from the whoa-cowboy! dept.
IOOOOOI asks: "I work at a high traffic web hosting company and we're trying to find a simple effective way to limit bandwidth hogs, some of whom we've clocked pulling over 4Gb/hr off our servers. We've tried mod_throttle and have looked into QoS/fair queuing as well as a couple of custom solutions in-house. None of these quite did the trick. Has anyone found an effective way to do this, one that can handle individual connection streams?"
  • But the solution seems easy to me. Simply charge your customers for their bandwidth.

    This rectifies the disparity between flat rate pricing and incremental bandwidth costs.

    When I went to find a solution for my web application, I chose to put in a DSL line and host it myself (because of the complexity of the app, this is cheaper than colocating my computers there). But I chose a DSL provider that doesn't give "all you can eat" - instead it charges for bandwidth.

    The reason I chose this is the theory that the bandwidth hogs would go elsewhere and the latency at this ISP would be much lower. So far this has proven true, and I've yet to exceed the basic "free" bandwidth level.

    If, on the other hand, you're talking about people who are downloading your customers' content at huge rates, then maybe you should charge your customers based on the service they are providing. If they're hosting lots of large files, they should probably be paying more...

    Dunno if that's a viable solution-- but smart customers will prefer someone who charges "by the byte"... because the bytes are better quality.
    • you are missing something. My friend runs a semi-large website and has had multiple hosts. His website contains a great many images (not porn), mostly gif, jpg, etc. The rest is html and cgi. What happens is most visitors to the site come in, look at a few pictures, download a couple and leave. Very recently someone wrote a spider program that went through his entire website and downloaded every single image. He stopped it before it finished, but he had a very hard time finding a way to prevent it from happening again.

      What Apache needs is something where you can say that any one visitor to the site only gets x amount of bandwidth - and if someone tries to use up too much by downloading too much, stop sending things to them.
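A minimal sketch of the per-visitor cap described above: a token bucket per client IP, under the assumption that the server can consult an allowance before each write. Apache provides no such hook out of the box, and the class and function names here are made up for illustration.

```python
import time

class TokenBucket:
    """One allowance per client: refills at `rate` bytes/s up to `burst`."""
    def __init__(self, rate, burst):
        self.rate = rate            # refill rate, bytes per second
        self.tokens = burst         # current allowance in bytes
        self.burst = burst          # maximum allowance (burst size)
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False                # over budget: stop sending to this client

buckets = {}

def allow(ip, nbytes, rate=2048, burst=8192):
    """Check whether `nbytes` may be sent to `ip` right now."""
    bucket = buckets.setdefault(ip, TokenBucket(rate, burst))
    return bucket.allow(nbytes)
```

A spider hammering the site would drain its bucket in the first few requests and then be refused until it slows to the refill rate, while casual visitors never notice the limit.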
    • The issue is not about our customer's bandwidth consumption and how much they can/can't use. It's about being able to provide services to all of their users without experiencing slowdowns because of the occasional hog.
      • The issue is not about our customer's bandwidth consumption and how much they can/can't use. It's about being able to provide services to all of their users without experiencing slowdowns because of the occasional hog.

        Are your slowdowns bandwidth or CPU based? If you are serving lots of static content (like porn), then Apache is going to kill you, due to its process-per-connection model, which the developers refuse (read: are too lazy) to fix. Zeus doesn't have this problem. Neither do the open source boa or thttpd (but they unfortunately lack many important features that may stop them from being used for commercial web hosting). Zeus will allow you to max out your network card (100mbit) on a modest machine (P3/500 w/ 1gb RAM).
        • Heh,

          C'mon, have you tried to tune Apache?
          Process-per-connection isn't a problem - you just have to keep the process pool big enough. There are some other tricks, but you could saturate a 100Mbit network with a p3/500 and Apache as well.

            • C'mon, have you tried to tune Apache? Process-per-connection isn't a problem - you just have to keep the process pool big enough. There are some other tricks, but you could saturate a 100Mbit network with a p3/500 and Apache as well.

            I seriously doubt it, not in real world conditions. When you include things like mod_php and mod_perl, those Apache processes get big. Our hosting servers (running Zeus) get 15-20 thousand hits a minute. That's ~333 hits per second. Say each client is downloading 50k images at 2k per second. That means you have 300+ new connections opening per second, that stay open for 25 seconds. So you need to be handling 7500+ concurrent connections.

            Keep alives and such will help with this, but a high traffic HTTP server needs to handle at least 1000-2000 connections concurrently. Show me a p3/500 that is running 2000 Apache processes, and processing scripts, etc., and isn't dying. It just won't happen. The process switching overhead alone will kill you. Read this page [kegel.com], then tell me that Apache's I/O model doesn't suck.
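For anyone checking the arithmetic in the comment above, here it is worked out. All the inputs come from the comment itself (none are measured); the concurrency estimate is just Little's law, arrival rate times time-in-system.

```python
# Numbers quoted in the comment above.
hits_per_minute = 20_000                 # upper end of "15-20 thousand hits a minute"
hits_per_second = hits_per_minute / 60   # ~333 new connections per second
file_size_kb = 50                        # "50k images"
client_rate_kbps = 2                     # "2k per second"

transfer_seconds = file_size_kb / client_rate_kbps   # 25 s per download

# Little's law: concurrent connections = arrival rate * time in system
concurrent = hits_per_second * transfer_seconds

print(round(hits_per_second), round(concurrent))   # prints: 333 8333
```

So the "7500+" figure in the comment is, if anything, conservative at the 20k hits/minute end of the range.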
    • it may seem easy to you to just charge the customer.. but as the technical support manager at a web hosting company, i can assure you that it isn't. :)

      a few hypothetical situations:

      * customer cannot afford to pay for bandwidth. customer leaves hosting company for another provider and hosting company has to eat the bill.

      * customer is getting hammered so hard that they affect other customers, resulting in a bunch of cranky customers with slow websites.

      it doesn't matter whether you offer unlimited bandwidth or charge per byte/mb/gb/whatever.. problems can still arise when someone's site gets slashdotted or someone leaks a password for a porn site.. :)
  • altqd (Score:3, Informative)

    by schmaltz (70977) on Friday July 19, 2002 @02:52PM (#3918363)
    try altqd. i've only used it on openbsd, but with it you can selectively throttle bandwidth.
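For reference, a hedged sketch of what an /etc/altq.conf for altqd might look like: reserving 30% of a 100 Mbit interface for HTTP via CBQ. The interface and class names are illustrative, and the exact filter field order should be checked against the altq.conf(5) man page before use.

```conf
# /etc/altq.conf sketch (syntax from memory -- verify against altq.conf(5))
interface fxp0 bandwidth 100M cbq
class cbq fxp0 root_class NULL pbandwidth 100
class cbq fxp0 http_class root_class pbandwidth 30 default
# match TCP traffic (proto 6) sourced from port 80 into the http class
filter fxp0 http_class 0 0 0 80 6
```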
  • Packeteer (Score:2, Informative)

    by shave (16748)
    Not an Apache based solution, but check out Packeteer Packetshapers [packeteer.com]..specifically the ISP models.. lets you set SLA's by protocol, IP, etc, perform rate limiting, and all other kinds of really cool stuff. Not exactly cheap but extremely effective, and simple to manage.
    • It seems a little biased/uninformed to mention only the Packeteer product here, so I'll broaden the horizon a little: A solution that is comparably priced (Still very expensive) and IMHO is a better choice is the Allot Netenforcer [allot.com]. There is also P-Cube [p-cube.com] and F5 [f5networks.com], but independent tests (and my own) makes the Allot-box the better bet. If you think Packeteer is easy to use, you should check them out yourself!
  • I have used mod_bandwidth to a certain extent; it may have what you're looking for. I would love to hear about other solutions though.
  • by merlyn (9918) on Friday July 19, 2002 @03:06PM (#3918483) Homepage Journal
    The solution that the (defunct) etoys.com adopted for their site was based on code from one of my Perl columns [stonehenge.com]. My code is based on CPU throttling, but you can quickly change it to bytes sent using the same technology.
  • mod_bandwidth (Score:2, Informative)

    by Gormless (30031)
    I use mod_bandwidth [cohprog.com] at work to simulate 56k connections to the web server.

    It works quite well and will throttle per-connection or per-virtualhost.
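For the curious, a hedged fragment of what that setup can look like. The directive names are from the mod_bandwidth documentation as I recall it, but the rates here are illustrative and worth double-checking against cohprog's docs.

```conf
# httpd.conf fragment (Apache 1.3 + mod_bandwidth; rates in bytes/sec)
<VirtualHost 10.0.0.1>
    ServerName www.example.com
    BandWidthModule On
    BandWidth all 7000        # cap each client at ~7000 bytes/s (about a 56k modem)
    MinBandWidth all -1       # share the cap among clients rather than guarantee a floor
</VirtualHost>
```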
  • I'm not very experienced with bandwidth limiting.
    I did play with mod_throttle, and all it did was actually allow all traffic until the limit was reached, and then deny the next new connections. Hmm, not too great actually.

    I'm planning to try out mod_bandwidth, but I dunno if it works differently.
    Bad link (sorry, I don't feel for html now):
    http://www.cohprog.com/v3/bandwidth/doc-en.html

    I tried playing with QoS on linux 2.4.
    According to the documentation it's actually quite hard to get it working well, because if you have a 10 Mbit connection, it will shape the traffic relative to that. But 10 Mbit is not always the same: if you have lots of lost packets it will behave differently than with a perfect connection.
    In my experience I couldn't reliably limit the traffic on a 10 Mbit connection down to 80 kbit (almost 1% of the 10 Mbit). My cable connection of 16 kbyte/s could still get choked.
    Maybe I should just get a 1 Mbit card and try again; the numbers might be better then.
    Or hey, a 100 Kbit card :-) should be doing perfect.
  • You could look at using a combination of content acceleration and bandwidth pools in squid [squid-cache.org]. I've used these features before and it actually works pretty well for static content. You can tune the caching params to allow for large files, etc.

    Derek
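A hedged squid.conf fragment showing the delay-pool side of this: a single class-2 pool that leaves the aggregate unlimited but caps each client IP at roughly 16 KB/s after a 64 KB burst. The values are illustrative; tune them against the squid delay-pools documentation.

```conf
# squid.conf fragment: per-client bandwidth pool
delay_pools 1
delay_class 1 2                        # class 2 = one aggregate bucket + per-host buckets
delay_parameters 1 -1/-1 16000/64000   # aggregate unlimited; 16 KB/s per host, 64 KB burst
delay_access 1 allow all
```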

  • by gibmichaels (465902) on Friday July 19, 2002 @05:46PM (#3919388)
    I am having the same problem, and I think you guys are missing the point. He said 4GB an hour, which means he probably has an OC-3, OC-12, or Gigabit Ethernet connection.

    "Blocking" network appliances such as Packeteer can't handle these high rates, and even if they had gigabit interfaces, they would only be able to do 600-800mbps on them.

    None of the kernel QoS/queueing options I've seen allow for anything other than classifying traffic or "fair" queueing. None of this seems to help someone that wants to limit all webserver connections to 2mbps - everything here is expecting an IP range, ports, or something to distinguish by. What if I don't want to?

    Apache needs real per-connection, per-user, and per-IP rate limiting. mod_throttle and everything else I've seen has to starve connections after they perform too well. How about something that hard-limits connections to 2 Mbps? I will pay for anything that can do that for Apache today...

    Forgive me if I have overlooked the obvious...
    • Apache needs real per-connection, per-user, and per-IP rate limiting. mod_throttle and everything else I've seen has to starve connections after they perform too well. How about something that hard-limits connections to 2 Mbps? I will pay for anything that can do that for Apache today...

      Then head for eBay, because a moderate-cost solution to your particular problem (limiting all web traffic to 2 megabits/s) is available for two bids and some cable work: buy two Ascend Pipeline 130s and run them back-to-back with a T1 cross-over cable. Another advantage of this solution is that your web server can be located near the webmaster, up to 5000 feet (without repeaters) from your network access point. Indeed, if you partition all of your services (mail, news, web server, ftp server) then no one service can completely swamp your connection.

      Don't like using T1 routers? Then get a moderately powerful Intel computer, install enough Ethernet interfaces to satisfy your needs, load up a modern Linux distribution with a 2.4.18 kernel and IPTABLES, and set up rules that will traffic-limit the interface to which you connect your Web server. If you are like a lot of people who run multiple servers on the same box, the rules can "customize" the throttling by service. Not only that, but you can throttle by direction as well: incoming HTTP could be limited to 30 kilobits/s while outbound HTTP could be limited to 3 megabits/s -- that takes care of some of the problems with DoS attempts on HTTP. The same can be done for other services, such as FTP, mail, and IRC. The amount of control that IPTABLES provides is, well, interesting.

      (Yes, I know that the *BSD people have something similar, but I know the IPTABLES stuff better and have seen it work.)

      C'mon, people, this isn't all that hard to do if you think and are willing to put a little money where your wishes are.
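One caveat on the Linux recipe above: the iptables `limit` match only polices packet rates, so actual byte-rate shaping is usually done with the companion `tc` tool from iproute2, with iptables used for classification. A hedged sketch (interface name and rates are examples; must be run as root):

```shell
# Cap everything leaving eth1 at 3 Mbit/s with a token-bucket filter.
# tbf shapes the whole device; per-service shaping needs a classful
# qdisc plus filters, which the 2.4-era traffic-control HOWTOs cover.
tc qdisc add dev eth1 root tbf rate 3mbit burst 16kb latency 400ms
```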

    • I am having the same problem, and I think you guys are missing the point. He said 4GB an hour, which means he probably has an OC-3, OC-12, or Gigabit Ethernet connection.

      That's only 9.1 mbps. T1 = 1.544, T3 = 44.736, OC1 = 51.84. OCx = OC1 * x.
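The arithmetic, for anyone checking. The exact figure depends on whether "GB" means 10^9 or 2^30 bytes, which is why estimates from about 8.9 to 9.5 Mbit/s all show up for this number; either way it is a small fraction of OC-3's 155 Mbit/s.

```python
# Convert "4 GB per hour" into megabits per second.
gb_per_hour = 4
bits_per_hour = gb_per_hour * 1024**3 * 8   # treating 1 GB as 2**30 bytes
mbps = bits_per_hour / 3600 / 1_000_000

print(round(mbps, 1))   # prints: 9.5
```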
    • by keepper (24317)
      how much are you going to pay me then?

      Using ipfw and dummynet on FreeBSD is the way I have gone in a VERY high traffic hosting and colo company.

      you can not only simulate a link of a certain speed, but can also limit any ip that hits a certain destination to a max speed...

      :-P
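A hedged sketch of that ipfw/dummynet approach. Rule numbers, the 2 Mbit/s aggregate, and the per-client 256 Kbit/s figure are all illustrative, and the mask syntax is worth verifying against ipfw(8). The two schemes are alternatives; a packet stops at the first matching pipe rule unless net.inet.ip.fw.one_pass is cleared.

```conf
# Scheme 1 -- aggregate cap: all outbound HTTP through one 2 Mbit/s pipe
ipfw pipe 1 config bw 2Mbit/s
ipfw add 100 pipe 1 tcp from any 80 to any out

# Scheme 2 -- per-client cap: the mask makes dummynet create one dynamic
# 256 Kbit/s pipe per destination address
ipfw pipe 2 config bw 256Kbit/s mask dst-ip 0xffffffff
ipfw add 200 pipe 2 tcp from any 80 to any out
```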
  • Have you considered using FreeBSD Traffic Shaping? ("man ipfw").

    here [onlamp.com] is a story about a problem that sounds identical to yours. A hosting company (using a virtual host) has a customer who uses excessive bandwidth, and they wish to throttle it. After trying mod_throttle, they went with a better solution.

    If you're not using FreeBSD, I am very surprised. Perhaps you should look into it.

    D.
  • See http://www.cisco.com/warp/public/105/policevsshape.html

    for a good tutorial on Traffic Policing and Traffic Shaping, two ways of doing what you require with Cisco hardware.
  • Cisco has a great IOS feature called CAR that can do exactly what you're asking for at the router level. You can rate-limit specific physical ports on the router (even using a schedule such as from 8am to 8pm, allow anything, from 8pm to 8am throttle to xxx kbytes/second).

    This is assuming that you're not running virtual hosting (multiple domains sharing one IP address), in which case all customers on that IP/physical port would be affected by the CAR limitations you would impose. CAR can handle the amount of traffic you're talking about; just make sure that the puppy has a good processor and plenty of RAM.
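A hedged IOS fragment of the sort of CAR rule described above. The interface, ACL number, and burst sizes are illustrative; CAR's conform/exceed burst values need tuning for the link in question.

```conf
! Limit outbound HTTP on the customer-facing port to 2 Mbit/s
interface FastEthernet0/0
 rate-limit output access-group 101 2000000 375000 750000 conform-action transmit exceed-action drop
!
access-list 101 permit tcp any eq www any
```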
  • Use Zeus (Score:4, Insightful)

    by Electrum (94638) <david@acz.org> on Saturday July 20, 2002 @09:29AM (#3922070) Homepage
    High traffic and Apache is almost an oxymoron. If you are running a high traffic web hosting company, then you need to stop playing games and use Zeus. Apache has its strong points, like being free and open source, but that's about it. If Zeus was free, then it wouldn't just be the best web server for UNIX platforms, it would also be the most popular.

    You want Zeus because it is high performance (it doesn't use the toy process-per-connection model). It comes with an easy to use, powerful web based GUI. The GUI doesn't just hold your hand. It lets you set everything, and then will show you the exact lines that are changing in the config files.

    It doesn't use the extremely complex config file format that Apache uses. A good comparison is BIND and djbdns: do you want to deal with the incredibly complex BIND zone files, or the simple, one-record-per-line data files that djbdns uses? Zeus config files are one record per line, of the form "modules!throttle!enabled yes". It also comes with tools that let you do everything from scripts - but only if you want to. Otherwise, use the GUI.
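To illustrate the format: only the "modules!throttle!enabled" key comes from this comment; the other key names below are hypothetical stand-ins for whatever the GUI actually writes out.

```conf
# Zeus virtual-server config sketch: one "key value" record per line
modules!throttle!enabled      yes
modules!throttle!max_users    2000      # hypothetical key name
modules!throttle!max_bytes    250000    # hypothetical key name
```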

    And speaking of throttling, Zeus does it correctly, unlike any other web server (at least any of the freely available UNIX ones, as that is all I am familiar with). It will let you set a limit on the number of users, or set a max number of bytes per second on a virtual server or subserver level. It doesn't serve some people at max speed and then start dropping connections (mod_throttle) or set the throttle speed at the beginning of the request, then start dropping connections (thttpd).

    Virtual servers in Zeus actually make sense. There is no master server configuration like in Apache. Instead, you create one or more virtual servers. As such, each virtual server has its own separate configuration. Virtual servers can serve a single website, or any number of websites, via subservers. Subservers all share the configuration of the virtual server (kind of like Apache's mass virtual hosting only much better). No more restarting the server to add a site. Simply create the directory, and it starts serving the site.

    There are plenty of other reasons why Zeus is superior to Apache, but the ones I listed should be enough to start considering it. No, I don't work for Zeus or own stock (don't think they have any) or anything like that. I'm just a satisfied customer.

    For some things, Apache works just fine. But for anything high traffic, requires throttling or needs a flexible or scripted configuration, Zeus beats Apache hands down. It's worth every penny. Check it out. I doubt you'll be disappointed.

    (subconscious message to Apache developers: stop being lazy and make Apache more like Zeus!)
    • Thanks for the actual good reply to this - I wish there was a way to do it with Apache - I hope our developers' scripts mod well to work with Zeus ;) We have been looking at it for a long time, but we have to make a case for it at work.

      Are the Slashdot readers this ignorant? Everyone else suggested QoS methods that would do nothing to help *per user* connections. Are people really this obtuse? The first poster and I were very clear about what we wanted to do, and people came up with pretty lame stuff that was way off the mark.

      The problem with the IT industry is that there are so many clueless people that have the "experience", and make good money. They dilute the talent and make it hard for a real wiz to make money anymore. How many people do you know that fall in the category: "Knows enough to be dangerous"?
      • The Squid [nlanr.net]+delay-pools [squid-cache.org] someone suggested may be viable as well (or there's Oops [paco.net], another web cache which can run in reverse mode and does bandwidth limitation; I usually prefer it over Squid but haven't tried pushing it particularly hard).

        Zeus really is great, it has some wonderful clustering features too, admin for the whole cluster can be done from one place. At the very least it's worth taking a look at the 30-day trial version to get an idea for how much work it would be to port the scripts across.

        On a large site, you'll quite likely save the license cost by the decreased use of resources.

        (AOLServer [aolserver.com] is a good server too, though it doesn't have the nice admin of Zeus there's a lot it can do and is also very efficient. I'm not sure whether it can throttle bandwidth by itself though).

  • how about a script that goes through the output of netstat every 5 minutes and adds entries to a table? If that table shows "interesting" traffic, then nail it with something like ipfilter, or just set it to a null route. In the case of a dedicated hosted server, stick in another ethernet card, route all the funny traffic to it, and let the switch or router set it to something slow. It's amazing what a perl script, a setuid wrapper for route and a 10mb ethernet card will do.
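A sketch of the watcher described above (in Python rather than perl): parse netstat-style output, count established connections per remote IP, and report any IP over a threshold so a wrapper script could null-route it. The sample input format (Linux `netstat -n` TCP lines) and the `hogs` function name are assumptions for illustration.

```python
from collections import Counter

def hogs(netstat_output, max_conns=50):
    """Return remote IPs holding more than max_conns established connections."""
    counts = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[0] == "tcp" and fields[5] == "ESTABLISHED":
            remote_ip = fields[4].rsplit(":", 1)[0]   # strip the port
            counts[remote_ip] += 1
    return [ip for ip, n in counts.items() if n > max_conns]

# Synthetic sample: one host with 76 open connections, one with a single one.
sample = "\n".join(
    ["tcp 0 0 10.0.0.1:80 192.168.1.9:%d ESTABLISHED" % p for p in range(1024, 1100)]
    + ["tcp 0 0 10.0.0.1:80 192.168.1.10:2048 ESTABLISHED"]
)
print(hogs(sample))   # prints: ['192.168.1.9']
```

From cron, a wrapper would pipe `netstat -n` into this and hand the flagged IPs to the setuid route wrapper the comment mentions.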
