Comcast Cheating On Bandwidth Testing? 287
dynamo52 writes "I'm a freelance network admin serving mainly small business clients. Over the last few months, I have noticed that any time I run any type of bandwidth testing for clients with Comcast accounts, the results have been amazingly fast — with some connections, Speakeasy will report up to 15 Mbps down and 4 Mbps up. Of course, clients get nowhere near this performance in everyday usage. (This can be quite annoying when trying to determine whether a client needs to switch over to a T1 or if their current ISP will suffice.) Upon further investigation, it appears that Comcast is delivering this bandwidth only for a few seconds after any new request and it is immediately throttled down. Doing a download and upload test using a significantly large file (100+ MB) yields results more in line with everyday usage experience, usually about 1.2 Mbps down and about 250 Kbps up (but it varies). Is there any valid reason why Comcast would front-load transfers in this way, or is it merely an effort to prevent end-users from being able to assess their bandwidth accurately? Does anybody know of other ISPs using similar practices?"
This is an advertised feature I believe (Score:5, Informative)
Powerboost (Score:5, Informative)
Web browsing optimisation (Score:5, Informative)
This is most likely "PowerBoost" (Score:3, Informative)
All it does is give you short bursts of high bandwidth; it's really more marketing talk than useful.
My ISP, Cox, does this too, though once the "PowerBoost" thing is off, I steadily get the bandwidth I'm supposed to get. Dunno about Comcast.
SpeedBoost is the thing (Score:5, Informative)
Some consumers may not notice the speed increase when downloading smaller files, such as text-based e-mails and simple Web sites with few graphics. However, customers who frequently download large files, such as software, games, music, photos, and videos will now download at speeds that are faster than ever before. For example, PowerBoost significantly reduces the time it takes to download a one hour television program. Comcast subscribers at the 6 Mbps tier would reduce their wait time in half - from 4 minutes and 29 seconds to 2 minutes and 15 seconds. And MP3 fans will be able to download music files as fast as 2.2 seconds!
Token Bucket (Score:5, Informative)
http://en.wikipedia.org/wiki/Token_bucket [wikipedia.org]
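The Wikipedia article describes the mechanism; here is a minimal Python sketch of a token bucket, which matches the burst-then-throttle behavior the OP observed. The rate and capacity values are made-up illustration numbers, not Comcast's actual parameters:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at `rate` per
    second up to `capacity`. A transfer proceeds only while tokens are
    available, so a full bucket allows a short initial burst, after
    which traffic is held to the sustained rate."""

    def __init__(self, rate, capacity):
        self.rate = rate            # sustained tokens (e.g. bytes) per second
        self.capacity = capacity    # burst size
        self.tokens = capacity      # start full: a fresh connection bursts
        self.last = time.monotonic()

    def consume(self, n):
        now = time.monotonic()
        # refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True     # within limit: forward the traffic
        return False        # bucket empty: drop or delay

# illustrative numbers: ~150 KB/s sustained, ~1.5 MB burst allowance
bucket = TokenBucket(rate=150_000, capacity=1_500_000)
```

A short speed test never drains the bucket, so it only ever sees the burst rate; a 100+ MB transfer drains it and settles at the sustained rate.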
Re:This is an advertised feature I believe (Score:5, Informative)
Re:Powerboost (Score:5, Informative)
Re:Powerboost (Score:3, Informative)
Speeds (Score:2, Informative)
~Sun
If those clients are running Windows... (Score:3, Informative)
My advertised and provisioned rate via Atlantic Broadband cable is 5/512. I am actually getting closer to 6 or 7 down and 468 up at all times due to some tweaking I did. Even the AtlanticBB tech seemed a bit shocked that I was getting more than 5 down and said it was unusual, but they wouldn't re-provision the line or anything because of it. I count myself lucky, because Verizon's service here is absolute rubbish: $25.00/month for 1.5/768 DSL that, to put it as politely as possible, isn't actually working for more than two weeks per month, because they are too cheap to replace lines that were put up in this town in the 1950s at the latest. Not to mention they never actually bother to show up for scheduled appointments to rewire buildings that were constructed pre-1900, such as mine, a big old Victorian-type home turned into apartments.
Powerboost does mess with speed testing, however those "tests" are very rarely accurate anyhow, as I can rate higher on a test to Seattle or Los Angeles than I do to say Pittsburgh, Toronto or NYC, which are MUCH closer to where I live (by several thousand wire miles). It's more accurate to calculate your average rates by downloading/uploading large files from/to a university/public FTP or something, at least in my experience.
QoS limits (Score:2, Informative)
How about an answer? (Score:4, Informative)
I know that this is slashdot but I'll try to answer some of the OP's question anyway. Of course I won't do any original research myself, but rather rely on information from the previous posters or make things up as I go.
Q1. Is there any valid reason why Comcast would front-load transfers in this way?
Yes. Most requests from browsers are for short files. By upping the speed for short requests, pages will render faster. This is a plus for the user, as he spends less time idling. Long downloads, on the other hand, are expected to take a while to complete; the user expects to be able to walk away from the computer for a while. Thus Comcast can argue that they have greatly enhanced the experience of the web browser by stealing a few cycles from the downloader. I would welcome such a plan as long as the ISO downloading speed is reasonable.
Q2. Is it merely an effort to prevent end-users from being able to assess their bandwidth accurately?
It would have that effect on a poorly designed bandwidth test. Bandwidth testers try to make the download size long enough to counteract TCP connection costs and to average over variations in download speed. Comcast has just given them another variable to take into account. Interestingly, there are some test suites that are designed to detect what Comcast is doing and give them extra credit for it. They bill their tests as real-world throughput tests: they want to indicate what the effective bandwidth is while browsing web pages that reference many images or JavaScript files.
Q3. Is Comcast cheating?
If Comcast is just doing this when accessing known test sites then they are cheating. If this is their policy for all connections then the worst that can be said is that they are optimizing their service to a particular class of users (surfers as opposed to downloaders). If you are in this category, then you should be happy.
Re:This is an advertised feature I believe (Score:5, Informative)
Re:This is an advertised feature I believe (Score:5, Informative)
People are out with pitchforks and torches over the "bad" thing Comcast does, throttling torrent downloads, which works completely differently. To throttle a torrent, they forge an "I'm dead" (TCP RST) packet from the remote host and send it to the customer. This causes the customer's torrent application to shop elsewhere for a feed. The repeated connect, forged disconnect, search, reconnect process slows the overall transfer. This only works because of the multi-peer technology underlying torrents, and wouldn't work with web browsing or FTP*.
-Ellie
* technically it would reduce the bandwidth usage, because it terminates the connection. This would result in broken connections and half-downloaded files. Then the pitchforks would REALLY come out.
See link here (Score:3, Informative)
PowerBoost uses a 30 second average, not filesize (Score:5, Informative)
PowerBoost only accelerates the connection if the average speed you've been getting over the past 30 seconds* is less than the speed you are rated at/paid for. So if you have a 6 Mbps connection, that's 768 KB/s max. PowerBoost will raise that to up to 2 MB/s for a little less than 15 seconds, making your average for the past 30 seconds equal to 768 KB/s. After that, no matter how many new connections you open, your connection stays at 768 KB/s. But if your connection gets interrupted/throttled for a few seconds, you may get another boost after it resumes, until you are back to 768 KB/s 30 second average again.
*it may be slightly more/less than a 30 second average. Boosts seem to last about 10-15 seconds, which would make sense with that number.
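The parent's model can be checked with a little arithmetic. Assuming the boost runs flat out until the sliding-window average catches up to the rated speed (and the link started idle), the boost duration falls right in the observed 10-15 second range:

```python
# If PowerBoost targets the rated speed as an average over a sliding
# window, and the link starts idle, the boost lasts until
# boost_rate * t == rated_rate * window. Numbers from the parent post.
rated = 768     # KB/s rated speed for the "6 Mbps" tier
boost = 2048    # KB/s boosted speed (2 MB/s)
window = 30     # seconds of averaging

t = rated * window / boost   # seconds of boost before the average catches up
print(round(t, 2))           # → 11.25
```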
Iperf (Score:3, Informative)
Re:I wish (Score:3, Informative)
It's true (Score:5, Informative)
It's really bad on uploads -- I just ran a test and got 300 KB/s for the first 5 megs, then it degrades by about 100 KB/s over the next few megs, so that by the time you have uploaded 14 megs you are getting close to 40 KB/s, and the connection is so bad that the shared digital phone line does not have enough bandwidth to carry a phone conversation. Stop the upload and start it up again, and you get 330 KB/s, with the same degradation curve.
For downloads they do the same thing, but not so severely -- I downloaded a 67 meg file and it ran at about 750 KB/s initially, but then dropped to around 350-400 KB/s (according to the FTP app) about halfway through.
So for anyone using the connection for smaller file sizes (like the speed tests), you seem to get "blazing" speeds -- I ran the test at a couple of the internet speed test sites and they both think that I have 12,000-14,000 Kbps download speed and 2,700 Kbps upload speed.
So if I didn't have any other way to measure it, I would think that I was getting way more than I paid for, rather than something that in reality is very pitiful.
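One way to see the degradation curve the poster describes is to time a large transfer chunk by chunk instead of averaging the whole thing. A rough sketch (the URL is a placeholder, not a real test file):

```python
import time
import urllib.request

def speeds_kbs(samples):
    """Per-chunk transfer speeds in KB/s from (bytes, seconds) samples.
    A front-loaded connection shows a fast first chunk, then a drop."""
    return [round(n / t / 1024, 1) for n, t in samples]

def measure(url, chunk=1_000_000, max_chunks=50):
    """Download `url` in fixed-size chunks, timing each chunk
    separately, so a burst-then-throttle pattern stays visible
    instead of being averaged away."""
    samples = []
    with urllib.request.urlopen(url) as resp:
        for _ in range(max_chunks):
            start = time.monotonic()
            data = resp.read(chunk)
            if not data:
                break
            samples.append((len(data), time.monotonic() - start))
    return speeds_kbs(samples)

# measure("http://example.com/100MB.bin")  # hypothetical large test file
```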
Re:Powerboost (Score:3, Informative)
"The Comcast network is really content-agnostic," said company spokeswoman Jeanne Russo. [/quote]
Network Admin? Baseline the network utilization! (Score:2, Informative)
Unless there is a problem with the link that can be immediately identified at the time you tested, like a physical problem, you should develop a baseline of the customer's network utilization. Generally, a single download provides insufficient information to give the employer or customer a recommendation about their link utilization. This is especially important when the upgrade costs money.
Trending the link utilization is easy to do with free open source tools that run on Linux, Windows, or a Mac, or you can pay some $$ and buy software that performs network utilization trending. Many protocol analyzers have this feature too. As a network administrator/engineer, I expect you can figure out how to tap the link or access the link device's network interface utilization via SNMP, RMON, or even NetFlow/sFlow.
This is easy to do: just set up an extra PC or laptop on the customer's premises for a week, and lock it down logically and physically. Free tools that I regularly use are Ntop (http://www.ntop.org/news.html) and Cacti (http://www.cacti.net). I'm sure that someone on this list can recommend a dozen other solutions.
These tools provide a graphical means to display the link utilization over time, providing far more information than a single download test and thus allowing you to make a more informed recommendation. And the graphics make nice additions to your reports! One scenario: the customer is seeing a slowdown of their internet connectivity after lunch or in the late afternoon. Trending might reveal that network utilization indeed peaks at those times, when workers get back from lunch and just before they go home. And maybe it's only a few people hogging the bandwidth. On customer networks I've discovered P2P file sharing, large file downloads (movies and ISOs), and even infected computers used as repositories. The customer would have plenty of bandwidth if they just cleaned up that mess or better managed the limited resource with both technical and administrative (policy-based) controls.
And if you have more time, then check out the list of even more network management tools at http://www.slac.stanford.edu/xorg/nmtf/nmtf-tools.html [stanford.edu] or http://www.ubuntugeek.com/bandwidth-monitoring-tools-for-linux.html [ubuntugeek.com].
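If you don't want to stand up Ntop or Cacti for a quick look, a crude utilization sample can be taken on a Linux box by reading the kernel's byte counters directly; these are the same counters the graphing tools poll via SNMP. A sketch (interface name `eth0` is an assumption; `/proc/net/dev` is Linux-only):

```python
import time

def read_bytes(dev, text):
    """Parse /proc/net/dev content and return (rx_bytes, tx_bytes)
    for the named interface."""
    for line in text.splitlines():
        if line.strip().startswith(dev + ":"):
            fields = line.split(":", 1)[1].split()
            return int(fields[0]), int(fields[8])  # rx bytes, tx bytes
    raise ValueError(f"interface {dev!r} not found")

def sample_utilization(dev="eth0", interval=60):
    """Poll the counters twice and return average rx/tx bytes per
    second over the interval -- a one-shot baseline, no SNMP needed."""
    with open("/proc/net/dev") as f:
        rx1, tx1 = read_bytes(dev, f.read())
    time.sleep(interval)
    with open("/proc/net/dev") as f:
        rx2, tx2 = read_bytes(dev, f.read())
    return (rx2 - rx1) / interval, (tx2 - tx1) / interval
```

Run `sample_utilization` from cron and log the results, and you have a poor man's trending graph after a week.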
HTH someone.
Re:This is an advertised feature I believe (Score:2, Informative)
OVH [ovh.net] offers dedicated servers with a dedicated 5 Mbps line, but you can upload some MB (I can't really remember if it was 5 or 10 MB) using the full 100 Mbps capacity. After that, you let your upload privilege "refill" (i.e., using less than 5 Mbps "recharges" the line), so you can get another burst.
Nothing new under the sun. If anything, it's kind of a cool feature. If you need to measure real bandwidth, bursty downloads won't do.
Consumer-grade Shared bandwidth (Score:5, Informative)
If you want a commercial-grade link you expect to saturate, pay for it! Otherwise, you are stealing from other users and the ISP should throttle you to be fair to them.
Re:Consumer-grade Shared bandwidth (Score:3, Informative)
Re:I do remote support, & COMCAST = F A S T!!! (Score:2, Informative)
Re:This is an advertised feature I believe (Score:2, Informative)
Re:This is an advertised feature I believe (Score:5, Informative)
Actually, that is not entirely correct. If they were simply forging the RST packet and sending it only to their customer, it would be a simple matter of having the customer's firewall filter out all RST packets on the specified ports used for torrent downloads/uploads. I in fact have such a filter rule in place. However, detailed testing has shown that Comcast is sending the RST packet to BOTH their customer AND the outside connection, not just their own customers. Unless both sides have the RST filter in place on their firewalls, the connections are still dropped and throttled. This is what is going to get them into trouble: they are not just sending forged packets to their customers, who have agreed somewhere in their service agreements that this can be done to them; they are also forging YOUR identity and sending those packets to outside entities to affect their service as well, something those people have NOT agreed to.
Shortest Job First (Score:5, Informative)
In operating system theory, it is well known that a scheduling algorithm called "Shortest Job First" yields the least total waiting time. The SJF algorithm is usually implemented by giving a "new" job high priority, and then reducing the priority gradually as the job accumulates resource usage. The algorithm was developed in the 1960s to allow time-sharing operating systems to provide rapid keystroke response while continuing to process large batch jobs in the background.
For communication systems, the same principle applies. The only difference is that the network is sharing a different resource (circuit bandwidth), instead of cpu time. The "new" connection gets high priority, and then that priority is reduced as the number of bytes/packets transferred over that connection increases. This allows rapid response for interactive applications, like browsing or editing, while also allowing the network to process large data transfers in the background. To apply it to datagram traffic, the switch just keeps a priority for each source/destination address-pair in cache, and any pair that is not in the cache is regarded as "new".
This has been pretty much standard practice in packet communication switching for a very long time. There is no surprise here, at least not to those of us who have been doing communications network programming for a few decades.
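The "least total waiting time" claim is easy to verify with a toy example. With one long transfer and two short ones, running the shortest jobs first cuts the average wait dramatically:

```python
def avg_wait(jobs):
    """Average waiting time when jobs (durations) run in the given order."""
    wait, elapsed = 0, 0
    for d in jobs:
        wait += elapsed   # this job waited for everything before it
        elapsed += d
    return wait / len(jobs)

jobs = [24, 3, 3]              # one long transfer, two short ones
fifo = avg_wait(jobs)          # long job first: the short ones stall
sjf = avg_wait(sorted(jobs))   # shortest first: minimal average wait
print(fifo, sjf)               # → 17.0 3.0
```

This is the same arithmetic that justifies front-loading short web requests at the expense of long downloads.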
sheesh... (Score:5, Informative)
Re:This is an advertised feature I believe (Score:2, Informative)
I imagine this is what Comcast is doing, and it's a very acceptable practice, as long as it's advertised properly.
It works like this: your sustained bandwidth feeds a "bucket" (measured in megabytes or megabits, not in speed/time terms), which is your burst capacity. When you start a download, you pull from this bucket and run at the maximum speed possible for your connection. When the bucket is empty, you continue at your sustained speed. When you go idle (your download finishes), the bucket refills at the sustained rate.
So, if you have a 50MB bucket size, and you can burst to 10Mbit/s, you will run at 10Mbit/s until you've downloaded 50MB, and then you'd drop to your sustained rate (say, 384Kbit/s). When you finish downloading, you will refill your bucket at 384Kbit/s until it either tops off or you start using your connection again.
In my experience, with the equipment I've been using working at a WISP, this bucket applies to all connections, regardless of protocol or download size. YMMV, of course, depending on the hardware/software you are working with.
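Using the parent's example numbers (50 MB bucket, 10 Mbit/s burst, 384 Kbit/s sustained), the total download time splits into a burst phase until the bucket empties plus the remainder at the sustained rate, which shows how dramatically a short test overestimates:

```python
def download_time(size_mb, bucket_mb=50, burst_mbps=10, sustained_mbps=0.384):
    """Seconds to download size_mb megabytes with a burst bucket:
    the first bucket_mb go at burst speed, the rest at sustained speed."""
    burst_part = min(size_mb, bucket_mb)
    t = burst_part * 8 / burst_mbps                    # burst phase
    t += (size_mb - burst_part) * 8 / sustained_mbps   # sustained phase
    return t

print(round(download_time(50), 1))   # 50 MB fits in the bucket: 40.0 s
print(round(download_time(100), 1))  # 100 MB: 40 s burst + ~17 min sustained
```

A 50 MB test file finishes entirely at burst speed and looks like a 10 Mbit/s line; double the file size and the measured average collapses toward the sustained rate.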
Re:This is an advertised feature I believe (Score:3, Informative)
This is not a new thing (throughput vs. bandwidth) (Score:4, Informative)
Think of the connection as a large pipe (your cable connection) with a small outflow valve (your modem), connected to a larger, higher-pressure pipe (your ISP). Until your local pipe is full, you can put water into it as fast as you desire. But once it is full, the volume slows down, because you can only put in as much as you are taking out through the outflow valve. So what Speakeasy and various other speed-testing sites see is the effect of filling up your local pipe (your connection to your ISP).
What a large file download shows you is the actual throughput.
BTW, this is also a quick, very simplified explanation of bandwidth (how much data you can pack into the pipe) vs. throughput (how fast you can actually pull data through the pipe).
Re:Earthlink Cheats with Latency too (Score:3, Informative)
208.67.222.222
208.67.220.220
Re:That is how cable works. (Score:3, Informative)
you have the right idea, though. everyone (on your node) shares that frequency. each cable modem has a bandwidth limit. (it stores this limit in a file it downloads from the cable company. people change this file to get around the limit; if you've ever heard of "unlocking your modem", that's why it's possible.) they carve out their little slice of bandwidth when they need to use it, similar to what clients used to do in the old bus network setup. ( http://en.wikipedia.org/wiki/Bus_network [wikipedia.org] ) this is also why it is technically possible to see all your neighbors' traffic on a cable modem, though the traffic should be encrypted.
cable providers assume not everyone will use their bandwidth at once, which is why they can oversell the available bandwidth. some cable providers (coughcomcastcough) do so to too high a degree, and you get slowdowns during peak times.
anyways, there is very little latency in getting your "slice of the frequency pie", so to speak. more likely, the GP's DNS servers suck. most ISPs would throttle connections in the exact opposite manner, trying to make normal web browsing seem lightning quick while giving huge downloads less priority.
by the way, i work for a fairly decent ISP, and have a pretty good understanding of our cable setup.