Ask Slashdot: Past and Present Bandwidth Comparisons? 88
Jonathan Locke asks:
"this
"Ask Slashdot" question about comparative CPU power
led me to wonder if there isn't some attempt somewhere to
do the same thing for connectivity/communications.
Obviously, we've come a long, long way since the days of
semaphores, smoke signals and telegraphs. But how far
have we come? What does the bandwidth curve look like?
(I assume the Internet would represent a seriously
non-linear change, but maybe not). What are the
theoretical limits on communications (# of nodes and speed
of access)? Does it even make sense to compare such
qualitatively different technologies as semaphores and
OC48 lines?" Good question! I figure average bandwidth
curve over the last 10 years would be an interesting thing
to see.
Well, you are right, and wrong (Score:1)
Well, yeah... (Score:1)
Probably not that much change since radio. (Score:2)
I think the only thing to change in the past few years is the way we communicate. Not the amount. I do agree that the amount of communication has risen over time (over thousands of years), but only because there are more people on the planet.
Station Wagon of Mag Tape (First!) (Score:1)
is a valid measure. After all, the problem is in comparing old technology to today's. A valid comparison might be using a box of 1/2-inch reels, or TK50s, to copy a dataset from a VAX in NY to another in CA. Anyone remember how much you could put on a TK50? I don't think it was much.
Probably not that much change since radio. (Score:1)
I do so because with a modem, bandwidth is fairly small, so I download pages while I'm reading others.
Truckload of Mag Tape (First!) (Score:2)
Can it be... I am actually first post. (or maybe I just don't understand the new filtering)
Well, I may not have a *great* answer. But I was reminded of the old saying about the bandwidth of a truckload of magnetic tape. I.e. don't underestimate it.
I understand that X years ago, it really did turn out that this would be the fastest available way of transporting some large number of bytes from, say, NY to LA. Tape has gotten bigger in capacity and more compact along the way, just as bandwidth has gotten faster. So do they balance, and does the comparison still hold?
Yours, Lulu...
Wrong, ... there's the CABLE MODEM!@ =] (Score:1)
Station Wagon of Mag Tape (First!) (Score:1)
I've seen a couple of these over the years (I personally prefer a Hercules filled with CD-ROMs (nice and flat thingies)). What you have to remember is the time to actually feed the data onto whatever storage you use. You'd have to have some kind of massive writing equipment ready, which in itself should have a pretty high bandwidth. And -- what exactly is it that you want to send that takes up 150 TB?
-Lars
No improvement at all (Score:1)
Roughly 0.
In 1995 a download from the USA during business hours came at about 3k/sec. Today the same distance now comes in at about 300 bytes/sec during business hours. The commercial internet is like a bottomless money pit. New backbones that take years to plan, approve, and bury are instantly saturated by the time they go online and there's no solution to the infinite recursion in sight.
The other side: latency (Score:1)
This illustrates the latency/bandwidth issue perfectly! While folks could send out more and more ponies to increase bandwidth, the latency was still there--there was still a lower bound on the time it took to send the minimal amount of information.
The telegraph was, in comparison, hideously low-bandwidth. Most folks only got to send short messages, such as:
(This is an excerpt from the CrackMonkey FAQ [crackmonkey.org]) While this is useful for conveying information in a timely manner, it doesn't lend itself to the sort of epistolary communication that makes it into the anthologies.
Thus, something like a networked game requires short, low-latency packets of information to update player status and object action. Email, on the other hand, puts a certain delay on the transmission of a message, though it is equally swift with a 12-page rant as it is with a short one-liner.
One could also look at TCP and UDP as putting an emphasis on bandwidth or latency, respectively. TCP is used for sending long streams of data with error-correction and all sorts of goodies, while UDP is used for blindly tossing out quick datagrams in the hope that the lower overhead will be useful.
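The TCP/UDP split described above can be sketched with Python's standard socket API. This is only an illustration of the two socket types; the host and port are placeholders, not anything from the thread:

```python
import socket

# TCP (SOCK_STREAM): connection-oriented byte stream with retransmission
# and ordering -- suited to bulk transfers, where throughput matters.
# UDP (SOCK_DGRAM): connectionless datagrams, no delivery guarantee --
# suited to small frequent updates (e.g. game state), where latency matters.

HOST, PORT = "127.0.0.1", 47123  # hypothetical endpoint

def send_tcp(data: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(data)           # kernel handles ordering and retries

def send_udp(data: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(data, (HOST, PORT))  # fire and forget, minimal overhead
```

Note that UDP's "blind toss" means a datagram can simply vanish; a game protocol built on it has to tolerate loss, which is exactly the bandwidth-over-reliability trade the post describes.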
--
Truckload of Mag Tape (Score:1)
That is a different bandwidth problem. You could also argue that if you are using fiber optics, that you have to include the amount of time it takes to get the data onto and off of the tape. After all, the "real" problem you are trying to solve may well be moving a large archive from one city to another.
Truckload of Mag Tape (Score:1)
How about cost per MB/sec? I think a bunch of CD-R on a spindle might be the best bet there.
Bandwidth of a scroll, blazon, and other sundries (Score:1)
The non-availability of good lenses made reading a problem as well.
We shouldn't be talking here of bps, but rather of baud. (I know, usually baud is inappropriate.) Baud refers to symbols per second.
In terms of art, this usually means a great deal. For instance, the depictions of saints were usually standardized, so a medieval illustrator probably could not stray very far from the norm. This further limits the amount of bandwidth in a tome.
Besides religious material, the other major illustrated works included armorial rolls. At perhaps 16-20 arms per page, it would seem that the depicted arms represented an enormous bandwidth; yet armorial bearings can be fully described using blazon in no more than 15-20 words, and often fewer. Since blazon is a precise, formal specification of a coat of arms, it's (IMHO) probably the best subject for studies of medieval bandwidth.
Truckload of Mag Tape - not really (Score:1)
Interestingly, the time it takes to load/unload the truck would have an impact on both bandwidth and latency, which is a different case than with computers - it takes hardly any time for the kernel to ready a packet, so it can send a practically unlimited number of them (enough to saturate the network), and this time has no significant impact on latency. In this case, if it takes an hour to load/unload 50,000 tapes, the delay will add 2 hours of latency (constituting 10% of the total latency). The bandwidth will be limited because you will only be able to send a maximum of 1 truck per hour - making the peak bandwidth 8x what you predicted. Or, if it took no time to load/unload the tapes, the bandwidth would instead be limited by the number of trucks you could cram on the road (A LOT), kind of like packets on a slow modem.
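A rough sketch of the arithmetic above. The tape capacity, trip time, and load time are all assumptions for illustration (a TK50 held roughly 94 MB; the 50,000 tapes and one-hour load/unload come from the post):

```python
# Back-of-the-envelope numbers for the tape-truck "link".
TAPE_MB      = 94        # assumed TK50 capacity
TAPES        = 50_000    # tapes per truck (from the post)
LOAD_HOURS   = 1
UNLOAD_HOURS = 1
DRIVE_HOURS  = 18        # assumed NY -> CA driving time

payload_mb = TAPE_MB * TAPES                       # total cargo
latency_h  = LOAD_HOURS + DRIVE_HOURS + UNLOAD_HOURS

# Sustained bandwidth if a freshly loaded truck departs every hour:
mb_per_sec = payload_mb / 3600

print(f"payload:   {payload_mb / 1e6:.1f} TB")
print(f"latency:   {latency_h} hours")
print(f"sustained: {mb_per_sec:.0f} MB/s")
```

With these assumed figures, a single departure per hour sustains on the order of 1.3 GB/s, with a 20-hour latency: enormous bandwidth, terrible ping.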
Well, I thought that was interesting, at least. =)
Station Wagon of Mag Tape (First!) (Score:2)
Quake via station wagon (Score:1)
CPS (Score:1)
Truckload of Mag Tape (Score:1)
not so much has changed... (Score:1)
But, for the average user, not much has changed. When you consider dialup connections, the speed goes up, but the size of the stuff we try to push down the line gets bigger.
As far as I can tell, there's no real difference in the speed of the user experience between Linux, Netscape 4 and a 56k connection today; a 2400 baud modem used to connect to classic Prodigy on an IBM PS/1 six or seven years ago; or the 14.4 on the 486 with Win3.1 and IE2.
Telecom has a long way to go, not so much in the high-end connections on the backbone, but in proliferating technology to the low end, the sub-$1000-PC kind of folks.
Andrew Gardner
Reminds me of ...lr (Score:1)
"Take only three data points, as somewhere in the world there is a sheet of graph paper that will make them a straight line."
Cable modem? Bah - when the local cable company gets those, everybody will be born with built-in T1s
(Let's see if my Sparc IPX can manage to post this without mangling the subject line)
Here's one bandwidth curve (Score:2)
This [isnet.is] shows Iceland's connectivity since 1993. Note that this shows traffic in and out of the country, and not traffic within the country. I don't know if any conclusions can be drawn from this.
Hmm. I wonder if Iceland will be slashdotted :->
--
Truckload of Mag Tape (First!) (Score:1)
Mars (Score:1)
limitations (Score:1)
The other side: latency (and warfare) (Score:1)
Just think what those pilots of the "Predator" spy ROV must experience when they are flying over Kosovo watching the Serbs blow up another town and knowing there is not a damn thing they can do directly.
I think the human consequences of this low latency are going to be stunning (both in positive and negative senses).
Station Wagon of Mag Tape (First!) (Score:1)
bandwidth can transform society, too (Score:1)
But what if we had enough bandwidth to immerse ourselves into the current environment of, say, Kosovo? Being able to virtually "be" at places while important events are happening would definitely change the way society experiences news and maybe even change our perception of current events.
Truckload of Mag Tape (First!) (Score:1)
What is the average bandwidth of each node in the net?
What is the number of nodes in the net?
What are the measures of the interconnectivity of the net?
Average packet size and latency? (the truck has huge size and latency).
I heard somewhere that the planetary bandwidth was tripling every 12 months. This was attributed to the linear growth in the mileage (meterage) of cabling, and Moore's law impacting the performance of the nodes at each end of every cable/fiber.
I wonder, as the internet (TCP/IP protocol networking) becomes universal, will the 16-hop time-to-live be a problem? Does IPv6 address this, along with the less-than-one-IP-per-human-on-this-planet issue? (Yes, I DO think everyone should have an IP!, preferably stamped on the forehead at birth.)
bits per second vs. seconds per bit. (Score:1)
How to measure bandwidth (Score:2)
Probably a better measure would be a 'typical' message carried by a courier on horse and ship. Let's assume a typical royal decree to be 40 characters per line, 30 lines per page and 5 pages. This would give a message length of 6000 bytes. A messenger would rarely carry only one message, however - let's say our intrepid traveller carries an equivalent of forty decrees (he'd bring along letters, maybe a codex for a monastery and such as well). That gives a 'packet size' of around 250k.
Next, the travelling speed: If my memory serves, it would take three weeks between Copenhagen and Stockholm (a distance of 630km). This would give a bandwidth of 0.13 bytes/sec.
Of course, many manuscripts were illustrated, so in practice the bandwidth would be somewhat higher. Nevertheless, I would hate surfing the web in the fifteenth century.
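The courier arithmetic works out as follows, using the post's own figures (40 characters/line, 30 lines/page, 5 pages, 40 decrees, three weeks Copenhagen to Stockholm):

```python
# Bandwidth of a fifteenth-century royal courier (figures from the post).
CHARS_PER_LINE = 40
LINES_PER_PAGE = 30
PAGES          = 5
DECREES        = 40

decree_bytes = CHARS_PER_LINE * LINES_PER_PAGE * PAGES   # one decree
payload      = decree_bytes * DECREES                    # the whole satchel
trip_seconds = 3 * 7 * 24 * 3600                         # three weeks

print(f"payload:   {payload} bytes")
print(f"bandwidth: {payload / trip_seconds:.2f} bytes/sec")
```

Strictly, 40 decrees of 6000 bytes is 240 kB rather than 250 kB, but either way the sustained rate lands at about 0.13 bytes/sec, as the post says.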
Fiber optic cost, road cost (Score:1)
Make the assumptions equal unless you want to compare apples and oranges.
--
Truckload of Mag Tape (Score:1)
/P.
bits per second vs. seconds per bit. (Score:1)
Think about playing quake with someone on mars. It doesn't matter what type of OC line is between you, it still takes light about 45 minutes (I think) to get from there to here.
Paralleling data communications is only beneficial when the distance between the two communicating points is short enough to make the speed of light appear instantaneous.
--
correct time between earth and mars. (Score:1)
from: http://nssdc.gsfc.nasa.gov/planetary/factsheet/ma
Minimum (10^6 km) 54.5
Maximum (10^6 km) 401.3
So if we take the speed of light to be
3.0 x 10^8 m/s
It takes at least 3 minutes and at most 22.3 minutes, depending on the time of the year.
It's not 45 minutes, but the latency still sucks.
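Plugging the quoted fact-sheet distances into a quick calculation confirms those one-way figures:

```python
# One-way light-travel time to Mars, from the NASA fact-sheet
# distances quoted above.
C = 3.0e8                 # speed of light, m/s
MIN_KM = 54.5e6           # closest approach, 10^6 km scale
MAX_KM = 401.3e6          # opposite sides of the Sun

min_minutes = MIN_KM * 1e3 / C / 60
max_minutes = MAX_KM * 1e3 / C / 60
print(f"one-way delay: {min_minutes:.1f} to {max_minutes:.1f} minutes")
```

So even at closest approach, a Quake ping to Mars is about six minutes round trip.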
--
T1's in the disco era? (Score:1)
Some friends of mine and I found an old computer magazine from the '70s or '80s, offering 'high speed' connections to some network or other (it was last year, so sue me!). We looked at the price, did a few calculations, decided how fast our school's T1 was (burst up to 127 kilobytes), and came up with an answer.
In the late '70s and early '80s, our school's internet connection would have cost around $1.7 billion* per month. Nowadays, you can get a faster connection (cable's faster, right? thought so) for about $40/month**.
I think it's changed a little, yeah.
~Sentry21~
_______________________________
* - Canadian dollars at the time
** - Canadian dollars today
Quick Question (Score:1)
Seriously, there would be more than one section of bandwidth scales: one for consumers, and one for companies. Joe Netsurfer isn't gonna have a T3 hookup in his house (though there ARE exceptions), and I certainly don't know of any personal user that has one of Lucent's trans-oceanic 10GB/sec lines piped into his home LAN. Damn, that's not a bad idea.
-- Give him Head? Be a Beacon?
Probably not that much change since radio. (Score:1)
Well, don't forget I can have cable bring TV data into my house, and have 3 different people watching TV on 3 different sets in 3 different rooms, which isn't THAT uncommon, especially in Suburbia.
About cable TV, though, I was wondering exactly how much bandwidth my 70 or so channels are using in comparison to the total theoretical bandwidth of coaxial cable. I don't think much, considering satellite services offer more channels, but I am fairly certain a satellite only has ~1/10 the bandwidth... correct me if I am horribly wrong :).
How to measure bandwidth (Score:1)
About a bit a second, then, huh? Scary - that is probably about as fast as we humans could distinctly and clearly perceive each bit. So this would be about the practical maximum if we were reading binary. Thank goodness for high-level encoding.
The other side: latency (Score:1)
The special property that makes the Internet revolutionary can be summed up in one word: routing.
The Mathematical Theory of Communication (Score:2)
Bandwidth, latency & theoretical limits (Score:1)
You first have to define what you mean by bandwidth. Clearly it's the capacity of some communications link in terms of symbols per unit time. This is a function of the physical properties of the materials with which this link is built (which is also called bandwidth; confusing, isn't it?).
To throw a bunch of tapes on a truck and call it a high-bandwidth link is really misleading. That's a short burst of data; it's unsustainable. In comparing communications throughout history, what you really want to compare is the theoretical maximum sustained rate of transfer of individual symbols (bits).
e.g. give a bit (0 or 1) to the pony expressman and as soon as he leaves, give another expressman the next bit, and so on and so on (assuming an infinite supply of horses!). After a sufficient time such that latency becomes negligible (say, a year), add up the bits received and divide by the total time. So if a horse leaves every 5 seconds, your bandwidth is 1/5=0.2 bits per second.
Note that latency becomes negligible for sustained transfers of data. You don't care how many hops your ftp of the 2.2 kernel takes; you care how long it takes for you to complete the transfer.
Here [ucl.ac.uk] you'll find an explanation of Shannon's theoretical limit on the bandwidth of a channel: "There is a theoretical maximum to the rate at which information passes error free over the channel. This maximum is called the channel capacity C. The famous Hartley-Shannon Law states that the channel capacity C is given by: C=B*log2(1+S/N) bits/second. Note that S/N is linear in this expression. For example, a 10KHz channel operating in a SNR of 15dB has a theoretical maximum information rate of 10000log2(31.623) = 49828 b/s. "
So, brothers and sisters, the growth of bandwidth is a function of the growth in bandwidth of the materials making up our communications links, and the SNR of these links. I expect the graph would be a nice exponential. Good night.zzzzzzzzzzzzzz
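For anyone who wants to play with the Hartley-Shannon formula, here is a minimal version. One caveat: the quoted example appears to compute log2(S/N) rather than log2(1 + S/N), which is why it gets 49828 b/s where the formula as stated gives a few hundred b/s more (the difference is small at high SNR):

```python
from math import log2

def channel_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Hartley-Shannon limit: C = B * log2(1 + S/N), with S/N linear."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear ratio
    return bandwidth_hz * log2(1 + snr_linear)

# The 10 kHz / 15 dB example from the quote:
c = channel_capacity(10_000, 15)
print(f"{c:.0f} b/s")   # roughly 50 kb/s
```

Either way, the point stands: capacity is set by the channel's bandwidth and its signal-to-noise ratio, nothing else.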
Probably not that much change since radio. (Score:1)
An old data point... (Score:1)
New cheap tape standards? (Score:1)
Probably not that much change since radio. (Score:1)
Just one? (Score:1)
My cable co. has probably 10 or 15 unused channels on its lineup, so they could probably crank out way more than a single OC-3, if need be. On the other hand, they don't have a snowball's chance in hell of being able to have a backbone connection big enough to support that kind of bandwidth.
Heh... we've moved backwards! (Score:1)
Quick Question (Score:1)
cable (Score:1)
Then their backbone provider isn't that hot, either - I live in St. Louis, and tracerouting to a computer a few blocks away shows that stuff gets routed out to Kansas City, Chicago, and then back into St. Louis again, just to travel the distance equivalent of a 10-minute walk.
Mars (Score:1)
45 minutes may be too long, but it seems to me that when Mars and Earth are on opposite sides of the sun, it could be at least 16 minutes.
Probably not that much change since radio. -not (Score:1)
But the thing about radio and TV is that they are broadcast - they don't actually provide any bandwidth. I see the same old C-SPAN on my TV that you are seeing on yours. They only have ONE channel. Whereas on the internet, you could conceivably have millions of simultaneous video broadcasts (not all to my single PC, but all travelling over the central pipes). You could boil down the bandwidth of your whole cable company to the equivalent of a single OC3.
Ditto goes for satellite broadcast. Sure, each channel is fat, but the overall bandwidth is pretty limited.
-=Julian=-
Station Wagon of Mag Tape (First!) (Score:1)
Bandwidth is location dependent (Score:1)
This is one thing that will prove very interesting as the rate of innovation increases. It is a general trend that those who live closest to the big cities get the innovations first, but I wonder how the rate of increase is outside that area. If an innovation requires an upgrade to existing infrastructure, then the likelihood of it getting out to the most remote areas is pretty small. If it is something that can work on existing infrastructure (POTS, or wireless), then it can be expanded very easily to those outlying areas.
I wonder how long it is before a net provider realizes they can be the Wal-Mart of ISPs. Rather than developing technologies that are only useful in the highly competitive metropolitan areas, somebody should work to bypass geographic boundaries and make money off the relative monopoly they would possess in the more remote locations.
---
No improvement at all (Score:1)
max a bit off (Score:1)
Jakob Nielsen's analysis is off (Score:1)
Thus, if you only look at data after deregulation came into effect, the growth of bandwidth will be much more dramatic than Moore's law.
cheers amigos
current capacity... (Score:2)
They had some neat ads stating that it would take 17 or so seconds to transmit the whole Library of Congress, including pictures, across the US; I should know the details better, I work at Nortel; however...
By the way, some trivia: did you know that more than 75% of Internet traffic is carried over Nortel equipment?
hasta luego, compadre.
The other side: latency (Score:4)
When electronic communications became commonplace, these distances were greatly reduced. Messages get blasted across the country at around the speed of light. Newspapers could report current information. Important events of the day were on the evening news. Now our wars are fought halfway around the world on live TV. There is no more delay in getting a message across. Bandwidth will increase, allowing larger messages to get sent faster. Yet bandwidth only changes the content, and will not contribute in a measurable way to the revolution that has already taken place.
log-log graph paper (Score:1)
I know I've gone from modem to dual-channel ISDN in 3 years, then in a year and a half went to cable.
My favorite part is that the bandwidth increases at the same ratio it's decreasing in price!!! =)
Forget cross country, think local (Score:1)
Bandwidth (Score:1)
I expect that the bandwidth curve will follow that same trend. A letter on a ship taking 4 months across the Atlantic was very fast compared to the speed/distance of a thousand years before.
I can't put numbers on it tonight, and one of the serious disadvantages of Slashdot is the obsolescence rate. It would take me a few days of part-time work to write an article on the subject. I will have to find numbers to back up what I can see in my head. Are enough of you guys interested to make it worthwhile for me to put in the effort?
I expect the internet is right on the curve, I will be surprised if a knee shows up because of it.
Jim Hurlburt
jlh@ewa.net; jhurlburt@cwcmh.org
Throughput/Latency (Score:1)
If you didn't care how long a certain piece of data took to get across the atlantic, then a bunch of cargo jets and DVD burners would work quite well. You would indeed achieve record bandwidth (with really, REALLY high latency for writing/reading and transport time)
That is the key difference. Frankly, it's a related reason that causes big CD-ROM games to be shipped instead of downloaded. In theory, Cyan could distribute 5 CDs of Riven over the internet, once someone buys it with their credit card, right? The problem is that your average home user doesn't have the bandwidth to download it. Latency is fine...they just have to wait a few days before they get to play. (there are of course, other reasons, disk space probably being up there in the list, but it's a decent viewpoint for the topic at hand)
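To put numbers on the cargo-jet idea, here is a sketch with entirely hypothetical figures (a million single-layer 4.7 GB DVDs and an 8-hour Atlantic crossing, ignoring the burn/read time the post rightly flags):

```python
# Hypothetical "cargo jet of DVDs" link across the Atlantic.
DVD_GB       = 4.7         # single-layer DVD capacity
DVDS         = 1_000_000   # assumed cargo
FLIGHT_HOURS = 8           # assumed crossing time

payload_gb = DVD_GB * DVDS
gbits_per_sec = payload_gb * 8 / (FLIGHT_HOURS * 3600)
print(f"~{gbits_per_sec:,.0f} Gbit/s sustained for one flight")
```

That's terabit-class throughput with an eight-hour (plus burning and reading) latency: exactly the bandwidth-without-latency trade being described.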
bandwidth and latency (Score:1)
bandwidth and latency (Score:1)
Probably not that much change since radio. (Score:1)
That really depends on which type of satellite you're talking about. If you're talking about C-Band (the big dish, generally 7' or 10' center-focused parabolic dishes) then you don't have much bandwidth at all. You have to move the dish from one satellite to another, and each satellite has 12 transponders * 2 polarities for a total of 24 uncompressed TV channels per satellite. The satellite itself usually has less than 50 watts of power. Using compression such as MPEG-2, this number goes up by a factor of 10-12 (I'm a little fuzzy on this number).
Whereas if you're talking about DBS (the little dish, generally 18" off-center parabolic dishes) then you get a little more raw bandwidth. The satellite itself broadcasts at >= 250 watts of power, with 16 transponders * 2 polarities. The way they get some 200 channels on these beasts is by compressing everything. DirecTV/USSB uses MPEG-1 while Dish Network uses MPEG-2.
Getting back to the original point, I'm not sure how much b/w standard RG6 or RG59 coax has, but the DBS satellites themselves have roughly 32 Mb/s of bandwidth, making a single satellite useful only to mid-sized ISPs and those for whom running cable would be more expensive than using satellite.
Incidentally, the way most cable companies do digital cable is by compressing several cable channels onto one channel, so you don't get the full picture quality you otherwise would have if you were watching the same channel on a c-band system.
-skullY
No improvement at all (Score:1)
Funny, in 1995, a download for me (from, for example, walnut creek cdrom) came at about 2.2k/sec.
This morning, I moved a big chunk of data from the same point into my house at about 148k/sec. And I have to tell you, I'm happy about that.
DSL has killed the "last mile" problem for anyone with a telco wise enough to offer it. Backbones aren't the problem, nor is the last mile (within 18 months, you'll have DSL or cable; if you have neither after that point, you probably never will). The problem is that peering points are saturated. The MAEs and PBNAP are the major bottlenecks, not counting the existence of alter.net, which is just plain poor.
But all of this, of course, is the raving of a fool.
The other side: latency (Score:1)
No improvement at all (Score:1)
Bandwidth (Score:2)
And remember that the transportation of a human who bears information is equivalent to a videoconference with perfect audio and video fidelity.
Finally, if you just want to know about Internet bandwidth, Jakob Nielsen's got a nice chart [useit.com].
bandwidth price evolution. (Score:1)
- nr
not so much has changed...and it shouldn't (Score:1)
My mom wants to buy one of the new neato mini laptops, for around two grand. I really think she'd prefer fifty green-screens, myself...
An ISP's Perspective: 1994 (Score:1)
We had a dedicated 56k pipe to the net, and that seemed good enough for the time. And with 30 simultaneous users logged on, there was still bandwidth to spare! (After all, people were either using Telnet, FTP, or Gopher. Anyone remember gopher?!??! That's a dead protocol. So much for my skill at creating gopher pages!)
Of course, the web changed all of that, and bandwidth requirements have gone through the roof. Running an ISP on a 56k link? HA! That's how much a single user can suck down the pipe with a web browser.
not so much has changed... (Score:1)
Truckload of Mag Tape (First!) (Score:2)
Can you imagine playing Quake through a station wagon?
Great post (Score:1)
I am programmed to receive... (Score:2)
Well, it depends on what you mean -- data transmitted, or data received?
In the first case, a previous poster cited the old saw "Never underestimate the bandwidth of a station wagon full of tapes." The transmission rate of a Country Squire is stupendous, but how fast can it be received? Tape drives manage from under 500KB/sec on up...
Now consider the case of printed books. I'd hazard a guess that the average Tom Clancy potboiler contains something like a megabyte of information. Again, the transmission rate is very high (someone hands you the book), but the reception rate is very low (you have to read it) -- in my case, I manage about 350 bits/second (550 wpm, 5.5 letters/word, 7-bit ASCII).
Now think about television. Uncompressed NTSC video has a transmission rate of around 25MB/sec. This works out to about 45GB for an episode of I Love Lucy, including commercials. Cynics will argue that the actual useful data rate is an inverse square of the amount watched.
I guess it all depends on what you're transmitting, and to what or whom. In most cases, I'd say that above a certain transmission rate it doesn't matter -- the process is CPU-bound anyway (whether by grey matter or otherwise).
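The two reception-rate estimates in this post roughly check out; here they are spelled out, using the post's own figures:

```python
# Reading rate: 550 words/min, 5.5 letters/word, 7-bit ASCII.
read_bits_per_sec = 550 * 5.5 * 7 / 60
print(f"reading: ~{read_bits_per_sec:.0f} bits/sec")

# Uncompressed NTSC at ~25 MB/s, for a 30-minute episode:
episode_gb = 25e6 * 30 * 60 / 1e9
print(f"episode: ~{episode_gb:.0f} GB")
```

Which is the whole point: a human receiver tops out around 350 bits/sec, so past some rate the bottleneck is always the grey matter, not the wire.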