Ask Slashdot: Past and Present Bandwidth Comparisons?

Jonathan Locke asks: "This "Ask Slashdot" question about comparative CPU power led me to wonder if there isn't some attempt somewhere to do the same thing for connectivity/communications. Obviously, we've come a long, long way since the days of semaphores, smoke signals and telegraphs. But how far have we come? What does the bandwidth curve look like? (I assume the Internet would represent a seriously non-linear change, but maybe not.) What are the theoretical limits on communications (number of nodes and speed of access)? Does it even make sense to compare such qualitatively different technologies as semaphores and OC48 lines?" Good question! I figure the average bandwidth curve over the last 10 years would be an interesting thing to see.
  • Your theory is correct, but I know it takes light 8 minutes to reach us from the sun, which means it takes less time from Earth to Mars. 45 minutes is a long time -- too long. But your point is well made nonetheless.
  • I'm sure a lot of /. readers browse that way (myself included). I'm comparing that to the people who watch banks of TVs to get a lot of info at once.
  • You have to admit, TV and radio are high bandwidth communications. With the proliferation of TV channels and radio stations we have even more bandwidth. The real question is: who is listening? You can't really watch/listen to more than one station at a time (and I'm sure most people don't browse with more than one browser either).

    I think the only thing that has changed in the past few years is the way we communicate, not the amount. I do agree that the amount of communication has risen over time (over thousands of years), but only because there are more people on the planet.
  • I don't think using current tape technology is a valid measure. After all, the problem is in comparing old technology to today's.

    A valid comparison might be using a box of 1/2 inch reels, or TK50s, to copy a dataset from a VAX in NY to another in CA.

    Anyone remember how much you could put on a TK50? I don't think it was much.


  • I regularly have 5-20 browser windows open at a time; what are you talking about?

    I do so because with a modem, bandwidth is fairly small, so I download pages while I'm reading others.
  • Posted by Lulu of the Lotus-Eaters:

    Can it be... I am actually first post. (or maybe I just don't understand the new filtering)

    Well, I may not have a *great* answer. But I was reminded of the old saying about the bandwidth of a truckload of magnetic tape. I.e. don't underestimate it.

    I understand that X years ago, it really did turn out that this would be the fastest available way of transporting some large number of bytes from, say, NY to LA. Tape has gotten bigger in capacity and more compact along the way, just as bandwidth has gotten faster. So do they balance, and does the comparison still hold?

    Yours, Lulu...
  • > Let's figure the bandwidth of a station wagon of magnetic tape

    I've seen a couple of these over the years (I personally prefer a Hercules filled with CD-ROMs (nice and flat thingies)). What you have to remember is the time to actually feed the data onto whatever storage you use. You'd have to have some kind of massive writing equipment ready, which in itself should have a pretty high bandwidth. And -- what exactly is it that you want to send that takes up 150 TB?

    -Lars
  • Sure, the combined bandwidth of the world is a few million times greater than it was in 1995, but bandwidth per user has dropped to nearly 0.

    In 1995 a download from the USA during business hours came in at about 3k/sec. Today the same distance comes in at about 300 bytes/sec during business hours. The commercial internet is like a bottomless money pit. New backbones that take years to plan, approve, and bury are saturated by the time they go online, and there's no solution to the infinite recursion in sight.
  • The transcontinental telegraph put the pony express out of business.

    This illustrates the latency/bandwidth issue perfectly! While folks could send out more and more ponies to increase bandwidth, the latency was still there--there was still a lower bound on the time it took to send the minimal amount of information.

    The telegraph was, in comparison, hideously low-bandwidth. Most folks only got to send short messages, such as:

    UPDATE: OUR PLAN HAS FAILED STOP JOHN DENVER IS NOT TRULY DEAD STOP HE LIVES ON IN HIS MUSIC STOP PLEASE ADVISE FULL STOP
    ...and the response...
    SENDING ANOTHER COMET STOP DO NOT REPEAT DO NOT GO AGAIN TO SAN DIEGO FOR HENCHMEN STOP EFFICIENCY REQUIRES ALSO MOP UP ABBA ON THIS ROUND FULL STOP
    (This is an excerpt from the CrackMonkey FAQ [crackmonkey.org])

    While this is useful for conveying information in a timely manner, it doesn't lend itself to the sort of epistolary communication that makes it into the anthologies.

    Thus, something like a networked game requires short, low-latency packets of information to update player status and object action. Email, on the other hand, puts a certain delay on the transmission of a message, though it is equally swift with a 12-page rant as it is with a short one-liner.

    One could also look at TCP and UDP as putting an emphasis on bandwidth or latency, respectively. TCP is used for sending long streams of data with error-correction and all sorts of goodies, while UDP is used for blindly tossing out quick datagrams in the hope that the lower overhead will be useful.
    --

  • But you have to include the amount of time it takes to get the data onto the tape and back off again.

    That is a different bandwidth problem. You could also argue that if you are using fiber optics, that you have to include the amount of time it takes to get the data onto and off of the tape. After all, the "real" problem you are trying to solve may well be moving a large archive from one city to another.

  • For N tapes, if you have N tape-writing devices at each end, then writing/reading time will only add a few hours to the drive time, and won't have any more effect than a snowstorm in the Rockies or lousy traffic in Chicago.

    How about cost per MB/sec? I think a bunch of CD-R on a spindle might be the best bet there.
  • Well, a painted scroll may be A4 size at 300 dpi * 24 bit color when scanned into a computer, making a 25 MB file, but the level of addressable resolution was never that small. Brushes and quills probably had a resolution closer to 10-15 dpi (~2-4 mm square). And the monks probably had no more than a few pigments, several of which were not amenable to mixing or dilution (gold leaf, for example).

    The non-availability of good lenses made reading a problem as well.

    We shouldn't be talking here of bps, but rather of baud. (I know, usually baud is inappropriate.) Baud refers to symbols per second.

    In terms of art, this usually means a great deal. For instance, the depictions of saints were usually standardized, so a medieval illustrator probably could not stray very far from the norm. This further limits the amount of bandwidth in a tome.

    Besides religious material, the other major illustrated works included armorial rolls. At perhaps 16-20 arms per page, it would seem that the depicted arms represented an enormous bandwidth, but armorial bearings can be fully described using blazon in no more than 15-20 words, and often in fewer. Since blazon is a precise, formal specification of a coat of arms, it's (IMHO) probably the best subject for studies of medieval bandwidth.



  • Really, the bandwidth would be the rate you could transport at if you had a continuous stream of trucks unloading, which would be MUCH higher.

    Interestingly, the time it takes to load/unload the truck would have an impact on both bandwidth and latency, which is a different case than with computers - it takes hardly any time for the kernel to ready a packet, so it can send a practically unlimited number of them (enough to saturate the network), and this time has no significant impact on latency. In this case, if it takes an hour to load/unload 50,000 tapes, the delay will add 2 hours of latency (constituting 10% of the total latency). The bandwidth will be limited because you will only be able to send a maximum of 1 truck per hour - making the peak bandwidth 8x what you predicted. Or, if it took no time to load/unload the tapes, the bandwidth would instead be limited by the number of trucks you could cram on the road (A LOT), kind of like packets on a slow modem.

    Well, I thought that was interesting, at least. =)
  • Let's figure the bandwidth of a station wagon of magnetic tape. We'll use DLT, since that's the current technology in tapes. To be specific, we will use a DLT-IV tape that holds 35GB uncompressed. We will assume no compression. According to Quantum's datasheet, a DLT-IV tape is 4.16 by 4.16 by 1 inch. A small to midsize station wagon gives you 4 feet by 4 feet by 3 feet. (i.e. Volvo 240 since I know the approximate measurements). This means that the said wagon can hold 11 by 11 by 36 tapes, for a total of 4356 tapes. This is only using the trunk, and ignoring the seats. This is also ignoring the possibility of packing a few extras on the side using the 'slack' space. These 4356 tapes hold a total of 152460 gigabytes. According to mapsonus.com, going from New York to Los Angeles by car takes 56 hours 34 minutes, which is 203640 seconds. This means that a small station wagon going from New York to Los Angeles, and with only the trunk filled with DLT, has a bandwidth of approximately 6 gigabits per second. Extrapolate this to an 18-wheeler and you can see that even the fastest fiber optic data lines have a long way to go to even come close to a truck of DLT.
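A quick Python sketch of the arithmetic above (every figure -- tape capacity, trunk dimensions, drive time -- is the poster's assumption, not a measured value):

```python
# Sneakernet bandwidth of a station wagon full of DLT-IV tapes,
# using the figures assumed in the comment above.
tapes = 11 * 11 * 36                  # tapes fitting a 4ft x 4ft x 3ft trunk
capacity_gb = 35                      # GB per uncompressed DLT-IV tape
trip_seconds = 56 * 3600 + 34 * 60    # NY -> LA drive: 56 hours 34 minutes

total_gb = tapes * capacity_gb        # 152,460 GB in the trunk
gbit_per_sec = total_gb * 8 / trip_seconds

print(f"{tapes} tapes, {total_gb} GB, ~{gbit_per_sec:.1f} Gbit/s")
```

which confirms the ~6 gigabit per second figure.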
  • Naah, use a HummVee, with roof-mounted machine gun!
  • by Penty ( 3722 )
    Yes, it should be possible to do a chart. However, it would have to be done in Characters Per Second (CPS) rather than bits. Even then it would be tough, because most of the early methods used some form of code to compress the information.
  • But you have to include the amount of time it takes to get the data onto the tape and back off again.
  • For university users, people who access the majority of telecommunications through on-site corporate access, and the techno-illuminati, telecom has improved greatly (100Mbit ethernet, ATM, etc.)

    But, for the average user, not much has changed. When you consider dialup connections, the speed goes up, but the size of the stuff we try to push down the line gets bigger.

    As far as I can tell, there's no real difference in the speed of the user experience between Linux, Netscape 4 and a 56k connection today; a 2400 baud modem used to connect to classic Prodigy on an IBM PS1 six or seven years ago; or the 14.4 on the 486 with Win3.1 and IE2.

    Telecom has a long way to go, not so much in the high end connections on the backbone, but in proliferating technology to the low end, sub-$1000 PC kind of folks.


    Andrew Gardner
  • That reminds me of the old saying.

    "Take only three data points, as somewhere in the world there is a sheet of graph paper that will make them a straight line."

    Cable modem? Bah - when the local cable company gets those, everybody will be born with built-in T1s ...

    (Let's see if my Sparc IPX can manage to post this without mangling the subject line ... :) )
  • This [isnet.is] shows Iceland's connectivity since 1993. Note that this shows traffic in and out of the country, and not traffic within the country. I don't know if any conclusions can be drawn from this.

    Hmm. I wonder if Iceland will be slashdotted :->
    --

  • It would be much more interesting to compare the speeds and also factor in the cost per GB. Whereas a fiberoptic line from NY to LA would initially cost much more to install than sending tapes on a truck, once it is installed, the operating cost is low. You wouldn't want to use the truck method of transporting tapes for very long, as you'd have to spend a lot of money every time data is sent.
  • by cout ( 4249 )
    When they are on opposite sides of the sun it could even be 22 minutes, since transmitting through the sun could be a bit of a problem. So we'd need a relay station on Mercury or Venus.
  • don't take this the wrong way... but it always amazes me how people post (or write in IRC) these wonderful, long responses that have lots of useful information in them, but very little punctuation or attention to spelling.... =)
  • You almost touched on this, but one profound impact of low latency communications is that we can have "combatants" sitting hundreds of miles from the FEBA (forward edge of the battle area) and still in many ways experiencing much of the stress, horror, and yes, even thrill, of combat.
    Just think what those pilots of the "Predator" spy ROV must experience when they are flying over Kosovo, watching the Serbs blow up another town and knowing there is not a damn thing they can do directly.
    I think the human consequences of this low latency are going to be stunning (in both positive and negative senses).
  • Great post. But I think bandwidth can actually transform society -- maybe even as profoundly as the reduction of latency. Today's bandwidth allows the transmission of relatively simple messages. We can watch CNN and get to see short video clips and sound samples. Those messages are rather abstract, highly edited, and their true meaning is often lost in the process.

    But what if we had enough bandwidth to immerse ourselves into the current environment of, say, Kosovo? Being able to virtually "be" at places while important events are happening would definitely change the way society experiences news and maybe even change our perception of current events.
  • This is probably the contents of several communications theory classes... But think about these issues.

    What is the average bandwidth of each node in the net?

    What is the number of nodes in the net?

    What are the measures of the interconnectivity of the net?

    Average packet size and latency? (the truck has huge size and latency).

    I heard somewhere that the planetary bandwidth was tripling every 12 months. This was attributed to the linear growth in the mileage (meterage) of cabling, and Moore's law impacting the performance of the nodes at each end of every cable/fiber.

    I wonder, as the internet (TCP/IP protocol networking) becomes universal, will the 16-hop time-to-live be a problem? Does IPv6 address this, along with the less-than-one-IP-per-human-on-this-planet issue? (Yes, I DO think everyone should have an IP! Preferably stamped on the forehead at birth.)

  • This just goes to show that the use and content of the information is an important factor when comparing information capacity. If I want to get a few hundred full-length movies delivered to my lonely Martian base, bandwidth is very important and latency is negligible. Playing Quake, the situation is reversed. What's needed is a measure that's abstract enough to discount these factors.

  • Interesting question. There is a real problem measuring bandwidth in preindustrial times. Consider a cart fully loaded with codices; this would be the medieval analogue to the truckload of tapes. However, information was very rarely transported in this fashion, and never at the maximum speed. One example would be when Queen Kristina of Sweden abdicated and brought her library from Stockholm to Rome. She stopped at various places on the continent for over a year.

    Probably a better measure would be a 'typical' message carried by a courier on horse and ship. Let's assume a typical royal decree to be 40 characters per line, 30 lines per page and 5 pages. This would give a message length of 6000 bytes. A messenger would rarely carry only one message, however - let's say our intrepid traveller carries an equivalent of forty decrees (he'd bring along letters, maybe a codex for a monastery and such as well). That gives a 'packet size' of around 250k.

    Next, the travelling speed: If my memory serves, it would take three weeks between Copenhagen and Stockholm (a distance of 630km). This would give a bandwidth of 0.13 bytes/sec.

    Of course, many manuscripts were illustrated, so in practice the bandwidth would be somewhat higher. Nevertheless, I would hate surfing the web in the fifteenth century :)
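The courier estimate checks out; a short script makes the arithmetic explicit (every number here is the guess made above, not a historical measurement):

```python
# "Bandwidth" of a medieval courier, using the comment's estimates.
decree_bytes = 40 * 30 * 5         # 40 chars/line x 30 lines/page x 5 pages
payload_bytes = decree_bytes * 40  # ~40 decrees' worth of material (~240 KB)
trip_seconds = 3 * 7 * 24 * 3600   # three weeks, Copenhagen -> Stockholm

print(payload_bytes / trip_seconds)  # ~0.13 bytes/sec
```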
  • If you're going to have to defray the cost of building the fiber optic cable, you have to also defray the cost of building the road. Yeh, roads have other purposes, but so do fiber optic cables. And yeh, roads have more capacity than one truck, but then let's flood the road to its maximum carrying capacity.

    Make the assumptions equal unless you want to compare apples and oranges.

    --
  • Well, if you were to stick say, 50,000 50Gb AIT tapes in a 737 (yeah, it's a lot, but they're small), and fly from LA to New York, figure six hours plus an hour at each end, call it 8 hours, that gives you a bandwidth on that link of 86Gb/sec... Not too shabby.

    /P.
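Using the same assumed figures (50,000 tapes at 50 Gb each, eight hours door to door), the arithmetic comes out as:

```python
# Air-freight bandwidth of a 737 full of AIT tapes,
# using the figures assumed in the comment above.
tapes = 50_000
capacity = 50            # per tape, in the comment's units ("50Gb")
hours = 6 + 1 + 1        # flight time plus an hour of handling at each end

print(tapes * capacity / (hours * 3600))  # ~86.8, i.e. roughly 86 Gb/s
```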
  • I don't see bandwidth being limited by bits per second. To increase this measure of bandwidth, all you need to do is build a more widely parallel system. The problem in the end comes down to how many seconds it takes 1 bit of data to get from point A to point B. You can shrink your fibers and parallel more of them, but your data won't go any faster. More will just travel at the same time. We have an absolute limit on the seconds per bit of a single pathway: the speed of light. The first person to get a single bit of data to travel faster than the speed of light will have found a real solution to the communication speed problem.

    Think about playing quake with someone on mars. It doesn't matter what type of OC line is between you, it still takes light about 45 minutes (I think) to get from there to here.

    Paralleling data communications is only beneficial when the distance between the two communicating points is short enough to make the speed of light appear instantaneous.
    --
    ...Linux!
  • According to NASA the distance between earth and mars is:
    from: http://nssdc.gsfc.nasa.gov/planetary/factsheet/marsfact.html

    Minimum (10^6 km) 54.5
    Maximum (10^6 km) 401.3

    So if we take the speed of light to be

    3.0 x 10^8 m/s

    It takes at least 3 minutes and at most 22.3 minutes, depending on the time of the year.

    It's not 45 minutes, but the latency still sucks.


    --
    ...Linux!
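The one-way delays follow directly from distance divided by the speed of light:

```python
# One-way light delay between Earth and Mars, from the NASA
# fact-sheet distances quoted above.
C = 3.0e8                 # speed of light, m/s
min_metres = 54.5e9       # 54.5 million km
max_metres = 401.3e9      # 401.3 million km

print(min_metres / C / 60)  # ~3.0 minutes at closest approach
print(max_metres / C / 60)  # ~22.3 minutes at maximum separation
```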
  • Some friends of mine and I found an old computer magazine from the '70s or '80s, offering 'high speed' connections to some network or other (it was last year, so sue me!). We looked at the price, did a few calculations, decided how fast our school's T1 was (burst up to 127 kilobytes), and came up with an answer.

    In the late '70s and early '80s, our school's internet connection would have cost around $1.7 billion* per month. Nowadays, you can get a faster connection (cable's faster, right? thought so) for about $40/month**.

    I think it's changed a little, yeah.

    ~Sentry21~

    _______________________________
    * - Canadian dollars at the time
    ** - Canadian dollars today

  • Doesn't anyone ELSE use smoke signals and telegraphs to access the internet?



    Seriously, there would be more than one section of bandwidth scales. One for consumers, and one for companies. Joe Netsurfer isn't gonna have a T3 hookup in his house (Though there ARE exceptions), and I certainly don't know of any personal user that has one of Lucent's Trans-Oceanic 10GB/Sec Lines piped into his Home LAN. Damn, That's not a bad Idea.

    -- Give him Head? Be a Beacon?


  • Well, don't forget I can have cable bring TV data into my house, and have 3 different people watching TV on 3 different sets in 3 different rooms, which isn't THAT uncommon, especially in Suburbia.


    About cable TV, though, I was wondering exactly how much bandwidth my 70 or so channels are using in comparison to the total theoretical bandwidth of coaxial cable. I don't think much, considering satellite services offer more channels, but I am fairly certain a satellite only has ~1/10 the bandwidth... correct me if I am horribly wrong :).

  • "a bandwidth of 0.13 bytes/sec"

    About a bit a second then, huh? Scary; that is probably about as fast as we humans could distinctly and clearly perceive each bit. So this would be about the practical maximum if we were reading binary. Thank goodness for high-level encoding :).
  • The transcontinental telegraph put the pony express out of business. Prior to the telegraph, messages could only travel as fast as people could or, in limited circumstances, over lines of sight. So the transition to low-latency was sudden, but it is over a hundred years old, and actually hasn't improved much since its advent.

    The special property that makes the Internet revolutionary can be summed up in one word: routing.
  • Check out "The Mathematical Theory of Communication" by Claude Shannon and Warren Weaver. It provides a formal model for talking about all of these different forms of communication.
  • It's late, and I can't sleep, so if this sounds like drivel, forgive me.

    You first have to define what you mean by bandwidth. Clearly it's the capacity of some communications link in terms of symbols per unit time. This is a function of the physical properties of the materials with which this link is built (which is also called bandwidth; confusing, isn't it?).

    To throw a bunch of tapes on a truck and call it a high-bandwidth link is really misleading. That's a short burst of data; it's unsustainable. In comparing communications throughout history, what you really want to compare is the theoretical maximum sustained rate of transfer of individual symbols (bits).

    E.g. give a bit (0 or 1) to the pony expressman and as soon as he leaves, give another expressman the next bit, and so on and so on (assuming an infinite supply of horses!). After a sufficient time such that latency becomes negligible (say, a year), add up the bits received and divide by the total time. So if a horse leaves every 5 seconds, your bandwidth is 1/5 = 0.2 bits per second.

    Note that latency becomes negligible for sustained transfers of data. You don't care how many hops your ftp of the 2.2 kernel takes; you care how long it takes for you to complete the transfer.

    Here [ucl.ac.uk] you'll find an explanation of Shannon's theoretical limit on the bandwidth of a channel: "There is a theoretical maximum to the rate at which information passes error free over the channel. This maximum is called the channel capacity C. The famous Hartley-Shannon Law states that the channel capacity C is given by: C=B*log2(1+S/N) bits/second. Note that S/N is linear in this expression. For example, a 10KHz channel operating in a SNR of 15dB has a theoretical maximum information rate of 10000log2(31.623) = 49828 b/s. "

    So, brothers and sisters, the growth of bandwidth is a function of the growth in bandwidth of the materials making up our communications links, and the SNR of these links. I expect the graph would be a nice exponential. Good night.zzzzzzzzzzzzzz
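The Hartley-Shannon law quoted above is simple to evaluate directly. Note that the quoted example appears to drop the "+1" inside the logarithm; a strict reading of C = B*log2(1+S/N) gives a slightly higher figure for the 10 kHz, 15 dB channel:

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Hartley-Shannon capacity C = B * log2(1 + S/N), with S/N linear."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

print(channel_capacity(10_000, 15))  # ~50,279 b/s (the quote says 49,828)
```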

  • I often have three, four, five or more Netscape windows open. It just seems to happen when there are a couple of good threads from one page; I would rather just open a new window so I don't have to use the back button a lot. At home with a 24k connection, I have a lot of windows open so that at least one of them will be fully downloaded, giving me something to read while the other three are downloading...
  • Back around 1978, a 9.6 kbps full duplex data line was considered "high speed". It required a conditioned phone line and a very expensive ($20K) modem. You could get 56 kbps satellite data links from Intelsat if you had the cash. Quite a few computers actually used 110 baud teletype circuits for data links. Today you can buy a 155 Mbps data link.
  • Slightly off topic but aren't there some new cheap tape standards coming out now/soon? I ask as I really need to buy a tape drive soon. (20+gigs w/ no backup)
  • I used to do that, until it took me so long to find something I wanted that I broke down and paid the $45 a month for cable.... can't beat 10Mbps piped right into your living room.
  • From what I read about the cable modem my cable co. uses, it can support up to 35Mbps, and that entire pipe fits through one unused 6MHz wide TV channel. I'm pretty sure they can open up multiple data channels in the system, too.

    My cable co. has probably 10 or 15 unused channels on its lineup, so they could probably crank out way more than a single OC-3, if need be. On the other hand, they don't have a snowball's chance in hell of being able to have a backbone connection big enough to support that kind of bandwidth.

  • Now it takes MORE than a week for the mail to go across the country!
  • 10GB/sec? Wow... where do I sign?!
  • Then we have the cable modem, which is ultra-fast but the lines are so overloaded your machine has to wait in line to receive. For the most part, half-life on my cable hookup downright sucks, since the latency on the hookup is so high.

    Then their backbone provider isn't that hot, either - I live in St. Louis, and tracerouting to a computer a few blocks away shows that stuff gets routed out to Kansas City, Chicago, and then back into St. Louis again, just to travel the distance equivalent of a 10-minute walk.

  • by Jeremi ( 14640 )
    > Your theory is correct, but I know it takes light 8 minutes to reach us from the sun, which means it takes less time from Earth to Mars. 45 minutes is a long time -- too long. But your point is well made nonetheless.

    45 minutes may be too long, but it seems to me that when Mars and Earth are on opposite sides of the sun, it could be at least 16 minutes.
  • But the thing about radio and TV is that they are broadcast - they don't actually provide any bandwidth. I see the same old C-SPAN on my TV that you are seeing on yours. They only have ONE channel. Whereas on the internet, you could conceivably have millions of simultaneous video broadcasts (not all to my single PC, but all travelling over the central pipes). You could boil down the bandwidth of your whole cable company to the equivalent of a single OC3.

    Ditto goes for satellite broadcast. Sure, each channel is fat, but the overall bandwidth is pretty limited.

    -=Julian=-

  • What if the station wagon drives a mile in a minute and a half? Can you then say the bandwidth is 1,694 gigabytes a second? We need a method to measure bandwidth that does not change depending on how far we drive.
  • One thing that seems to have been glazed over is that the amount of bandwidth available to a given person depends entirely on their location relative to a major market. If you live in a big metropolitan area you probably can get access to cable modems, DSL, etc. If you live out in the boonies, you are probably relegated to 56K dialup or maybe a DSS dish.

    This is one thing that will prove very interesting as the rate of innovation increases. It is a general trend that those who live closest to the big cities get the innovations first, but I wonder what the rate of increase is outside that area. If an innovation requires an upgrade to existing infrastructure, then the likelihood of it getting out to the most remote areas is pretty small. If it is something that can work on existing infrastructure (POTS, or wireless), then it can be expanded very easily to those outlying areas.

    I wonder how long it is before a net provider realizes they can be the Walmart of ISPs. Rather than developing technologies that are only useful in the highly competitive metropolitan areas, somebody should work to bypass geographic boundaries and make money off the relative monopoly they would possess in the more remote locations.

    ---

  • When I traceroute a slow connection, it is usually Mae West (a network-access point, a.k.a. peering point) that is the culprit. So I tend to believe that "raving of a fool".
  • Since the maximum is when it is directly on the other side of the sun, you're going to have real problems sending data that direction.. but hey..
  • He includes data from before the deregulation of the telecom industry. The old monopoly of AT&T did not provide an incentive to offer more bandwidth to the home customer; now, there are several new CLECs and hundreds of regional and national ISPs laying infrastructure and offering cheaper connection rates.
    Thus, if you only look at data after deregulation came into effect, the growth of bandwidth will be much more dramatic than Moore's law.

    cheers amigos
  • Across the USA would be hard to measure, but the biggest single link across the continent belongs to Qwest Communications. They just lit up an OC-192 network, I think at 4 lambdas, expandable to 32 using Nortel equipment; that is, 40Gb/s expandable to 320Gb/s.
    They had some neat ads stating that it would take 17 or so seconds to tx the whole Library of Congress, including pictures, across the US; I should know the details better, I work at Nortel; however...

    By the way, some trivia: did you know that more than 75% of internet traffic is carried over Nortel equipment?

    hasta luego, compadre.
  • by Bald Wookie ( 18771 ) on Sunday March 28, 1999 @11:54AM (#1959650)
    Although bandwidth has increased steadily over the last hundred years, it has been the decrease in message latency that has really revolutionized communications. For George Washington to send a message to England, he could expect a transit time of about four months. In the days of the pony express, it took a bit over a week to send a message across the country. This slow speed at which messages propagated had some pretty profound effects. Jamestown settlers couldn't send for more supplies in November and expect them for the winter. The king couldn't send timely orders to his governor, or keep abreast of current news. Moving up the timeline a bit, we run into other considerations. What if the president had been shot? People in California couldn't know about it until at least a week later. Fundamental details such as who is in charge of the country are unknown to a substantial portion of the population for a long time. Everything on the other coast is old news, at least from our modern perspective.

    When electronic communications became commonplace, these distances were greatly reduced. Messages get blasted across the country at around the speed of light. Newspapers could report current information. Important events of the day were on the evening news. Now our wars are fought halfway around the world on live TV. There is no more delay in getting a message across. Bandwidth will increase, allowing larger messages to get sent faster. Yet bandwidth only changes the content, and will not contribute in a measurable way to the revolution that has already taken place.
  • sure, the progress might be linear, on log-log graph paper!!!

    i know i've gone from modem to dual channel isdn in 3 years, then in a year and a half went to cable.

    my favorite part is that the bandwidth increases at the same ratio it's decreasing in price!!! =)
  • Geez, if I burned about 1000 CDs and took them to my computer upstairs, my LAN would have a bandwidth of 640MB x 1000 = 640GB / 10 secs of walking = 64GB/sec. There would be no need to factor in burning and reading time, because even if I transferred them over fiber, I would still have to read all of the CDs to send them from one comp and burn all the CDs to save the data on the other.
  • I have seen such studies over the years, mostly on speed of transportation and energy use. They show an exponential growth rate; the interesting ones start about 10,000 BC and follow through to the present, with extrapolation for the immediate future.

    I expect that the bandwidth curve will follow the same trend. A letter taking 4 months to cross the Atlantic by ship was very fast compared to the speed and distance of a thousand years before.

    I can't put numbers on it tonight, and one of the serious disadvantages of slashdot is the obsolescence rate. It would take me a few days of part-time work to write an article on the subject. I would have to find numbers to back up what I can see in my head. Are enough of you guys interested to make it worthwhile for me to put in the effort?

    I expect the internet is right on the curve, I will be surprised if a knee shows up because of it.

    Jim Hurlburt
    jlh@ewa.net; jhurlburt@cwcmh.org
  • It is true that this method gives you a terribly high throughput. In fact, think how many DVD-ROMs you could stack in one of those cargo jets the army uses to transport tanks. Pretty impressive.

    If you didn't care how long a certain piece of data took to get across the Atlantic, then a bunch of cargo jets and DVD burners would work quite well. You would indeed achieve record bandwidth (with really, REALLY high latency for writing/reading and transport time).

    That is the key difference. Frankly, a related reason is why big CD-ROM games are shipped instead of downloaded. In theory, Cyan could distribute the 5 CDs of Riven over the internet once someone buys it with a credit card, right? The problem is that your average home user doesn't have the bandwidth to download it. Latency is fine... they just have to wait a few days before they get to play. (There are, of course, other reasons, disk space probably being up there on the list, but it's a decent viewpoint for the topic at hand.)
  • Something to remember is that it's not always about bandwidth; latency factors in too. This is part of the reason I want an ISDN link: modems are slow, and there's a theoretical floor on the latency a modem adds to a given piece of information (something like 80 milliseconds), while ISDN already beats that by a factor of at least two (possibly much more). This, incidentally, is why an Ethernet link throttled to modem bandwidth still beats the pants off a modem. (Netrek and Quake players should know all about this one.)
  • If I remember correctly, this is a limitation of modems attached externally via a serial port. I believe that internal modems are not subject to this inherent latency problem, but there are probably people out there who can correct me if I'm wrong. : )
  • About cable TV, though, I was wondering exactly how much bandwidth my 70 or so channels are using in comparison to the total theoretical bandwidth of coaxial cable. I don't think much, considering satellite services offer more channels, but I am fairly certain a satellite only has ~1/10 the bandwidth... correct me if I am horribly wrong :).

    That really depends on which type of satellite you're talking about. If you're talking about C-band (the big dish, generally 7' or 10' center-focused parabolic dishes) then you don't have much bandwidth at all. You have to move the dish from one satellite to another, and each satellite has 12 transponders * 2 polarities for a total of 24 uncompressed TV channels per satellite. The satellite itself usually has less than 50 watts of power. Using compression such as MPEG-2, this number goes up by a factor of 10-12 (I'm a little fuzzy on this number).

    Whereas if you're talking about DBS (the little dish, generally 18" off-center parabolic dishes) then you get a little more raw bandwidth. The satellite itself broadcasts at >= 250 watts of power, with 16 transponders * 2 polarities. The way they get some 200 channels on these beasts is by compressing everything. DirecTV/USSB uses MPEG-1 while Dish Network uses MPEG-2.

    Getting back to the original point, I'm not sure how much bandwidth standard RG-6 or RG-59 coax has, but the DBS satellites themselves have roughly 32 Mb/s of bandwidth, making a single satellite useful only to mid-sized ISPs and those for whom running cable would be more expensive than using satellite.

    Incidentally, the way most cable companies do digital cable is by compressing several cable channels onto one channel, so you don't get the full picture quality you otherwise would have if you were watching the same channel on a c-band system.
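    The channel arithmetic above can be sketched out quickly. The transponder counts and the ~10x C-band ratio are the figures quoted in this comment; the 6x DBS ratio is an assumed figure chosen to land near the quoted "some 200 channels":

```python
# Channels per satellite = transponders * polarities * compression ratio.
def channels(transponders, polarities, compression=1):
    return transponders * polarities * compression

c_band_raw = channels(12, 2)        # 24 uncompressed C-band channels
c_band_mpeg2 = channels(12, 2, 10)  # ~240 at a ~10x MPEG-2 ratio
dbs = channels(16, 2, 6)            # ~192, near the "some 200" on DBS

print(c_band_raw, c_band_mpeg2, dbs)
```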

    -skullY
  • In 1995 a download from the USA during business hours came at about 3k/sec. Today the same download comes in at about 300 bytes/sec during business hours.

    Funny, in 1995, a download for me (from, for example, walnut creek cdrom) came at about 2.2k/sec.

    This morning, I moved a big chunk of data from the same point into my house at about 148k/sec. And I have to tell you, I'm happy about that.

    DSL has killed the "last mile" problem for anyone with a telco wise enough to offer it. Backbones aren't the problem, nor is the last mile (within 18 months, you'll have DSL or cable; if you have neither after that point, you probably never will). The problem is that the peering points are saturated. The MAEs and PBNAP are the major bottlenecks, not counting the existence of alter.net, which is just plain poor.

    But all of this, of course, is the raving of a fool.
  • The ancient Romans had a communications network that could get messages from the capital to the frontier in a matter of minutes. They built towers along their roads within line of sight of each other, say every five or ten miles, and signalled each other by flag or mirror or something. Not very high tech but not bad for a couple thousand years ago. (They invented the steam engine too.)
  • re: New backbones that take years to plan, approve, and bury are instantly saturated by the time they go online and there's no solution to the infinite recursion in sight. Exactly right - it's just like building roads. Engineers say that if you build a new highway, within a few years it will be used at 120% of capacity, no matter where you build it. All that happens when you build new roads or add lanes to old roads is that people can live farther from where they work, and they can drive more. It doesn't matter how much bandwidth the internet can handle, it's ALWAYS gonna be slow.
  • Keep in mind, folks, that written messages are not the only information transferred. A painting, for example, is at least 300 dpi at 24-bit colour. At 8.5x11, that's 25 megabytes. Even books and scrolls had illustration, often with decorative initial letters that must have been several megabytes big.

    And remember that the transportation of a human who bears information is equivalent to a videoconference with perfect audio and video fidelity.

    Finally, if you just want to know about Internet bandwidth, Jakob Nielsen's got a nice chart [useit.com].
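    The painting estimate above is easy to verify, assuming 3 bytes per pixel for 24-bit colour:

```python
# An 8.5 x 11 inch painting scanned at 300 dpi, 24-bit colour (3 bytes/pixel).
width_px = int(8.5 * 300)                       # 2550 pixels
height_px = 11 * 300                            # 3300 pixels
size_mb = width_px * height_px * 3 / 1_000_000  # uncompressed size

print(f"{width_px} x {height_px} pixels = about {size_mb:.0f} MB uncompressed")
```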

  • Today I can buy 10 Fast Ethernet 100 Mbit/s cards for the same price a 1200 bit/s modem cost when I started computing. Pretty impressive, isn't it? :)

    - nr

  • Who's talking about sub-$1000 PC's? I think everyone should have a comfortable 28.8 modem and a nice programmable serial terminal with a shell account and Lynx. Total cost $40, max, at a used computer-parts store. Does everything a $3000 Pentium III does, unless you're one of those people who care about pictures.

    My mom wants to buy one of the new neato mini laptops, for around two grand. I really think she'd prefer fifty green-screens, myself...
  • In 1994 when the web wasn't on anyone's radar, I helped start the first ISP in the area. The NSF was still around, but we managed to convince a backbone provider "MIDnet" to give us a link to resell Internet access.

    We had a dedicated 56k pipe to the net, and that seemed good enough for the time. And with 30 simultaneous users logged on, there was still bandwidth to spare! (After all, people were either using Telnet, FTP, or Gopher. Anyone remember gopher?!??! That's a dead protocol. So much for my skill at creating gopher pages!)

    Of course, the web changed all of that, and bandwidth requirements have gone through the roof. Running an ISP on a 56k link? HA! That's how much a single user can suck down the pipe with a web browser.
  • >> 2400 baud modem used to connect to classic Prodigy on an IBM PS1 six or seven years ago

    Six or seven years ago? I'm still using an IBM PS/1 to connect to the Internet. It just happens to not have a monitor attached to it. :)
  • An interesting point, but it's important to distinguish between bandwidth and latency. A truckload of tape would have monstrous bandwidth, but horrible latency.

    Can you imagine playing Quake through a station wagon?

  • I wish I had thought of this. Note to moderators: give this one a +1, and bump my inane post down into AC-land...
  • With apologies to the Eagles... if you don't get it, you didn't need to know anyway.

    Well, it depends on what you mean -- data transmitted, or data received?

    In the first case, a previous poster cited the old saw "Never underestimate the bandwidth of a station wagon full of tapes." The transmission rate of a Country Squire is stupendous, but how fast can it be received? Tape drives manage from under 500KB/sec on up...

    Now consider the case of printed books. I'd hazard a guess that the average Tom Clancy potboiler contains something like a megabyte of information. Again, the transmission rate is very high (someone hands you the book), but the reception rate is very low (you have to read it) -- in my case, I manage about 350 bits/second (550 wpm, 5.5 letters/word, 7-bit ASCII).

    Now think about television. Uncompressed NTSC video has a transmission rate of around 25MB/sec. This works out to about 45GB for an episode of I Love Lucy, including commercials. Cynics will argue that the actual useful data rate is an inverse square of the amount watched.

    I guess it all depends on what you're transmitting, and to what or whom. In most cases, I'd say that above a certain transmission rate it doesn't matter -- the process is CPU-bound anyway (whether by grey matter or otherwise).
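    For the curious, the reception-rate figures above work out as follows (all assumptions are the poster's: 550 wpm, 5.5 letters/word, 7-bit ASCII, 25 MB/sec uncompressed NTSC, and a 30-minute episode):

```python
# Human reading rate vs. uncompressed NTSC video.
reading_bits_per_sec = 550 * 5.5 * 7 / 60   # wpm * letters/word * bits/char / 60 s
episode_gb = 25 * 30 * 60 / 1000            # MB/sec * seconds, expressed in GB

print(f"Reading: ~{reading_bits_per_sec:.0f} bits/sec")
print(f"One 30-minute episode: ~{episode_gb:.0f} GB")
```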
