
Creating a Backboneless Internet? 370

Peter Trepan asks: "The Internet is the best thing to happen to the free exchange of ideas since... well... maybe ever. But it can also be used as a tool for media control and universal surveillance, perhaps turning that benefit into a liability. Imagine, for instance, if Senator McCarthy had been able to steam open every letter in the United States. In the age of ubiquitous e-mail and filtering software, budding McCarthys are able and willing to do so. I Am Not A Network Professional, but it seems like all this potential for abuse depends upon bottlenecks at the level of ISPs and backbone providers. Is it possible to create an internet that relies instead on peer-to-peer connectivity? How would the hardware work? How would the information be passed? What would be the incentive for average people to buy into it if it meant they'd have to host someone else's packets on their hard drive? In short, what would have to be done to ensure that at least one internet remains completely free, anonymous, and democratized?"
This discussion has been archived. No new comments can be posted.

  • You're on it baby.. (Score:5, Informative)

    by brokenin2 ( 103006 ) * on Friday February 17, 2006 @09:42PM (#14746882) Homepage
    It would look an awful lot like the internet we have now.

    You're describing the original design of the internet, which is essentially what we're still running.

    In practice though, it would be insane to let everyone with a DSL line to two different locations update routing tables across the entire internet. The mechanisms to allow this exist (BGP, OSPF), but major ISPs that don't want their networks to fall apart prevent it, because their service would quickly turn to crap. ISPs with missing filters have actually caused internet-wide splits, when the entire internet tried to route through someone's T1s connected to two different ISPs. BGP with a slightly better cost system could help with that, but anyone could still cause a split anytime they liked. Think of an entire internet that acts more like IRC.

    The core of the internet is still just a bunch of peers, but if you want things to stay up, they've got to be a select group that really know what they're doing. You're still free to peer directly with anyone you want, just don't expect everyone else to use your internet connection to get there too. Most people don't want to have to buy two internet connections for marginal gains anyway.
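
    To make that filtering point concrete, here's a toy sketch in Python. It assumes a made-up best-path rule of "shortest AS path wins" (real BGP weighs far more than that), and the prefixes and AS numbers are documentation/private-range examples, not real announcements:

      import collections

      # prefix -> list of candidate AS paths heard from peers
      routes = collections.defaultdict(list)

      def announce(prefix, as_path):
          routes[prefix].append(as_path)

      def best_path(prefix):
          # Grossly simplified: shortest AS path wins.
          return min(routes[prefix], key=len)

      # Normal state: a popular prefix reachable through two big carriers.
      announce("203.0.113.0/24", ["AS701", "AS2914", "AS64500"])
      announce("203.0.113.0/24", ["AS1239", "AS3356", "AS64500"])
      print(best_path("203.0.113.0/24"))    # via one of the real carriers

      # An unfiltered customer on two T1s re-announces everything it hears,
      # so its own AS now looks like the shortest path to the same prefix.
      announce("203.0.113.0/24", ["AS64512", "AS64500"])
      print(best_path("203.0.113.0/24"))    # everyone now routes via the T1s

    Strip that kind of announcement out at the edge, which is exactly what the big ISPs' filters do, and the leak never propagates.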

    Perhaps a software solution like TOR or Freenet could help you sleep better at night?

    • More than one internet? Looks like George W. would finally have his Internets!
    • Tier 1s? (Score:5, Insightful)

      It would look an awful lot like the internet we have now.

      Except for, you know, the Tier 1 ISPs, on whose networks practically all our traffic passes at some point.

      Control them, and you control the net.
      • Re:Tier 1s? (Score:5, Interesting)

        by brokenin2 ( 103006 ) * on Friday February 17, 2006 @10:14PM (#14747028) Homepage
        I'm not saying that there isn't a core to the internet. It's there, but that's not by design, it's a convention to keep the internet from totally sucking.

        His question was, "Is there a way". The answer is yes, but you don't want it, so people stopped doing it. Anyone can peer with anyone else, but the copper/fiber cost to take the core out of the picture prevents anyone from wanting to do it. If you're worried about big brother, encrypt.

        If he really wants what he's asking for, he can start finding peers on the other side of the net, and he can keep *his* traffic off the backbones once he has enough peers (and he's built some enormous route tables as well).

        • Re:Tier 1s? (Score:3, Informative)

          by toddbu ( 748790 )
          The answer is yes, but you don't want it, so people stopped doing it.

          Then what do you make of the Seattle Internet Exchange [seattleix.net]?

        • Re:Tier 1s? (Score:3, Interesting)

          Anyone can peer with anyone else, but the copper/fiber cost to take the core out of the picture prevents anyone from wanting to do it.

          Not only that, but there's the problem of growing routing tables. Every host on the Internet needs to know what direction to send packets destined for every other host. IPv6 is designed to alleviate some of the problems of big routing tables, but that's just because it makes it easier to map a hierarchical network topology into a hierarchical address space (thus reducing
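
          A rough illustration of that aggregation win, using Python's standard ipaddress module (the 2001:db8::/32 prefix is just the IPv6 documentation range, picked arbitrarily):

            import ipaddress

            # One provider allocation, carved into per-customer /48s.
            provider = ipaddress.ip_network("2001:db8::/32")
            customers = list(provider.subnets(new_prefix=48))
            print(len(customers))      # 65536 customer prefixes behind one route

            # The rest of the internet only needs the single covering route.
            for c in customers[:3]:
                print(c, "covered by", provider, ":", c.subnet_of(provider))

          The core carries one /32 instead of tens of thousands of customer routes; that's the table-size relief hierarchical addressing buys you.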

      • by bi_boy ( 630968 ) on Friday February 17, 2006 @11:47PM (#14747400)
        Control them, and you control the net.

        One Tier to Rule Them All. One Tier to Find Them. One Tier to Bring Them All and In The Darkness Bind Them.

        Yeah I know, redundant, I couldn't resist though.
      • Re:Tier 1s? (Score:2, Insightful)

        by dnoyeb ( 547705 )
        I disagree. There will be as many "Tier 1" ISPs as people need. The only reason there are a few now is because we only require a few. This is all beside the point, though.

        When Congress starts legislating, your network architecture is meaningless. If you're worried about invasion of privacy, you should address it with your vote as well as your intelligence. If you can explain the issue, perhaps you will get more votes. It's tough to fight the force of the media, but it's not impossible.
      • Re:Tier 1s? (Score:2, Informative)

        by cat6509 ( 887285 )
        >>It would look an awful lot like the internet we have now.
        >Except for, you know, the Tier 1 ISPs, on whose networks practically all our traffic passes at some point.
        >Control them, and you control the net.

        Keep the backbone. Without huge aggregate networks the internet is not cost-effective, not to mention the routing problems and bloated BGP tables we would have. Just run a VPN to the peers you trust; that can be router-to-router (GRE, IPsec, hacked-together SSH, whatever).
    • by ZagNuts ( 789429 ) on Friday February 17, 2006 @09:56PM (#14746945) Journal
      Perhaps a software solution like TOR or Freenet could help you sleep better at night?

      Don't know much about TOR, but I just thought I'd clarify about Freenet. It is indeed a software solution to what you are asking about, in which sites are accessed in an entirely peer-to-peer manner. Instead of having static routing tables located at specific points, each computer in the network maintains its own routing information. If a computer doesn't know how to get to a certain site, it guesses by asking a neighbor whether it has the desired data. Data is cached throughout the network so that sites are stored as distributed files, meaning that at any one time, if your computer is part of Freenet, it could have information related to a number of sites.

      The good thing about Freenet is that site accesses are entirely anonymous. There is no way to be traced, AFAIK. One of the bad things is that it takes a computer a long time to build up enough routing information to access any websites at all. You have to run the Freenet program for a few days before you are able to access anything, and even then it's painfully slow. The other problem people have is that you have to store any content that goes through your computer. Freenet is plagued with child porn sites because of the anonymity it provides. This means that if you are running the Freenet program you are likely to have child pornography data stored on your computer even if you have never visited those sites. While the legality of this is questionable, the ethical issues are obvious.

      Still it is a very interesting concept and definitely has its applications (China anyone?).
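
      To give a feel for the routing style (not the real Freenet algorithm, which adds backtracking, probabilistic caching and a lot more), here's a toy Python version of "ask the neighbour whose ID looks closest to the key, and cache whatever comes back":

        import hashlib, random

        def key(name):
            # Content is addressed by a hash of its name, not by who hosts it.
            return int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")

        class Node:
            def __init__(self, nid):
                self.nid = nid
                self.peers = []     # a few neighbours
                self.store = {}     # local cache: key -> data

            def fetch(self, k, ttl=64, visited=None):
                visited = visited if visited is not None else set()
                visited.add(self.nid)
                if k in self.store:
                    return self.store[k]
                if ttl == 0:
                    return None
                # Ask the neighbour whose ID is numerically closest to the key first.
                for peer in sorted(self.peers, key=lambda p: abs(p.nid - k)):
                    if peer.nid in visited:
                        continue
                    data = peer.fetch(k, ttl - 1, visited)
                    if data is not None:
                        self.store[k] = data     # cache on the way back
                        return data
                return None

        random.seed(1)
        nodes = [Node(random.getrandbits(32)) for _ in range(50)]
        for i, n in enumerate(nodes):             # a ring plus random shortcuts,
            n.peers.append(nodes[(i + 1) % 50])   # so the mesh is connected
            nodes[(i + 1) % 50].peers.append(n)
            n.peers += random.sample([m for m in nodes if m is not n], 3)

        k = key("freesite:example")
        nodes[0].store[k] = "hello from the mesh"
        print(nodes[25].fetch(k))    # found somewhere out there, then cached locally

      Every node that relayed the answer now holds a copy, which is exactly the "you end up storing other people's content" property described above.
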
    • Comment removed based on user account deletion
    • I agree... let the service providers provide the service. If you want privacy, use encryption. Unless some highly specialised entities have developed quantum computers and kept it a secret, they won't be able to break it in any time frame suitable for mass communication snooping.
      • For the moment... (Score:3, Interesting)

        by abb3w ( 696381 )
        Unless some highly specialised entities have developed quantum computers and kept it a secret, they won't be able to break it in any time frame suitable for mass communication snooping.

        True; but if various [slashdot.org] corporate [slashdot.org] proposals [slashdot.org] go through, your encrypted traffic might travel cross country at sub 56kbps rates with multi-second latency. Which does bad things to a torrent.

        Mind you, this still won't stop file sharing. As an example of the alternatives: someone in my apartment complex has a non-internet wireles

    • by Mattcelt ( 454751 ) on Friday February 17, 2006 @10:13PM (#14747021)
      I think that response may have missed the point of the submitter's original question. I read it as "is there a way to prevent all traffic from traversing predictable routes and hubs, thereby disallowing any entity from collecting all of one's transmitted data and using it against one?"

      Essentially what the submitter is interested in is a meshed network, which to my knowledge is the only network topology yet created which does not use hubs, centers, or buses to carry conglomerated traffic. Remember that things like BitTorrent, BGP (less so), and other similar protocols are really creating "virtual" meshes, not real ones - all of your traffic (and that of every other person in your segment) is still travelling to your ISP, and that to their backbone. So anyone who sits at those hubs or backbones would be able to see all your torrent traffic, and who it is going to/from - it is only the separation of the ISPs and the RIAA/MPAA/FBI that keeps them from knowing your every move on the Internet! (Encryption and proxies help, but they aren't a foolproof solution, btw.)

      Also, TCP is designed to be fault-tolerant, but also semi-optimizing, taking the shortest perceived route to its destination. So unless a backbone is down, most (if not all) traffic from you to a host between which the backbone sits will travel on that backbone, very predictably. TCP is not privacy-sensitive.

      The short answer is that in a wired world, there is no feasible way to create a mesh. The strength of the mesh is algorithmically tied to the number of other nodes each node is connected to. So unless you're going to dig up the yard between you and, say, three of your neighbors, and they and two more of theirs, and so on, across the entire country, you will end up with a topology which looks more like what you've already got, with a smaller number of larger rings and stars, each funneling through a central location.

      In a wireless environment, the possibilities are much better. Some police precincts in the U.S. have been experimenting with mesh-networked radios, where each radio is a repeater as well as a transceiver. Thus a linear configuration of radios could extend the range from perhaps a 30-mile radius to a 60-mile-per-radio diameter for as long as the chain is unbroken. This isn't the optimum configuration, however, since it is presumed that one would want redundancy, so you would be forced to configure the mesh in such a way that you could talk to at least three other nodes at any given time. This requires a very high density of nodes, so it would work much better in a densely-populated area than one where nodes are scarce.

      I hope that answers the question.
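
      To put a rough number on that density requirement, here's a small Python experiment (all the parameters are invented for illustration): scatter nodes on a square, link any pair within radio range, and see how much of the mesh a single node can actually reach.

        import math, random

        def reachable_fraction(n_nodes, radio_range, size=10.0, seed=0):
            rng = random.Random(seed)
            pts = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n_nodes)]
            # Link any two nodes within radio range of each other.
            links = {i: [] for i in range(n_nodes)}
            for i in range(n_nodes):
                for j in range(i + 1, n_nodes):
                    if math.dist(pts[i], pts[j]) <= radio_range:
                        links[i].append(j)
                        links[j].append(i)
            # Breadth-first search from node 0: how much of the mesh is reachable?
            seen, frontier = {0}, [0]
            while frontier:
                nxt = []
                for u in frontier:
                    for v in links[u]:
                        if v not in seen:
                            seen.add(v)
                            nxt.append(v)
                frontier = nxt
            return len(seen) / n_nodes

        for n in (25, 50, 100, 200):
            print(n, "nodes:", round(reachable_fraction(n, radio_range=1.5), 2))

      Sparse deployments fragment into islands; only past a certain density does most of the mesh become reachable, which is why this works so much better in cities than in the countryside.
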
    • by r_naked ( 150044 ) on Friday February 17, 2006 @10:21PM (#14747064) Homepage
      In practice though, it would be insane to let everyone with a DSL line to two different locations update routing tables across the entire internet.

      We seem to be scaling rather nicely.

      http://anonetnfo.brinkster.net/ [brinkster.net]
    • If you are concerned with keeping communications over the Internet private you don't change the Internet, you encrypt the traffic you are trying to keep private. There are a number of good options available ranging from encrypting your messages to establishing VPN connections with the systems you communicate with.
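
      For the "encrypt your messages" half, even something this small keeps a copied-in-transit message unreadable. A sketch using the third-party Python cryptography package (getting the key to the other side safely is the part it doesn't solve for you):

        from cryptography.fernet import Fernet

        # The key has to reach the recipient over some channel you already trust.
        key = Fernet.generate_key()
        f = Fernet(key)

        ciphertext = f.encrypt(b"meet me at the usual place at 9")
        print(ciphertext)             # what a wiretap on the backbone would see
        print(f.decrypt(ciphertext))  # what the key holder gets back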

    • Perhaps a software solution like TOR or Freenet could help you sleep better at night?

      Well, not quite. ISPs are already throttling/blocking BitTorrent, so it wouldn't be that hard to block Freenet too. What the original poster asked for was an alternative to the current Internet. FN is built on the Internet we have now, and thus subject to many of the same problems as anything else on the 'net.
    • Uh...IPv6 (Score:2, Informative)

      by NeepyNoo ( 619951 )
      'nuff said.

    • First, the stated privacy concerns are no justification for changing the underlying infrastructure. If you're genuinely concerned about privacy, then start encrypting everything you put on the wire. Use anonymizing services.

      Secondly, network geeks in general do not grok the economics of the internet on a national or global scale. Without statistical multiplexing and large economies of scale created by the "backbone providers" vilified in the original post, your internet access fees would not be as affordabl
    • This has been my contention since the whole fiasco about the Root DNS servers...They are there only because of a consensus...if we change our consensus, the existing servers will become unneeded. The US Government can't control the Internet, as it is a consensus meritocracy, much as the Tofflers described in their wonderful series of books on the future of power.

      With a totally decentralized Internet, all it takes is one tunnel to bypass existing censorship. I remember back in the Fidonet days, you could get
    • The question is basically a re-statement of the original ARPAnet design, you are correct. However, to be absolutely true to the question, you'd need two additional stipulations.

      First, to be effective, all network connections would need to be fairly fat. A tiered Internet is designed along the same sort of design philosophy as a "fat tree" - low bandwidth at the work-node level, massive bandwidth in the middle. A tierless Internet, particularly one that supported enough multiple paths to be useful for robust

    • by Alsee ( 515537 )
      Perhaps a software solution like TOR or Freenet could help you sleep better at night?

      Nope.
      Are you familiar with Trusted Network Connect? [trustedcom...ggroup.org]

      It is a new specification from the Trusted Computing Group to control and restrict network connections, and to control and restrict the networked computer.

      "The TNC architecture enables network operators to enforce policies regarding endpoint integrity at or after network connection."

      Of course the Trusted Computing Group is advertising it as a good thing, and is advertising
  • Bad Idea (Score:5, Insightful)

    by Kasracer ( 865931 ) on Friday February 17, 2006 @09:45PM (#14746896) Homepage
    If BitTorrent is any example, this would be a bad idea. One day you may be able to get to Google fast, and then the next it may take forever to load.

    A peer-to-peer internet would be horrible. Not only would it be unreliable, but at times slow.

    Sure, some agencies can access our information because it's centralized, but if we don't want them to see something, it's not hard to encrypt it. Hell, I'm even working on an encryption application.
    • .....it's not hard to encrypt it....

      Indeed, if anyone has deep dark secrets he/she wishes to share with someone, just encrypt with something like PGP. Your secret will be safe unless someone wants it badly enough to torture either you or the recipients for the password. If someone wants to force a secret out of you, they'll get it, unless you, like many young Muslims, are willing to die for it.
    • Re:Bad Idea (Score:3, Interesting)

      by Firehed ( 942385 )
      But encryption is a waste of time if you can bypass the evildoers entirely.

      The main problem with a P2P internet would be bandwidth, at least at this point. There just aren't the resources available - hardware or software - for people to be running /. out of their mom's basement. Even a good number of small businesses wouldn't be covered by a fairly decent dedicated server, but they can't afford to set up a cluster to run things like a hosting company can, let alone hire someone to set the thing up (or b

      • The CPU power isn't the problem. The pipes are the problem. You can't transfer information magically, it has to travel through copper, fiber, or electromagnetic spectrum. Copper and fiber are expensive, for obvious reasons. Radio spectrum is also scarce and expensive, not to mention severely limited. As you can see, the capital investment required to run thousands of miles of fiber is ridiculously large. The centralized backbone system exists simply because it is the cheapest option -- it's easier to
  • by georgewilliamherbert ( 211790 ) on Friday February 17, 2006 @09:46PM (#14746901)
    If you need something like a terabit of bandwidth between the US east and west coasts, consider how many peer to peer link chains across the country will be saturated carrying it.

    One of the major problems right now in the commercial ISP backbone environment is what happens if there's an outage; what's called route flapping, where routes disappear and reappear and all the affected routers have to recalculate how to get to various endpoints, can already saturate the CPUs of big, industrial-grade, room-full-of-racks-size backbone routers. Going to a more diffuse network at high bandwidth requirements makes this exponentially worse.

    P2P across a city? Not ridiculous.

    P2P across the world? Baaad idea.
    • P2P between friends and acquaintances? Rockin'.
    • Perhaps this is bogus, but I was reading a bit of history on the internet, and some people in the 60's or 70's objected to the idea as they understood it. They were thinking of a truly distributed architecture like a physical version of what P2P applications simulate, and according to their estimates, there was nowhere near enough copper in the world to build it. Every single user would need lines connecting to multiple other users and you would potentially have to connect through thousands of nodes (depend
  • Circa 1982 (Score:5, Interesting)

    by sphealey ( 2855 ) on Friday February 17, 2006 @09:46PM (#14746905)
    > Is it possible to create an internet that relies
    > instead on peer-to-peer connectivity?

    You have just described the net (later the Net, still later the Internet) circa 1982. You can search Usenet to read about the excitement level when USR 2400 baud modems were released: a doubling of connection speed for transmitting netnews!

    Of course, you can also read about what happened when news (alone) was distributed on a meshed basis.

    sPh
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday February 17, 2006 @09:48PM (#14746910)
    Is it possible to create an internet that relies instead on peer-to-peer connectivity?
    From a hardware/connection standpoint, every single user would have to have a router that could connect, somehow, to every other user/router.

    That is the "backbone" and where the "bottleneck" is.
  • Freenet: http://freenet.sourceforge.net/index.php?page=faq# what [sourceforge.net]

    The more people use it, the more helpful it could be.
    • Re:Get on Freenet ? (Score:3, Interesting)

      by blue_adept ( 40915 )
      freenet exemplifies what a peer-to-peer internet would be like: a disaster. It's slow, it's cumbersome, and more to the point, it fails to solve a problem that doesn't really exist in the first place. Nobody cares about anonymity at the EXPENSE of speed and convenience, except child pornographers, law breakers, and the paranoid. That's why networks like freenet and ZeroKnowledge ultimately fail.

      That's not to say freenet isn't an interesting experiment. That's not to say anonymity isn't desirable. but please
  • by Derling Whirvish ( 636322 ) on Friday February 17, 2006 @09:53PM (#14746932) Journal
    Imagine, for instance, if Senator McCarthy had been able to steam open every letter in the United States.

    Before and during WWII all mail crossing an international border in or out of the US was steamed open and read. This included all mail, all packages, all telegrams, and all telephone calls. In addition to all mail being steamed open and read, it was censored [lexisnexis.com] if the Army deemed it to be necessary to support the goals of the Army. Letters would arrive with portions cut out by scissors. They also censored all international media -- radio, newspapers, and magazines both incoming and outgoing.

    It's quite easy to imagine as it's already been done.

    • by Anonymous Coward
      "this included all mail, all packages, all telegrams, and all telephone calls."

      The capacity to read everything did not exist.
      This was during all-out war, not some informal war with no timetable.
      This data was not kept indefinitely.

      Lastly the computing power did not exist for a politician to do an SQL query on your life history to determine if you are "desirable".

      Dangerous and misleading analogy.
  • by blofeld42 ( 854237 ) on Friday February 17, 2006 @09:54PM (#14746935)
    Encrypt your email traffic, so that even if it is intercepted it can't be read.

    The government can still do some traffic analysis (they sniff headers rather than read the contents of the messages) and they can learn a lot from that, but such is life.
    • As a related matter, I've found myself wondering why encrypted email has not become far more popular - or encrypted IM for that matter. I downloaded and installed PGPMail myself a few years back, but could never get any of my friends to install it as well. This strikes me as strange considering that I know that were I given a choice between an email client with encryption and without, I would choose the former. I assume most people would. So why hasn't this been offered as an automatic part of Outlook o
      • As I've said before... some people don't have anything to hide....

        If people want to read all the little love letters I send my wife all day... or the email to my Dad about the cool car I saw on the way in to school this morning.... then go right ahead...

        What I'm wondering is why people feel the need to hide their e-mail activities. The only situations I can think of are when you need to send sensitive information quickly (the secretary for my advisor asked for my Full Name, SSN, Address and Telephone number
      • by penguinland ( 632330 ) on Saturday February 18, 2006 @04:18AM (#14748272)
        So why hasn't [PGP] been offered as an automatic part of [email]?

        Oddly enough, I'd say that a significant part of it is the chicken-and-egg problem: it's only really useful for cryptography if a lot of people have PGP (note that signing your emails using PGP shows that they're really from you, but does not actually encrypt them; for that, you need to encrypt using the public key of the recipient, and this would require most recipients to have public keys in the first place). For Joe User, who hasn't heard of an IP address let alone public key encryption, you'd need some way to automatically set up PGP for him, since he certainly can't do it. And there's no economic motivation for companies to create automatic PGP stuff, since it's not really useful until more people adopt it (as I said earlier), though this is precisely why more people don't adopt it.

        On a related note, if you have a PGP key and then buy a new computer, you have to either know what you're doing in order to get your private key onto the new computer, which Joe User also can't do (And if there is a way to automate this process, anyone could write a virus that would use the automated version to steal your private key), or remove your original key and create a new one, which would confuse Joe's friends when their PGP systems suddenly don't trust Joe's email any more.

        Sadly, the only way that PGP will become popular is to educate the general populace so that they know as much about computers as we, the computer nerds, do. And although I don't want to admit it, this is never going to happen.
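
        To make the "you need the recipient's public key" point concrete, here's a bare-bones sketch using the Python cryptography package rather than PGP itself (the message and key sizes are just examples):

          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import rsa, padding

          # Joe's friend generates a key pair once; only the public half is published.
          private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
          public_key = private_key.public_key()

          oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                              algorithm=hashes.SHA256(), label=None)

          # Anyone holding the public key can encrypt a message to the friend...
          ciphertext = public_key.encrypt(b"only you can read this", oaep)

          # ...but only the holder of the private key can decrypt it.
          print(private_key.decrypt(ciphertext, oaep))

        No published public key, no way to encrypt to that person; that's the chicken-and-egg problem in miniature.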

    • Why not have a p2p-style network with packet re-routing, so that person A attempting to access site E first sends it to random person C, who decrypts the outer layer and a random amount of time later sends it to person D, who decrypts their outer layer and sends it to site E, who takes the request, and returns along a second obfuscated return path. Nobody except the requesting computer would know the entire path, and while the ping time due to the random delay would be terrible, it would be utterly untraca
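
      That's essentially onion routing (it's what Tor does). A toy version of the layering in Python, using the cryptography package's Fernet for each layer; the relay names are made up, and real onion routing also tucks the next-hop address inside each layer:

        from cryptography.fernet import Fernet

        # Each relay on the chosen path has its own key; only the sender knows them all.
        relays = ["C", "D", "exit"]
        keys = {name: Fernet.generate_key() for name in relays}

        # The sender wraps the request inside out: the exit's layer goes on first.
        packet = b"GET http://site-e.example/"
        for name in reversed(relays):
            packet = Fernet(keys[name]).encrypt(packet)

        # Each relay peels exactly one layer and only ever sees an opaque blob.
        for name in relays:
            packet = Fernet(keys[name]).decrypt(packet)
        print(packet)   # only the exit relay finally sees the plain request
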
  • Forgive me if I'm being naive, but wouldn't such a free, open, decentralized system be very different from "democratized"? It would be more anarchical than anything, as it would be free of government control.
  • Preface: Not a networking expert or a graph theory researcher:

    I read "Nexus" not too long ago. It talks about the study of networks and its results in various different fields. It wasn't as deep or detailed as I had hope but it mentioned a study where it was found that the Internet is really not a decentralized network but a hub and spoke network. It can survive numerous attacks in general but if even a small number of central hubs are taken down, connectivity suffers. Obviously that means it's even e

    • Preface: Not really a networking expert or a graph theory researcher, but I'm doing research on peer-to-peer/swarm intelligent web search, which is somewhat related.

      The type of network you're describing is known as a small-world network, and it has a lot of cool properties. The US social network is widely regarded as a small-world network. A Harvard professor named Stanley Milgram demonstrated this property rather dramatically in 1967 when he mailed 160 letters to randomly chosen people in Omaha, Nebraska
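
      You can see the small-world effect with a few lines of Python: start from a ring where everyone only knows their immediate neighbours, then add a handful of random long-range links and watch the average hop count collapse (a toy model, not a real topology):

        import random
        from collections import deque

        def avg_hops(n, shortcuts, seed=42):
            rng = random.Random(seed)
            links = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}  # ring lattice
            for _ in range(shortcuts):                                 # random long links
                a, b = rng.sample(range(n), 2)
                links[a].add(b)
                links[b].add(a)
            # Average shortest-path length from node 0, via breadth-first search.
            dist, q = {0: 0}, deque([0])
            while q:
                u = q.popleft()
                for v in links[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return sum(dist.values()) / (n - 1)

        for s in (0, 10, 50, 200):
            print(s, "shortcuts:", round(avg_hops(1000, s), 1))

      A few dozen "long" links are enough to turn hundreds of hops into a handful, which is the property the six-degrees experiment was poking at.
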
      • If a network could be constructed to take advantage of this phenomenon, it could have some pretty cool performance.

        This network already exists. It's called 'the Internet' or 'the phone system'. It takes advantage of the fact that people generally live in buildings, which are generally located in cities right next to each other. This permits building a central office, and running a cable from the central office to each building or group of buildings, from where it is distributed to individual subscribers.
        • I think the problem is more that you have lots of people in Seattle who want things in New York. Ideally you could move people closer to the information they want. I guess that's a little different than what I was talking about there though. Say you have a 256k DSL connection, but a 100Mbit connection to your neighbor. Obviously you want to get as much as you can from your neighbor rather than going across your DSL line. The small world thing comes into play because if you're highly clustered with peop
          • You are making some bad assumptions. In general, there is no correlation between what your neighbor wants to download and what you want to download. Your neighbor might be looking at gay porn, using MSN search, and reading Fox News. You might want to read CNN, use BitTorrent, and search using Google. If you don't believe me, share out your DSL line to your neighbors, use a transparent proxy, and verify that it doesn't do jack shit. Caching only works when you do it for a large group of users with simil
            • I didn't say there was such a correlation, just that it would be nice if there were. Still, if you get enough people together (though hopefully still small) you will probably start to find more of a correlation. Lots of people are going to be interested in their local weather, for example. While your neighbor might go to MSN and you go to CNN instead, if you add enough people then others will also start to look at the same site. While it may not be possible to hook everyone on your street up to the same
        • no, it doesn't. What you are describing is a centralised tree network, not a small-world network. A network such as the one described in the previous post would not have a 'central office' from which connections are distributed. It would instead have mostly local connections between neighbours, which is *completely different* to the current internet or phone system.
          • It's exactly the same, just with a bigger number of central offices. As I said, these days the CO might just be on a telephone pole outside your house. This would be equivalent to your cluster of neighbors. Of course, the clusters would have to be interconnected somehow, which would then make the system similar to the existing phone network.
  • Hmm, well... (Score:3, Interesting)

    by slavemowgli ( 585321 ) on Friday February 17, 2006 @10:07PM (#14746995) Homepage
    A backbone-less Internet... is it just me, or is that exactly the way the Internet was originally envisioned and built? The reasons we moved away from that are purely economic, and until there's an economic incentive to move back to a backbone-less distributed system (and, for that matter, an economic incentive to actually make it work at least as well in terms of speed and reliability as the system we currently have), things will stay the way they are now.

    The fact that the centralised system of today lends itself to easy censoring etc. is unfortunate, but if you really want it to change, you have to understand why it came to be.
  • Judging from current events, all you have to do is import the internet to China and it becomes spineless... or at least everyone involved in doing business on it does...
  • Unless you're going to hand deliver your data to the recipient, you will always have to trust someone with it. In a P2P system, the size of the entity with access to your data is smaller, but the number of entities with access to your data is bigger. I contend that it is easier to control and regulate a small number of large entities than it is to regulate a huge number of small entities.

    To me, it would be a better use of resources to put regulations into place (and enforce them!)
  • Pure Wireless Mesh (Score:5, Interesting)

    by Agar ( 105254 ) on Friday February 17, 2006 @10:34PM (#14747115)
    Seems to me that the biggest risk to individual freedoms is transport over centrally/corporate owned lines.

    Why not leverage nearly ubiquitous wireless access points (and possibly ad hoc wireless card settings) to create a completely wireless mesh that doesn't even connect to the Internet at all? This would parallel the development of the original 'net, where it starts as a bunch of island networks that get interconnected over time.

    Think about it: no phone lines, no centrality, no existing infrastructure. Nothing to "tap", very hard to track. Even better, with no legacy infrastructure it could be built from scratch. IPv6, anonymizing, encrypted.

    Imagine a set of open source tools that take the best features of mesh networks and peer-to-peer, running exclusively over home wireless technology. One package could include a complete set of apps to get "on the mesh", including the routing intelligence, a "secure sandbox" for shared files/web pages, a browser, and caching. Run the package, and maybe at first you only connect to another geeky neighbor, but you don't know which one. Check out his home-brew page in the browser, poke around the files he put up. As more people come online, what you can access increases, sometimes dramatically as networks are interconnected.

    (Maybe initially the system could tunnel through the internet to connect disparate networks and gain critical mass. At some point this will always be necessary to get across oceans or challenging geographies.)

    Chicken and egg problem? You bet. Realistically, the three p's would drive it, as they do many new technologies: porn, piracy and privacy. But the opportunity is there for so much more.

    Speed would suck, sure, due to routing inefficiencies. But consider that the average bandwidth would be at 802.11 speeds: minimum 10Mbps, more likely 54Mbps. If I get 3Mbps on my cable line I'm thrilled. Latency might be high, but no one would be running Quake 3 on this. And wireless tech is only getting faster, while mesh routing and caching technologies are only getting smarter.

    I really think that if a truly independent, hacker-run next-gen internet will ever exist, it's going to be over home wireless. The entrenched media companies are too aware of the money making opportunities to let the "free ride" on their infrastructure continue forever (even though it's not a free ride, but don't tell them that). Unregulated spectrum is about the only Free space left.

    Use it to create a network that's truly decentralized, owned by the people, and anonymous from the ground up and you can change the world.
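
    For flavour, here's roughly the kind of routing intelligence such a package would need, sketched in Python. This is just flooded route discovery (the rough idea behind protocols like AODV), over a made-up neighbourhood graph:

      from collections import deque

      # Who can hear whom over the air (invented for the example).
      radio_links = {
          "alice": ["bob", "carol"],
          "bob":   ["alice", "dave"],
          "carol": ["alice", "dave", "erin"],
          "dave":  ["bob", "carol", "frank"],
          "erin":  ["carol", "frank"],
          "frank": ["dave", "erin"],
      }

      def discover_route(src, dst):
          """Flood a route request hop by hop; the first copy to arrive wins."""
          q, seen = deque([[src]]), {src}
          while q:
              path = q.popleft()
              if path[-1] == dst:
                  return path
              for neighbour in radio_links[path[-1]]:
                  if neighbour not in seen:
                      seen.add(neighbour)
                      q.append(path + [neighbour])
          return None

      print(discover_route("alice", "frank"))   # e.g. ['alice', 'bob', 'dave', 'frank']

    Every node only ever talks to whoever is in radio range, yet a route across the whole mesh falls out; the caching, anonymizing and encryption would sit on top of that.
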
    • Shortly before DSL became available, I heard of some guys wanting to create a wireless mesh. You know, pringles can antennas and all that. And then DSL came and it wasn't worth the effort, especially when you had to get enough nerdy guys living close enough to each other in the same neighborhood for this to work. There's WAPs all up and down my street, but of course none of them are set up for mesh networking, they're just to avoid wiring up a house.

      The only other realistic alternative I can think of wh

    • Knock, knock.
      Who's there?
      Carnivore.
      Carni who?
      Carnivore.
      I don't know any Carnivore.
      That's OK, I've been operational for a while now and I know all about you. Chomp.

      Sorry, big publishers and the federal government will make what you want impossible. That's why your 802.11 power is so low you can't see further than your neighbor's house, if you can see that far. It's why The FCC says two "broadband" providers in any town is enough competition for anyone and the public servitude is off limits. The th

      • "Sorry, big publishers and the federal government will make what you want impossible. That's why your 802.11 power is so low you can't see further than your neighbor's house,"

        No, it's so you don't jam everyone else's signals
    • Latency "might" be high? High packet loss, routing inefficiencies, and terrible latency would combine to make decent transfer rates impossible to achieve over medium distances. Forget about gaming, streaming video, SSH, remote X11/RDP, or VoIP. Surfing today's web would be intolerable. A wireless mesh would only be usable for low-bandwidth, non-latency-sensitive applications; email and usenet would be about it.

      If you won't take my word for it, how about the words of an MIT mesh network study [mit.edu]:

      [...] W

  • I'm going to assume you used abel as a tongue in cheek reference to cain and abel [www.oxid.it], right? RIGHT??

  • Oh, how I pity them (Score:5, Informative)

    by MarkusQ ( 450076 ) on Friday February 17, 2006 @10:36PM (#14747129) Journal

    Imagine, for instance, if Senator McCarthy had been able to steam open every letter in the United States. In the age of ubiquitous e-mail and filtering software, budding McCarthys are abel and willing to do so.

    As an administrator of a few reasonably small domains, my first thought was oh, the fools!

    You don't want to read every piece of e-mail that comes into even one site, let alone the whole internet. You don't even want to try to write programs to do it.

    /dev/null, I tell you, /dev/null! The only sane thing to do with 99% of the e-mail is route it to /dev/null in the most efficient way possible. All else is madness!

    You would be better off trying to understand the inner thoughts of a lava lamp than trying to figure out why anyone thinks anyone would buy "farmasuiticals (the 1 U've been lOOking 4!)", let alone ingest them! Or invest in "s+0cks" that are about to "+ake 0ff" based on the say-so of a stranger named "Brandice Hornyslut." Or the pointlessly malformed sludge, the server errors from misconfigured machines...if anyone really wanted to hide something they'd be about as well off e-mailing it as flushing it down the toilet--and trying to find it would be about as pleasant.

    --MarkusQ

  • WiFi will be the technology that allows the REST OF US to create networks to rival the internet. The frequency is pretty much unregulated (and, because of microwave ovens, unregulatable). Once every (or every other) house has a WiFi router in it, a suburb has the infrastructure in place to be its own part of a backboneless internet. Connections from it to elsewhere can depend on a few hackers in the community with the means to talk to hackers in the neighboring ones using esoteric technology (including li
  • by John Jorsett ( 171560 ) on Friday February 17, 2006 @10:47PM (#14747191)
    You've described the original implementation of USENET. Participating machines would dial each other up and exchange current traffic. A message injected at one machine would eventually end up in the rec.practicaljokes.hotfoot newsgroup on every participating machine within a day or two, just by this simple machine-to-machine exchange.
  • by Quixote ( 154172 ) * on Friday February 17, 2006 @11:06PM (#14747267) Homepage Journal
    It was called UUCP. :-)
  • Correct me if I am missing something here, but isn't that how the internet already works? There is no guarantee that any two packets will take the same path to get to their destination. Furthermore, the idea of "storing packets on a hard drive" is nothing short of absurd. There are no hard drives large enough or fast enough to record every packet a router receives, much less reassemble them in the proper order.

    The infamous Carnivore was one thing, relying on a predictable user-level protocol (SMTP). But th
    • That whole "the internet routes around censorship" thing borders on mythology. The truth is more limited. While the protocols allow for multiple paths between two end points, as a practical matter there are very few paths between points - and when those points are countries, the vast majority of traffic passes through just one or two choke points.
  • by Vexler ( 127353 ) on Friday February 17, 2006 @11:15PM (#14747289) Journal
    Is /. really running out of news to cover that we have to resort to this kind of "I am not a specialist nor do I really care to do some basic background reading, but here goes" talking points? I see this kind of pseudo-deep-intellectual topic a lot on sci.crypt, where someone will claim to have found a brand-new algorithm, only to have one or several of the following happen:

    1) The algorithm gets shot down in about fifteen minutes by several people who really know their stuff,
    2) Someone posts, "Oh, this is exactly the same thing as that zippity-zing-zang algorithm that Chuck Dumbo 'invented' some years back. It's completely bogus."
    3) Someone posts a follow-up question, and based on the reply given by the OP you suddenly realize that he has no clue whatsoever about crypto design.

    It really is not that hard to research some basic, layer-1 information about networking and deduce some fundamental operating principles (as someone already pointed out, one of which is physical cabling). Cisco has plenty of introductory material that even my wife the musician can understand. Do your homework first, and then come back.
  • It's a pity that the VENONA project [wikipedia.org] was only declassified almost 40 years after McCarthy's death. It proves that he was right all along.
  • by xenocide2 ( 231786 ) on Friday February 17, 2006 @11:23PM (#14747314) Homepage
    Look at GNUtella. Years ago, a problem was noticed: some peers are far more capable than others. Search traffic became heavy enough that it was saturating dialup users. This wouldn't have been so bad if the protocol didn't also ask for pseudo anonymity; this led to the networks occasionally dividing in two as a set of dialup users flooded off the net. The solution is to organize the network so that high capacity peers are on the inside, and dialup or otherwise impaired users become "leaves" of sorts. Gnutella2 uses this approach, and this has been added back to Gnutella in some fashions.

    The end result of this unequal distribution of resources is that centralization is the most efficient use of them. For the vast majority of Internet users, efficiency and performance are paramount. I hear far more complaint that Bittorrent is slow than that it's centralized or not anonymous. Even if you're willing to discount performance, the price of implementing a peering based system is greater, since it costs to maintain each link. People have tried using wifi to create mesh networks that operate sans "backbone" but this doesn't scale well either. Nor is it anonymous or difficult to tap.
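
    The ultrapeer idea in miniature (made-up names and a tiny core, just to show why it helps): leaves register what they share with one ultrapeer, and searches only flood across the high-capacity core.

      import itertools

      class Ultrapeer:
          def __init__(self, name):
              self.name = name
              self.neighbours = []     # other ultrapeers
              self.index = {}          # filename -> leaf that has it

          def register(self, leaf, files):
              for f in files:
                  self.index[f] = leaf

          def search(self, filename, seen=None):
              seen = seen if seen is not None else set()
              seen.add(self.name)
              if filename in self.index:
                  return self.index[filename]
              # Only the core carries the query flood; dial-up leaves
              # never have to relay other people's searches.
              for up in self.neighbours:
                  if up.name not in seen:
                      hit = up.search(filename, seen)
                      if hit:
                          return hit
              return None

      ups = [Ultrapeer("up%d" % i) for i in range(4)]
      for a, b in itertools.combinations(ups, 2):
          a.neighbours.append(b)
          b.neighbours.append(a)

      ups[0].register("leaf-alice", ["song.ogg"])
      ups[3].register("leaf-bob", ["talk.mp3"])
      print(ups[3].search("song.ogg"))   # found via the core: leaf-alice
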
  • mccarthy, while his methods were excessive, was after communists in the state dept and army. and you know what, there were plenty. we have the venona project [nsa.gov] as proof that we were infiltrated at the highest levels. and before you defend political freedom, these were people working for the enemy. you know, the one with 10,000 nukes pointed at us, the same Stalin that had millions of Ukrainians starved to death, that killed many millions more in his purges, sent millions to the gulags, oh wait, duranty was
    • Japan was attempting to destroy the US as well. No doubt there were Japanese infiltrators amongst the citizens of the US... some must have been in positions of power in their communities.

      But that doesn't justify taking the lives and families of Japanese citizens of the US and throwing them in concentration camps. That does not justify locking my grandparents up like criminals for years, kept away from their kids.

      McCarthy didn't just go after traitors. He went after communists, people with alternativ
  • The companies that are talking about tiered internet service are mainly run by pointy-haired people who barely understand this whole internet thing and want to wish it away. Most people, in particular in the most profitable markets, have a choice of internet service providers. The ISP who makes a policy change that makes Yahoo!, Google, or eBay slow will lose customers. Same problem if a particular backbone provider does that to an ISP. The first business to try this is going to learn how easy it is to l
  • Is it possible to create an internet that relies instead on peer-to-peer connectivity?

    Depends -- can you afford to fight the RIAA lawsuits?
  • Why Not. (Score:4, Informative)

    by darqchild ( 570580 ) on Friday February 17, 2006 @11:32PM (#14747351) Homepage
    -The complexity of the routing tables. Although people complain that we are running out of IP address space, this isn't exactly true. The problem is in badly fragmented IP address space. That is to say that the route tables of our core routers that join the backbone providers have grown to be huge. There are a whole pile of class C networks (254 hosts each) that the IANA is trying to claw back so they can be consolidated into larger /16 and /8 CIDR networks.

    -BGP AS space. Due to what I can only assume was poor foresight, the AS# used to identify BGP "Autonomous Systems" (corporations and other entities that use BGP to exchange routing information with the backbone providers) is a 16-bit value. So there are only ~65K numbers that can actually be given out.

    -Complexity of configuring these routing protocols. It's rocket science, plain and simple. A misconfigured BGP router will not work, and may even disrupt traffic over the rest of the internet. If anyone were allowed to broadcast any BGP route without the consent of all their peers and a pile of red tape, I could advertise a route to 24.0.0.0 and half the internet would disappear for a good number of cable-broadband users (a toy demonstration of this follows below).

    -Required bandwidth, and latency problems. The current top-level backbone providers have many millions of dollars worth of equipment and high-speed point-to-point connections to keep the number of hops for each packet to a minimum. They have the capacity to push more traffic down their WAN links every second than you'll use in a week. This is a vast improvement over the pile of 56, 1024, and 3068 kilobit connections that would be meshed together in a distributed model.
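
    That 24.0.0.0 scenario is easy to see with Python's standard ipaddress module: forwarding picks the most specific matching prefix, so a bogus but more specific announcement silently wins (the table contents here are invented):

      import ipaddress

      # A toy forwarding table: prefix -> where the route points.
      table = {
          ipaddress.ip_network("0.0.0.0/0"):  "default via upstream",
          ipaddress.ip_network("24.0.0.0/8"): "legitimate cable ISP",
      }

      def lookup(addr):
          ip = ipaddress.ip_address(addr)
          matches = [net for net in table if ip in net]
          return table[max(matches, key=lambda net: net.prefixlen)]  # longest prefix wins

      print(lookup("24.1.2.3"))    # legitimate cable ISP

      # An unfiltered peer leaks a more specific route...
      table[ipaddress.ip_network("24.0.0.0/9")] = "someone's misconfigured T1 router"

      print(lookup("24.1.2.3"))    # traffic now follows the bogus announcement

    Which is exactly why nobody sane accepts unfiltered BGP announcements from random peers.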

  • by puzzled ( 12525 ) on Saturday February 18, 2006 @12:27AM (#14747515) Journal
    Wow, it's as if the drooling wireless fanboys suddenly discovered life beyond an IP address assigned via DHCP. Please pay attention, children ...

        The internet is composed of 'autonomous systems' - each autonomous system or 'AS' has one or more netblocks of a /24 or larger in size. Each AS connects to at least one other AS, makes at least one netblock available via BGP, and thus the internet is stitched together. Find this shocking and incomprehensible? Try this:

    telnet route-views.oregon-ix.net

      follow your nose through the login procedure, then type 'show ip bgp [your IP address]' and see what it says. Oh, if your IP address is 192.168.x.x, 10.x.x.x, or 172.16-31.x.x and you put that in please step away from the computer now and ask someone with a clue for help.

        I mean really - *this* is a frontpage story? I swear I'm going to auction my low Slashdot ID number on Ebay one of these days and alias this site to memepool in my hosts file.

    • OK, take these steps (Score:5, Informative)

      by puzzled ( 12525 ) on Saturday February 18, 2006 @12:43AM (#14747560) Journal

        Maybe I'm getting grouchy in my old age - see parent for details. This is how real men connect to the internet:

        There are three ISPs in the world - Sprint, UUNet, and [other]. Get on the phone and order a T1 from one of the two real ones. They'll get your payment information and then someone will ask how many IP addresses you need. Tell 'em you want a /24 (256 addresses). They'll ask why, you tell 'em you're going to multihome.

          Go to ARIN.net's site. Figure out how to get yourself an autonomous system number. Call up the other ISP you didn't originally order from and get a circuit from them. No IP addresses required, we'll just use the block from ISP 1.

        Assuming you're using a Cisco box do the following:

          router bgp [your AS number]
              network [your shiny new /24]
              ! UUNet
              neighbor yadda yadda remote-as 701
              ! Sprint
              neighbor yadda yadda remote-as 1239

          And *poof*! Your little /24 is now globally visible via two different ISPs. Yank the T1 to one of them, life is funny for a bit, then you're running like nothing ever happened.

          Take this little story and abstract it a bit - there is no 'backbone' to be found on the internet, just a web of large carriers with all sorts of peering agreements with each other. This won't happen at the home DSL router monkey level, but the diverse internet the asker speculated about already exists and happens to be pretty resistant to fools trying to monitor it.

  • The original internet was designed this way. Everybody knew the routing table to go anywhere. The fact is, there's so much data involved in managing that connectivity that it hinders actual performance. Hence, they built a hierarchical routing system using subnets. This allowed routing to be handled at a local level for local traffic.

    However, this hierarchy does have a top, obviously... and that's your backbone. So the quick answer to all your questions is "they tried it already... it doesn't work that
  • by birge ( 866103 ) on Saturday February 18, 2006 @12:49AM (#14747579) Homepage
    Grassroots nationwide network made up of people connecting to nearby people via modem. Granted it took a day or two for mail to make it across the country, but it was pretty cool given that it was truly decentralized and done entirely by hobbyists.

    Anyway, I think it's a moot point. Who cares about the topology of the internet when you can just encrypt everything? Backbones are great. Best thing is to use the fastest and most robust network topology, and let security be handled at the application level.

  • Comment removed based on user account deletion
  • As other posters have said, the Internet is already (in the industrialized countries) a well-connected mesh of peer networks. It's true that traffic flows through the tier one providers, but that's only because they provide the best route to wherever your data is trying to get to. If a network provider stops routing traffic or starts censoring or port-blocking certain applications, then it's your job as Joe Consumer to pressure your ISP to not use that provider's backbone.

    The real threat to the Internet

  • ...is that the poster didn't just talk about privacy, but also about media control. While encryption might handle the privacy angle it does jack squat for getting an unpopular message out to everyone over channels controlled by people who think the message is detrimental to them. Especially if your web host or ISP is told that your message is "illegal" in the next few years. I live in America where it's getting harder and harder to get the truth out to people via mainstream channels. And now we've got p
