Features of a post-HTTP Internet?

Ars-Fartsica asks: "We've been living with HTTP/HTML ("the web") for quite a while now, long enough to understand its limits for content distribution, data indexing, and link integrity. Automatic indexing, statefulness, whole-network views (flyovers), smart caching (P2P), rich metadata (XML), built-in encryption, etc. are all fresh new directions that could yield incredible experiences. Any ideas on how you would develop a post-HTTP/HTML internet?"
This discussion has been archived. No new comments can be posted.

  • Why? (Score:5, Insightful)

    by MaxwellStreet ( 148915 ) on Thursday July 29, 2004 @01:52PM (#9834002)
    Given that all the technologies you mention work just fine across the internet as we know it....

    Why think about getting rid of html/http?

    The pure simplicity of developing and publishing content is what made the WWW take off the way that it did. Anyone could (and generally did!) build a site. It was an information revolution.

    The other technologies will handle the more demanding apps out there. But HTML/HTTP is why the web (and, in a larger sense, the internet) is what it is today.
  • Rewriting? (Score:5, Insightful)

    by Ianoo ( 711633 ) on Thursday July 29, 2004 @02:06PM (#9834222) Journal
    Why is it that developers feel the need to periodically scrap everything they've been working on, then reimplement it, usually in a more half-assed way than the original? (I'm talking to you, Apache programmers! ;)

    But seriously, where's the need to dump HTTP? It's not exactly a complicated protocol, and it can be adapted to do many different things. Pretty much any protocol can be tunneled over HTTP, even those you'd normally consider to be connection-oriented socket protocols.
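
    For concreteness, here is a minimal sketch (not production code) of one standard way this is done: the HTTP CONNECT method, which asks a proxy to open an opaque tunnel that can then carry any socket protocol. The host names are hypothetical.

```typescript
// Minimal sketch of tunneling a raw TCP protocol through an HTTP proxy
// via the standard CONNECT method. proxy.example.com and
// irc.example.net are hypothetical hosts.
import * as net from "node:net";

const socket = net.connect(8080, "proxy.example.com", () => {
  // Ask the proxy to open an opaque tunnel to the target host/port.
  socket.write(
    "CONNECT irc.example.net:6667 HTTP/1.1\r\n" +
    "Host: irc.example.net:6667\r\n\r\n"
  );
});

socket.once("data", (chunk) => {
  // A "200 Connection Established" status means the tunnel is up; from
  // here on, bytes are relayed verbatim and we can speak the inner protocol.
  if (/^HTTP\/1\.[01] 200/.test(chunk.toString())) {
    socket.write("NICK tunneled-user\r\n"); // raw IRC over the tunnel
  }
});
```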

    As for HTML, again - why the need? By using object tags and plug-ins, the browser is almost infinitely extensible. Flash and Java bring more interactive content, streaming brings sound and video, PDF brings exact display of a document to any platform, and people are using all sorts of different XML-type markups every day now, such as RSS, XML-RPC, SOAP, and so on to do all kinds of interesting things like Web Services and RPC.

    Microsoft and the open source community are both working on markup-like things that will enable applications to operate over the web (all via HTTP). XAML and XUL's descendants might well have a big future, especially if the way documents should be displayed is more rigorously specified than it is for HTML.
  • by self assembled struc ( 62483 ) on Thursday July 29, 2004 @02:08PM (#9834256) Homepage
    The fact that HTTP is stateless is one of the reasons that Apache and its kin scale so effectively. The instant they're done dealing with a request, they can do something else without thinking about the consequences. Why do I need state on my personal home site? I don't. Let your application logic deal with state. Let the protocol deal with data transmission, period.
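
    A minimal sketch of that division of labor: the protocol below stays stateless, and session state lives entirely in application memory, keyed by a hypothetical "sid" cookie.

```typescript
// Minimal sketch: HTTP itself stays stateless; session state is purely
// application logic, keyed by a hypothetical "sid" cookie.
import * as http from "node:http";
import { randomUUID } from "node:crypto";

const sessions = new Map<string, { hits: number }>();

http.createServer((req, res) => {
  const match = /(?:^|;\s*)sid=([^;]+)/.exec(req.headers.cookie ?? "");
  let id = match?.[1];
  if (!id || !sessions.has(id)) {
    id = randomUUID();
    sessions.set(id, { hits: 0 });
    res.setHeader("Set-Cookie", `sid=${id}`);
  }
  const session = sessions.get(id)!;
  session.hits += 1; // the server forgets the *connection* immediately;
                     // only the application remembers the *user*
  res.end(`Requests this session: ${session.hits}\n`);
}).listen(8000);
```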
  • by OmniVector ( 569062 ) <see my homepage> on Thursday July 29, 2004 @02:09PM (#9834276) Homepage
    If all those things in the title were used to develop a website, I think the things one could accomplish are amazing. As it stands, you can already use XHTML and XMLHttpRequest to build highly dynamic websites. Sometimes I wish so much emphasis weren't put on backwards compatibility on the web. I wish browsers could automatically detect what version of HTML a page requires and generate warnings if your browser is too old to render it properly, with a handy "update here" link.
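
    For reference, the XMLHttpRequest pattern being described looks roughly like this (a sketch; "/api/news" and "news-pane" are hypothetical names):

```typescript
// Minimal sketch of the XMLHttpRequest pattern: fetch a fragment
// asynchronously and patch it into the live page without a full reload.
// "/api/news" and "news-pane" are hypothetical names.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/news");
xhr.onload = () => {
  if (xhr.status === 200) {
    document.getElementById("news-pane")!.innerHTML = xhr.responseText;
  }
};
xhr.send();
```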

    PS: Canvas is a new tag from Apple, used to draw things into an img-like component. Apple is working with Opera and Mozilla to integrate it into their browsers. Hopefully this will go somewhere. I've always wanted something like that to be directly accessible from JavaScript, but have never had the luck. It requires hacks like Java and Flash extensions, which don't communicate well with the underlying JavaScript without some kludge like LiveConnect.
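
    Script-driven drawing on that element looks like this, as a sketch (assumes a hypothetical <canvas id="scratch"> already in the page):

```typescript
// Minimal sketch of drawing on a <canvas> element directly from script,
// no plug-in required. Assumes <canvas id="scratch" width="200" height="100">.
const canvas = document.getElementById("scratch") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

ctx.fillStyle = "navy";
ctx.fillRect(10, 10, 120, 60);   // a filled rectangle

ctx.strokeStyle = "orange";
ctx.beginPath();
ctx.moveTo(10, 90);
ctx.lineTo(190, 90);             // a line along the bottom edge
ctx.stroke();
```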
  • Re:Forget HTTP. (Score:5, Insightful)

    by ADRA ( 37398 ) on Thursday July 29, 2004 @02:15PM (#9834374)
    Protection against spoofing will always come down to two factors:

    1. Verification of Sender - This will never happen unless systems like cacert.org start to take off. Basically, 99% of the internet doesn't give a damn about certificates, and verification limits the ability to remain anonymous. A debate about privacy/spam could go on for years if given the chance.

    2. SPF-like protocols - This is the ability to specify who is and who isn't allowed to send email from a given domain. This entails a few things:
    - a. Every mail sender must be from a domain.
    - b. Every mail sender has to route through an institutional server (the road-warrior problem).
    - c. Every institutional mail server must deny relaying from anyone unauthenticated. (This should be done already.)
    - d. The institution must be regarded positively by the community at large. If it isn't, it's effectively cut off from sending email.
    - e. You have to run DNS servers that you can update.
    - f. You must lock down those DNS servers against attacks. (Have you done this lately?)

    Anyway, both solutions are possible, but neither is ideal for everyone. SPF has a real chance of shutting down spammers, but I imagine the wild-west internet we know is pretty much over.
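
    For concreteness, the first step of an SPF-style check looks roughly like this (a sketch only; real SPF evaluation of include:, ip4:, ~all and the rest is considerably richer):

```typescript
// Minimal sketch of the first step of an SPF-style check: look up the
// sender domain's TXT records and see whether it publishes a policy.
// Real SPF evaluation (include:, ip4:, ~all, ...) is much richer.
import { promises as dns } from "node:dns";

async function publishesSpf(domain: string): Promise<boolean> {
  const records = await dns.resolveTxt(domain); // string[][], one per record
  return records.some((parts) => parts.join("").startsWith("v=spf1"));
}

publishesSpf("example.com").then((found) =>
  console.log(found ? "domain publishes an SPF policy" : "no SPF record")
);
```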
  • REST (Score:2, Insightful)

    by StupidEngineer ( 102134 ) on Thursday July 29, 2004 @02:48PM (#9834878)
    Forget ditching HTTP; it's good even with its quirks. It's easy to use... and it's near perfect for applications designed with the REST philosophy in mind.

    Instead of ditching HTTP, let's ditch SOAP-RPC.
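
    The contrast, sketched (api.example.com and the /orders resource are hypothetical): in the REST style, the URL names the resource and plain HTTP verbs do the work, with no RPC envelope anywhere.

```typescript
// Minimal sketch of the REST style: resources are URLs, the verbs are
// plain HTTP methods, and there is no SOAP envelope in sight.
// api.example.com and /orders/42 are hypothetical.
async function shipOrder(): Promise<void> {
  // Read a resource: GET names the thing; the status code is the result.
  const res = await fetch("https://api.example.com/orders/42");
  const order = await res.json();

  // Modify it: PUT a new representation back to the very same URL.
  await fetch("https://api.example.com/orders/42", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...order, status: "shipped" }),
  });
}
```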
  • Re:Rewriting? (Score:1, Insightful)

    by Anonymous Coward on Thursday July 29, 2004 @02:56PM (#9834981)
    Why is it that developers feel the need to periodically scrap everything they've been working on, then reimplement it [...]?

    Are you a developer? There are lots of reasons, but they are not very good ones. It sounds like it might be discouraging, but it's really quite fun. You know the basic idea of how to do it, because you've done it once already, so you get to think about how to do it better. On a small scale, it is called refactoring. On a large scale it is probably a waste of time. But a lot of people are tempted to do it anyway.

    Programmers are often idealists. They implement something and then feel bad about it, so they later go ahead and reimplement it because it is "ugly" or "crufty". Even if its interface seems to work well, the internal implementation probably feels to us like it is crufty and liable to break down at any moment. So we overengineer it, layer after layer, to ensure that no ugly, interface-defacing code ever needs to be introduced anywhere. It's kind of compulsive for me, although at least I can see myself doing it and decide whether it is really necessary.

    Now, I don't know the HTTP protocol very well myself, so I don't feel compulsive about reimplementing it. It feels pretty clean from what I've seen.

    As for HTML, on the other hand, it would be beautiful to start with a clean slate on that one. Force XHTML+CSS, force browser rendering standards, force everybody to respect MIME types. If you do that, you feel like you want to change HTTP too, just to force everybody to start from scratch so you don't have any partial compatibilities.

  • Re:Rewriting? (Score:3, Insightful)

    by miyako ( 632510 ) <miyako AT gmail DOT com> on Thursday July 29, 2004 @03:01PM (#9835054) Homepage Journal
    Why is it that developers feel the need to periodically scrap everything they've been working on
    The reason is that oftentimes the original design of something does not facilitate the structured addition of newer features, mainly because when $foo is first developed, nobody has any idea that people will want to be doing $bar 10 years down the road. Eventually someone finds a way to allow $bar by tacking a few things onto $foo with superglue and duct tape. At first this is no big deal: $bar is just a small little thing, and it doesn't invalidate the design of $foo. Over time, more people use duct tape and superglue to add things to $foo, and more things to $bar, until what you're left with is a big ball of tape and glue supported precariously by popsicle sticks and rubber bands. In this case it can be better to redesign $foo to provide a better structure for things like $bar to be added without so much cruft. Other times it's decided that all the things like $bar should just be given a separate program/protocol/whatever, and $foo should go back to what it was originally.
    Let's look at all this in the case of HTTP. Things like Java applets, Flash, and even JavaScript are all hacks to get around the limitations of HTTP. Of course, I don't think we are nearing critical mass for things being added onto HTTP, but the problem is certainly coming along. I think the latter of the above solutions is preferable in this case: HTTP is a good protocol and still serves a useful purpose; what we need is a second protocol for dynamic content.
  • Oh, yeah (Score:5, Insightful)

    by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @04:05PM (#9835995) Journal
    Let's see:

    * The primary addressing mechanism would be content-based addressing (like SHA1 hashes of the content being addressed). We have problems with giving reliable references for things like bibliographies. We are gradually moving in this direction: P2P networks are now largely content-addressed, and bitzi.com provides one of the early centralized databases for content-based addressing.
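
    A minimal sketch of the idea: the address of a document is simply a hash of its bytes, so the same content has the same name everywhere, and a fetched copy can be verified against its own address.

```typescript
// Minimal sketch of content-based addressing: a document's address is a
// hash of its bytes, so identical content gets an identical name and a
// downloaded copy can be checked against the address it was fetched by.
import { createHash } from "node:crypto";

function contentAddress(data: string | Buffer): string {
  return "sha1:" + createHash("sha1").update(data).digest("hex");
}

const addr = contentAddress("Hello, post-HTTP world");
console.log(addr); // identical for this byte sequence, whoever serves it
```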

    * We would have a global trust mechanism, where people can evaluate things, and others can take advantage of those evaluations in proportion to how much they trust the evaluator. Right now, web sites have very minimal trust mechanisms (lifetime of domain, short domain names, and the generally-ignored X.509 certs). This would apply not just to domains, but would be more finely grained and apply to the content beneath them.

    * The concept of creatable personas would exist. Possibly data privacy laws would end up requiring companies not to associate personas, or perhaps we would just make it extremely difficult to associate them. You would maintain different personas which could, if so desired, be kept separate. Such personas would be persistent and could be used to evaluate how trustworthy people are -- e.g. if Mr. Torvalds joins a coding forum and makes some comments about OS design, he can simply and securely export his persona (a pubkey and some other data) from the other locations where he has been using that persona (like LKML, etc.) and benefit from the reputation that has accrued to it. This would eliminate impersonation ("this is the *real* Linus Torvalds website", etc.).
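
    A minimal sketch of the mechanics, under the assumption (as the comment suggests) that a persona boils down to a keypair: carrying a reputation to a new forum means signing that forum's challenge with the same key. No real protocol is specified here; this is purely illustrative.

```typescript
// Minimal sketch of the "exportable persona" idea: a persona is just a
// keypair, so proving you are the same persona on a new site means
// signing that site's challenge with the same private key.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const challenge = Buffer.from("prove you are persona X on forum Y");
const signature = sign(null, challenge, privateKey); // ed25519 takes no digest

// Any site that already holds the persona's public key can verify.
console.log(verify(null, challenge, publicKey, signature)); // true
```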

    * Such trustable, persistent personas would allow for the creation of systems providing persistent contact information (it's not that hard). This means no more dead email addresses.

    * Domain names would not be used as the primary mechanism for finding and identifying data providers. This is halfway handled already -- most people Google for things like "black sabbath" instead of finding the official Black Sabbath website by typing out a single term. It's still possible for sites to "choose their visual appearance", though, and "visa-checking.com" can look very much like Visa, unless end users have control over how domains are presented to them.

    * P2P becomes a primary transport mechanism for data -- from an economic standpoint, this means that consumers of data are responsible for subsidizing its continued distribution, shifting that burden off the publisher -- one step removed from consumers funding the production of their content. It solves many of the economic issues associated with data distribution. For this to happen, P2P protocols will have to be strongly abuse-resistant, even if that means a lesser degree of performance or efficiency. Many existing systems have severe flaws -- Kazaa, for instance, allows corrupted data to be spread to users, and conventional eDonkey (sans eMule extensions) does not provide any mechanism to discourage leeching, which destroys the economic benefits. Sadly, one of the few serious attempts to address the stability of such a system -- Mojo Nation, from Bram Cohen of BitTorrent -- was abandoned; it used a free-market economic system to determine resource allocation and was fairly abuse-resistant. I have some efforts in this direction, but don't use a free-market model.

    * Email and instant messaging will merge to a good degree (or perhaps one will largely "take over"). Up until now, it has mostly been technical limitations in existing software that have kept one from supplanting the other -- email provides poor delivery-time guarantees, instant messaging imposes message-size limits. Email uses a strictly thread-based model, instant messaging a strictly linear one. Probably someone will coin a new, stupid term for the mix of the twain (like "instant mail").

    * Personas and global trust networks (not extremely limiting binary-style trust, a la PGP/GPG), as mention
  • I don't like Flash (Score:4, Insightful)

    by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @04:28PM (#9836281) Journal
    I really hate Flash.

    I hate Flash for a lot of reasons.

    *) Lots of web designers think animation is a good idea. They tend to use it more than a user would like, especially since the "is it cool" metric (where users are asked for initial impressions of a site, rather than to use the thing for a month and report on usability) is wildly tilted toward novelty. Animation is almost never a good idea from a usability standpoint on a website.

    *) Lots of people doing Flash try to do lots of interface design, going so far as to bypass existing, well-tested and mature interface work with their own pseudo-widgets. They usually don't know what they're doing.

    *) Flash is slow to render.

    *) Flash is complex, and it's hard to secure the client-side Flash implementation compared to, say, a client-side HTML rendering engine.

    *) The existing Flash implementation chews up as much CPU time as it can get.

    *) Flash does not allow user-resizability of font sizes.

    *) Flash does not allow meta-level control over some things, like "music playing in the background". Some websites provide a button for this. I don't want to have control only when the designer chooses to give it to me -- I never want that software playing music if I choose not to have it do so.

    *) Flash does not allow user-configurable font colors (and for some reason, too many Flash designers seem to think that because ten-pixel-high light blue text on dark blue looks great to them, everyone else must be able to read their site just as easily).

    *) Because Flash maintains internal state that is not exposed via the URL, it's not possible to link to a particular state of a Flash program -- this means that you can only link to a Flash program, not to a particular section of one. This is very annoying -- I can link to any page on a normal site, but sites that are simply one Flash program disallow deep linking. (I'm sure that concept gets a number of designers up somewhere near orgasm, but it drives users bananas.) A sketch of the URL-fragment fix appears after this list.

    *) The existing Flash implementation is not nearly as stable as the other code in my web browser, and takes down the web browser when it goes down.

    *) As you pointed out, I can't search for a "page" in a Flash program.
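
    Here is the sketch promised above: keep the application's view state in the URL fragment, so every state is a copyable, linkable address. The element ids and the "news" section are hypothetical.

```typescript
// Minimal sketch of the deep-linking fix: view state lives in the URL
// fragment, so any state of the app is a linkable address.
// "main", "news-link", and "news" are hypothetical names.
function renderSection(section: string): void {
  document.getElementById("main")!.textContent = `Viewing: ${section}`;
}

// Restore state from the URL on arrival...
renderSection(window.location.hash.slice(1) || "home");

// ...and whenever the hash changes, so a deep link lands the visitor
// on the exact view it names.
window.addEventListener("hashchange", () => {
  renderSection(window.location.hash.slice(1) || "home");
});

// Navigation just rewrites the hash; the handler above does the rest.
document.getElementById("news-link")!.onclick = () => {
  window.location.hash = "news";
};
```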

    Really, the main benefit of avoiding Flash, to me, is that it keeps web designers from doing a lot of things that seem appealing to them but are actually Really Bad Ideas from a user standpoint. Almost without exception, Flash has made the sites I've used worse (the only positive example I can think of was either a JavaScript or a Flash demo in which the manufacturer of a hardware MP3 player showed its interface to website users).

    I *have* seen Flash used effectively as a "vector data movie format", a role in which it is admirable -- I suspect most Slashdotters have seen the Strong Bad cartoons at some point or another. But I simply do not like it as an HTML replacement.
  • Don't be nasty (Score:4, Insightful)

    by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @04:51PM (#9836626) Journal
    You know exactly what he meant, and simply couldn't pass up the opportunity to bash him to demonstrate your maximum geekiness.

    Please study TCP/IP better before you ask such a question again.

    You know what I've found? Professors and people who genuinely understand a subject are generally not assholes towards people who make an error in it (maybe if they're frustrated) -- they try to correct errors. It's the kind of people who just got their MCSE who feel the need to demonstrate how badass they are by insulting others.

    The question was not unreasonably formulated. The most frequently used application-level protocol on the Internet is HTTP. The only other protocols directly used much by almost all Internet users are the mail-related ones. The main way that people retrieve data and interact with servers on the 'Net is HTTP. Often, the HTTP-associated well-known ports 80 and 443 are the only non-firewalled outbound ports allowed to Internet-connected desktop machines. You're using a web browser to read this at the moment. Other protocols are increasingly tunneled over HTTP. Saying that we have an "HTTP Internet" is entirely reasonable.
  • by 0x0d0a ( 568518 ) on Thursday July 29, 2004 @05:07PM (#9836875) Journal
    If all those things in the title were used to develop a website, I think the things one could accomplish are amazing. As it stands, you can already use XHTML and XMLHttpRequest to build highly dynamic websites.

    "highly dynamic websites". Hmm. What specifically do you mean by this?

    I wish browsers could automatically detect what version of HTML a page requires and generate warnings if your browser is too old to render it properly, with a handy "update here" link.

    Browsers and website designers already have the ability to do this. The reason they don't is that it's a pain in the ass for the user.
  • by AuMatar ( 183847 ) on Thursday July 29, 2004 @05:15PM (#9836960)
    Of course, if we added state, we'd get rid of the need for cookies (and their privacy issues), and make writing web applications one hell of a lot easier.

    If you're not going to make it stateful, don't bother replacing it. As a stateless protocol, it's about as lightweight as you're going to get.
