Features of a post-HTTP Internet?
Ars-Fartsica asks: "We've been living with HTTP/HTML ("the web") for quite a while now, long enough to understand its limits for content distribution, data indexing, and link integrity. Automatic indexing, statefulness, whole-network views (flyovers), smart caching (P2P), rich metadata (XML), built-in encryption, etc. are all fresh new directions that could yield incredible experiences. Any ideas on how you would develop a post-HTTP/HTML internet?"
Word to Your Mother (Score:2, Funny)
Let's all just capitulate and make the official format a Microsoft Word document.
Michael. [michael-forman.com]
Re:Word to Your Mother (Score:3, Funny)
whoops (Score:3, Funny)
Sorry, my bad.
Re:Word to Your Mother (Score:2)
Re:Word to Your Mother (Score:2)
I agree, though - lots of uncompressed TIFFs are a good thing, too.
And then people can quote the existing PDF stuff, and add one line (with an entirely new font) that just says, 'I Agree.'
We'll make good use of that new Verizon fiber-to-the-premises bandwidth, no problemo!
I want FTMF - Fiber To My Fingers!
ADA and citation issues (Score:2)
Furthermore, citation is a significant problem on the Internet (for example, resources cited by URL can simply go away). We need to solve the citation problem -- the appropriate approach is to embed all files used as sources of content for the existing file (which would, in turn, contain copies of all *their*
Re:ADA and citation issues (Score:2)
Time to apply for ISO?
Re:ADA and citation issues (Score:2)
Oh, surely not quite yet. The ISO committees are good at being certain not to avoid including anything that someone might want, but they aren't perfect, and we need to be sure to avoid missing crucial features.
Actually, to further ADA compliance, it should be full video (with captioning), that way not only do the blind get access, but the deaf as well.
This is a good example. We were ready to go to ISO with this. But there are more -- what about dual-language law in states borde
Re:ADA and citation issues (Score:1)
What about data corruption? That's a huge problem these days. We have 3 copies of the document embedded inside itself, that way if one is corrupted, you have 2 more chances. Each version will also have 3 copies. What's the point o
Re:ADA and citation issues (Score:2)
Also, since they might have a machine that's not completely compatible, includes should be schematics and instructions for making their own machine, Altair-style.
Re:Word to Your Mother (Score:1)
Wrong question. (Score:5, Interesting)
First identify the problem, then you can start devising solutions.
So what's the problem? You mention certain limits of HTTP/HTML. Would these be overcome with better applications rather than throwing everything out?
Re:Wrong question. (Score:5, Interesting)
but HTTP is not concurrent! (Score:2)
For a web applications solution [HTTP is a great protocol that shouldn't go anywhere!], I'd propose a new protocol much like IBM's 3270 or 5250 terminal sessions...
Re:Wrong question. (Score:2)
I groan every time I use some web "application" that involves submitting a form and waiting 3 to 10 seconds between each step. It's a ridiculous way of making something interactive. Even the best hacks (such as Gmail, which uses a pile of javascript and DHTML to make things seem instantaneous) take excruciating amounts of coding and testing and still fall prey to the "submit and wait for a whole new page" thing for some parts.
HTTP is great for static pages, but anything remotely interactive
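A minimal sketch of the XMLHttpRequest technique those Gmail-style hacks rely on -- the "/status" endpoint and "result" element are illustrative, not anything from the post:

    // Fetch a fragment and patch it into the page, no full reload.
    var req = new XMLHttpRequest();
    req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status === 200) {
            document.getElementById('result').innerHTML = req.responseText;
        }
    };
    req.open('GET', '/status', true);  // async, so the page stays responsive
    req.send(null);

(Older IE needs new ActiveXObject('Microsoft.XMLHTTP') in place of the constructor.)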
Some limitations of HTTP and HTML (Score:2)
People accept the limitations of html and http because it's currently the best thing out there. It does have problems, though:
Scalability. A server that isn't well provisioned can easily be slashdotted or DDOSed into oblivion. Not everyone can afford a DS3 or akamai. This problem could be solved through replication.
Document identity. A document's location is a permanent part of its file name. If a document moves, its contents are the same, yet its name changes. Sometimes, it's nice to be able to
Why? (Score:5, Insightful)
Why think about getting rid of html/http?
The pure simplicity of developing and publishing content is what made the WWW take off the way that it did. Anyone could (and generally did!) build a site. It was an information revolution.
The other technologies will handle the more demanding apps out there. But HTML/HTTP is why the web (and, in a larger sense, the internet) is what it is today.
Why does HTTP have to go away? (Score:4, Interesting)
Has something changed that I'm not aware of here?
HTTP may be the most popular protocol out there, but it's hardly the only one. SMTP is really popular, FTP, NNTP, IRC, whatever all the IM systems use, UDP protocols used by games, DNS ... many of these may be showing their age, but they're not showing any signs of going away any time soon.
Re:Why does HTTP have to go away? (Score:2)
Hyper-Text Transfer Protocol
Hyper-Text Markup Language
Have nothing to do with each other?
Re:Why does HTTP have to go away? (Score:2)
So why criticize the article author? (Score:2)
What you are saying is not contrary to what the article author said in the first place.
Re:So why criticize the article author? (Score:2)
Re:Why does HTTP have to go away? (Score:2)
Read that again. Maybe out loud. Then you can hear what an ass you're being.
Re:Why does HTTP have to go away? (Score:2)
Hyper-Text Transfer Protocol
Hyper-Text Markup Language
Have nothing to do with each other?
Yup, that's correct... (-: They are completely "orthogonal" to each other.
The notion of HTTP being a "hypertext" related technology is more of a historical accident than anything. (Hypertext was a buzzword of the 90's, everybody made claim to the word.) The developers of HTML wanted a more elegant way of serving web pages than the older protocols like FTP and Gopher, so they contributed to HTTP's development. Ho
Re:Why does HTTP have to go away? (Score:2)
Re:Why does HTTP have to go away? (Score:2)
Yup. But if HTTP wasn't stateless to begin with, it would not have been adopted widely in the first place.
[BEGIN RANT MODE]
A lot of government and managerial people seem to forget that freedom from restrictions and overhead is what makes technologies and social processes popular. The current rush to reassert "control" over the Web and the Internet is going to drive away the very pe
Re:Why does HTTP have to go away? (Score:2)
Why can't anybody see that? It's time for something new to be developed that meets the needs of the NEXT 10 years. It's not about "control" -- rather, my point was that OSS should grab the reins FIRST. HTTP was mostly an OSS-type project and that made it very successful. It's time to put all the ducks in a row and lay out a new course... before THEY do it for us!
Re:Why does HTTP have to go away? (Score:1, Flamebait)
DNS IS TEH R0X0R
HTTP SUX P3N0R
LaTeX (Score:1, Interesting)
(LaTeX, being a programming language, is quite adept at laying things out, and accepting new sorts of extensions. It would be ideal for this kind of display
Re:LaTeX (Score:2)
That being said, the syntax of LaTeX is a pain to learn, a pain to code in, and just not all that great.
Now, I deal with the syntax because the approach of higher-level formatting is so good, and because the implementation is so good, but boy I wish that it was better.
Oh, and LaTeX doesn't have the excellent error detection and reporting of, say, Perl.
Re:LaTeX (Score:2)
I don't have a problem with text markup -- I just don't like LaTeX's particular syntax. It makes a lot of characters metacharacters (which makes it a pain to paste text in). A lot of characters that I think should be "regular" characters are only valid in math mode. I hate the way LaTeX deals with wrapping (I never want text going off the page, really). I hate trying to deal with cell-spanning in tables, which should really be part of the basic tabular e
a better plan... (Score:1, Offtopic)
Re:a better plan... (Score:1)
Digital or Analog
physical transmission type (ethernet, optical cable, phone line, radio waves)
addressing (IPV4, IPV6, probably more that I am not aware of)
transport protocol (TCP, UDP, etc...)
packet type (http, ftp, gopher, smtp, etc...)
That is why you can change one of the layers and none of the others have to know about it. In other words, you can serve up so
Re:a better plan... (Score:1)
Banning IE would be easy if you could get a following. Just put JavaScript code in the 'onload' parameter of the 'body' tag that detects browser and if it detects IE, redirects to the Firefox download page. (I don't remember the code to do this off the top of my head, but it is very feasible) No one could use IE on your site. Get enough people to do this, and you've effectively 'banned' IE from every site except one. Then, people will start to us
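One possible sketch of that detect-and-redirect idea (crude user-agent sniffing; the Firefox URL is Mozilla's real download page, but the logic here is illustrative only, not code from the post):

    // Bounce IE visitors to the Firefox download page.
    function bounceIE() {
        if (navigator.userAgent.indexOf('MSIE') !== -1) {
            window.location.href = 'http://www.mozilla.org/products/firefox/';
        }
    }
    // hooked up as: <body onload="bounceIE()">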
Re:a better plan... (Score:2)
Re:a better plan... (Score:2)
Nothing; I just meant that if someone wants to go around fixing something, how about tackling problems that are already known, and with a known solution, rather than simply changing HTTP just because one can.
Forget HTTP. (Score:5, Interesting)
Please! Someone give us a secure email protocol that doesn't allow address spoofing.
Re:Forget HTTP. (Score:5, Insightful)
1. Verification of Sender - This will never happen unless systems like cacert.org start to take off. Basically 99% of the internet don't give a damn about certificates, and the ability for anonymity is more limited. A debate about privacy/spam could go on for years if given the chance.
2. SPF-like protocols - This is the ability to discriminate who is and who isn't allowed to send email from a given domain (a sample record is sketched after this list). This will cause a few things:
- a. Every mail sender must be from a domain
- b. Every mail sender has to route through an institutional server (the road warrior problem)
- c. Every institutional mail server must deny relaying from anyone non-authenticated. (Should be done already)
- d. Institutions must be regarded positively by the community at large. If they aren't, they're completely blocked from sending email.
- e. You have to get DNS servers that you can update.
- f. You must lock down the DNS server from attacks (Have you done this lately?)
Anyways, both solutions are possible, but neither is ideal for everyone. SPF has a real chance of shutting down spammers, but I imagine the wild west internet we know is pretty much over.
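For reference, a hypothetical SPF record, published as a DNS TXT record (the domain and netblock are placeholders):

    ; Only example.com's MX hosts and the 192.0.2.0/24 block may send
    ; mail for the domain; everyone else hard-fails ("-all").
    example.com.  IN TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"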
Re:Forget HTTP. (Score:3, Informative)
Or decentralized trust systems, but yes.
Basically 99% of the internet don't give a damn about certificates, and the ability for anonymity is more limited.
Not really. I can create multiple electronic personas, unless you're trying to enforce a 1:1 id:person ratio.
2. SPF-like protocols - This is the ability to discriminate who is and who isn't allowed to send email from a given domain. This will cause a few things:
Where "SPF-li
Re:Forget HTTP. (Score:2)
I think trust systems are the best option. Non technical users can have blobs of trust created by their ISP. e.g. MSN trusts that an MSN user is trustworthy, and AOL users are trustworthy, and other trustworthy providers. Technical users can trust friends, trust major service providers, trust friends of friends and revoke trust as abuses occur.
So AOL or MSN or whatever can establish the one account to one owner relationship. Randomly generated emails, even from valid addresses would be ignored since t
Re:Forget HTTP. (Score:2)
The problem is that the trust system bundled with GPG (not that you couldn't build something on top of GPG's trust system) is binary -- you trust someone or you don't. There's no concept of "sorta trusting persona A, and therefore trusting persona B, which persona A trusts, somewhat less".
Re:Forget HTTP. (Score:2)
I think as long as there is a valid from address I'd be happy. As long as I can send back a 5 gig
Verifying this is as simple as having a two way handshake protocol before delivering mail.
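Something close to this already exists as "callback verification": the receiving server probes the claimed sender's MX before accepting the message. A hypothetical, abridged transcript (greeting and HELO omitted; the address is a placeholder):

    > MAIL FROM:<>
    < 250 OK
    > RCPT TO:<claimed-sender@example.com>
    < 250 Accepted       (the address exists, so take the original mail)
      ...or...
    < 550 No such user   (the claimed sender is bogus, so reject it)

The empty MAIL FROM:<> is the standard way to avoid verification loops.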
Re:Forget HTTP. (Score:3, Funny)
Forget about replacing HTTP - let's deal with the real problem protocol first: SMTP.
What, work on SMTP, while there are children starving somewhere in the world?
If we listened to people like you, nothing would ever get done. Well, perhaps some starving people would be saved. But that's beside the point, sheesh.
There is no problem with SMTP (Score:2, Interesting)
Claiming that there's a 'spoofing' problem with SMTP is like saying there's a 'spoofing' problem with HTTP, because *anyone* can put up a website claiming to be anyone else.
It's *NOT* a problem with the delivery protocol.
There already is a way of preventing address spoofing with email - it's called PGP, and using it doesn't require any change of SMTP.
Rewriting? (Score:5, Insightful)
But seriously, where's the need to dump HTTP? It's not exactly a complicated protocol, and can be adapted to do many different things. Pretty much any protocol can be tunneled over HTTP, even those you'd normally consider to be connection-oriented socket protocols.
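The standard CONNECT method is the concrete mechanism proxies use for exactly this kind of tunneling; a sketch with a placeholder host and port:

    CONNECT irc.example.net:6667 HTTP/1.1
    Host: irc.example.net:6667

    HTTP/1.1 200 Connection established

    (from here on, the proxy just shuttles raw bytes in both directions)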
As for HTML, again - why the need? By using object tags and plug-ins, the browser is almost infinitely extensible. Flash and Java bring more interactive content, streaming brings sound and video, PDF brings exact display of a document to any platform, and people are using all sorts of different XML-type markups every day now, such as RSS, XML-RPC, SOAP, and so on to do all kinds of interesting things like Web Services and RPC.
Microsoft and the open source community are both working on markup-like things that will enable applications to operate over the web (all via HTTP). XAML and XUL's descendants might well have a big future, especially if the way documents should be displayed is more rigorously specified than HTML.
Re:Rewriting? (Score:1, Insightful)
Are you a developer? There are lots of reasons, but they are not very good ones. It sounds like it might be discouraging, but it's really quite fun. You know the basic idea of how to do it, because you've done it once already, so you get to think about how to do it better. On a small scale, it is called refactoring. On a large scale it is probably a waste of time. But a lot of people
Re:Rewriting? (Score:3, Insightful)
The reason is that oftentimes the original design of something does not facilitate the structured adding of newer features, mainly because when $foo is first developed, nobody has any idea that people will want to be doing $bar 10 years down the road. Finally someone finds a way to allow $bar by tacking on a few things to $foo with superglue and duct tape. At first this is no big deal; $bar is just a small
Re:Rewriting? (Score:3, Funny)
Now ask me a hard one.
-Peter
Re:Rewriting? (Score:2)
Don't get rid of statelessness (Score:5, Insightful)
Re:Don't get rid of statelessness (Score:4, Insightful)
If you're not going to make it stateful, don't bother replacing it. As a stateless protocol it's about as lightweight as you're going to get.
Re:Don't get rid of statelessness (Score:2)
XML + XForms + XMLHttpRequest + canvas (Score:2, Insightful)
PS: Canvas is a new tag from ap
Re:XML + XForms + XMLHttpRequest + canvas (Score:1)
Re:XML + XForms + XMLHttpRequest + canvas (Score:4, Interesting)
Re:XML + XForms + XMLHttpRequest + canvas (Score:2)
Which rules out IE.
Mozilla has always been downloadable as a binary with SVG. It's on its way to being fully merged into the tree.
Re:XML + XForms + XMLHttpRequest + canvas (Score:3, Insightful)
"highly dynamic websites". Hmm. What specifically do you mean by this?
I wish browsers could automatically detect what version of HTML the webpage requires, and generate warnings if your browser's too old to render it properly, with a handy "update here" link.
Browsers and website designers already ha
XTP (Score:2)
Here's what Google finds:
http://www2.ics.hawaii.edu/~blanca/nets/xtp.html [hawaii.edu]
http://www.cs.columbia.edu/~hgs/internet/xtp.html [columbia.edu]
The question indicates misunderstanding (Score:2, Informative)
Please study TCP/IP better before you ask such a question again.
The comment indicates mis-reading (Score:1)
So instead of trying to prove that you're smarter than the average /.er by playing with semantics, how 'bout putting that noggin to better use and answering the question. Clearl
Re:The comment indicates mis-reading (Score:2)
Actually, I think it's a fair comment. The question becomes somewhat ambiguous when the line between the World Wide Web (which is ostensibly what the article poster meant) and the Internet as a whole is blurred. Is the intention to redevelop IP and/or TCP/UDP to be better suited for the distribution of web content, to the possible detriment of other forms of Internet content? Or is the question what it app
Don't be nasty (Score:4, Insightful)
Please study TCP/IP better before you ask such a question again.
You know what I've found? Professors and people that genuinely understand a subject are generally not assholes towards people that make an error in it (maybe if they're frustrated) -- they try to correct errors. It's the kind of people that just got their MCSE who feel the need to demonstrate how badass they are by insulting others.
The question was not unreasonably formatted. The most-frequently used application-level protocol on the Internet is HTTP. The only other protocols directly used much by almost all Internet users are the mail-related ones. The main way that people retrieve data and interact with servers on the 'Net is HTTP. Often, the HTTP-associated well-known ports 80 and 443 are the only non-firewalled outbound ports allowed to Internet-connected desktop machines. You're using a Web browser to read this at the moment. Other protocols are increasingly tunneled over HTTP. Saying that we have an "HTTP Internet" is entirely reasonable.
Unification (Score:5, Interesting)
Then I would re-design DNS so that you have to provide not just a domain name to resolve to an IP number, but a "resource type" such as SMTP, HTTP, etc. (similar to MX records, but generic). Each resource type would have its own associated IP number and port.
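DNS SRV records (RFC 2782, raised further down this thread) already approximate that idea; a hypothetical zone snippet, where the fields after SRV are priority, weight, port, and target, and all names and ports are placeholders:

    _http._tcp.example.com.  IN SRV 0 0 8080 web1.example.com.
    _smtp._tcp.example.com.  IN SRV 0 0   25 mail.example.com.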
I would unify all the protocols under a single HTTP-like protocol and make everything -- FTP, SMTP, NNTP, etc. -- a direct extension of it.
I would merge CGI and SMTP DATA into a single "data" mechanism that could be used with any of the protocols uniformly.
I would clean up the protocol so it's possible to concatenate multiple lines together without ambiguity, and uniformly, so the method for multiple line headers is the same as multiple lines of data.
I would also move SSL authentication into that protocol, rather than having it at the TCP level. This would make shared hosting simpler and would save us a LOT of IPv4 numbers.
I would peel the skin off of anyone who suggests that XML become an integral part of that protocol. XML is wordy, wasteful, hard to read and should be a high-level choice, not a low-level foundation.
That's not all I can think of, but that's all I'm going to bother with right now. =)
Re:Unification (Score:3, Funny)
Re:Unification (Score:2)
That's been considered before, and was rejected because handling variable length addresses would place an enormous strain on routers and DNS servers.
Re:Unification (Score:2)
If you kept the same model of bit-patterning the numbers (network bits high, host bits low), a single byte (or smaller bit pattern) could be added to the packet to represent the address length (00000100 for IPv4's four bytes, 00010000 for IPv6's sixteen).
Lookups could be sped up because you could pre-hash the router lookup table by separating IP networks by the length of their number. If a packet came in with an IP number length of 7, you could search for a routing solution straig
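A minimal JavaScript sketch of the length-prefixed idea (the wire format here -- one length byte, then that many address bytes -- is a hypothetical illustration, not a spec from the post):

    // Read one length-prefixed address out of a byte array.
    function readAddress(bytes, offset) {
        var len = bytes[offset];                   // e.g. 4 for IPv4, 16 for IPv6
        var addr = bytes.slice(offset + 1, offset + 1 + len);
        return { address: addr, next: offset + 1 + len };
    }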
Constant-size addresses (Score:2)
No, but it is easier for a chip engineer to make optimizations with constant-length addresses.
And, honestly, as long as we're using IPv6 addresses as actual addresses, as they're intended to be used, I just cannot see length being an issue again. (Problems will come up if some idiot tries ramming additional data into the thing, like a MAC address.)
Re:Constant-size addresses (Score:2)
I guess this is at the core of the issue for me: I can't imagine, right now, needing any more than what IPv6 allows.
It's not just a matter of having enough addresses for all the hosts we may have, there's an allocation issue. Every network is going to want giant swaths of address space, so the ceiling that IPv6 provides is much lower than the sum of hosts that can fit in it. Lots of addresses are going to go to waste in one network while oth
Critique (Score:2)
As I go into detail in my post further in this thread, I don't think that this is a good idea. It makes optimizations harder, and IPv6 should never need to be extended as long as it is properly used. Furthermore, unless a new protocol uses the *exact same* routing mechanisms and *only* changes address length, compatibility gets broken anyway. I think the gain
Re:Critique (Score:2)
You could have a "WWW" resource type, I guess.
This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.
I think you misread what the original poster meant. He wanted a given DNS name to resolve to completely different IPs depending on intended use. For example, "tempuri.org" could resolve to one IP if being accessed in the "Web" domain, while the DNS server would
Re:Critique (Score:2)
Relying on a port number would require either server (or a third server, most likely) to dispatch requests to a single IP, then route traffic to other IPs based on intended use. He wanted to shift the burden of traffic differentiation up a level.
This isn't how it would work. When a client resolves a domain name, it would provide a domain name and "use ID" and would get, in return, an IP address and port and would go directly to the IP
Re:Critique (Score:2)
We've already got one of those: rfc2782 [roxen.com]... It's in use already, but mainly in-site as part of DNS service discovery (rendezvous/zeroconf) and ActiveDirectory - it's not supported by e.g. standard web browsers, email clients etc.
There are problems with using site-variable port numbers: it makes identifying t
Re:Critique (Score:2)
This IS a real problem with the idea, but I think it could be worked out with some creative thinking.
Someone types in example.com - what do you need to lookup? www.example.com A, example.com A, example.com SRV? What about sites where these are different - which address do you connect to? Then, do you send them off all at once (reduces delays in the common instance but has a tendency to increase de
Re:Critique (Score:2)
You could have a "WWW" resource type, I guess.
My idea was to change DNS to allow IP numbers to be returned for arbitrary identifiers the way MX works, but more generically; not "resource types" per se. You can store numbers for HTTP, WWW, MAIL, TELNET, PORN, whatever you want.
This is already done, with well-known ports -- the advantage of using well-known ports is that the additional network traffic and latency is avoided.
Well-known ports are very
Re:Critique (Score:2)
No, you don't. You simply move the problem from "well-known ports" to "well-known labels".
Re:Critique (Score:2)
If they move to "well-known labels", isn't that "dropping well-known ports?" Ports and labels are two different animals. Well-known ports are a
REST (Score:2, Insightful)
Instead of ditching HTTP, let's ditch SOAP-RPC.
Flash (Score:1, Interesting)
Flash has its drawbacks of course (proprietary and non-indexable being pretty critical), but if opened up to a standards body, it could very we
I don't like Flash (Score:4, Insightful)
I hate Flash for a lot of reasons.
*) Lots of web designers think animation is a good idea. They tend to use it more than a user would like, especially since the "is it cool" metric (where users are asked for initial impressions of a site rather than to use the thing for a month and report their feelings on usability) is wildly tilted toward novelty. Animation is almost never a good idea from a usability standpoint on a website.
*) Lots of people doing Flash try to do lots of interface design, going so far as to bypass existing, well-tested and mature interface work with their own pseudo-widgets. They usually don't know what they're doing.
*) Flash is slow to render.
*) Flash is complex, and it's hard to secure the client-side Flash implementation compared to, say, a client-side HTML rendering engine.
*) The existing Flash implementation chews up as much CPU time as it can get.
*) Flash does not allow user-resizability of font sizes.
*) Flash does not allow for meta-level control over some things, like "music playing in the background". Some websites provide a button for this. I don't want control only when the designer chooses to grant it -- I never want that software playing music if I choose not to have it do so.
*) Flash does not allow user-configurable font colors (and for some reason, too many Flash designers seem to think that because ten-pixel-high light blue text on dark blue looks great to them, everyone else will be able to read their site just as easily).
*) Because Flash maintains internal state that is not exposed via URL, it's not possible to link to a particular state of a Flash program -- this means that you can only link to a Flash program, not a particular section of one. This is very annoying -- I can link to any webpage on a site, but sites that are simply one Flash program disallow deep linking. (I'm sure that concept gets a number of designers up somewhere near orgasm, but it drives users bananas.)
*) The existing Flash implementation is not nearly as stable as the other code in my web browser, and takes down the web browser when it goes down.
*) As you pointed out, I can't search for a "page" in a Flash program.
Really, the main benefit of avoiding Flash to me is that it keeps web designers from doing a lot of things that seem appealing to them but are actually Really Bad Ideas from a user standpoint. Almost without exception, Flash has made sites I've used worse (the only positive example I can think of was a Javascript or Flash demo in which the manufacturer of a hardware MP3 player showed off their interface to website users).
I *have* seen Flash used effectively as a "vector data movie format", for which it is an admirable format -- I suspect most Slashdotters have seen the Strong Bad cartoons at some point or another. But I simply do not like it as an HTML replacement.
Re:I don't like Flash (Score:1)
But if the designer gets the tickle to make your browsing experience something of a movie and doesn't provide a (point-for-point) site map alternative, you're screwed and they've screwed themselves.
I browse with plug-ins off personally, flash ads are a pet
Re:Flash (Score:1)
What about the non-HTTP Internet? (Score:3, Informative)
Please don't assume that my Internet is the same as your Intarweb.
Come up with something people want. (Score:2)
You need to develop a new protocol/app that provides something people actually want without added complexity, and you'll replace the web as quickly as the web replaced usenet/gopher/ftp/irc (I know it didn't replace all of those things, but for the majority of uses and people it did to some degree render them obsolete).
Of course if your new system was really a wanted thing and was open enough to b
Oh, yeah (Score:5, Insightful)
* The primary addressing mechanism would be content-based addressing (like SHA1 hashes of the content being addressed; a short sketch follows this list). We have problems with giving reliable references for things like bibliographies. We are gradually moving in this direction. P2P networks are now largely content-addressed, and bitzi.com provides one of the early centralized databases for content-based addressing.
* We would have a global trust mechanism, where people can evaluate things and based on how well other people trust their evaluations, those people can take advantage of their evaluations. Right now, web sites have very minimal trust mechanisms (lifetime of domain, short domain names, and the generally-ignored x.509 certs). This would apply not just to domains, but be more finely-grained and apply to content beneath it.
* The concept of creatable personas would exist. Possibly data privacy laws would end up requiring companies not to associate personas, or perhaps we would just make it extremely difficult to associate such personas. You would maintain different personas which may, if so desired, be separate. Such personas would be persistent, and could be used to evaluate how trustworthy people are -- e.g. if Mr. Torvalds joins a coding forum and makes some comments about OS design, he can simply and securely export his persona (a pubkey and some other data) from the other locations that he has been using that persona (like LKML, etc) and benefit from the reputation that has accrued to that persona. This would eliminate impersonation ("this is the *real* Linus Torvalds website", etc.).
* Such trustable, persistent personas would allow for the creation of systems to allow persistent contact information to be provided ('snot that hard). This means no more dead "email addresses".
* Domain names would not be used as the primary interface mechanism for users finding and identifying data providers. This is halfway handled already -- most people Google for things like "black sabbath" instead of looking for the official Black Sabbath website by typing out a single term. It's still possible for sites to "choose their visual appearance", though, and "visa-checking.com" can look very much like Visa, unless end users get control over how domains are presented to them.
* P2P becomes a primary transport mechanism for data -- from an economic standpoint, this means that consumers of data are responsible for subsidizing continued distribution of that content, and shifts the burden from the publisher of the content -- one step removed from consumers funding the production of their content. It solves many of the economic issues associated with data distribution. For this to happen, P2P protocols will have to be strongly abuse-resistant, even if that means a lesser degree of performance or efficiency. Many existing systems have severe flaws -- Kazaa, for instance, allows corrupted data to be spread to users, and conventional eDonkey (sans eMule extensions) does not provide any mechanism to avoid leeching, which destroys the economic benefits. Sadly, one of the few serious attempts to address the stability of the system -- Mojo Nation, which Bram Cohen of BitTorrent worked on -- was abandoned; it used a free-market economic system to determine resource allocation, and was fairly abuse-resistant. I have some efforts in this direction, but don't use a free-market model.
* Email and instant messaging will merge to a good degree (or perhaps one will largely "take over"). Up until now, it has mostly been technical limitations in existing software that have kept one from supplanting the other -- email provides poor delivery-time guarantees; instant messaging imposes message size limitations. Email uses a strictly thread-based model, instant messaging uses a strictly linear model. Probably someone will coin a new, stupid term for the mix of the twain (like "instant mail").
* Personas and global trust networks (not extremely limiting binary-style trust, a la PGP/GPG), as mention
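A minimal sketch of the content-based addressing mentioned above, assuming Node.js's built-in crypto module; the "urn:sha1:" naming is illustrative, not from the post:

    var crypto = require('crypto');

    // Identical bytes always yield the identical address,
    // no matter who happens to be hosting them.
    function contentAddress(data) {
        var digest = crypto.createHash('sha1').update(data).digest('hex');
        return 'urn:sha1:' + digest;
    }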
Re:Oh, yeah (Score:2)
SOAP is a hack to ram things through HTTP
I completely agree. It was born out of the need to tunnel RPC through HTTP due to misguided and zealous firewall administrators, combined with the then-current hype: XML. The result is a bloated protocol.
sunrpc complicated and ugly
It isn't. The interface specification is close to C:
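(The example that presumably followed got cut; a hypothetical rpcgen .x interface sketch, with illustrative names and program number:)

    program SQUARE_PROG {
        version SQUARE_VERS {
            int SQUARE(int) = 1;   /* procedure 1 */
        } = 1;                     /* version 1 */
    } = 0x20000001;                /* program number (transient range) */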
and after the stubs ha
Re:Oh, yeah (Score:2)
If MS Office/OpenOffice would output CSS+XHTML+SVG, then it would be useful. Right now you have to learn a third-party client (Nvu, Moz Composer, Dreamweaver) that outputs decent (Nvu) to horrible (Frontpage 2003/FPExpress) code. I have yet to find a WYSIWYG editor that isn't brain-dead; the W3C Amaya browser/editor is decent, but slower than molasses.
XML (Score:1)
The XML should be pseudo-standardized, so browsers would be able to recognize TV-Listing-ML/Search-Result-ML and present it in an alternate form, if you wanted, with headers and footers added (to make advertisers happy, unfortunately necessary for a new Web protocol to succeed).
Re:XML (Score:1)
I agree with you, but on this point:
I would like to point out that XSLT already exists, and it is not a replacement for CSS, but a complement: XSLT = data transformation, CSS = style.
HTTP is fine (Score:2, Interesting)
HTML, however, feels rather clunky now with all these bloated half-supported standards tacked onto it. We still don't have consistent rendering across the board, and it's still a pain in the posterior to publish anything. CSS, that wretched hammer of aborted salvation, is yet another limited hack.
We used to have HTML glitches and workarounds, now we have CSS glitches and workarounds; design compromises in a system that was supposed to break the
It's been tried (Score:1)
No XML please. (Score:2)
Tim
Pro Jax! (Score:2)
Because HTML is fairly verbose and well-formed HTML is regularly laid out, it isn't terribly difficult to parse. Comput
The future (Score:1)
XML-Enabled Telnet (Score:1)
My ideal vision of the future of the internet is basically a version of Apache that supports persistent connections, so I can go back to the days of BBS, only with graphics and streaming video added. Or would we call them MMOBBSes now?
Privacy (Score:2)
IPSec? At the application level, SSL?
I want IP privacy masking, meaning if I connect to a server, it won't record my IP, and my IP will never be seen on the public network. The phone company can "block caller ID"; why can't an ISP block "host IP"?
Oh, it can. Lots of ISPs provide web proxies, in particular (they'd probably be
Critique (Score:2)
Almost all existing P2P filesharing-oriented servents reshare downloaded files. From that standpoint, the statement is not unreasonable.
And many common implementations actually use HTTP.
Not that I'm aware of. Gnutella uses an HTTP-like protocol, which is as close as I can think of.
Sure. I'd refactor XHTML to include more useful element types (e.g. ).
I disagree. The current behavior of navigation controls operates on a meta-level -- the operator never gets control over
Re:Critique (Score:2)
I guess it comes down to what you define as "smart caching".
It certainly caches, the question is whether it's co
Re:Critique (Score:2)
Which all newer HTTP specifications require backwards compatibility with.
Try actually reading the protocol specification. It uses HTTP.
Try actually using the servents. The document does not reflect how the GnutellaNet operates. Given the way the GDF operates (mostly trying to formalize existing practices rather than coming up with new protocol specs from scratch), it is unlikely that it ever will.
Not in any browser I know of; it's usually prov