UDP - Packet Loss in Real Life?
PacketStorm asks: "There's always an argument between TCP and UDP users. TCP users complain that UDP is non-deterministic and lossy, while UDP users complain that TCP is slower and nobody needs all the features anyway. So the question is - has anyone actually seen/experienced UDP loss in high-traffic environments? Is the degeneration of UDP any better or worse than TCP in a congested environment? TCP also craps out in times of congestion, but at least you know - or do you? Experiences?"
UDP packet loss (Score:2, Interesting)
Re:UDP packet loss (Score:3, Informative)
With TCP, you have a listening socket and then a socket for each client. Each socket has a file descriptor, and you have to select() on all of them to check for activity. There is a lot of housekeeping to do - though it is a solved problem - all webservers solve it. Timeouts are harder to determine from the application's perspective because there can be retransmits and stuff going on that you have no idea about.
However, with UDP, it's laughably easy. You just do one recvfrom() and you get a packet and it also fills in a data structure to tell you where it came from. No filling in fd_set structures and no running out of file descriptors. When you got the packet, you got it.
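As a sketch of that difference (Python rather than C for brevity; the loopback address and buffer size are arbitrary), a complete UDP receive path really is one socket and one call:

```python
import socket

# One socket, no listen(), no accept(), no fd_set bookkeeping.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # port 0 = let the OS pick a free one
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

# recvfrom() returns the payload *and* who sent it, in a single call.
data, addr = server.recvfrom(1024)
print(data, addr[0])
```

The TCP equivalent of this needs a listening socket, accept(), and per-client descriptors before the first byte arrives.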
Most online games and stuff would really like to know when packets were sent, when they were received and if packets are being dropped. With UDP you can usually find out these values directly. With TCP I think it would be more like an educated guess.
One drawback is that UDP is quite easy to spoof. I can send a packet and it is up to the application to figure out it has been spoofed.
If I were downloading a patch to a game I think TCP would be the better choice. It already has the smarts to pace the connection, transmit the data reliably and prevent spoofing.
Re:UDP packet loss (Score:5, Insightful)
Re:UDP packet loss (Score:3, Informative)
I would think SOME game data would need to be reliable, but most isn't. The problem with TCP for an online game is that you can never THROW AWAY data that's too old or unnecessary. It will be transmitted and retransmitted further delaying current data. At some point, things will become unusable.
I don't think you could play a multi-user game for over an hour and not run into this problem. I think on lower-bandwidth connections, you might never be able to catch up once you fall behind.
Of course, you could do what Microsoft does with DirectPlay: open multiple connections, sometimes on multiple protocols, some of them asymmetrical, while ignoring silly details like firewalls and NAT.
I think the initial handshake for dungeon siege was something like:
first packet: udp my_ip:6073 to server_ip:6073
reply packet: udp server_ip:6073 to my_ip:2302 (what!?!?)
next packet: udp my_ip:2302 to server:6073
or some garbage like that. I think the dungeon siege guys just changed it.
Re:UDP packet loss (Score:2)
Well, that's not entirely true -- while it is the case that once you have called send() on the data, it can't be thrown away, it's also true that if you are maintaining an outbound message queue (to avoid blocking on a filled TCP send buffer), then you can remove or replace data that is in the outbound message queue, that hasn't been sent yet. I have used this technique with reasonable success.
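A minimal sketch of that outbound-queue technique (the keying scheme here is my own assumption, not the poster's actual code): messages are keyed by what they describe, so a fresh update overwrites a stale queued one instead of queueing behind it:

```python
from collections import OrderedDict

class OutboundQueue:
    """Unsent messages keyed by topic. Posting a newer message for the
    same key replaces the stale one that hasn't hit the socket yet."""
    def __init__(self):
        self._pending = OrderedDict()

    def post(self, key, payload):
        # Assigning to an existing key keeps its queue position but
        # swaps in the fresh payload - the old data is thrown away.
        self._pending[key] = payload

    def next_to_send(self):
        # Pop the oldest pending message (FIFO among distinct keys).
        key, payload = self._pending.popitem(last=False)
        return payload

q = OutboundQueue()
q.post("player42/pos", b"x=1,y=1")
q.post("player42/pos", b"x=5,y=9")   # stale position replaced, not queued twice
q.post("chat", b"hi")
print(q.next_to_send())              # -> b'x=5,y=9'
```

Once a message leaves this queue for the TCP send buffer, of course, it is committed - the replacement window only exists on the application side.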
I do agree that UDP has its place, however.
Re:UDP packet loss (Score:3, Informative)
Some tasty articles from gamasutra (might require a login, you might also find these in Google's cache):
"TCP is evil. Don't use TCP for a game. [gamasutra.com] You would rather spend the rest of your life watching Titanic over and over in a theater full of 13 year old girls."
article [gamasutra.com] on WON's servers for Half-Life.
Dead Reckoning [gamasutra.com] Latency Hiding for Multiplayer Games.
Other software might need the benefits of TCP, but game development is one familiar illustration of where UDP often wins out.
Re:UDP packet loss (Score:1)
both are useful (Score:4, Insightful)
Parts of the Internet currently collapsing? (Score:4, Funny)
Why UDP in games, TCP for bulk transfer (Score:5, Informative)
TCP is designed to make unreliable networks (like the internet, which only gives "best effort delivery") reliable by ensuring that a stream can be reassembled, in order, with no missing pieces. Read the RFC for more info here. This reliability makes it good for things that need zero corruption (file transfers for example), and aren't time critical.
Hope this helps.
--MonMotha
Re:Why UDP in games, TCP for bulk transfer (Score:2)
This reliability makes it good for things that need zero corruption (file transfers for example), and aren't time critical.
TCP and corruption is an interesting subject. TCP's error detection is only designed to solve the problem of data not being delivered. It is not designed to stop data corruption. If the transport layer corrupts data, then eventually, TCP could deliver incorrect data.
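As an illustration of how weak the 16-bit Internet checksum is (a sketch; the routine below follows RFC 1071, and the payload is made up): it is just a ones'-complement sum of 16-bit words, so any corruption that reorders whole words is invisible to it:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

good = b"PAYROLL!"
# Corrupt the data by swapping two 16-bit words: a real change, but the
# ones'-complement sum is order-independent, so the checksum still matches.
bad = good[2:4] + good[0:2] + good[4:]
print(hex(inet_checksum(good)), hex(inet_checksum(bad)))
```

Ethernet's CRC catches this particular pattern, but not every corruption survives every hop with its CRC intact - hence end-to-end application checksums.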
What happens if data is corrupted so that the corrupted checksum matches the corrupted data? Eventually, from a probability standpoint, there will be undetected errors.
Interesting paper on Checksums, CRCs (Score:4, Informative)
Derek
Re:Why UDP in games, TCP for bulk transfer (Score:1)
--MonMotha
Re:Why UDP in games, TCP for bulk transfer (Score:2)
UDP Experience (Score:4, Interesting)
During my initial proposal I mentioned to the PHBs that I would use the UDP protocol. One of my colleagues, wanting to sound important, said that UDP can be lossy. He went on to explain packet loss to the befuddled crowd. Well, the PHBs latched onto the term "packet loss". Packet loss this, packet loss that. They had no freakin' clue what it was, but it sounded pretty cool.
Anyway, I had to set up a test in which I had all of the timeclocks start a program at the same time. This program went into a tight loop of sending UDP packets (clock numbers) across the network to the server. Each one sent 1000 clock numbers, and every single one made it across. Obviously our 100 Mb network and proper use of subnets helped, but we haven't experienced any packet loss in the four years that these things have been running. So there.
Matthew
Re:UDP Experience (Score:1, Insightful)
Even if it does, it's still not a great decision.
Re:UDP Experience (Score:1)
Re:UDP Experience (Score:1, Informative)
#2. Within your LAN is one thing; out in the big bad internet is something else.
Re:UDP Experience QWZX (Score:5, Insightful)
Dude, I hate to sound flame-y, but do you have any understanding of what you implemented? That you got lucky and it "works" is totally irrelevant to the fact that it's completely unreliable. All it takes is one flakey piece of Ethernet dropping packets to SCREW UP FREAKING TIMESHEETS.
This reminds me of the morons who use MySQL for financial transactions (i.e., no transactions, no foreign keys, etc) who justify their ignorance with "well, it works so far!!"
The point isn't whether it works or not, the point is that when it fails, it fails spectacularly. Like your system. Just because you can keep spinning the chamber and the Russian roulette gun never goes off doesn't mean it never will.
Seriously, you screwed up bad. These are the kinds of stories that really make me think that programmers should have some sort of licensing.
Re:UDP Experience QWZX (Score:3, Insightful)
Ok, calm down...
Now, I agree that UDP is built to be a fast, not-so-reliable protocol. One would initially presume that this is not the best thing to use on a punchclock system.
However, you have to look at the whole system.
First of all, if you put all the punchclocks, and the servers on the same subnet, then you've eliminated dropped packets due to routing.
Secondly, if you send an acknowledgement back to the clock, then it can display to the user "OK, you're signed in" or "ERROR, please retry". If you lose either the "sign-in" packet or the acknowledgement packet, then all the user has to do is swipe again to retry.
Thirdly, these kinds of systems always (should) have a manual backup, so if, for some reason, the system records Buddy punching in at 7 am but never punching out, then a supervisor can go in and manually update the database to fix the problem.
Just remember that programs like this don't exist in a vacuum; they are always part of a larger picture, and they need to function in that framework.
Re:UDP Experience QWZX (Score:3, Insightful)
As a rule, people who build acknowledgements into their UDP-based protocol should have used TCP.
Re:UDP Experience QWZX (Score:2)
Not necessarily. A better rule is that people who build sequencing and acknowledgements in should have used TCP. But if you can do without one or the other then sometimes UDP is worth it, but not often.
Sumner
Re:UDP Experience QWZX (Score:2)
I agree. In this case, you don't need sequencing, because there's only ever one transaction out there from each client at any given time. The transaction is as simple as "12345"->"OK". The UDP protocol is so much simpler to program (and you don't have to worry about managing connections) that for this particular application, it certainly seems worthwhile.
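A sketch of that "12345"->"OK" exchange (Python on loopback, with the server side inlined into the client loop purely for illustration; the timeout and retry count are arbitrary):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

def swipe(clock_number: str, retries: int = 3) -> bool:
    """Send the clock number; if either packet is lost, just swipe again."""
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(0.5)
    for _ in range(retries):
        client.sendto(clock_number.encode(), ("127.0.0.1", port))
        # Server side (inlined here for the sketch): ack whatever arrives.
        _data, addr = server.recvfrom(64)
        server.sendto(b"OK", addr)
        try:
            if client.recv(64) == b"OK":
                return True
        except socket.timeout:
            continue   # lost packet -> retry; no connection state to clean up
    return False

print(swipe("12345"))
```

Because only one transaction is ever outstanding per client, no sequencing is needed - the retry loop is the entire reliability layer.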
Re:UDP Experience QWZX (Score:1)
I once worked on a system that made use of PCs to process bitstreams for a digital radio broadcast system. Communications were UDP-based, and we had a problem where the odd data packet was being lost (0.5 seconds of audio lost approx. every 20 seconds). I spent a MONTH debugging code to figure out where the lost packets were going. Then, using tcpdump, I saw that the NIC was actually losing packets (each packet contained an integer counter). I was able to repeat the loss of packets at will.
Replaced the NIC, and all was well.
This is not to put down UDP; for this application, TCP would have retransmitted and the added delay would have created problems elsewhere.
Moral of the story: expect data to be lost over ANY kind of network.
Re:UDP Experience QWZX (Score:2)
I of course don't know about the original poster's actual implementation.
I'm making the assumption timeclocks themselves are very lightweight (or thin) devices, CPU and memory wise.
The problem with TCP for a truly embedded application is that you have to play a lot of keep-state programming to implement even a minimal TCP stack.
A minimal UDP stack is much smaller. Everything you need to do with UDP is rather stateless.
Ignoring ARP and ICMP for a second (which are the same for TCP and UDP), sending a single message via UDP consists of encapsulating the packet and sending it. With TCP you have to send a SYN, then wait for a SYN-ACK, then send an ACK back (possibly containing data), and then you have to deal with closing the connection.
Maybe a better way to describe this is that UDP is a DATAGRAM protocol. I.E. you format a datagram and send it. By contrast TCP is designed for a mostly-reliable almost-serial-like stream, with congestion control, retransmission, etc. etc.
I was going to try to make a case for TCP in the timeclock example where a larger machine is available. I just can't see a good reason for the TCP overhead in this app. I mean, what exactly are they going to be sending... Something like "Employee 308 clock in", which the server is going to say "Got it, employee 308"?
If you haven't written low-level embedded networking code you really don't realize how much overhead is in TCP. Heck, just to get a TCP session set up and torn down you need 7 packets:
--> SYN
<-- SYN-ACK
--> ACK
(Session Established)
<-- FIN
--> ACK
--> FIN
<-- ACK
With UDP you only need 2 (1 each way) to send your message and get it acknowledged.
Don't get me wrong, I'm a fan of TCP in the right places. Any message larger than what fits in a single UDP packet under the typical ~1400-1500 byte Ethernet MTU should probably be sent via TCP.
Re:UDP Experience QWZX (Score:2)
Actually, the timeclocks he describes are fully fledged PCs, I believe, and they do support a full TCP stack already. However, considering the differences in complexity of the server, I still would favour a UDP solution.
As an example, for the server to implement TCP, you would need to listen on a port, deal with an incoming connection (maybe fork off another process), deal with connection timeouts, etc.. With UDP, you can just have one single procedure that's called when a packet arrives. The procedure parses out the employee number from "Employee 308 signing in", updates the employee database, and sends a UDP message back, i.e. "Employee 308 signed in". This tolerates dropped packets (the employee just swipes again), and most importantly, the implementation is practically stateless, and that's a much simpler architecture. Never underestimate the KISS principle.
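The stateless handler described above might look something like this (a sketch: the message format is the hypothetical one from the comment, and the database update is stubbed out):

```python
import re

def handle_packet(payload: bytes) -> bytes:
    """One stateless procedure: parse the packet, 'update the database',
    and build the reply to send back in a single UDP datagram."""
    m = re.fullmatch(rb"Employee (\d+) signing in", payload)
    if m is None:
        return b"ERROR, please retry"
    emp = m.group(1).decode()
    # ... update the employee database here ...
    return f"Employee {emp} signed in".encode()

print(handle_packet(b"Employee 308 signing in"))
print(handle_packet(b"garbage"))
```

No connection objects, no timeouts, no forked processes - a malformed or duplicated packet just produces another harmless reply.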
Re:UDP Experience QWZX (Score:2)
When was the last time you used MySQL? MySQL now supports several table types other than the default MyISAM type, including types with full transaction support.
MySQL and transactions (Score:1)
Re:UDP Experience (Score:1, Insightful)
Re:UDP Experience (Score:2)
So, what you're saying is... (Score:4, Funny)
Re:UDP Experience (Score:1)
Well, the main problem compared to TCP will be scalability. A few small transactions on a LAN can work correctly, so if your application will only ever run in that environment, then everything should be OK.
Problems would arise if the number of transactions grows, or their size, or the reach of the network (from LAN to MAN, or WAN).
From experience: I worked on "debugging" a small database application that used UDP. They added remote dial-in clients. While it worked smoothly on the LAN, with few routers, the fun started the moment the dial-in customers began trying to update data on the system :). The original developer assumed they'd never change the environment, and had a retransmission scheme that worked OK on the LAN but constantly gave the dial-in clients a bad time.
Re:UDP Experience (Score:1)
Re:UDP Experience (Score:1)
Re:UDP Experience (Score:1)
I've got 3 Cisco 2924XL switches all linked together. I run about 25 3Com Ethernet phones that all use raw Ethernet packets (think datagrams, like UDP). The phones are 10 Mb and many have PCs hooked to them (the phones are 10 Mb hubs). Each of the phones sends a sequence packet number. So far in my testing, I have never seen a lost sequence number, and this is for millions of packets.
Our other office is just a smaller version of the same thing with newer 2950 switches and only 4 phones. According to the switch, it's at about 1/2,000,000 of its capacity most of the time when it's busy.
Comment removed (Score:5, Informative)
What argument? (Score:4, Insightful)
Re:What argument? (Score:1)
Re:What argument? (Score:2)
Along with some <FatB*stard> Baby - the other, other white meat!</FatB*stard>
Re:What argument? (Score:3, Funny)
Except for pepsi.
Re:What argument? (Score:1)
And milk [notmilk.com]
Re:What argument? (Score:2)
Seems like UDP would work OK on an unclogged network and have the benefit of adding less congestion compared to TCP with all of its handshaking.
For a low-traffic, more or less isolated network, UDP might be a simple solution. OTOH, with PCs with 100 Mb cards so cheap these days, people could probably afford the extra buzz of TCP without noticing any appreciable degradation of performance or incurring great cost.
The day the network starts to become the least bit congested and any UDP packets get dropped, then everyone shifts to TCP and things start to really get congested.
A binary QoS mechanism exhibiting an inherent problem.
I wouldn't know. (Score:2, Funny)
- A.P.
The question is: (Score:1)
non-deterministic? (Score:2)
If you have a network that just delivers packets correctly without making decisions, then all the better, and we should take it on tour. Maybe, however, you're referring to the broadcast mentality, where packets aren't sent to specific hosts as required but to all hosts, since this is how non-determinism is typically simulated - but simulated non-determinism and true non-determinism are a tad different.
Maybe it's just that I don't really know a lot of networking theory, but that term just sort of jumped out at me. I've often equated non-determinism with magic...
Re:non-deterministic? (Score:1)
A deterministic system maps input to output 1:1; a non-deterministic system maps input to output 1:some_number.
Note that TCP isn't really deterministic: What if all the routes between the source and destination go down? The packet won't be received. It's just that it's more likely to be deterministic. And anyway, TCP packets get dropped just like UDP packets; it's just that TCP packets get resent.
btw, this has nothing to do with the "magic" that non-deterministic systems exhibit (these systems can work by looking for a specific output for a specific input out of all the outputs that can be generated from that input).
Multicast (Score:2)
I used to work with a company that sent data (mostly video) to all customers via satellite, in multicast. The uplink went through the terrestrial network. IIRC, the whole traffic was sent over UDP.
real life experience (Score:1)
An excellent example comes from my own experience. Years ago we built a system that needed to synchronize system clocks between a server (hooked up to an atomic clock) and a bunch of workstations (I know about NTP, it was just too complicated for our humble needs). The server would broadcast the time over UDP every 30 seconds, and the workstations would synchronize their system clocks. This is a typical example where you don't want retransmission of lost packets, since the data (i.e., the system time) has become invalid in the meantime. Better to wait for the next update, since the system clocks will drift only microseconds in 30 seconds anyway.
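A sketch of that scheme's receiving side (the packed-double wire format and the sanity bound are my own assumptions, not the original system's):

```python
import struct
import time

def make_time_packet():
    # Server side: broadcast the master clock as a network-order double.
    return struct.pack("!d", time.time())

def apply_time_packet(packet, max_skew=2.0):
    """Workstation side: return the new clock value, or None to ignore.
    A *lost* packet needs no handling at all - we just wait 30 s for
    the next broadcast, since local drift in that window is tiny."""
    (server_time,) = struct.unpack("!d", packet)
    if abs(server_time - time.time()) > max_skew:
        return None   # wildly implausible -> discard rather than retransmit
    return server_time

print(apply_time_packet(make_time_packet()))
```

Note there is no retransmission path anywhere: a retransmitted timestamp would be stale by definition, which is exactly why UDP fits here.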
Before implementing this, I sort of tested UDP reliability. I was warned about dropped packets, corrupted packets and packets delivered out of sequence. The test consisted of pumping out as many packets as possible, and seeing what happens. I never ever saw a single corrupted packet, or a packet delivered out of sequence! The only errors I could force were dropped packets, basically by overflowing the socket buffers on the receiving end. This test brought the network (10 Mb/s Ethernet LAN) completely to its knees, by the way (got me the honorary title of "the network killer"). Millions and millions of packets sent, only dropped packets, never anything else.
Now, this was an isolated LAN (more or less) without crazy geeks downloading DivX movies, but the hardware wasn't all that spectacular either (SparcStation 10 server, 64MB RAM, SparcStation 5 workstations, 16MB RAM -- no typo!).
I ran some more tests with more 'normal' network loads, and I never, ever, detected a single problem.
So, your mileage may vary, but I think that problems with UDP typically would happen under circumstances (i.e., severe network congestion) that would also affect TCP.
Silly idea: you could use UDP with Forward Error Correction if your data is important. Plenty of textbooks about coding theory explain the theory behind it, and how to design your customized error-correcting code. As long as you stay within the Shannon limit (i.e. don't try to exceed your channel capacity), you can make the error probability arbitrarily low. Doesn't help with disconnected cables and the like, since your channel capacity is zero then... Oh well, you can't win...
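As a toy illustration of the FEC idea: a single XOR parity packet over k data packets lets the receiver rebuild any one lost packet without a retransmission. Real codes (Reed-Solomon, etc.) tolerate more losses, but the principle is the same:

```python
def xor_parity(packets):
    """One parity packet: the XOR of k equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild a single missing packet (None) from the others + parity:
    XORing the parity with every surviving packet leaves the lost one."""
    missing = received.index(None)
    rebuilt = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                rebuilt[i] ^= b
    out = list(received)
    out[missing] = bytes(rebuilt)
    return out

data = [b"time", b"sync", b"pkts"]
par = xor_parity(data)
# Suppose the middle packet is dropped in transit...
print(recover([b"time", None, b"pkts"], par))
```

The cost is one extra packet per group and the constraint that only one loss per group is recoverable - a trade the coding-theory textbooks let you tune.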
Re:real life experience (Score:2)
Out-of-sequence delivery happens when there is more than one possible route for the packet, such as occurs over the internet. On the internet, routing is done on a case-by-case best effort basis. It's possible then for the first packet to be sent via a longer route than the second and therefore for the second packet to arrive first.
So remember - UDP across a LAN is quite a different beast to UDP across the internet.
Re:real life experience (Score:1)
Anyway, it's a relevant point to the original question: it's not just the applications and network load that matter, it's also the network topology.
Lemme get this straight. (Score:3, Funny)
To quote Dr. Evil, "Riiiight...".
Re:Lemme get this straight. (Score:1)
Our "homegrown NTP" as you call it wasn't even close to that. All we needed was a simple way to keep system clocks in line, with an accuracy of a few tenths of a second. We accomplished this in a few dozen lines of C++ (a generic message broadcasting mechanism over UDP was already implemented).
In hindsight you can always question decisions like that, but bottom line is that we had a simple problem, and we fixed it with a simple solution.
And the software (control system for rocket launches) has been in operational use since 1996, without any major problems, thank you.
Re:Lemme get this straight. (Score:2, Funny)
Ahem.. you were going to use usenet posts to synchronize your system time?!?!?!
Bwahahahahah...
(wipes tear from eye)
*sigh*, that was a good one - To quote FreeLinux, 'To quote Dr. Evil, "Riiiight...".'
Re:Lemme get this straight. (Score:1)
Re:Lemme get this straight. (Score:1)
Re:Lemme get this straight. (Score:1)
Multicasting MPEG (Score:2)
We use multicasting to 'broadcast' MPEG video over an IP network for an interactive DTV (www.kitv.co.uk) project (the video itself travels over UDP; IGMP manages the multicast group membership).
This functions largely without problems: maintaining an MPEG stream is highly time-sensitive, but it is not catastrophically sensitive to lost or dropped packets. Lost/dropped packets lead to video artifacts rather than total loss of data, because the video stream can continue from the next received packet.
The quality issue is governed by *minimising* the lost packets, not 'guaranteeing' delivery.
packet loss experienced using TCP (Score:1)
My point... I thought TCP wasn't supposed to let that happen; then again, maybe I should just blame the router. Has anyone else had similar problems with cheap NAT routers, or is this post on topic?
Re:packet loss experienced using TCP (Score:2)
Alas, I've seen this too. I wrote a file sharing program that uses TCP for file transfers, and has a download-resume feature that does an md5 checksum of the user's local file portion, and only starts the resume if the local file portion matches the corresponding portion of the uploader's file.
Every few days, I would get complaints from users that the auto-resume wasn't working. It was refusing to resume because the two file fragments' hashcodes didn't match. After going over all my checksum code checking for bugs, I finally got paranoid and added an application-computed checksum every so often to my TCP data.... and sure enough, in some cases the TCP data I sent would arrive at the downloader's machine with a checksum mismatch. Unfortunately, I haven't been able to finger a particular piece of software or hardware that causes this, but I can confirm that it does sometimes happen. :^(
Re:packet loss experienced using TCP (Score:2)
Doubtful, in this case.... this program was only available for BeOS.
Wireless (Score:2)
Re:Wireless (Score:1)
Re:Wireless (Score:1)
In most cases using UDP is a loss (Score:4, Informative)
About the only reason for using UDP is if you deliberately want to circumvent the congestion avoidance protocols that are built into TCP. So, if you are playing a game, and you need the packets to get through at all costs, but you aren't sending many packets - by using UDP you can aggressively defend the small amount of bandwidth you need - any TCP connections around will tend to back off and get out of your way; and that's reasonable if you code it carefully. But writing the protocol to do that is hard, you have to understand not only UDP but also TCP, as well as your game requirements.
And that's the real problem. In most cases people think that waving UDP at the problem will solve their problems- in fact it makes them worse; and TCP has solutions to problems only PhDs have even thought of, and the solutions are built in.
As an example, somebody I know implemented a tftp protocol using UDP. The guy is off the chart in his software abilities (trust me, the guy is amazing; he's in the top 2 percent of software engineers according to the tests). Anyway, in a back-to-back comparison against a standard ftp protocol, the tftp protocol loses by a factor of 10 or more (on a network with some congestion; I expect a quiescent network would have been much more level). Of course tftp isn't supposed to handle congestion. But that's the problem - UDP can't handle network congestion out of the box... indeed, if anything it tends to create network congestion.
The main algorithms in tcp include 'slow start' and 'exponential backoff'. Both of these are missing in UDP, and both improve the network performance enormously. If your application doesn't affect network performance and doesn't worry about packet loss much, then UDP may be the way to go, otherwise stay away from UDP.
Re:In most cases using UDP is a loss (Score:2)
You started out okay and then sank into total bullshit land as soon as you mentioned your uber-friend's tftp implementation. Newsflash: tftp is UDP from the get-go. That's the whole point. And since tftp is a request-response type protocol, of *course* it's going to see less performance out of your networks than ftp. On the other hand, try these guys. [dataexpedition.com] They built a faster-than-tcp implementation of a streaming protocol using UDP and it outperforms TCP on high-path networks and does *at least as well as* TCP on the LAN.
There is no way to "aggressively defend" your bandwidth if (for example) you're playing a game. If someone else comes along with their own UDP application that doesn't back off when detected packet loss gets extremely high, you lose out just as much as they do. There is no defence in this case, there is only *higher packet loss.*
And claiming your friend is "off the chart" like that--what is that supposed to do--lend credence to your exaggerated and false claims about UDP? Bzzzt. Nobody cares about test scores--especially for someone who re-implemented a trivial file transfer program using the very protocol it was designed for to begin with. Tell you what--you show us something impressive he did and we'll be impressed. Don't beak off about "trust me" and "he's in the top 2 percent". That's just crap--you didn't even tell us what the tests were or who administered them!
Finally, UDP wasn't designed to handle congestion. But that doesn't mean it can't. Counter-example: build congestion avoidance into your application.
Readers, don't listen to the parent of this note, he doesn't know what he's talking about. Anything TCP can do, UDP can do - the problem lies in how much work you have to do to implement it. TCP is convenient because all the work for a streaming, in-order, semi-reliable, congestion-avoiding protocol has been done for you. Unfortunately, you can't turn off the major features you don't want.
Yeesh. Someone mod parent down, it's really not worth a 4.
Re:In most cases using UDP is a loss (Score:2)
Precisely! It's supposed to be a lightweight protocol, which is one of the reasons people choose UDP, but it turns out to be slower in this case. And it's because of what's been left out of the UDP protocol stack; UDP isn't inherently bad because of this, but usually you would want what's been left out. Right now I would not want to deploy tftp ever; it lacks passwords, and its performance is poor on a congested network.
Finally, UDP wasn't designed to handle congestion. But that doesn't mean it can't. Counter-example: build congestion avoidance into your application.
Absolutely, but then you have to implement it yourself! And it is very much not simple.
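For what it's worth, the core of TCP-style congestion avoidance - additive increase / multiplicative decrease - is small enough to sketch; making it robust on real networks is the hard part being pointed at here (the constants below are arbitrary):

```python
def aimd_step(rate, loss_seen, incr=1.0, decr=0.5, floor=1.0):
    """One control step of additive-increase/multiplicative-decrease,
    the same scheme TCP uses, applied to an application send rate."""
    if loss_seen:
        return max(floor, rate * decr)   # back off hard on loss
    return rate + incr                   # otherwise probe gently upward

rate = 10.0
for loss in [False, False, True, False]:
    rate = aimd_step(rate, loss)
print(rate)   # 10 -> 11 -> 12 -> 6 -> 7.0
```

The loop above is the easy 10%; loss detection, RTT estimation, and fairness against competing flows are the 90% that TCP already ships with.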
There is no way to "aggressively defend" your bandwidth if (for example) you're playing a game. If someone else comes along with their own UDP application that doesn't back off when detected packet loss gets extremely high, you lose out just as much as they do. There is no defence in this case, there is only *higher packet loss.*
Being aggressive does not guarantee more bandwidth. But in most situations most other people are using ftp/http etc. they normally will back off and you will get more bandwidth if your protocol is aggressive. Of course if everyone is aggressive, especially inappropriately- you and everyone else ends up losing worse than if you'd have used TCP; it's rather like real life. It probably only makes sense in cases where you only need a small amount of bandwidth; but you NEED that bandwidth. If you start to lose packets due to congestion, you are supposed to send more slowly. However, if you increase the rate you send, you end up with a bigger slice of the pie; if the slice is then enough for what you need, you can control your bandwidth to some extent. However, ultimately you can't send more than your pipe will take, so really heavy congestion will still kill you.
Anything TCP can do, UDP can do--the problem lies in how much work you have to do to implement it.
Yes. Exactly. It's often a lot more work, and unless you really, really know what you are doing UDP is likely to be the wrong choice.
Re:In most cases using UDP is a loss (Score:1)
Nah it's not. Talk to the guy. He knows what he's doing. They don't implement exponential back-off and keep their streams consistent. TCP has an oscillating effect as it tries to saturate the line, then back-off, then saturate, then back-off. DataExpedition's method is to sneak in during those off-periods and make use of that bandwidth.
You don't need a PhD to do this "properly".
Quiet, you!
UDP can be made to go as fast or faster than TCP (Score:2)
Exponential backoff is braindead when you need good throughput on a fat but noisy pipe. The reason TFTP gets lousy throughput is not because it uses UDP but because it waits for every ACK before it sends the next packet rather than having a receiver window feature like TCP does.
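The lockstep-versus-window point is just arithmetic: with one packet in flight you pay a full round trip per packet, while a window of N packets amortizes that RTT N ways (the numbers below are illustrative):

```python
def throughput(window_packets, packet_bytes, rtt_s):
    """Steady-state bytes/sec of an ACK-clocked protocol: at most
    `window_packets` can be in flight during each round trip."""
    return window_packets * packet_bytes / rtt_s

# TFTP is lockstep: a window of exactly 1 (classic 512-byte blocks).
lockstep = throughput(1, 512, 0.05)    # 50 ms RTT -> ~10 KB/s, regardless of pipe size
windowed = throughput(32, 512, 0.05)   # a TCP-like 32-packet window
print(lockstep, windowed / lockstep)
```

On a LAN with sub-millisecond RTTs the lockstep penalty is invisible, which is why TFTP feels fine locally and falls over across a WAN.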
Apparently you have to be smarter than mister Top 2 Percent to make this work.
Here's some links for you:
Re:UDP can be made to go as fast or faster than TC (Score:1)
Re:UDP can be made to go as fast or faster than TC (Score:2)
Actually he's very capable of doing this kind of stuff right. More or less he had to follow the tftp spec, so what's he gonna do?
But my real point is still that most people don't know enough to do it right.
You certainly sound like you might have your head around what TCP does, so probably your protocol works very well.
Exponential backoff is braindead when you need good throughput on a fat but noisy pipe.
Yes, you are optimising your protocol to deal with cases where packet loss is caused by noise. The internet is built mostly on the assumption that packet loss is caused by congestion, so of course TCP will perform more poorly, and a more tailored protocol is a win, and UDP allows you to do that.
I do wonder if your protocol might trigger congestion collapse, but I have no doubt you've designed against that and tested for it too.
Re:In most cases using UDP is a loss (Score:1)
UDP and TCP flow control (Score:1)
Also, some TCP stacks are implemented in a way that if you start TCP connections one after another, letting each one ramp up to its peak bandwidth usage, the third and later connections will not be able to grab any bandwidth. This is apparently a problem with Windows 9x and some ADSL connections. TCP flow control lets the first two connections split the bandwidth between each other, but the third connection can't ramp up enough bandwidth and just chokes. Apparently this problem is mitigated by using different types of network connections or different operating systems.
Re:UDP and TCP flow control (Score:2)
Re:UDP and TCP flow control (Score:1)
Trickier to shape UDP (Score:1)
However, as others have pointed out, which one you use should depend on the situation. If the occasional lost or out-of-sequence packet is not so important (e.g. video streaming) then great, but if reliability is absolutely essential then TCP is probably better (plus TCP tends to still work when computers are stuck behind restrictive stateful firewalls).
Coding philosophy (Score:3, Insightful)
Now if only someone would standardize a reliable datagram protocol implementation.
Re:Coding philosophy (Score:1)
This is very true. You must code UDP applications to deal with packet loss, and yes, packets can be dropped even going from localhost to localhost. Overfill the network transmit or receive capacity, or use up all the buffer space, and packets will be dropped. There is no way around that without changing the UDP protocol. After all, if your application sends out 10,000 UDP packets per second and the network can only handle 2,000 per second, the other 8,000 are going to be dropped. With TCP, connection loss is all you have to code for.
Packet loss == upgrade network (Score:2)
Okay, a very small amount of packet loss can be normal and should be ignored. But if you have anything more than a tiny amount of packet loss, your network is in trouble and in serious need of upgrades. Remember that packet loss starts to cascade, because dropped packets have to traverse the network two or more times, and each crossing uses some bandwidth. One packet gets dropped, but when it is retransmitted some other packet gets dropped and retransmitted, and as a result your network gets really slow. In theory TCP will just slow down, but users will restart the slow jobs trying to get a fast connection.
Sure, TCP will get through even at 85% packet loss (I once had a customer, a Baby Bell I won't name, with 85% packet loss), but your applications will often start timing out in other areas. In theory things should still operate, just slowly, but many programs have their own timeouts outside of TCP so they can detect when the connection went down.
Don't use TCP where you don't need it, though. I once had to debug a heartbeat for a failover system, where the system only provided TCP packets. When there was a failure in the network we could switch networks easily enough, but then we had a lot of code to try to figure out whether the heartbeat that arrived just after the network switch was old (and contained invalid information about the failed node) or current. It always seemed to work, but I didn't sleep well many nights knowing that a customer could lose a critical computer because of code that I was supposed to make work.
So in theory you can say TCP is better when packet loss is expected, and UDP is better when lost packets should be ignored. In practice, though, if you have significant packet loss you need to upgrade the network.
Yep. (Score:1)
In one of our courses, we had to write a network game in Java and we decided to use UDP. When we were running it in the university labs (130 boxen, 100 Mbps) we had a packet loss of about 5%, mainly caused by Ethernet collisions, I guess.
tcp vs udp.. (Score:2)
The best method is really a send/reply setup in your code. You send data and the other side replies with "I got it" (or something), like what is done in RFC 821 - SMTP.
SCTP (Score:1)
Implementations are already available or becoming available, on various OSs. Although designed for transporting real-time telephone signalling over IP (as stated in the RFC), it is applicable to anything else with similar requirements.
What is UDP good for? (Score:2)
Frankly, the single most important use of UDP is for sending singleton datagrams. Not to be pissy, but, um... duh. Consider how DNS works: queries are sent over UDP, and if the response is small enough it goes back in a UDP packet. If it's too big, it's sent as a TCP stream. A prime example of what UDP is good for.
Frankly, as far as UDP streams are concerned, I've never found a use for them that didn't involve a realtime response on the receiving end. Network gaming and video streaming are one example; certain kinds of telecontrol are another.
The thing that bugs me most is that, by trade I am not a network programmer. I have done network stuff in the past, but it upsets me that the sum of information in all the commentary on Slashdot is more and more frequently less than I already knew about the topics.
Maybe Bruce Perens [slashdot.org] is right.
pr0n suckage causing crc errors? (Score:2)
NFS uses UDP. (Score:2)
From what I understand, UDP will lose packets in a congested environment when passing through a router. Your stack will get the packet onto the wire; once it is on the wire, it will make it to the other machine if that machine is on the same wire (subnet), barring sunspots. As soon as it hits a router, it can be dropped.
TCP has some nasty timers in it that make it entirely unsuitable for real-time traffic. It assumes that a packet was lost because of congestion, so it backs off on the retry rather than retransmitting immediately. In modern corporate/telco IP networks, congestion simply isn't the usual cause of loss. What is really bad is having to wait up to 4+ minutes to find out the connection is dead.
UDP allows you to do what you want, and avoid anything you don't need. However, if you need things like in-order, reliable transmission you are probably better off with TCP. If you are simply providing a response, then you should be fine with UDP.
If you are after high-traffic, high connection, high throughput, UDP seems to be the way to go. If you are after easy programming and guaranteed in-order delivery, TCP is your tool.
As someone pointed out, there is a new player in town, and that's SCTP. It was invented specifically because TCP is bad for low-latency transmissions (such as Telephony!). It is used in the SS7 over IP protocols, such as M3UA, SUA, etc.
That is why you will see a mix of streams in various protocols. H.323 uses TCP for control, and UDP for speech/video. SIP allows you the choice of UDP/TCP for call control.
Jason Pollock
UDP Loss and alternate packet recovery schemes. (Score:1)
TCP is certainly the easiest to implement, but at only a few percent packet loss (under 5%) it comes to a grinding halt. I have never seen TCP get to full speed between two T3s at 45 Mbps, and even with much tweaking of the TCP windows and other timing parameters, in some cases we still couldn't get over 56K over a lossy 45 Mbps link.
With UDP, 5% loss is 5% loss; MPEG and most other video formats will not be happy, with the video tearing up and stalling all the time.
It's possible to implement TCP's transmission protocol over UDP; only the packet headers will not be standard TCP headers.
At my former company we developed two mechanisms to re-implement a TCP-like connection using UDP packets. One, called SPAK, is an aggressive retransmission protocol: unlike TCP, which intentionally backs off under congestion (loss), this pushes harder! On a 64K link with 90% loss measured with ping, we were able to send 60 Kbps! This was for a live event from Sri Lanka to the USA on March 14, 1997 with Arthur C. Clarke, and it worked, even to my surprise. I had to use a 2400 bps modem to connect to the remote server because telnet couldn't establish a connection over the 64K line into that country.
The other method we called ECIP, for Error Correction Internet Protocol. It used erasure codes (unheard of at that time); the best papers on this are by Luigi Rizzo: http://info.iet.unipi.it/~luigi/fec.html These also worked well. It took about four years of work to find an optimal coding and transmission scheme, which ultimately borrowed some of the SPAK ideas to include retransmission but kept the latency to under one packet round-trip time! This is important when doing video conferencing.
Both of these protocols were able to consistently move up to 40 Mbps between two lossy T3s from the USA to Korea. This was tested over a 3-year period.
I still own the rights to these, and if anyone is interested in commercializing or open-sourcing them, you can contact me through livecam.com
Rate Policy vs Priority Policy (Score:2)
Re:I've noticed quite a bit of packet loss (Score:1)