
UDP - Packet Loss in Real Life?

PacketStorm asks: "There's always an argument between TCP and UDP users. TCP users complain that UDP is non-deterministic and lossy, while UDP users complain that TCP is slower and nobody needs all the features anyway. So the question is - has anyone actually seen/experienced UDP loss in high-traffic environments? Is the degradation of UDP any better or worse than TCP in a congested environment? TCP also craps out in times of congestion, but at least you know - or do you? Experiences?"
  • UDP packet loss (Score:2, Interesting)

    by fraxas ( 584069 )
    I don't know if it means anything, but I have experienced UDP packet loss on a regular basis -- playing online games. Unit positions are most often tracked over UDP because it doesn't really matter if things warp around a little when the client receives an update that puts a unit somewhere other than where the client extrapolated it was. There was a big fiasco at launch-time for Anarchy Online -- they used TCP and not UDP for about 8 hours, during which time the game was unplayable because the servers couldn't maintain all the TCP connections. Interpret this as you will.
    • Re:UDP packet loss (Score:3, Informative)

      by Polo ( 30659 )
      Hmmm... it's LOTS easier to write an application to handle UDP than to handle TCP.

      With TCP, you have a listening socket and then a socket for each client. Each socket has a file descriptor, and you have to select() on all of them to check for activity. There is a lot of housekeeping to do - though it is a solved problem - all webservers solve it. Timeouts are harder to determine from the application's perspective because there can be retransmits and stuff going on that you have no idea about.

      However, with UDP, it's laughably easy. You just do one recvfrom() and you get a packet and it also fills in a data structure to tell you where it came from. No filling in fd_set structures and no running out of file descriptors. When you got the packet, you got it.
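
      For instance, here's roughly the entire receive path (a minimal sketch; the port number and buffer size are arbitrary choices of mine):

        /* Minimal UDP receiver -- one socket, no listen()/accept(), no fd_set. */
        #include <stdio.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            struct sockaddr_in addr = { 0 }, peer;
            char buf[1500];

            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(9999);        /* port invented for the example */
            bind(fd, (struct sockaddr *)&addr, sizeof addr);

            for (;;) {
                socklen_t peerlen = sizeof peer;
                ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                     (struct sockaddr *)&peer, &peerlen);
                if (n < 0)
                    break;
                /* recvfrom() filled in 'peer': we know who sent this datagram */
                printf("%zd bytes from %s:%d\n", n,
                       inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
            }
            return 0;
        }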

      Most online games and stuff would really like to know when packets were sent, when they were received and if packets are being dropped. With UDP you can usually find out these values directly. With TCP I think it would be more like an educated guess.

      One drawback is that UDP is quite easy to spoof. I can send a packet and it is up to the application to figure out it has been spoofed.

      If I were downloading a patch to a game I think TCP would be the better choice. It already has the smarts to pace the connection, transmit the data reliably and prevent spoofing.
      • Re:UDP packet loss (Score:5, Insightful)

        by Jeremi ( 14640 ) on Thursday July 11, 2002 @12:09AM (#3862327) Homepage
        However, with UDP, it's laughably easy. You just do one recvfrom() and you get a packet and it also fills in a data structure to tell you where it came from.

        ... or maybe you don't get the packet, because a router was loaded down and had to drop it. Now you gotta implement a timeout and retransmit protocol. Not to mention that packets may arrive out of order from the way they were sent, so if ordering is important (and it usually is) you have to implement some sort of sequencing-tag system too. A few dozen hours later, you find out you've implemented something that looks suspiciously like a primitive version of TCP... :^)
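
        In practice, that reinvention usually starts with a little header on every datagram, something like this (a sketch; all the field names are invented):

          /* Typical hand-rolled reliability header layered on top of UDP. */
          #include <stdint.h>

          struct pkt_hdr {
              uint32_t seq;      /* sender's sequence number: ordering + duplicate detection */
              uint32_t ack;      /* highest seq seen from the peer: loss detection */
              uint32_t send_ms;  /* sender timestamp: RTT estimates, staleness checks */
              uint16_t len;      /* payload bytes that follow this header */
              uint16_t flags;    /* e.g. "retransmit this until acked" */
          };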

        • Re:UDP packet loss (Score:3, Informative)

          by Polo ( 30659 )
          Well, in relation to the post about anarchy online I replied to, it seems like TCP didn't work.

          I would think SOME game data would need to be reliable, but most isn't. The problem with TCP for an online game is that you can never THROW AWAY data that's too old or unnecessary. It will be transmitted and retransmitted further delaying current data. At some point, things will become unusable.

          I don't think you could play a multi-user game for over an hour and not run into this problem. I think on lower-bandwidth connections, you might never be able to catch up once you fall behind.

          Of course, you could do what microsoft does with DirectPlay: open multiple connections, sometimes on multiple protocols, some of them asymmetrical, while ignoring silly details like firewalls and NAT.

          I think the initial handshake for dungeon siege was something like:

          first packet: udp my_ip:6073 to server_ip:6073
          reply packet: udp server_ip:6073 to my_ip:2302 (what!?!?)
          next packet: udp my_ip:2302 to server:6073

          or some garbage like that. I think the dungeon siege guys just changed it.
          • The problem with TCP for an online game is that you can never THROW AWAY data that's too old or unnecessary

            Well, that's not entirely true -- while it is the case that once you have called send() on the data, it can't be thrown away, it's also true that if you are maintaining an outbound message queue (to avoid blocking on a filled TCP send buffer), then you can remove or replace data in that queue that hasn't been sent yet. I have used this technique with reasonable success.
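
            For example (a sketch with invented types): keep one pending slot per unit in the application-level queue, and let a newer update overwrite an older one that hasn't reached send() yet.

              /* Replace-in-queue sketch: a newer position update overwrites an
                 unsent older one, so TCP never carries stale data. */
              #include <stdint.h>
              #include <sys/socket.h>

              #define MAX_UNITS 256

              struct pos_update { float x, y; uint32_t tick; };

              static struct pos_update pending[MAX_UNITS];
              static int dirty[MAX_UNITS];   /* 1 = queued but not yet sent */

              void queue_update(int unit, struct pos_update u)
              {
                  pending[unit] = u;         /* replaces any unsent older update */
                  dirty[unit] = 1;
              }

              void flush_unit(int sock, int unit)  /* call when the socket is writable */
              {
                  if (dirty[unit]) {
                      send(sock, &pending[unit], sizeof pending[unit], 0);
                      dirty[unit] = 0;
                  }
              }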

            I do agree that UDP has its place, however.

        • Re:UDP packet loss (Score:3, Informative)

          by crisco ( 4669 )
          Many games are designed not to care about dropped packets. Sure, they can be unplayable if packet loss gets too high, but the occasional dropped packet doesn't matter.

          Some tasty articles from gamasutra (might require a login, you might also find these in Google's cache):
          "TCP is evil. Don't use TCP for a game. [gamasutra.com] You would rather spend the rest of your life watching Titanic over and over in a theater full of 13 year old girls."
          article [gamasutra.com] on WON's servers for Half-Life.
          Dead Reckoning [gamasutra.com] Latency Hiding for Multiplayer Games.

          Other software might need the benefits of TCP, but game development is one familiar illustration of where UDP often wins out.

    • Well, I've been working on an online game for the last year or so and we're definitely using UDP. Mainly because we can decide on retransmission after data loss in a deterministic manner. One of the other disadvantages of TCP (correct me if I'm wrong) is that message boundaries are not preserved. You will eventually receive all the data sent, but essentially it's just a stream: you get as much data as has arrived at that time. This means you have to do some local buffering to reassemble the message that was sent! Didn't have to do that with UDP.
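
      The usual fix for that is a length prefix plus a read-exactly loop, roughly like this (a sketch; the 32-bit length field is my own framing choice):

        /* Length-prefix framing: rebuild message boundaries on top of TCP's
           byte stream. */
        #include <stdint.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        /* keep calling recv() until exactly 'len' bytes have arrived */
        static int recv_all(int fd, void *buf, size_t len)
        {
            char *p = buf;
            while (len > 0) {
                ssize_t n = recv(fd, p, len, 0);
                if (n <= 0)
                    return -1;          /* error or peer closed */
                p += n;
                len -= (size_t)n;
            }
            return 0;
        }

        int read_message(int fd, char *buf, size_t bufsz)
        {
            uint32_t netlen;
            if (recv_all(fd, &netlen, sizeof netlen) < 0)
                return -1;
            uint32_t len = ntohl(netlen);
            if (len > bufsz)
                return -1;              /* sender framed something too big */
            return recv_all(fd, buf, len) < 0 ? -1 : (int)len;
        }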
  • both are useful (Score:4, Insightful)

    by f0rtytw0 ( 446153 ) on Wednesday July 10, 2002 @11:07PM (#3862009) Journal
    UDP for streaming video and games and other sorts of things where it doesn't matter if you miss a couple of packets, and TCP where you can't miss packets, such as file transfers. There, everyone happy, go home now.
  • by Anonymous Coward on Wednesday July 10, 2002 @11:07PM (#3862010)
    Check the Internet Traffic Report [internettr...report.com]
  • by MonMotha ( 514624 ) on Wednesday July 10, 2002 @11:18PM (#3862055)
    UDP is commonly used in games and other time-sensitive environments precisely because it lacks reliability. With time-sensitive data (such as streaming video or unit positions in a game), if the data gets dropped it's not worth it to retransmit, because it would be out of date. Therefore, the program just transmits the next update and the user sees a small skip. This is better than getting "out of sync".

    TCP is designed to make unreliable networks (like the internet, which only gives "best effort delivery") reliable by ensuring that a stream can be reassembled, in order, with no missing pieces. Read the RFC for more info here. This reliability makes it good for things that need zero corruption (file transfers, for example) and aren't time critical.

    Hope this helps.

    --MonMotha
  • UDP Experience (Score:4, Interesting)

    by mchappee ( 22897 ) on Wednesday July 10, 2002 @11:44PM (#3862183)
    I'm firmly in the UDP camp. About 4 years ago we replaced our timeclocks at work (manufacturing facility) with hardened, wall-mounted PCs. I wrote a GTK app, started at boot, that takes the user's card swipe, grabs their name from the database (for display only), and sends the clock number via UDP to the timeclock server.

    During my initial proposal I mentioned to the PHBs that I would use the UDP protocol. One of my colleagues, wanting to sound important, said that UDP can be lossy. He went on to explain packet loss to the befuddled crowd. Well, the PHBs latched onto the term "packet loss". Packet loss this, packet loss that. They had no freakin' clue what it was, but it sounds pretty cool.

    Anyway, I had to set up a test in which I had all of the timeclocks start a program at the same time. This program went into a tight loop of sending UDP packets (clock numbers) across the network to the server. Each one sent 1000 clock numbers, and every single one made it across. Obviously our 100 Mb network and proper use of subnets helped, but we haven't experienced any packet loss in the four years that these things have been running. So there. :-)

    Matthew
    • Re:UDP Experience (Score:1, Insightful)

      by Anonymous Coward
      Unless an entire transaction fits in a single packet, you made a horrible decision.

      Even if it does, it's still not a great decision.
    • Re:UDP Experience (Score:1, Informative)

      by Anonymous Coward
      #1. Does the entire payload fit in 1 UDP packet? If not, does the other end know how to assemble the 2+ packets correctly? Do you have in-UDP sub-data that instructs it how to reassemble them? If so, didn't you just re-invent the wheel (ehm, TCP?)

      #2. Within your LAN is one thing; out on the big bad internet is something else.
    • by Anonymous Coward on Thursday July 11, 2002 @12:24AM (#3862389)

      /me bangs his head on his desk over and over at the sheer stupidity of this.

      Dude, I hate to sound flame-y, but do you have any understanding of what you implemented? That you got lucky and it "works" is totally irrelevant to the fact that it's completely unreliable. All it takes is one flakey piece of Ethernet dropping packets to SCREW UP FREAKING TIMESHEETS.

      This reminds me of the morons who use MySQL for financial transactions (i.e., no transactions, no foreign keys, etc) who justify their ignorance with "well, it works so far!!"

      The point isn't whether it works or not; the point is that when it fails, it fails spectacularly. Like your system. Just because you can keep spinning the chamber and the Russian roulette gun never goes off doesn't mean it never will.

      Seriously, you screwed up bad. These are the kinds of stories that really make me think that programmers should have some sort of licensing.

      • by RobinH ( 124750 )
        Dude, I hate to sound flame-y, but do you have any understanding of what you implemented? That you got lucky and it "works" is totally irrelevant to the fact that it's completely unreliable. All it takes is one flakey piece of Ethernet dropping packets to SCREW UP FREAKING TIMESHEETS.

        Ok, calm down...

        Now, I agree that UDP is built to be a fast, not-so-reliable protocol. One would initially presume that this is not the best thing to use on a punchclock system.

        However, you have to look at the whole system.

        First of all, if you put all the punchclocks, and the servers on the same subnet, then you've eliminated dropped packets due to routing.

        Secondly, if you send an acknowledgement back to the clock, then it can display to the user "OK, you're signed in" or "ERROR, please retry". If you lose either the "sign-in" packet or the acknowledgement packet, then all the user has to do is swipe again to retry.

        Thirdly, these kind of systems always (should) have a manual backup, so if, for some reason, the system records Buddy punching in at 7 am, but never punching out, then a supervisor can go in and manually update the database to fix the problem.

        Just remember that programs like this don't exist in a vacuum; they are always part of a larger picture, and they need to function in that framework.
        • by bugg ( 65930 )
          Secondly, if you send an acknowledgement back to the clock, then it can display to the user "OK, you're signed in" or "ERROR, please retry". If you lose either the "sign-in" packet or the acknowledgement packet, then all the user has to do is swipe again to retry.

          As a rule, people who build acknowledgements into their UDP-based protocol should have used TCP.

          • As a rule, people who build acknowledgements into their UDP-based protocol should have used TCP.

            Not necessarily. A better rule is that people who build sequencing and acknowledgements in should have used TCP. But if you can do without one or the other then sometimes UDP is worth it, but not often.

            Sumner
            • Not necessarily. A better rule is that people who build sequencing and acknowledgements in should have used TCP. But if you can do without one or the other then sometimes UDP is worth it, but not often.

              I agree. In this case, you don't need sequencing, because there's only ever one transaction out there from each client at any given time. The transaction is as simple as "12345"->"OK". The UDP protocol is so much simpler to program (and you don't have to worry about managing connections) that for this particular application, it certainly seems worthwhile.
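
              Concretely, the whole client side can be one stop-and-wait function, something like this (a sketch; the server address, port, timeout and retry count are all invented):

                /* Punch-clock client: send the clock number, wait briefly for
                   "OK", retry a couple of times, then report failure. */
                #include <string.h>
                #include <unistd.h>
                #include <arpa/inet.h>
                #include <netinet/in.h>
                #include <sys/socket.h>
                #include <sys/time.h>

                int punch(const char *clock_no)
                {
                    int fd = socket(AF_INET, SOCK_DGRAM, 0), ok = -1;
                    struct sockaddr_in srv = { 0 };
                    struct timeval tv = { 1, 0 };       /* 1-second ack timeout */
                    char reply[16];

                    srv.sin_family = AF_INET;
                    srv.sin_port = htons(7777);
                    inet_pton(AF_INET, "10.0.0.1", &srv.sin_addr);
                    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

                    for (int tries = 0; tries < 3 && ok < 0; tries++) {
                        sendto(fd, clock_no, strlen(clock_no), 0,
                               (struct sockaddr *)&srv, sizeof srv);
                        if (recv(fd, reply, sizeof reply, 0) > 0 &&
                            strncmp(reply, "OK", 2) == 0)
                            ok = 0;     /* display "you're signed in" */
                    }
                    close(fd);
                    return ok;          /* nonzero: display "ERROR, please retry" */
                }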
        • OK, what if you have a flakey NIC? What if a cable gets stepped on or eaten by mice?

          I once worked on a system that made use of PCs to process bitstreams for a digital radio broadcast system. Communications were UDP-based, and we had a problem where the odd data packet was being lost (0.5 seconds of audio lost approx. every 20 seconds). I spent a MONTH debugging code to figure out where the lost packets were going. Then, using tcpdump, I saw that the NIC was actually losing packets (each packet contained an integer counter). I was able to repeat the loss of packets at will.

          Replaced the NIC, and all was well.

          This is not to put down UDP; for this application, TCP would have retransmitted and the added delay would have created problems elsewhere.

          Moral of the story: expect data to be lost over ANY kind of network.

      • This reminds me of the morons who use MySQL for financial transactions (i.e., no transactions, no foreign keys, etc) who justify their ignorance with "well, it works so far!!"

        When was the last time you used MySQL? MySQL now supports several table types other than the default MyISAM type, including types with full transaction support.

      • MySQL now has transactions and foreign key support using the InnoDB table type. It went beta about a year ago, production about six months ago, and I have heard few complaints, though I don't use it myself.
    • Re:UDP Experience (Score:1, Insightful)

      by Anonymous Coward
      That's crazy. A few dropped packets and those PHBs are going to be pretty pissed. UDP has its place, but depending on it for people's timesheets isn't one of them. It's just a matter of time before you get someone who punched in at 8am and never punched out. What are you going to tell the PHBs then?
    • What if a packet becomes corrupted, something that will happen? You need to build reliability into the application layer, whereas with TCP it is there already.
    • I'm a skydiver. I never use a parachute. Instead, I just jump out of planes and hope that I land on a large air mattress. It's worked perfectly so far. Why should I switch? All you silly fools using parachutes make me laugh.
    • Well, the main problem compared with TCP will be scalability. A few small transactions on a LAN will work correctly, so if your application will only ever run in that environment, then everything should be OK.

      Problems would arise if the system grew in the number of transactions, their size, or the reach of the network (from LAN to MAN, or WAN).

      From experience: I worked on "debugging" a small database application that used UDP. They added remote dial-in clients. While it worked smoothly on a LAN with few routers, the moment dial-in customers started trying to update data on the system, the fun started :). The original developer assumed they'd never change the environment, and had a fixed retransmission scheme that worked OK on the LAN but constantly gave the dial-in clients a bad time.

    • But what you are talking about is an application that sends the clock number. The number of packets hitting the time server, though many, would not be very big. This means the load may not be very much. Whereas when we talk about a gaming application, the payload *can* be heavy, in which case there are chances of packet loss.
    • Within our company we have done extensive testing on this. We found that if UDP packets are sent consecutively at under 5ms intervals, the loss is approximately 3-5% (if this is the only network traffic!). If you decrease the interval, you will notice a logarithmic increase in packet loss.
      • I don't know about how your network is, but my networks don't have that problem.

        I've got 3 Cisco 2924XL switches all linked together. I run about 25 3Com Ethernet phones that all use raw Ethernet packets (think datagrams, like UDP). The phones are 10 Mb and many have PCs hooked to them (the phones are 10 Mb hubs). Each of the phones sends a sequence packet number. So far in my testing, I have never seen a lost sequence number, and this is over millions of packets.

        Our other office is just a smaller version of the same thing with newer 2950 switches and only 4 phones. According to the switch, it's at about 1/2,000,000 of its capacity most of the time when it's busy.
  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Wednesday July 10, 2002 @11:56PM (#3862252)
    Comment removed based on user account deletion
  • What argument? (Score:4, Insightful)

    by jkujawa ( 56195 ) on Wednesday July 10, 2002 @11:56PM (#3862257) Homepage
    Jesus. Both have their uses. You use TCP if you need the reliability and a stateful connection. You use UDP if it doesn't matter that a packet gets dropped here or there. Things like games and streaming media are good examples. This is rather like comparing Pepsi to milk. They both have their place. It's the job of a good engineer to determine which is most appropriate.
    • This is rather like comparing Pepsi to milk. They both have their place. In my belly?
    • This is rather like comparing Pepsi to milk. They both have their place.

      Except for pepsi.

    • Seems like UDP would work OK on an unclogged network and have the benefit of adding less congestion compared to TCP with all of its handshaking.

      For a low-traffic, more or less isolated network, UDP might be a simple solution. OTOH, with PCs with 100 Mb cards so cheap these days, people could probably afford the extra buzz of TCP without noticing any appreciable degradation of performance or incurring great cost.

      The day the network starts to become the least bit congested and any UDP packets get dropped, then everyone shifts to TCP and things start to really get congested.

      A binary QoS mechanism exhibiting an inherent problem.

  • The only protocol I ever use is ICMP.

    - A.P.
  • Alright, if you're doing anything over the real internet, you will encounter packet loss. So what do you want to happen when packet loss occurs? Here are your options:
    1. Automatically and transparently timeout waiting for the packet, and ask for retransmission. In this case, use TCP.
    2. Anything else. In this case, use UDP.
    As an example of "Anything Else" maybe notice that a packet is missing, ask for a retransmission, and draw a little rendered animation of a gerbil chewing on a length of cat5utp in the corner of the screen.
  • Could you please elaborate on that for me? I always thought NTMs (non-deterministic Turing machines) were so powerful since they always made the "right" decision and didn't need luck or a specific rule set; they just non-deterministically chose the right path.

    If you have a network that is just sending packets correctly without decisions, then all the better, and we should take it on tour. Maybe, however, you're referring to the complete broadcast mentality, where packets aren't sent to specific hosts as required but to all hosts, since this is how non-determinism is typically simulated; but simulated non-determinism and true non-determinism are a tad different.

    Maybe it's just that I don't really know a lot of networking theory, but that term just sort of jumped out at me. I've often equated non-determinism with magic...
    • In the context of networking, TCP can be considered deterministic because the output of the connection on the receiving end will be identical in all ways to the input of the connection on the transmitting end. This is not the case in UDP, where it is possible for packets to arrive in a different order than they were transmitted, or not at all. This is the heart of determinism: a deterministic algorithm has one and only one possible path for each possible input.

      A deterministic system maps input to output 1:1; a non-deterministic one maps input to output 1:some_number.

      Note that TCP isn't really deterministic: What if all the routes between the source and destination go down? The packet won't be received. It's just that it's more likely to be deterministic. And anyway, TCP packets get dropped just like UDP packets; it's just that TCP packets get resent.

      btw, this has nothing to do with the "magic" that non-deterministic systems exhibit (these systems can work by looking for a specific output for a specific input out of all the outputs that can be generated from that input).
  • I used to work with a company that sent data (mostly video) to all customers via satellite, in multicast. The uplink went through the terrestrial network. IIRC, the whole traffic was sent over UDP.

  • As some people have pointed out, both UDP and TCP have their advantages. Note though, that for some applications you don't want retransmission at all! What's the point in having part of an audio stream retransmitted, or a video stream for that matter? It only makes things worse.

    An excellent example comes from my own experience. Years ago we built a system that needed to synchronize system clocks between a server (hooked up to an atomic clock) and a bunch of workstations (I know about NTP, it was just too complicated for our humble needs). The server would broadcast the time over UDP every 30 seconds, and the workstations would synchronize their system clocks. This is a typical example where you don't want retransmission of lost packets, since the data (i.e., the system time) has become invalid in the meantime. Better to wait for the next update, since the system clocks will drift only microseconds in 30 seconds anyway.
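
    The whole sender amounts to a few lines; something like this sketch of the idea (not our actual code -- the broadcast address and port are made up, and a real system would send a fixed-size, network-order timestamp rather than a raw time_t):

      /* Broadcast the server's time every 30 seconds; receivers just take
         whatever arrives next, so a lost packet needs no retransmission. */
      #include <time.h>
      #include <unistd.h>
      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <sys/socket.h>

      int main(void)
      {
          int fd = socket(AF_INET, SOCK_DGRAM, 0), on = 1;
          struct sockaddr_in bcast = { 0 };

          setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);
          bcast.sin_family = AF_INET;
          bcast.sin_port = htons(12345);
          inet_pton(AF_INET, "192.168.1.255", &bcast.sin_addr);

          for (;;) {
              time_t now = time(NULL);   /* would come from the atomic clock */
              sendto(fd, &now, sizeof now, 0,
                     (struct sockaddr *)&bcast, sizeof bcast);
              sleep(30);                 /* a lost packet just means a 30s wait */
          }
      }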

    Before implementing this, I sort of tested UDP reliability. I was warned about dropped packets, corrupted packets and packets delivered out of sequence. The test consisted of pumping out as many packets as possible, and seeing what happened. I never ever saw a single corrupted packet, or a packet delivered out of sequence! The only errors I could force were dropped packets, basically by overflowing the socket buffers on the receiving end. This test brought the network (10 Mb/s Ethernet LAN) completely to its knees, by the way (got me the honorary title of "the network killer"). Millions and millions of packets sent: only dropped packets, never anything else.

    Now, this was an isolated LAN (more or less) without crazy geeks downloading DivX movies, but the hardware wasn't all that spectacular either (SparcStation 10 server, 64MB RAM, SparcStation 5 workstations, 16MB RAM -- no typo!).

    I ran some more tests with more 'normal' network loads, and I never, ever, detected a single problem.

    So, your mileage may vary, but I think that problems with UDP typically would happen under circumstances (i.e., severe network congestion) that would also affect TCP.

    Silly idea: you could use UDP with Forward Error Correction if your data is important. There are plenty of textbooks about coding theory that explain the theory behind it, and how to design your customized error-correcting code. As long as you stay within the Shannon limit (i.e., don't try to exceed your channel capacity), you can make the error probability arbitrarily low. Doesn't help with disconnected cables and the like, since your channel capacity is zero then... Oh well, you can't win...
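
    The simplest version of the idea, just to make it concrete (a sketch; K=4 and the packet size are arbitrary, and real codes like Reed-Solomon tolerate more than one loss per group):

      /* XOR-parity FEC: after every K data packets, send their XOR as a
         parity packet.  Any single loss in the group is recoverable. */
      #include <string.h>

      #define K        4
      #define PKT_SIZE 512

      void make_parity(const unsigned char data[K][PKT_SIZE],
                       unsigned char parity[PKT_SIZE])
      {
          memset(parity, 0, PKT_SIZE);
          for (int i = 0; i < K; i++)
              for (int j = 0; j < PKT_SIZE; j++)
                  parity[j] ^= data[i][j];
      }

      /* rebuild the one missing packet from the K-1 survivors plus parity */
      void recover(const unsigned char data[K][PKT_SIZE], int missing,
                   const unsigned char parity[PKT_SIZE],
                   unsigned char out[PKT_SIZE])
      {
          memcpy(out, parity, PKT_SIZE);
          for (int i = 0; i < K; i++)
              if (i != missing)
                  for (int j = 0; j < PKT_SIZE; j++)
                      out[j] ^= data[i][j];
      }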
    • On a simple network, you won't get out-of-sequence packets with UDP.

      Out-of-sequence delivery happens when there is more than one possible route for the packet, such as occurs over the internet. On the internet, routing is done on a case-by-case best effort basis. It's possible then for the first packet to be sent via a longer route than the second and therefore for the second packet to arrive first.

      So remember - UDP across a LAN is quite a different beast to UDP across the internet.
      • That's an excellent point. I sort of meant to hint at that, but I guess I should have put more emphasis on our particular setup.

        Anyway, it's a relevant point to the original question: it's not just the applications and network load that matter, it's also the network topology.

    • NTP was too complicated(??), so you designed, built and tested your own homegrown version of NTP??????

      To quote Dr. Evil, "Riiiight...".

      • Whatever. All I can remember is that in those days we looked at NTP (or maybe it was NNTP??), and did not like the idea of going through and possibly maintaining 40,000 lines of C (that's the way it was in '94, or at least that's how I remember it).
        Our "homegrown NTP" as you call it wasn't even close to that. All we needed was a simple way to keep system clocks in line, with an accuracy of a few tenths of a second. We accomplished this in a few dozen lines of C++ (a generic message broadcasting mechanism over UDP was already implemented).

        In hindsight you can always question decisions like that, but bottom line is that we had a simple problem, and we fixed it with a simple solution.

        And the software (control system for rocket launches) has been in operational use since 1996, without any major problems, thank you.
        • All I can remember that in those days we looked at NTP (or maybe it was NNTP??)... All we needed was a simple way to keep system clocks in line, with an accuracy of a few tenths of a second.

          Ahem.. you were going to use usenet posts to synchronize your system time?!?!?!

          Bwahahahahah...

          (wipes tear from eye)
          *sigh*, that was a good one - To quote FreeLinux, 'To quote Dr. Evil, "Riiiight...".'
          • Ok, you got me. I made a mistake. This was 8 years ago, we looked at something which sounded similar to 'NTP'. I can't remember what it was called exactly any more, and I'm not gonna waste more time on it. All I know is that we analyzed the problem, evaluated what was available, and made a decision.
          • Hah, OK, I've found it. It was called 'xntp', where the 'x' apparently stood for 'experimental', rather than 'X' as in 'X-server'.
        • And the software (control system for rocket launches) has been in operational use since 1996, without any major problems, thank you.

          Yea, that kind of scares me.
  • One of the points for using UDP over TCP is where 'guaranteed' data delivery is not a hard requirement, i.e., losing data does not cause a catastrophic failure.

    We use multicasting (the multicast streams are carried over UDP) to 'broadcast' MPEG video over an IP network for an interactive DTV (www.kitv.co.uk) project.

    This functions largely without problems, because maintaining an MPEG stream is highly time-sensitive but not catastrophically sensitive to lost or dropped packets. Lost/dropped packets lead to video artifacts, not total loss of data, because the video stream can continue from the next received packet.

    The quality issue is governed by *minimising* lost packets, not 'guaranteeing' delivery.
  • Packet loss experienced using TCP: A while back I was getting a lot of CRC errors with TCP transfers. Just to give you an idea how bad it was: if I downloaded 40 15-meg files, at least half would have CRC errors. I never received any error messages during the transfers, and I used multiple protocols, client apps, even different computers on my LAN. I isolated the problem by replacing a Siemens SpeedStream NAT router with a custom-built router, and I've had no CRC errors yet.
    My point... I thought TCP wasn't supposed to let that happen; then again, maybe I should just blame the router. Has anyone else had similar problems with cheap NAT routers, or is this post on topic?
    • Just to give you an idea how bad it was: if I downloaded 40 15-meg files, at least half would have CRC errors [...] I thought TCP wasn't supposed to let that happen; then again, maybe I should just blame the router. Has anyone else had similar problems with cheap NAT routers, or is this post on topic?

      Alas, I've seen this too. I wrote a file sharing program that uses TCP for file transfers, and has a download-resume feature that does an md5 checksum of the user's local file portion, and only starts the resume if the local file portion matches the corresponding portion of the uploader's file.

      Every few days, I would get complaints from users that the auto-resume wasn't working. It was refusing to resume because the two file fragments' hashcodes didn't match. After going over all my checksum code checking for bugs, I finally got paranoid and added an application-computed checksum every so often to my TCP data.... and sure enough, in some cases the TCP data I sent would arrive at the downloader's machine with a checksum mismatch. Unfortunately, I haven't been able to finger a particular piece of software or hardware that causes this, but I can confirm that it does sometimes happen. :^(
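
      Roughly what such an application-level check looks like (a sketch using zlib's crc32(); the chunk header layout is my invention, not the actual program's):

        /* Belt-and-braces integrity check on top of TCP: CRC32 every chunk,
           even though TCP already checksums its segments. */
        #include <stdint.h>
        #include <zlib.h>

        struct chunk_hdr {
            uint32_t len;   /* payload bytes that follow */
            uint32_t crc;   /* crc32 of the payload */
        };

        /* sender side */
        void fill_header(struct chunk_hdr *h, const unsigned char *data,
                         uint32_t len)
        {
            h->len = len;
            h->crc = (uint32_t)crc32(0L, data, len);
        }

        /* receiver side: returns 1 if the chunk survived the trip intact */
        int chunk_ok(const struct chunk_hdr *h, const unsigned char *data)
        {
            return (uint32_t)crc32(0L, data, h->len) == h->crc;
        }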

  • Also, there is an issue of IP over wireless, where packet loss is a given, at quite high rates. Thus, if you need the reliability either use TCP or add a reliability layer over UDP.
    • Every wireless protocol meant to carry data that I am aware of has packet retransmission built in, e.g., 802.11a/b/g and CDPD.
      • You mean frame retransmission? Wireless protocols do not deal with packets. They're layer 2; packets are layer 3.
  • by WolfWithoutAClause ( 162946 ) on Thursday July 11, 2002 @06:59AM (#3863251) Homepage
    If you have very special requirements, or very non special requirements, or you simply can't use TCP for some reason, then UDP can give you better performance. But it usually won't.

    About the only reason for using UDP is if you deliberately want to circumvent the congestion avoidance protocols that are built into TCP. So, if you are playing a game, and you need the packets to get through at all costs, but you aren't sending many packets - by using UDP you can aggressively defend the small amount of bandwidth you need - any TCP connections around will tend to back off and get out of your way; and that's reasonable if you code it carefully. But writing the protocol to do that is hard: you have to understand not only UDP but also TCP, as well as your game requirements.

    And that's the real problem. In most cases people think that waving UDP at the problem will solve their problems - in fact it makes them worse; and TCP has solutions to problems only PhDs have even thought of, and the solutions are built in.

    As an example, somebody I know implemented a tftp protocol using UDP. The guy is off the chart in his software abilities (trust me, the guy is amazing; he's in the top 2 percent of software engineers according to the tests). Anyway, in a back-to-back comparison against a standard ftp protocol, the tftp protocol loses by a factor of 10 or more (on a network with some congestion; I expect a quiescent network would have been much more level). Of course tftp isn't supposed to handle congestion. But that's the problem - UDP can't handle network congestion out of the box... indeed, if anything, it tends to create network congestion.

    The main algorithms in TCP include 'slow start' and 'exponential backoff'. Both of these are missing in UDP, and both improve network performance enormously. If your application doesn't affect network performance and doesn't worry about packet loss much, then UDP may be the way to go; otherwise stay away from UDP.

    • You don't know what you're talking about.

      You started out okay and then sank into total bullshit land as soon as you mentioned your uber-friend's tftp implementation. Newsflash: tftp is UDP from the get-go. That's the whole point. And since tftp is a request-response type protocol, of *course* it's going to see less performance out of your networks than ftp. On the other hand, try these guys. [dataexpedition.com] They built a faster-than-tcp implementation of a streaming protocol using UDP and it outperforms TCP on high-path networks and does *at least as well as* TCP on the LAN.

      There is no way to "aggressively defend" your bandwidth if (for example) you're playing a game. If someone else comes along with their own UDP application that doesn't back off when detected packet loss gets extremely high, you lose out just as much as they do. There is no defence in this case, there is only *higher packet loss.*

      And claiming your friend is "off the chart" like that--what is that supposed to do--lend credence to your exaggerated and false claims about UDP? Bzzzt. Nobody cares about test scores--especially for someone who re-implemented a trivial file transfer program using the very protocol it was designed for to begin with. Tell you what--you show us something impressive he did and we'll be impressed. Don't beak off about "trust me" and "he's in the top 2 percent". That's just crap--you didn't even tell us what the tests were or who administered them!

      Finally, UDP wasn't designed to handle congestion. But that doesn't mean it can't. Counter-example: build congestion avoidance into your application.

      Readers, don't listen to the parent of this note; he doesn't know what he's talking about. Anything TCP can do, UDP can do - the problem lies in how much work you have to do to implement it. TCP is convenient because all the work for a streaming, in-order, semi-reliable, congestion-avoiding protocol has been done for you. Unfortunately, you can't turn off the major features you don't want.

      Yeesh. Someone mod parent down, it's really not worth a 4.
      • Newsflash: tftp is UDP from the get-go. That's the whole point. And since tftp is a request-response type protocol, of *course* it's going to see less performance out of your networks than ftp.

        Precisely! It's supposed to be a lightweight protocol, which is one of the reasons people choose UDP, but it turns out to be slower in this case. And it's because of what's been left out of the UDP protocol stack; UDP isn't inherently bad because of this, but usually you would want what's been left out. Right now I would not want to deploy tftp ever: it lacks passwords, and its performance is poor on a congested network.

        Finally, UDP wasn't designed to handle congestion. But that doesn't mean it can't. Counter-example: build congestion avoidance into your application.

        Absolutely, but then you have to implement it yourself! And it is very much not simple.

        There is no way to "aggressively defend" your bandwidth if (for example) you're playing a game. If someone else comes along with their own UDP application that doesn't back off when detected packet loss gets extremely high, you lose out just as much as they do. There is no defence in this case, there is only *higher packet loss.*

        Being aggressive does not guarantee more bandwidth. But in most situations most other people are using ftp/http etc.; they will normally back off, and you will get more bandwidth if your protocol is aggressive. Of course, if everyone is aggressive, especially inappropriately, you and everyone else end up losing worse than if you'd used TCP; it's rather like real life. It probably only makes sense in cases where you only need a small amount of bandwidth, but you NEED that bandwidth. If you start to lose packets due to congestion, you are supposed to send more slowly. However, if you increase the rate you send, you end up with a bigger slice of the pie; if the slice is then enough for what you need, you can control your bandwidth to some extent. However, ultimately you can't send more than your pipe will take, so really heavy congestion will still kill you.

        Anything TCP can do, UDP can do--the problem lies in how much work you have to do to implement it.

        Yes. Exactly. It's often a lot more work, and unless you really, really know what you are doing UDP is likely to be the wrong choice.

    • I just finished writing a UDP file transfer protocol that goes just as fast as TCP on a clean network and handily outperforms it on noisy ones.

      Exponential backoff is braindead when you need good throughput on a fat but noisy pipe. The reason TFTP gets lousy throughput is not because it uses UDP but because it waits for every ACK before it sends the next packet, rather than having a receiver window feature like TCP does.
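
      To put numbers on that (picked for illustration): stop-and-wait caps throughput at one block per round trip, no matter how fat the pipe is. For TFTP's 512-byte blocks on a 50 ms RTT path:

        throughput <= block_size / RTT = 512 bytes / 0.050 s = 10,240 bytes/s (about 10 KB/s)

      whereas a windowed protocol can keep window_size / RTT bytes in flight.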

      Apparently you have to be smarter than mister Top 2 Percent to make this work.

      Here's some links for you:

      • Good luck with that. Try it and see whether your custom application works so well with extremely large files where the packets are delivered out of order. Use two dummynets connected end to end - after you're done testing with the dummynets, come back and let us know whether it really does outperform TCP for extremely large files. (Like, tens of gigabytes.)
      • Apparently you have to be smarter than mister Top 2 Percent to make this work.

        Actually he's very capable of doing this kind of stuff right. More or less he had to follow the tftp spec, so what's he gonna do?

        But my real point is still that most people don't know enough to do it right.

        You certainly sound like you might have your head around what TCP does, so probably your protocol works very well.

        Exponential backoff is braindead when you need good thoughput on a fat but noisy pipe.

        Yes, you are optimising your protocol to deal with cases where packet loss is caused by noise. The internet is built mostly on the assumption that packet loss is caused by congestion, so of course TCP will perform more poorly, and a more tailored protocol is a win, and UDP allows you to do that.

        I do wonder if your protocol might trigger congestion collapse, but I have no doubt you've designed against that and tested for it too.

    • Modern TCP stacks also include additive increase (congestion avoidance), multiplicative decrease, selective acknowledgements, and explicit congestion notification (ECN). Very good for keeping things going for bulk transfers. Unfortunately, they also include a nice big socket buffer, delayed acknowledgements, and Nagle's algorithm, which are ABSOLUTE DEATH to an interactive online game attempting to keep lag under 200ms (unless you go and turn them off explicitly).
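
      For what it's worth, two of those three are a couple of setsockopt() calls away; a sketch (Nagle can be disabled portably with TCP_NODELAY and the send buffer shrunk; delayed ACKs are harder - e.g. Linux's TCP_QUICKACK is non-sticky):

        /* Undo the bulk-transfer defaults on an already-connected TCP socket. */
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        void tune_for_latency(int sock)
        {
            int one = 1;
            int small = 4096;   /* shrink the "nice big socket buffer"; size arbitrary */

            /* disable Nagle: push small packets out immediately instead of
               coalescing them while waiting for ACKs */
            setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
            setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &small, sizeof small);
        }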
  • Since TCP has flow control and UDP doesn't, you can actually cause your network connection to saturate with a couple UDP connections and then you won't be able to make any TCP connections. UDP doesn't play very well with others. I believe there is a new protocol in the works that is like UDP but plays better with TCP.

    Also, some TCP stacks are implemented in a way that if you start TCP connections one after another, after you let the latest one peak in its bandwidth usage, the third or later TCP connections will not be able to grab any bandwidth. This is apparently a problem with Windows 9x and some ADSL connections. The TCP flow control lets the first two connections split the bandwidth between each other, but the third connection can't ramp up enough bandwidth and just chokes. Apparently this problem is mitigated by using different types of network connections or different operating systems.
    • UDP does have flow control, as does all IP traffic; unfortunately the sockets APIs don't consistently support it. It's called an ICMP source quench message, and it tells the sending application that the receiving side couldn't keep up (or a router in the middle couldn't keep up). It's not UDP's fault that the APIs don't support flow control well; it's the fault of the OS and the sockets layers that it's not easy to use.
  • When things get rough, it can sometimes be difficult to rein back a UDP stream (in comparison to a TCP one) if bandwidth is being shared. UDP streams are often unresponsive, and having the odd packet dropped here and there simply will not cause the rate of packets to be slowed down (there are various solutions to this, though).

    However, as others have pointed out, which you use should depend upon the situation. If getting packets out of sequence is not so important (e.g., video streaming) then great, but if reliability is absolutely essential then TCP is probably better (plus TCP tends to still work when computers are stuck behind restrictive stateful firewalls).
  • Coding philosophy (Score:3, Insightful)

    by kevin42 ( 161303 ) on Thursday July 11, 2002 @10:11AM (#3863976)
    I've spent the past several years writing networking applications, and this is my philosophy: Anytime there is *any* chance of a packet being lost or arriving out of order, you must write code that assumes every packet has a high probability of being lost. I've seen people make assumptions that since it's running on an ethernet lan they will never lose UDP packets, then their app becomes very unstable. Even when sending UDP packets to localhost they can be lost. When designing UDP software it shouldn't be a matter of how often packets are lost, but how well your code deals with lost packets.
    Now if only someone would standardize a reliable datagram protocol implementation. :)
    • I've spent the past several years writing networking applications, and this is my philosophy: Anytime there is *any* chance of a packet being lost or arriving out of order, you must write code that assumes every packet has a high probability of being lost. I've seen people make assumptions that since it's running on an ethernet lan they will never lose UDP packets, then their app becomes very unstable. Even when sending UDP packets to localhost they can be lost. When designing UDP software it shouldn't be a matter of how often packets are lost, but how well your code deals with lost packets.

      This is very true. You must code UDP applications to deal with packet loss, and yes, packets can be dropped even when going from localhost to localhost. Overfill the network transmit or receive capacities, or use up all buffer space, and packets will be dropped. There is no way around that without changing the UDP protocol. After all, if your application sends out 10,000 UDP packets per second and the network can only handle 2,000 per second, the other 8,000 are going to be dropped. For TCP, connection loss is all you have to code for.

  • Okay, a very small amount of packet loss can be normal and should be ignored. However, if you have anything other than a tiny amount of packet loss, your network is in trouble and in serious need of upgrades. Remember, packet loss starts to cascade, because the dropped packets have to traverse the network two or more times, and each crossing uses some bandwidth. One dropped packet gets retransmitted, but then some other packet is dropped and retransmitted, and as a result your network gets really slow. In theory TCP will just slow down, but users will restart the slow jobs trying to get a fast connection.

    Sure, TCP will get through even when you have 85% packet loss (I had a customer who had 85% packet loss once, a babybell I won't name), but your applications will often start timing out in other areas. In theory things should still operate, just slowly, but many programs have their own timeouts outside of TCP so they can detect when the connection went down.

    Don't use TCP where you don't need it, though. I once had to debug a heartbeat for a failover system, where the system only provided TCP packets. When there was a failure in the network we could switch the network easily enough, but then we had a lot of code to try to figure out whether the heartbeat that arrived just after the network switch was old (and contained invalid information about the failed node) or correct. It always seemed to work, but I didn't sleep well many nights, knowing that a customer could lose a critical computer because of code that I was supposed to make work.

    So in theory you can say TCP is better when there is expected to be packet loss, and UDP is better when lost packets should be ignored. In practice, though, if you have significant packet loss you need to upgrade the network.

  • by gfilion ( 80497 )
    Yep, I did.

    In one of our courses, we had to do a network game in Java and we decided to use UDP. When we were using it in the university labs (130 boxens, 100 Mbps) we had a packet loss of about 5%. Mainly caused by Ethernet collisions, I guess.
  • You may or may not know when one craps out. That all depends on the programmer and what they check for. In the case of TCP/IP, if programmers assume that all is okay, they can keep sending data to a connection that is failing. If they do not check for certain statuses, then it could fail and you won't know it. The same is true, I'm sure, for UDP.

    The best method is really a send/reply setup in your code. You send data and the other side replies with "I got it" (or something). Like what is done in RFC 821 - SMTP.

  • by dolt ( 23627 )
    Read about SCTP, the Stream Control Transmission Protocol, RFC 2960 [ietf.org], which combines the best of TCP with the best of UDP. SCTP is a protocol in its own right (that is, it doesn't run over UDP or over TCP), and provides the following features:

    acknowledged error-free non-duplicated transfer of user data,
    data fragmentation to conform to discovered path MTU size,
    sequenced delivery of user messages within multiple streams, with an option for order-of-arrival delivery of individual user messages,
    optional bundling of multiple user messages into a single SCTP packet, and
    network-level fault tolerance through supporting of multi-homing at either or both ends of an association.

    Implementations are already available or becoming available, on various OSs. Although designed for transporting real-time telephone signalling over IP (as stated in the RFC), it is applicable to anything else with similar requirements.
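
    For the curious, opening an SCTP association looks almost exactly like TCP; a sketch, assuming a platform that ships SCTP support (e.g. Linux with lksctp installed):

      /* SCTP one-to-one style socket: same sockets API, different protocol. */
      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <sys/socket.h>

      int main(void)
      {
          int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
          struct sockaddr_in srv = { 0 };

          srv.sin_family = AF_INET;
          srv.sin_port = htons(5000);            /* port invented */
          inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

          return connect(fd, (struct sockaddr *)&srv, sizeof srv);
      }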

  • In what really ought to be called the Slashdot Effect, I've just read 50 or so comments where the most useful 5% all said the same thing: if you're writing a game or streaming video, use UDP, otherwise stick to TCP. But, what bugs me is that this is a very narrow view of the differences between the two protocols.

    Frankly, the single most important use of UDP is for sending singleton datagrams. Not to be pissy but, um... duh. Consider how DNS works: queries are sent as UDP, and if the response is small enough it goes back in a UDP packet. If it's too big, it's sent as a TCP stream. Prime example of what UDP is good for.

    Frankly, as far as UDP streams are concerned, I've never found a use for them that didn't involve a realtime response on the receiving end. Network gaming and video streaming is one idea, but certain kinds of telecontrol are another.

    The thing that bugs me most is that, by trade I am not a network programmer. I have done network stuff in the past, but it upsets me that the sum of information in all the commentary on Slashdot is more and more frequently less than I already knew about the topics.

    Maybe Bruce Perens [slashdot.org] is right.

  • I have a fairly high-traffic environment... (my boss pulling pr0n from news servers on our internet connections constantly...) Most days I see about 97% usage of the connections... Scary, huh? I haven't noticed many dropped UDP packets... In fact, I think I'm seeing more mangled UDP packets that are failing the CRC test than packets being dropped. I do have QoS enabled, which allows my web browsing... or software downloads... to take his bandwidth right out from under him.

  • Everyone's arguing back and forth about which is better, UDP, TCP, some are even presenting other protocols such as SCTP. None of it matters, because the protocol you use depends on what the problem is.

    From what I understand, UDP will lose packets in a congested environment when passing across a router. Your stack will guarantee that the packet gets onto the wire, if it gets onto the wire, it will make it onto the other machine if it is on the same wire (subnet) (barring sunspots). As soon as it hits a router, then it can be dropped.

    TCP has some nasty timers in it that make it entirely unsuitable for real-time traffic. It assumes that a packet was lost because of congestion, and backs off on the retry rather than retransmitting immediately. In modern corporate/telco IP networks, congestion simply isn't the problem. What really is bad is having to wait up to 4+ minutes to find out the connection is dead.

    UDP allows you to do what you want, and avoid anything you don't need. However, if you need things like in-order, reliable transmission you are probably better off with TCP. If you are simply providing a response, then you should be fine with UDP.

    If you are after high-traffic, high connection, high throughput, UDP seems to be the way to go. If you are after easy programming and guaranteed in-order delivery, TCP is your tool.

    As someone pointed out, there is a new player in town, and that's SCTP. It was invented specifically because TCP is bad for low-latency transmissions (such as Telephony!). It is used in the SS7 over IP protocols, such as M3UA, SUA, etc.

    That is why you will see a mix of streams in various protocols. H.323 uses TCP for control, and UDP for speech/video. SIP allows you the choice of UDP/TCP for call control.

    Jason Pollock
  • With my former company's IBS and DVBS products, we were doing video over TCP and UDP (unicast and multicast).
    TCP is certainly the easiest to implement, but at only a few percent packet loss (under 5%) it comes to a grinding halt. I have never seen TCP get to full speed between two T3s at 45 Mbps, and even with much tweaking of the TCP windows and other timing parameters, in some cases we still couldn't get over 56K across a lossy 45 Mbps to 45 Mbps link.

    With UDP, 5% loss is 5% loss; MPEG and most other video formats will not be happy, with video tearing and stalling all the time.

    It's possible to implement TCP's transmission scheme over UDP; only the packet headers will not be correct.
    At my former company we developed 2 mechanisms to re-implement a TCP-like connection using UDP packets. One was called SPAK, an aggressive retransmission protocol: unlike TCP, which intentionally backs down under congestion (loss), this pushes harder! On a 64K link with 90% loss measured with ping, we were able to send 60 Kbps! This was for a live event from Sri Lanka to the USA on March 14, 1997, with Arthur C. Clarke, and it worked, even to my surprise. I had to use a 2400 bps modem to connect to the remote server because telnet couldn't establish a connection over the 64K line into that country.

    The other method we called ECIP, for Error Correction Internet Protocol. It used erasure codes (nearly unheard of at that time); the best papers on this are by Luigi Rizzo: http://info.iet.unipi.it/~luigi/fec.html These also worked well. It took about 4 years of work to find an optimal coding and transmission scheme, which ultimately borrowed some of the SPAK ideas to include retransmission but kept the latency to under 1 packet round-trip time! This is important when doing video conferencing.

    Both of these protocols were able to consistently move up to 40 Mbps between two lossy T3s from the USA to Korea. This was tested over a 3-year period.

    I still own the rights to these, and if anyone is interested in commercializing or open-sourcing them, you can contact me through livecam.com.
  • I ran into this very problem a while back and solved it last week. We have a Packeteer PacketShaper 4545. We use it at my university to slow down P2P and speed up interactive traffic such as SSH. I also use it to limit the amount of bandwidth an application or set of applications can consume. One of these sets is Games. I limited them to a slice of bandwidth during the day and raised that after hours (neither limit has ever been reached). After hours I also guarantee a small slice to help kick-start apps, and I use dynamic partitions to guarantee each flow a certain amount of BW within the guaranteed slice.

    I've made various changes to the Games class over time to try to make it better. It's been a bit of a guinea pig to test settings on for other types of traffic. One of the things I've done is raise the class's Rate Policy after hours to something slightly higher than most other traffic. None of the things I tried helped. Users still complained of exceptionally high ping times (i.e., 9999ms), even after hours. Most of these corresponded to bursts of other traffic, not usually traffic with a higher rate policy.

    Finally I called tech support. One of their techs had me switch the Rate Policy (the default) to a Priority Policy, and explained the difference between the two. Using a rate policy to slow TCP traffic means that the ACKs are delayed to slow responses down. This isn't possible with UDP, though. Using a rate policy on UDP resulted in either dropped packets or an entire datagram being delayed. There is no backoff implementation in UDP, so the queue would fill with these UDP datagrams. Using a simple FIFO, the 1st delayed datagram would be delivered considerably late. Then another and another. If the queue was full and a datagram was received, it was dropped. Bad news. The client would experience really crappy performance. Video and audio would be extremely choppy. It would just suck.

    A Priority Policy works differently. Let's say a UDP Quake packet and a TCP HTTP packet arrive at the same time. The Quake packet has a higher priority, so the Quake packet goes 1st. No queuing, no delay. There's other technical stuff that goes along with this, but I won't bore you with it. All in all, because this traffic was UDP, the policy method I was using (again, the default) caused horrible service. Does UDP have its uses? I'm sure. Does UDP have benefits over TCP in some cases? Sure. Would I consider switching everything to UDP? Hell no.
