
Guaranteed Transmission Protocols For Windows?

Michael writes "Part of our business at my work involves transferring mission-critical files across a 2 Mbit microwave connection, into a government-run telecommunications center with a very dodgy internal network, and then finally to our own server inside the center. The computers at both ends run Windows. What sort of protocols or tools are available to me that will guarantee the data gets across intact, better than a straight Windows file-system copy? Since before I started working here, they've been using FTP to upload the files, but many times the copied files are a few kilobytes smaller than the originals."
  • UDP. (Score:5, Funny)

    by langelgjm ( 860756 ) on Tuesday June 30, 2009 @12:32PM (#28530135) Journal
    Clearly you're looking for UDP. Next question.
    • Re:UDP. (Score:5, Funny)

      by sofar ( 317980 ) on Tuesday June 30, 2009 @01:55PM (#28531923) Homepage

      TCP is so horrible. I wish HTTP used UDP by default so I wouldn't have the pro

    • Z-Modem FTW! (Score:5, Insightful)

      by Cytotoxic ( 245301 ) on Tuesday June 30, 2009 @05:53PM (#28535305)

      Crappy connection? Resumable transfers? Slow connections? Sounds like the good old BBS days!

      Z-modem is your answer.

  • by guruevi ( 827432 ) on Tuesday June 30, 2009 @12:33PM (#28530151)

    SFTP should do, since the communications are encrypted; if something changes along the way, it should be rejected by the other end. HTTPS or any other protocol-over-SSL should do as well.

    FTP is a plain-text protocol with no integrity checking, so if something changes along the way, nothing will tell you about it.

    • Re: (Score:2, Troll)

      by fm6 ( 162816 )

      I don't know SSH (which SFTP uses) well enough to say that you're wrong, but I think you are. Encryption, in itself, does not guarantee that there are no errors. It's a simple case of garbage-in, garbage-out.

      On the other hand, use of SFTP in place of FTP is mandatory in this day and age. FTP sends passwords in clear; anybody using it is wearing a big red sign that says HACK ME!!!!

      As for data integrity, this is not exactly new, or rocket science. Here's the magic word: checksum.
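      A minimal sketch of that checksum idea, in Python (the path is a placeholder); run it on both ends and compare the output:

        import hashlib

        def file_checksum(path, chunk_size=64 * 1024):
            """Hash the file in chunks so large files needn't fit in memory."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Compare this value on both ends; a mismatch means the transfer
        # corrupted the file, whatever the FTP client claimed.
        print(file_checksum(r"C:\outgoing\payload.dat"))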

      • by jeffmeden ( 135043 ) on Tuesday June 30, 2009 @12:58PM (#28530763) Homepage Journal
        Using modern encryption like SSH does guarantee that things *have to add up*: the protocol integrity-checks every packet, because keeping what you start with secret is just as important (sometimes more so) as making sure you finish with exactly what you started with (meaning no one in the middle meddled with your data).

        So, in short, something like SSH or any other properly encrypted communication mechanism is a great way to both secure the data from snooping (in the case of a microwave link, a VERY real problem) as well as to safeguard the data from corruption (intentional or unintentional). I sincerely hope, for the asker's sake and possibly for the country's sake, that these files he works with are trivial.
        • by Anonymous Coward on Tuesday June 30, 2009 @01:14PM (#28531157)

          I sincerely hope, for the asker's sake and possibly for the country's sake, that these files he works with are trivial.

          Well, let's see.

          transferring mission critical files across a 2 mbit microwave connection, into a government-run telecommunications center

          Pretty sure encryption isn't necessary.

      • Re: (Score:3, Informative)

        FTP is, however, considerably faster than SFTP or SCP, since nothing has to be encrypted. If the files are relatively small, SFTP is certainly the more secure solution, but if the files are huge and time is an issue, FTP has the clear performance advantage.
    • by Itninja ( 937614 )
      Since there are several concepts/protocols that like to call themselves "SFTP", which one are you referring to? http://en.wikipedia.org/wiki/Sftp [wikipedia.org]
    • SSHFS (Score:3, Insightful)

      by cenc ( 1310167 )

      I use sshfs file mounts for all office document file sharing and such, not just one-time transfers. SSH encryption security, with the ability to open and edit files over the network. No goofing around with Samba or Windows file sharing. Regardless, use some sort of ssh or sftp at least.

      Not sure about getting it to work on windows, but there should be some options.

    • by link-error ( 143838 ) on Tuesday June 30, 2009 @04:05PM (#28533895)
      Wrong. FTP has a binary mode. This is probably the reason his files are missing several k at the destination. Sending a binary file in ascii mode is the ONLY TIME I've ever had a file not transfer entirely/correctly using FTP. Unless of course there is a network error/timeout, etc, but the FTP client always errored out in those cases. Using SFTP over an already secure network will only slow things down greatly.
      • Re: (Score:3, Interesting)

        by jgrahn ( 181062 )

        Wrong. FTP has a binary mode. This is probably the reason his files are missing several k at the destination.

        Using FTP ASCII mode for binary files would be incredibly stupid, but yeah, it sounds like that could be it.

        Sending a binary file in ascii mode is the ONLY TIME I've ever had a file not transfer entirely/correctly using FTP. Unless of course there is a network error/timeout, etc, but the FTP client always errored out in those cases.

        Calling ftp from a .BAT script or whatever it's called in DOS and

  • by csoto ( 220540 )

    Or I guess that would be WWCP. WWJD?

  • TCP? (Score:5, Interesting)

    by causality ( 777677 ) on Tuesday June 30, 2009 @12:33PM (#28530167)
    The summary states that with FTP, the downloaded files were of the wrong size. Can anyone explain why TCP's efforts to deal with unreliable networks, such as the retransmission of unacknowledged packets and their reassembly in proper order, would not already deal with this? I am familiar with the concepts involved, but I think I lack the low-level understanding of how you would get the kind of results the story is reporting.
    • Re:TCP? (Score:5, Insightful)

      by Anonymous Coward on Tuesday June 30, 2009 @12:39PM (#28530315)

      TCP has timeouts. The FTP client and server probably have timeouts. Eventually, some bit of the system will decide the operation is taking too long and give up. The FTP client is probably reporting an error, but if it's driven by a poor script no-one will know.
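      A sketch of what "not a poor script" might look like, using Python's ftplib (host, credentials, and paths are invented): transfer in binary mode, verify the size afterwards, and exit nonzero so whatever scheduler drives the script can notice and retry.

        import ftplib
        import os
        import sys

        LOCAL = r"C:\outgoing\payload.dat"   # hypothetical paths and host
        REMOTE = "payload.dat"

        try:
            ftp = ftplib.FTP("ftp.example.gov", timeout=60)
            ftp.login("user", "secret")
            with open(LOCAL, "rb") as f:
                ftp.storbinary("STOR " + REMOTE, f)  # binary, never ASCII
            if ftp.size(REMOTE) != os.path.getsize(LOCAL):
                raise IOError("size mismatch after upload")
            ftp.quit()
        except Exception as e:
            # This is the part a poor script omits: surface the failure.
            sys.exit("transfer failed: %s" % e)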

      • by Tiroth ( 95112 )

        I think AC has the only correct response to this post. All of the people talking about CRs must not have any experience using FTP over a spotty connection, because it is quite common to run into these kinds of issues, especially on lengthy transfers.

        • Re:TCP? (Score:4, Informative)

          by ShieldW0lf ( 601553 ) on Tuesday June 30, 2009 @01:19PM (#28531267) Journal

          You could deal with a situation like this by zipping or rarring it into multiple small files and including parity files.

          http://en.wikipedia.org/wiki/Parchive [wikipedia.org]

    • Re:TCP? (Score:5, Informative)

      by Zocalo ( 252965 ) on Tuesday June 30, 2009 @12:39PM (#28530331) Homepage
      The only times I've seen FTP report a successful file transfer and have a file discrepancy is when a binary file has been transferred in ASCII mode and the CR/LF sequences are being swapped for just LFs, or vice versa. Nothing wrong with the protocol, PEBKAC...
      • Re:TCP? (Score:5, Informative)

        by bwcbwc ( 601780 ) on Tuesday June 30, 2009 @01:18PM (#28531239)

        I used to get dropped characters and groups of characters in text files using FTP back in the 1990s and early 21st century. It seemed to be a bug in the FTP client, because it only happened when we used the Windows Explorer interface for the product. When we did command line or used the native GUI there was no problem. If you're seeing this type of a pattern where you can see that characters are missing, switch to a different FTP client or try the Windows command line FTP.

        Another possibility is that the target Windows system is mimicking a Unix system, so that an ASCII transfer is stripping the CR characters from CR/LF sequences.

        On the other hand, if you really want a "guaranteed delivery" with formal acknowledgment and validation, try using a secured protocol like SSH or SFTP or a messaging system like JMS with a handshaking architecture around it. There are plenty of Open Source architectures you can build around (xBus for example), but I don't know of any ready-built executables. Commercially, vendors like IBM (MQ) and Tibco have products that deal with the messaging at a similar level.

    • Re:TCP? (Score:4, Insightful)

      by AvitarX ( 172628 ) <me&brandywinehundred,org> on Tuesday June 30, 2009 @12:40PM (#28530341) Journal

      I bet it is file systems with different block sizes rounding slightly differently, and an OP that does not understand.

    • Re: (Score:3, Insightful)

      by mini me ( 132455 )

      FTP, while in ASCII mode, can try to translate line endings. If the carriage returns were removed, in order to be UNIX compatible, the file size would have been reduced.

      Most FTP clients allow the enabling of a binary mode which prevents the conversion from happening.

    • by JamesP ( 688957 )

      TCP is as reliable as lending a brand-new Ferrari to the crack dealer on the street corner.

      UDP of course, is less reliable than that. The Ferrari is rigged with a bomb.

    • by theCoder ( 23772 )

      It's possible the files were transferred in ASCII mode. This means that any place a '\r\n' appeared in the file, it was replaced by a '\n'. This is normally OK (and sometimes desirable) for text files, but can really cause problems with binary files. Because \r is 0x0d and \n is 0x0a, they can often appear in sequence in a binary file (like two pixels in an image) where they do not mean a line break.

      I would recommend that the submitter check to make sure that binary mode was enabled in the FTP cli
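      The damage is easy to demonstrate without a network; a tiny Python illustration of what an ASCII-mode transfer effectively does to binary data:

        # The PNG header deliberately contains \r\n so that exactly this
        # kind of line-ending translation gets caught early.
        png_header = b"\x89PNG\r\n\x1a\n"

        # What a Windows-to-Unix ASCII-mode transfer effectively does:
        mangled = png_header.replace(b"\r\n", b"\n")

        print(len(png_header), len(mangled))  # 8 7 -- the file just shrank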

    • Re:TCP? (Score:5, Informative)

      by samkass ( 174571 ) on Tuesday June 30, 2009 @01:45PM (#28531775) Homepage Journal

      While others point out, probably correctly, that the problem is a binary/ASCII conversion, in actuality the error checking on TCP is simply not that good.

      TCP uses a 16-bit checksum, so you have a 1 in 65536 chance of a corrupted packet being incorrectly validated as correct. To make matters worse, it uses 1's complement instead of 2's complement arithmetic, so the checksums 0x0000 and 0xFFFF are indistinguishable.

      Ethernet has a 32-bit CRC, so if you're transmitting over that link layer you're probably in good shape. But depending on that from a systems point of view seems risky.

      Much better to only transfer ZIPs and check them at the other end if you only have control over the endpoints. If you can control the transmission, use a better error-correcting high-level protocol or even a forward-error correction protocol on top of TCP.

      Or just use rsync.
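      A sketch of the check-the-ZIPs-at-the-other-end idea in Python (the path is hypothetical): the ZIP format stores a CRC-32 per member, so testzip() catches corruption the transport missed.

        import zipfile

        def zip_is_intact(path):
            """True if every member's CRC-32 matches its contents."""
            try:
                with zipfile.ZipFile(path) as zf:
                    # testzip() returns the first bad member's name, else None
                    return zf.testzip() is None
            except zipfile.BadZipfile:
                return False

        if not zip_is_intact(r"C:\incoming\payload.zip"):
            print("corrupted in transit; request a resend")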

    • Re: (Score:3, Insightful)

      by SnarfQuest ( 469614 )

      Binary verses text mode?
      Lousy windows file system screwing up on one or the other end.
      Sparse files.
      Windows "fixing" the data during transmission.
      Loss of packets, and no error checking.
      Windows.

  • Robocopy? (Score:5, Insightful)

    by wafath ( 91271 ) on Tuesday June 30, 2009 @12:33PM (#28530175)
    • Re: (Score:3, Informative)

      by Krneki ( 1192201 )
      Robocopy works on top of the Windows network layer; it's the same as using copy/paste with some extra functionality.
      • Re:Robocopy? (Score:5, Informative)

        by Anonymous Coward on Tuesday June 30, 2009 @12:49PM (#28530561)

        Yeah but that extra functionality contains things like the ability to resume a transfer, retry if things fail, and verify the files after copying.

        • Re:Robocopy? (Score:5, Informative)

          by Saint Stephen ( 19450 ) on Tuesday June 30, 2009 @12:53PM (#28530651) Homepage Journal

          MOD PARENT UP. Not to mention it's multithreaded, so it's not really the same as copy/paste - it's the same as a whole bunch of copy/pastes at the same time.

          Why do people keep fighting the Robocopy, I'll never know.

      • Re:Robocopy? (Score:4, Insightful)

        by Malc ( 1751 ) on Tuesday June 30, 2009 @01:01PM (#28530881)

        It might be using Windows copy protocols, but it definitely is not like copy/paste. It's restartable for instance. It's way more reliable.

        We have to copy large files to our office in China. FTP always fails. Windows copy via Explorer often fails, but it is also incredibly painful when latency is high and one is browsing over the network. Robocopy (depending on system setup) will motor through and is very persistent when there's a connection hiccup. You definitely want restartability if you copy large files at a couple of hundred MB an hour.

        I'd say make sure to break the files up into chunks if they're large. Also, run 2-4 robocopies in parallel if the latency is high, as this will give better throughput. It can do funny things to Windows though (maybe other things wait on some network handle and seem to freeze until one of the robocopy processes moves on to the next file).

        Also, consider doing it over a Cisco VPN. It seems to add some robustness if there is packet loss. I often had trouble accessing servers in the US when I was living in China due to packet loss, but no such problem over a VPN (zero packet loss, but very slow instead, which is better).

    • First, why would the file FTP'd across not be the right size? Because the transfer was terminated before the upload completed. I see that too often.

      Robocopy really is a great tool to deal with this problem. I have around 20 remote links with unreliable connections, and robocopy is a godsend.

      Use a command line. 7z the file to be transmitted into a .7z archive, then: Robocopy "\\source\server\path" "\\dest\server\path" filename.7z /ipg:9 /z /r:30 /w:30. /ipg:9 says to wait 9ms between packets. I use 9000 at slow link sites to n
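      For what it's worth, a sketch of driving that same robocopy invocation from Python so the exit code actually gets checked (paths are the parent's placeholders; robocopy exit codes below 8 all indicate success):

        import subprocess
        import sys

        # /Z = restartable mode, /R and /W = retry count and wait between
        # retries, /IPG = inter-packet gap (ms) to throttle the slow link.
        cmd = ["robocopy", r"\\source\server\path", r"\\dest\server\path",
               "filename.7z", "/Z", "/R:30", "/W:30", "/IPG:9"]

        rc = subprocess.call(cmd)
        if rc >= 8:  # 0-7 are success variants; 8 and up are failures
            sys.exit("robocopy failed with exit code %d" % rc)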

  • Use BITS (Score:5, Informative)

    by Lothar ( 9453 ) on Tuesday June 30, 2009 @12:34PM (#28530183)

    Background Intelligent Transfer Service (BITS) can be used to transfer files between Windows servers. It is the technology behind Windows Update. We use it in our company to transfer files across a low-bandwidth satellite connection. The great thing is that it can automatically resume the transfer even after rebooting both machines. SharpBits offers a nice .NET API. You can find it here: http://www.codeplex.com/sharpbits [codeplex.com]
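    If you'd rather not write .NET, the stock bitsadmin tool can drive BITS from a script; a rough sketch (job name, URL, and path are invented, and bitsadmin's flags vary somewhat by Windows version, so verify against your own):

      import subprocess

      # BITS checkpoints the job itself, so the transfer survives
      # reboots and link drops and resumes where it left off.
      subprocess.check_call([
          "bitsadmin", "/transfer", "nightlyPull", "/download",
          "/priority", "normal",
          "http://server.example.gov/files/payload.zip",
          r"C:\incoming\payload.zip",
      ])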

  • domyjobforme tag (Score:2, Insightful)

    by EmagGeek ( 574360 )

    I love it! Haha... that's probably one of the better tags I've seen.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Too easily thrown around if you ask me. He's not looking for anyone to set it up, he just wants some options. Isn't that what community is about?

  • BitTorrent (Score:5, Insightful)

    by Inf0phreak ( 627499 ) on Tuesday June 30, 2009 @12:34PM (#28530199)
    I'd say BitTorrent -- with firewall rules or some other measure so random people can't see your microscopic swarm. It uses SHA-1 hashes of chunks, so if a torrent client says a file downloaded successfully it's pretty much guaranteed to be true.
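    The per-chunk hashing is the part doing the work there; a small Python sketch of the same piece-hashing a .torrent encodes (path and piece size are arbitrary):

      import hashlib

      def piece_hashes(path, piece_size=256 * 1024):
          """SHA-1 per fixed-size piece, like a .torrent's 'pieces' field."""
          hashes = []
          with open(path, "rb") as f:
              while True:
                  piece = f.read(piece_size)
                  if not piece:
                      break
                  hashes.append(hashlib.sha1(piece).hexdigest())
          return hashes

      # A client re-fetches any piece whose hash doesn't match, which is
      # why a "complete" torrent is bitwise identical to the original.
      print(len(piece_hashes(r"C:\outgoing\payload.dat")), "pieces")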
  • by bacchu_anjan ( 100466 ) on Tuesday June 30, 2009 @12:35PM (#28530217)

    hi there,

        why don't you get Cygwin on both systems and then do an rsync?

        within your own network, you might want to use robocopy (http://en.wikipedia.org/wiki/Robocopy).

    BR,
    ~A

    • Re: (Score:3, Informative)

      by ericnils ( 1424615 )
      We use Cygwin's rsync to backup windows servers over a slow Internet connection at work. It works very well for us and using the -z compression option will probably result in much faster transmission over a 2Mbit pipe than FTP will provide. We run rsync as a service on the source and pull to the destination using the rsync command line tool, but you could easily reverse that. You should also consider Microsoft's built-in DFS replication which automates replication of data between two file servers over TC
  • Should work fine across a WAN and then just file-copy.
  • by not already in use ( 972294 ) on Tuesday June 30, 2009 @12:35PM (#28530225)
    Wasn't TCP designed for just this? Guaranteed transmission?
    • by Dogun ( 7502 ) on Tuesday June 30, 2009 @12:43PM (#28530415) Homepage

      Implementations of TCP in most operating systems fall a bit short of that, killing off stalled connections, etc. Also, some firewall suites, and some routers make a habit of killing off connections after a certain amount of time, sometimes without regard to whether or not they are 'active'.

      You might have some luck boosting reliability with the TcpMaxDataRetransmissions registry setting in Windows. But ultimately, the poster is going to need to find a file copy suite which retries when connections die.
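      A sketch of setting that value programmatically (Python's winreg; requires admin rights, and how much this knob actually helps varies by Windows version, so treat it as something to test rather than gospel):

        import winreg

        KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

        # Default is 5 retransmissions; raising it makes TCP hang on
        # longer before abandoning an unacknowledged segment.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "TcpMaxDataRetransmissions", 0,
                              winreg.REG_DWORD, 10)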

  • Line endings! (Score:5, Insightful)

    by sys.stdout.write ( 1551563 ) on Tuesday June 30, 2009 @12:36PM (#28530249)

    they've been using FTP to upload the files, but many times the copied files are a few kilobytes smaller than the originals

    Twenty bucks says you're converting from Windows line endings (\r\n) to Linux line endings (\n).

    Use binary mode and you'll be fine.

    • Line ending was my first thought too. I've used FTP scripts in Windows to and from *NIX machines with no trouble at all. I can't vouch for how well it works for Windows-Windows transfers because in that case I've always just used shared folders. That worked fine too. Unless the data is sensitive, there's really no need for scp or anything fancy.

  • it's not just for Linux.

  • It's crazy but it just might work. Not very quickly though.
    • Why not very quickly? It'll go as fast as the connections will permit, will it not?

      Set up a tracker on one of the servers, and have a client on both. (This may not even be required, but I'm not sure)

      You also have the much more interesting property of the technology which is to automatically retransmit any faulty data, and 100% guarantee the resultant file will be bitwise identical. Furthermore, you can have the clients automatically add any .torrent files found in a specified (remote, in your situation)

  • There are no guarantees when it comes to protocols and the internet... it is always a "best effort" system. Many forget this, because the internet has come to the point where, for all intents and purposes, there are virtually no failures. I would probably use a tried and true protocol like FTP or maybe even SCP. Both work very well. I would think your best bet is to try to work with the government to improve their "dodgy" internal network. SCP has the advantage of
  • Probably tape drives, or hard drives if you prefer. Encrypt with a shared key. I think microwave is LOS already, so your distances can't be that large. It would certainly solve your "flaky" bandwidth and security considerations. You would "packetize" the data, e.g., tapes are brought over in serial succession; if a tape went missing, you delete the key that encrypted its contents and request a resend of the contents of that tape. That verifies its receipt.

    Not sexy, but it's probably the best solutio

    • I think microwave is LOS already, so your distances can't be that large.

      I'm not sure it is the distance that matters so much as what is in that distance. Sneakernet probably isn't the better option if there is, say, a cliff in the middle.

  • rsync (Score:5, Informative)

    by itsme1234 ( 199680 ) on Tuesday June 30, 2009 @12:42PM (#28530399)

    ... is what you want. Yes, you can use it with Windows (with or without the Cygwin bloat). Use -c and a short --timeout and you're good to go. If you're using it over ssh you're looking at three layers of integrity (rsync checksums, ssh and TCP), two of them quite strong even against malicious tampering, not just ordinary corruption. Put it in a script with a short --timeout; if anything is wrong with the link your ssh session will freeze completely, and as soon as your --timeout is reached rsync will die and your script can respawn a new one (which will resume the transfer using whatever chunks with good checksums you have already transferred, and will again checksum the whole file when it finishes).
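    The respawn loop described above is only a few lines; a sketch in Python wrapping the rsync invocation (host and paths are placeholders):

      import subprocess
      import time

      # -c forces whole-file checksums, --partial keeps interrupted
      # chunks for resume, --timeout kills a frozen session.
      cmd = ["rsync", "-avc", "--partial", "--timeout=30",
             "/outgoing/payload.dat", "user@server.example.gov:/incoming/"]

      # Nonzero exit means the link dropped or the timeout fired;
      # just respawn, and rsync resumes from what already arrived.
      while subprocess.call(cmd) != 0:
          time.sleep(10)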

    • Re:rsync (Score:4, Informative)

      by doug ( 926 ) on Tuesday June 30, 2009 @12:52PM (#28530629)
      Yep, that's what I'd do. Talking to an rsync server means sending signatures instead of whole files to prevent pointless copies, and it does an excellent job of ensuring either a good copy or a failure. It is certainly better than any ftp variant.
    • No mod points, but this is the answer to your question.

  • Take an MD5 hash of the data or something, then send it. If it comes back changed, you've got data loss. If it comes back the same, and the files are still a few kb smaller, then either you're the Wizard of File Hashes or you're reading off on-disk size instead of actual data size.

  • by n4djs ( 1097963 ) on Tuesday June 30, 2009 @12:45PM (#28530455)
    'set mode binary' prior to moving the file. I bet the file you are moving isn't a text file with CR-LF line terminations as normally found in DOS, or one side is set and the other isn't.

    Ritchie's Law - assume you have screwed something up *first*, before blaming the tool...

    • Parent is 100% accurate. This is integral to binary file transmission via FTP. Transfer mode (binary or text) may be set to text on the server by default. Without the proper setting, things won't transfer properly.

      'hash' is also a nice feature...

  • Think of this transfer model like a car, the further it goes, the more bytes are burned up. they just need to be added back in with a network filling station. I would look to google for a government approved provider.
  • AS2 FTW (Score:3, Interesting)

    by just fiddling around ( 636818 ) on Tuesday June 30, 2009 @01:01PM (#28530865) Journal

    You should look at the EDIINT AS2 protocol [wikipedia.org], AKA RFC 4130 [ietf.org]. This is a widely-used e-commerce protocol built over HTTP/S.

    AS2 provides cryptographic signatures for authentication of the file at reception, non-repudiation, and message delivery confirmation (if no confirmation is returned, the transfer is considered a failure), and is geared towards files. There is even an open-source implementation available.

    More complex than FTP/SFTP but entirely worth it if your data is mission-critical and/or confidential. Plus, it passes through most networks because it is based on HTTP.

  • Use .complete files. (Score:4, Interesting)

    by Prof.Phreak ( 584152 ) on Tuesday June 30, 2009 @01:01PM (#28530867) Homepage

    Even on reliable connections, using .complete files is a great idea.

    It works this way: if you're pushing, open ftp; after ftp completes, you check the remote filesize, and if it matches the local file size, you also ftp a 0-size .complete file (or a $filename.complete file with an md5 checksum, if you want to be extra paranoid).

    Any app that reads that file will first check whether the .complete file is there.

    If the remote file size is less, you resume the upload. If the remote filesize is more than the local, you wipe out the remote file and restart.

    Same idea for the reverse side (if you're pulling the file, instead of pushing).

    You can also set up scripts to run every 5 minutes, and only stop retrying once the .complete file is written (or read).

    Note that the above would work even if the connection was interrupted and restarted a dozen times during the transmission. [we use this in $bigcorp to transfer hundreds of gigs of financial data per day... seems to work great; never had to care for maintenance windows, 'cause in the end, the file will get there anyway (scripts won't stop trying until data is there)].
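    The receiving side of that convention is tiny; a sketch (Python, with made-up paths) of a consumer that ignores the data file until the marker appears:

      import os
      import time

      DATA = r"C:\incoming\payload.dat"   # hypothetical paths
      MARKER = DATA + ".complete"

      # The uploader only writes the zero-byte marker after verifying
      # that the remote size matches the local one, so waiting on it
      # guarantees we never read a half-transferred file.
      while not os.path.exists(MARKER):
          time.sleep(300)  # the "every 5 minutes" from above

      print("transfer complete; safe to process", DATA)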

  • How about creating SHA1 checksums and then transferring data using netcat? You could split files into pieces, then run them through sha1 and finally send over netcat using udp and retransmit at will. Or if files don't change too much you could try rsync.

    These are all unix-centric solutions, so you'd have to install cygwin, unless there exists a python library that does all that.

  • Back in the very old days, we had slow modems with noisy lines. We used things like Zmodem [omen.com] and other tools to handle this problem. It might just be the thing that will work now to solve your problem.
  • I used to have a similar problem over another connection, where even more advanced file copy utilities would say the file was copied, but a 2-4k chunk would be missing. What I did to solve the problem was to use an archiving utility that supported adding ECC records and install it on both endpoints. Then, I'd just archive the files I need, send them over the faulty link, and usually the ECC records were able to correct any errors that did crop up during the transfer when extracted on the destination machi

  • Set up a Linux box on the same network next to the Windows box that is at the 'remote' end of the transfer (i.e., not the end the transfer is initiated from).

    Use ssh from the 'local' end to transfer the file to the linux box. Then run something appropriate (ftpd? apache? samba?) on the linux box that makes the files directly available to the windows box.

    Alternatively, rip the Windows crap out and replace both ends with a real OS.

  • I would reckon that something based on the Bittorrent protocol (or a subset of it) might be an exceptionally reliable way of, while running in the background, sending files from one machine to another one.

    The protocol comes with built-in file splitting/recombination, block validation and you can get several GUIs (and I believe at least one command line implementation) for it. It might be a bit overkill though - pretty much everything in the protocol related to dealing with managing communications with multi

  • Does your file have to be transferred synchronously?

    Otherwise you might want to look into Message Oriented Middleware, things like MQ Series or, in the worst case, even Microsoft MQ. There are plenty of options.
    This would allow you to put policies on the messages, handle routing (in case you need to deliver to different recipients), guarantee delivery at least once, do type conversions/transformations etc.

  • by ballyhoo ( 158910 ) on Tuesday June 30, 2009 @01:12PM (#28531119)

    You are kidding about this, aren't you?

    Let me get the facts straight:

    - you have "mission critical files", and the network you're transferring them over is so incredibly badly managed that it doesn't support reliable data transfer
    - you want a technical workaround for this brokenness.

    If this is the case, you don't have a technical problem on your hands; you have a political one.

    "Mission critical" has a meaning: it means critical to the success of the operation. I.e. without these files, your operation or someone else's operation will fail.

    If your management believes that your files are "mission critical", and you're facing a problem of this sort, you need to document the difficulties you're having, along with measurements to support your claims, and then make a clear statement that as long as your network path is completely broken, you are absolving yourself of responsibility for the correct transmission of these files.

    If your management doesn't do anything about this, then the files are not "mission critical".

    • by Dravik ( 699631 ) on Tuesday June 30, 2009 @03:26PM (#28533339)
      Mission critical means that you need to get it done even if someone else isn't getting their job done. Standing around in a huff and stomping your feet means that the mission critical information isn't getting moved. What he needs to do is find a way to accomplish his mission despite the difficulties, and then document the problems so they can be addressed.
    • Re: (Score:3, Insightful)

      by eap ( 91469 )

      I think you missed the part about the government being involved.

  • by rlseaman ( 1420667 ) on Tuesday June 30, 2009 @02:48PM (#28532825)

    Set up a BSD lpd queue under Cygwin, something like:

    sendit:lp=/spool/null:sd=/spool:if=/spool/sendit.sh:sf:sh:mx#0:

    Have the sendit.sh script do whatever it is you want with the file. To send a file: lpr -Psendit filename

    Configuration of the network queue left as an exercise for the student. (Hint - queue pathnames locally.)

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...