Unix Operating Systems Software

A Better FTP? 37

cppgodjavademigod asks: "I used to work for a company that sold a file transfer product for datacenters. It supported checkpoint/restart, encrypted password transmission, asynchronous job processing, etc. Is there an Open Source project that aims to provide a better FTP? I'm looking for something that makes use of multiple paths (for machines connected via more than one network), job restart, job control, secure transmission (over the internet), maybe even tunneling over HTTP and redundant servers (via some kind of private P2P protocol)."
  • There are a couple of problems. The main one is that most of the servers out there understand the FTP protocol, it has critical mass, and it is 'good enough'.

    This means that there is really no incentive to move away from it.

    The second problem is that for most people, most of the time, their Internet connection is pretty reliable. This is also improving all the time as more and more people move to DSL and cable modems instead of dialling up.

  • I know that a number of the things you are talking about are already possible, although not as simple as FTP. Secure = FTP + IPsec. Multiple paths = download managers like GetRight or my favorite, FlashGet, which support multiple mirrors, splitting files, pause/resume, etc. I have yet to find a program as good as FlashGet for Linux though.... If you know of any, send me an email.
  • Swarmcast? (Score:2, Interesting)

    by RossyB ( 28685 )
    Sounds a lot like Swarmcast to me...
  • FSP, anyone? (Score:3, Interesting)

    by Chelloveck ( 14643 ) on Tuesday October 30, 2001 @10:10AM (#2496810)

    Remember FSP [faqs.org], the "File Server Protocol"? It was introduced about 10 years ago and was supposed to be the FTP-killer. Technically it probably was superior, but good ol' FTP was available everywhere and was good enough. Today you'd be hard-pressed to find any FSP sites at all. The last published version of the FSP FAQ appears to be dated 1996-08-19. It seems there's really no demand for a better FTP.

    • I hadn't heard of FSP, so I read that doc. It doesn't look to me like it was supposed to be an FTP killer, but an anonymous FTP killer. Different thing. It also doesn't handle any of the important points that the poster asked about.

      FSP seems to have died for lack of new or interesting things, rather than because FTP was too entrenched.
    • I remember FSP, the File Slurping Protocol, as an alternative to FTP for l33t warez d00ds that wanted to use FSP's bandwidth restriction to avoid being noticed when they were borrowing other people's servers. YMMV.

  • Not exactly FTP, but it does transfer files. It tunnels over ssh, and can copy vast directory trees. And for slow connections, it can both compress the data and transfer only the files and parts of files that are needed (sorta a binary diff).
  • I know it's mostly dead now, but Hotline used to provide most of that, which is why it was my program of choice in my MacWarez days...
    Once you connected to a server, you had file transfers with start/stop/resume, you could communicate with other users on the server, you could tunnel through http...
    I don't recall if any incarnations had any more secure features, however.
    It was a wonderful program with a lot of promise, until it was released for Windows, at which point it became a banner-ad-driven attempt at making money for files which were usually not provided at the end (clicking through banner pages to get the usernames/passwords needed to get in and download the warez/pr0n/mp3s/whatever).
  • by Wills ( 242929 ) on Tuesday October 30, 2001 @11:34AM (#2497213)

    The rsync algorithm meets most of your requirements. rsync was proposed in 1998 by Andrew Tridgell for efficient secure file transfers. The main points are:

    • For efficiency rsync skips any previously received parts of files, a process based on transmitting small checksums instead of large file chunks.
    • For security you can tunnel rsync over any secure protocol such as ssh/openssh. If you don't want or need protocol-level security you can tunnel it over http.

    The detailed description is here [anu.edu.au] (http://samba.anu.edu.au/rsync/tech_report/), and open-source software is here [anu.edu.au] (http://samba.anu.edu.au/rsync/download.html).

    Overall rsync is often much (10x) faster than using compressed file transfers. It is most useful for users who frequently download new versions of packages with significant similarities between successive versions.
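
    A rough usage sketch (host and paths are made up, not from the original post): tunnel over ssh, compress on the wire, and skip or resume data that has already been transferred:

        # pull a package tree, reusing whatever already exists locally
        rsync -avz --partial -e ssh user@mirror.example.com:/pub/pkgs/ ./pkgs/

    The -z flag handles compression, --partial keeps interrupted files so a rerun can pick up where it left off, and -e ssh gives you the encrypted transport mentioned above.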

  • by rw2 ( 17419 ) on Tuesday October 30, 2001 @11:35AM (#2497226) Homepage
    I work in grid computing and we have some needs that push this idea forward. Over at Argonne labs [anl.gov] the Globus [globus.org] team has put forward this draft [anl.gov] of extensions for some of what you talk about (i.e. it's secure and multi-path). Code exists under yet another open source license, the "Globus Toolkit Public License" [globus.org].
  • HTTP 1.1 has most of what you're looking for. And with the DAV extensions you can get the equivalent of directory listings too.
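
    For instance, byte ranges in HTTP/1.1 already give you restartable transfers; a rough sketch with a made-up URL, using curl as the client:

        # fetch only the part of the file you don't have yet
        curl -r 1048576- -o big.iso.part http://files.example.com/big.iso
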
    • Re:HTTP/1.1 (Score:3, Insightful)

      by Slynkie ( 18861 )
      I'll be honest up front: I don't have a good comparison of either the features or performance of HTTP/1.1 vs. FTP.

      That said, I have to wonder whether HTTP/1.1 could be a true solution, for the simple fact that HTTP was not created specifically for the purpose suggested. In addition, for future development purposes, would we really want to bog down HTTP with features not used in everyday web transactions?

      *shrug*, just my initial thought, I might not have a clue what I'm talking about =)
  • SFTP (Score:2, Interesting)

    by Anthanos ( 160322 )
    How about SFTP? It is an FTP-like protocol layered on top of SSH. While it may not have ALL the features you were looking for, it has the most important - security.
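
    A rough sketch of what an interactive session looks like with OpenSSH's sftp client (the hostname is made up):

        sftp user@files.example.com
        sftp> put report.tar.gz
        sftp> get backup.tar.gz
        sftp> quit
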
    • by rw2 ( 17419 )
      While it may not have ALL the features you were looking for, it has the most important - security.

      FYI, security isn't the most important for everyone.

      I'm much more concerned these days with bandwidth utilization, which would be kind of hosed by a scheme that encrypts the data stream. I probably want encrypted authentication, but that's it.
      • I'm much more concerned these days with bandwidth utilization, which would be kind of hosed by a scheme that encrypts the data stream. I probably want encrypted authentication, but that's it.

        I'm just curious why you think an encrypted data stream would use more bandwidth than an unencrypted one?

        Besides, sftp can use zlib compression anyway, so that could probably help a little (well, depending on the type of data you're transferring)...
      • Well, besides the security through encryption, you can utilize the zlib compression sftp/ssh provide to minimize bandwidth utilization. I have done several informal benchmarks and moved all of my servers (1000+ users combined) to sftp ONLY. Like I said, it may not have everything you want, but it is certainly better than FTP alone.
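
        A rough sketch of turning that compression on per connection (hostnames are made up); OpenSSH's -C flag works for both sftp and scp:

            sftp -C user@files.example.com                    # zlib compression on the SSH transport
            scp -C dump.sql user@files.example.com:/backups/

        You can also set "Compression yes" in ~/.ssh/config to make it the default.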
        • by rw2 ( 17419 )
          FWIW, our data is nearly incompressible so that doesn't apply to us. In the case of compressible data, the issues of encryption and compression are orthogonal. If you compress the unencrypted data (which of course you must do before encrypting, since encrypted data isn't compressible) then you can frequently save more time in transmission than you spend on the compression step.

          If you then choose to encrypt it, your crypto algorithm will at some point not be able to keep up with the available pipe.

          If you choose not to encrypt it you will have saved an almost unimaginable number of cycles to use pushing data down the pipe instead.

          Remember what started this thread though. I don't disagree that encryption is useful, just that the naked assertion that security is the most important thing isn't nearly always true.
      • Why would encryption reduce the bandwidth? Encrypted data is generally the same size as the original, depending on whether a block or stream cypher is used and how much, if any, header data is attached to the message.
        • by rw2 ( 17419 )
          Encrypted data is generally the same size as the original, depending on whether a block or stream cypher is used and how much, if any, header data is attached to the message.

          True that the data going down the pipe is the same size, but unless you have multiple machines sending pieces of the same data down different paths to the endpoint, you find that more often than not your CPU cannot encrypt the data quickly enough to keep up with the pipe that is available to it.
          So in quick tests we find that SCP takes about twice as long as native FTP. I've never tested SFTP, but imagine similar results. Do you know?
          • What cypher are you using?

            I've seen dramatic speedups by putting this in $HOME/.ssh/config:

                Host *
                    Cipher blowfish

            It is *much* faster than the default (3DES).
  • WebDAV is usually served through your webserver (Apache, IIS, Zope), so it will happily use SSL (and many clients support it).

    Not only that, but you can mount WebDAV trees from your OS! Mac OS X and Windows 2000/XP do this happily; just give them a WebDAV URL under their 'connect to server' dialog. Unfortunately, 2000 (XP is supposedly much improved) isn't a full redirector, so you can only use File Explorer & Office against WebDAV, not WinAMP, for instance.

    You can also get a WebDAV mounter for Linux.

    It's also readily supported... Office supports it, as well as some Macromedia & Adobe products I believe. Even Oracle has it.

    Now someone just needs to make a nice multi-user WebDAV server for UNIX... I'd love to move my Ogg collection to a WebDAV server, and have people upload to it as a user other than nobody :)
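
    For the command line, a rough sketch using cadaver, one of the WebDAV clients available for Linux (the server URL is made up):

        cadaver https://dav.example.com/music/
        dav:/music/> ls            # directory listing via PROPFIND
        dav:/music/> put song.ogg
        dav:/music/> get song.ogg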
  • Downloader for X has some of the features you have requested.

    http://www.krasu.ru/soft/chuchelo/

    - James
  • I don't think proftpd has many of the things you're looking for, but it's certainly the best ftp server I've used. You might be able to get some use out of it until you find something better, at least.
  • I doubt it does all you want, but on a related topic, I always wondered why Sendfile never made it big. It would solve a lot of other problems. I used to use it on Bitnet back in my college days. Amazingly, it's apparently an available spell for Sorcerer GNU Linux.
  • by dsb3 ( 129585 )
    webdav [webdav.org]'ll do this.
  • Some of the other solutions mentioned here might already handle this, but I am ignorant enough of them to ask the question anyway...

    On our latest project, we have a large number of embedded targets running Windows CE on a TCP/IP (cable modem) connection to a file server running Windows NT. We have requirements to be able to download identical images to all the targets for software upgrades within a certain period.

    The problem is that TCP does not support multicast to multiple targets - you essentially wind up retransmitting the same data over and over again. Beyond a certain number of targets, the numbers work out to a system upgrade taking longer than it did over an older legacy serial link that supported broadcast.

    So, here's my question: Does anyone know of a multicast file transfer protocol (not simply serial FTP) that is suitable for this application, preferably something open source?

    Thanks

  • Healthcare organizations have HIPAA requirements that are forcing them to look at encrypted processes (either standalone encryption products, or encrypted channels like SSL) to replace FTP for moving personal healthcare information across the internet.

    Pretty much across the board, insurers are moving to HTTP based solutions (over SSL obviously). For a few lines of Perl/Java/PHP you can ride on top of existing SSL transport, easily provide redundancy, be universally available to any type of client, etc. Command line tools like cURL (a kick ass utility btw) make it scriptable / automatable.
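
    For example (a rough sketch; host, paths, and credentials are made up), an automated upload and a restartable download over HTTPS with cURL:

        # push a batch file over HTTPS with basic auth
        curl -u clerk:secret -T claims-batch.x12 https://edi.example.com/inbox/
        # resume a partially downloaded file from where it left off
        curl -C - -O https://edi.example.com/outbox/remittance.x12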

    Features like job control and smart restart aren't inherently included though with HTTP based apps.

    FTP over SSL isn't fully standardized from what I can tell. See http://www.ford-hutchinson.com/~fh-1-pfh/ftps-ext.html for a rundown on available clients and servers. IME, there are significant incompatibilities between implementations.

  • LFTP is an excellent command-line and scriptable tool. Check out the fm.net page [freshmeat.net] for more info.

    Not sure if it does the encrypted password part, but it has almost every other bell and whistle out there. My fave is the 'mirror' and 'mirror -R' commands - does a comparison with the local file timestamps/sizes and only "get"s or "put"s the required files.
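
    A rough sketch of scripting that (host, login, and paths are made up):

        # pull down only what changed on the remote side
        lftp -u user,pass -e "mirror /pub/project ./project; quit" ftp.example.com
        # push local changes back up
        lftp -u user,pass -e "mirror -R ./project /pub/project; quit" ftp.example.com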
  • Bittorrent (Score:3, Interesting)

    by rafa ( 491 ) <rikard@anglerud.com> on Tuesday October 30, 2001 @09:28PM (#2500707) Homepage Journal
    BitTorrent [bitconjurer.org] has some of what you're looking for. It automatically mirrors when you download, helping ease the load on the server for popular downloads. Worth checking out. It could probably be run over ipsec if you wanted to.
  • by Orasis ( 23315 )
    Hello, I am the creator of Swarmcast and have just written a new paper entitled "HTTP Extensions for the Content-Addressable Web" that is available at onionnetworks.com [onionnetworks.com].

    The Content-Addressable Web provides all of the asked-for features, including multi-source/parallel downloads, and the ability to safely retrieve content from untrusted mirrors.

    Please read the paper and tell me what you think.
