Software

FTP: Better Than HTTP, Or Obsolete? 1093

Posted by timothy
from the don-your-holy-vestments dept.
An anonymous reader asks "Looking to serve files for downloading (typically 1MB-6MB), I'm confused about whether I should provide an FTP server instead of / as well as HTTP. According to a rapid Google search, the experts say 1) HTTP is slower and less reliable than FTP and 2) HTTP is amateur and will make you look a wimp. But a) FTP is full of security holes. and b) FTP is a crumbling legacy protocol and will make you look a dinosaur. Surely some contradiction... Should I make the effort to implement FTP or take desperate steps to avoid it?"
This discussion has been archived. No new comments can be posted.


  • by NinteyThree (256826) on Thursday February 13, 2003 @07:09PM (#5297951)
    ...use sftp!
    • by Karamchand (607798) on Thursday February 13, 2003 @07:18PM (#5298059)
      I guess that's not what s/he wants. It sounds like anonymous downloading of publicly available files - what would we need encryption for then? There are no passwords to protect, no sensitive data to secure. You'd only get hassles from MSIE users who have never heard of sftp.
      • by Just Some Guy (3352) <kirk+slashdot@strauser.com> on Thursday February 13, 2003 @08:24PM (#5298589) Homepage Journal

        It sounds like anonymous downloading of publicly available files - what would we need encryption for then?

        If not for the encryption, then consider what else you get: a well-defined TCP connection. It's a cinch to configure a firewall to allow sftp connections, while FTP firewalling will give you prematurely grey hair (and if it doesn't, then you're not doing it right).

      • by dmayle (200765) on Thursday February 13, 2003 @08:43PM (#5298697) Homepage Journal
        The reason to use sftp or https on publicly available files is so that people <*cough*>Carnivore</*cough*> can't track what I'm doing online. Sure, they can tell what sites you are visiting, but they can't tell what content you're looking at, or which files you downloaded... Imagine a time when you want to download DeCSS for your linux boxen from a foreign server, but someone is logging your downloads <*cough*>Verizon</*cough*> and the RIAA wants access to those records... If they don't know what you downloaded they can't (potentially unjustly) prosecute you...
      • by mr. methane (593577) on Thursday February 13, 2003 @09:54PM (#5299106) Journal
        I provide a mirror for a couple of largish open-source sites, and several of them specifically request that sites provide FTP service as preferred over HTTP. A couple of reasons:

        1. Scripts which need to get a list of files before choosing which ones to download - automated installers and the like - are easier to implement with FTP.

        2. FTP generally seems to chew up less CPU on the host. I can serve 12 Mb/s of traffic all day long on a P-II 450 box with only 256 MB of memory.

        3. "download recovery" (after losing connection, etc.) seems to work better in FTP than HTTP.
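The listing-then-fetch pattern in point 1 can be sketched with Python's standard ftplib. The host and directory below are hypothetical, and the tarball filter is just one arbitrary selection rule:

```python
# Sketch of an automated fetcher that lists files before deciding
# what to download, as in point 1 above. Host/path are placeholders.
from ftplib import FTP

def pick_tarballs(names):
    """Choose which listed files to fetch (here: gzipped tarballs)."""
    return [n for n in names if n.endswith(".tar.gz")]

def mirror_tarballs(host, directory):
    ftp = FTP(host)                            # control connection, port 21
    ftp.login()                                # anonymous login
    ftp.cwd(directory)
    for name in pick_tarballs(ftp.nlst()):     # NLST: plain name listing
        with open(name, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()

# usage (hypothetical server):
# mirror_tarballs("ftp.example.org", "/pub/releases")
```

Note that doing the same over HTTP requires scraping an HTML index page, which is exactly why mirror scripts historically preferred FTP.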

    • by nuxx (10153) on Thursday February 13, 2003 @07:22PM (#5298117) Homepage
      There's no reason to use sftp for publicly available files. This is for the exact same reason that you wouldn't use https. There's no need for encryption of something that is freely, publicly available. Checksums, yes; encryption, no.

      I personally would say go with http for the files, as it'll be much easier for people behind http proxies to download, it'll get cached more often by transparent proxies, and most browsers support browsing http directories FAR better than FTP directories.
    • OR, How about... (Score:5, Informative)

      by Anenga (529854) on Thursday February 13, 2003 @07:32PM (#5298205)
      P2P?

      I've written a tutorial [anenga.com] on how you can use P2P on your website to save bandwidth, space etc. An obvious way to do this would be to run a P2P client and share the file on a simple PC & Cable Modem. This works, but it is a bit generic and unprofessional. A better way to do this may be to run a P2P client such as Shareaza [shareaza.com] on a webserver. You could then control the client using some type of remote service (Terminal Services, for example).

      P2P has its advantages, such as:
      - Users who download the file also share it. This is especially useful if the client/network supports Partial File Sharing.
      - When you release the file using the P2P client, you only need to upload to only a few users. Those users can then share the file using Partial File Sharing etc.
      - Unlike FTP and HTTP, they aren't connecting to your webserver. Thus, it saves bandwidth for you and allows people to browse your website for actual content, not media. (Though, media is content). In addition, there is usually a "Max # of Connections" limit on a server or FTP. Not so on P2P.
      - P2P Clients have good queuing tools. At least, Shareaza does. It has a "Small Queue" and a "Large Queue". This basically allows you to have, say, 4 Upload slots for Large Files (Files that are above 10MB, for example) and one for Small Files (Under 10MB). Users who are waiting to download from you can wait in "Queue", instead of "Max users connected" on FTP.

      Though, at its core, all of the P2P I know of uses HTTP to send files etc. But the network layer helps file distribution tremendously.
      • Re:OR, How about... (Score:4, Interesting)

        by mlinksva (1755) on Thursday February 13, 2003 @08:29PM (#5298624) Homepage Journal
        Excellent tutorial overall, save the sniping at non-Shareaza Gnutella clients. The great thing about MAGNET [sf.net] is that it is client/network agnostic. Shareaza was the first client to support MAGNET and it's an excellent program, but it isn't the only one (at least Xolox does right now, with several others either recently or very soon to be added). The part about disallowing uploads to non-Shareaza clients is completely bogus -- allowing others to download a) doesn't prevent other Shareaza users from downloading and b) limits the number of people you'll be able to distribute content to in a cost effective P2P manner. BTW, you can share your content with any modern Gnutella client (i.e., allows download by hash), and it will be available to people using MAGNET, even if the sharing client doesn't support MAGNET yet.

        Also, you forgot the first and biggest site with MAGNET links [bitzi.com]. Still, an excellent tutorial, thanks for writing it!

    • by Hug Life (643998) on Thursday February 13, 2003 @07:37PM (#5298246)
      While this does help with avoiding an ancient protocol, it forgets one of the poster's main goals. The poster worried HTTP is slower ... than FTP. SFTP is very slow considering the overhead encryption adds to each packet. -js
    • by emil (695) on Thursday February 13, 2003 @07:42PM (#5298285) Homepage

      What I don't care for with FTP is the continuous setup/teardown of data connections. What is even worse with active FTP is that the client side of the data connection establishes server ports, and the server becomes the client (I'd like to be able to use plug-gw from the TIS FWTK for FTP, but this is not possible for the data connections). However, even when enabling passive FTP, the data connections are too prone to lockup. The difficulty of implementing all of this in C probably contributes to the FTP server vulnerabilities.

      Still, if you want both (optionally anonymous) upload ability and access from a web browser, FTP is the only game in town.

      From the network perspective, the rsh/rcp mechanism is cleaner (in that there is only one connection), but it still has the problem of either passing cleartext authentication or establishing unreasonable levels of trust with trivial authentication. In addition, with rcp syntax you must know much more about the path to a file, and there is no real "browsing."

      Many say that SSH is the panacea for these problems, but sometimes I am not concerned about encryption and I just want to quickly transfer a file. The SSH man pages indicate that encryption can be disabled, but I have never been able to make this work. SCP also has never been implemented in a browser for file transfers. I should also say that I've never used sftp, because it has so little support.

      Someday, we will have a good, encrypted file transfer protocol (and reliable implementations of that protocol). Sorry to say, but ftp, rcp, and scp are not it. What will this new protocol support?

      1. Stateless operation a la NFS and FSP.
      2. Unlike early (non-V3) NFS, selection of either TCP or UDP for lossy or non-lossy networks.
      3. Support for a centralized authentication key repository (a la Verisign), but support also for locally-defined, non-registered keys.
      4. Support for both encrypted and non-encrypted transfers.
      5. Multiple client connections per server, possibly implemented with threads (do not spawn one server process per client a la Samba, ftpd, httpd, etc.).
      6. Support for chroot operation on UNIX, without the need for implementing /bin/ls, libc, passwd, et al.
      7. And, of course, we need to keep compression.

      Boy, I never thought that I could rant about file transfer software for so long!

    • by Daytona955i (448665) <flynnguy24NO@SPAMyahoo.com> on Thursday February 13, 2003 @08:08PM (#5298478)
      sftp is not the way to go if you want public access of files. sftp would be the way to go if an account were required to download/upload files.

      If the files you are serving are large then use ftp. If the files are smaller (less than 10MB) use http.

      http is great, I sometimes throw up a file on there if I need to give it to someone and it is too big to e-mail. (Happened recently with a batch of photos from the car show)

      Since I already have a web page it was easy to just throw the file in the http directory and provide the link in an e-mail.

      I like http for the most part. I doubt anyone will call you lame for using it, unless the files are huge.
      -Chris
  • hmm (Score:5, Interesting)

    by nomadic (141991) <nomadicworldNO@SPAMgmail.com> on Thursday February 13, 2003 @07:10PM (#5297955) Homepage
    I haven't really noticed any reliability issues with http anymore. If it starts loading it usually finishes, and I haven't run into any corruption problems. Maybe if you were serving huge files ftp would be a good idea, but for 1-6 mb it's probably not worth it.
    • Re:hmm (Score:3, Informative)

      by cbv (221379)
      If it starts loading it usually finishes, and I haven't run into any corruption problems.

      You may (just may) run into a routing or timeout problem, in which case the download will stop and you are forced to do the entire download again. Using the right client, e.g. ncftp, you can continue downloading partially downloaded files - an option HTTP doesn't offer.

      With respect to the original question, I would set up a box offering both HTTP and FTP access.

      • Re:hmm (Score:4, Interesting)

        by nomadic (141991) <nomadicworldNO@SPAMgmail.com> on Thursday February 13, 2003 @07:21PM (#5298101) Homepage
        Right, but for 1-6 mb I'd rather just try my luck with http. Especially since http is faster to connect to than ftp.
      • Re:hmm (Score:5, Informative)

        by toast0 (63707) <slashdotinducedspam@enslaves.us> on Thursday February 13, 2003 @07:21PM (#5298110) Homepage
        using the right client, e.g. wget, you can resume from http streams provided the server supports it (and I think most modern ones do)
      • Re:hmm (Score:5, Informative)

        by tom.allender (217176) on Thursday February 13, 2003 @07:28PM (#5298169) Homepage
        you can continue downloading partially downloaded files. An option, HTTP doesn't offer.

        Plain wrong. RFC2068 [w3.org] section 10.2.7.
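The Range mechanism that RFC section describes can be sketched in a few lines of Python. This is a hedged illustration, not a full download manager: the URL is a placeholder, and a robust client would verify checksums as well:

```python
# Minimal sketch of HTTP download resume via the Range header
# (RFC 2068/2616, 206 Partial Content). URL below is hypothetical.
import os
import urllib.request

def resume_headers(partial_path):
    """Ask for everything after the bytes we already have on disk."""
    have = os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
    return ({"Range": "bytes=%d-" % have}, have) if have else ({}, 0)

def resume_download(url, path):
    headers, have = resume_headers(path)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp, open(path, "ab") as f:
        if have and resp.status != 206:
            f.truncate(0)          # server ignored the Range; start over
        f.write(resp.read())

# usage: resume_download("http://example.org/big.iso", "big.iso")
```

Checking for the 206 status matters: a server that doesn't support ranges replies 200 with the whole file, and blindly appending that would corrupt the download.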

      • Re:hmm (Score:5, Informative)

        by Jucius Maximus (229128) <zyrbmf5j4x AT snkmail DOT com> on Thursday February 13, 2003 @08:34PM (#5298653) Homepage Journal
        "You may (just may) run into a routing or timeout problem, in which case the download will stop and you are forced to do the entire download again. Using the right client, eg. ncftp, you can continue downloading partially downloaded files. An option, HTTP doesn't offer."

        This is incorrect. Practically every download manager out there allows resuming HTTP downloads. There are only a few (very rare) servers that don't allow this, I guess due to them running HTTP 1.0.

        Almost all windows download managers allow it, and for linux, check out 'WebDownloader for X' which has some good speed limiting features as well.

      • by @madeus (24818) <slashdot_24818@mac.com> on Thursday February 13, 2003 @08:36PM (#5298664)

        There are many reasons to support HTTP over FTP for small files.

        HTTP is a much faster mechanism for serving small files of a few MB (HTTP doesn't separately check the integrity of what you've just downloaded; it relies purely on TCP's ability to check that all your packets arrived and were arranged correctly).

        Not only is HTTP faster both in initiating a download and while the download is in progress, it typically has less overhead on your server than is caused by serving the same file using an FTP package.

        If you are serving large files (multiple tens of MB) it would be advisable to also have an FTP server, though many users still prefer HTTP for files over 100 MB, and use FTP only if the site they are connecting to is unreliable.

        The speed of today's connections (56k, or DSL, or faster) means that the FTP protocol is not redundant, but it's less of a requirement than it used to be - as the consensus of what we consider to be a large file size has changed greatly.

        There was a time when anything over 500K was considered 'large' and the troublesome and unreliable nature of connections meant that software that was over that size would almost certainly need to be downloaded via FTP to ensure against corruption.

        Additionally, many web servers (Apache included) and web browsers (Netscape/Mozilla included) support HTTP resume, which works just like FTP resume.

        Unless you are serving large files (e.g. over 20 to 30 MB) or you have a dreadful network connection (or your users will - for example if they will be roaming users connecting via GPRS), HTTP is sufficient and FTP will only add to your overhead, support and administration time.

        One last note: I'd also add that many users in corporate environments are not able to download via FTP due to poorly administered corporate firewalls. This occurs frequently even in large corporations due to incompetent IT and/or 'security' staff. This should not put you off using FTP, but it is one reason to support HTTP.
    • Here's how they work (Score:5, Informative)

      by tyler_larson (558763) on Friday February 14, 2003 @06:26AM (#5300631) Homepage
      I've worked pretty extensively with these two protocols, writing clients and servers for both. I've read all the relevant RFCs start-to-finish (whole lotta boring) and have a pretty good idea about what they both can do. Now, there's a lot of talk about the two, but few people really understand how they work.

      Forget people's opinions and observations about which is better; here's what they both do, you decide what you like. If you still want opinions, I give mine at the bottom.

      HTTP
      The average HTTP connection works like this:

      • The client initiates a connection. The server accepts but does not send any data.
      • The client sends his request string in the form
        [Method] [File Location]?[Query String] [HTTP version]
      • The client then sends a whole bunch of headers, each consisting of a name-value pair. A whole lot can be done with these headers; here are some highlights:
        • Authentication (many methods supported)
        • Download resume instructions
        • Virtual Host identification (so you can use multiple domains on one IP)
      • The client then can follow the headers up with some raw data (such as for file uploads or POST variables)
      • The server then sends a response string in the form
        [HTTP Version] [Response code] [Response string]
        where the response string is just a human-readable equivalent of the 3-digit response code.
      • Next, the server sends its own set of headers (usually describing the data it's about to send: file type, language, size, timestamp, etc.)
      • Finally, the server sends the raw data for the response itself (usually file contents).
      • If this is a keep-alive connection, we start over. Otherwise the connection is closed
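The exchange above can be spelled out as actual bytes on the wire. Host and path below are placeholders; the sketch speaks HTTP/1.0 so the server closes the connection when the body is done:

```python
# The HTTP request/response walkthrough above, as raw socket traffic.
import socket

def build_request(host, path):
    return (
        "GET %s HTTP/1.0\r\n"      # [Method] [File Location] [HTTP version]
        "Host: %s\r\n"             # virtual-host identification header
        "\r\n" % (path, host)      # blank line ends the headers
    ).encode("ascii")

def fetch(host, path, port=80):
    with socket.create_connection((host, port)) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    raw = b"".join(chunks)
    headers, _, body = raw.partition(b"\r\n\r\n")
    return headers.decode("latin-1"), body   # status line + headers, then data

# usage: status_and_headers, body = fetch("example.org", "/index.html")
```

One connection carries the request, the headers, and the file contents; that single-socket simplicity is exactly what the FTP section below lacks.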

      FTP
      FTP connections are a little less structured. The client connects, the server sends a banner identifying itself. The client sends a username and password, the server decides whether to allow access or not. Also, many FTP servers will try and get an IDENT from the client. Of course, firewalls will often silently drop packets for that port and the FTP server will wait for the operation to timeout (a minute or two) before continuing. Very, very annoying, because by then, the client has given up too.

      Next, the client sends a command. There's a lot of commands to choose from, and not all servers support all commands. Here are some highlights:

      • Change directory
      • Change transfer mode (ascii/binary) -- ascii mode does automatic CR/LF translation, nothing more.
      • Get a file
      • Send a file
      • Move a file
      • Change file permissions
      • Get a directory listing

      And here's my favorite part. Only requests/responses go over this connection. Any data at all (even dir listings) has to go over a separate TCP connection on a different port. No exceptions. Most people don't understand this point, but even PASV mode connections must use a separate TCP connection for the data stream. Either the client specifies a port for the server to connect to with the PORT command, or the client issues a PASV command, to which the server replies with a port number for the client to use in connecting to the server.

      The client does have the option to resume downloads or retrieve multiple files with one command. Yay.
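The PASV dance just described hinges on parsing the server's reply, where six numbers encode the data address and the port as p1*256 + p2. A small sketch of that fiddly bit:

```python
# Parsing the PASV reply from the FTP walkthrough above.
import re

def parse_pasv(reply):
    """'227 Entering Passive Mode (192,168,0,1,19,137)' -> (host, port)."""
    nums = re.search(r"\((\d+,\d+,\d+,\d+,\d+,\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, nums.group(1).split(","))
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

# With that in hand, a transfer needs two sockets: send PASV on the
# control connection, connect a second socket to the address returned,
# send RETR on the control connection, then read the file from the
# data socket until the server closes it.
```

In practice Python's ftplib hides all of this, but the two-socket requirement is why FTP and firewalls get along so badly.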

      Some Observations

      • FTP authentication is usually plain-text. Furthermore, authentication is mandatory. Kinda stupid for public fileservers, if you ask me.
      • FTP is interactive--much better for interactive applications (most FTP clients) but unnecessary overhead for URL-type applications.
      • Both protocols depend on TCP to provide reliability. Reliability is NOT a distinguishing characteristic.
      • For transferring files, both send raw data over a single TCP stream. Neither is inherently faster because they send data in the exact same way.

      My Opinion
      I honestly think FTP was a bad idea from the beginning. The protocol's distinguishing characteristic is the fact that data has to go over a separate TCP stream. That would be a great idea if you could keep sending commands while the file transfers in the background... but instead, the server waits for the file to finish before accepting any commands. Pointless.

      FTP is not better for large files, nor is it better for multiple files. It doesn't go through firewalls, and quality clients are few. HTTP is equally reliable, but also universally supported. There are also a number of high quality clients available.

      In fact, the only thing FTP is better for is managing server directories and file uploads. But for that, you really should be using something more secure, like sftp (ssh-based).

      Bottom line, ditch FTP. Use HTTP for public downloads and sftp for file management.

      • by SEAL (88488) on Friday February 14, 2003 @09:05AM (#5301034)
        I honestly think FTP was a bad idea from the beginning. The protocol's distinguishing characteristic is the fact that data has to go over a separate TCP stream.

        First, I think FTP was a *good* idea, when you consider that its initial design was in 1971, predating even SMTP. Also since FTP was created when mainframes were king, it has features that seem like overkill today.

        Both protocols depend on TCP to provide reliability. Reliability is NOT a distinguishing characteristic.

        Oh but read the RFC young jedi :) There's a lot more to FTP than you might notice at first glance. The problem is that many clients and servers only partially implement the protocol as specified in the RFC. In particular, nowadays the stream transfer mode is used almost exclusively, which is the least reliable mode, and forces opening a new data connection for each transfer.

        If you dig into RFC 959 more, you'll see some weird things you can do with FTP. For example, from your client, you can initiate control connections to two separate servers, and open a data connection between the two servers.

        There's a lot of power and flexibility built into FTP, and that's why it has stuck around for 30 years. That's really phenomenal when you think about anything software related. Even though most current firewall vendors support active-mode pretty well, passive mode was there all along, showing that someone thought of this potential issue in advance. The main weakness of FTP is that it sends passwords over the wire in plaintext, but for an anonymous FTP server this isn't an issue.

        This is a good resource if you want to read up on the history and development of FTP:

        http://www.wu-ftpd.org/rfc/

        Best regards,

        SEAL

  • gopher (Score:5, Funny)

    by mz001b (122709) on Thursday February 13, 2003 @07:10PM (#5297957)
    I think the only reasonable way to do these things is to put up a gopher site.
    • Re:gopher (Score:3, Funny)

      by ErikTheRed (162431)
      Nah, use finger... then you get the advantages of perverted humor as well...
    • by Speare (84249) on Thursday February 13, 2003 @08:10PM (#5298492) Homepage Journal

      Today I set up an Apache + mod_ssl + mod_dav server for "drag and drop" shared file folders that can be used by any Windows or Linux client over a single well-known socket port (https=443/tcp). It took me two hours, without knowing a thing about WebDAV or SSL, to get both working together.

      Windows calls it a "Web Folder" while the protocol is usually called DAV or WebDAV. It extends the HTTP GET/POST protocol itself with file management verbs like COPY, MOVE, DELETE and others.

      The key benefits are: almost zero training for users, and flexibility while building on proven protocols.

      WebDAV doesn't do authentication or encryption itself, but these can be layered in with .htaccess and/or LDAP and/or SSL server certificates.

      There are a few howto's out there. Google.
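What a WebDAV client actually sends is ordinary HTTP with extra verbs. A hedged sketch (server name is a placeholder, and credentials/TLS validation are omitted for brevity):

```python
# Minimal WebDAV interaction: PROPFIND lists a collection's contents.
import http.client

def propfind_body():
    """Minimal PROPFIND request body asking for all properties."""
    return b'<?xml version="1.0"?><propfind xmlns="DAV:"><allprop/></propfind>'

def list_folder(host, path):
    conn = http.client.HTTPSConnection(host)   # single well-known port, 443
    conn.request("PROPFIND", path, body=propfind_body(),
                 headers={"Depth": "1"})       # this collection + children
    return conn.getresponse().read()           # 207 Multi-Status XML

# usage (hypothetical server): print(list_folder("dav.example.org", "/shared/"))
```

Uploads are just PUT requests, which is why any HTTP-aware firewall passes WebDAV untouched.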

      • by TomatoMan (93630) on Thursday February 13, 2003 @08:53PM (#5298759) Homepage Journal
        OSX natively supports WebDAV; choose "Connect to Server..." in the Finder, enter the URL, your username and password, and it mounts the web folder as a local disk. You can save directly into it from any application, as well as create folders and drag-and-drop copy from the Finder. The network layer is completely hidden from apps; you can even copy files to a WebDAV volume from a shell and it's entirely transparent (look in /Volumes). Very, very cool.
  • by Anonymous Coward on Thursday February 13, 2003 @07:11PM (#5297969)
    Use telnet and screen capture the VT100 Term buffer!
  • do both... (Score:4, Informative)

    by jeffy124 (453342) on Thursday February 13, 2003 @07:11PM (#5297971) Homepage Journal
    But in my experiences, HTTP for whatever reason goes faster (not entirely sure why), and FTP doesn't work for some because of firewalls.

    Try both - see which gets used more.
  • how about rsync? (Score:5, Informative)

    by SurfTheWorld (162247) on Thursday February 13, 2003 @07:11PM (#5297973) Homepage Journal
    rsync is a great protocol: fairly robust, can be wrapped in ssh (or not), supports resuming transmission, and operates over one socket.

    seems like the best of both worlds to me.

    the real question is - do you control the clients that are going to access you? or is it something like a browser (which doesn't support rsync).

    • by MisterMook (634297) on Thursday February 13, 2003 @07:34PM (#5298222) Homepage
      I don't know, after Rsync's last album I've decided that they're probably too old for serious contending in the boy-band heavy marketplace.
    • Re:how about rsync? (Score:5, Informative)

      by Dr. Awktagon (233360) on Thursday February 13, 2003 @07:58PM (#5298398) Homepage
      Agreed.. I've had enough headaches with FTP and firewalls/NAT, let's just let it die. For robust downloading of large files rsync is the protocol to use.

      For those not familiar: rsync can copy or synchronize files or directories of files. It divides the files into blocks and only transfers the parts of the file that are different or missing. It's awesome for mirrored backups, among other things. There is even a Mac OS X version that transfers the Mac-specific metadata of each file.

      Just today I had to transfer a ~400MB file to a machine over a fairly slow connection. The only way in was SSH and the only way out was HTTP.

      First I tried HTTP and the connection dropped. No problem, I thought, I'll just use "wget -c" and it will continue fine. Well, it continued, but the archive was corrupt.

      I remembered that rsync can run over SSH and I rsync'd the file over the damaged one. It took a few moments for it to find the blocks with the errors, and it downloaded just those blocks.

      Rsync should be built into every program that downloads large files, including web browsers. Apple or someone should pick up this technology, give it some good marketing ("auto-repair download" or something) and life will be good.

      Rsync also has a daemon mode that allows you to run a dedicated rsync server. This is good for public distribution of files.

      Rsync is the way to go! I guess this really doesn't 100% answer the poster's question, but people really should be thinking about rsync more.
      • Re:how about rsync? (Score:5, Interesting)

        by swillden (191260) <shawn-ds@willden.org> on Thursday February 13, 2003 @09:10PM (#5298856) Homepage Journal

        Rsync is the way to go!

        Rsync is great in theory, but the implementation has one major problem that makes it less than ideal for many cases: It puts a huge burden on the server, because the server has to calculate the MD5 sums on each block of each file it serves up, which is a CPU-intensive task. A machine which could easily handle a few dozen HTTP downloads at a time would choke with only a few rsync downloads.

        This is a problem with the implementation, not with the theory, because it wouldn't be that difficult for the rsync server to cache the MD5 sums so that it only had to calculate them once for each file (assuming it's downloading static content -- for dynamic content rsync will probably never make sense, particularly since we can probably expect bandwidth to increase faster than processing power). The server could even take advantage of 'idle' times to precalculate sums. Once it had all of the sums cached, serving files via rsync wouldn't be that much more costly in terms of CPU power than HTTP or FTP, and it would often be *much* more efficient in terms of bandwidth.
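The caching idea in the parent can be sketched simply: compute each file's per-block digests once, keyed by path and modification time, so repeated requests against static content skip the CPU work. The block size and cache policy below are arbitrary choices for illustration, not rsync's actual implementation:

```python
# Sketch of server-side caching of per-block checksums, as proposed
# above. Keyed on (path, mtime) so edits invalidate the cache entry.
import hashlib
import os

_BLOCK = 64 * 1024
_cache = {}   # (path, mtime) -> list of block digests

def block_sums(path):
    key = (path, os.path.getmtime(path))
    if key not in _cache:
        sums = []
        with open(path, "rb") as f:
            while chunk := f.read(_BLOCK):
                sums.append(hashlib.md5(chunk).hexdigest())
        _cache[key] = sums
    return _cache[key]
```

With the sums precomputed, answering an rsync-style request is mostly I/O rather than hashing, which is the point the parent is making.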

  • by emf (68407) on Thursday February 13, 2003 @07:11PM (#5297975)
    "HTTP is slower and less reliable than FTP"

    I would think FTP is slower since with FTP you have to log in and build the data connection before the transfer begins. With HTTP it's a simple GET request.

    As far as the actual data being sent, I believe that the file is sent the same way with both protocols. (just send the data via a TCP connection). I could be wrong though.

  • by twiggy (104320) on Thursday February 13, 2003 @07:11PM (#5297979) Homepage
    As long as you're willing to secure your FTP server and do the simple stuff like watch out for file permissions - FTP is much better.

    HTTP is restricted by browsers, many of which will not support files larger than a certain size. Furthermore, FTP allows for features such as resume, etc...

    The real question, however, is what are you trying to use this for? What's your intended application?

    If it's a file repository for moderately computer literate people - FTP is definitely the way to go.

    If it's a place for average-joes to store pictures, maybe HTTP is your best option. Sacrificing a bit of speed and capabilities such as resume might be made up for with ease of use..
    • by Fastolfe (1470) on Thursday February 13, 2003 @07:19PM (#5298071)
      Furthermore, FTP allows for features such as resume, etc...

      So does HTTP. With the 'Range' header, you can retrieve only a portion of a resource.

      I agree that it really depends on the application, but for most practical "view directory, download file" purposes, there's no significant difference.

      If you wanted to interact with a directory structure, change ownerships, create directories, remove files, etc., it's generally easier to do this with FTP.
  • by kisrael (134664) on Thursday February 13, 2003 @07:12PM (#5297988) Homepage
    Whenever I see a list of FTP mirrors with one HTTP version, the HTTP version is faster and more reliable 9 times out of 10.

    It's generally simpler to get to from a browser, which is where 95% of people's online life is anyway. Yeah, you can rig up an FTP URL, but it seems a bit kludgey and more prone to firewall issues.
    • by jez9999 (618189) on Thursday February 13, 2003 @07:41PM (#5298278) Homepage Journal
      Whenever I see a list of FTP mirrors with one HTTP version, the HTTP version is faster and more reliable 9 times out of 10.

      I suspect that's because 99% of people are downloading from one of the FTP servers.

      It's generally simpler to get to from a browser, which is where 95% of people's online life is anyway.

      I honestly don't see how.

      Yeah, you can rig up a FTP URL, but it seems a bit kludgey

      ftp://www.mysite.com/file.zip

      How is that kludgey?
      • by @madeus (24818) <slashdot_24818@mac.com> on Thursday February 13, 2003 @09:13PM (#5298875)
        Whenever I see a list of FTP mirrors with one HTTP version, the HTTP version is faster and more reliable 9 times out of 10.

        I suspect that's because 99% of people are downloading from one of the FTP servers.


        I put it to you that it would be more logical to suspect it's because HTTP is faster than FTP as a transfer protocol. It generates less traffic (and uses less CPU overhead), which means downloads finish quicker.

        Additionally the CPU overhead generated by FTP connections also causes many sites to limit the number of users who can connect, which often results in 'busy sessions', something much rarer with HTTP (as HTTP servers typically have very high thresholds for the number of concurrent connections they will support). The overhead on a server of a user downloading a file over FTP is much greater than that of a user downloading the same file over HTTP.

        Although FTP is of course theoretically more reliable than HTTP, in practice 'Server busy: Too many users' messages, combined with the speed and reliability of modern connections (which in turn make HTTP more reliable), mean the reverse is often the case from a user perspective - which is what I think the poster is getting at.

        This may be partly due to poor FTP server configuration defaults and/or poor administration, but they cannot shoulder all the blame.

        The potential lack of reliability with HTTP is a very minor issue these days, and the extra overhead of integrity checking files in addition to relying on TCP is just not warranted for all but the largest of files.

        This doesn't make FTP completely redundant, but it does make it redundant when your files are small and your users are on fast, reliable connections (though the value of 'fast' varies in relation to the size of the file; even 33 kbps is 'fast' compared to the speed of connections that proliferated when the File Transfer Protocol was developed).
  • by Telastyn (206146) on Thursday February 13, 2003 @07:13PM (#5297995)
    1-6mb files?
    heh, most 1-6mb files I see are on irc fserves :P

  • for what its worlth (Score:3, Informative)

    by dunedan (529179) <antilles@NOsPAm.byu.edu> on Thursday February 13, 2003 @07:14PM (#5298003) Homepage
    Those of your customers who don't have fast access to the internet may appreciate even a slightly faster standard.
  • HTTP is fine (Score:5, Informative)

    by ahknight (128958) on Thursday February 13, 2003 @07:14PM (#5298012)
    HTTP does not have firewall issues, does not need authentication, does not (by default) allow directory listings, and is the same speed as FTP. It's a good deal for general file distribution.

    FTP is quickly becoming a special-needs protocol. If you need authentication, uploads, directory listings, accessibility with interactive tools, etc. then this is for you. Mainly useful for web designers these days, IMO, since the site software packages can use the extra file metadata for synchronization. Other than that, it's a lot of connection overhead for a simple file.

    FTP does have one nice advantage that HTTP lacks: it can limit concurrent connections based on access privileges (500 anonymous and 100 real, etc.). Doesn't sound like you need that.

    Go with HTTP. Simple, quick, anonymous, generally foolproof.
  • by Anonymous DWord (466154) on Thursday February 13, 2003 @07:14PM (#5298014) Homepage
    1 to 6 megs, huh? Why not use Kazaa like everybody else? :-P
  • by Anonvmous Coward (589068) on Thursday February 13, 2003 @07:15PM (#5298020)
    "HTTP is amateur and will make you look a wimp"

    You really gotta watch out for things like this. I know one guy that got a 'click me' sign on his back because he used HTTP instead of FTP.
  • Transparent (Score:5, Insightful)

    by mao che minh (611166) on Thursday February 13, 2003 @07:15PM (#5298024) Journal
    It's almost transparent - most people (99.9%) don't know the difference between http and ftp. The .1% that "gets it" don't care what you're using as long as the pr0n gets from point A to point B (point B being my computer, which I lovingly call "My Pr0ndex").

    And I wouldn't care about the opinion of someone who would actually judge you over what friggin protocol you use to provide downloads. Such an utter nerd is something that I cannot relate to. Maybe after I use Linux for a few more years, who knows.

  • by fwankypoo (58987) <jason.terk@gmai[ ]om ['l.c' in gap]> on Thursday February 13, 2003 @07:15PM (#5298028) Homepage
    The question is, "what do you want to do?" I run an FTP server (incidentally affiliated with etree.org, lossless live music!) and I need what it can give me. Namely I need multiple classes of login, each with a different

    1) number of available slots
    2) speed limit
    3) permission set

    Some people can only read files at 60KB/s, some can read and write (to the upload dir) at the same speed, some can only browse, etc. etc. For this kind of a setup, FTP is great _IF_ you keep your software up to date; subscribe to bugtraq or your distro's security bulletin or both.

    On the other hand, HTTP is great when you want to give lots of people unlimited ANONYMOUS access to something. I'm sure there is a way to throttle bandwidth, but can you do it on a class by class basis? In proftpd it's a simple "RateReadBPS xxx" and I'm set.

    As always, choose the tool that fits _your_ purpose, not the one that everyone says is "best"; they both have good and bad qualities. And http can be just as secure/insecure as any other protocol.
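For concreteness, the per-class throttling the parent mentions might look something like the fragment below. This is a hedged sketch, not the poster's actual config: the directive names are from proftpd 1.2-era documentation and the values are invented.

```
<Anonymous ~ftp>
  User          ftp
  Group         ftp
  # Cap concurrent anonymous logins with a friendly rejection message
  MaxClients    50 "Sorry, max %m users -- try again later"
  # Roughly the 60KB/s read cap described above (bytes per second)
  RateReadBPS   61440
</Anonymous>
```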
  • SCP (Score:5, Interesting)

    by elliotj (519297) <slashdot.elliotjohnson@com> on Thursday February 13, 2003 @07:16PM (#5298035) Homepage
    If you're only offering files to a group of users who you can give passwords to, you could even use SCP. (Secure copy...uses sshd on the server side)

    It all depends on the application. I only use SCP to move files around if I have the choice, just because I like better security if I can have it.

    But if you want to offer files to the public, I'd recommend offering both FTP and HTTP so people can use the most convenient.
  • by kazrak (31860) on Thursday February 13, 2003 @07:17PM (#5298049)
    I question why people think FTP is 'faster' or 'more lightweight' than HTTP. HTTP is a fairly lightweight protocol, and what overhead it does have is massively outweighed by the size of the files when you get into the multi-megabyte range. Add in that everything can be done in one transaction via HTTP (compared to logging in, changing to the right directory, activating passive mode if needed, starting the transfer, opening up a second TCP connection for the data transfer, etc. for FTP) and I really don't see a performance advantage to FTP.

    Security-wise, HTTP is a big win over FTP if only because it makes your port-filtering easier - "allow to 80" is simpler and less likely to cause unintended holes than all the things you need to do to support FTP active and passive connections. Certain FTP server software has a reputation as having more security holes than IIS, but there are FTP servers out there that are as secure as Apache.
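    To make the single-transaction point concrete, here is a minimal sketch: the whole HTTP exchange is one request on one TCP connection, while FTP issues a series of commands before the file moves over a second connection. Host, path, and file names below are hypothetical.

```python
# Hedged sketch: one HTTP request fetches the file; FTP needs a command
# dialogue first, plus a separate data connection. Names are made up.
def http_request(host, path):
    # This one blob, written to a single connection, retrieves the file.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n")

# Typical anonymous-FTP command sequence before any file data flows:
ftp_commands = ["USER anonymous", "PASS guest@", "TYPE I",
                "PASV", "CWD /pub", "RETR file.zip", "QUIT"]

print(http_request("www.mysite.com", "/pub/file.zip"))
print(f"{len(ftp_commands)} FTP commands, plus a second data connection")
```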

  • by PhaseBurn (44685) <PhaseBurn@PhaseBurn.net> on Thursday February 13, 2003 @07:18PM (#5298056) Homepage
    From my point of view (A network administrator), I provide both ftp and http servers for the same files (stick all downloads off /download or something, and set the ftp root to that). This has several benefits...

    1) I've found HTTP transfers are a little faster than FTP transfers (just personally, and I can in no way prove it - it may be user error, or just the programs I'm using)

    2) I've found that FTP clients are everywhere - Windows, Linux, BSD; everything I've ever installed has included a command-line FTP client, but not a web browser unless I specifically remember to install one. Furthermore, most of the "live CDs/boot disks" that I use don't have a web browser, but do have FTP... Thus, if you're serving files that a person without a web browser/server might need, I'd set up both...

    3) FTP security is what you/your daemon makes of it. wu-ftpd has a long history of being rooted... ProFTPd doesn't. vsftpd doesn't. HTTP security is the same way... IIS has a long history of being rooted... Apache doesn't... *(Not to say that there haven't been occasional exploits for these platforms)

    There is no clear "Use this" or "Use that" procedure here, it depends entirely on your situation, what you're serving, what your network setup is, etc...
  • Security (Score:3, Insightful)

    by Devil's BSD (562630) on Thursday February 13, 2003 @07:19PM (#5298073) Homepage
    If you're looking for security, look into sftp, part of the openssh package. It uses the same encryption as SSH, is secure, yadayada. The only drawback is that windoze users have to get the sftp client to connect to an sftp server. Our school is considering adding sftp to the student fileserver so that we can access files from home without risk of attack.
  • WebDAV? (Score:3, Insightful)

    by Kevinv (21462) <kevin@NoSPam.vanhaaren.net> on Thursday February 13, 2003 @07:19PM (#5298083) Homepage
    How about implementing a webdav solution? You can get away from clear-text passwords, users can mount them like a drive on Mac, Windows and Linux (via DAVfs, http://freshmeat.net/projects/davfs/?topic_id=143%2C90 )

    You still have some of the unreliability and slowness of HTTP transfers. But it works a lot better through firewalls (and more securely, since connection tracking works better with WebDAV).

    I've found Passive Mode FTP to also be more unstable than standard ftp transfers.
  • HTTP Vs FTP (Score:5, Insightful)

    by neurojab (15737) on Thursday February 13, 2003 @07:21PM (#5298109)
    The efficiency differences will be debated forever... the common wisdom is that FTP is more efficient, but there is also evidence [hypermart.net] to the contrary. That isn't the point.

    To me, this is a problem of authentication. If you want EVERYONE to have these files, why not just use the HTTP server? If you're targeting a select few people, then why not use the built-in authentication mechanisms of FTP?

    Yes I know there are authentication mechanisms for HTTP, but they're arguably harder to implement than setting up an FTP server.

    Are your clients only using web browsers to retrieve these files? I'll get flamed for this, but web browsers were not designed for FTP, and thus are clunky at it. HTTP wins there again.

    Don't worry about it. Just use HTTP and let the FTP bigots flame away.
  • by boy_afraid (234774) <Antebios1@gmail.com> on Thursday February 13, 2003 @07:24PM (#5298137) Journal
    Come on people, use the Z-Modem protocol. It can resume transmission on a file transfer where HTTP or FTP cannot. The only way an FTP or HTTP transfer can resume is with a tool like GetRight.

    I remember in my days of BBSes with X and Y Modem, and then when Z-Modem showed up we all couldn't be happier. When some idiot in the house picked up the phone and disconnected you from hours and hours of downloading the latest Leisure Suit Larry, I just reconnected and resumed my downloads (but only if I had enough credit, then I might have to upload some crap). :) HA HA!
  • by Neck_of_the_Woods (305788) on Thursday February 13, 2003 @07:25PM (#5298145) Journal

    Why do we have all these new Ask Slashdot questions that sound like a tech with a year's experience is asking how to do his job?

    I vote for a new section, "How do I do my job" with a dollar bill as the logo.

  • weak question, (Score:5, Insightful)

    by Openadvocate (573093) on Thursday February 13, 2003 @07:43PM (#5298292)
    The question does not contain enough information to form a proper answer.
    Don't start with finding the solution; figure out what it is you want and what you want it to do, and then find the right tool. We cannot tell you which is right with almost no information about how it will be used, for what, what the average user profile is, etc.
    HTTP and FTP can be equally insecure, but it shouldn't be much of a job to properly secure an FTP server.
  • Why /.? (Score:5, Funny)

    by Piquan (49943) on Thursday February 13, 2003 @07:48PM (#5298328)
    Let me get this straight. You went to search the web and got conflicting, likely ill-informed, and inconclusive reports. So you went to Slashdot?
  • by SWPadnos (191329) on Thursday February 13, 2003 @07:49PM (#5298336)
    As many people have said, it depends.

    FTP has a great advantage in that you can request multiple files at the same time: mget instead of get. Additionally, you can use wildcards in the names, so you can select categories / directories of files with very short commands. (mget *.mp3 *.m3u ...)

    Modern browsers allow you to transfer multiple files simultaneously, but they don't queue files for you - FTP will. This may be important if connections might get dropped - the FTP transfer will complete the first file, then move on to the next. In the event of an interruption, you will have some complete files and one partial (which you can likely resume). For multiple simultaneous transfers from an HTTP browser, you may have some smaller files finished, but it's likely that all larger files will be partials, and will need to be retransmitted in their entirety, since most browsers don't support resuming a previous download over HTTP.

    So, if you're going to have a web page with many individual links, and you think that most people will download one or two files, http will probably suffice. If you expect people to want multiple files, or that they will want to be able to select groups of files with wildcards (tortuous with pointy-clicky things), then you should have FTP.

    It's not that hard to set up both, and that's probably the best solution.
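The mget-with-wildcards behaviour described above is essentially client-side glob matching against a directory listing. A small sketch (the listing below is invented for illustration):

```python
# Sketch of what mget's wildcards buy you: glob patterns selecting
# several files from a directory listing in one short command.
from fnmatch import fnmatch

listing = ["show1.mp3", "show1.m3u", "readme.txt", "show2.mp3"]
patterns = ["*.mp3", "*.m3u"]           # as in: mget *.mp3 *.m3u
wanted = [f for f in listing if any(fnmatch(f, p) for p in patterns)]
print(wanted)
```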
  • HTTP, hands down (Score:5, Informative)

    by Percy_Blakeney (542178) on Thursday February 13, 2003 @08:18PM (#5298554) Homepage
    As I understand it, your requirements are:

    1. Download only
    2. 1-6 MB files

      I also assume the following:

    3. You don't need intricate access controls
    4. Non-technical to Somewhat-technical users

    I would say that you should go with HTTP for sure. Of course, you can provide both, but there are some key reasons for using HTTP.

    Easier Configuration Perhaps I'm just not that swift, but I've found that web servers (including Apache) are easier to configure. This is especially true if you have any previous web server experience. Of course, the FTP server is more complex due to its additional features that HTTP doesn't have, but assuming that (3) is true, then you won't need to mess with group access control rights and file uploads.

    Speed This whole "FTP is faster" stuff is not true. HTTP does not have a lot more overhead than FTP; it may even have less overhead in certain cases. Even when it does have more, it is on the order of 100-200 bytes, which is too small to care about. HTTP always uses binary transfers and just spits out the whole file on the same connection as the request. FTP needs to build a data connection for every single data transfer, which can slow things down and even occasionally introduce problems.

    Easier for Users Given assumption (4), your users will be much more familiar with HTTP URLs than FTP addresses. You could just use FTP URLs and let their web browsers download the files, but then you lose the benefit of resuming partial downloads.

    Simple Access Controls Though some people need to have complex user access rules, you may very well just need simple access controls. HTTP provides this (look at Apache's .htaccess file), and you can even integrate Apache's authentication routines into PAM, if you are really hard core.

    There are a few main areas where FTP currently holds sway:

    Partial Downloads Web browsers typically don't support partial downloads, but the fact of the matter is that the HTTP protocol does support it (see the Range header.) The next generation of web browsers may very well include this feature.

    User Controls Addressed above.

    File Uploads Again, HTTP does support this feature but most browsers don't support it well. Look to WebDAV in the future to provide better support.

    In summary, just use HTTP unless you need complex access rules, resumption of partial download, or file uploading. It will be easier both on you and your users.
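As a concrete illustration of the "Simple Access Controls" point above: a minimal Apache .htaccess along these lines would password-protect a download directory. This is a hedged sketch; the realm name and password-file path are made up.

```
AuthType Basic
AuthName "Private Downloads"
AuthUserFile /var/www/.htpasswd
Require valid-user
```

Users would be added to the password file with Apache's htpasswd utility.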

  • by ende (154873) on Thursday February 13, 2003 @08:30PM (#5298631)
    I would say go with http.. my reason is simple.. my college blocks ftp! There are two sides of the story, first off, the complete blocking of ftp started a week ago when they switched firewalls.. they couldn't get the ftp to work, and they decided it wasn't a high priority so they didn't do anything to fix it... After yelling at the IT department for half the day I got the priority up. But besides that.. Even before, when they allowed ftp, it was only active ftp.. I was unable to go to a website, click on a file that is served from ftp:// and download it! I'd have to use flashfxp or another client to d/l files.. I was able to figure that out quickly, but what about the other 99% of the school that wants to download say shareware from download.com or a driver from somewhere.. they are denied! FTP is a superior protocol for file transfer in my opinion, but administrators don't seem to care about it as much as they do http.
  • by argonaut (37085) on Thursday February 13, 2003 @08:54PM (#5298764) Homepage Journal
    Being in IT for a large Fortune 500 company that sells an operating system among other things (no, not Microsoft), I can share some of my experiences with you. So take it for what it is worth.

    Our FTP servers run both HTTP and FTP providing the same content in the same directory structure. There are five servers that transfer an average of 1-2 TB (terabyte) per month each, so they are fairly busy. On a busy month each server can go as high as 7 TB of data transferred. File sizes range from 1 KB to whole CD-ROM and DVD-ROM images. I think the single largest file is 3 GB.

    The logs show a trend of HTTP becoming more popular over the last several years, and it isn't stopping. Currently 70% of all downloads from the "FTP" servers are via HTTP, while the remaining 30% are via FTP. Six years ago (I lost the logs from before this time; they are on a backup tape but I am way too lazy to get that data), it was completely reversed: 75% of downloads were via FTP and 25% via HTTP. 90% of all transfers are done with a web browser as opposed to an FTP client or wget or something.

    One thing we learned was that many system administrators will download via FTP from the command line directly from the FTP server, especially during a crisis they are trying to resolve. They do this from the system itself and not a workstation. The reasons for this are a bit of a mystery. Feedback has shown that we should never get rid of this or we might be assassinated by our customers. We thought about it once and put out feelers.

    I would say if you don't need to deal with incoming files and your file sizes are not too large, then stick with HTTP. Anything over about 10 MB should go to the FTP server. An FTP server can be more complicated. It seems like the vulnerabilities in FTP daemons have died down in the past year or so. Also, fronting an FTP server with a Layer 4 switch was a lot trickier because of all the ports involved. If you want people to mirror you, then go with FTP, or rsync for private mirroring. In reading the feedback, most power users seem to prefer FTP, perhaps because that is what they are used to. Also, depending on the amount of traffic, you might need to consider gigabit ethernet.

    The core dumps being uploaded are getting to be huge. Some of those systems have a lot of memory!
  • HTTP and FTP FUD (Score:5, Insightful)

    by MobyDisk (75490) on Thursday February 13, 2003 @09:27PM (#5298958) Homepage
    I see too many FUD replies here:

    1) HTTP doesn't support resumed downloading.
    - That's ridiculous. HTTP has supported it since HTTP/1.1, years ago. In fact, it can even do things like request bytes 70,000 - 80,000, then 90,763 - 96,450, etc.
    2) HTTP doesn't support security/authentication
    - Ridiculous. HTTP has an open-ended model for authentication and security, many of which are secure and standardized. If you REALLY need security, use HTTPS.
    3) HTTP doesn't support uploading
    - HTTP/1.1 has had this for a while. Netscape 4.7, Mozilla 1.1, and IE 4+ support this. I must admit though, it sucks. :-)

    Several people have pointed out the real differences:
    1) FTP doesn't like firewalls
    - Passive FTP fixes this, but it has quirks and limitations.
    2) FTP supports directory listing, renaming, uploading, changing of permissions, etc.
    - This is what FTP is for
    - This can be done in HTTP, but requires serious work
    - If the scope creeps, shell access would be better.
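The Range mechanism behind point 1 can be sketched as follows. The request is just built by hand and nothing is actually sent, so the host and path are placeholders; a real server would answer a request like this with 206 Partial Content.

```python
# Hedged sketch of HTTP/1.1 resumable downloads: the Range header asks
# the server for a specific byte span of the file. Names are made up.
def range_request(path, start, end):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: example.org\r\n"
            f"Range: bytes={start}-{end}\r\n\r\n")

req = range_request("/file.zip", 70000, 80000)
print(req.splitlines()[2])  # the Range header line
```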
  • by almaw (444279) on Thursday February 13, 2003 @11:27PM (#5299397) Homepage
    You should use FTP if you answer yes to any of the following questions:
    1. Do you have bandwidth issues? If you are serving files to many people, FTP servers let you cap the number of concurrent users, which can be useful. I know you can do this with HTTP, but it's difficult to segment the large-file download traffic from the normal site traffic. A separate service also allows you to use all the Quality of Service stuff in the 2.4 kernel nicely.
    2. Do you have a large array of files that the user might want to download, such that using an FTP client to ctrl+select multiple files is the right answer compared to having your users click on twenty links and have to cope with twenty dialog boxes?
    3. Do your users need to be able to upload files to you? This can be done with HTTP, but you'll need some PHP processing or similar on the server, it doesn't support resuming, and it won't work through many company firewalls, so it isn't a good option. HTTP uploading is particularly hopeless for large files, as it provides no user feedback.
    However, you should NOT use FTP if you answer no to either of these:
    1. Are you running some flavour of unix? There just aren't any robust Windows FTP servers. Yes, I'm prepared for the flame war about this. :)
    2. Can you be bothered to keep your FTPd patched? ProFTPd and WU-FTPd both appear frequently on bugtraq. You need to stay on top of the patches, or you will be 0wn3d.
    Simple, see? :)
  • by osgeek (239988) on Thursday February 13, 2003 @11:47PM (#5299497) Homepage Journal
    After hosting an HTTP file transfer area for some time for my company, we decided to move to an FTP setup that was a bit more sophisticated.

    So far, it's been a failure for two reasons:

    1. IE blows as an FTP client, and users aren't comfortable dropping into the (somewhat crappy) DOS FTP client.
    2. Firewall setups at the fortune 500 companies that we deal with normally seem to keep FTP access off-site restricted.
  • by Anonymous Coward on Friday February 14, 2003 @12:04AM (#5299578)
    FTP implementations frequently use a fixed, small window size. HTTP on the other hand will honor the system limit, almost always larger even without tuning.

    Dramatically simplified, it means that the connection can send a lot more packets without hearing back from the far end, enabling the connection to reach higher speeds (imagine a phone call where you had to say 'okay' after every word the other person said. Now imagine only having to say it after every sentence. Much faster.)

    The tiny window size of (most crappy legacy implementations of) FTP starts to affect download speed at just 25ms latency, and has a huge effect over 50ms.

    A properly tuned system with HTTP can make a single high-latency transfer hundreds or even thousands of times faster than FTP.

    Relevant links:
    http://www.psc.edu/networking/perf_tune.html
    http://www.nlanr.net/NLANRPackets/v1.3/windows_tcptune.html
    http://dast.nlanr.net/Projects/Autobuf/faq.html
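A rough sketch of the window-size arithmetic behind the parent's point: one connection's throughput is bounded by window size divided by round-trip time, so a small fixed window caps speed no matter how fat the pipe is. The numbers below are illustrative, not measurements.

```python
# Hedged back-of-envelope: throughput <= window / RTT on one connection.
import socket

# An application *can* ask the OS for a bigger receive buffer
# (the kernel may adjust the value it actually grants):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

window = 16 * 1024   # bytes: a small, legacy-FTP-style fixed window
rtt = 0.050          # seconds: 50 ms of latency
print(window / rtt)  # throughput ceiling in bytes/sec, however fast the link
```

With a 16 KB window and 50 ms round trips, the ceiling is about 320 KB/s regardless of link speed; a 256 KB window raises it sixteen-fold.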

  • by pieterh (196118) on Friday February 14, 2003 @05:42AM (#5300541) Homepage
    The main strengths of FTP over HTTP for file transfers are:
    • Easy command line scripting of FTP sessions via 'ftp' client available on most systems. In contrast, scripting an HTTP session requires some simple but non-trivial programming in Perl, Ruby, etc.
    • Fastest file transfers, since binary data is not encoded in any way.
    • Simplicity of the presented information, which users see as a file system.
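The command-line-scripting bullet can be illustrated by generating the batch of commands you would feed to the stock ftp client, e.g. via `ftp -n host < cmds.txt`. The host and file names here are placeholders, and this snippet only builds the text; it runs nothing.

```python
# Hedged sketch: the command file that would drive ftp(1) non-interactively.
ftp_script = "\n".join([
    "user anonymous guest@example.org",
    "binary",
    "cd /pub",
    "get file.zip",
    "quit",
])
print(ftp_script)
```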


    The main strengths of HTTP over FTP for file transfers are:
    • More-or-less guaranteed access through all corporate firewalls.
    • Virtual hosts, something that FTP does not support in any standard fashion.
    • Easily extended into more secure realms using various kinds of authentication, SSL, certificates, etc.
    • Support for MIME types.
    • (Obvious) Ability to encapsulate your files in an interesting context, e.g. web site, wiki, etc.


    The other differences one sees are due to server design issues. I.e. most FTP servers are large and spawn a process per connection, which makes FTP sessions much slower than HTTP sessions. But if you want to use FTP, there are very fast FTP servers out there.


    Overall, in today's world, it does not make sense to use FTP unless you have a requirement from your users. For public access to files, use HTTP or something more modern, such as rsync, or a P2P network.


    As usual, you should answer such questions by thinking about your target users and asking yourself what they are likely to be most comfortable using. Chances are it's their main tool, the web browser.
