Using Single Apache SSL/Non-SSL in Production?

tck1000 asks: "I currently maintain some legacy webservers, running Apache 1.3.x on Linux on x86 hardware. Two separate daemons are used: one to serve the SSL vhosts, and one to serve the non-SSL vhosts. Each daemon is compiled with PHP, mod_perl, and JServ, and also works with a Tomcat servlet engine. In planning an upgrade path, I've thought about using a single daemon to serve both the SSL and non-SSL vhosts. Is this a good idea?"

"These webservers serve about 4 million hits a day across all the vhosts. I'm worried about memory usage if every httpd process has to load mod_ssl, as well as everything else they load.

I've been searching for comparisons between running 2 daemons (and the associated effort in maintaining/upgrading/patching), vs. running a single daemon (with any added overhead it entails).

I've found a lot of examples of how to do it, but not much on the why's.

Comments, Opinions, Ideas, Links?"
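For reference, the single-daemon setup the poster is considering can be sketched roughly as follows for Apache 1.3 with mod_ssl; all names, addresses, and paths here are illustrative, not taken from the poster's setup.

```apache
# One daemon, two listeners: plain HTTP on 80, SSL on 443.
Listen 80
Listen 443

# Plain-HTTP vhost
<VirtualHost 192.0.2.10:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>

# SSL vhost served by the same daemon
<VirtualHost 192.0.2.10:443>
    ServerName www.example.com
    DocumentRoot /var/www/example
    SSLEngine on
    SSLCertificateFile    /etc/apache/ssl/example.crt
    SSLCertificateKeyFile /etc/apache/ssl/example.key
</VirtualHost>
```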

This discussion has been archived. No new comments can be posted.

  • by krow ( 129804 ) * <brian@@@tangent...org> on Saturday January 31, 2004 @05:51PM (#8146196) Homepage Journal
    This is what Slashdot does (and sourceforge too for that matter), they use one Apache host to serve content and use pound [freshmeat.net] to do the SSL.
    Good luck!
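    A pound front end of the kind krow describes can be sketched roughly as below (pound.cfg syntax of that era; the address, port, and certificate path are illustrative):

    ```text
    # Terminate SSL in pound, hand decrypted traffic to Apache as plain HTTP.
    ListenHTTPS
        Address 192.0.2.10
        Port    443
        Cert    "/etc/pound/example.pem"   # combined key + certificate file
    End

    Service
        BackEnd
            Address 127.0.0.1              # the single non-SSL Apache instance
            Port    80
        End
    End
    ```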
  • No, it doesn't (Score:3, Insightful)

    by mirabilos ( 219607 ) on Saturday January 31, 2004 @05:59PM (#8146233) Homepage
    Honestly, this is the first time I've heard that someone _does_ use two different servers for http and https. If you really want to increase security, use the new chroot facilities.
    • Re:No, it doesn't (Score:3, Insightful)

      by sylencer ( 634653 )
      If you really want to increase security, use the new chroot facilities.

      I don't understand what you are saying here. People use https to prevent others from sniffing their traffic, e.g. for credit card numbers or other data that should be kept secret, like passwords. Chroot environments are used for a completely different purpose: to keep the impact on your whole system as small as possible when (not if!) a security flaw in the daemon is discovered and an attacker can thus execute arbitrary code on your machine.

      • I was more thinking about securing the daemon, not the content exchanged.

        Another one: use propolice (the stack protector) if you can.
        http://www.research.ibm.com/trl/projects/security/ssp/
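        In modern GCC the ProPolice work is built in as -fstack-protector; at the time of this thread it still required IBM's SSP patch. A minimal sketch of compiling with it (the file name is made up):

        ```shell
        # Write a trivial program with a stack buffer, compile it with the
        # ProPolice-style stack protector, and run it.
        cat > demo.c <<'EOF'
        #include <stdio.h>
        int main(void) {
            char buf[16];                     /* guarded by a stack canary */
            snprintf(buf, sizeof buf, "ok");
            puts(buf);
            return 0;
        }
        EOF
        gcc -fstack-protector -o demo demo.c
        ./demo
        ```

        The canary costs nothing visible here; it only matters when a stack buffer is overflowed, in which case the instrumented program aborts instead of running off a smashed return address.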
        • SSP is good, but not commonly available in distributions.

          I've made packages for Debian stable/unstable which are available from my security pages [debian.org].

          More feedback is always appreciated - as it stands I use them, but I've no idea about others!

          • Right; on the other hand, it still has issues, because of which some programmes (mozilla, wineX) still are compiled with -fno-stack-protector.

            In OpenBSD since release 3.3, propolice is enabled, as in MirBSD (http://mirbsd.de/). The latest MirBSD development snapshot, available in 1-2 days under https://mirbsd.bsdadvocacy.org:8890/current/ (while writing this, I'm building it), has gcc 3 with propolice, W^X (ie, memory pages are either writable or executable), NXSTACK and NXHEAP protection mechanisms, and the defau
  • Here's the bottom line: SSL is used to secure things that you don't want people to see, CC#'s, etc.

    If your SSL machine is the same as your www host, then if the www host (a more likely target for random attacks) is compromised, the SSL is worthless, since they can replace your cert, access protected data, etc., under the same permissions as the www daemon.

    SO, if your SSL daemon is handling data that sensibly should not be on the most obvious target for first attack, then no, it's not a good idea.

    If on the ot
    • You have to do quite a bit to isolate them. You must at least protect against a simple local-user DoS like a fork bomb (remember that it is a C one-liner), and you must isolate the Apache users. If you access any of the same content via SSL as you do with straight HTTP, you have to put them in the same group, and be very paranoid about other things.
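      The fork-bomb point can be addressed with per-user process limits; a sketch assuming PAM's limits module, with made-up user names for the two daemons:

      ```text
      # /etc/security/limits.conf: cap each web user's process count so a
      # runaway fork loop cannot exhaust the process table.
      wwwplain    hard    nproc    256
      wwwssl      hard    nproc    256
      ```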

      SSL itself isn't secure enough for me -- I have to trust VeriSign. So there are better ways of storing really sensitive information.

      At this point, we are mainly
      • SSL itself isn't secure enough for me -- I have to trust VeriSign. So there are better ways of storing really sensitive information.

        This is wrong on two levels:

        1. Verisign has nothing to do with the security of data transmitted using SSL, and the only thing you could ever trust them for is vetting the identity of the people whose certificate requests they sign. They never control your private keys, and you can operate an SSL site that has nothing whatsoever to do with verisign. Just set up your own CA [openca.org].
        2. SS
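        Running your own CA, as the parent suggests, takes only a few openssl commands; a sketch with illustrative names and subjects:

        ```shell
        # Create a CA key and self-signed CA certificate.
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -keyout ca.key -out ca.crt -subj "/CN=Example CA"

        # Create a server key and certificate request, then sign it with the CA.
        openssl req -newkey rsa:2048 -nodes \
            -keyout server.key -out server.csr -subj "/CN=www.example.com"
        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
            -CAcreateserial -days 365 -out server.crt

        # The chain verifies against our own CA; Verisign is never involved.
        openssl verify -CAfile ca.crt server.crt
        ```

        Clients then need the CA certificate installed before they will trust the site, which is exactly the initial-trust problem this thread raises.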
        • #1: Verisign can fake that, though. They can create their own private key and replace yours with it temporarily. Either way, I have to trust them quite a lot.

          As for setting up my own CA, there's the problem of people having to trust me initially. I actually prefer that to verisign (it's free), but there could be a better solution in general -- like an encrypted network filesystem.

          #2: You're right, of course -- my bad, it is transit. And that is what this person was suggesting: "SSL is used to secur
          • #1: Verisign can fake that, though. They can create their own private key and replace yours with it temporarily. Either way, I have to trust them quite a lot.

            This is only a trust issue for SSL clients, not for an ssl server.


            at what point do you draw the line between 'ssl' and 'not ssl'?

            SSL is a specific protocol, not a library. It has been standardized by the IETF in RFC 2246 [ietf.org]. If the protocol described in that document is implemented, it may properly be described as SSL or TLS. SSH, of course, doe
            • True, they can't break my server just because I use SSL. But consider this:

              1.) I put critical documents on an SSL site.
              2.) VeriSign swaps out my private key for their own.
              3.) User requests secure page from my site, giving username and password.
              4.) VeriSign intercepts the request, proxies it to my server (a man-in-the-middle attack), and keeps a copy of the file downloaded.
              5.) VeriSign now has a full copy of whatever file the user was downloading.

              So if I care at all about the clients, or if I am in f
    • If your SSL machine is the same as your www host, then if the www host (a more likely target for random attacks) is compromised, the SSL is worthless, since they can replace your cert, access protected data etc, under the same permissions of the www daemon.

      Replacing the certificate? Wouldn't just about every browser throw a fit at that? The SSL isn't being used to protect any kind of persistent data on the filesystem.
  • Link (Score:5, Informative)

    by jpkunst ( 612360 ) on Saturday January 31, 2004 @07:01PM (#8146571)

    Comments, Opinions, Ideas, Links?

    Recipe 7.4: Serving a Portion of Your Site via SSL [onlamp.com] from O'Reilly's Apache Cookbook?

    JP

  • First of all, I am a bit of a web admin. I write software all day, run my own website on my machine, and many of my past jobs were writing asp (shudder) and cgi of various sorts.

    Basically I've always found that people get into trouble when they run webservers with excessive complexity. On a modern webserver, serving just about anything, the actual performance of the webserver software doesn't mean much, all the time is taken with DB interaction and custom code (cgi, jsp, etc..). I would therefore suggest t
    • by Homology ( 639438 ) on Saturday January 31, 2004 @11:32PM (#8148029)
      With four million hits a day it makes little sense to use Tomcat even for static content. Apache serves static pages faster than Tomcat and with fewer resources.
    • On the one hand you're right: When generating a dynamic page your code and database accesses are going to take the vast majority of the time.

      At the same time, how many images does an average page of yours use? How many stylesheets? How many external JavaScript files? Your code may only be running on 5% of the requests to your server.

      Static content simply needs to be blasted out to the user as quickly as possible. There's not much sense in using a 12MB Apache process loaded down with mod_perl, PHP, and god
      • Good point. It probably makes a lot of sense to have a separate apache server to blast out images and style sheets.

        I was, however, under the impression that the vast majority of the cost of a website is paying for bandwidth, so it doesn't make a whole lot of difference (cost wise) if the webserver runs half as fast. In fact, if you can save a $80,000/year admin by consolidating all your servers into ten identical boxes each serving everything, then perhaps it's cost effective, even if you have to pony up a
        • Well, hardware is cheap compared to people, that's for damn sure. At the same time, splitting your servers into static and dynamic servers is almost trivial.

          Half the servers run one httpd.conf, half the servers run another. Or half the servers run Apache, half the servers run your favorite app server. J2EE servers are enough of a pain in the ass to deal with that I'd rather get rid of half of them for simple Apache/thttpd/publicfile/mathopd/etc static content servers anyway, even if it is another piece of
    • I've tried jboss and tomcat and don't have any complaints. They work very well, and (at least from what I've heard) are very secure.

      YMMV, but we've found exactly the opposite. We've had 3 separate security problems through using tomcat, two of which caused "session leakage", i.e., displaying one customer's session information to another. As a financial services site, we just can't afford that sort of exposure. Yes, it only shows up under high load, but the 4 million or so hits a day that we get is enough t

  • If you don't gain any additional benefit from using two Apache instances, and you have the hang of the configuration, then what you currently have is just perfect. The difference is not significant on a server with lots of memory (512MB+), as is typically used in web hosting. As usual, more specific information about your servers would have been helpful!
  • First, the biggest problem with having so many virtual hosts is file handle utilization. This is mostly caused by having separate logs for each host (a custom access log and an error log). After 300 hosts you'll start adding ulimit lines to your Apache startup scripts. And that's ugly. You can prevent this by defining no log entries in your vhost configs and using a global config. Write a script to parse out the logs for each domain at the end of the day, or whatever.

    Have you considered an SSL
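    The global-log idea can be sketched with Apache's %v format code, which records the vhost name in each entry so one file serves every host (the log path and nickname are illustrative):

    ```apache
    # One shared access log; %v prefixes each line with the vhost name.
    LogFormat "%v %h %l %u %t \"%r\" %>s %b" vcombined
    CustomLog /var/log/apache/access_log vcombined
    ```

    A nightly script can then split the file on the first field into per-domain logs; the Apache distribution ships a split-logfile support script for exactly this.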
    • Re:Some advice... (Score:3, Interesting)

      Regarding your latter idea, wouldn't that screw things up when it comes to running CGI scripts, and the like? After all, every single thing would be running as the apache process, rather than in a chrooted environment, as many secure systems do. I may, of course, be missing the point.
        Actually, it wouldn't. The URL translator only maps URLs to system filenames. Basically, it just tells Apache where the CGI is and does not affect how Apache will execute the CGI. The script will still execute in a separate process.

  • wow (Score:3, Interesting)

    by man_ls ( 248470 ) on Monday February 02, 2004 @01:16AM (#8156122)
    You, sir, are in the exact same situation I was in just two days ago.

    My conclusion:

    Moving to one process to serve all requests, SSL and non-SSL, is more of a hassle than it's worth.

    SSL and Non-SSL on one process requires IP-based vhosting. If you're using vhosts you probably don't have multiple IPs per server. Thus you must use name-based vhosts, and SSL gets confused.

    I gave up and installed two processes and am very happy with that solution.

    Apache+OpenSSL 2.0.48/Win32 on Windows Server 2003.
    • SSL and non-SSL on one process doesn't require IP-based vhosting - you can quite happily use port-based vhosting. *:80 for http, and *:443 for SSL...
    • This is not true. Each individual SSL hostname must have its own IP address. However, it is possible to run HTTP and HTTPS on the same IP address. As a matter of fact, I have about 1400 HTTP vhosts and 7 HTTPS vhosts running in one Apache process on 7 IP addresses.

      Yes, it is possible. Yes, I highly recommend it.

      Relevant parts:

      ### Section 2: 'Main' server configuration
      Port 80
      ## SSL Support
      <IfDefine SSL>
      Listen 80
      Listen 443
      </IfDefine>

      ### Section 3: Virtual Hosts
      NameVirtualHost 192.168
    • You can do multiple-port and/or multiple IP based virtual hosts just fine with one apache instance and virtual servers. We're doing about 70 at my site.

      The only really ugly case is trying to do named virtual host SSL, which would work fine if you could get the browsers to agree on how they verify certificates:

      • For IE, put multiple CN entries in your certificate.
      • For older Netscapes, make the *first* CN be a wildcard matching the respective hosts
      • For newer Netscape/Mozilla builds, who knows?

      But if y
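      The multiple-CN workarounds above all boil down to getting several hostnames into one certificate; the cleaner way to express that in an OpenSSL request config is a subjectAltName extension (the section name and hostnames are illustrative):

      ```text
      # openssl.cnf fragment: one certificate covering several vhost names.
      [ v3_req ]
      subjectAltName = DNS:www.example.com, DNS:secure.example.com, DNS:shop.example.com
      ```

      How consistently browsers of this era honour subjectAltName over the CN is, as the parent notes, another question.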

  • One reason to keep these running separately is that it becomes easier to dedicate hardware specifically to the SSL portions of your hosting infrastructure, should you choose to go that route.

    Another consideration is whether Apache limits your use of a cert:key pair to each instance. We have always used (Netscape Enterprise|iPlanet Enterprise|Sun ONE). Earlier versions would limit you to one instance per listening port OR one instance per SSL cert. Eventually, we had up to 30 instances of the httpd running becau
