Isolated Apache Virtual Hosts?
An anonymous reader writes: "Has anyone ever had to set up virtual hosting on a server that allows CGI execution, etc.? This seems simple, until you want to keep users out of each other's data. The Apache config seems straightforward enough, but I still haven't figured out the best way to set up the user groups on the box to keep them trapped in their own areas and out of each other's business. I thought I could put each user in his own group to block prying eyes on the system side, then add the web user to all the other users' groups so it can reach their files, using suexec to prevent one user from using the web server to look at another user's files. This works well, but there seems to be a limit on the number of secondary groups a user can belong to, so the web user hits a wall at roughly 16 "customers" or user accounts. Any suggestions on how to get beyond the limit? Or is there a better approach than the group/suexec scheme? Any pointers to online resources dealing with this type of config would be great..."
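For readers following along, here is a minimal sketch of the scheme the question describes, with hypothetical names ("alice" for a customer, "wwwrun" for the web user). The account commands need root, so they are shown commented out; only the directory modes are demonstrated:

```shell
# One private group per customer; the web user joins each one.
# This is the part that hits the ~16 secondary-group ceiling:
# groupadd alice
# useradd -g alice -d /home/alice alice
# usermod -aG alice wwwrun

# Webspace modes: owner rwx, group (alice + wwwrun) r-x, others nothing.
mkdir -p /tmp/vhost-demo/alice/public_html
chmod 750 /tmp/vhost-demo/alice
chmod 750 /tmp/vhost-demo/alice/public_html
stat -c '%a' /tmp/vhost-demo/alice      # prints 750
```

suexec is then what stops one customer's CGI from riding the shared web uid into a neighbour's tree.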
Pretty easy, actually. (Score:1)
Super easy, I've been doing it for years. ('Course now somebody's going to point out some huge security flaw in this arrangement and I'll be kicking myself from here to breakfast...)
Re:Pretty easy, actually. (Score:1, Informative)
Here's a link [apache.org] to Apache's own info on security (including suexec).
Re:Pretty easy, actually. (Score:5, Insightful)
This is no good if you have php/perl scripts. The scripts run as the Apache user and you don't want people exploiting that to intrude/destroy other users on the system. You want each virtual host to run as a different user, not just be restricted to a different directory.
Suexec promises a solution, but it really does seem like using a mallet to remove a cork from a wine bottle. And as the author of this story discovered, suexec isn't a perfect solution.
Unfortunately the only way I've found to solve this particular problem was to instantiate a new Apache process per user. This is understandably a resource hog, not to mention a configuration mess.
The S/390 "VM" concept is almost perfect. Each VM can run a different Apache process as a different user, so that way you have perfect sandboxing. Unfortunately an S/390 is a little pricey (even the base model).
The real problem is that Apache forks a herd of processes but each process can serve pages for any virtual host. If the Apache process changes its uid to a normal user, then it won't be able to change back for the next virtual host. Killing and restarting processes would work but would also destroy throughput. That is, unless you used threads, but threads won't be available until 2.0 is stable.
The problem is a common question on the Apache mailing list (and I've been guilty of asking about it too, before checking the FAQ). The last time I checked the answer was "we know about this, it's not trivial to fix, don't ask about it before at least Apache 2.0".
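The "new Apache process per user" workaround mentioned above amounts to one complete config file per customer, each instance bound to its own address and running its own preforked pool. A sketch, with hypothetical names and paths:

```apache
# /usr/local/etc/apache/alice.conf -- a complete, separate Apache
# instance for one customer, started with:
#   httpd -f /usr/local/etc/apache/alice.conf
Listen 192.168.0.10:80
User   alice
Group  alice
ServerName   www.alice.example
DocumentRoot /home/alice/public_html
PidFile      /var/run/httpd-alice.pid
ErrorLog     /home/alice/logs/error_log
```

With N customers you get N pools of MinSpareServers processes each, which is exactly the resource hog and configuration mess being complained about.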
Re:Pretty easy, actually. (Score:2)
Or does Apache fork a process for each user id, with each process then running multiple threads?
Re:Pretty easy, actually. (Score:2)
I don't really understand it myself, but I can make an educated guess. Apache 1.x starts several processes in advance (MinSpareServers) and puts them into a pool, ready to serve any request for any virtual host. These processes are heavyweight, so the best throughput comes from preforking them. Each process then serves a number of requests (MaxRequestsPerChild) for any virtual host before exiting, but all of those requests are served under the same UID. The pro is that you get better performance. The con is that you have to use the same UID for every virtual host.
Yes, that's my understanding of what's changed for 2.0. With Apache 2.0 you will be able to assign a dedicated child process to a virtual host by using the AssignUserId directive. With older Apache this would have been either a waste of resources (large and expensive pool of processes per virtual host) or a performance disaster (not enough preforked processes in each pool). With Apache 2.0 you can afford to create a process per virtual host because the pool consists of threads, not processes (inexpensive single process per virtual host).
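A sketch of what that perchild MPM config was supposed to look like, per the 2.0 documentation (host and user names here are hypothetical):

```apache
# Apache 2.0 perchild MPM (experimental): a dedicated child process
# per vhost, each running as its own uid, with a thread pool inside.
NumServers 5
<VirtualHost 192.168.0.10>
    ServerName   www.alice.example
    DocumentRoot /home/alice/public_html
    AssignUserId alice alice    # this vhost's threads run as alice:alice
</VirtualHost>
```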
The sad bit from the documentation is "This MPM does not currently work on most platforms. Work is ongoing to make it functional". Otherwise the AssignUserId directive is what the author of the AskSlashdot wanted. Until 2.0 is finished and works, the only solution I've found is what I described in my previous post: create a standalone Apache pool per virtual host and just live with the waste of resources.
As I said, I'm not an Apache developer, this is just an educated guess based on my limited understanding of what's going on. A real Apache developer would probably slap me silly for providing a totally bogus explanation.
Re:Pretty easy, actually. (Score:1)
You can actually increase this to 32, but it breaks NFS. So if you don't use NFS (including NAS storage), you can increase it to 32 groups per user (wwwrun or httpd) before you have to buy another machine using your solution.
I'm not offering a solution, I'm just as curious as the original author.
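For anyone wondering how much headroom a given box actually has, the ceiling is queryable (`getconf` is POSIX; the group count shown is for whichever user runs it):

```shell
# How many supplementary groups can one user hold on this system?
getconf NGROUPS_MAX      # kernel/libc limit (historically 16 or 32)
id -Gn | wc -w           # groups the current user already belongs to
```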
Put 'em in jail (Score:1)
Might be worth looking into whether Apache can handle chroot() with virtual hosting.
Seems like you have it backwards... (Score:1)
Zope (Score:1)
Re:Zope (Score:1)
Or learn Python and write a Product for Zope, adding more value to Zope ;) ...
When it comes down to it, this guy is having a problem with the UNIX permissions structure, so you'd be writing a new permissions module for Apache. Zope already has its own permissions structure. In Zope, I would create a folder for each user, give them "Owner" privileges on that folder, and say that, by default, only Owners (and Managers) can View the contents of the folder. This additionally allows users to specify whether they'd like Anonymous users to view specific items. Done in 5 minutes. Time for coffee!
VM servers? (Score:1, Interesting)
While in theory you could give each user their own webserver in a single unix/linux machine, as far as I'm concerned once a user has an account on a typical Unix machine they can eventually get root if they want, so you might as well give them their own machine (virtual or otherwise). You can secure each machine reasonably for them at the start but if they want to do silly things hey it's their machine.
And I figure if the VM implementations are good, you should be able to migrate a user to his/her own physical machine reasonably easily for a higher priced "Gold" service. And more importantly back down again.
And a platinum service could be one VM on many physical servers!
Use groups to exclude (Score:2, Insightful)
Re:Use groups to exclude (Score:2, Informative)
This doesn't really work well at all if users have the ability to run CGI scripts (perl/php/etc). CGIs typically run as the uid/gid of the web server process (typically apache or nobody, death to any man running apache as root). Due to this, Joe Cracker could simply use his 31337 perl coding skillz to read the contents of a target file in any other user's directory.
Now, you might say this won't work for files chmod'ed in such a manner that the web server process can't read them. Okay, granted that's true. But what happens when Joe Customer wants to set up a file containing his database login information, to be accessed by a perl script delivering content to his visitors? The file has to be readable by the web server process...
Really, using a CGI wrapper (such as scgi-wrap) or suexec, both of which allow users to execute CGI scripts as their own userid, is the best current solution aside from using actual virtual private servers (say on *BSD, where jails are tight).
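The suexec route is configured per virtual host. A sketch with hypothetical names (Apache 1.3 reuses the User/Group directives inside a vhost for this; Apache 2.0 renames the pair to SuexecUserGroup):

```apache
# CGIs under this vhost execute as alice:alice instead of the
# server's own uid (requires Apache built with suexec support).
<VirtualHost 192.168.0.10>
    ServerName   www.alice.example
    DocumentRoot /home/alice/public_html
    User  alice
    Group alice
</VirtualHost>
```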
What about php? (Score:3, Funny)
vserver (Score:2, Informative)
A friend of mine told me that the vserver software they use (currently under FreeBSD) is open, but I couldn't find any mention of that anywhere. Supposedly there is a similar vserver project going on under Red Hat.
Or you might want to ask the maintainer of PVHost [sourceforge.net] if he will implement what you need. The project is defined as:
"PVHost is an ISP/poweruser tool that lets admins easily create new virtual web servers using Apache, PHP, mod_auth_mysql, and custom ftpd. It supports PHP, FTP and FrontPage rights control, etc. Custom ftpd allows creation of ftp accounts without the need"
Re:vserver (Score:1)
One thing to watch out for is that the DSOs for the Verio VPS Apache servers don't seem to be kept up to date, although their Apache itself is kept current, as are their other main packages (majordomo, MySQL, PostgreSQL, etc).
Their copy of mod_auth_pgsql dates back to March 2000 and has a SQL injection vulnerability documented in CERT. When I brought this to the attention of their support, they not only couldn't keep PgSQL and MySQL straight, they didn't seem to really care about it. I also tried to be considerate and use the email support, but it can take 2-3 days to get a reply, whereas on the phone you can get hold of someone within 10-15 minutes (even if they're really almost no help at all and don't understand more than basic system maintenance).
The GOOD news is that they do let you build your own software, so I was able to build my own PHP (I needed mnogosearch support, which they didn't provide), and I built my own mod_auth_pgsql as well. Building your own packages in their environment can be "challenging". It took me 3 days to get OpenSSL to compile into PHP (needed for PKI crypto), and none of your directories are included in their library path. Also, they use an older, oddly restricted version of Apache-SSL, not mod_ssl, so they couldn't do some of the things I needed to do, like smart card authentication.
I couldn't find the version of their Apache-SSL, but some of the newer Apache-SSL directives weren't recognized, so they may be running an older release there as well, which may also carry a buffer overflow vulnerability.
- Dave
Re:vserver - FreeBSD Jail's (Score:1)
Are you in jail?
-If you have unrestricted `ps` capabilities, there will be no 'init' process.
-`df` will show you some proc filesystems (4k apiece) mounted under the name of your server or wherever your server was booted from.
Why apache, try Roxen (Score:2)
Re:Why apache, try Roxen (Score:2)
Actually, you don't need to run as root, you just have to start as root (which you have to do anyway to bind to ports <1024).
It is difficult to get PHP running on Roxen, but for everything else it works great.
For PHP, I recommend Caudium... actually, I'd recommend Caudium over Roxen anyway.
hp secure OS software for Linux (Score:1)
Mandatory Access Control in HP Secure OS Software for Linux can solve the problem. Part of it is proprietary, but then again, so is the VM in IBM zSeries (S/390). The kernel modifications and kernel modules are licensed under the GPL, so you are not stuck with a binary kernel module that breaks if you breathe too hard near it.
</BLATANT PLUG>
Big Iron (Score:4, Interesting)
Of course, there is a downside - $500,000 for the Iron, and some outrageous license fee for using VM.
As an aside, I've heard the computer science dept. of one university was going to do this and give each student their own Linux box to use, as an alternative to shell accounts.
You can see some Linux on VM/390 screenshots here [eagle7.org].
Re:Big Iron (Score:1)
More here [freebsd.org].
Bye, Jonas
Re:Big Iron (Score:3, Insightful)
Incidentally, the hardware console for a 390 is a Thinkpad. That's right - a whole Thinkpad just for the console. And often there are multiple Thinkpads for redundancy.
The other big difference is bandwidth - the bandwidth in a mainframe is incredible.
If someone offered to colocate my server in an x86 farm or under VM on a 390, I would choose the latter any day. Instant setup, and the most reliable hardware in the world. If you need more data space, processing time, etc., you don't even need to bring the machine down: cut them a check, they tweak some settings in VM, and *voila*, you're set.
Once you get to know them, Mainframes are really cool.
Use port mangling with IPTables (Score:1)
Say you have clients XX, YY and ZZ. XX runs on port 81, YY:82, ZZ:83. You would then need to code up an IPTables module that would peek at incoming HTTP requests and alter their destination port on-the-fly (this tool might already exist, try Google).
no port mangling needed (Score:1)
Bind multiple IPs to the NIC, bind each instance of Apache to an IP, and start an instance for each virtual host-- which isn't so virtual anymore. It's real-- it's just shared hardware.
FreeVSD (Score:1)
I haven't seen anybody mention these [freevsd.org] guys (and they're GPL'd to boot!) so I thought I'd throw in my highly devalued 2 cents.
They seem to have a solution to this problem similar to what Ensim [ensim.com] does with their Webppliance virtual server suite: each virtual server gets its own copy of the standard filesystem underneath its /home directory via links, with chroot keeping everyone in their own little area. It seems to work pretty well.
Note: I use an Ensim Webppliance but other than that I really don't know what I am talking about.
chroot (Score:1)
Obscurity (Score:2)
Step two is to deal with PHP, Perl, CGIs and whatnot, which all run as whatever user the web server is running as. For PHP, for example, you can set the open_basedir variable to force it not to descend any farther than your base document root directory (i.e. the vhost's DocumentRoot).
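For mod_php that restriction goes in the vhost (paths here are hypothetical; php_admin_value matters because users cannot override it from .htaccess):

```apache
# Confine this vhost's PHP scripts to their own tree.
<VirtualHost 192.168.0.10>
    ServerName   www.domain.com
    DocumentRoot /var/www/www.domain.com
    php_admin_value open_basedir /var/www/www.domain.com
</VirtualHost>
```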
Sadly, the best solution I've found is to cheat and just try and obscure things. Make it not obvious what a given directory might be and then it is much harder for naughty users to look where they shouldn't be. An MD5 sum of the domain name chopped in some fashion, or hell, even a totally random string can be used. You'd then have a website stored something like this:
/var/www/{hash}/www.domain.com
/var/www/{different-hash}/www.differentdomain.com
The user who owns www.domain.com would then need to know what the hash is for www.differentdomain.com to be able to try and access those files. Make your hash generation random or at least very, very non-obvious and you're about as secure as you're going to get with Apache in its current state.
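A minimal sketch of such a generator (the salt value and path layout are made up for illustration; the secret salt is what keeps a neighbour from simply recomputing everyone else's directory names):

```shell
# Hash-directory generator: secret salt + domain, md5, first 12 hex chars.
salt='s3kr1t'                 # hypothetical; keep the real one private
domain='www.domain.com'
hash=$(printf '%s%s' "$salt" "$domain" | md5sum | cut -c1-12)
echo "/var/www/$hash/$domain"
```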
Re:Obscurity (Score:1)
Ever heard of 'ls'?
a few ideas: wrappers and ngroups_max (Score:1)
1. CGI wrappers. It's been a while since I ran a big VH server (but I will be this summer as we upgrade our Apache and rewrite the site), but it seems like CGI wrappers helped a lot. I think the way it worked was to set the setuid bit on each script, have it owned by the user, and Apache would then run the process under that uid.
2. Increase ngroups_max. Hit Google for the change on your particular *nix. On Solaris, edit /etc/system and add "set ngroups_max=32". This does break NFS-- be warned! This includes NAS-attached storage. You can then make Apache a member of up to 32 groups (those where it needs read, write or execute privilege). Don't go all crazy and make Apache a member of every stinkin' group-- remember, the group permission is only needed where the world does not need to go!
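On the wrapper point: the bit involved is the setuid bit, not the sticky bit, and a caveat worth knowing is that Linux ignores setuid on interpreted scripts, which is why cgiwrap and suexec are compiled C programs that do the uid switch themselves. Setting the bit is still demonstrable (file name and contents here are hypothetical):

```shell
# Mark a CGI as setuid-owner (rwsr-xr-x). A compiled wrapper honours
# this; the kernel itself ignores setuid on #! scripts.
d=$(mktemp -d)
cat > "$d/count.cgi" <<'EOF'
#!/bin/sh
echo "Content-type: text/plain"; echo
id -un
EOF
chmod 4755 "$d/count.cgi"     # setuid + world exec
stat -c '%a' "$d/count.cgi"   # prints 4755
```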
Virtual servers work for me. Easy. (Score:1)
Java web / app servers are a good model (Score:2)
Essentially, the thread serving a user's request has their authenticated ID associated with it, and cannot access any resource to which that ID hasn't been explicitly authorized, e.g. in mappings in the web.xml file. If the resource corresponds to an external file or process, it is easy enough to assign it ACLs in the app server and let the server decide whether / how to run it.
The server itself doesn't need to be root.
For example, see the WebLogic security docs [bea.com] - you probably won't find similar features in free products right now, but with the impressive capabilities that Java 1.4 provides, it shouldn't be too long before they appear.
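The mapping mentioned above is the servlet spec's declarative security in web.xml; a sketch with hypothetical role and path names:

```xml
<!-- Only callers authenticated into role "alice" may reach /alice/*.
     The container enforces this before any code runs. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>alice-area</web-resource-name>
    <url-pattern>/alice/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>alice</role-name>
  </auth-constraint>
</security-constraint>
```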
Different level dirs (Score:1)
I'm not sure if this is only a matter of group ownership of the files.
If this is the question, the solution could be having a few directory levels with different groups:
/home/users/user1/website
/home/users/user2/website

drwxr-x--- root:users   /home/users
drwxr-x--- web:group1   /home/users/user1
drwxr-xr-x user1:other  /home/users/user1/website
This way, only users in the group users can cross the first dir; then only THE web user and THE user with group group1 can cross the second dir; then everybody can cross the third level, but in fact "everybody" only means web and group1 (user1).
So, you only need one group per user and you don't need the webserver user to be in all groups.
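The layout above can be reproduced with plain chmod (the chowns to root/web/user1 need root, so only the modes are demonstrated here):

```shell
# Three nested levels, modes matching the listing above.
base=$(mktemp -d)
mkdir -p "$base/users/user1/website"
chmod 750 "$base/users"                 # root:users   drwxr-x---
chmod 750 "$base/users/user1"           # web:group1   drwxr-x---
chmod 755 "$base/users/user1/website"   # user1:other  drwxr-xr-x
stat -c '%a' "$base/users" "$base/users/user1" "$base/users/user1/website"
```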
Is this ok?
Miquel