Horizontal or Vertical Server Architecture?
zetes asks: "For a while now I have been a supporter of a horizontal server architecture/environment, where one server does one and only one thing and a business/department would then have numerous servers. This way, if a service needs to go down (especially in a Windows Server environment), nothing else is affected. However, due to the rising importance of security over the last couple of years, I have decided that having fewer servers is both easier to manage and update and also provides fewer points of entry for hackers or virus writers. I am not going to a purely vertical architecture, where everything is on one server, but more of a hybrid (down from 10-12 servers to around 5). So what I am asking you all today is: what do you prefer to support or manage? One or the other, or a hybrid? And does the operating system or particular service(s) dictate this architecture to a certain extent?"
Dear slashdot (take 2), (Score:2)
More servers = ease of administration only if you have automated patch management tools. Otherwise, the "patch one service at a time" advantage is negated by having to deploy those patches to many machines. It is usually the load on the services that really determines how to break out your servers. In a Windows environment, SQL and Exchange should never be on the same box under any kind of load: both require too much memory. Determine how much l
go with horizontal (Score:2, Interesting)
I have a group of about 10 FreeBSD servers and I use the "clusterit" port which has "dsh", distributed shell. There are also other packages with similar functionality, i.e., run stuff on many machines at once.
Each time I need to patch them, or do anything really, I just write a script like this (abbreviated):
#!/bin/sh
# abbreviated: sync sources, rebuild/reinstall, restart the daemon
# (the supfile, port directory, and pid file paths are illustrative)
cvsup /usr/local/etc/cvsupfile
cd /usr/ports/category/port
make clean && make depend && make install && make clean
kill `cat /var/run/daemon.pid`
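The "run stuff on many machines at once" idea can be sketched with a plain ssh loop; clusterit's dsh adds fanout and host grouping on top of the same pattern. The host-list path and the RUN override below are assumptions for illustration, not clusterit's actual interface:

```shell
# Minimal dsh-style "run everywhere" loop (sketch, not clusterit itself).
# HOSTS is a newline-separated list of hostnames (illustrative default path);
# RUN defaults to ssh, but can be set to echo for a dry run.
run_on_all() {
    while read -r host; do
        ${RUN:-ssh} "$host" "$@"
    done < "${HOSTS:-/usr/local/etc/cluster.hosts}"
}
```

With passwordless ssh keys in place, run_on_all uptime fires the command at every host in the list, one after another.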
Re:go with horizontal (Score:2)
Depends on budget (Score:3, Interesting)
In my ideal world, each server would have one, and only one, purpose. That way if a piece of software is compromised in such a way that remote access is granted, the damage is somewhat contained.
Also, from a maintenance perspective, having single-purpose machines makes life a lot easier... especially if they are redundant single-purpose boxen.
But sometimes reality has to set in, and some service consolidation may have to occur to keep hardware costs down.
Re:Depends on budget (Score:3, Interesting)
My firewall runs openbsd, pf, bind 9 (chrooted, primary dns) and snort (chrooted). Snort logs to a monitoring server that is running Mandrake 9.1, postgresql, apache 2, mrtg, some homegrown perl scripts, and nagios. The mail server is running all the mail services as well as dns (to speed
Re:Depends on budget (Score:2)
The only problem I could think of is whether or not something like OpenMosix could properly handle the networking work involved (since the service will only run on one machine at a time, but forked connection processes could mi
Re:Depends on budget (Score:2)
In a slightly more ideal world, you would use a single machine running an OS that supports proper partitioning of the machine into sandboxes or VMs or zones (whatever you want to call them). This machine would have redundant and hot-swappable everything. There would be another identical machine in another datacent
Windows Server..... (Score:1)
*takes deep breath*
Windows is fairly easy to administer, and it is basically point and click, so I would just stay with the 10-12 servers, balance the load roughly equally, and just periodically have each machine automatically contact Windows Update for certain tasks. Or, have one machine keep checking for updates, and when there are updates, have the machine download them, then "push" them to the other servers.
Re:Windows Server..... (Score:1)
Once Upon A Time (Score:4, Insightful)
But Microsoft has made it standard practice to set up a box for every service (making you pay lots of Microsoft tax in the process). And in the NT4 days, that was good practice, because the O/S wasn't stable enough to be trusted with more than one task at a time - Exchange, SQL Server and other services could all bring the entire O/S down when a malformed user request came in.
Nowadays, the Windows 2000 and 2003 systems are stable enough to run more than one service at a time. They don't use their resources efficiently, but they are capable of the level of separation needed, and are generally stable enough. Yet, many applications are built with the older mindset, and make demands that are not reasonable for a machine that hosts many services. For example, they might require a specific service pack, a specific version of another tool (Database, Exchange, whatever). A batch text processing system might require a specific Word version to be installed on the server, and a different system might require another.
If two such demands are incompatible, you're out of luck, because Microsoft doesn't let you install two versions of the software at the same time. Most Unix packages can have multiple versions coexisting at the same time.
So, to sum up - on modern operating systems, there's no real inherent technical argument against packing all the services onto one machine. However, for historical reasons (tracing back to days when this was impractical), you often have no choice but to dedicate different boxes to different services when running a Microsoft O/S and its tools. There is rarely such a problem on Unix systems and their like.
Re:Once Upon A Time (Score:2)
I haven't used their software, but it appears that Softricity [softricity.com] has a good solution for this. They basically create "jails" for each application, each of which can have its own registry, DLLs, etc. You can have multiple "jails" on a server, so this really helps out with, for instance, a Citri
personally (Score:5, Funny)
Re:personally (Score:1)
Now I know who has been gumming up our CPU fans.
Kinda horizontal... (Score:2)
At work I don't have too many services running on each box. We have a regular upgrade cycle, so have a good supply of older server-grade hardware that doesn't have a manufacturer's warranty any more but is still good for less critical tasks. Why wouldn't we use them instead of letting them lie around doing nothing? Patching is easy enough with ssh public keys and a shell script that logs into each one in turn and executes a command, eg allserv sudo a
Re:Kinda horizontal... (Score:1)
Because when the drives/memory/SCSI controller fails, you have no warranty or comeback on it.
Unless of course, you're completely comfortable with having whatever service you have running on it killed off suddenly with no clear ETA on restoration time.
We used to run a few old things on old purchased hardware, but it just caused us too many problems. No matter how non-critical a system or service is, the users will always c
Re:Kinda horizontal... (Score:2)
Unless of course, you're completely comfortable with having whatever service you have running on it killed off suddenly with no clear ETA on restoration time.
Yes, actually. One is serving web pages, and everything on it is duplicated on another system that can be switched over in minutes. One's got Ghost images on it, and the users won't even notice if that's down for a while. And if an apps server fails,
Re:Kinda horizontal... (Score:1)
In your case though, you're pretty covered because you've got redundancy in your design.
It really depends (Score:5, Insightful)
Really it does. If you are after increased security, server consolidation isn't necessarily the way to go in any environment. For example, placing additional daemons on a single machine adds to, rather than reduces, the number of potentially exploitable holes that system has. Running Exchange and MS SQL, or MySQL and Sendmail, or whatever on the same box presents a greater number of potential points of vulnerability than a minimalistic install running just Exchange or Sendmail or whatever, simply because more things are running on the machine. The same single-source concept can also be used to distribute updates from your distribution vendor (if that is the route you take), which has the added benefit of reducing your bandwidth consumption.
Also, reducing the number of servers doesn't necessarily reduce the amount of administration. Most admins download and patch servers one at a time regardless of the environment. You could, for example (in a *nix environment), have a base environment server that you use to do all of your patching work/distribution on/from; then you create the package of your choice (RPM, .deb, whatever) for install on the other machines using an update script. Or you could copy the binaries, conf files, etc. over to the other machines instead of packaging. For a Windows machine, you can script the patching as well, using a decent login processor like KiXtart with a login script that checks recursively for patches and applies what you want. Or you could use Perl and command-line tools to accomplish this across your servers (workstations too).
This is beyond most Windows administrators. It shouldn't be, but it is. I've been administering Windows and networks since just after NT 3.1 Advanced Server shipped, NetWare for about the same length of time, and Linux servers and networks for about 5 or 6 years, and most Windows admins are too intimidated or lazy to learn anything other than the GUI that ships. I've done this on all environments for years and years and have seen or know others that do this too.
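The "package once, install everywhere" approach described above can be sketched roughly like this; the scp/ssh transport defaults and the remote install-pkg command are placeholders, not any specific distro's tooling:

```shell
# Rough sketch: build one package centrally, then push it to each server
# and install it there. install-pkg is a hypothetical remote install command;
# CP/SH default to scp/ssh and can be overridden (e.g. with echo) for a dry run.
push_patch() {
    pkg="$1"; shift
    for host in "$@"; do
        ${CP:-scp} "$pkg" "$host:/tmp/" &&
        ${SH:-ssh} "$host" "install-pkg /tmp/${pkg##*/}"
    done
}
```

Something like push_patch openssl-fix.rpm web1 web2 db1 then walks the server list, copying and installing the one artifact you already tested on the base environment box.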
Next time on ask slashdot: (Score:1)
Re:Next time on ask slashdot: (Score:2)
Oh definitely Horizontal. You can see further ahead that way, so there's room for more action on the screen. Compare R-Type to the new Iridion 3D for the GBA. No contest.
horizontal management (Score:2)
In terms of security, don't you have a firewall? Please don't tell me that all 10-12 of your servers are readily a
Hybrid is the most likely answer. (Score:2)
The operating system, particular services, capacity, load and reliability requirements dictate the architecture completely. Every situation is different and they all need to be evaluated based on the metrics of that particular situation. For a small ten user office, I would think nothing of hosting DNS, DHCP, web proxy, email, file storage, print server, SQL database and maybe more on a single box but, if
linux = many/per, windows = one/per (Score:1)
On Windows:
File/Print, one server.
Exchange, one server.
SQL, one server.
AD/DNS/DHCP, one server.
Group by function (Score:2)
The DHCP server, DNS server, NTP server, etc. all live on several ident
As usual, it depends :) (Score:1)
1, What are the dependencies? That is, do you have an application server that depends on an RDBMS or the like? If so, it might make sense to put them on one machine, assuming you have enough capacity. In a 2-tier system setup, if one server goes down, the service is unavailable anyway. Or you could take those 2 boxes and cluster them for better availability.
2, How much capacity do you nee
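The 2-tier point in (1) can be put in numbers: when an application server depends on a separate database server, both must be up for the service to work, so their availabilities multiply. A quick awk helper (the 99% figures below are illustrative):

```shell
# Combined availability of two serially-dependent servers: the service is
# only up when BOTH are up, so the probabilities multiply.
serial_availability() {
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%.4f\n", a * b }'
}
# Two 99%-available boxes in series give about 98% - worse than either box
# alone, which is why co-hosting the pair or clustering them can make sense.
```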
Re: (Score:2)
OpenMosix? (Score:2)
Not really suited to all applications (forget anything with large numbers of short lifespan threads/processes - ie. a webserver), but I have managed to completely remove a number of servers used by one client.
Instead we now have one Mosix box (basically with no additional software loaded) which dynamically takes up the slack when a particular section's server is being swamped. Very nice.
Q.
Use Virtual Servers (Score:1)
We have migrated to single-application servers running on User-Mode Linux. The only downside here is that you can burn through a lot of IP addresses, and since most of our servers are public, it is not the best use of address space.
In terms of maintenance, virtuals are ideal. You can individually firewall them opening only a f
Mix and Match (Score:2)
Your decisions should partially be based upon which operating system you are using and the service involved. For example, a unix-like OS running a traditional mail server could handle thousands of mailboxes and possibly still provide your DNS and DHCP services, but I would not ask the same of Win
hybrid. (Score:2)
Vertical and VMWare (Score:3, Insightful)
The big challenge is Windows apps. Big-name packaged stuff like Exchange or WebLogic you can probably pack together on one server. Once you get into specialized and very expensive software (e.g. Sagent, Mercator, fax software, etc.), the vendor will insist that you dedicate two servers (one production, one development/test) and will refuse to guarantee performance if you don't dedicate a server.
A solution to reduce machine clutter, if not OS clutter, is to virtualize with something like VMWare. In a DMZ for example you could have separate 'boxes' running SMTP, FTP, DNS, etc. all running on one server. Get two of these servers and you've got a pretty secure setup with load balancing. Another big advantage to that is that migrating between servers is as simple as copying the disk image and booting it up. If a system gets compromised you can save that disk image and boot from a known-good one, patch it, and still have the other one for analysis/prosecution.
Same thing goes for internal networks -- run test/development on a VMWare system with a boatload of 'machines'. If VMWare performance is acceptable, then you can run production as well.
Different Vendors Provide Isolation (Score:2)
On one of our big disk farms that also has backup tape robots attached, the system engineers have all the devices hooked up via SAN (fibre). So really the problem most folks have is: Will the system grind to a halt when I try to do a little tiny upgrade? If you have two distinct services, like disk farm and backups, on the same machine, place cards from multiple vendors in the machine. In our case, we put two Fibre Channel controllers each from a different vendor. Vendor A serves the disk farm, vendor B
Decide: (Score:1)
Let's say XYZ company upgrades their server(s).
Option 1: one dual/quad Xeon 3GHz box with 2GB of RAM, say $12,000. Pile them all on there:
Mail
SQL
File
Print
Accounting Software
Option 2: buy three $2,000 servers and split them up:
Mail
SQL/Accounting
File/Print
Now, would you rather have a server crash (admin error, hardware failure, software bug) take down the ONE server, losing all services?
Or, would you rather have a server crash just take down the email system, for example. But, with 3 boxes you'd have 3 times the likelyhoo
The answer (Score:1)
Vertical, of course (Score:2)
Re:Vertical, of course (Score:1)
-D
Re:Vertical, of course (Score:1)
...and server blades [dell.com] save even more space...
Your Idea of Reliability is Wrong (Score:2)
Let me explain.
Imagine that the hardware on all the machines is identical and that hardware failures happen statistically over time. With one machine, you get one machine's worth of failures; with two, you get double the failures; and of course with 15 you get 15 times the failure rate.
Now this is true for hardware, but it's also true
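The arithmetic behind that claim is easy to sketch: if each box independently has probability p of failing in some period, the chance that at least one of N boxes fails is 1 - (1-p)^N. The p and N figures below are illustrative:

```shell
# Chance that at least one of n machines fails, given per-machine failure
# probability p for the period (independent failures assumed).
any_failure_prob() {
    awk -v n="$1" -v p="$2" 'BEGIN { printf "%.3f\n", 1 - (1 - p) ^ n }'
}
# With an illustrative p = 0.1 per year, a single box fails 10% of years,
# but across 15 boxes you see at least one failure in roughly 79% of years.
```

Of course, each of those 15 failures takes down only one service instead of all of them, which is the whole trade-off the parent is getting at.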