Hardware Technology

Horizontal or Vertical Server Architecture? 45

zetes asks: "For a while now I have been a supporter of a horizontal server architecture/environment, where one server does one and only one thing and a business/department would then have numerous servers. This way, if a service needs to go down (especially in a Windows Server environment), nothing else is affected. However, due to the rising importance of security in the last couple of years, I have decided that having fewer servers is both easier to manage and update, and provides fewer points of entry for hackers or virus writers. I am not going to a purely vertical architecture, where everything is on one server, but more of a hybrid (down from 10-12 servers to around 5). So what I am asking you all today is: what do you prefer to support or manage? One or the other, or a hybrid? And does the operating system or particular service(s) dictate this architecture to a certain extent?"
This discussion has been archived. No new comments can be posted.

  • I have a laundry list of require... oh, sorry, wrong Ask Slashdot.

    More servers = ease of administration only if you have automated patch management tools. Otherwise the "patch one service at a time" advantage is negated by having to deploy said patches to many machines. It is usually the load on the services that really determines how to break out your servers. In a Windows environment, SQL and Exchange should never be on the same box with any kind of load: both require too much memory. Determine how much l
  • go with horizontal (Score:2, Interesting)

    by Anonymous Coward
    I prefer many servers with one major service each.

    I have a group of about 10 FreeBSD servers and I use the "clusterit" port, which has "dsh", a distributed shell. There are also other packages with similar functionality, i.e., tools to run stuff on many machines at once.

    Each time I need to patch them, or do anything really, I just write a script like this (abbreviated):

    #!/bin/sh
    cvsup                                    # sync the source tree
    cd /usr/src/sys/whatever
    make clean && make depend && make install && make clean
    kill `cat /var/run/whatever.pid`         # stop the old daemon
    /usr/sbin/whatever                       # and restart it
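
    Clusterit's dsh is what fans that script out across the group; a rough equivalent with plain ssh in a loop (hosts.txt and the script path here are just illustrative, not from my setup) would be:

    #!/bin/sh
    # run the same patch script on every host in turn
    # (hosts.txt -- one hostname per line -- and the script path are illustrative)
    for host in `cat hosts.txt`; do
        echo "=== $host ==="
        ssh "$host" "sh /root/patch.sh"
    done

    dsh does the same thing across the whole cluster in one command, which is the point of the port.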
  • Depends on budget (Score:3, Interesting)

    by venom600 ( 527627 ) on Tuesday October 07, 2003 @07:30PM (#7158231) Homepage Journal

    In my ideal world, each server would have one, and only one, purpose. That way if a piece of software is compromised in such a way that remote access is granted, the damage is somewhat contained.

    Also, from a maintenance perspective, having single-purpose machines makes life a lot easier... especially if they are redundant single-purpose boxen.

    But sometimes reality has to set in, and some service consolidation may have to occur to keep hardware costs down.

    • Re:Depends on budget (Score:3, Interesting)

      by nocomment ( 239368 )
      Yes, but then you also have a lot more patching work to do, and a ton more forensics work if a system is compromised. I like to cluster related services together. I guess I'd be considered a hybrid.

      My firewall runs openbsd, pf, bind 9 (chrooted, primary dns) and snort (chrooted). Snort logs to a monitoring server that is running Mandrake 9.1, postgresql, apache 2, mrtg, some homegrown perl scripts, and nagios. The mail server is running all the mail services as well as dns (to speed
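
      For illustration, bringing that firewall stack up by hand would look roughly like the sketch below; the user names, chroot directories and config paths are assumptions, and in practice the rc scripts handle it.

      #!/bin/sh
      # pf + chrooted bind 9 + chrooted snort, started by hand (illustrative paths)
      pfctl -e -f /etc/pf.conf                   # enable pf and load the ruleset
      named -u named -t /var/named               # bind 9 chrooted (-t), unprivileged user
      snort -D -u snort -t /var/snort -c /etc/snort/snort.conf   # snort daemonized, chrooted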
    • Just something I thought of when I read your post... What about 15 servers as a single-system-image cluster? Then you get the ease of maintaining one system, plus the uptime and reliability of 15. If one system goes down, migrate the process to another system.

      The only problem I could think of is whether or not something like OpenMosix could properly handle the networking work involved (since the service will only run on one machine at a time, but forked connection processes could mi
    • In my ideal world, each server would have one, and only one, purpose. That way if a piece of software is compromised in such a way that remote access is granted, the damage is somewhat contained.

      In a slightly more ideal world, you would use a single machine running an OS that supports proper partitioning of the machine into sandboxes or VMs or zones (whatever you want to call them). This machine would have redundant and hot-swappable everything. There would be another identical machine in another datacent
  • Yes, I know what I'm about to say is heresy to the /. crowd, but...

    *takes deep breath

    Windows is fairly easy to administer, and it is basically point and click, so I would just stay with the 10-12 servers, balance the load roughly equally, and periodically have each machine automatically contact Windows Update for certain tasks. Or, have one machine keep checking for updates; when there are any, have it download them and then "push" them to the other servers.
  • Once Upon A Time (Score:4, Insightful)

    by Circuit Breaker ( 114482 ) on Tuesday October 07, 2003 @07:32PM (#7158250)
    Before Microsoft Windows was considered stable enough to run servers, the norm was to pack as much as possible onto each server. On SPARC/Solaris servers that's still the case, and it is the reason why 64-way servers are actually reasonable - every processor can access the file system, and they share the load. To get something comparable (in the CPU power sense) from independent computers, you'd need many more CPUs, perhaps 4 times as many, because an idle CPU in one box can't take over for an overloaded CPU in another.

    But Microsoft has made it standard practice to set up a box for every service (making you pay lots of Microsoft tax in the process). And in the NT4 days, that was a good practice, because the O/S wasn't stable enough to be trusted with more than one task at a time - Exchange, SQL Server and other services could all bring the entire O/S down when a malformed user request came in.

    Nowadays, the Windows 2000 and 2003 systems are stable enough to run more than one service at a time. They don't use their resources efficiently, but they are capable of the level of separation needed, and are generally stable enough. Yet, many applications are built with the older mindset, and make demands that are not reasonable for a machine that hosts many services. For example, they might require a specific service pack, a specific version of another tool (Database, Exchange, whatever). A batch text processing system might require a specific Word version to be installed on the server, and a different system might require another.

    If two such demands are incompatible, you're out of luck, because Microsoft doesn't let you install two versions of the software at the same time. Most Unix packages can have multiple versions co-existing at the same time.

    So, to sum up - on modern operating systems, there's no real inherent technical argument against packing all the services onto one machine. However, for historical reasons (tracing back to days when this was impractical), you often have no choice but to dedicate different boxes to different services when running a Microsoft O/S and its tools. There is rarely such a problem on Unix systems and their like.
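
    The "multiple versions co-existing" point is easy to demonstrate with anything built from source; a sketch, with illustrative source directories, versions and prefixes:

    #!/bin/sh
    # install two versions of the same package under separate prefixes
    # (source directories, version numbers and prefixes are illustrative)
    cd /usr/local/src/apache_1.3.28 && ./configure --prefix=/opt/apache-1.3 && make && make install
    cd /usr/local/src/httpd-2.0.47 && ./configure --prefix=/opt/apache-2.0 && make && make install
    # each application then points at whichever install it was built against

    Nothing forces the two installs to collide.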
    • If two such demands are incompatible, you're out of luck, because Microsoft doesn't let you install two versions of the software at the same time. Most Unix packages can have multiple versions co-existing at the same time.

      I haven't used their software, but it appears that Softricity [softricity.com] has a good solution for this. They basically create "jails" for each application, each of which can have its own registry, DLLs, etc. You can have multiple "jails" on a server, so this really helps out with, for instance, a Citri

  • personally (Score:5, Funny)

    by larry bagina ( 561269 ) on Tuesday October 07, 2003 @07:37PM (#7158284) Journal
    I prefer a doggy-style server architecture.
  • There are lots of things that are best when horizontal ;)

    At work I don't have too many services running on each box. We have a regular upgrade cycle, so we have a good supply of older server-grade hardware that no longer has a manufacturer's warranty but is still good for less critical tasks. Why wouldn't we use them instead of letting them lie around doing nothing? Patching is easy enough with ssh public keys and a shell script that logs into each one in turn and executes a command, e.g. allserv sudo a
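
    The allserv wrapper mentioned above can be as simple as the sketch below (the host list path is an assumption):

    #!/bin/sh
    # hypothetical "allserv": run the given command on every host over ssh
    for host in `cat /usr/local/etc/all-servers`; do
        echo "== $host =="
        ssh "$host" "$@"
    done

    With ssh public keys in place, allserv followed by any command runs it on each box in turn.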
    • > Why wouldn't we use them instead of letting them lie around doing nothing?

      Because when the drives/memory/SCSI controller fails, you have no warranty or comeback on it.

      Unless of course, you're completely comfortable with having whatever service you have running on it killed off suddenly with no clear ETA on restoration time.

      We used to run a few old things on old purchased hardware, but it just caused us too many problems. No matter how non-critical a system or service is, the users will always c

      • Because when the drives/memory/SCSI controller fails, you have no warranty or comeback on it.

        Unless of course, you're completely comfortable with having whatever service you have running on it killed off suddenly with no clear ETA on restoration time.


        Yes, actually. One is serving web pages, and everything on it is duplicated on another system that can be switched over in minutes. One's got Ghost images on it, and the users won't even notice if that's down for a while. And if an apps server fails,
        • True, although the parts will probably take a lot longer to get to you once the machine is out of warranty, as most server warranties have a level of service attached to them.

          In your case though, you're pretty covered because you've got redundancy in your design.

  • It really depends (Score:5, Insightful)

    by Halvard ( 102061 ) on Tuesday October 07, 2003 @07:44PM (#7158333)

    Really it does. If you are after increased security, server consolidation isn't necessarily the way to go in any environment. For example, placing additional daemons on a single machine adds to, rather than reduces, the number of potentially exploitable holes a single system has. Running Exchange and MS SQL, or MySQL and Sendmail, or whatever on the same box presents a greater number of potential points of vulnerability than a minimalistic install running just Exchange or Sendmail or whatever, simply because more things are running on the machine. This same concept can be used to distribute updates from your distribution vendor (if that is the route you take) from a single source, which has the added benefit of reducing your bandwidth consumption.

    Also, reducing the number of servers doesn't necessarily reduce the amount of administration. Most admins download and patch servers one at a time regardless of the environment. You could, for example (in a *nix environment), have a base environment server that you use to do all of your patching work and distribution; then you create the package of your choice (RPM, .deb, whatever) for install on the other machines using an update script. Or you could copy the binaries, conf files, etc. over to the other machines instead of packaging.

    For a Windows machine, you can script the patching as well, using a decent login processor like KiXtart with a login script that checks recursively for patches and applies what you want. Or you could use Perl and command-line tools to accomplish this across your servers (workstations too). This is beyond most Windows administrators. It shouldn't be, but it is. I've been administering Windows and networks since just after NT 3.1 Advanced Server shipped, Netware for about the same length of time, and Linux servers and networks for about 5 or 6 years, and most Windows admins are too intimidated or lazy to learn anything other than the GUI that ships. I've done this in all these environments for years and years and have seen or known others that do this too.
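
    As a concrete sketch of that build-once, push-everywhere approach (the package file name and host names are placeholders):

    #!/bin/sh
    # push a locally built package from the base server to each machine and install it
    # (package file and host list are illustrative placeholders)
    PKG=mypatch-1.0-1.i386.rpm
    for host in web1 web2 mail1; do
        scp "$PKG" "root@$host:/tmp/" && \
            ssh "root@$host" "rpm -Uvh /tmp/$PKG && rm -f /tmp/$PKG"
    done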

  • Horizontal or Vertical Shooters?
  • Isn't one of the benefits of the horizontal architecture the fact that, since a server can go down and others can take its place, you can update each server one at a time without taking the service they provide offline? Don't most server systems provide a way to manage this, too? I know Solaris and Mac OS X do (JumpStart, netbooting, Workgroup Manager, Network Install, etc.).

    In terms of security, don't you have a firewall? Please don't tell me that all 10-12 of your servers are readily a
  • And does the operating system or particular service(s) dictate this architecture to a certain extent?"

    The operating system, particular services, capacity, load and reliability requirements dictate the architecture completely. Every situation is different, and they all need to be evaluated based on the metrics of that particular situation. For a small ten-user office, I would think nothing of hosting DNS, DHCP, web proxy, email, file storage, print serving, SQL database and maybe more on a single box, but if
  • On Linux, I may run several services that are related like sendmail/bind or apache/php/mysql.

    On Windows:
    File/Print, one server.
    Exchange, one server.
    SQL, one server.
    AD/DNS/DHCP, one server.
  • Personally, I like to group services by function and politics. You never want a low-priority function running on the same machine as a high-priority function. You never want to have two different managers arguing about who gets to decide how the machine's used. You DO want interdependent services to be on the same machine if the load is low enough to allow it -- it lets you eliminate a whole pile of variables if something goes wrong.

    The DHCP server, DNS server, NTP server, etc. all live on several ident
  • I think it depends on the situation. I personally prefer fewer systems, but that's just me. A few things to think about:

    1. What are the dependencies? That is, do you have an application server that depends on an RDBMS or the like? If so, it might make sense to put them on one machine, assuming you have enough capacity. In a 2-tier system setup, if one server goes down, the service is unavailable anyway. Or you could take those 2 boxes and cluster them for better availability.
    2. How much capacity do you nee
  • You could give Mosix a try.

    Not really suited to all applications (forget anything with large numbers of short-lived threads/processes - e.g. a webserver), but I have managed to completely remove a number of servers used by one client.

    Instead we now have one Mosix box (basically with no additional software loaded) which dynamically takes up the slack when a particular section's server is being swamped. Very nice.

    Q.

  • I used to run hybrid servers, packing as many functions onto a single box as practical. The problem here is that it can be tough to manage a mix of many services on a single server.

    We have migrated to single-application servers running on User-Mode Linux. The only downside here is that you can burn through a lot of IP addresses, and since most of our servers are public, it is not the best use of address space.

    In terms of maintenance, virtuals are ideal. You can individually firewall them opening only a f
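
    For reference, booting one of those single-purpose UML guests looks roughly like the sketch below; the filesystem image, memory size and host-side tuntap address are assumptions.

    #!/bin/sh
    # boot a User-Mode Linux guest dedicated to one service (illustrative values)
    ./linux ubd0=/vm/images/mailserver.fs mem=128M \
            eth0=tuntap,,,192.168.0.254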
  • I generally recommend that you dedicate a server to critically important services and that you provide a few less important services on other servers, with each server running services that get along well together.

    Your decisions should partially be based upon which operating system you are using and the service involved. For example, a Unix-like OS running a traditional mail server could handle thousands of mailboxes and possibly still provide your DNS and DHCP services, but I would not ask the same of Win
  • I would say a hybrid approach would be ideal. Use a *nix and run server machines that function individually in a vertical style, where each machine can literally handle EVERY single service run on the network and uses an external data source/storage/fileserver. Then you can add identical "cluster" machines to handle the load; each machine will take its equal share of the total number of service requests. They would all boot from the centralized file source, which would have an identical mirror
  • by smoon ( 16873 ) on Wednesday October 08, 2003 @05:54AM (#7161331) Homepage
    Go vertical whenever you can. Often you can't - e.g. a public FTP or web server should probably not host a database full of sensitive information. Often you can.

    The big challenge is Windows apps. Big-name packaged stuff like Exchange or WebLogic you can probably pack together on one server. Once you get into specialized and very expensive software (e.g. Sagent, Mercator, fax software, etc.), the vendor will insist that you dedicate two servers (one production, one development/test) and will refuse to guarantee performance if you don't dedicate a server.

    A solution to reduce machine clutter, if not OS clutter, is to virtualize with something like VMWare. In a DMZ for example you could have separate 'boxes' running SMTP, FTP, DNS, etc. all running on one server. Get two of these servers and you've got a pretty secure setup with load balancing. Another big advantage to that is that migrating between servers is as simple as copying the disk image and booting it up. If a system gets compromised you can save that disk image and boot from a known-good one, patch it, and still have the other one for analysis/prosecution.

    Same thing goes for internal networks -- run test/development on a VMWare system with a boatload of 'machines'. If VMWare performance is acceptable, then you can run production as well.
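
    The swap-in-a-known-good-image recovery described above is just file copies of the guest's virtual disk; a sketch with made-up paths, image names and VM name:

    #!/bin/sh
    # keep the compromised disk image for analysis, then restore a known-good one
    # (directory layout, file names and the .vmdk extension are assumptions)
    VMDIR=/vmware/smtp-gw
    cp $VMDIR/smtp-gw.vmdk /forensics/smtp-gw-`date +%Y%m%d`.vmdk
    cp /images/smtp-gw-known-good.vmdk $VMDIR/smtp-gw.vmdk
    # boot the restored guest, patch it, and keep the saved image for prosecution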
  • On one of our big disk farms that also has backup tape robots attached, the system engineers have all the devices hooked up via SAN (fibre). So really the problem most folks have is: will the system grind to a halt when I try to do a little tiny upgrade? If you have two distinct services, like disk farm and backups, on the same machine, place cards from multiple vendors in the machine. In our case, we put in two Fibre Channel controllers, each from a different vendor. Vendor A serves the disk farm, vendor B


  • Let's say XYZ company upgrades their server(s).

    Mail
    SQL
    File
    Print
    Accounting Software

    Dual/quad Xeon 3GHz box with 2GB of RAM.
    Let's say it costs $12,000. Pile them all on there.

    Mail
    SQL/Accounting
    File/Print

    Buy 3 $2,000 servers. Split them up.

    Now, would you rather have a server crash (admin error, hardware failure, software bug) take down the ONE server, losing all services?

    Or, would you rather have a server crash just take down the email system, for example. But, with 3 boxes you'd have 3 times the likelyhoo
  • by PD ( 9577 ) *
    Put all of your eggs into one basket. And watch that basket.
  • Using rack-mount components saves gobs of floor space.
  • While I understand your initial idea of multiple servers with a single service, not only are you increasing your management load, you're also increasing your failure rate...

    Let me explain.

    Imagine that the hardware on all the machines is identical and that hardware failures will statistically happen over time. With one machine, you get one failure per unit of time; with two, you get double the failures; and of course with 15 you get 15 times the failure rate.

    Now this is true for hardware, but it's also true

"One lawyer can steal more than a hundred men with guns." -- The Godfather

Working...