Managing Environment Specific Config Files?

byrnereese asks: "It seems that every organization I have worked for has been plagued with the same problem: how do you effectively manage configurations for the various development, staging, and production environments you deploy to? For example, each application at our company requires a different set of database credentials for each environment. For a long time we maintained a separate config file for each environment and checked them all into our SCCS, but that has proved incredibly error prone because a different build is required for each environment. Each company I have worked for uses a different scheme to make config file management a little easier, but none of them has come up with anything less error prone, or easier to manage, than the alternatives. So I am curious: how does your company manage environment-specific application configuration? And what does your change management process for these configurations look like?"
  • I Am Not A Programmer, but how about this:

have home directories mounted on an NFS share. To test program foo at stage beta, log in to a machine as user 'betatester', where the appropriate environment already exists.

    Put that NFS share into CVS, so that you can change it as needed. If there are multiple projects, add the program name. So for two programs, foo and bar, and two stages, alpha and beta, you would have:

    beta-foo
    alpha-foo
    beta-bar
    alpha-bar

    Not sure I even understood the question, but this might do it if my take is correct.
    • Re:IANAP (Score:2, Interesting)

      Rather than logging in as different users, I would just set up a set of shell scripts, each containing a PROJECT variable.

      The shell scripts could be shared by different users, so each user could set their environment to one of the four phase-project environments.

      Script1: PROJECT=beta-foo
      Script2: PROJECT=alpha-foo

      That way, any of your developers could simply set their $PROJECT, and YOU, as the administrator, don't need to clean up when they fscked up the shared user account.
      • Or, instead of setting $PROJECT manually, have it exported from the shell script as well.

        Good idea. (Yours, that is:)
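A minimal sketch of the script idea above, assuming the NFS layout from the parent comment (file names, paths, and the default value are all illustrative):

```shell
# project-env.sh -- hypothetical per-project environment script.
# Source it to point your shell at one phase-project combination:
#   . ./project-env.sh beta-foo
PROJECT="${1:-beta-foo}"                 # default shown for illustration only
export PROJECT
export CONFIG_DIR="/nfs/envs/$PROJECT"   # assumed NFS layout, not a real path
echo "environment set to $PROJECT"
```

Each developer sources the script instead of sharing a login, so a broken environment is confined to that developer's shell.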

  • by arb ( 452787 ) <`moc.liamg' `ta' `absoma'> on Wednesday August 07, 2002 @08:46PM (#4029768) Homepage
    At one company I worked for, someone had built a very small application which handled switching environments and all the associated settings. While the implementation was very crude, it was also quite effective. The applications we were developing were Windows-based and written in Visual Basic. The environment manager was customisable (to an extent) and used an INI file to describe each of the possible environments (dev, system test, UAT, production, training, demo), where to find the relevant versions of the required files, and any relevant registry entries. Users were able to switch between environments relatively easily and it worked quite well. We had a separate directory set up for each environment which contained a complete set of files for that environment. Each time a new build was done, the environment's directory was updated, and when the users ran the environment manager only the changed files were copied to their machines. This tool was also used to roll out new versions of the applications, effectively replacing bulky Windows installers. (Okay, there was still one setup program that needed to be run to install the software initially, but after that the environment manager did the rest.)

    All the meta-data for the environment manager was kept under source-control, and the users executed the environment manager from the network. (They could optionally run it locally, but it was then pointed to the network to make sure it was getting the correct INI file.) The only minor complication was that after a build, the environment directory needed to be updated on the network with the latest executables; however, this was easily integrated into our build/roll-out procedures. We tagged each release in our source-control system, and were able to easily roll back any environment if needed.
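The INI file described above might have looked something like this; the section and key names here are guesses, not the actual format the poster used:

```ini
; hypothetical layout -- one section per environment
[UAT]
FileSource=\\buildserver\envs\uat
RegistryKey=HKEY_LOCAL_MACHINE\Software\MyApp\UAT

[production]
FileSource=\\buildserver\envs\prod
RegistryKey=HKEY_LOCAL_MACHINE\Software\MyApp\Prod
```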

  • by Anonymous Coward
    For Java properties files, I made different sets of lines for dev, test and production. The app checks an environment variable at startup that holds the hostname of the server. Here's an example (javadev, etc. are the hostnames). The dev/tst/prd prefixes determine which set of lines to read below.

    javadev=dev
    javatest=tst
    javaqa=tst
    javaprod=prd

    devDBUsername=bubba
    devDBPassword=password

    tstDBUsername=bubba
    tstDBPassword=qa

    prdDBUsername=bubba
    prdDBPassword=secret

    For PHP code, we have if() blocks or switch statements that check $HTTP_HOST or $SERVER_NAME and define variables for each environment.
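The prefix scheme above can be sketched in shell; the file contents repeat the poster's example, and `javadev` stands in for the real hostname:

```shell
# Write the example properties file, then resolve host -> prefix -> credentials.
cat > app.properties <<'EOF'
javadev=dev
javaprod=prd
devDBUsername=bubba
devDBPassword=password
prdDBUsername=bubba
prdDBPassword=secret
EOF

HOST=javadev                                    # normally $(hostname)
PREFIX=$(sed -n "s/^$HOST=//p" app.properties)  # -> dev
DB_USER=$(sed -n "s/^${PREFIX}DBUsername=//p" app.properties)
DB_PASS=$(sed -n "s/^${PREFIX}DBPassword=//p" app.properties)
echo "$PREFIX $DB_USER $DB_PASS"                # -> dev bubba password
```

The same two-step lookup (host to prefix, prefix to keys) is what the Java code would do with `Properties.getProperty`.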
    • We do most Java server side work with Tomcat [apache.org] which uses XML configuration files. We create a config.xml file with the appropriate settings for each environment and use an Ant [apache.org] task to apply an XSL stylesheet to get the web.xml and server.xml files we need.
  • by GusherJizmac ( 80976 ) on Wednesday August 07, 2002 @09:48PM (#4030045) Homepage
    At a place I worked for, we did the following:

    Each environment was based on a username. All environments with a particular username were assumed to be the same. For example, we had 3 live servers, 1 dev and 1 stage (a total of 5 boxes). On the 3 live servers, the UNIX user was named 'live'; the users were 'stage' and 'dev' on the stage and dev boxes.

    Then, each config file in the SCCS was named configfile.logname, so you end up with:

    somefile.conf.live
    somefile.conf.stage
    somefile.conf.dev
    somefile.conf.joeprogrammer
    somefile.conf.bobarchitect

    and so forth.

    The build script would then either symlink or copy the file for the user running it to the right place:

    cd /etc/conf/myapp ; ln -s /build/conf/somefile.conf.$LOGNAME somefile.conf

    When symlinking and using CVS, updates to /build/conf get reflected in /etc/conf/myapp

    Occasionally, we would need something more complicated than just config files, and so there were scripts that could abstract out the global configuration parameters from the environment specific ones and then glue them together:

    /build/conf/global_opts.conf
    /build/conf/environment_opts.conf.live
    /build/conf/environment_opts.conf.stage
    /build/conf/environment_opts.conf.dev

    In the simplest case the script does:

    cat /build/conf/global_opts.conf /build/conf/environment_opts.conf.$LOGNAME > /etc/conf/myapp/config.conf

    That said, on any project you must have a release engineer. Even if not full time, one person on the project should be assigned the duty of handling the build and release procedures. This includes updating all conf files and disseminating changes to the group. No tool can replace good team communication. I view systems like the one I described as something the release engineer deals with, and as a tool for him/her to increase efficiency. The developers should just be able to type 'make' (or the equivalent) and have it all work out.

    Pros: Simple, based on standard UNIX stuff, little or no secret sauce.

    Cons: Tougher to get it to work in cross-platform environment, doesn't handle potentially complicated configuration files, requires a person full or part time to administer and maintain.
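A runnable miniature of the global + per-environment glue step described above, with the paths shortened to a local directory and the file contents invented:

```shell
# Recreate the layout from the comment above under ./build and ./etc.
mkdir -p build/conf etc/conf/myapp
echo 'app_name=myapp'  > build/conf/global_opts.conf
echo 'db_host=devdb'   > build/conf/environment_opts.conf.dev
echo 'db_host=livedb'  > build/conf/environment_opts.conf.live

LOGNAME=dev   # on a real box this comes from the login, not an assignment
cat build/conf/global_opts.conf \
    "build/conf/environment_opts.conf.$LOGNAME" \
    > etc/conf/myapp/config.conf
cat etc/conf/myapp/config.conf
```

The merged file contains the global line plus only the dev line; running the same script as user 'live' would pull in the live settings instead.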

  • by phamlen ( 304054 ) <phamlen.mail@com> on Thursday August 08, 2002 @12:36AM (#4030740) Homepage

    I've encountered this issue in a number of places, and have only been satisfied with homegrown systems. Here are some of the issues we've tackled (but entire books have been written on this topic.)

    Requirements

    First, here's my short list of requirements of a build/config system:

    1. All config files must be under version control. Given a release of software, you must be able to find the appropriate config files for that release.
    2. Config files MAY change when the software doesn't - this should still count as a release (ie, a release is not just 'when the developers have new code.')
    3. Config files may vary depending on the environment (ie, stage, development, production, backup production.)
    4. In an environment where you may have to rebuild machines (ie, WebServer1 just bit the toilet), you need to be able to build a new machine with the correct config file. Thus, your "build routine" for new machines needs to hook into the system.
    5. You need some kind of script that can check the validity of a machine's configuration - (eg, run 'select 'x' from dual' against the database, read/write to the appropriate directories, check that you can connect to server X on port Y, etc.)
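Requirement 5 might start as small as this sketch; the directory names and the "required" list are placeholders, and a real check would probe the database and TCP ports as well:

```shell
# Check that every directory the config names exists and is writable.
REQUIRED_DIRS="logs data"
mkdir -p logs data   # setup for the example; a real check would not create them
status=pass
for d in $REQUIRED_DIRS; do
  if [ ! -d "$d" ] || [ ! -w "$d" ]; then
    echo "FAIL: $d missing or not writable"
    status=fail
  fi
done
echo "config check: $status"
```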

    Some solutions:

    Generally we've "rolled our own" because the good config packages cost a lot of money and still eat up a lot of resources to maintain them.

    The Redhat Package approach

    We build a package of all appropriate config files - one for each type of machine in each environment. The machine contains a "well-known file" which indicates which kind of machine it is and what type of environment it's in. The package checks to ensure that it's being installed on the appropriate type of machine.

    Advantages:

    • The wrong config files can't get onto a machine.
    • You can use rpm commands to check if config files are installed.

    Disadvantages:

    • It's a pain in the neck to write the build script.
    • There are a lot of tricky issues around how to write the Redhat package, ensure the deployment works, etc.
    Centralized Perl 'Config Server'

    In this approach, you write a centralized Perl server that keeps track of all config files. If you write it correctly, it can even be 'hierarchical' (ie, there are default config files, then webserver config files, and Webserver1 inherits both the default config files and the webserver config files.)

    Each machine then asks the Perl server for its config files.

    The Perl server ensures that config files are checked into CVS.

    Advantages:

    • One centralized location for config files.
    • The ability to determine which machines have which config files

    Disadvantages:

    • Someone has to write the Perl server- it took us about 3 months, but once it was done, it worked great.
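The hierarchy can be mimicked with plain directories, later layers overriding earlier ones; the layer names are illustrative, and the real server presumably spoke some protocol and fronted CVS:

```shell
# default -> webserver -> host-specific; the most specific layer wins.
mkdir -p layers/default layers/webserver layers/webserver1 merged
echo 'timeout=30' > layers/default/app.conf
echo 'timeout=60' > layers/webserver/app.conf
echo 'motd=hello' > layers/default/motd.conf

for layer in default webserver webserver1; do
  cp -f layers/$layer/* merged/ 2>/dev/null || true  # empty layers are skipped
done
cat merged/app.conf   # the webserver layer's timeout overrides the default
```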
  • Environment Files (Score:2, Informative)

    by Bklyn ( 21642 )
    See environ [umn.edu] for a nice package that does this sort of thing and uses Perl code to manipulate your environment.
  • LDAP / SCM? (Score:3, Insightful)

    by Aniquel ( 151133 ) on Thursday August 08, 2002 @09:36AM (#4032308)
    Just a thought, but isn't LDAP designed for information kind of like this? Not the actual config information, but pointers to the correct config file. You could put the config files themselves into an SCM system; then, when a box needs to find out what config file it should be using, it queries the LDAP server for the file it should hit. Offers atomic updates and scalability. Hmm... might try that here.
  • We split environment-dependent configuration (ie db config and addresses) from build-dependent configuration (think 'basic' and 'deluxe' versions). Every env parameter has a default. The defaults and the build parameters are stored with the app and shipped with it.

    On the target platforms we store env configuration in such a way that it's not overwritten by app upgrades. The app 'pulls' config from its environment as needed (basically, only parameters which have a default can be overridden). This means upgrades never overwrite customer config, so backing out of an upgrade is easy.

    Administration of each environment is independent of the build. Altogether this means we don't have many different build variations; as you suggest, a build-per-environment is a pain to maintain, and it's also prone to failure, as the build the testers use isn't the one the customers get.

    Environments which are 'common' (eg basic developer setup including locations of common dev db) are stored under configuration management in a separate tree from the application. Actually, once we reach alpha releases we make disk images so eg sales can get a consistent environment. In the normal course of work, developers will just make a copy of these managed environments rather than check them out, edit, and check in.

    We're working in Java, and much of the way we manage our env issues is down to the container/application separation in J2EE, and the clean separation of application assembler/deployer/administrator roles. If you follow the philosophy you don't have much option but to act as we did above.
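The "pull from environment, fall back to shipped default" rule reduces, in shell terms, to something like the following; the variable names are invented:

```shell
# Shipped default, overridable by the deployment environment.
DEFAULT_DB_HOST=localhost
DB_HOST="${MYAPP_DB_HOST:-$DEFAULT_DB_HOST}"   # env setting wins if present
echo "using db host: $DB_HOST"
```

An upgrade replaces the default; the customer's own MYAPP_DB_HOST setting survives untouched.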
    The problem is that you have two separate things: 1) programs and 2) configuration. These two things combine to create a third thing, your runtime environment.

    Create a script to ensure that the configuration is valid and matches a given set of programs.

    I've got something like this for creating Solaris packages. I check to make sure all of the files that go in the package are in the profile AND that all of the files listed in the profile are present.
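That two-way check (everything present is listed, everything listed is present) is nearly a one-liner with comm; the package tree and profile here are invented stand-ins:

```shell
# Build a tiny example package tree and profile, then diff the two lists.
mkdir -p pkg
touch pkg/prog pkg/prog.conf
printf 'prog\nprog.conf\n' > profile

ls pkg | sort > present.txt
sort profile  > listed.txt
# comm -3 prints only lines unique to one list; any output means a mismatch.
if comm -3 present.txt listed.txt | grep -q .; then
  echo "mismatch between package contents and profile"
else
  echo "package and profile agree"
fi
```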
    Check out Ant at the Apache project (www.apache.org). It's a tool for automatically building a Java project - it uses an XML-based build file to specify the actions it should take.

    We used it to :

    • check out all the code for a given project and/or branch from the source code control system

    • check out all the relevant config files and put them in the appropriate place

    • compile everything that needs to get compiled

    • restart any services that need to be restarted

    Of course, by creating different build targets, we could make a bunch of decisions in the build process, e.g. precisely which files to check out (live didn't get some of the debug stuff compiled in, so didn't need a bunch of support code), which config elements to check out, whether to enable debug features, etc.

    You still end up with multiple config files and all the mess that goes with it, but because the build process is totally automated, it is a lot less error prone. You can think of the ant build file as a meta-config file, identifying your environments and their specific requirements etc.

    I believe there are similar tools available for other programming environments - you could do something similar with "make" if you really wanted to - and you could use ant for non-java projects if you want to invest a bit of time.

    As a general principle, I always like to see build processes automated - it allows you to create environments without needing the skills of your company sociopath, it enables developers to find bugs in the finished product by re-creating it quickly, and when bad stuff happens, you can recover without too much panic - you know that your automated build scripts work reliably, and you don't have to remember that the database connection relies on some obscure library that you have to download from an extinct website somewhere.

    Check out www.xprogramming.com for more on the importance of regular builds.
