Managing Environment-Specific Config Files?
byrnereese asks: "It seems that every organization I have worked for has been plagued with the same problem: 'how do you effectively manage multiple environment configurations for the various development, staging and production environments that exist.' For example, each application at our company requires a different set of database credentials for each environment we deploy it to. For a long time we have maintained different config files for each environment and checked them into our SCCS; but that has proved incredibly error prone because a different build is required for each environment. Each company I have worked for utilizes a different systemology to make config file management a little easier, but none of them have stumbled upon anything that is any less error prone, or any easier to manage than any other solution to choose from. So I am curious, how does your company manage environment specific application configuration? And what does your change management process for these configurations look like?"
IANAP (Score:1)
Have home directories mounted on an NFS share. To test program foo at stage beta, log in on a machine as user 'betatester', wherein the appropriate environment already exists.
Put that NFS share into CVS, so that you can change it as needed. If multiple projects, add program name. So for two programs, foo and bar, and two stages, alpha and beta, you would have:
beta-foo
alpha-foo
beta-bar
alpha-bar
Not sure I even understood the question, but this might do it if my take is correct.
Re:IANAP (Score:2, Interesting)
The shell scripts could be shared by different users, so each user could set their environment to one of the four phase-project environments.
Script1: PROJECT=beta-foo
Script2: PROJECT=alpha-foo
That way, any of your developers could simply set their $PROJECT, and you, as the administrator, don't need to clean up when they fscked up the shared user account.
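A minimal sketch of such a shared script, assuming the per-environment settings live in `.env` files under a shared directory (the path and the naming convention are my invention, not from the post):

```shell
# Hypothetical helper: switch the current shell to one of the
# phase-project environments by sourcing its env file.
# ENVDIR and the "<project>.env" naming are assumptions.
use_project() {
    envdir="${ENVDIR:-/shared/envs}"
    if [ -f "$envdir/$1.env" ]; then
        . "$envdir/$1.env"     # pulls in DB credentials, paths, etc.
        PROJECT="$1"
    else
        echo "unknown environment: $1" >&2
        return 1
    fi
}
```

Each developer runs something like `use_project beta-foo` in their own shell, so nobody shares (or breaks) a common account.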
Re:IANAP (Score:2)
Good idea. (Yours, that is:)
Custom built environment manager... (Score:4, Informative)
All the meta-data for the environment manager was kept under source control, and the users executed the environment manager from the network. (They could optionally run it locally, but it was then pointed to the network to make sure it was getting the correct INI file.) The only minor complication was that, after a build, the environment directory on the network needed to be updated with the latest executables; however, this was easily integrated into our build/roll-out procedures. We tagged each release in our source-control system and were able to easily roll back any environment if needed.
How I've seen it done (Score:1)
javadev=dev
javatest=tst
javaqa=tst
javaprod=prd
devDBUsername=bubba
devDBPassword=password
tstDBUsername=bubba
tstDBPassword=qa
prdDBUsername=bubba
prdDBPassword=secret
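Mappings like these can be kept in one properties file and looked up by environment prefix; a sketch (the file layout matches the listing above, but the `getprop` helper is mine):

```shell
# Build a throwaway properties file matching the layout above.
props=$(mktemp)
cat > "$props" <<'EOF'
devDBUsername=bubba
devDBPassword=password
tstDBUsername=bubba
tstDBPassword=qa
prdDBUsername=bubba
prdDBPassword=secret
EOF

# getprop <env-prefix> <key>: print the value for that environment,
# e.g. "getprop tst DBPassword" looks up tstDBPassword.
getprop() {
    sed -n "s/^$1$2=//p" "$props"
}
```

The calling code only ever names the logical key; the prefix comes from the `javadev=dev`-style host mapping.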
For PHP code, we have if() blocks or switch statements that check $HTTP_HOST or $SERVER_NAME and define variables for each environment.
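The same host-based dispatch, sketched in shell rather than PHP (the hostname patterns are illustrative; the PHP version would run the equivalent `switch` on $HTTP_HOST or $SERVER_NAME at request time):

```shell
# env_for_host <hostname>: map a server name to its environment,
# mirroring the javadev/javatest/javaqa/javaprod table above.
env_for_host() {
    case "$1" in
        *dev*)        echo dev ;;
        *test*|*qa*)  echo tst ;;
        *)            echo prd ;;
    esac
}
```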
naming conventions and a little scripting (Score:4, Informative)
Each environment was based on a user name, and all environments sharing a particular user name were assumed to be the same. For example, we had 3 live servers, 1 dev and 1 stage (a total of 5 boxes). On the 3 live servers, the UNIX user was named 'live'; it was 'stage' and 'dev' on the stage and dev boxes.
Then, each config file in the SCCS was named configfile.logname, so you end up with:
somefile.conf.live
somefile.conf.stage
somefile.conf.dev
somefile.conf.joeprogrammer
somefile.conf.bobarchitect
and so forth.
The build script would then either symlink or copy the file for the user running it to the right place:
cd /etc/conf/myapp ; ln -s /build/conf/somefile.conf.$LOGNAME somefile.conf
When symlinking and using CVS, updates to /build/conf get reflected in /etc/conf/myapp
Occasionally, we would need something more complicated than just config files, and so there were scripts that could abstract out the global configuration parameters from the environment specific ones and then glue them together:
In the simplest case the script does:
cat /build/conf/global_opts.conf /build/conf/environment_opts.conf.$LOGNAME > /etc/conf/myapp/config.conf
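A slightly more defensive version of that glue step, as a sketch (the error handling and the fallback to `id -un` are additions of mine):

```shell
# build_conf <confdir> <outfile>: concatenate the global options with
# the current user's environment-specific options, as in the cat
# one-liner above, but refusing to run with a missing env file.
build_conf() {
    confdir="$1"; out="$2"
    user="${LOGNAME:-$(id -un)}"
    envfile="$confdir/environment_opts.conf.$user"
    if [ ! -f "$envfile" ]; then
        echo "no environment config for user '$user'" >&2
        return 1
    fi
    cat "$confdir/global_opts.conf" "$envfile" > "$out"
}
```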
That said, on any project you must have a release engineer; even if not full time, one person on the project should be assigned the duty of handling the build and release procedures. This includes updating all conf files and disseminating changes to the group. No tool can replace good team communication. I view systems like the one I described as something the release engineer deals with and as a tool for him/her to increase efficiency. The developers should just be able to type 'make' (or the equivalent) and have it all work out.
Pros: Simple, based on standard UNIX stuff, little or no secret sauce.
Cons: Tougher to get it to work in cross-platform environment, doesn't handle potentially complicated configuration files, requires a person full or part time to administer and maintain.
Requirements and other issues (Score:3, Interesting)
I've encountered this issue in a number of places, and have only been satisfied with homegrown systems. Here are some of the issues we've tackled (but entire books have been written on this topic.)
Requirements
First, here's my short list of requirements for a build/config system:
Some solutions:
Generally we've "rolled our own" because the good config packages cost a lot of money and still eat up a lot of resources to maintain them.
The Red Hat package approach
We build a package of all appropriate config files - one for each type of machine in each environment. Each machine contains a "well-known file" which indicates what kind of machine it is and what environment it's in. The package checks to ensure that it's being installed on the appropriate type of machine.
Advantages:
Disadvantages:
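A sketch of the install-time guard such a package might run (the role-file path and its contents are invented; in an actual RPM this check would live in a scriptlet such as `%pre`):

```shell
# check_machine_role <expected>: compare the package's intended
# machine/environment type against the host's well-known file and
# refuse to proceed on a mismatch.
check_machine_role() {
    rolefile="${ROLEFILE:-/etc/machine-role}"
    if [ ! -f "$rolefile" ]; then
        echo "$rolefile not found; refusing to install" >&2
        return 1
    fi
    actual=$(cat "$rolefile")
    if [ "$actual" != "$1" ]; then
        echo "package targets '$1' but this machine is '$actual'" >&2
        return 1
    fi
}
```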
In this approach, you write a centralized Perl server that keeps track of all config files. If you write it correctly, it can even be hierarchical (i.e., there are default config files and webserver config files, and Webserver1 inherits both the default set and the webserver set.)
Each machine then asks the Perl server for its config files.
The Perl server ensures that config files are checked into CVS.
Advantages:
Disadvantages:
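The hierarchy itself is simple to sketch: a class-specific copy of a file, where one exists, overrides the default copy (the directory layout here is my assumption, not from the post):

```shell
# resolve_config <repo> <class> <file>: print the config file a host
# of the given class should receive -- its class's copy if present,
# otherwise the shared default.
resolve_config() {
    if [ -f "$1/$2/$3" ]; then
        cat "$1/$2/$3"
    else
        cat "$1/default/$3"
    fi
}
```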
Re:Requirements and other issues (Score:2)
Do you know of any books or documentation which discuss this process?
Re:Requirements and other issues (Score:2, Informative)
As far as books go, you can check out A Guide to Software Configuration Management (Artech House Computer Library) [amazon.com] as a reasonable book on the concepts and issues around CM.
There are at least two lists on Amazon of books on this issue: here's one [amazon.com]
And of course, Google has a whole directory on configuration management. [google.com]
HTH -Peter
Re:Requirements and other issues (Score:1)
I've seen the term 'SCM' thrown around from time to time, but mostly by marketing people from large version-control vendors (like Rational, with ClearCase); and none of the marketers have really been able to give me an example...
So once again, thanks for the tip!
Re:Wow (Score:2)
I did alpha testing, and there was a script called "alpha" (others had similar scripts for their roles; I also used the "beta" script because I coordinated the beta tests). If I wanted to run program foo, I just typed "foo" (which, in turn, was just a script that ran the real foo, which might be fooC05). If I wanted to run the alpha version of foo (which might be fooC06) I typed "alpha foo" and alpha knew to look in the proper directory for the alpha version of the script foo, which in turn set up the environment as needed and called the appropriate foo executable. These scripts were all part of the source code and maintained with SCCS. We kept everything in SCCS, including my test scripts and test data files -- some of which had to be UU encoded to use SCCS, but it worked.
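The dispatcher pattern described here can be sketched as below (the directory layout, STAGEROOT, and the versioned executables are invented for illustration; the real system used per-role scripts like "alpha" and "beta" kept in SCCS):

```shell
# run_stage <stage> <program> [args...]: run the <stage> version of a
# program, e.g. "run_stage alpha foo", by looking it up in that
# stage's bin directory.
run_stage() {
    stage="$1"; prog="$2"; shift 2
    stagedir="${STAGEROOT:-/opt/stages}/$stage/bin"
    if [ -x "$stagedir/$prog" ]; then
        "$stagedir/$prog" "$@"
    else
        echo "no $stage version of $prog" >&2
        return 1
    fi
}
```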
What we did (Score:2)
On the target platforms we store env configuration in such a way that it's not overwritten by app upgrades. The app 'pulls' config from its environment as needed (basically, only parameters which have a default can be overridden). This means upgrades never overwrite customer config, so backing out of an upgrade is easy.
Administration of each environment is independent of the build. Altogether this means we don't have many different build variations; as you suggest, a build-per-environment is a pain to maintain; it's also prone to failure, as the build the testers use isn't the one the customers get.
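In shell terms, the "pull with defaults" idea looks like this (the parameter names are invented):

```shell
# Every overridable parameter has a built-in default; a deployed
# environment overrides only what it needs, so upgrades never have
# to rewrite customer configuration.
connect_string() {
    echo "${APP_DB_HOST:-localhost}:${APP_DB_PORT:-5432}"
}
```

With nothing set, `connect_string` falls back to `localhost:5432`; a production host sets only `APP_DB_HOST` and inherits the rest.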
Environments which are 'common' (e.g. a basic developer setup, including locations of the common dev db) are stored under configuration management in a separate tree from the application. Actually, once we reach alpha releases we make disk images so that e.g. sales can get a consistent environment. In the normal course of work, developers will just make a copy of these managed environments rather than check them out, edit, and check in.
We're working in Java, and much of the way we manage our env issues is down to the container/application separation in J2EE, and the clean separation of the application assembler/deployer/administrator roles. If you follow the philosophy you don't have much option but to act as we did above.
are you solving the wrong problem? (Score:1)
Create a script to ensure that the configuration is valid and matches a given set of programs.
I've got something like this for creating Solaris packages. I check to make sure that all of the files that go in the package are in the profile AND that all of the files listed in the profile are present.
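A sketch of that two-way check, assuming a profile that lists one relative path per line (the profile format and the helper name are mine):

```shell
# check_profile <profile> <rootdir>: report files on disk that the
# profile doesn't list, and profile entries with no file on disk.
# Prints nothing when the two agree.
check_profile() {
    # pass 1: every file under the root must appear in the profile
    ( cd "$2" && find . -type f | sed 's|^\./||' ) | while read -r f; do
        grep -qxF "$f" "$1" || echo "not in profile: $f"
    done
    # pass 2: every profile entry must exist on disk
    while read -r f; do
        [ -f "$2/$f" ] || echo "missing from disk: $f"
    done < "$1"
}
```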
automated build tools.... (Score:1)
We used it to
check out all the code for a given project and/or branch from the source code control system
check out all the relevant config files and put them in the appropriate place
compile everything that needs to get compiled
Of course, by creating different build targets, we could make a bunch of decisions in the build process, e.g. precisely which files to check out (live didn't get some of the debug stuff compiled in, so didn't need a bunch of support code), which config elements to check out, whether to enable debug features, etc.
You still end up with multiple config files and all the mess that goes with it, but because the build process is totally automated, it is a lot less error prone. You can think of the ant build file as a meta-config file, identifying your environments and their specific requirements etc.
I believe there are similar tools available for other programming environments - you could do something similar with "make" if you really wanted to - and you could use ant for non-java projects if you want to invest a bit of time.
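As a sketch, the decisions a per-environment build target makes can be reduced to something like this (the target names, file conventions, and debug flag are illustrative; the thread's actual tool was ant):

```shell
# build_plan <target>: decide which config file to check out and
# whether to compile debug support in, per environment.
build_plan() {
    case "$1" in
        dev|stage) debug=yes ;;
        live)      debug=no ;;
        *) echo "unknown target: $1" >&2; return 1 ;;
    esac
    echo "conf=somefile.conf.$1 debug=$debug"
}
```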
As a general principle, I always like to see build processes automated. It allows you to create environments without needing the skills of your company sociopath; it enables developers to find bugs in the finished product by re-creating it quickly; and when bad stuff happens, you can recover without too much panic - you know that your automated build scripts work reliably, and you don't have to remember that the database connection relies on some obscure library that you have to download from an extinct website somewhere.
Check out www.xprogramming.com for more on the importance of regular builds.