


How Do You Manage Dev/Test/Production Environments? 244

An anonymous reader writes "I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers (mostly Drupal). I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a URL (ex: site1.developer.example.com). Developers also get FTP access and MySQL access (through phpMyAdmin). Additional scripts check in files to the repository and move files/DBs through the different environments. I'm finding as our company grows (we currently host 50+ sites) it is cumbersome to manage all sites by hacking away at the command prompt. I would like to find a solution with a relatively easy-to-use user interface that provisions dev/test/live environments. The Aegir project is a close fit, but is only for Drupal sites and still under heavy development. Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community. How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"
This discussion has been archived. No new comments can be posted.


  • by sopssa ( 1498795 ) * <sopssa@email.com> on Tuesday October 20, 2009 @01:58PM (#29811471) Journal

    How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"

    I do the same as Slashdot.org does - make the changes on live code, expect a little downtime and weird effects, and then try to fix it - while actually never fixing it. After all, the results are not that significant:

    - if someone posts about it in a thread, mods will -1 offtopic it and no one will hear your complaint
    - many people will "lol fail" at the weird effects, like when kdawson decides to merge two different stories together [slashdot.org]

    • by cayenne8 ( 626475 ) on Tuesday October 20, 2009 @03:07PM (#29812659) Homepage Journal
      "I do the same as Slashdot.org does - Make the changes on live code, except a little downtime and weird effects and then try to fix"

      That's not that far from the truth in MANY places and projects I've seen.

      I've actually come to the conclusion that on many govt/DoD projects, the dev environment in fact becomes the test and production environment!!

      I learned that it really pays, when spec'ing out the hardware and software that you need, to get as much as they will pay for the 'dev' machines....because it will inevitably become the production server as soon as stuff is working on it, the deadline hits, and there is suddenly no more funding for a proper test/prod environment.

    • Just roll them into one [thedailywtf.com]. It's even got a catchy name.
    • 1) A solid, well-defined subversion structure
      2) Ant
      3) SSH keys that Ant can use

      Done and done. I work for a major broadcast network, pushing out hundreds of Java, .Net and Oracle Forms applications day in and day out to some number of servers I haven't bothered to count.

      90% of them can be pushed out from a single shell script with just a couple of command line switches. Most of this is done through identifying environments and destination paths for each of them in a build.properties file, then specifying the
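      As a sketch of that approach, the per-environment settings file might look something like this (all hostnames, paths, and property names here are hypothetical, not the poster's actual file):

```properties
# hypothetical build.properties consumed by the Ant deploy targets
env.dev.host=dev01.example.com
env.dev.path=/var/www/dev
env.qa.host=qa01.example.com
env.qa.path=/var/www/qa
env.prod.host=www01.example.com
env.prod.path=/var/www/html
ssh.keyfile=${user.home}/.ssh/deploy_key
```

      The deploy script then just picks an environment name on the command line and the Ant targets read the matching host/path pair.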

      • by chdig ( 1050302 )
        Unfortunately, this isn't a solution for the question at hand (not that I've got one to offer).

        The challenge of web development is that it brings a database and its changes together with those of the code. A database's structure will change with the evolution of a code base, and subversion doesn't help deal with that. It also makes it awkward, for me at least, to deal with dev and live environments. Maybe it's an inherent problem that comes with using relational databases, but I've yet to see any solid open-source, cr
    • That's crazy. You need to have a stage and live. Fortunately these days you can do six machines on one physical box with virtual machines. I suggest you:

      * Buy one server (it needs to be powerful, so maybe one of those ones that goes in a rack - don't worry about a rack though)
      * Install Windows on it and VMware Server (the free one)
      * Install 3 VMs and put Windows on each.
      * You'll need one web server and one database server for live and stage. So use two VMs for production, and the other VM and the mac

  • You are not a n00b (Score:5, Insightful)

    by davidwr ( 791652 ) on Tuesday October 20, 2009 @02:01PM (#29811537) Homepage Journal

    You may be a new system administrator, but you are not a n00b.

    A n00b wouldn't realize he was a n00b.

  • Separate SVN deploys (Score:4, Informative)

    by Foofoobar ( 318279 ) on Tuesday October 20, 2009 @02:02PM (#29811547)
    Create separate SVN deploys as separate environments. Deploy them as subdomains. If they require database access, create a test database they can share or separate test databases for each environment. Make sure the database class in the source is written as DB.bkp so when you deploy it, your deployed DB class won't be overwritten by changes to the source DB class.
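    The subdomain-per-environment idea maps naturally onto Apache virtual hosts, which also matches the submitter's site1.developer.example.com scheme. A minimal sketch (hostnames and paths are made up):

```apache
# one vhost per checked-out environment
<VirtualHost *:80>
    ServerName site1.developer.example.com
    DocumentRoot /var/www/checkouts/site1-dev
</VirtualHost>
<VirtualHost *:80>
    ServerName site1.test.example.com
    DocumentRoot /var/www/checkouts/site1-test
</VirtualHost>
```

    Each DocumentRoot is a separate deploy of the repository, so promoting a site is just deploying the same revision into the next vhost's directory.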
    • Re: (Score:3, Informative)

      Do _not_ use Subversion for this. Use git, even if you have to use git-svn to point it to an upstream Subversion repository. Subversion's security models in UNIX and Linux are exceptionally poor, and typically wind up storing passwords in clear-text without properly notifying you. (Now it notifies you before storing it, but uses it automatically.) Subversion also has very poor handling of multiple upstream repositories, and there is no way to store local changes locally, for testing or branching purposes, an

      • by Foofoobar ( 318279 ) on Tuesday October 20, 2009 @03:08PM (#29812693)
        Git does not have integration with Apache and other tools that developers still find useful. TRAC integrates with Subversion, as do several other tools. You also cannot coordinate Git with your IDE. Don't get me wrong, it is definitely where version control will be in the future, but the tools to support it have to get there first before widespread adoption should be advised for day-to-day use.
        • Re: (Score:3, Informative)

          What do you mean, git doesn't integrate with Apache? It works well as an Apache client, and there's 'viewgit' if you need a bare web GUI. And for this purpose, locally recordable changes seem critical.

          • You are talking about a repo browser vs. an Apache module. Git does not have anything like mod_dav_svn, so there is no full integration with Apache. Many people in the Git community want something like mod_dav_svn, but there isn't anything like it yet.

            As I stated, the tools aren't there yet. Good version control, but the support tools have yet to be built.
        • Re: (Score:3, Informative)

          by garaged ( 579941 )
      • by AlXtreme ( 223728 ) on Tuesday October 20, 2009 @03:15PM (#29812785) Homepage Journal

        Subversion's security models in UNIX and Linux are exceptionally poor, and typically wind up storing passwords in clear-text without properly notifying you.

        Auth token caching can be easily disabled and svn export, not svn checkout, should be used for deploying test/prod environments (like I've seen way too many people do).

        Git (or any other distributed version control system) is great if you are into distributed development, but don't blame the tool when you don't know how to use it properly or expect it to be something that it's not.
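        For the record, the auth-caching knob lives in the Subversion client's runtime config; these are the stock option names in ~/.subversion/config:

```ini
# ~/.subversion/config - disable client-side credential caching
[auth]
store-passwords = no
store-auth-creds = no
```

        A deployment then becomes `svn export URL /var/www/site1`, which, unlike a checkout, leaves no .svn working-copy metadata (including cached repository details) on the production box.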

        • "Auth token caching" is enabled by default, with no server or system to disable it. It's only disabled on a client by client basis: this is completely unacceptable in security terms and always has been. So unless you have direct control of the source code for Subversion for all the systems, then no, you can't "easily disable it".

          'svn export' is fairly insane for most configuration environments, since it provides not a single hint of which files have been altered or modified locally against the base reposito

          • "Auth token caching" is enabled by default, with no server or system to disable it. It's only disabled on a client by client basis

            You can say exactly the same about any form of password caching. You don't use Firefox because it can store your passwords?

            I don't allow Firefox to store my passwords, just as I don't allow my subversion client to store my passwords. It's one or the other, security or ease-of-use. Most go for the latter, so it's on by default. I don't see the issue, as this can be disabled with o

            • Re: (Score:3, Informative)

              You wrote:

              > You can say exactly the same about any form of password caching. You don't use Firefox because it can store your passwords?

              No, I can't say exactly the same thing. Subversion, by default on UNIX and Linux, stores the passwords in cleartext. That is one of the stupidest things I've ever seen for allegedly "enterprise" class software. The only other source control I've seen do something that stupid is CVS, which is what Subversion is descended from.

              "Configure in your dev environement" means that

      • by IMightB ( 533307 )

        I prefer Mercurial>git>SVN, otherwise grandparent is a good suggestion.

      • by Jack9 ( 11421 )

        Do _not_ use Subversion for this. Use git, even if you have to use git-svn to point it to an upstream Subversion repository. Subversion's security models in UNIX and Linux are exceptionally poor, and typically wind up storing passwords in clear-text without properly notifying you. (Now it notifies you before storing it, but uses it automatically.) Subversion also has very poor handling of multiple upstream repositories, and there is no way to store local changes locally, for testing or branching purposes, an

        • Oh, dear. "We trust the people we work with" to prevent people reading the unencrypted passwords in $HOME/.svn/auth/. Yes that trick works really well for root owned system configuration files. No one would _ever_ steal those or modify them behind your back, even when they put their write-authorized passwords in their NFS shared home directories, even when those passwords are also used for email and sudo, and even when those passwords can be used to alter root-owned system configuration files. This certainl

      • by nahdude812 ( 88157 ) * on Tuesday October 20, 2009 @05:37PM (#29815079) Homepage

        svn+ssh doesn't store anything in clear text. If that's a security concern for you, there's already a solution in place. Git is not the be-all and end-all solution to source control; it does many things very well, but there are a few things it does very poorly (repository control; with git, developers have a local copy of the repository which means that a stolen laptop comes with complete revision history). When the systems you're working on have certain sensitivities (legal, patent, security, etc), this can be a major weakness.

        We do something very similar to the original submitter at work. We have 10-15 project branches open at a time. For us, we make sure that our code is subdirectory-agnostic (meaning it can run on the root of the website, or it can run out of a subdirectory). We use directory paths for branches, and we use internal DNS records for environments. http://de-appname/branchname [de-appname] would be a development branch while http://va-appname/branchname [va-appname] would be a validation branch.

        For our lifecycle, we have development, validation, staging, pre production, and production. Development and validation are the only branching locations; staging, pre, and production are each single-path locations (though staging is a branch reserved for this purpose, pre production and production are /trunk)

        On http://de-appname/ [de-appname] and http://va-appname/ [va-appname] there is essentially a directory listing along with the fully qualified branch name, revision, most recent contributor's name (even spelled out by looking up their record in LDAP), commit time, and most recent log message. Developers get a drop-down menu next to project branches which they can use to update the working copy there on that shared server (does a little ajax call and shows you the result in real time as though you were at a terminal). You can also create clean checkouts and even create new branches (either off of trunk or off of another branch). Finally you can even close a branch through this interface; it deletes the branch with a meaningful log message, and cleans up the files on that server. All through a web interface, no need to remotely log into a machine for this purpose. There's no reason for someone to administer this, each developer creates a working copy when and where he thinks it makes sense for himself.

        Because our back end is SAP, we don't have to deal with multiple database environments. There's no "create a new copy of SAP" - indeed when this is something that's organizationally important (testing a major upgrade), it's a multiple day long process. We have a fixed set of data environments, and these are tied to hostname (going back to de-appname, va-appname, stage-appname, etc). If there were multiple database environments to worry about (eg, you wanted to be able to effectively branch database environments too), it wouldn't be a huge deal to set up a template database and have the same scripts we use to manage branches through a web API clone that database and update config in the appropriate app.

        The key that I'm trying to get at is that you should create a web-based tool to allow developers to manage this themselves. The developers will thank you because they'll be able to get what they want faster than you could have provided it, and they'll have control over when, where, and why working copies of their work in progress appear.

        • You're correct. svn+ssh doesn't store passwords. But it's the least supported, least documented, and most difficult way to set up Subversion. And I can understand the problem of revision history being stolen with git, but compared to the chronic risks of repository access and associated account access of most Subversion setups, I'd say it's a much smaller problem. In fact, revision history is a _useful_ component to have without having to contact the central repository every time you want to check logs or d

          • It's certainly true that the branching tools we wrote aren't specific to any given source control system; the git-vs-svn opening of my response was mostly just to point out that there's a better way to use SVN.

            You might be right about svn+ssh being poorly documented; I never had a hard time getting it up and running, but most of the concepts were pretty familiar before I started, so I might not be the most representative. As to it being poorly supported, I'd be surprised; it's essentially filesystem access

            • Your response is cogent, but let's be clear. svn+ssh is _daemon_ access, not file-system access, to a locally operated svnserve daemon. Because the SSH connection starts up the daemon, that daemon has only the permissions of the svn+ssh target user, which is why a well-configured svn+ssh setup uses a common user, with URL's such as svn+ssh://svnuser@svnhost/svnparentdir/reponame. There is no direct filesystem access involved, which is why the svn+ssh user can be a common user for multiple SSH clients.


    • Re: (Score:3, Informative)

      There are so many things to do to get this right - basically I'd add that you need a three-silo people model to match the three silos of dev, test, prod. Your development side are the creative ones; give them the tools they ask for and let them play. You need a critical, intelligent and demanding test manager in the middle, and for the production gatekeeper you need someone with absolutely no imagination at all (follow the rules, tick the boxes, or *zero* chance of advance to production). Seriously. Tell

  • happy with phing (Score:3, Informative)

    by tthomas48 ( 180798 ) on Tuesday October 20, 2009 @02:05PM (#29811585) Homepage

    There's really only so much you can do generically. I'm really happy with phing. I use the dbdeploy task to keep my databases in a similar state. I build on a local machine, deploy via ssh and then migrate the database.

    I'd suggest that rather than checking out at each level, you create a continuous integration machine using something like CruiseControl or Bamboo, then push out build tarballs and migrate the database.
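    The push-out-tarballs step can be sketched roughly like this: a hypothetical releases/ directory plus a "current" symlink that the web server's docroot points at (this is the Capistrano-style layout, not phing's own API; all names are made up):

```python
import os
import tarfile
import time


def deploy_tarball(tarball, releases_dir, current_link):
    """Unpack a CI build tarball into a timestamped release directory,
    then atomically flip a 'current' symlink to it. Rolling back is
    just pointing 'current' at the previous release directory."""
    release = os.path.join(releases_dir, time.strftime("%Y%m%d%H%M%S"))
    with tarfile.open(tarball) as tf:
        tf.extractall(release)
    tmp = current_link + ".tmp"
    os.symlink(release, tmp)
    os.replace(tmp, current_link)  # rename() is atomic on POSIX
    return release
```

    Because the symlink flip is a single rename, visitors never see a half-extracted site; the database migration would run between extract and flip.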

    • by Jaime2 ( 824950 )
      I can't agree more, especially for data. I take great care in source-controlling databases, and I would never dream of auto-building a deployment package.

      Neither I nor any of the developers have access to the database schemas in the dev environment with our normal accounts. I have a Subversion hook script that runs all checkins to the database schema files against the dev DB. If the script errors, the commit is rejected. This guarantees that the only way to get changes into the database is to put those ch
  • When I did this years ago, each server would run scripts to read logs, etc., and if they found something bad they would email me with what they found.

    Simple and scalable

    • by BitZtream ( 692029 ) on Tuesday October 20, 2009 @02:12PM (#29811747)

      Never heard of a loghost eh?

      • Ever heard of Unixware!

        remember this WAS 10 years ago!

        (Now, get off my lawn!)

      • That's interesting. At the moment we have a loghost, and all logs of all applications go to that syslog server. Now we face the problem of allowing access to those logs to developers. Say you have 50 production apps logging to that logserver. Do you know some software (best would be a webapp) that can be configured to let developers log in and see the logs for the application they are responsible for? We could simply share the log files with a samba share, but a webapp that has some kind of integrated tail,

        • syslogd on every modern unix is capable of routing to a specific log file for a specific app. If the basic syslogd isn't enough, your loghost can run syslog-ng or any of the other more powerful syslog daemons. You only have to replace the one on the server, the other clients should just be forwarding EVERYTHING to it.
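          To sketch the syslog-ng variant (source, filter, and destination names are made up; the syntax is along the lines of syslog-ng of this era):

```conf
# split one incoming network source into per-application log files
source s_net { udp(ip(0.0.0.0) port(514)); };
filter f_app1 { program("app1"); };
destination d_app1 { file("/var/log/apps/app1.log"); };
log { source(s_net); filter(f_app1); destination(d_app1); };
```

          One filter/destination/log block per app, and developers can then be given read access to just their own app's file.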

          Of course at this sort of level, you'd probably save yourself a metric assload of trouble if you implemented a proper network monitoring/management server.

          Myself, having only 15 or so hosts t

  • ...testing was what the production environment was for. Nothing like having dozens of end users flooding the help desk with calls because someone messed with a server or an active database. They take care of all that pesky and tedious testing for you!

    /sarcasm (in case you couldn't tell)

  • If there's not a project to fit your bill, develop it internally and release it as an OSS project. It'll add some nice OSS experience to your resume and also add visibility to your employer. If it succeeds, it'll be a big deal for your company. If it doesn't succeed, at least you got the project done. Sounds like everyone wins.

    I've never actually done this (my employer balks at the suggestion), but I'd love to have that sort of opportunity.
  • SVN etc. (Score:3, Informative)

    by djkitsch ( 576853 ) on Tuesday October 20, 2009 @02:17PM (#29811839)
    My company (for upwards of 10 years) has been using:
    • An SVN (Subversion) server on our dev box
    • Developer or group specific subdomains in IIS / Apache on the dev server, to which working copies are checked-out
    • Deployment to live servers via SVN checkout when the time comes
    • Global variables to check which server the app's running on, and to switch between DB connection strings etc.

    Still not figured out an efficient way to version MSSQL and MySQL databases using OSS, though. Open to suggestions!

    • My company wrote a small project for this (not released in any form, though). It has a collection of SQL scripts identified by date (e.g. "2009-10-15 1415 Renamed Foo.Bar to Foo.Baz.sql") and a table with columns for script name and date applied. Any scripts it finds that aren't listed in that table, it applies in order according to the date in the script name.

      You should be able to hack this together in a day or so.
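      That scheme is small enough to sketch; here sqlite3 stands in for the real MSSQL/MySQL connection, and the table and column names are made up:

```python
import os
import sqlite3


def apply_pending(conn, script_dir):
    """Apply dated *.sql scripts that aren't yet recorded in the
    schema_log table, in filename (i.e. date) order."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_log ("
        "script TEXT PRIMARY KEY,"
        "applied_at TEXT DEFAULT CURRENT_TIMESTAMP)")
    done = {row[0] for row in conn.execute("SELECT script FROM schema_log")}
    applied = []
    for name in sorted(os.listdir(script_dir)):  # date prefix sorts correctly
        if name.endswith(".sql") and name not in done:
            with open(os.path.join(script_dir, name)) as f:
                conn.executescript(f.read())
            conn.execute("INSERT INTO schema_log (script) VALUES (?)", (name,))
            applied.append(name)
    conn.commit()
    return applied
```

      Running it a second time is a no-op, which is what makes it safe to fire on every deploy.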

    • Re: (Score:3, Informative)

      by Erskin ( 1651 ) *

      Deployment to live servers via SVN checkout when the time comes

      Side note: I humbly suggest (as someone else mentioned elsewhere) you use export instead of checkout for the live deployments.

  • by dkh2 ( 29130 ) <dkh2&WhyDoMyTitsItch,com> on Tuesday October 20, 2009 @02:18PM (#29811871) Homepage

    If you're able to script deployments from a configuration management host you can script against your CVS (SVN, SourceSafe, whatever-you're-using).

    There are a lot of ways to automate the management of what file version is in each environment but a smart choice is to tie things to an issue tracking system. My company uses MKS (http://mks.com) but BugTracker or BugZilla will do just as well.

    Your scripted interface can check out/export the specified version from controlled source and FTP/SFTP/XCOPY/whatever to the specified destination environment. For issue-tracker-backed systems you can even have this process driven by issue-id to automatically select the correct version based on issues to be elevated. Additionally, the closing task for the elevation process can then update the issue tracking system as needed.

    Many issue tracking systems will allow you to integrate your source management and deployment management tools. It's a beautiful thing when you get it set up.

  • Hilarity (Score:2, Interesting)

    by eln ( 21727 )

    Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community. How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"

    I'm sure you have a legitimate problem, and there are lots of ways to solve it, but this line just cracks me up. You COULD write it yourself or pay someone, but if you use someone else's Open Source work (note: nothing is said about contributing to an OSS project, just using it) you'd be "giving back to the community."

    Translation: I have a problem, and I don't want to spend any of my own time or money to solve it, so I'm going to try and butter up the people on Slashdot in hope of taking advantage of the

  • Or I would if I were in management. For some reason they won't promote me here.

  • by BlueBoxSW.com ( 745855 ) on Tuesday October 20, 2009 @02:22PM (#29811941) Homepage

    Most important thing is to treat your code and data separately.


    Code moves downstream: Dev -> Test -> Production


    Data moves upstream: Production -> Test -> Dev

    Many developers forget to test and develop with real and current data, allowing problems to slip further downstream than they should.

    And make sure you back up your Dev code and your Production data.

    • by mcrbids ( 148650 )

      Dang. Out of mod points, so I'll reply.

      Parent covers an EXCELLENT point. We've gone to great lengths to replicate data from production to test/dev modes. We have scripts set up so that in just a few commands, we can replicate data from production to test/dev, and that do data checks to make sure that something stupid isn't done. (EG: copying a customer's data from test -> production and wiping out current data with something 2 weeks old, etc)
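      The "make sure something stupid isn't done" checks might be sketched as a gate the copy script runs first (the policy and names here are hypothetical, not the poster's actual scripts):

```python
from datetime import datetime, timedelta


def check_copy(src_env, dst_env, dump_taken_at, now=None):
    """Refuse obviously-wrong replication, e.g. clobbering production
    with a stale test dump. Raises ValueError on a bad request."""
    now = now or datetime.now()
    if dst_env == "production":
        raise ValueError("refusing to copy data into production")
    if src_env != "production":
        raise ValueError("data should flow down from production")
    if now - dump_taken_at > timedelta(days=1):
        raise ValueError("dump is stale; take a fresh snapshot first")
    return True
```

      The actual copy (mysqldump | mysql, or whatever) only runs if the gate passes.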

      In our case, each customer has their own database, and their ow

    • by zztong ( 36596 ) on Tuesday October 20, 2009 @03:44PM (#29813151)

      Testing with real data is not necessarily a good practice. Consider sensitive data, such as social security numbers. Auditors may ding your development practices for providing developers access to information they do not need. You need realistic data, not necessarily the real data. If you're bringing real data from prod back to test and dev, consider having something scrub the data.
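      A scrub pass can be as simple as rewriting sensitive columns with format-preserving fakes on the way into dev/test. A sketch (the field name is hypothetical; area numbers starting with 9 are, to my knowledge, not issued as real SSNs):

```python
import random


def scrub_ssns(rows, field="ssn"):
    """Replace real SSNs with format-preserving fakes in place.
    A fixed seed keeps the scrambled data repeatable across refreshes."""
    rng = random.Random(42)
    for row in rows:
        row[field] = "9%02d-%02d-%04d" % (
            rng.randint(0, 99), rng.randint(0, 99), rng.randint(0, 9999))
    return rows
```

      Run something like this between the production dump and the dev/test load, one function per sensitive field.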

  • Puppet (Score:2, Informative)

    If you are in the unix/linux world, take a look at puppet. You provision out a set of nodes (it allows node inheritance) and manage all your scripts, config files, etc. from one central location (called the puppet master). Changes propagate automatically to all servers they apply to. It is built around keeping the configuration files in a versioned repository and is ready to use today.
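    A minimal manifest, in the node-inheritance style Puppet supported at the time (the class, package, and node names here are made up):

```puppet
# site.pp: a base node plus an inheriting web node
node basenode {
  package { 'apache2': ensure => installed }
  service { 'apache2':
    ensure  => running,
    require => Package['apache2'],
  }
}
node 'web1.example.com' inherits basenode {
  include drupal_site
}
```

    Every box that checks in as web1.example.com converges to that state, so dev/test/prod hosts differ only in which node definition they match.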
  • by HogGeek ( 456673 ) on Tuesday October 20, 2009 @02:24PM (#29811973)

    We utilize a number of tools depending on the site, but generally:

    Version Control (Subversion) for management of the code base (PHP, CSS, HTML, Ruby, PERL,...) - http://subversion.tigris.org/ [tigris.org]
    BCFG2 for management of the system(s) patches and configurations (Uses svn for managing the files) - http://trac.mcs.anl.gov/projects/bcfg2 [anl.gov]
    Capistrano/Webistrano for deployment (Webistrano is a nice GUI to capistrano - http://www.capify.org/ [capify.org] / http://labs.peritor.com/webistrano [peritor.com]

    However, all of the tools above mean nothing without defining very good standards and practices for your organization. Only you and your organization can figure those out...

  • by Fortunato_NC ( 736786 ) <verlinh75@m[ ]com ['sn.' in gap]> on Tuesday October 20, 2009 @02:25PM (#29812005) Homepage Journal

    It's hosted Subversion, with a slick web interface that walks you through darn near everything. You can configure development / test / production servers that can be accessed via FTP or SFTP and deploy new builds to any of them with just a couple of clicks. It integrates with Basecamp for project management, and it is really cheap - it sounds like either their Garden or Field plans would meet your needs, and they're both under $50/month.

    Check them out here. [springloops.com]

    Not affiliated with them in any way, other than as a satisfied customer.

  • by bokmann ( 323771 ) on Tuesday October 20, 2009 @02:27PM (#29812029) Homepage

    Capistrano started life as a deployment tool for Ruby on Rails, but has grown into a useful general-purpose tool for managing multiple machines with multiple roles in multiple environments. It is absolutely the tool you will want to use for deploying a complex set of changes across one-to-several machines. You will want to keep code changes and database schema mods in sync, and this can help.

    Ruby on Rails has the concepts of development, test, and production baked into the default app framework, and people generally add a 'staging' environment to it as well. I'm sure the mention of any particular technology on slashdot will serve as flamebait - but putting that aside, look at the ideas here and steal them liberally.

    You can be uber cool and do it on the super-cheap if you use Amazon EC2 to build a clone of your server environment, deploy to it for staging/acceptance testing/etc., and then deploy into production. A few hours of a test environment that mimics your production environment will cost you less than a cup of coffee.

    I have tried to set up staging environments on the same production hardware using Apache's virtual hosts... and while this works really well for some things, other things (like an Apache, Apache-module, or third-party software upgrade) are impossible to test when staging and production are on the same box.

  • ... is just to call everything beta, then you never have to bother with testing, or documenting anything (though, to be fair, you didn't ask about documentation - so I guess you'd already decided not to bother with that detail). That way you get much faster development time and keep your time to market down to the same as your competitors - who are using the same techniques.

    The trick then is to move on to another outfit just before it hits the fan. Don't worry about your customers - if they are running w

  • People are supposed to TEST this stuff first!?

    Did he forget the Sarcasm Mark ~, or does he not know about it?
  • TPS reports with lots of cover letters.

  • by keepper ( 24317 ) on Tuesday October 20, 2009 @02:43PM (#29812269) Homepage

    It's amazing how this seemingly obvious question always gets weird and overly complex answers.

    Think about how every Unix OS handles this: packaging!

    Without getting into a flame war about the merits of any packaging systems:

    - Use your native distributions packaging system.
    - Create a naming convention for pkgs (e.g., web-frontend-php-1.2.4, web-prod-configs-1.27)
    - Use meta-packages (packages whose only purpose is to list out what makes up a complete system)
    - Make the developers package their software, or write scripts for them to do so easily (this is a lot easier than it seems)
    - Put your packages in different repositories (dev for dev servers, stg for staging systems, qa for QA systems, prod for production, etc.)
    - Use other system management tools to deploy said packages (either your native package manager, or puppet, cfengine, func, sshcmd scripts, etc.)

    And the pluses? You always know absolutely what's running on your system. You can always reproduce and clone a system.

    It takes discipline, but this is how it's done in large environments.
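    A meta-package is just a package with dependencies and no payload. A Debian control-file sketch, echoing the naming convention above (package names hypothetical):

```conf
Package: web-prod-complete
Version: 1.27
Architecture: all
Depends: web-frontend-php (>= 1.2.4), web-prod-configs (>= 1.27)
Description: meta-package listing everything a production web box runs
```

    Installing web-prod-complete from the prod repository pulls in the whole system in one command; the dev/stg/qa repositories carry their own versions of the same meta-package.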


    • Now hire me 3 engineers to preserve and bundle all this arcane packaging, and to do the package management itself. If you've created something smart enough to compare and manage all the packages, you've simply transferred the scripting to the package management, with a net cost in engineering of all the time building all those packages.

      There are several tools I favor:

      1: Checklists (to keep track of features needed or enabled for each machine)
      2: Actual package management: installing Apache or Perl mo

      • by keepper ( 24317 )

        What's hard about building packages?

        The thing you are not getting is that with packages, and the infrastructure to support them, you only do the hard work ONCE. So you say it's hard to package your software?

        A good systems engineer or admin knows that sometimes taking a little more time to set things up right in the first place saves an invaluable amount of time later.

        And as for your snide remark about staffing... well, I've managed 300 OS installs (virtual and physical), with only a developer as a release engineer and me as t

        • by keepper ( 24317 )

          Missed a part..

          So you say it's hard to package your software? Most scripting languages have modules that allow you to automatically build RPMs or debs. For Java and C it is also trivial to generate .spec or deb definition files. It's at most a few days' worth of work for one person, and weeks of work in savings.

          Automation is key!

          • by chrome ( 3506 )
            The biggest obstacle is learning it the first time. If you're smart, you then write a Makefile or shell script or whatever to automate it for the next package. It's Not That Hard (tm). Really, I'm a lazy sysadmin, so I prefer that the software does all the hard work, not me. :)
          • It's not that "hard". It's time-consuming. Building RPM's or .deb's to deploy a single configuration file change is time-consuming: the same time spent to build and configure and manage the arcaneries of dependencies for RPM's or deb's would often be better spent on a single management system. Setting up hostnames and enabling particular authentication configurations or X configuraitons or printer setups, for example, seems a bit misplaced in an actual software package.

            I agree that package management is ver
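
The module-based package builds mentioned above really can come down to staging files plus one command. A minimal sketch, assuming the (real) fpm packaging tool; the tool invocation, paths, and package name here are illustrative, not anything from the thread:

```shell
#!/bin/sh
# Stage files into a fake root, the way rpmbuild/dpkg/fpm all expect.
set -e
PKGROOT=$(mktemp -d)
mkdir -p "$PKGROOT/usr/local/bin"
printf '#!/bin/sh\necho hello\n' > "$PKGROOT/usr/local/bin/mytool"
chmod +x "$PKGROOT/usr/local/bin/mytool"

# With fpm installed, a single command would emit a versioned .deb from
# that staged tree (commented out so the sketch runs anywhere):
#   fpm -s dir -t deb -n mytool -v 1.0 -C "$PKGROOT" .
echo "staged: $PKGROOT/usr/local/bin/mytool"
```

Once that staging step is scripted, rebuilding the package on every release is free.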

    • Re: (Score:3, Informative)

      by chrome ( 3506 )
      +1. Also, use the package signing system to verify that the packages distributed to machines are really released, and use the package dependencies to pull in all the required packages for a given system. If you do it right, all you need is an apt repository, and you type "apt-get install prod-foobar-system" and everything will be pulled in and installed, in the correct order. I converted a site to this method (on Fedora Core many years ago) and we went from taking a day to build machines to 30 minutes. 1) Pu
      • by keepper ( 24317 )

        Yeah, agreed. Most people get scared of the work, not realizing the savings in time later.
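
A "prod-foobar-system" package like the one above is typically an empty meta-package whose only job is its Depends: line. A sketch of what such a control file might contain; the package names are invented for illustration, and with a tool like equivs this becomes an installable .deb:

```shell
#!/bin/sh
# Write out a minimal Debian control file for a role meta-package.
set -e
cd "$(mktemp -d)"
cat > control <<'EOF'
Package: prod-foobar-system
Version: 1.0
Architecture: all
Maintainer: ops@example.com
Depends: apache2, mysql-server, php5, rsync
Description: pulls in everything a production foobar web node needs
EOF
grep '^Depends:' control
```

Installing the meta-package then drags in the whole role via ordinary dependency resolution.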

  • Fabric.

    http://www.gjcp.net/articles/fabric/ [gjcp.net]

    Saves so much hassle and buggering about.

  • CVS (of whatever flavor) can help you do this. It's a pain in the ass, and everybody will hate it, but it works.

    I've done this with virtual machines as well. It's kinda whizzy to do, but probably overkill.

    The simplest way for me was to simply use rsync. Rigid delineation between live and test/dev environments is important. Use a completely separate database (not just a different schema), and if possible a completely separate database server. Changes to the database schema should be encapsulated in updat
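
A minimal sketch of the rsync push described above; the paths and the excluded file are hypothetical, and running with --dry-run first lets you eyeball the change set before anything touches live:

```shell
#!/bin/sh
# Push test to live, excluding per-environment config. Keep --dry-run
# until the file list looks right, then drop it for the real push.
SRC=/var/www/test/site1/
DEST=/var/www/live/site1/
CMD="rsync -a --delete --exclude=settings.php $SRC $DEST"
echo "would run: $CMD --dry-run"
```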

  • The simple answer? Virtual machines. If you have to stay with Linux, go with VMware or, for a free solution, KVM. See http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine

    If you want to run LAMP on OpenSolaris/Solaris, the OS has very robust and easy-to-manage virtual environments called zones, and ZFS makes cloning them cheap. Sun also provides Enterprise Ops Center software that can be used to manage the zones via a GUI. Copy/create/rollback, etc..

    After that, smart system administration is required to keep things easy to manage.

    How you

    • by rho ( 6063 )

      Sun also provides enterprise ops center software that can be used to manage the zones via a gui. Copy/create/rollback, etc..

      This is very important. One of the most important traits of a 3-tiered development system is setting it up so that the "test" environment can be rebooted back to a clone of the live site. "Test" should be just that--for testing. If your test environment goes pear-shaped, who cares? Clone the live site, run the updates from "dev", and your "test" is back.

      In general it's rarely a good

  • Quick Brief (Score:5, Informative)

    by kenp2002 ( 545495 ) on Tuesday October 20, 2009 @03:05PM (#29812641) Homepage Journal

    Develop 4 Environment Structures

    Development (DEV)
    Integration Testing (INTEG)
    Acceptance (ACPT)
    Production (PROD)

    For each system create a migration script that generically does the following:
    (We will use SOURCE and DEST for environments. You migrate from DEV->INTEG->ACPT->PROD)

    The migration script at its core does the following:

    1) STOP Existing Services and Databases (SOURCE and DEST)

    2) BUILD your deployment package from SOURCE (this means finalizing commits to an SVN, creating a dump of SOURCE databases, etc.). If this is a long process, you can leave DEST running and STOP DEST at the end of the build phase. I do this, as builds for my world can take 2-3 days.

    3) CONDITION your deployment package to be configured for the DEST environment (simple find-and-replace scripts to correct database names, IP addresses, etc.; these should be config files that are read and processed). This is common when there are different security SAPs, certificates, etc. that need to be configured. For instance, you may not have SSL enabled in DEV, but you might in INTEG or ACPT.

    4) BACKUP the DEST information as an install package (this is identical to the BUILD done on the source; this BACKUP can be deployed to restore the previous version). This should be the same function you ran on SOURCE, with a different destination (say, Backups versus Deploys).

    5) MIGRATE the install package from SOURCE to DEST

    6) TEST the deployment on DEST to verify the migration succeeded

    7) If all tests pass then APPROVE. This is the green light to re-start the SOURCE services so development can move on.

    That is a brief of my suggestion.
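
The migration phases above can be sketched as a single driver script; each step here is just a placeholder standing in for the real stop/dump/sed/rsync work:

```shell
#!/bin/sh
# Skeleton of a SOURCE->DEST migration driver. Each phase would call
# out to real tooling (service stop, svn export, mysqldump, rsync...).
set -e
STEPS=""
phase() { echo "phase: $1"; STEPS="$STEPS $1"; }

phase STOP       # stop services on SOURCE and DEST
phase BUILD      # finalize commits, dump SOURCE databases
phase CONDITION  # rewrite DB names/IPs/SSL settings for DEST
phase BACKUP     # same build step pointed at DEST, kept as the backout
phase MIGRATE    # unpack the conditioned package onto DEST
phase TEST       # smoke-test DEST before anyone calls it done
phase APPROVE    # restart SOURCE services; development resumes
echo "order:$STEPS"
```

Keeping every phase behind one entry point is what makes the backout path reliable: a failed phase aborts the run before anything later fires.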

    DEV is obvious
    INTEG is where you look for defects and resolve defects. Primary testing.
    ACPT is where user and BL acceptance testing occurs and should mirror PROD in services available.
    PROD ... yeah...

    I handle about 790+ applications across 2000+ pieces of hardware, so this may appear to be overkill for some, but it can be as simple as four running instances on a single box with a /DEV/ /IT/ /ACPT/ /PROD/ directory structure and MySQL running 4 different databases. The "Script" could be as simple as dropping the DEST database and copying the SOURCE database with a new name. Other options are creating modification SQL scripts, for instance, that are applied onto the existing database to preserve existing data. In the case of Drupal your DEV might pull a nightly build and kick out a weekly IT, a biweekly ACPT, and a monthly PROD update.


    The script to deploy needs to handle failure. There has to be a good backout.

    You should have a method to back up and restore the current state. Integrate that into the script. Always back up BEFORE you make changes and AGAIN after you change. DEV may need to look at the failed deploy data (perhaps a substitution or patch failed, and they need to find out why).

    Build both the Before backup and the After backup into the migration script.

    And always 'shake out' a deployment in each environment level to make sure problems don't propagate. You find problems in IT; you test to make sure what you found in IT is resolved in ACPT. Your testers should NOT normally be finding and filing new defects in ACPT environments, with the exception of inter-application communication that might not be available in earlier environments. (A great example: ACPT might have the ability to connect to, say, a marketing company's databases, where you use dummy databases in IT and DEV.) 80/20 is the norm for IT/ACPT that I see.

    Good luck. Use scripts that are consistent and invest in a good migration method. It works great for mainframes and works great in the distributed world too.

    A special condition is needed for final production, as you may need temporary redirects to be applied for online services (commonly called Gone Fishing pages or Under Construction redirects).

  • Web devs need to have security enforced or they won't think about it for their sites. Shut off FTP and enforce SFTP only. If bandwidth is a factor in choosing FTP over SFTP, at the very least use kerberized FTP. Make certain that phpMyAdmin is behind HTTPS and that authentication is required. Yes, this means they have to use two passwords. Tough.
  • I'm just now reading "Pro PHP Security" (Snyder & Southwell, Apress), and it's got a lot of good information - hands-on examples, best practices and technical background that is useful whether you support PHP or not. It covers both local and web-based attacks such as XSS, SQL injection, vuln exploits, etc.

    Among other things, it suggests you set up virtual servers for each domain user. You could use FreeBSD 'jails', linux virtualization tools, etc. - the book is agnostic on which ones you use, and does

  • Segue into a new paradigm and experience increased synergy - consolidate already!

    Kidding, naturally.
  • One solution that I have implemented at several commpanies is to use Hudson [hudson-ci.org] and the Hudson Promoted Builds [hudson-ci.org] plugin. Read this brief introduction [blogspot.com] to the concept.

  • I've found it's useful to put any env-specific properties in external properties files, and then make a copy for each env. On each environment, there's a one-time exercise of creating symbolic links to point to the appropriate files.
          ln -s db.properties.dev db.properties
          ln -s server.properties.dev server.properties ...

    Then just use the links in the app code.
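
That one-time linking step might look like this on a dev box (the file names and keys are just examples):

```shell
#!/bin/sh
# Per-environment config via symlinks: each box points the generic
# name at its own copy; application code only ever opens db.properties.
set -e
cd "$(mktemp -d)"
echo "db.host=dev-db.example.com" > db.properties.dev
echo "db.host=db.example.com"     > db.properties.prod
ln -s db.properties.dev db.properties   # run once per environment
cat db.properties                       # prints db.host=dev-db.example.com
```

The same deploy artifact then works everywhere, since nothing environment-specific ships inside it.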

  • by Hurricane78 ( 562437 ) <deleted AT slashdot DOT org> on Tuesday October 20, 2009 @03:42PM (#29813127)

    This is the system I have adapted from the 5 years I did this professionally.

    First of all, it's a 3-stage system.
    You have a couple of live servers, an identical staging server, and the user machines.
    Every system has a clone of the files; the servers have rsync copies of the stage server files.
    And the users all sync to the stage with Git.
    Everyone has a local clone of the stage server software too, so he can test server-side code right on his machine.
    That's important in every company where people could make conflicting (and even big, global) patches.

    The stage server then has validity tests running: compilations and unit test cases wherever possible, including the database, the server-side code, and rendering test pages in all relevant browsers to diff the rendered versions (images) of the pages. (There's an app for that in Firefox, but otherwise it's desktop automation.)
    There's a red alert box in the test case overview when something fails, which gets checked every evening before pushing anything to the live servers at night.
    The only thing that turns out to be a bit hard is testing the client-side logic (e.g. of web apps) in a transparent manner (= keeping the software configurations and server-side code the same, to be able to rely on it).

    Then there is an emergency push and an emergency direct live update mechanism, for cases when you quickly have to fix something that got overlooked. (Which usually should result in a new test case being written, to catch all such problems.)

    A well-integrated project management system is very important. At the end of my first company, it was a self-written one with good integration. But in the beginning, something like Trac might suffice.

    It is also very important to have a knowledge base for all the things that need to be remembered, like a meta-documentation: workflows and procedures, why the MySQL server will not restart on a reboot of stage server clones, little hooks and mantraps like that. I recommend a wiki.

    And last but not least, never ever forget to have a Bugzilla. If you're good, you can integrate Bugzilla, the test validations and the task/project management into one system. Making the validity tests create bugs in Bugzilla, and bugs being the same as tasks (which makes test-driven development easier).

    Yet this all is completely worthless, if your colleagues don't use it! ;)
    Unfortunately, I learned that when someone can do something wrong, he will.
    So if you can't lock down possibilities to only those required, you have to be very, very careful about who you hire. Especially with "web development", where you get sinology students who learned HTML while working as taxi drivers, stating that they are "professional web developers with 5 years of experience", while honestly believing it. And team leaders believe it too, because they are just as "competent": they themselves either started as something as simple as link collectors, or the boss of the company does not know shit about his business and hired those types. They then usually get promoted to "Head of ...". It's the mother of all PHB stories. ^^

    The key is: Make them like to work the proper way. If nothing helps, money can always push them in the right direction. It's called "bonus".
    And making it their project too, by also embracing their decisions! :)

    • Ok, it's 10 years of experience, if you count like those taxi-driver types. ^^

      And luckily, I noticed that I forgot to mention, what is already mentioned here: http://ask.slashdot.org/comments.pl?sid=1411459&cid=29811941 [slashdot.org]
      To clone the test data in the other direction: from the live servers to the staging server, to the dev systems.

      I don't think backups should need to be mentioned. They are expected from every company. Period.
      The nice thing about using GIT, is that backups are very easy with it. Just cl

    • Lool. I guess the stray </em> was a Freudian slip. It's the exact part that I still have sleepless nights over. So please forgive me. :))

  • (See subject.)

  • With no personal offense to the OP (and noting that this is Drupal), I think the OP is trying a little too hard or suffers from inexperience. My first Drupal server hosts 100+ sites, and Dev/Test/Production was rarely an issue-- which is to say, what is the OP doing that requires that level of segmentation? It's simply not that difficult on the scale mentioned.

    For large sites, of course, Drupal dev/test/production is another matter-- and there is a Drupal group that handles such questions and consider

    • HAHAHAHA the Drupal forums! yeah. Shall I post my years old questions that were never ever answered? Or the ones with the ever helpful: "Upgrade to Version 6...it's shiny!"

  • Put everything in test, not configured properly for production, until such time as enough people start using test that it becomes production on its own. This usually happens slowly and organically, and usually in the middle of the night. Once you have at least 2-3 different groups screaming at you over the lack of availability of your test system you can be reasonably confident that it is now production.
  • I'm amazed -- no one yet seems to have figured out one very obvious possibility:

    You don't need development machines. Let them develop on their workstations. They're developers, they'll figure it out.

    Use Git. Or, if you must use SVN, use developer-specific branches -- and after using this for awhile, it should be very obvious why you should use Git. The point is that each developer should be checking into version control often, without having to worry about causing problems for other users. Want someone to t

  • We do this for many many Drupal sites on many horizontal web nodes via bzr + ant. By 'sites' I mean no multi-site; each 'site' gets its own Drupal instance. By 'Drupal instance', I mean the 'Drupal instance' is an ant-powered deploy from a branch in bzr comprised of vendor branches (core + modules) merged in plus customizations by our shop. Each environment gets a branch, and we merge code upstream (dev -> tst -> prd).

    The only thing 'shared' across the infrastructure is the web services and framewo

  • This is such a nebulous question. You want your dev/qa/pre-prod to emulate your production environment as much as possible; this subject in itself could fill a book on best practices, techniques, and the like. Easiest said by saying: keep all developed code separate from 3rd-party application code. Packages/versioning/repositories are a good start. Make things relocatable, have one installer, and have it take multiple environment variables. I.e., make environment variables 'run time', don't make the
