Who Owns Deployments - Dev or IT?

txpenguin asks: "I am IT manager for a small software company. We host several generations of our applications in a fairly complex environment. Our systems are very much inter-dependent (clustering, replication, heavily loaded, and so forth), and bad changes tend to have a domino effect. Additionally, it seems that there are always those who need to be 'in the loop', but aren't aware of changes which affect them. There is a constant battle between IT and Development regarding who should handle the deployment of new code releases and database schema changes to production systems. Dev doesn't understand the systems, and IT does not know the code well. How do you handle this at your company? What protocols seem to work best? Can there be a middle ground?"
This discussion has been archived. No new comments can be posted.

  • Middle ground (Score:5, Insightful)

    by Southpaw018 ( 793465 ) * on Wednesday December 13, 2006 @07:49AM (#17221036) Journal
    These kinds of things where there are two opposing sides always have the same answer. Unless one side is teh debil or something.

    You have to compromise. That's it. Middle ground. There are no other solutions to or ways around this problem. As you describe it, each side has access to and knowledge of half the problem. Half plus half is whole!

    So, meet with the guys in Dev. If you want to be bureaucratic and official about it, create a "deployment team" consisting of an equal number of members from each side that will sit down, discuss, and supervise all necessary changes to production systems. Hell, send someone to a project management class if you need to.

    Now, the obstacle you're likely to hit is office politics. People won't want to listen to others and/or won't want to give up their turf or allow others on it. Too bad. To put it bluntly enough to cut through the politics: everyone in both departments needs to be cooperating or unemployed.

    So there you go. Just like any other relationship, business or otherwise: sit down and talk it over. Problems solved!
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday December 13, 2006 @07:56AM (#17221062)
    IT should only be involved in the maintenance of your company's network. The developers need to own the entire product, and if that means bringing in some database experts who can handle deployments, then that's a must. If you are a consumer of your own product (and a lot of companies do that, if only to count themselves as one of their own wins), then IT should handle the network right up to the point where your network starts hitting the product network.

    Of course, you will need to have separated the networks for this to be any use at all. Putting your production servers on the same network as your business servers would be catastrophic if anything were ever to happen. In the worst case you'd lose both and be totally dead in the water.

    The problem is not "You've got peanut butter in my jelly"; rather, you either need to hire devs who can do network stuff themselves or you need to bring a network admin on board who can work solely on the deployment system. If you can't afford that, then you will need to use the current IT guys you've got, but they need to know that they are to keep the two networks separate.
  • by mwvdlee ( 775178 ) on Wednesday December 13, 2006 @07:58AM (#17221080) Homepage
    IT should own the deployments.
    Assuming the dev department does their job well, a deployment would not require any of the dev department's skills.

  • Re:Middle ground (Score:4, Insightful)

    by popeyethesailor ( 325796 ) on Wednesday December 13, 2006 @08:13AM (#17221152)
    There's no need for a compromise.
    Developers write code & documentation.
    Installation/Deployment guides *are* required documentation.
    IT takes the software, and follows the guide.

    Business applications are mostly consolidated on a few servers. IT guys know dependencies, time windows, batch runs and the whole shebang. A dev team has no business doing all this.
  • Ommm... (Score:4, Insightful)

    by MarkusQ ( 450076 ) on Wednesday December 13, 2006 @08:33AM (#17221306) Journal

    In art classes they teach students to draw the space around the objects they are trying to depict. It's a useful skill in many areas.

    Rather than imagining that there is this atomic transition point that one side or the other must own, look more closely at what happens when changes are put into production, zooming in until you have enough detail that every piece naturally belongs to one team or the other.

    Then look at how this would play out in the real world, to find the "frothy" or "tangled" parts (well, IT should do this, then Dev should do that, then IT should do two more things, then it's Dev's turn again). These parts should be sorted out by requiring documentation (or scripting) to flow one way or the other, so that the process can be performed by one group without the direct involvement of the other.

    In short, the problem here is the granularity of your question.

    --MarkusQ

  • by SomeoneGotMyNick ( 200685 ) on Wednesday December 13, 2006 @08:40AM (#17221356) Journal
    If the developers complain that IT didn't follow the instructions correctly, then the instructions were wrong.
    Send it back to the developers to write better instructions.

    That's not always true, Mr. Nick Burns [wikipedia.org]. Sometimes IT has a permanent bug up their network port and refuses to learn even a small amount of the developers' vernacular to share in the process. Likewise, developers should not have to speak 100% "IT" to write instructions. There is a common ground. IT personnel are paid for their experience and ability to adapt, not simply to follow instructions.
  • by Anonymous Coward on Wednesday December 13, 2006 @08:41AM (#17221372)
    We're talking about actual development here. That is, we're not talking about taking example code from Sun's website, changing a few class names and calling it a new and finished product. We're not talking about taking snippets of code, unattributed, from various incompatibly-licensed open source projects and combining them into a single crashing pile of shit.

    We've seen shit like that the few times we've dealt with Indian software firms, and frankly their development practices are just plain unacceptable. And I'd hardly call it "development". It's more a case of "let's-fuck-up-because-we're-clueless-and-then-try-to-trick-our-clients-with-Sun's-example-code".

  • by MoralHazard ( 447833 ) on Wednesday December 13, 2006 @09:19AM (#17221694)
    You are 100% correct. I don't know what this guy's problem is--are IT and development the only two departments allowed by law in his jurisdiction? I mean, it's normal for an "Ask Slashdot" question to be totally stupid, but this one is pretty bad. I wish I knew what company it was, so I could avoid them. Someone always needs to lead a deployment project, and to be responsible for both the quality of the application AND the quality of the installation it's running on. This dude doesn't need a whole department, but he does need access to the time and knowledge of both the people writing the code and the people running the servers. A company that hasn't figured that out yet is courting trouble.

    I take issue with other details of the poster's question, too:

    1) Why is their production environment so unstable? If the server software (OS, middleware, support apps) is unstable, why the hell are they running it in production? If it's the specifics of their setup that are causing the problems, fire the IT director and get somebody in there who knows how to engineer production systems. And then tell them to come up with a plan to fix all of it, give them a budget and time and people, and make sure they fix it.

    2) If people aren't "in the loop" regarding changes that affect them, your managers need to be sacked. A large part of a manager's job is to keep issues that don't concern you from bothering you, and to make sure that you ARE aware of issues that do concern you. A sysadmin or software developer should have someone above him who sees the big picture, goes to inter-department meetings, and stays in the loop so he can keep his people in the loop.

    3) Any idiot could tell you that the developers need a staging environment that replicates the production runtime environment as closely as possible. That means it includes whatever features may bear on the application's operation/health/efficiency, like load balancing and replication. The sysadmins set it up, the developers write-test-debug their code on it, and when they hit a release candidate, the deployment project manager checks out the release, installs it on the staging systems, and runs it through the QA process. If it doesn't work properly, the project manager sends it back to the developers with notes and they fix it. This *should* guarantee that the developers are aware of whatever production-environment issues exist--if not, fire your sysadmin, because he lied to you when he said the staging environment was as close to the production environment as possible.
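    A minimal sketch of how that kind of staging/production parity could be checked automatically; the config file names and keys below are hypothetical, not anything described in this thread:

        # Hypothetical parity check: diff a staging host's config dump against
        # production so drift is caught before a release candidate is promoted.
        import json

        def load_config(path):
            # Each host exports its settings as a JSON file (assumed format).
            with open(path) as f:
                return json.load(f)

        def config_drift(prod, staging, keys):
            # Return the keys where staging no longer matches production.
            return {k: (prod.get(k), staging.get(k))
                    for k in keys if prod.get(k) != staging.get(k)}

        if __name__ == "__main__":
            prod = load_config("prod_config.json")        # hypothetical export
            staging = load_config("staging_config.json")  # hypothetical export
            checked = ["os_version", "db_version", "load_balancer", "replication"]
            for key, (p, s) in config_drift(prod, staging, checked).items():
                print(f"DRIFT {key}: production={p} staging={s}")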

    Having lived through the kinds of startup environments where these issues crop up, I would guess that the current clusterfuck of affairs is jointly the fault of a lot of people. Management doesn't understand these issues--they think they can hire a bunch of sysadmins, a bunch of programmers, and appoint a director of IT and shit will get done. The people lower down fail to impress on management the need for proper processes, and they get lazy and don't want to be as careful and thorough as they should.

    Of course, many companies do achieve a buyout without ever fixing their issues--but nobody grows successfully past that point without figuring out the right way to run production systems.
  • by Anonymous Coward on Wednesday December 13, 2006 @10:04AM (#17222178)
    Assuming the dev department does their job well, a deployment would not require any of the dev department's skills.
     
    As an IT guy who has done deployments of internally developed programs, that's a big IF. I've had several tough times with such situations, but one stands out: a dev dept that wouldn't touch anything they deemed "done". It was an IT issue now, and they would just tell the managers that it was already done, so anything going wrong must be something IT did. So we collected everything wrong with the program: exes that, when closed, still ran in the background hogging 30-50 percent of the CPU; images in the program that were actually getting served from the desktop PC of the guy who put them in, across a WAN to 300 sites; hard-coded IPs instead of names while network resources were being changed and some were set up as dynamic; a list I could go on about. In the end the trouble tickets from the helpdesk ended up going to the dev team and stopped coming to IT, until they could show that it wasn't their problem.
     
    It would be nice if there were a clean separation, but the proper checks were never done in the first place. My peace of mind is back now, though, as the current company's dev team accepts input from IT, often promptly works on a fix for issues, and actually seems to enjoy the feedback.
  • Re:Middle ground (Score:4, Insightful)

    by gbjbaanb ( 229885 ) on Wednesday December 13, 2006 @10:12AM (#17222250)
    Neither. You do not have to compromise if you're the boss and you require stuff to work. Office politics and willy-waving over who's more important should be a secondary issue to making the stuff work.

    So: you have a certification team (or quality team, or test team) whose job it is to certify that what dev has given them works as dev said. These guys install it on their own separate systems that mirror the business (on a smaller scale) and test it out. Bugs get reported back to dev, who get to fix them, and so on. Eventually it'll get rolled out to IT, who will have a reasonably good expectation that it'll all work.

    However, even in the best of cases there will be exceptional circumstances, and it's at this point that IT will get dev members to come and fix up issues that arise on the live system. IT should first contact the cert team, who will pin down the bug (hopefully with a bit more inside knowledge to reproduce it on their systems) and will then get dev to issue a patch, which goes through the standard release process.

    Of course, if you want to let the dev team hack about (which is probably why you have such a complex system in the first place), and IT to twiddle with their setup, then fine - expect it all to go arse-up.

    I like to think of these environments as always having a 'customer' that they deliver to. If they provide a poor service, the customer has every right to complain. So, Dev's customer is the IT guys, IT's customer is the Business, and Business answers to real, paying customers. Such a chain of responsibility does focus people's attention on what they are trying to achieve for the company.
  • by Horza66 ( 1039328 ) on Wednesday December 13, 2006 @10:12AM (#17222252)
    Plenty of other posters have pointed out that you sound like an operation that is a bit small for the full Software Development Process. However, if you're asking, I suspect you're a growing company, in which case you need to get a Process in place, and soon, or you will experience the full agony of a chaotic IT environment. (NB: that's where I work now - I've worked sane places too.)

    A fairly typical Process:

    1. Dev receive Requirements and Defects from the Business, code to them, and unit test their code.
    2. Code is delivered to Operations with a 'Release Note' or equivalent covering how to deploy the code to Environments.
    3. Operations deliver (deploy) the code to test environment(s). Link and Acceptance testing is performed - does it meet Requirements? Are key defects resolved? Plus regression testing - does it break the existing system? Test sign it off if it clears these tests.
    4. Operations deploy the code to the Production system on sign-off.

    You inevitably end up with tensions between the Business and IT, plus the divisions between the priorities of Dev, Test and Operations. It sounds like you are at the stage of not having any well-defined roles/teams for these responsibilities. I'll detail the Operations breakdown too.

    Operations: as others pointed out, this breaks down into various teams - DBAs, Sysadmins, Change Management, Release Management, Operators - depending on your site. Operations are responsible for the stability and smooth running of the Production system; they must accept change, but control it.

    Since I work there, and you specifically address the subject, I'll detail Release Management too.

    Release/Change Management: these usually end up the gatekeepers on changes. They'll need to be familiar with the whole system, and resistant to the pressure they'll receive from all sides. They need to know what versions of code are where, and be able to reject bad code when it turns up, but be flexible enough to make sure Test have something to test. They need to be experts on everything your IT does. No jobsworths here, and good generalists are rare. Since you'll inevitably go through a period of chaos if you are growing, I'll mention that staff turnover here is very high - unless you get in contractors, and pay highly for them. The Change Management role, which sometimes covers the Release role too, is to track changes, know who they impact, and be able to prioritise changes. If the Release and Change roles are separate, CM is closest to the business.

    Hope that helps.
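    As a hedged illustration of the "Release Note" in step 2 above, here is a small Python sketch of the minimum such a note might carry; the field names are invented for illustration, not taken from this post:

        # Hypothetical sketch: required fields for a Release Note handed from
        # Dev to Operations, plus a check that nothing was omitted.
        REQUIRED_FIELDS = [
            "release_version",   # e.g. "2.4.1"
            "components",        # artifacts being deployed
            "deploy_steps",      # ordered instructions Operations can follow
            "rollback_steps",    # how to back the change out
            "schema_changes",    # database scripts, if any
            "affected_teams",    # who needs to be kept in the loop
        ]

        def missing_fields(note):
            # Return any required field the release note omits entirely.
            return [f for f in REQUIRED_FIELDS if f not in note]

        note = {
            "release_version": "2.4.1",
            "components": ["billing-service"],
            "deploy_steps": ["stop service", "copy build", "start service"],
            "rollback_steps": ["redeploy previous build"],
            "schema_changes": [],
            "affected_teams": ["IT", "Helpdesk"],
        }
        print(missing_fields(note))  # [] means the note is complete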
  • Re:Middle ground (Score:3, Insightful)

    by ComputerizedYoga ( 466024 ) on Wednesday December 13, 2006 @10:16AM (#17222300) Homepage
    err, then there's also testing.

    "rolling back the install" is a low-grade disaster recovery scenario. Testing the install on a non-production machine, working out the install/upgrade kinks and maybe even having a team of testers or some scripted testcases to throw at it before you start doing anything on the production systems is disaster prevention.

    And any doctor, sysadmin, or person with a modicum of common sense (or at least familiarity with some common-sense aphorisms) will tell you something about the relationship between prevention and cure...
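    For what it's worth, a minimal sketch of what "scripted testcases to throw at it" before touching production could look like; the staging host and endpoints below are hypothetical:

        # Hypothetical smoke test run against the staging install; a non-zero
        # exit code would block the production rollout.
        import sys
        import urllib.request

        STAGING = "http://staging.example.internal"          # hypothetical host
        CHECKS = ["/login", "/reports/daily", "/api/health"]  # hypothetical pages

        def smoke_test():
            failures = []
            for path in CHECKS:
                try:
                    with urllib.request.urlopen(STAGING + path, timeout=10) as resp:
                        if resp.status != 200:
                            failures.append((path, resp.status))
                except Exception as exc:  # timeouts, connection errors, 4xx/5xx
                    failures.append((path, exc))
            return failures

        if __name__ == "__main__":
            failed = smoke_test()
            for path, reason in failed:
                print(f"FAIL {path}: {reason}")
            sys.exit(1 if failed else 0)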
  • by walt-sjc ( 145127 ) on Wednesday December 13, 2006 @10:37AM (#17222524)
    Good IT people are ALSO programmers. Check out the SAGE job descriptions... Even for Junior System Administrators, one of the "desired" skills is "Programming experience in any applicable language." Beyond Junior level, it's a "required" skill. I wouldn't put a junior person on a major deployment project other than at a mentoring level (which should be done - how else are they going to get beyond "junior"?). I think it is a travesty that some educational institutions are pumping out degreed IT people who can't write one line of code.

    IT should be able to work around most deployment issues, and ensure that any minor fixes needed to the code / process are communicated back upstream. After all, this is the "real world" where deadlines are real, and money is at stake. A top notch IT team is critical to the success of a huge portion of the modern business world. There is very little room for incompetence at senior levels.
  • What about QA? (Score:4, Insightful)

    by trcooper ( 18794 ) * <coop@redout . o rg> on Wednesday December 13, 2006 @10:41AM (#17222580) Homepage
    In my company QA is the bridge between development and production. I'm a team lead (dev) in a company which has a suite of web applications. Each application has a lead assigned to it, who handles the development and documentation of a product through their team. We do several deployments of software each week, and if our leads had to hand-hold through each of them we'd be hamstrung for time and working more night hours than we'd like.

    When we have an RC, I'll branch the trunk and request that QA perform a Pre-Production build. Developers will work with ops to get this running properly on the pre-production hardware, as this can be done outside of maintenance hours. We'll then do several builds of the branch until it's gold, and then tag off the branch as X.vv.zz.

    While a major release is in QA, the lead focuses on creating/updating the operations document, which addresses day-to-day maintenance issues and tells operations how to troubleshoot the app in the case of a problem. They also produce an implementation plan which identifies the groups/persons needed to deploy the application and the steps that need to be taken, using what they've learned from the initial pre-production deployment. Once this is done, and QA has promoted the app, a dry run is performed to try to catch any missing steps. The implementation plan is handed to QA, who coordinates with IT/Ops to resolve any conflicts and schedule the deployment. Ops/DBAs then physically perform the deployment, following the steps given in the plan. In a major release situation, you may have a team lead or platform manager coordinating the actual steps on a conference bridge, but for minor releases we've been able to just have our operations teams do the full deployment, with verification by QA and the product's customer service group.

    We also have a twice weekly meeting where any upcoming production changes are discussed between IT/Ops, QA and Dev. Release documents are put on a calendar, so if an issue comes up on another product we can go to this and see what may have caused it.
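    A hedged sketch of the "release documents on a calendar" idea: given a list of planned changes, look up which recent deployments might explain a new issue on a product. The data shape and document numbers are invented:

        # Hypothetical change calendar: flag deployments to a product within a
        # day of a reported issue, so the release document can be pulled up.
        from datetime import date, timedelta

        changes = [
            {"date": date(2006, 12, 11), "product": "billing", "doc": "REL-1042"},
            {"date": date(2006, 12, 13), "product": "reports", "doc": "REL-1043"},
            {"date": date(2006, 12, 13), "product": "billing", "doc": "REL-1044"},
        ]

        def possible_causes(product, issue_date, window_days=1):
            # Return release documents for the product deployed near the issue.
            window = timedelta(days=window_days)
            return [c["doc"] for c in changes
                    if c["product"] == product and abs(c["date"] - issue_date) <= window]

        print(possible_causes("billing", date(2006, 12, 13)))  # ['REL-1044']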

    Dev and QA also meet weekly to discuss the progression of products through or into QA. Any issues with testing or problems with builds not being stable can be addressed.

    It took us a while to get to this point. We had previously been in a situation where dev would handle the build and deployment process, and it was hard for many of the leads to let their projects go, but now we can see the benefit, not only for the company, but also in the fact that we don't have to be doing releases at 12 AM on Tuesdays anymore. It takes a lot of work across departments and is definitely a long road, but it's something that needs to be done.
  • by Fastolfe ( 1470 ) on Wednesday December 13, 2006 @12:12PM (#17223896)
    I have to agree with the post you're replying to. I work at a major telecommunications company in a large IT department, and "needs of the business" trump "correct" every time. Projects are always due-date-driven, not quality-driven. In theory a deployment team should do deployments, but if they have to rush to meet their due dates, you can bet the developers are just as much on the hook and are going to be the ones up in the middle of the night. Eventually someone asks, "Why don't we just make the development team responsible for this permanently?" Unless you can respond to that question in a way that directly translates to getting the work done faster or cheaper, there's just no point in trying.

    There is a HUGE difference between companies that sell software and companies that produce software for internal use. For both, it's the bottom line that matters. But when you sell software, the quality of your software is directly tied to your revenue (monopoly situations notwithstanding). It's in your best interests to do things "correctly" in these situations. But if you're just producing software for internal use, you're not making any money from selling that software. There's no reason to strive for quality, and you focus instead on costs. It is preferable to have defects and poor process, because it is cheaper to deal with defects and poor process than it is to design and implement everything correctly. Everyone hates this except for management.

    If you want to have a good software development experience, work for a company that's in the business of producing software.
  • Re:Middle ground (Score:3, Insightful)

    by Doctor Memory ( 6336 ) on Wednesday December 13, 2006 @12:16PM (#17223956)

    Installation/Deployment guides *are* required documentation.
    IT takes the software, and follows the guide.
    And note that these are guides, not step-by-step instructions. They should say things like "Load the database schema update script (app_schema_updt_1_1_19.sql) into the database". The actual mechanics of doing this (making a backup, bringing an additional transaction log file on-line, starting table-level auditing, whatever) are left to the people who actually know the system (and the department procedures).

    Part of the development process is circulating a draft of the installation guide to IT for comments. IT is free to ask "Why?" or "How?" or "Where am I supposed to get the disk space for that?". I always made a point of inviting someone from IT to our team meetings whenever we discussed deployment requirements, and it always worked well. IT got a heads-up as to what was coming down the pike, and often the IT person had good insight into alternatives and what resources were really available (e.g., "XYZ division got their own server, so all their information in the production system is static. They wanted it updated hourly, but the update locks those tables for over five minutes, so we're just doing it at noon and midnight for now." Need I mention that our app was reporting against XYZ division's data, and this was the first we'd heard of it?).
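    To make the split concrete, here is a hedged sketch of the "mechanics left to the people who know the system" side of that guide step; it assumes a PostgreSQL backend purely for illustration, and only the script name comes from the comment above:

        # Hypothetical sketch: back the database up, then load the schema update
        # script named in the installation guide. Database and file names are
        # invented; the backend is assumed to be PostgreSQL for illustration.
        import subprocess
        import sys

        DB = "app_production"                      # hypothetical database name
        SCRIPT = "app_schema_updt_1_1_19.sql"      # script named in the guide
        BACKUP = "app_production_pre_1_1_19.dump"  # hypothetical backup file

        def run(cmd):
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)        # stop on the first failure

        if __name__ == "__main__":
            try:
                run(["pg_dump", "-Fc", "-f", BACKUP, DB])  # site-specific backup step
                run(["psql", "-d", DB, "-v", "ON_ERROR_STOP=1", "-f", SCRIPT])
            except subprocess.CalledProcessError as exc:
                print("update aborted:", exc)
                sys.exit(1)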
  • by GryMor ( 88799 ) on Wednesday December 13, 2006 @01:03PM (#17224688)
    IT is strictly responsible for low-level infrastructure (OS, hardware, physical network, power). Development teams own services, are responsible for their fleets in both a development and an operational sense, and are responsible for notifying their upstream and downstream dependencies of changes in advance. Actual deployment (which, if it requires documentation, is not being supported by a sufficiently advanced deployment management system) to production is gated by Development's QA teams, who are responsible for testing on non-production systems.

    We used to have dedicated deployment engineers, but that just added friction, and guaranteed that the person doing the push to prod didn't know the full contents of what they were pushing.
  • Re:Middle ground (Score:3, Insightful)

    by hondo77 ( 324058 ) on Wednesday December 13, 2006 @01:16PM (#17224850) Homepage

    And in Magic Happy Land, that actually works without a problem.

    That land is called Sarbanes-Oxley Land and it has to work or you fail your audit.

  • by Aging_Newbie ( 16932 ) * on Wednesday December 13, 2006 @01:48PM (#17225292)
    So, how should one deploy changes?

    1. Dev completes their changes and makes a release including operational details as needed.
    2. QA/Testing roll the package to their staging environment and complete their testing. Pass: go to 3; fail: go to 1.
    3. Configuration Management (usually part of QA) releases the package with installation instructions
    4. IT follows the instructions and rolls the application to the live environment
    5. QA tests the operation in live and reports the status for a go/no-go on the changes

    DBAs should package their changes in the form of repeatable scripts that are used to move the code and data to Staging and Live. That reduces variability. Most DBAs already know the impact of their actions, so they can perform the moves as requested by QA.
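    A small sketch of what "repeatable scripts" could mean in practice: a runner that applies pending SQL scripts in order and skips anything already applied, so the same command is pointed first at Staging and later at Live. The directory layout, log file, and psql invocation are assumptions, not from the post:

        # Hypothetical migration runner: apply migrations/*.sql in name order,
        # recording what has been applied so reruns are safe.
        import glob
        import subprocess

        APPLIED_LOG = "applied_scripts.txt"   # hypothetical record of applied scripts

        def already_applied():
            try:
                with open(APPLIED_LOG) as f:
                    return set(line.strip() for line in f)
            except FileNotFoundError:
                return set()

        def apply_pending(db):
            done = already_applied()
            for script in sorted(glob.glob("migrations/*.sql")):  # ordered by name
                if script in done:
                    continue
                subprocess.run(["psql", "-d", db, "-v", "ON_ERROR_STOP=1",
                                "-f", script], check=True)  # stop on first failure
                with open(APPLIED_LOG, "a") as f:
                    f.write(script + "\n")

        if __name__ == "__main__":
            apply_pending("app_staging")  # later, the same runner targets live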

    Now, before you ready the tar and feathers, it is possible to plan orderly releases that follow that process and it produces near zero failures in production. QA's job is to be the interface between the development activity and the real world. They have the discipline and skills to follow processes and keep bad things from happening. But QA has to have the power to call the shots.

    If you do this ...

    * Developers win because they no longer hold the bag for consequences of bad changes.
    * IT wins because they know precisely what is going on and they are empowered to fix or restore stuff because they know exactly how to install the code without breaking something.
    * Project managers who carefully orchestrate the whole process earn their keep.
    * Micromanagers and others who like to call for quick hit changes to cover up for disorder and disarray somehow find their habits have no place in the organization.

    Customers will be much happier and willing to accept slower and more orderly propagation of changes when they realize that they get better quality and uptime. Most of the pressure on development comes from emergency recovery from avoidable errors rather than from actual work to be completed. One could argue that if the time from a request to acceptable code were measured, the process would save time overall.
  • by IgLou ( 732042 ) on Wednesday December 13, 2006 @02:26PM (#17225928)
    You need to get your DEV to generate a package of software that's installable against their last release. That package goes to your QA to install in the QA environment as per the installation instructions that go with the package. If QA says the package is good send it up to your production/application/sys admins to install on the live system.
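    A hedged sketch of that "package plus installation instructions" handoff: bundle the build output and the instructions into one versioned, checksummed archive so QA and the admins install exactly the same thing. The paths and version string are hypothetical:

        # Hypothetical packaging step: produce app-<version>.tar.gz plus a
        # SHA-256 checksum file for QA and the sysadmins to verify against.
        import hashlib
        import tarfile

        VERSION = "2.4.1"
        ARCHIVE = f"app-{VERSION}.tar.gz"

        with tarfile.open(ARCHIVE, "w:gz") as tar:
            tar.add("build/", arcname=f"app-{VERSION}/build")             # build output
            tar.add("INSTALL.txt", arcname=f"app-{VERSION}/INSTALL.txt")  # instructions

        digest = hashlib.sha256(open(ARCHIVE, "rb").read()).hexdigest()
        with open(ARCHIVE + ".sha256", "w") as f:
            f.write(f"{digest}  {ARCHIVE}\n")
        print("packaged", ARCHIVE, digest)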

    The key is to keep the process simple, but never, ever let your DEV have access to your live production system unless it's a break/fix scenario and the admin is looking over their shoulder to see what's changing. My experience is that when DEV has access to the production system, unaccounted-for changes crop up like crazy and weird subsystems start to form in DEV-owned areas. I saw a full application being run out of a programmer's personal schema in a database, which of course crapped out while they were away. Not good.
