Ask Slashdot: How Would You Stop The Deployment Of Unapproved Code Changes? 324

Over a million lines of code -- in existence for over 10 years -- get updates in six-week "sprints" using source control and bug-tracking systems. But now an anonymous reader writes: In theory, users report bugs, the developers "fix" the bugs, the users test and accept the fix, and finally the "fix" gets released to production as part of a larger change-set. In practice, the bug is reported, the developers implement "a fix", no one else tests it (except for the developer(s)), and the "fix" gets released to production with the larger code change set.

We (the developers) don't want to release "fixes" that users haven't accepted, but the code changes often span all levels of the stack (database, DAOs, business rules, web services, and multiple front-ends). Different developers may be changing the same areas of code at the same time, making branch merges very complex and error-prone. Many fingers are in the same pie. Our team size, structure, and locations prevent having a single gatekeeper for code check-ins... What tools and procedures do you use to prevent unapproved fixes from being deployed to production as part of the larger code change sets?

Fixes are included in a test build for users to test and accept -- but what if they never do? Leave your best answers in the comments. How would you stop unapproved code changes from being deployed?

  • permissions (Score:5, Informative)

    by Anonymous Coward on Sunday April 16, 2017 @11:04PM (#54247011)

    "How woud you stop un-approved code changes from being deployed?"

    Require approval from someone before changes are pushed out.

    • Re: permissions (Score:4, Insightful)

      by Anonymous Coward on Sunday April 16, 2017 @11:17PM (#54247069)

      That doesn't work. I manage devs on five different continents, and my boss always wins. There is no way to beat bad management.

      • Re: permissions (Score:4, Insightful)

        by Anonymous Coward on Sunday April 16, 2017 @11:46PM (#54247185)

        I had a boss who gave me some really good advice 15 years ago when I was getting chewed out by the owner (who was his boss): "get a backbone". If your boss is constantly reaching around you to your workers, they don't respect you; either you should be fired for sucking, or you should leave because they're doing it wrong.

        If YOU are managing them, it's your job, not your boss's. Take responsibility or take off.

      • Re: permissions (Score:4, Interesting)

        by vtcodger ( 957785 ) on Monday April 17, 2017 @04:06AM (#54247595)

        "That doesn't work"

        It doesn't not work. More eyes do tend to catch out-and-out bugs. But some still slip through. And stuff that sounds good but busts users' workflows still gets through. And developers hate the delay. And more review costs more. And more testing costs even more than more review. So managers aren't big fans of either.

        I suppose the solution is never to work with millions of lines of code. But "This sucker is way too big and too complicated" is not an easy sell.

        • by zifn4b ( 1040588 )

          I suppose the solution is never to work with millions of lines of code. But "This sucker is way too big and too complicated" is not an easy sell.

          What you're talking about is technical debt. You're talking about code that is such a complicated mess that code reviews can't effectively determine what the side effects might possibly be to simple changes. I would bet this code you are referring to has zero automated tests which might catch some of these side effects. There is no solution to that problem. It's a sinking ship held together by duct tape and rubber bands. It's a ticking time bomb.

          • Some of that. But I made quite a good living for several decades back in the 1960s, 1970s, and 1980s integrating large systems. Did some development also, and have some sympathy for developers. Good unit tests help, and so would automated testing. But let's get real. Few actual deployed systems have meaningful specs, and even those that started with specs probably didn't maintain the specs and have bugs in the original specs -- omissions (oh, you wanted the trig functions to be fast as well as accurate

            • by zifn4b ( 1040588 )

              Some of that. But I made quite a good living for several decades back in the 1960s, 1970s, and 1980s integrating large systems. Did some development also, and have some sympathy for developers. Good unit tests help, and so would automated testing. But let's get real. Few actual deployed systems have meaningful specs, and even those that started with specs probably didn't maintain the specs and have bugs in the original specs -- omissions

              Hey, I'm an old-timer too, but you apparently never caught up. Waterfall doesn't work. Never did. By the time a specification is written, it's already out of date and doesn't reflect what the customer actually wants. Other old-timers by the names of Ward Cunningham, Kent Beck, Martin Fowler, and many others figured this out. Now we have Agile and Extreme Programming and all of the variants.

              Here let's get real with you. First of all, the software projects of the 60's, 70's and 80's were comparatively m

        • When I owned a Saab, a splendid car save for the complexity and unfortunate influence of General Motors, my dealer technician was quite proud of the fact that he was trained and proficient in all repairs and maintenance, even transmission service and body work. Many auto techs are proficient in diagnosis, most engine repair, blah blah, but not in body work or transmission repair. I was impressed.

          Then I had a problem he couldn't figure out - it would stall at a stop, just die, restart fine. After weeks of

      • by zifn4b ( 1040588 )

        There is no way to beat bad management.

        Actually, bad management usually beats itself. I've seen plenty of companies go out of business because of bad management not learning from its mistakes. The problem is, when the ship sinks, everyone goes down with it including the narcissistic self-righteous douche bag that thought they were the ultimate shizzle.

    • by Anonymous Coward

      A lot could be learned by observing what the developers of the Rust programming language [rust-lang.org] project do when running their project.

      They're dealing with a large project that covers a complex domain, a huge amount of code, and many developers scattered across the globe.

      The first thing to do is to use git, and perhaps something like GitHub [github.com]. This will allow your developers to collaborate using a free and open source version control system.

      Next, you need a Code of Conduct [rust-lang.org] to prevent social injustice from negatively

      • Next, you need a Code of Conduct [rust-lang.org] to prevent social injustice from negatively affecting the project. A Moderation Team [rust-lang.org] is tasked with ensuring that everyone is tolerant, and any intolerance will be ruthlessly stamped out.

        That was a good one!

    • by Z00L00K ( 682162 )

      The problem described in the article is a lot more complex and comes down to the system having had a bad architecture from the start. However, it's easy to end up with a bad architecture with many of the programming languages and development environments that are around today.

      It's of course easy to say that you should run tests and have good test suites to verify the platform, but they only take you to a certain level, never to the level where usability is validated.

      And how do you verify when the input

      • by thsths ( 31372 )

        Testing is the key, especially when working with a legacy project. When a bug report is accepted, you first create a test case that fails, and then you fix the bug. That gives you some kind of assurance that you fixed something. And you ship that fix, unless it fails QA at some point.

        Only the bug reporter can find out whether what you fixed also fixed their problem. You can track that - and say the issue is either confirmed fixed, or unconfirmed fixed. That is what bug trackers do. Sometimes it is easiest f
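
        As a minimal sketch of that "failing test first" step in Python's unittest -- the parse_invoice_total function and the bug it fixes are invented purely for illustration:

        import unittest

        # Hypothetical function under test -- stands in for whatever code the bug report touches.
        def parse_invoice_total(text):
            # Reported bug: totals with thousands separators ("1,234.50") raised ValueError.
            # The fix strips the separators before converting.
            return float(text.replace(",", ""))

        class InvoiceTotalRegressionTest(unittest.TestCase):
            def test_total_with_thousands_separator(self):
                # Written first, against the reported input, so it fails until the fix lands
                # and then guards against the bug coming back.
                self.assertEqual(parse_invoice_total("1,234.50"), 1234.50)

        if __name__ == "__main__":
            unittest.main()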

        • by Z00L00K ( 682162 )

          The description you provide is a very simplified perspective, but the reality is that sometimes the bugs you fix aren't easy to create a simple success/fail test scenario for. This is very common in systems where you have a lot of interdependencies and race conditions. Each piece of the puzzle may work fine but together they create problems one time out of a thousand - and never in a test rig, only in real world platforms.

          • by thsths ( 31372 )

            As somebody said, this may be an architecture problem, or maybe a timing problem, not just a simple bug. And you do not change your architecture willy-nilly, in the hope that it fixes the issue. At the very least, you would want some engineering process to underpin any significant changes.

  • by Teancum ( 67324 ) <robert_horning&netzero,net> on Sunday April 16, 2017 @11:11PM (#54247037) Homepage Journal

    That right there is a part of the problem. Software testers report bugs. Hints about potential bugs can come from end users, but end users are not software testers.

    There is no substitute for a really good (professional.... and paid) software tester who can reliably reproduce bugs that need to be removed. If anything, they are far more valuable than even code monkeys writing thousands of lines of code per month (a metric that's also largely irrelevant for quality software). In fact, I would pay the software testing team before the coders if you need to use some volunteer labor like in an open source project.

    End users can offer hints to a good software testing team as to what might be bugs, and end user reports should definitely be taken seriously since it is something that slipped past those testers as well. When the software testers are fired and/or it is presumed that unpaid volunteers are going to be doing that quality assurance process, especially for a commercial software product, you get what you pay for.

    • by zifn4b ( 1040588 )

      There is no substitute for a really good (professional.... and paid) software tester who can reliably reproduce bugs that need to be removed. If anything, they are far more valuable than even code monkeys writing thousands of lines of code per month (a metric that's also largely irrelevant for quality software). In fact, I would pay the software testing team before the coders if you need to use some volunteer labor like in an open source project.

      This helps a lot, but it's not a panacea. I've seen several large software efforts where manual testing and test plans start becoming ineffective because the complexity gets to a level where no number of highly talented testers can deal with it. The execution of manual test plans starts taking inordinate amounts of time. The cataloging of test plans and keeping them up-to-date becomes counter-productive. The only way to move beyond this point is to develop an automated testing strategy. I'm not referrin

  • In addition to feature branching, you could also look into feature toggles. A feature toggle is a variable used in a conditional statement to guard code blocks, with the aim of enabling or disabling the feature code in those blocks for testing or release. While this approach would not prevent unapproved code from being in a release, it would act as a gatekeeper to determine whether a specific feature should be exposed to the end user. I am however not sure how you could address your concern for the dat
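
    As a rough sketch of the toggle idea in Python (the FEATURE_* environment variable and the invoice functions are made up for the example; real toggles usually live in a config file, a database row, or a settings service):

    import os

    def feature_enabled(name):
        # Where the toggle value comes from is an assumption for this sketch;
        # swap in your own config system.
        return os.environ.get("FEATURE_" + name.upper(), "off") == "on"

    def total_with_legacy_engine(items):
        return sum(price * qty for price, qty in items)

    def total_with_new_engine(items):
        # The not-yet-accepted "fix" ships dark behind the toggle.
        return round(sum(price * qty for price, qty in items), 2)

    def invoice_total(items):
        if feature_enabled("new_totals_engine"):
            return total_with_new_engine(items)
        return total_with_legacy_engine(items)

    if __name__ == "__main__":
        print(invoice_total([(19.99, 3), (5.00, 1)]))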
  • More money, better computers, whatever it takes - get a subset of the users on board to test the new code before it's final.

  • by ooloorie ( 4394035 ) on Sunday April 16, 2017 @11:20PM (#54247087)

    Fixes are included in a test build for users to test and accept -- but what if they never do? Leave your best answers in the comments. How would you stop unapproved code changes from being deployed?

    - You set up the central repository to only accept code if it can be merged and results in all tests passing.

    - You make sure that there is defined code ownership and that people can only change code with a review and with the approval of the owners, also enforced by the source code control system.

    Long-term, there are two more things that should happen:

    - Developers need to learn how to break up large diffs into many small, individually testable diffs.

    - You need to break up your codebase so that it's not a single project with 1Mloc, but 50 small projects with 20kloc each.
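
    To make the first point concrete, here's a rough sketch of a server-side pre-receive hook in Python that rejects any push to master whose test suite fails. It assumes a bare repository and a suite runnable with "python -m unittest discover"; treat it as an outline, not a drop-in script:

    #!/usr/bin/env python3
    # Sketch of a git pre-receive hook: reject pushes to master whose tests fail.
    import os
    import subprocess
    import sys
    import tarfile
    import tempfile

    def tests_pass(commit):
        with tempfile.TemporaryDirectory() as workdir:
            # Materialise the pushed commit into a scratch tree (works in a bare repo).
            tar_path = os.path.join(workdir, "snapshot.tar")
            subprocess.run(["git", "archive", "--format=tar", "-o", tar_path, commit], check=True)
            src = os.path.join(workdir, "src")
            os.makedirs(src, exist_ok=True)
            with tarfile.open(tar_path) as tar:
                tar.extractall(src)
            result = subprocess.run([sys.executable, "-m", "unittest", "discover"], cwd=src)
            return result.returncode == 0

    def main():
        # pre-receive reads "<old-sha> <new-sha> <ref>" lines on stdin, one per pushed ref.
        for line in sys.stdin:
            old, new, ref = line.split()
            if ref == "refs/heads/master" and not tests_pass(new):
                print("rejected %s: test suite failed for %s" % (ref, new[:8]), file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())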

    • Code ownership is so 1960s ...

      • Code ownership is so 1960s ...

        Pretty much every project on GitHub has code ownership.

      • by Excelcia ( 906188 ) <slashdot@excelcia.ca> on Monday April 17, 2017 @01:13AM (#54247375) Homepage Journal

        The 1960s were when you wrote software by punching cards that someone else fed in, and it had to work the first time. Every time. That kind of discipline is sorely needed by the original question submitter.

        The whole haphazard development model described in the question is absurd. First of all, what kind of single bug requires rifling through back-end databases, business rules, web services, and multiple front ends? That's not a bug in the software; it's a failure in the pre-design definition phase. That is not a bug. Seriously... you can't just accept all the premises in the question without thought. That kind of change only happens when someone is calling "the customer wants this feature changed" or "we misunderstood what the customer needed" a bug, which is wrong on its face.

        Secondly, multiple people making changes of that scope simultaneously is just wrong, whatever the cause. Distributed revision control systems were built to handle multiple simultaneous branches in order to break bottlenecks when people work on different areas of a common source file. They were designed to accommodate merges with occasional and minor overlaps. What is described here is a completely inappropriate use of that kind of environment. So to answer the question directly: no tools can help you. The process is wrong. If this is what is going on, you are far better off reverting to a revision control system that enforces a single checkout of a source file. Better yet, correct your development strategy.

        This can't be emphasized strongly or often enough. Code ownership is a good step forward in this scenario, but the only real fix for these problems is to completely refactor the way change is managed in this project. You wouldn't be wrong to Gantt chart these changes with their subsystem impacts so they can be scheduled on a non-interference basis. Better yet, if you are having to make multiple back-end through to UI changes, you need to go through a whole scope identification phase again.

        Your change system is hopelessly broken. Fix that, then the correct use of existing tools to assist you will become readily apparent.

        • by thsths ( 31372 )

          While I agree that this may be the case, it is also possible that the software structure is terrible. Maybe one feature is spread out over many different parts of the project. Especially if a language like PHP that mixes markup and code is involved, that is actually quite typical, because you work with JavaScript code that is fixed and JavaScript code that is generated, possibly from different sources.

          Unfortunately, there is no easy fix for this. You need to refactor the software if you want to make it more maintainable.

    • FWIW you can do this pretty easily with git + gerrit.
    • Fixes are included in a test build for users to test and accept -- but what if they never do? Leave your best answers in the comments. How would you stop unapproved code changes from being deployed?

      - You set up the central repository to only accept code if it can be merged and results in all tests passing.

      I'll expand on this in case anyone is confused. The code itself should be developed alongside unit tests. Whenever a new interface/class is developed, there should be tests built to ensure that *every method* behaves exactly as expected. Java has packages like JUnit, Python has nosetests, and there are others. Lastly, with such a widespread development team, it's imperative you develop coding standards and have management backing them up. Use things like the SOLID design principle [scotch.io], and make sure that code i
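
      As a tiny illustration of developing tests alongside the code, in Python's unittest (the Discount class is invented for the example; the same shape applies with JUnit or nosetests):

      import unittest

      class Discount:
          # Small example class developed alongside its tests.
          def __init__(self, percent):
              if not 0 <= percent <= 100:
                  raise ValueError("percent must be between 0 and 100")
              self.percent = percent

          def apply(self, amount):
              return amount * (1 - self.percent / 100)

      class DiscountTest(unittest.TestCase):
          # At least one test per method/behaviour.
          def test_rejects_out_of_range_percent(self):
              with self.assertRaises(ValueError):
                  Discount(150)

          def test_apply_reduces_amount(self):
              self.assertAlmostEqual(Discount(25).apply(200.0), 150.0)

      if __name__ == "__main__":
          unittest.main()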

      • I would add accountability too. Linus gets a lot of flak for his often profane in-your-face leadership style, but he has managed to keep the Linux kernel going strong for decades now. He calls out the developers of bad code who break the development rules (principles, structure, behaviors, etc...) and will not accept those changes.

        By the way, the Linux kernel is actually a fairly big codebase, which goes against what I have been saying about breaking up projects. It works for the Linux kernel because of L

    • This all sounds pretty standard if you have a software engineering background, but sometimes office politics takes over.
      With the codebase dispersed over different teams, each team of course thinks their stuff is the most important.
      Those 50 small projects? You just created dependency hell! Sure, in theory you should strive for high cohesion and low coupling of projects, but suddenly team A needs this from team B, and they don't want to spend time asking, so they just implement that functionality at their leve
      • Yes, the problems you list occur when you break up big projects into smaller ones. They also occur when you don't break up projects. The difference is that when you break projects up, these problems actually become visible and exposed, which is why they can then be addressed by management and through tools.

  • Presumably you tagged the sources that went into the build that went to your customers?

    If you did, when you make bug fixes you need to check out against that tag, not to the bleeding edge code where new features are being added.

    Depending on how many fixes there are and how complex and messy the source tree is, you can either try to merge the changes into your bleeding edge code base or make the changes twice. In general, if the bleeding edge is being vigorously refactored or otherwise aggressively reorgani

  • by brian.stinar ( 1104135 ) on Sunday April 16, 2017 @11:30PM (#54247129) Homepage

    and the "fix" gets released with the larger code change set, to production...

    The problem here is that the fix "gets released." I agree that releases should at least have the criterion that one other person has reviewed the code being released. Otherwise, the only criterion is that one person decided to release it (by definition).

    I think you could create a system by which pull requests are approved by someone other than the person who created them; only after a request has been approved is the code authorized to be merged into a release branch. Here's [ycombinator.com] one such discussion of that. I'm not an expert on this, but I've heard of it, and I think this line of reasoning could help you.
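
    As a rough illustration of what that gate could look like, here's a small Python sketch that uses GitHub's public REST API to check whether a pull request has an approval from someone other than its author before allowing the merge. The owner, repo, and PR number are placeholders, and a real setup would authenticate and handle rate limits:

    #!/usr/bin/env python3
    # Sketch: block merging until a pull request is approved by someone other than its author.
    import json
    import sys
    import urllib.request

    def fetch_json(url):
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    def approved_by_peer(owner, repo, number):
        base = "https://api.github.com/repos/%s/%s/pulls/%d" % (owner, repo, number)
        author = fetch_json(base)["user"]["login"]
        reviews = fetch_json(base + "/reviews")
        return any(r["state"] == "APPROVED" and r["user"]["login"] != author for r in reviews)

    if __name__ == "__main__":
        owner, repo, number = sys.argv[1], sys.argv[2], int(sys.argv[3])
        if approved_by_peer(owner, repo, number):
            print("Peer-approved; OK to merge into the release branch.")
        else:
            print("No peer approval yet; refusing to merge.", file=sys.stderr)
            sys.exit(1)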

    Good luck!

  • Step One (Score:3, Insightful)

    by Anonymous Coward on Sunday April 16, 2017 @11:43PM (#54247171)

    Step one is to get high-level management to understand and agree with the risks as well as to understand and agree with the costs of preventing them.

    You would think that this is a no-brainer, but it's not. I've listened to a COO tell me from across a boardroom table that they have to be able to bypass deployment processes for business-critical hot fixes because time is of the essence in those situations, and that was the end of it. So what you've got is that in an "emergency", an "informal" approval from an "important" person is all that is needed. Feel free to define those words however you wish, naturally.

  • by lucm ( 889690 ) on Sunday April 16, 2017 @11:45PM (#54247183)

    When people are worried about changes in "many layers of the stack", it's usually a good time to re-architect the system and build microservices. Basically, you get the entire stack in every microservice and you stop worrying about ripple effects; you upgrade or troubleshoot things at a much smaller scale.

    I highly recommend this book:
    https://www.amazon.com/Buildin... [amazon.com]

    It explains how to achieve this, including how to deal with the tough parts like the database layer.

    • by Kjella ( 173770 )

      When people are worried about changes in "many layers of the stack", it's usually a good time to re-architect the system and build microservices. Basically, you get the entire stack in every microservice and you stop worrying about ripple effects; you upgrade or troubleshoot things at a much smaller scale.

      Isn't this one of the problems caused by modularization, not solved by it? Basically, if everything were in the same VCS it'd be one huge change set doing some database changes, some business rule changes, some desktop GUI changes, some Android GUI changes, some iPhone GUI changes, etc., but the moment you start breaking it up you have to start tracking that this change of functionality requires changes in five different projects, and unless everything makes it into the next release it won't work. The more you've

    • by zifn4b ( 1040588 )

      When people are worried about changes in "many layers of the stack", it's usually a good time to re-architect the system and build microservices. Basically, you get the entire stack in every microservice and you stop worrying about ripple effects; you upgrade or troubleshoot things at a much smaller scale.

      I highly recommend this book: https://www.amazon.com/Buildin... [amazon.com]

      It explains how to achieve this, including how to deal with the tough parts like the database layer.

      Just watch the Spotify Engineering Culture [youtube.com] videos. What you refer to as "ripple effects" they refer to as "blast radius", which I like much better. The benefit of microservices is that if one microservice blows up, the rest continue to run, at least enabling partial functionality, as opposed to taking the whole system down or putting the entire system into a funky state.

  • You lay down a CVS Tag and the build technician builds from it. Releases are clearly identified with an MD5 identifier.

    If only one official binary exists, it's what people will run.
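
    For what it's worth, stamping the build with its checksum takes only a few lines of Python (the artifact path is a placeholder, and SHA-256 would work just as well as MD5):

    #!/usr/bin/env python3
    # Sketch: compute the MD5 that identifies the one official release binary.
    import hashlib
    import sys

    def md5_of(path, chunk_size=1 << 20):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        artifact = sys.argv[1]  # the binary built from the tagged sources
        print("%s  %s" % (md5_of(artifact), artifact))
        # Record this next to the tag; any copy that doesn't match isn't the official build.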

  • by sjames ( 1099 ) on Monday April 17, 2017 @12:04AM (#54247255) Homepage Journal

    Next person to release an untested line of code will play the piano for us *SLAM*

  • for testing. That means full-blown test environments as well as time in the users' day or a special team to test bug fixes. Also, if you go with a team, don't outsource it to the lowest bidder. They'll just pass everything as "OK" and spend the rest of their time looking for a better job. Hire somebody and give them a decent salary and benefits.

    Unless testing really isn't that important. Depending on your needs it might not be (despite all the indignation that engenders). One thing I've learned about
  • by caferace ( 442 )

    As a long-time SQA/HQA eng, this is an awesome question. I've been around a lot of blocks. New build. Broken. Here's another one. Broken. My Golden Rule? Stay until it works. I'd so much rather have a good build than something slapped together. A bad build wastes everyone's time.

  • I've tried. And been fired for it. From the unemployment office, it is amusing to watch as WS ATG investigates a company due to bad testing practices. It is disturbing to watch as 'computer glitches' cause hundreds if not thousands of travelers to be impacted. It is disturbing to send an email saying that, due to bad management practices (just what this article describes), I cannot guarantee or vouch for the quality of a product, then be escorted out the door the next day.
    I have and always will stand
  • I have had this problem in several different projects with different teams.

    I generally think of it as the "code hostage" or "cherry-picking" problem. You have the work of 15 issues reviewed and merged and loaded up and running on a test environment. For 14 of those issues, a "user" ( or whatever you call the non-developer issue-owner in this case ) checks in and says it is good. Time is passing and the 15th person is a no-show. It's worse than if they said it still wasn't fixed -- then you would immedia

  • as long as everyone buys in that Quality is important and fixing urgent bugs is also important.

    When we go to production we have a 3-day code freeze during which only P0 bugs (which are rare at that point) can lead to the code being changed.
    After production the code is tagged and new development starts on a new branch.
    Any production bugs which can't wait till the next release are fixed on the production branch, tested by the QA team, and released.
    The developer makes sure to do the same fix in the new code branch (if th

  • Hire better, more senior people. Hire more staff if you are understaffed. Because if you have enough developers, and they are seasoned, then together you'll come up with a process that works for YOUR company and YOUR team. Or - be like too many companies in a race to the bottom of underpaying, understaffing, and then complaining when it doesn't work out. Or worse - thinking some process or tool will save the day. It won't, and if it did - you wouldn't deserve it.
  • We use JIRA but there are many ways to do this. Implement a pipeline that must be followed.

    * User creates a ticket in the system (bug/issue, etc.)
    * Developer works on the ticket (in a new branch)
    * Developer gets code review from a peer
    * Developer pushes changes to the staging server
    * User who filed the ticket tests the change / signs off on the fix
    * Code is deployed to QA to test the change
    * QA signs off on the case
    * Developer (or build/release engineer) merges code from the branch to master
    * Code is released

    I'm sure you've found tha
  • Want to keep unapproved changes out and have a cross geo org? Easy, just make Gerrit [gerritcodereview.com] code reviews mandatory.
    • by tlhIngan ( 30335 )

      Exactly.

      Gerrit requires code be approved before it will merge it into the mainline branches. It replaces a centralized Git server.

      Deployments pull from the official Gerrit mainline, while developers can push/pull into their own private branches without requiring approval. But to push to mainline requires approval and review.

      And there's a full chain of custody - if some bad code gets approved, you can see all the comments and who approved the change.

      It's a bit tricky if you need to revise a fix, but it just

    • Yup. What is this doing on the front-page?

  • by Billly Gates ( 198444 ) on Monday April 17, 2017 @02:30AM (#54247487) Journal

    ... hides

  • There's a wealth of excellent suggestions here already, but I'd like to take a different approach to answering your question. Put simply, start with your customer[s]. Your post does not mention, but strongly suggests, that this software is for in-house use. On that basis, identify your stakeholders and understand their appetite for risk and their desire to move forward quickly.

    As the old adage [of project management] goes: "Do you want this quickly? Do you want this with quality? Do you want this at low c
  • From my experience I suspect the problem doesn't start with the code being pushed into production in the wrong way; the problem starts with *users* sending uncoordinated bug reports and feature requests directly to the developers.

    Without some program/feature person "In charge" on the user side it feels pretty hopeless.

    The only solution I have come up with for me: make sure that management KNOWS that without proper procedures in place there is an increased risk of bugs slipping through that might affect producti

  • I don't really understand this development model "theory". Are users "end users" or are users in-house testers?
    If they are end users, how would they test the fix before it reaches production code?
    If they are testers and they don't test the fix, who the hell closes the bug report?

    Since you worry about errors getting introduced by merges, it sounds like you are also missing regression testing.

    If you don't do regression testing, and don't verify fixes on the release branch, what the hell is your QA depar
  • GHE has a new mode where pull requests need explicit approval to be merged, and you can also prevent pushing to the main branch.

    • New to Github Enterprise?

      Gitlab Community Edition has had this for about 18 months. Pull/merge requests can run automated builds including running tests, the results of which can be seen in the merge review screen. It can also be configured to auto-merge based on testing criteria (coverage, test results etc.).

  • by Opportunist ( 166417 ) on Monday April 17, 2017 @06:57AM (#54247829)

    Hey, not that it's anything unusual, but if you cut corners, expect to be sued for patent infringement by Apple. Or something like that.

    In all seriousness, though, if your USERS report bugs, you have a fundamental problem here. Because this is what it should look like:

    User defines specification. Programmer codes to spec. QA tests if spec is implemented correctly. Program ships. User finds something he doesn't like? It obviously has to be a change request, because the program does what the user specc'd.

    Yes, it is that simple. And yes, I'm fully aware that users don't have the first clue what they actually want. But they will never learn if you keep treating their blunders and imprecise specifications as if it was YOUR fault!

  • You don't want to make it even more cumbersome to change the code, as it sounds like you are already struggling with the 10 Mloc codebase. So forget about having humans "approve" the changes.

    What you want to do is make it easy to submit good code and difficult to submit bad code. This means that you will need the capability to quickly assess the proposed patch, for some definition of "good" and "bad". Computers are fairly good at this. In other words: test-first development, with automated testing on severa

  • Unit tests and pull requests. Use them.
  • Massive whips.

    And attack dogs. Angry attack dogs.

  • Don't allow developers to promote code - that should be done by a specific role responsible for change management, code review and deployment. Use file system monitoring software to alert and log all changes.
  • Produce a one-page "procedures" document - clearly but simply lay out the process for moving code from the programmer's branch to the QA branch and into the production branch. Have everyone read and sign it.

    The first time someone violates it, you give them an informal warning.

    The second time they violate it, have them sit down with management and HR and tell them that if they violate the rules again, they'll be terminated.

    The third time, you terminate them.

    Easy...no automation required...you simply have to

  • I have been dealing with a similarly sized project for the last 13 years. Our workflow is different, though. We have continuous updates (no batch updates) when it comes to bug fixing. We use our in-house Project Management (PM) system.

    - Client sends a request/bug report; our team creates a new task in our PM and
    - assigns it to a department/programmer
    - programmer fixes it, tests it on the Devel version, and puts the task into QC status
    - QC team tests the described problem and greenlights deployment
    - programmer deploys/merges the fix into the liv

  • you hire a person to be the gatekeeper who does exactly what you want.

    Sorry, but there is no cheap and free way to do this; it's called a project manager, and they need to be competent and detail-oriented.

    Devs sign off on the code and their own testing, then this person makes sure that the beta testers also tested and signed off on it, then they personally sign off that all is well and OKs publishing to a new gold release.

    Not hard to do, and requires your management to be competent and understand they need

  • My organization a while ago saw significant issues with untested fixes being deployed and similar bad practices (undocumented configurations, lack of integration testing, etc.). The thing that did it for us was seeing our up-time drop below 99% in production systems. It became downright embarrassing and started costing us real $.

    So our then-CIO froze all production changes for 90 days. In that time, we instituted a change review board. They now approve all production changes. Without the culture change that

  • I think your difficulty is that your current workflow conflates two roles. You are using your stakeholders (the users of your software) as the individuals in charge of acceptance. They're likely not motivated, and possibly not qualified, to really do so. Remember, they're just interested in getting THEIR work done. If their workflow were "enlightened" like yours, then they might care about process... but probably not.

    The missing piece of the puzzle here is the role of Product Owner. An individual that works

  • This is the part of C++ that you can't learn in 21 days.

  • Developers code. Developers review code. Developers write code to test code.

    Developers can't touch production.
    Automated checks of ANYTHING are golden because you aren't relying on people, and you aren't pitting people vs people.
    It is all about consistency when you boil down to the bones of a well-running team.
    Start with those rules and actually follow them and you end up with a pretty awesome setup, because Developers will naturally gravitate to defensive, test-driven programming when those are the r
  • You are missing a quality engineering or quality assurance group -- a group that's independent of the developers and specializes in developing appropriate tests, test harnesses, test automation and also provides another set of eyes on features and bug fixes.

    Our process:

    Bug is reported or feature request is approved for developer to work on.

    Development cycle:
    Developer implements feature / fixes it. Ideally based on design of fix/feature, QE develops test plan/tests. A lot of work can happen concurrently. Som

  • The golden path should be:
    1) User reports a problem to the service desk.
    2) Service desk looks into the problem and either addresses it if it's user error or punts it to QA/testing.
    3) QA/testing investigates and documents as much as possible about the bug - replication steps, affected screens, whatever. They would do this both in production and a staging environment to see if it's an environmental issue.
    4) Developer takes the bug and figures out the issue, creates a fix, which is then sent back to QA/testing
