Ask Slashdot: How Would You Stop The Deployment Of Unapproved Code Changes?
Over a million lines of code -- in existence for over 10 years -- gets updates in six-week "sprints" using source control and bug-tracking systems. But now an anonymous reader writes:
In theory, users report bugs, the developers "fix" the bugs, the users test and accept the fix, and finally the "fix" gets released to production as part of a larger change set. In practice, the bug is reported, the developers implement "a fix", no one else tests it (except for the developer(s)), and the "fix" gets released with the larger code change set to production.
We (the developers) don't want to release "fixes" that users haven't accepted, but the code changes often include changes at all levels of the stack (database, DAOs, business rules, web services, and multiple front-ends). Multiple code changes could be occurring in the same areas of code by different developers at the same time, making branch merges very complex and error-prone. Many fingers are in the same pie. Our team size, structure, and locations prevent having a single gatekeeper for code check-ins... What tools and procedures do you use to prevent unapproved fixes from being deployed to production as part of larger code change sets?
Fixes are included in a test build for users to test and accept -- but what if they never do? Leave your best answers in the comments. How would you stop unapproved code changes from being deployed?
permissions (Score:5, Informative)
"How woud you stop un-approved code changes from being deployed?"
Require approval from someone before changes are pushed out.
Re: permissions (Score:4, Insightful)
That doesn't work. I manage devs on five different continents, and my boss always wins. There is no way to beat bad management.
Re: permissions (Score:4, Insightful)
I had a boss who gave me some really good advice 15 years ago when I was getting chewed out by the owner (who was his boss): "get a backbone". If your boss is constantly reaching around you to your workers, they don't respect you, and either you should be fired for sucking or you should leave because they're doing it wrong.
If YOU are managing them, it's your job, not your boss's. Take responsibility or take off.
Re: permissions (Score:4, Interesting)
"That doesn't work"
It doesn't not work. More eyes do tend to catch out-and-out bugs. But some still slip through. And stuff that sounds good but busts users' workflows still gets through. And developers hate the delay. And more review costs more. And more testing costs even more than more review. So managers aren't big fans of either.
I suppose the solution is never to work with millions of lines of code. But "This sucker is way too big and too complicated" is not an easy sell.
Re: (Score:2)
I suppose the solution is never to work with millions of lines of code. But "This sucker is way too big and too complicated" is not an easy sell.
What you're talking about is technical debt. You're talking about code that is such a complicated mess that code reviews can't effectively determine what the side effects might possibly be to simple changes. I would bet this code you are referring to has zero automated tests which might catch some of these side effects. There is no solution to that problem. It's a sinking ship held together by duct tape and rubber bands. It's a ticking time bomb.
Re: (Score:2)
Some of that. But I made quite a good living for several decades back in the 1960s, 1970s, and 1980s integrating large systems. Did some development also, and have some sympathy for developers. Good unit tests help, and so would automated testing. But let's get real. Few actual deployed systems have meaningful specs, and even those that started with specs probably didn't maintain the specs and have bugs in the original specs -- omissions (oh, you wanted the trig functions to be fast as well as accurate
Re: (Score:3)
Some of that. But I made quite a good living for several decades back in the 1960s, 1970s, and 1980s integrating large systems. Did some development also, and have some sympathy for developers. Good unit tests help, and so would automated testing. But let's get real. Few actual deployed systems have meaningful specs, and even those that started with specs probably didn't maintain the specs and have bugs in the original specs -- omissions
Hey, I'm an old-timer too, but you apparently never caught up. Waterfall doesn't work. Never did. By the time a specification is written, it's already out of date and doesn't reflect what the customer actually wants. Other old-timers by the names of Ward Cunningham, Kent Beck, Martin Fowler, and many others figured this out. Now we have Agile and Extreme Programming and all of the variants.
Here let's get real with you. First of all, the software projects of the 60's, 70's and 80's were comparatively m
Re: (Score:3)
Well, that's the thing about Dunning-Kruger; if you thought it through a little farther, you'd realize that a bunch of people glancing at my post and laughing are almost all at that early peak. ;) You seem rather sure of yourself too, and yet, you didn't even get far enough in to touch on the meat of my comment.
Know going in that I'm an old-school software developer and that my comment was a universal truth. If you didn't understand it, or were too busy laughing at the nearest meme to try, that's fine; but
Re: (Score:2)
When I owned a Saab, a splendid car save for the complexity and unfortunate influence of General Motors, my dealer technician was quite proud of the fact that he was trained and proficient in all repairs and maintenance, even transmission service and body work. Many auto techs are proficient in diagnosis, most engine repair, blah blah, but not in body work or transmission repair. I was impressed.
Then I had a problem he couldn't figure out - it would stall at a stop, just die, restart fine. After weeks of
Re: (Score:2)
There is no way to beat bad management.
Actually, bad management usually beats itself. I've seen plenty of companies go out of business because of bad management not learning from its mistakes. The problem is, when the ship sinks, everyone goes down with it including the narcissistic self-righteous douche bag that thought they were the ultimate shizzle.
Re: (Score:2)
there is no way to stop management
Really? What about a nuclear war? Or are they just like cockroaches?
Re: (Score:2)
And without testing, your post introduced a minor bug into the thread.
Fortunately it is innocuous. Some code is less equal than some other.
Re: (Score:3)
It's the worst. Where I work, our CEO is a business guy who runs a multi-state, multi-million-dollar company with large numbers of employees, aaaand he likes to code. In fairness, he's actually not a bad coder, coming from a C++ background, but he's rarely over all the issues that the engineers are over, and as a result there's always a background noise of randomness coming into the code fr
Re: (Score:2)
I would just get a new job and during the interview process, ask if the owner codes in production at 1am with vi. If they say yes, walk out.
Re: permissions (Score:5, Insightful)
This. We have devs in the US and in South America, Eastern Europe, NA, and Asia. That doesn't stop my boss from merging bad code.
Where I work, when I do a pull request for the develop branch, I _must_ specify a reviewer and a tester, and until the reviewer has marked the code as fine, and the tester has marked it as fine, and a merge can be done with no conflicts, nobody can merge the code, including any boss. You can quite easily set this up in JIRA, for example.
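For what it's worth, this kind of gate can also live in the version control server itself, independent of JIRA. Below is a minimal sketch using a git pre-receive hook that refuses pushes to master unless every commit carries a Reviewed-by trailer. The repository names and the trailer convention are invented for illustration, not anyone's actual setup:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A bare "central" repository with a pre-receive hook as the gatekeeper.
git init -q --bare central.git
cat > central.git/hooks/pre-receive <<'EOF'
#!/bin/sh
zero=0000000000000000000000000000000000000000
while read old new ref; do
  [ "$ref" = "refs/heads/master" ] || continue
  if [ "$old" = "$zero" ]; then range=$new; else range="$old..$new"; fi
  for c in $(git rev-list "$range"); do
    if ! git log -1 --format=%B "$c" | grep -q '^Reviewed-by:'; then
      echo "rejected: commit $c lacks a Reviewed-by trailer" >&2
      exit 1
    fi
  done
done
EOF
chmod +x central.git/hooks/pre-receive

git clone -q central.git work 2>/dev/null && cd work
git config user.email dev@example.com && git config user.name Dev

# An unreviewed commit is turned away at the server...
echo fix > bug.txt && git add bug.txt && git commit -qm "unreviewed fix"
git push -q origin HEAD:master 2>/dev/null && echo pushed || echo blocked

# ...and the same change with a review trailer gets through.
git commit -q --amend -m "reviewed fix" -m "Reviewed-by: Alice <alice@example.com>"
git push -q origin HEAD:master 2>/dev/null && echo pushed || echo blocked
```

Hosted equivalents (protected branches with required reviewers) do the same job without hand-rolled hooks, and unlike a policy document, neither can be bypassed by a boss in a hurry.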
Re: (Score:2)
Nope. Sometimes it's "set fire to management".
Learn from the Rust project's developers. (Score:2, Insightful)
A lot could be learned by observing what the developers of the Rust programming language [rust-lang.org] project do when running their project.
They're dealing with a large project that covers a complex domain, a huge amount of code, and many developers scattered across the globe.
The first thing to do is to use git, and perhaps something like GitHub [github.com]. This will allow your developers to collaborate using a free and open source version control system.
Next, you need a Code of Conduct [rust-lang.org] to prevent social injustice from negatively
Re: (Score:2)
Next, you need a Code of Conduct [rust-lang.org] to prevent social injustice from negatively affecting the project. A Moderation Team [rust-lang.org] is tasked with ensuring that everyone is tolerant, and any intolerance will be ruthlessly stamped out.
That was a good one!
Re: (Score:2)
Where I work, the key value is respect. This allows us to collaborate, to propose the best solution, to do what the customer wants and not what seems right to us.
Even then we struggle with egos, but respect is the most important value, and the core of the business.
Re: (Score:2)
The problem described in the article is a lot more complex, and it comes down to the system having had a bad architecture from the start. However, it's easy to make a bad architecture with many of the programming languages and development environments that are around today.
It's of course easy to say that you should run tests and have good test suites to execute to verify the platform but they only take you to a certain level, never to the level where the usability is validated.
And how do you verify when the input
Re: (Score:2)
Testing is the key, especially when working with a legacy project. When a bug report is accepted, you first create a test case that fails, and then you fix the bug. That gives you some kind of assurance that you fixed something. And you ship that fix, unless it fails QA at some point.
Only the bug reporter can find out whether what you fixed also fixed their problem. You can track that - and say the issue is either confirmed fixed, or unconfirmed fixed. That is what bug trackers do. Sometimes it is easiest f
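That loop -- reproduce with a failing test, then fix, then keep the test as a regression guard -- can be sketched with a deliberately tiny shell example. The add() function and its bug are invented; a real suite would use JUnit, pytest, or whatever fits the stack:

```shell
#!/bin/sh
tmp=$(mktemp -d) && cd "$tmp"

# The code under test, with the reported bug: add() subtracts.
printf 'add() { echo $(( $1 - $2 )); }\n' > lib.sh

# Step 1: a regression test that reproduces the bug on demand.
cat > test_add.sh <<'EOF'
. ./lib.sh
[ "$(add 2 3)" = "5" ] && echo PASS || echo FAIL
EOF

sh test_add.sh    # prints FAIL: the bug report is now a repeatable test case

# Step 2: fix the code; the test stays in the suite forever.
printf 'add() { echo $(( $1 + $2 )); }\n' > lib.sh
sh test_add.sh    # prints PASS
```

The point of the order is that a test you wrote after the fix proves nothing; a test that failed before the fix and passes after it is evidence.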
Re: (Score:2)
The description you provide is a very simplified perspective, but the reality is that sometimes the bugs you fix aren't easy to create a simple success/fail test scenario for. This is very common in systems where you have a lot of interdependencies and race conditions. Each piece of the puzzle may work fine but together they create problems one time out of a thousand - and never in a test rig, only in real world platforms.
Re: (Score:2)
As somebody said, this may be an architecture problem, or maybe a timing problem, not just a simple bug. And you do not change your architecture willy-nilly, in the hope that it fixes the issue. At the very least, you would want some engineering process to underpin any significant changes.
Re: permissions (Score:2, Interesting)
Yes. In addition to 100% unit code coverage and integration tests.
Re: permissions (Score:4, Insightful)
Yes, or even more. How many people do you think usually look at each line of text or each line of music before it gets published? And there, the stakes are usually considerably lower than for code.
Code reviews: Just say yes (Score:5, Informative)
Re: (Score:2)
so now you have two coders looking at every line of code?
You really only need one, as long as the one is called something like Knuth or Venema.
Re: (Score:2)
Knuth might prove it correct but he wouldn't test it.
Re: (Score:2)
Re: permissions (Score:5, Interesting)
It's an old saying that a doctor who treats himself has a fool for a patient and an ass for a physician.
Yet this is precisely the way many IT shops treat testing.
One of the biggest problems with this approach is that the developer "knows" where the weak spots are and tests them, insofar as the schedule allows any real testing at all. An independent tester is not as prone to this sort of tunnel vision, especially when the tester isn't looking at the code, but instead at the way the code works. Which is, after all, what the code is ultimately for.
A second problem is that characteristics that make a good software tester are not necessarily those of a good software developer. A good tester has to be the sort of meticulous person who can go over items line-by-line over and over again and never take shortcuts. A good developer may be a good developer precisely because he/she can leap around the concepts and tie together seemingly unrelated points.
Then there's the third problem, which is that contrary to whatever non-Euclidean world Management lives in, you cannot dump the jobs of developer and tester on the same person and rationally expect that they can inflate to handle both requirements optimally. Real employees have limits. Not that that matters when "right-sizing" the corporate personnel assets for the next quarter's executive bonuses.
Re: (Score:2)
Yup. Software developers and software testers -- at least if they're both good -- use different skill sets. Technically, testing things before you ship them is a cost center, but most companies do not want to spend the time and money to make practically-defect-free software before testing: It requires that someone who knows what they're doing set up a strong development process, that developers consistently think hard about what they're doing, and that a lot of risks are eliminated or realized before thin
Re: (Score:2)
Every activity you pay for is a cost center. Everything that brings in revenue is a profit center.
A business is profitable when the profit centers deliver more value than the cost centers, and are able to do so sufficiently to meet the expectations of investors (if any) and sustain either growth or continuity.
None of this is new, but we do see efforts (not new either) to redefine success in business.
Feh.
Re: (Score:3)
Before forced reductions in anticipation of product decommissioning, we had a dedicated tester, who had good enough regression testing suites that he caught bugs in most releases. And these were almost always forehead-slappers, 'darn, I forgot that again!!!' types.
Moving to the new version, new platform, they solved the testing problem by delaying releases interminably, denying features until forced, then complaining about schedules and arbitrary timelines, as in being forced to release code 8 years late.
Our w
Re: permissions (Score:4, Insightful)
so now you have two coders looking at every line of code?
Yeah...because this is how it's done when it's done professionally. You have one coder...the guy who wrote the change...and then another coder...the one who tests it.
This happens in non-code places too, like journalism. One person writes the article, and another proofreads it. (Due to the acceleration of the news cycle, this has been going away...with predictably-bad results.) Consulting? Yes, you have quality control (another person reading and checking the deliverable..every line of it) before it goes to the client. Engineering? One engineer builds the spec, and another has to approve it; this is actually mandated by law for a lot of things, in fact, where permitting is involved (like construction).
Fundamentally, the question is "how do you keep code from being pushed to the public before it's tested." You seemed to miss that in your reply, because the very point of the question requires two people...people who must understand what they're reading (and thus are coders)...to look at the code. Also, your reply seems to imply that a code change requires reading ALL of the code, not just the new or changed code, and this is simply not true.
Users don't report bugs (Score:4, Interesting)
That right there is a part of the problem. Software testers report bugs. Hints about potential bugs can come from end users, but end users are not software testers.
There is no substitute for a really good (professional.... and paid) software tester who can reliably reproduce the bugs that need to be removed. If anything, they are far more valuable than even code monkeys writing thousands of lines of code per month (a metric also largely irrelevant for quality software). If anything, I would pay the software testing team before the coders if you need to use some volunteer labor, as in an open source project.
End users can offer hints to a good software testing team as to what might be bugs, and end user reports should definitely be taken seriously since it is something that slipped past those testers as well. When the software testers are fired and/or it is presumed that unpaid volunteers are going to be doing that quality assurance process, especially for a commercial software product, you get what you pay for.
Re: (Score:2)
There is no substitute for a really good (professional.... and paid) software tester who can reliably reproduce the bugs that need to be removed. If anything, they are far more valuable than even code monkeys writing thousands of lines of code per month (a metric also largely irrelevant for quality software). If anything, I would pay the software testing team before the coders if you need to use some volunteer labor, as in an open source project.
This works a lot but it's not a panacea. I've seen several large software efforts where manual testing and test plans start becoming ineffective because the complexity gets to a level where no amount of highly talented testers can deal with it. The execution of manual test plans starts taking inordinate amounts of time. The cataloging of test plans and keeping them up-to-date becomes counter-productive. The only way to move beyond this point is to develop an automated testing strategy. I'm not referrin
Re: (Score:3)
Are you the owner of Ham Radio Deluxe?
Feature Toggles (Score:2)
Entice users to test the new code (Score:2)
More money, better computers, whatever it takes - get a subset of the users on board to test the new code before it's final.
isn't this pretty straightforward? (Score:5, Informative)
- You set up the central repository to only accept code if it can be merged and results in all tests passing.
- You make sure that there is defined code ownership and that people can only change code with a review and with the approval of the owners, also enforced by the source code control system.
Long-term, there are two more things that should happen:
- Developers need to learn how to break up large diffs into many small, individually testable diffs.
- You need to break up your codebase so that it's not a single project with 1Mloc, but 50 small projects with 20kloc each.
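The first rule above -- the central repo only takes code that merges cleanly and passes the tests -- can be made concrete with a small merge-gate sketch. Here run_tests.sh is an invented stand-in for the project's real suite, and git 2.28+ is assumed for init -b:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master repo && cd repo
git config user.email dev@example.com && git config user.name Dev

echo 'echo ok' > app.sh && git add . && git commit -qm "base"

# A feature branch that carries its own (currently passing) test suite.
git checkout -q -b feature
printf '#!/bin/sh\nexit 0\n' > run_tests.sh
echo 'echo feature' >> app.sh
git add . && git commit -qm "feature work"

# The gate: the merge to master happens only if the candidate's tests pass.
if sh run_tests.sh; then
  git checkout -q master
  git merge -q --no-ff --no-edit feature && echo merged
else
  echo "merge refused: tests failing on feature" >&2
fi
```

In practice a CI server runs this gate, not a human, which is exactly what removes the need for a single gatekeeper across time zones.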
Re: (Score:2)
Code ownership is so 1960s ...
Re: (Score:2)
Pretty much every project on GitHub has code ownership.
Re:isn't this pretty straightforward? (Score:4, Insightful)
The 1960s were when you wrote software by punching cards that someone else fed in, and where it had to work the first time. Every time. That kind of discipline is sorely needed by the original question submitter.
The whole haphazard development model described in the question is absurd. First of all, what kind of single bug requires rifling through back-end databases, business rules, web services, and multiple front ends? That's not a bug in the software; that's a flaw in the pre-design definition phase. Seriously... you can't just accept all the premises in the question without thought. That kind of change only happens when someone is calling "the customer wants this feature changed" or "we misunderstood what the customer needed" a bug, which is wrong on its face.
Secondly, multiple people making changes of that scope simultaneously is just wrong, whatever the cause. Distributed revision control systems were built to handle multiple simultaneous branches in order to break bottlenecks among people working on different areas of a common source file. They were designed to accommodate merges that had occasional and minor overlaps. What is described here is a completely inappropriate use of that kind of environment. So to answer the question directly, when asked what tools can help, the answer is that no tools can help you. The process is wrong. You are far better off reverting to a revision control system that enforces a single checkout of a source file if this is what is going on. Better yet, correct your development strategy.
This can't be emphasized strongly or often enough. Code ownership is a good step forward in this scenario, but the only real fix for these problems is to completely refactor the way change is managed in this project. You wouldn't be wrong to Gantt chart these changes with their subsystem impacts so they can be scheduled on a non-interference basis. Better yet, if you are having to make multiple back-end through to UI changes, you need to go through a whole scope identification phase again.
Your change system is hopelessly broken. Fix that, then the correct use of existing tools to assist you will become readily apparent.
Re: (Score:2)
While I agree that this may be the case, it is also possible that the software structure is terrible. Maybe one feature is spread out over many different parts of the project. Especially if some mixing language like PHP is involved, that is actually quite typical, because you work with JavaScript code that is fixed and JavaScript code that is generated, possibly from different sources.
Unfortunately, there is no easy fix for this. You need to refactor the software if you want to make it more maintainable.
Re: (Score:2)
Re: (Score:2)
- You set up the central repository to only accept code if it can be merged and results in all tests passing.
I'll expand on this in case anyone is confused. The code itself should be developed alongside unit tests. Whenever a new interface/class is developed, there should be tests built to ensure that *every method* behaves exactly as expected. Java has packages like JUnit, Python has nosetests, and there are others. Lastly, with such a widespread development team, it's imperative you develop coding standards and have management backing them up. Use things like the SOLID design principles [scotch.io], and make sure that code i
Re: (Score:2)
By the way, the Linux kernel is actually a fairly big codebase, which goes against what I have been saying about breaking up projects. It works for the Linux kernel because of L
Re: (Score:2)
With the codebase dispersed over different teams, each team of course thinks their stuff is most important.
Those 50 small projects? You just created dependency hell! Sure, in theory you should strive for high cohesion and low coupling of projects, but suddenly team A needs this from team B, and they don't want to spend time asking, so they just implement that functionality at their leve
Re: (Score:2)
Yes, the problems you list occur when you break up big projects into smaller ones. They also occur when you don't break up projects. The difference is that when you break projects up, these problems actually become visible and exposed, which is why they can then be addressed by management and through tools.
Re: (Score:2)
The length of a name should be either configured in a central location, or it should be (effectively) unlimited. Using fixed length fields does seem very 1960s. It is possible that a number of pieces fail on that issue.
presumably you tagged the sources (Score:2)
Presumably you tagged the sources that went into the build that went to your customers?
If you did, when you make bug fixes you need to check out against that tag, not to the bleeding edge code where new features are being added.
Depending on how many fixes there are and how complex and messy the source tree is, you can either try to merge the changes into your bleeding edge code base or make the changes twice. In general, if the bleeding edge is being vigorously refactored or otherwise aggressively reorgani
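In git terms (tag and branch names invented), the tag-then-branch flow sketched above looks roughly like this. The fix is made against what customers actually run, then carried forward to the trunk:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master repo && cd repo   # git 2.28+ for -b
git config user.email dev@example.com && git config user.name Dev

echo v1 > app.txt && git add . && git commit -qm "release 1.0"
git tag -a rel-1.0 -m "what the customers are actually running"
echo feature >> app.txt && git commit -qam "risky new work on the trunk"

# Fix against the shipped tag, not against the bleeding edge:
git checkout -q -b hotfix-1.0.1 rel-1.0
echo fix > fix.txt && git add fix.txt && git commit -qm "customer bug fix"
git tag rel-1.0.1

# Then carry the same fix forward into the trunk (merge or cherry-pick):
git checkout -q master
git cherry-pick hotfix-1.0.1 >/dev/null
```

When the trunk has been heavily refactored since the release, the cherry-pick will conflict, which is the signal that the fix really does need to be made twice, once per codebase shape.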
Gatekeeper(s) - Needed (Score:3)
and the "fix" gets released with the larger code change set, to production...
The problem here is that the fix "gets released." I agree that it seems like releases should at least have the criterion that at least one other person has reviewed the code being released. Otherwise, they have the criterion that one person decided to release it (by definition).
I think you could create a system by which pull requests are approved by someone other than the person who created them, and then, after a request has been approved, the code is authorized to be merged into a release branch. Here's [ycombinator.com] one such discussion of that. I'm not an expert on this, but I've heard of it, and I think this line of reasoning could help you.
Good luck!
Step One (Score:3, Insightful)
Step one is to get high-level management to understand and agree with the risks as well as to understand and agree with the costs of preventing them.
You would think that this is a no-brainer, but it's not. I've listened to a COO tell me from across a boardroom table that they have to be able to bypass deployment processes for business-critical hot fixes because time is of the essence in those situations, and that was the end of it. So what you've got is that in an "emergency", an "informal" approval from an "important" person is all that is needed. Feel free to define those words however you wish, naturally.
Microservices (Score:3)
When people are worried about changes in "many layers of the stack", it's usually a good time to re-architect the system and build microservices. Basically, you get the entire stack in every microservice and you stop worrying about ripple effects; you upgrade or troubleshoot things at a much smaller scale.
I highly recommend this book:
https://www.amazon.com/Buildin... [amazon.com]
It explains how to achieve this, including how to deal with the tough parts like the database layer.
Re: (Score:2)
When people are worried about changes in "many layers of the stack", it's usually a good time to re-architect the system and build microservices. Basically, you get the entire stack in every microservice and you stop worrying about ripple effects; you upgrade or troubleshoot things at a much smaller scale.
Isn't this one of the problems caused by modularization, not solved by it? Basically if everything was in the same VCS it'd be a huge change set doing some database changes, some business rule changes, some desktop GUI changes, some Android GUI changes, some iPhone GUI changes etc. but the moment you start breaking it up you have to start tracking that this change of functionality requires changes in five different projects and unless everything makes it into the next release it won't work. The more you've
Re: (Score:2)
When people are worried about changes in "many layers of the stack", it's usually a good time to re-architect the system and build microservices. Basically, you get the entire stack in every microservice and you stop worrying about ripple effects; you upgrade or troubleshoot things at a much smaller scale.
I highly recommend this book: https://www.amazon.com/Buildin... [amazon.com]
It explains how to achieve this, including how to deal with the tough parts like the database layer.
Just watch the Spotify Engineering Culture [youtube.com] videos. What you refer to as "ripple effects" they refer to as "blast radius", which I like much better. The benefit of microservices is that if one microservice blows up, the rest continue to run, at least enabling partial functionality, as opposed to taking the whole system down or putting the entire system into a funky state.
Yes it addresses the problem (Score:2)
Rolling it out in microservices just means some have issues and others not, making it more confusing and less cohesive.
I don't think you understand what a microservice is.
The point is not to randomly package web services; the point is to define components that correspond to specific areas of the business domain. That way, when you upgrade or deploy a specific microservice, you know exactly what part of the business you're impacting, and this makes governance a lot easier. Instead of having a handful of managers snoozing in CAB meetings and rubberstamping changes that are not clear, you can get the thumbs up from the right p
Re: (Score:2)
the point is to define components that correspond to specific areas of the business domain.
If people were capable of that, they wouldn't be in the problem to begin with. Their system would already be suitably modular.
Re: (Score:2)
the point is to define components that correspond to specific areas of the business domain.
If people were capable of that, they wouldn't be in the problem to begin with. Their system would already be suitably modular.
So you're basically saying: they can't fix it because if they could they wouldn't have to fix it. So from the get go you're already in patch mode. That's the best way to set yourself up for failure.
Here's the key to improving things: you don't compromise when you're at the whiteboard. You compromise once you have a clear picture of where you want to get and once you know that the gap between your current state and the desired state is too large for your current resources. And even then you don't change the
Re: (Score:3)
The underlying problem isn't microservice vs non-microservice: both can be fine architectures. The underlying problem i
Official Builds. CVS Tags (Score:2)
You lay down a CVS Tag and the build technician builds from it. Releases are clearly identified with an MD5 identifier.
If only one official binary exists, it's what people will run.
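The same discipline expressed with git instead of CVS (version numbers invented): build only from the tag, and publish a checksum as the release identifier. The gzip -n flag omits the timestamp so rebuilding the tag yields a byte-identical archive and therefore a stable checksum; note that MD5 identifies a build but is not tamper-proof, so sha256sum is the safer habit today:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master repo && cd repo   # git 2.28+ for -b
git config user.email build@example.com && git config user.name Build

echo code > main.c && git add . && git commit -qm "release candidate"
git tag -a v2.3.1 -m "official build 2.3.1"

# Build from the tag only; publish the checksum alongside the artifact.
git archive --format=tar --prefix=v2.3.1/ v2.3.1 | gzip -n > v2.3.1.tar.gz
md5sum v2.3.1.tar.gz | tee v2.3.1.md5
```

Anyone can then verify that the binary they are running came from the tagged sources, which is what makes "if only one official binary exists" enforceable rather than aspirational.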
Eliminating Bugs: 101 (Score:2)
A piano (Score:3)
Next person to release an untested line of code will play the piano for us *SLAM*
For starters get your company to pay (Score:2)
Unless testing really isn't that important. Depending on your needs it might not be (despite all the indignation that engenders). One thing I've learned about
Amen (Score:2)
As a long time SQA/HQA eng, this is an awesome question. I've been around a lot of blocks. New build. Broken. Here's another one. Broken. My Golden Rule? Stay until it works. I'd so much rather have a good build than something slapped together. It wastes everyone's time.
As a STE/QA/SDET (Score:2)
I have and always will stand
Separate Testing Envs, hold merges til post-UAT (Score:2)
I have had this problem in several different projects with different teams.
I generally think of it as the "code hostage" or "cherry-picking" problem. You have the work of 15 issues reviewed and merged and loaded up and running on a test environment. For 14 of those issues, a "user" ( or whatever you call the non-developer issue-owner in this case ) checks in and says it is good. Time is passing and the 15th person is a no-show. It's worse than if they said it still wasn't fixed -- then you would immedia
This is not difficult (Score:2)
as long as everyone buys in that Quality is important and fixing urgent bugs is also important.
When we go to production, we have a 3-day code freeze during which only P0 bugs (which are rare at that point) can lead to the code being changed.
After production, the code is tagged and new development starts on a new branch.
Any production bugs which can't wait till the next release are fixed on the production branch, tested by the QA team, and released.
The developer makes sure to do the same fix in the new code branch (if th
Spend Money (Score:2)
Project Management (Score:2)
* User creates a ticket in the system (bug/issue, etc.)
* Developer works on the ticket (in a new branch)
* Developer gets a code review from a peer
* Developer pushes the changes to the staging server
* User who filed the ticket tests the change and signs off on the fix
* Code is deployed to QA to test the change
* QA signs off on the case
* Developer (or a build/release engineer) merges the code from the branch to master
* Code is released
I'm sure you've found tha
Gerrit (Score:2)
Re: (Score:2)
Exactly.
Gerrit requires code be approved before it will merge it into the mainline branches. It replaces a centralized Git server.
Deployments pull from the official Gerrit mainline, while developers can push/pull into their own private branches without requiring approval. But to push to mainline requires approval and review.
And there's a full chain of custody - if some bad code gets approved, you can see all the comments and who approved the change.
It's a bit tricky if you need to revise a fix, but it just
Re: (Score:2)
Yup. What is this doing on the front-page?
Sarbanes Oxley/SOX ... (Score:4, Interesting)
... hides
Different Tack (Score:2)
As the old adage [of project management] goes: "Do you want this quickly? Do you want this with quality? Do you want this at low c
I'm somewhat in the same boat where I work. (Score:2)
From my experience, I suspect the problem doesn't start with the code being pushed into production the wrong way; it starts with *users* sending un-coordinated bug reports and feature requests directly to the developers.
Without some program/feature person "In charge" on the user side it feels pretty hopeless.
The only solution I have come up with for me: make sure that management KNOWS that without proper procedures in place there is an increased risk of bugs slipping through that might affect producti
How would a user test a fix before production? (Score:2)
If they are end users, how would they test the fix before it reaches production code?
If they are testers and they don't test the fix, who the hell closes the bug report?
Since you worry about errors getting introduced by merges, it sounds like you are also missing regression testing.
If you don't do regression testing, and don't verify fixes on the release branch, what the hell is your QA depar
Pull requests (Score:2)
GHE has a new mode where pull requests need explicit approval to be merged, and you can also prevent pushing to the main branch.
Re: Pull requests (Score:2)
New to GitHub Enterprise?
GitLab Community Edition has had this for about 18 months. Pull/merge requests can run automated builds, including tests, the results of which can be seen on the merge review screen. It can also be configured to auto-merge based on testing criteria (coverage, test results, etc.).
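The same protection exists in plain git as a server-side pre-receive hook, which is roughly what these "protected branch" features do under the hood. The sketch below builds a throwaway bare "server" repo and shows a direct push to master being refused while a feature branch goes through (all names are illustrative):

```shell
#!/bin/sh
set -e
server=$(mktemp -d); work=$(mktemp -d)
git init -q --bare "$server"

# The hook: refuse any push that updates refs/heads/master directly.
cat > "$server/hooks/pre-receive" <<'EOF'
#!/bin/sh
while read old new ref; do
  if [ "$ref" = "refs/heads/master" ]; then
    echo "direct pushes to master are not allowed" >&2
    exit 1
  fi
done
EOF
chmod +x "$server/hooks/pre-receive"

cd "$work"
git init -q
git checkout -q -b feature/fix-123
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "candidate fix"
git remote add origin "$server"

git push -q origin feature/fix-123              # allowed
if git push -q origin HEAD:master 2>/dev/null; then
  echo "push to master unexpectedly succeeded"
else
  echo "push to master was rejected by the hook"
fi
```

In a real setup the hook would allow pushes from the merge/review tooling only; the point is that the rule lives on the server, where no individual developer can skip it.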
So ... your users are the QA department? (Score:4, Insightful)
Hey, not that it's anything unusual, but if you cut corners, expect to be sued for patent infringement by Apple. Or something like that.
In all seriousness, though, if your USERS report bugs, you have a fundamental problem here. Because this is what it should look like:
User defines specification. Programmer codes to spec. QA tests if spec is implemented correctly. Program ships. User finds something he doesn't like? It obviously has to be a change request, because the program does what the user specc'd.
Yes, it is that simple. And yes, I'm fully aware that users don't have the first clue what they actually want. But they will never learn if you keep treating their blunders and imprecise specifications as if it was YOUR fault!
Wrong question (Score:2)
You don't want to make it even more cumbersome to change the code, as it sounds like you are already struggling with the decade-old, million-plus-line codebase. So forget about having humans "approve" the changes.
What you want to do is make it easy to submit good code and difficult to submit bad code. This means that you will need the capability to quickly assess the proposed patch, for some definition of "good" and "bad". Computers are fairly good at this. In other words: test-first development, with automated testing on severa
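A minimal sketch of such a machine gate, assuming a `run-tests.sh` entry point (faked here as always-green so the example is self-contained): the merge is attempted with `--no-commit`, the suite runs against the merge result, and the main branch only moves if the tests pass.

```shell
#!/bin/sh
set -e
repo=$(mktemp -d); cd "$repo"
# Helper so commits work without global git identity configured.
g() { git -c user.email=dev@example.com -c user.name=dev "$@"; }

git init -q
git checkout -q -b main
# Stand-in for the real test suite; always green in this sketch.
printf '#!/bin/sh\nexit 0\n' > run-tests.sh
chmod +x run-tests.sh
git add run-tests.sh
g commit -q -m "base"

git checkout -q -b proposed
echo "candidate change" > fix.txt
git add fix.txt
g commit -q -m "proposed fix"

# The gate: build the merge result, test it, only then let main move.
git checkout -q main
g merge -q --no-ff --no-commit proposed
if ./run-tests.sh; then
  g commit -q -m "merge proposed (tests green)"
  echo "merged"
else
  git merge --abort
  echo "rejected: tests failed"
fi
```

CI systems (Gerrit + a verifier, GitLab CI, etc.) are this loop with better reporting; the essential property is that a red suite means the mainline never changed.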
Pull Requests (Score:2)
Whips. (Score:2)
And attack dogs. Angry attack dogs.
Separation of duties (Score:2)
Just spell out the rules clearly. (Score:2)
Produce a one-page "procedures" document that clearly but simply lays out the process for moving code from the programmer's branch to the QA branch and into the production branch. Have everyone read and sign it.
The first time someone violates it, you give them an informal warning.
The second time they violate it, have them sit down with management and HR and tell them that if they violate the rules again, they'll be terminated.
The third time, you terminate them.
Easy...no automation required...you simply have to
Project Management System (Score:2)
I have been dealing with a similarly sized project for the last 13 years. Our workflow is different, though. We have continuous updates (no batch updates) when it comes to bug fixing. We use our in-house Project Management system.
- Client sends a request/bug report; our team creates a new task in our PM and assigns it to a department/programmer
- Programmer fixes it, tests it on the Devel version, and puts the task into QC status
- QC team tests the described problem and greenlights deployment
- Programmer deploys/merges the fix into the liv
This is easy. (Score:2)
You hire a person to be the gatekeeper who does exactly what you want.
Sorry, but there is no cheap or free way to do this; it's called a project manager, and they need to be competent and detail-oriented.
Devs sign off on the code and their own testing, then this person makes sure that the beta testers also tested and signed off on it, then they personally sign off that all is well and OKs publishing a new gold release.
Not hard to do, and requires your management to be competent and understand they need
Change the Culture and get a Change Review Board (Score:2)
My organization a while ago saw significant issues with untested fixes being deployed and similar bad practices (undocumented configurations, lack of integration testing, etc.). The thing that did it for us was seeing our up-time drop below 99% in production systems. It became downright embarrassing and started costing us real $.
So our then-CIO froze all production changes for 90 days. In that time, we instituted a change review board. They now approve all production changes. Without the culture change that
Blurred roles here (Score:2)
I think your difficulty is that your current workflow conflates two roles. You are using your Stakeholders (the users of your software) as the individuals in charge of acceptance. They're likely not motivated, or possibly not qualified, to really do so. Remember, they're just interested in getting THEIR work done. If their workflow were "enlightened" like yours, then they might care about process... but probably not.
The missing piece of the puzzle here is the role of Product Owner. An individual that works
Welcome to Software Development (Score:2)
This is the part of C++ that you can't learn in 21 days.
Well-defined Devops. (Score:2)
Developers can't touch production.
Automated checks of ANYTHING are golden because you aren't relying on people, and you aren't pitting people vs people.
It is all about consistency when you boil down to the bones of a well-running team.
Start with those rules and actually follow them and you end up with a pretty awesome setup, because Developers will naturally gravitate to defensive, test-driven programming when those are the r
You are missing QE/QA in your process. (Score:2)
You are missing a quality engineering or quality assurance group -- a group that's independent of the developers and specializes in developing appropriate tests, test harnesses, test automation and also provides another set of eyes on features and bug fixes.
Our process:
Bug is reported or feature request is approved for developer to work on.
Development cycle:
Developer implements the feature / fixes the bug. Ideally, based on the design of the fix/feature, QE develops a test plan/tests. A lot of work can happen concurrently. Som
Whole lotta wrong in that. (Score:2)
The golden path should be:
1) User reports a problem to the service desk.
2) Service desk looks into the problem and either addresses it if it's user error or punts it to QA/testing.
3) QA/testing investigates and documents as much as possible about the bug - replication steps, affected screens, whatever. They would do this both in production and a staging environment to see if it's an environmental issue.
4) Developer takes the bug and figures out the issue, creates a fix, which is then sent back to QA/testing
Re: (Score:2)
Yeah, this is a management problem. The technical safeguard I proposed won't work unless management buys in, which clearly they haven't already since this could be addressed by policy, but it hasn't been.
Re: (Score:2)
Even easier: automatically approve all code changes.
Re: (Score:2)
Re: (Score:2)
In a marketing-driven organization, the software group does not necessarily have this power. If marketing agrees to accept the risks, you sometimes have no choice.
Re: (Score:2)
While they read Slashdot? Surely they will waste more time than that.
Re: (Score:2)
Exactly this. To me this essentially sounds like trying to implement version control within the code. Reinventing the wheel and doing a poor job of it to boot.
There may occasionally be something so trivial that a runtime switch makes sense to back out the functionality (probably in the UI layer). The original poster was talking about an extremely large, layered, and complex system. Feature flags in that context will eventually just make a bad problem worse, if not immediately
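For the rare trivial case this comment concedes, a runtime switch can be as small as an environment variable read at the point of use; `NEW_SORT_UI` below is a hypothetical flag name, and the sketch just shows the shape of it:

```shell
#!/bin/sh
# A back-out switch read at runtime: flipping the variable restores the
# old behaviour without a redeploy. NEW_SORT_UI is a hypothetical flag.
render_list() {
  if [ "${NEW_SORT_UI:-0}" = "1" ]; then
    echo "list (new sort order)"
  else
    echo "list (old sort order)"
  fi
}

render_list       # -> list (old sort order)
NEW_SORT_UI=1
render_list       # -> list (new sort order)
```

Anything more elaborate than this, across the layered system the original poster describes, runs into exactly the combinatorial problem the parent comment warns about.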