Are Regression Tests an Industry Standard?
Sludge asks: "I just finished leading a team through a software project. It was the first of its type for our company: financial transactions were involved, so it had to be very fault intolerant. To that end, a set of regression tests was written. For example, if the amount of money collected doesn't match our order table, we get notified via our cellphones' text messaging as soon as the cronjob picks it up. Lots of other implementation-specific tests exist as well. My question is: how common is this in the software industry? My company had never heard of it before I came along. Is it the norm? (When you answer, also say whether or not your company does risk management.)"
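The kind of nightly reconciliation check Sludge describes might look like the following sketch. The table shapes, field names, and alert hook are all assumptions for illustration, not the poster's actual code.

```python
# Hypothetical sketch of a nightly reconciliation check: compare money
# collected against the order table and flag any mismatches.

def find_discrepancies(orders, payments):
    """Return order IDs whose collected total does not match the amount due.

    orders:   {order_id: amount_due}
    payments: list of (order_id, amount_paid) tuples
    """
    collected = {}
    for order_id, amount in payments:
        collected[order_id] = collected.get(order_id, 0) + amount
    return sorted(
        oid for oid, due in orders.items()
        if collected.get(oid, 0) != due
    )

def nightly_audit(orders, payments, alert):
    """Cron entry point: call the alert hook (e.g. an SMS gateway) on mismatch."""
    bad = find_discrepancies(orders, payments)
    if bad:
        alert("Order/payment mismatch: %s" % ", ".join(bad))
    return bad
```

In a real deployment `alert` would wrap whatever SMS or paging service the shop uses; here it is just a callback so the check itself stays testable.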
Yup. (Score:3, Informative)
In fact, regression tests spotted not only implementation errors but documentation errors: when the semantics of a function changed, the docs, and hence the regression tests, no longer matched.
Now, strictly speaking, what you describe is more of a sanity audit rather than a regression test, unless you provide test data to trigger the potential conditions you test for.
Regression test tools? (Score:1, Interesting)
Re:Regression test tools? (Score:4, Informative)
You catch doc. errors this way because when the test reveals an error it means one of three things: (a) you coded the test wrong; (b) you found an implementation error; (c) the implementation is right, but you coded to the docs, which were wrong.
Now, if you use various 4GL tools (dunno if Rational Rose lets you do this), they might be able to automate the tests for you.
Good luck.
Automated regression tests are great (Score:3, Informative)
If management knows about automation but no such system exists in the company, it's mainly a matter of money and tuits. If management doesn't know about automation at all, then you're dealing with very inexperienced leadership. Any book that deals with software testing, even the very simple treatment of the topic by Kaner, Falk, and Nguyen, discusses test automation and regression testing.
Don't talk to me about management.... (Score:3, Interesting)
Used a very expensive tool (which we already had) to perform very simple regression testing on a new software package, and found more faults in an automated run over one weekend than we normally found in a two month period testing by hand.
Saved the company and customers untold amounts of money, and when the software went live we had at most 5% of the normal faults reported in that area. Customer was delighted.
For the next project I wanted to expand the tool to cover more of the functionality. Everyone appreciated the reasoning, and I asked for a month or two to develop it. (In a pinch I could have done it in a fortnight, but I'm an engineer, so I padded my estimate.)
The response? 'Do it the old way - our project timescales are too tight for this'.
Even a 'give me a fortnight and let me prove the concept' fell on stony ground. This DESPITE the fact I'd proved the concept already.
Shortly afterwards, I quit - this was the final straw for me.
All the wonderful automation, test tools and experienced testers count for nothing if management have an anus/cranium interface issue...
Re:Don't talk to me about management.... (Score:1)
For a system that is needed for long-term in-house use and requires constant development, a regression suite ought to be mandatory. Another post talks about banking systems.
OTOH, for a product that is to be shipped to customers, there are a few more considerations. First is whether the product will continue being developed. If it takes two months to develop the test suite, but only two weeks to run a full test pass using current techniques, management will always choose not to develop the suite. If the product is in its first release and it appears to have a long, happy road ahead of it with multiple releases and upgrades, then it makes sense to spend a little time up front developing the suite in order to cut down on the workload later on.
A game wouldn't need an automated regression suite because it is simply a one-off deal. A game engine, OTOH, definitely needs one as the plans are to have it in many games.
Re:Don't talk to me about management.... (Score:1)
Re:my experience... (Score:1)
Posting anonymously for a reason...
*snip*
Jaime McCarthy.
"'Dear idiots: If you ever want to see your pets again, give us the avocado. Love, Waldo and Steve.' Hehehe"
"WHAT??? That's not funny!"
"Of course it's funny. Look, they cut out magazines to make the note and then they went and signed with their names."
Regression Testing (Score:2, Interesting)
industry standard nomenclature (Score:5, Informative)
What you describe isn't regression testing. Regression testing "is a quality control measure to ensure that the newly-modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity." [pcwebopaedia.com] More accurately what you've done is paranoid programming [brighton.ac.uk]. Really, these two things are orthogonal [dictionary.com].
This depends. Every company I've worked for has claimed to be concerned with mitigating risk in both the testing and post-release phases of the software development lifecycle. However, the amount and kind of testing and programming actually done have varied wildly and always end up being determined by the industry for which the software is being built. In your case, money is the biggest factor. Organizations such as banks and other financial institutions are highly risk-averse due to the responsibilities and legal concerns involved in handling other people's money. It follows that these organizations regularly conduct formal testing of their code as well as "program paranoid" to mitigate screw-ups. In start-ups I've worked at in the past, this wasn't nearly the case, since it was more important to get a product out the door, and this sort of testing/coding always went out the window with looming deadlines.
So to answer your question, yes, regression testing (and paranoid programming) are highly common in the IT industry and their respective importance is a function of the risk aversion of the intended users/customers. My advice is to always practice good, paranoid, professional programming augmented by formal testing procedures. Vary the time spent on each to achieve the appropriate balance.
Frankly, the best way to enlighten yourself on this matter is to study the ways of Extreme Programming [extremeprogramming.org]. The horribly trendy name aside, this is truly the only management fad I've seen in 10+ years that holds any merit.
Re:industry standard nomenclature (Score:3, Informative)
Seriously, though. I used to work in a company that wrote testing software. Regression tests are tests that you know work (and tests that fail when they are supposed to).
For example: A few regression tests for a calculator would be to run a few additions, multiplications, etc, and ensure the answer is correct. Divide by zero and ensure the calculator fails the calculation (friendly fail).
When you make the next version of the software, you run said tests against the new system to make sure what you just added/modified didn't break the stuff that already worked.
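The calculator example above can be sketched as a tiny regression suite: a table of known-good cases plus the "friendly fail" case, rerun against each new build. The `calc` function here is a stand-in; in practice you'd call the real product.

```python
# Minimal regression suite for a calculator: known-good answers plus a
# divide-by-zero case that must fail cleanly (a "friendly fail").

def calc(op, a, b):
    """Stand-in for the application under test."""
    if op == "add":
        return a + b
    if op == "mul":
        return a * b
    if op == "div":
        if b == 0:
            raise ZeroDivisionError("friendly fail: division by zero")
        return a / b
    raise ValueError("unknown op: %r" % op)

REGRESSION_CASES = [
    ("add", 2, 3, 5),
    ("add", -1, 1, 0),
    ("mul", 4, 5, 20),
    ("div", 10, 4, 2.5),
]

def run_regression():
    """Return the list of cases that broke in this build (empty = pass)."""
    failures = []
    for op, a, b, expected in REGRESSION_CASES:
        if calc(op, a, b) != expected:
            failures.append((op, a, b))
    # The failure case must still fail, and fail the friendly way.
    try:
        calc("div", 1, 0)
        failures.append(("div", 1, 0))
    except ZeroDivisionError:
        pass
    return failures
```

When version N+1 ships, you rerun `run_regression()` unchanged; any non-empty result means something that used to work no longer does.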
Yeah, they are the industry norm, especially for sellable products.
Testing? (Score:2)
Re:Testing? (Score:1)
Simple scripts (Score:2)
change management (Score:2)
there are subtle differences between the two.
Testing is very common (Score:3, Insightful)
I have worked on quite a large number of software projects, and every single one included "testing". The level to which software is tested, however, varies widely. One project I worked on was a billing application which collected the entire company's annual revenue. Yep, we tested that one pretty rigorously. But I've also worked on web site projects where the downside of getting it wrong was not so severe; we tested almost as an afterthought.
There are a lot of test "gurus", and a bunch of different methodologies to provide a testing framework. Check out testing.com [testing.com] to get a feel for this...
It all boils down to deciding how much time you spend on testing versus other quality assurance methods. Testing is the most expensive and least effective way of finding bugs, except for releasing the code to your customers. Practices such as specification, design, and code reviews, design-by-contract, and aspect-oriented programming give you far more bang for your buck.
FWIW, on the billing project, we had a formal specification review to make sure that the product we built did what the business needed, a business representative to help fill in the blanks in the specification, and a design review to make sure that the software we intended to build was indeed what the specification asked for, and made sense in its own right. We produced numerous prototypes and mock-ups to get our customers to tell us we were on the right or wrong track without having to learn to read software design documents.
During the code phase, we created unit and integration tests which measured the kinds of thing you mention (e.g. order total must equal sum of order lines), and had a dedicated test resource. We ran code reviews. We also made sure we showed the work in progress to our business sponsors as often as we could.
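The order-total invariant mentioned above (order total must equal the sum of the order lines) makes a natural unit test. The data shape here is an assumption for illustration:

```python
# Sketch of a unit-level invariant check: the order header total must
# equal the sum of its line amounts.

def order_total_consistent(order):
    """order = {"total": ..., "lines": [{"amount": ...}, ...]}"""
    return order["total"] == sum(line["amount"] for line in order["lines"])
```

A test suite would assert this invariant over representative orders, and again over every order touched by an integration run.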
When we thought we were done, we had a formal show-n-tell to present our work to the business; this led to a bunch of rework, which again was tested, reviewed, etc.
The software was successful from the business point of view; with hindsight, I'd say that the code was truly awful, and I wish we'd spent more time on code reviews. How important was testing in the QA process? It provided a useful yardstick to tell us how close we were to meeting our objectives. Would I have relied on testing without all the other stuff - reviews, prototypes, great access to the business folk? Hell no - if you don't know that what you're testing is what the customer wants, your tests are pretty much valueless.
So, I guess I distrust any organisation that over-emphasizes testing as a QA process - there are better ways of avoiding bugs. On the other hand, you have to provide the appropriate level of testing - if you're writing nuclear missile guidance systems, you need to allocate a lot more resources to testing than if you're building a website to hail your cats' achievements as politicians.
Buzzword (Score:1)
So, while I still don't understand what regression testing really is, I do know enough to warn you: learn what it is before you begin employing it. That way, fewer people on the project will be fooling themselves about the quality of the end product.
Yes, we do it. (Score:3, Informative)
For our Unix-based app, we use a terminal emulator that supports scripting to send sequences of characters to the app, simulating normal use. Very easy, and very efficient.
We're currently in the process of trying out various Windows-based regression testing packages to test our brand-new Windows-based app (which, sadly, is due to replace the Unix app), but it's proving to be a much harder thing to do under Windows than under character-based terminals, because of the mouse-driven and event-driven nature of the environment.
We are starting to get to grips with the problem, but it has been a much bigger task than we expected. If a minor detail (e.g. the size of an input field) changes in the Unix app, no changes are needed to the test suite; under Windows, you have to keep much tighter control on it.
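The character-terminal approach above can be approximated in a few lines: feed a scripted sequence of keystrokes to the app and compare the transcript against a saved "golden" copy. This sketch drives the app through a pipe via `subprocess`; the command and keystrokes are placeholders for whatever the real business app expects.

```python
# Rough analogue of scripted terminal regression testing: send canned
# input to a character-based app, capture its output, and diff it
# against a previously recorded transcript.

import subprocess

def run_script(command, keystrokes):
    """Send each line of input to the app and capture its full output."""
    proc = subprocess.run(
        command,
        input="\n".join(keystrokes) + "\n",
        capture_output=True,
        text=True,
    )
    return proc.stdout

def regression_check(command, keystrokes, golden_output):
    """Pass if the new build's transcript matches the recorded one."""
    return run_script(command, keystrokes) == golden_output
```

The point the poster makes still holds: a character transcript is insensitive to cosmetic changes like field sizes, whereas GUI tests under Windows break on exactly that kind of detail.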
That's not regression testing (Score:2)
In order to complete this, a set of regression tests were written.
That's not really regression testing. Regression testing is making sure something that used to work still works; this is done by running tests that were originally done to qualify the original software.
If you don't have any test results from the "before" system, you cannot compare them with test results from the "after" system. And without such a comparison, you are not regression testing.
The one example test you describe is more of an internal-consistency-error-handler check. These are important too, of course. Perhaps the most difficult part of internal-consistency-error-handler testing is that you have to cause an internal inconsistency to test the error-handling code.
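That last point, deliberately causing an inconsistency to exercise the error handler, is just fault injection. A hedged sketch, with illustrative names:

```python
# Fault-injection sketch: corrupt the data on purpose and confirm the
# internal-consistency check actually fires.

def check_ledger(orders, collected_total, on_error):
    """The handler under test: flag a mismatch between the order table
    total and the amount actually collected."""
    expected = sum(orders.values())
    if collected_total != expected:
        on_error("mismatch: expected %d, got %d" % (expected, collected_total))
        return False
    return True

def handler_fires_on_injected_fault():
    orders = {"A1": 100, "A2": 50}
    alerts = []
    # Sanity: consistent data passes quietly.
    ok = check_ledger(orders, 150, alerts.append) and not alerts
    # Inject the inconsistency: drop 10 units from the collected total.
    fired = not check_ledger(orders, 140, alerts.append) and bool(alerts)
    return ok and fired
```

Without a test like this, the notification path (the cronjob, the SMS hook) may sit unexercised until the day a real inconsistency appears.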