Are Regression Tests an Industry Standard?

Sludge asks: "I just finished leading a team through a software project. It was the first of its type for our company: financial transactions were involved, and it was therefore very intolerant of faults. In order to complete this, a set of regression tests was written. For example, if the amount of money collected doesn't match up with our order table, we get notified via our cellphones' text messaging as soon as the cron job picks it up. Lots of other implementation-specific tests exist as well. My question is: how common is this in the software industry? My company had never heard of this before I came along. Is it the norm? (When you answer, please also say whether or not your company does risk management.)"
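
A minimal sketch of the kind of cron-driven check described above, assuming hypothetical orders and payments tables; the SMS gateway is stubbed out with a print:

    import sqlite3

    def audit_collected_vs_orders(db_path="billing.db"):
        # Compare money collected against the order table, as in the question.
        conn = sqlite3.connect(db_path)
        try:
            (ordered,) = conn.execute(
                "SELECT COALESCE(SUM(total), 0) FROM orders").fetchone()
            (collected,) = conn.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM payments").fetchone()
        finally:
            conn.close()
        if ordered != collected:
            alert("MISMATCH: orders=%s collected=%s" % (ordered, collected))

    def alert(message):
        # Stand-in for the cellphone text-message hook; a real deployment
        # would call an SMS gateway here.
        print(message)

    if __name__ == "__main__":
        audit_collected_vs_orders()

Run from crontab, this audits live data rather than testing code changes, a distinction several of the comments below pick up on.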
  • Yup. (Score:3, Informative)

    by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Monday July 08, 2002 @10:36PM (#3846734) Homepage Journal
    I did this at my last place of employment as well as my current one.

    In fact, regression tests spotted not only implementation errors but also documentation errors: when the semantics of a function changed but the docs didn't, the regression tests (written to the docs) no longer matched.

    Now, strictly speaking, what you describe is more of a sanity audit than a regression test, unless you provide test data to trigger the potential conditions you test for.

    • What tools are you using for regression testing? I'd also like more information on how to use regression testing to catch documentation errors.
      • by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Monday July 08, 2002 @11:41PM (#3847057) Homepage Journal
        We did it the old-fashioned way: designing unit tests for each function, and state-transition tests for stateful interactions. The results were matched against what was supposed to happen, and stimuli for errors were included as well as stimuli for non-error responses. You can scale this technique up to subsystem and system testing and adapt it for client/server and p2p protocols. (A short sketch of the style follows at the end of this comment.)

        You catch doc errors this way because when a test reveals an error, it means one of three things: (a) you coded the test wrong; (b) you found an implementation error; or (c) the implementation is right, but you coded the test to the docs, and the docs were wrong.

        Now, if you use various 4GL tools (dunno if Rational Rose lets you do this), they might be able to automate the tests for you.

        Good luck.
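
        A minimal sketch of that style in Python's unittest; the Connection class is invented here purely to illustrate a state transition with both error and non-error stimuli:

            import unittest

            class Connection:
                """Toy stateful object: CLOSED -> OPEN -> CLOSED."""
                def __init__(self):
                    self.state = "CLOSED"
                def open(self):
                    if self.state != "CLOSED":
                        raise RuntimeError("already open")
                    self.state = "OPEN"
                def close(self):
                    if self.state != "OPEN":
                        raise RuntimeError("not open")
                    self.state = "CLOSED"

            class TestConnection(unittest.TestCase):
                def test_valid_transition(self):
                    # Non-error stimulus: result matched against what should happen.
                    c = Connection()
                    c.open()
                    self.assertEqual(c.state, "OPEN")

                def test_error_stimulus(self):
                    # Error stimulus: opening twice must be rejected.
                    c = Connection()
                    c.open()
                    self.assertRaises(RuntimeError, c.open)

            if __name__ == "__main__":
                unittest.main()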

  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Monday July 08, 2002 @10:51PM (#3846813) Homepage Journal
    Not having automation is one thing. It's actually quite a bit of work to get a good regression test automated. Testers, many of them very green, fresh-out-of-school types, generally do not have the experience necessary to write good tests that can fit into a harness framework. SDETs (software development engineers in test) have the experience, but are more expensive to hire and keep around.

    If management knows about automation but no such system exists in the company, it's mainly a matter of money and tuits. If management doesn't know about automation at all, then you're dealing with very inexperienced leadership. Any book that deals with software testing, even the very simple treatment of the topic by Kaner, Falk, and Nguyen, discusses test automation and regression testing.
    • I wrote some automated test programs in my last job.

      I used a very expensive tool (which we already had) to perform very simple regression testing on a new software package, and it found more faults in an automated run over one weekend than we normally found in a two-month period of testing by hand.

      It saved the company and our customers untold amounts of money, and when the software went live we saw at most 5% of the usual number of faults reported in that area. The customer was delighted.

      For the next project I wanted to expand the tool to cover more of the functionality. Everyone appreciated the reasoning, and I asked for a month or two to develop it. (In a pinch I could have done it in a fortnight, but I'm an engineer, so I padded my estimate :)

      The response? 'Do it the old way - our project timescales are too tight for this'.

      Even a 'give me a fortnight and let me prove the concept' fell on stony ground. This DESPITE the fact that I'd already proved the concept.

      Shortly afterwards, I quit - this was the final straw for me.

      All the wonderful automation, test tools and experienced testers count for nothing if management have an anus/cranium interface issue...
      • I feel bad for you. An automated regression suite would have cut future testing time as well. But again, it seems my first instincts were right: your management couldn't afford the cost (or claimed they couldn't) and certainly didn't have enough tuits.

        For a system that is needed for long-term in-house use and that requires constant development, a regression suite ought to be mandatory. Another post here talks about banking systems.

        OTOH, for a product that is to be shipped to customers, there are a few more considerations. The first is whether the product will continue being developed. If it takes two months to develop the test suite but only two weeks to run a full test pass using current techniques, management will always choose not to develop the suite. If the product is in its first release and appears to have a long, happy road ahead of it, with multiple releases and upgrades, then it makes sense to spend a little time up front developing the suite in order to cut down on the workload later.

        A game wouldn't need an automated regression suite because it is simply a one-off deal. A game engine, OTOH, definitely needs one, as the plan is to use it in many games.
        • This is slightly off topic, but what in the Sam Hell is a "tuit"? The first time it looked like a typo, but by the second time it seemed you meant it. Your usage suggests you really do mean "tuits," so WTF is it supposed to mean?
  • Regression Testing (Score:2, Interesting)

    by SN74S181 ( 581549 )
    The last place I worked did extensive regression testing at all levels of software development. I was a member of a team developing telemetry firmware for communicating with implantable medical devices, so needless to say we needed to minimize any problems. Software 'bugs' in the field are not taken lightly.
  • by Pauly ( 382 ) on Tuesday July 09, 2002 @12:13AM (#3847191)

    In order to complete this, a set of regression tests was written. For example, if the amount of money collected doesn't match up with our order table, we get notified via our cellphones' text messaging as soon as the cron job picks it up.

    What you describe isn't regression testing. Regression testing "is a quality control measure to ensure that the newly-modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity." [pcwebopaedia.com] More accurately, what you've done is paranoid programming [brighton.ac.uk]. Really, these two things are orthogonal [dictionary.com]. (A sketch contrasting the two appears at the end of this comment.)

    My question is: how common is this in the software industry? My company had never heard of this before I came along. Is it the norm? (When you answer, please also say whether or not your company does risk management.)

    This depends. Every company I've worked for has claimed to be concerned with mitigating risk in both the testing and post-release phases of the software development lifecycle. However, the amount and kind of testing and programming actually done has varied wildly and always ends up being determined by the industry for which the software is being built. In your case, money is the biggest factor. Organizations such as banks and other financial institutions are highly risk-averse because of the responsibilities and legal concerns involved in handling other people's money. It follows that these organizations regularly conduct formal testing of their code and also "program paranoid" to mitigate screw-ups. At the start-ups I've worked at in the past, this was far less true, since getting a product out the door mattered more, and this sort of testing/coding always went out the window as deadlines loomed.

    So to answer your question, yes, regression testing (and paranoid programming) are highly common in the IT industry and their respective importance is a function of the risk aversion of the intended users/customers. My advice is to always practice good, paranoid, professional programming augmented by formal testing procedures. Vary the time spent on each to achieve the appropriate balance.

    Frankly, the best way to enlighten yourself on this matter is to educate yourself in the ways of Extreme Programming [extremeprogramming.org]. The horribly trendy name aside, it is truly the only management fad I've seen in 10+ years that holds any merit.
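
    To make the distinction above concrete, here is a small invented illustration (not from the original post) of the two orthogonal practices:

        # Paranoid programming: a runtime check shipped inside the product.
        def apply_payment(order_total, collected):
            if collected != order_total:
                raise ValueError("paranoia: collected amount does not match order")
            return collected

        # Regression testing: re-running a known-good case against new code
        # and comparing with the previously recorded expectation.
        def test_apply_payment_still_works():
            assert apply_payment(100.0, 100.0) == 100.0

        if __name__ == "__main__":
            test_apply_payment_still_works()
            print("PASS")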

    • I agree with this post.

      Seriously, though. I used to work in a company that wrote testing software. Regression tests are tests whose expected outcomes you already know (including tests that are supposed to fail).
      For example: a few regression tests for a calculator would run some additions, multiplications, etc., and ensure the answers are correct; divide by zero and ensure the calculator fails the calculation gracefully (a "friendly fail"). (A sketch follows at the end of this comment.)

      When you make the next version of the software, you run said tests against the new system to make sure what you just added/modified didn't break the stuff that already worked.

      Yeah, they are an industry norm, especially for sellable products.
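
      A sketch of that calculator example in Python's unittest; the Calculator class is made up here for illustration:

          import unittest

          class Calculator:
              def add(self, a, b):
                  return a + b
              def div(self, a, b):
                  # "Friendly fail": refuse the calculation rather than crash.
                  if b == 0:
                      raise ZeroDivisionError("division by zero")
                  return a / b

          class CalculatorRegression(unittest.TestCase):
              # Re-run this same suite against each new version: anything
              # that used to pass and now fails is a regression.
              def test_addition(self):
                  self.assertEqual(Calculator().add(2, 3), 5)

              def test_divide_by_zero_fails_friendly(self):
                  self.assertRaises(ZeroDivisionError, Calculator().div, 1, 0)

          if __name__ == "__main__":
              unittest.main()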
  • We hardly ever test anything before releasing it. So, regression testing? Our customers are our regression tests: they are quite fast at finding all the stuff we've broken with the new release.
  • I know this is not a commercial project in a large corporation, but I thought I'd mention it anyway. In the GnuGo [gnu.org] project we have a large set of simple test scripts, and we continually keep track of which of them succeed and which fail. There are always some failures, but we can live with that (we more or less have to). This test set has saved us a lot of time. It has also given valuable information on the effects of a new algorithm or piece of code, or on what happens when you adjust some tunable parameters. No expensive tools were used, just a bit of extra coding to make the necessary information available to the tests, and a few scripts...
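
    A rough sketch of that kind of tracker (the directory layout and file names are hypothetical, not GnuGo's actual setup): run every test script, record pass/fail, and diff against the previous run so the effect of a change is visible:

        import json
        import subprocess
        from pathlib import Path

        def run_suite(test_dir="tests", history="results.json"):
            # Run each shell test script; exit status 0 counts as a pass.
            results = {}
            for script in sorted(Path(test_dir).glob("*.sh")):
                proc = subprocess.run(["sh", str(script)], capture_output=True)
                results[script.name] = (proc.returncode == 0)

            # Compare with the previous run to see what a change fixed or broke.
            previous = {}
            if Path(history).exists():
                previous = json.loads(Path(history).read_text())
            for name, passed in sorted(results.items()):
                if name in previous and previous[name] != passed:
                    print("%s: %s" % (name, "FIXED" if passed else "BROKE"))

            Path(history).write_text(json.dumps(results))
            print("%d/%d passed" % (sum(results.values()), len(results)))

        if __name__ == "__main__":
            run_suite()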
  • Regression testing is also a great development tool, not just risk management; there are subtle differences between the two uses.
  • by PinglePongle ( 8734 ) on Tuesday July 09, 2002 @08:49AM (#3848660) Homepage
    As someone else has already mentioned, the process you describe is not strictly speaking regression testing.

    I have worked on quite a large number of software projects, and every single one included "testing". The level to which software is tested, however, varies widely. One project I worked on was a billing application which collected the entire company's annual revenue. Yep, we tested that one pretty rigorously... but I've also worked on web site projects where the downside of getting it wrong was not so severe; there, we tested almost as an afterthought.

    There are a lot of test "gurus", and a bunch of different methodologies to provide a testing framework. Check out testing.com [testing.com] to get a feel for this...

    It all boils down to deciding how much time you spend on testing versus other quality assurance methods. Testing is the most expensive and least effective way of finding bugs, except for releasing the code to your customers. Practices such as specification, design, and code reviews, design-by-contract, and aspect-oriented programming give you far more bang for your buck.

    FWIW, on the billing project we had a formal specification review to make sure that the product we built did what the business needed, a business representative to help fill in the blanks in the specification, and a design review to make sure that the software we intended to build was indeed what the specification asked for and made sense in its own right. We produced numerous prototypes and mock-ups so our customers could tell us whether we were on the right track without having to learn to read software design documents.
    During the coding phase, we created unit and integration tests which checked the kinds of thing you mention (e.g. the order total must equal the sum of the order lines), and we had a dedicated test resource. We ran code reviews. We also made sure we showed work in progress to our business sponsors as often as we could.
    When we thought we were done, we had a formal show-and-tell to present our work to the business; this led to a bunch of rework, which again was tested, reviewed, etc.

    The software was successful from the business point of view; with hindsight, I'd say the code was truly awful, and I wish we'd spent more time on code reviews. How important was testing in the QA process? It provided a useful yardstick to tell us how close we were to meeting our objectives. Would I have relied on testing without all the other stuff (reviews, prototypes, great access to the business folk)? Hell no. If you don't know that what you're testing is what the customer wants, your tests are pretty much valueless.

    So, I guess I distrust any organisation that over-emphasizes testing as a QA process; there are better ways of avoiding bugs. On the other hand, you have to provide the appropriate level of testing: if you're writing nuclear missile guidance systems, you need to allocate a lot more resources to testing than if you're building a website to hail your cats' achievements as politicians.

  • You have to be careful with phrases like "regression testing". On one project here, all the managers were spouting it when no one knew what it meant. They just enjoyed saying "regression" (it sounds smart) and putting it on the reports to imply quality where there was none.

    So, while I still don't understand what regression testing really is, I do know enough to warn you to learn what it is before you begin employing it. That way, fewer people on the project will be fooling themselves about the quality of the end product.
  • Yes, we do it. (Score:3, Informative)

    by Spudley ( 171066 ) on Tuesday July 09, 2002 @12:35PM (#3850169) Homepage Journal
    Regression testing has been going on at my office since I joined, and for many years previously.
    For our Unix-based app, we use a terminal emulator which supports scripting to send sequences of characters to the app, simulating normal use. Very easy, and very efficient. (A sketch of the idea appears at the end of this comment.)
    We're currently trying out various Windows-based regression-testing packages to test our brand-new Windows-based app (which, sadly, is due to replace the Unix app), but it's proving much harder to do under Windows than under character-based terminals, because of the mouse-driven, event-driven nature of the environment.
    We are starting to get to grips with the problem, but it has been a much bigger task than we expected. If a minor detail (e.g. the size of an input field) changes in the Unix app, no changes are needed to the test suite; under Windows, you have to keep much tighter control over it.
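
    A sketch of the character-terminal scripting idea using the third-party pexpect library; the application name, menu, and prompts are invented for illustration:

        import pexpect

        def test_order_entry():
            # Drive the character-based app exactly as a user would.
            app = pexpect.spawn("./orders_app", timeout=10)
            app.expect("Main Menu")        # wait for the start screen
            app.sendline("1")              # pick the "new order" option, say
            app.expect("Order total:")
            app.sendline("42.00")
            app.expect("Order saved")      # the assertion: expected text appeared
            app.close()

        if __name__ == "__main__":
            test_order_entry()
            print("PASS")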
  • It was the first of its type for our company

    ...

    In order to complete this, a set of regression tests was written.

    That's not really regression testing. Regression testing is making sure that something which used to work still works; this is done by re-running the tests that were originally used to qualify the software.

    If you don't have any test results from the "before" system, you cannot compare them with test results from the "after" system. And without such a comparison, you are not regression testing.

    The one example test you describe is more of an internal-consistency error check. These are important too, of course. Perhaps the most difficult part of testing such error handlers is that you have to cause an internal inconsistency in order to exercise the error-handling code.
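
    A sketch of that idea: deliberately inject an inconsistency into a throwaway database and assert that the consistency check notices. The schema and function names are hypothetical:

        import sqlite3

        def find_mismatch(conn):
            (ordered,) = conn.execute(
                "SELECT COALESCE(SUM(total), 0) FROM orders").fetchone()
            (collected,) = conn.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM payments").fetchone()
            return ordered != collected

        def test_checker_detects_injected_fault():
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE orders (total REAL)")
            conn.execute("CREATE TABLE payments (amount REAL)")
            conn.execute("INSERT INTO orders VALUES (10.0)")
            conn.execute("INSERT INTO payments VALUES (10.0)")
            assert not find_mismatch(conn)  # consistent data: no alarm
            conn.execute("INSERT INTO payments VALUES (-1.0)")  # inject the fault
            assert find_mismatch(conn)      # the check must now fire

        if __name__ == "__main__":
            test_checker_detects_injected_fault()
            print("PASS")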
