Software

Suggestions for Performing Regression Testing?

gmletzkojr asks: "The company that I am currently working at develops a fairly complex industrial controller, complete with embedded software, a GUI on the controller, and a Windows app that connects over Ethernet. On previous versions of a similar project, we performed testing manually - i.e., a monkey presses a button and sees that the light turns on, the widget turns, the GUI updates, etc. However, this is extremely time-consuming (the previous complete regression test took ~3 weeks) and is error prone in itself. How do you perform complete system regression testing? Do you use shrink-wrapped packages, or build your own? How do you test features that are easy for humans to observe, but not as easy for software to detect (i.e., the light came on, the GUI updated when I pressed the external input, etc.)?"
  • by ForestGrump ( 644805 ) on Saturday April 09, 2005 @05:43PM (#12189153) Homepage Journal
    and their name starts with an M. Microsoft maybe? I don't remember. NDAs have a way of making me forget details.

    Anyway, doing regression testing is a two-edged sword.
    1. You have to program more stuff into your program to allow for automation. Now, because you're adding another feature to the entire program, that means more testing to ensure that the added functionality actually works. Lastly, you have to write the regression scripts (or hire someone to). How much time does it take to write scripts? How long will they stay useful? If it takes roughly the same time/money to write scripts as to hire/be the monkey... how much is that worth?

    2. It can be a big convenience if you can justify #1.

    Grump
    • We use C++ in our project, so the solution to your first problem is pretty simple:

      class Foo
      {
          friend class FooTester;
          ...
      };

      That's it. Now you can do whatever you want in your regression testing without molesting your existing code (having to "program more stuff into your program," as you say).
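
      For what it's worth, a minimal self-contained sketch of the tester side - the Start() method and state_ member are invented here purely for illustration:

        #include <cassert>

        class Foo
        {
            friend class FooTester;   // grants the test harness access to internals
            int state_ = 0;           // hypothetical private member, for illustration only
        public:
            void Start() { state_ = 1; }
        };

        class FooTester
        {
        public:
            static void Run()
            {
                Foo f;
                f.Start();
                assert(f.state_ == 1);  // friendship lets the test read private state directly
            }
        };

        int main() { FooTester::Run(); return 0; }
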
  • by noahbagels ( 177540 ) on Saturday April 09, 2005 @05:44PM (#12189166)
    I work at a company with a 30+ year-old mainframe application that runs exclusively via line-mode commands. This software is wrapped by SOAP middleware that I largely wrote, migrating it from our old legacy binary middleware. This SOAP middleware is called by Tomcat - our choice of Java servlet container - serving up dynamic web / transactional content. The application is extremely complex, and the results of almost any command/query change on a minute-by-minute basis, driven by approximately 1 GB of fresh weather and other data updated asynchronously from over 20 data feeds from worldwide reporting stations.

    How do we test this, you ask?
    Break the problem down into:
    (a) components
    (b) component integrations (i.e., network/middleware checks)

    Automate tests for each and you're at least 80-90% of the way there. That is, unit test pieces such as the UI or middleware, and also write tests that verify the connections between components.
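
    As a rough illustration of that split (plain C++ here, since the original question is about an embedded controller; every name below is hypothetical): one test exercises a component's logic in isolation, the other only verifies that a connection to the next component can be made.

      #include <cassert>
      #include <string>

      // (a) Component under test: pure logic, no network or middleware involved.
      std::string BuildQuery(const std::string& station) {
          return "GET WEATHER " + station;
      }

      void TestBuildQuery() {
          assert(BuildQuery("KSFO") == "GET WEATHER KSFO");
      }

      // (b) Integration check: only verifies that the middleware is reachable.
      // This stub stands in for whatever transport (SOAP ping, TCP connect) is really used.
      bool ConnectToMiddleware(const std::string& host) {
          return !host.empty();   // a real check would open a connection here
      }

      void TestMiddlewareConnection() {
          assert(ConnectToMiddleware("middleware.example.com"));   // no actual request issued
      }

      int main() {
          TestBuildQuery();
          TestMiddlewareConnection();
          return 0;
      }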

    If you're unlucky enough to be in a totally mixed-up environment without any separation between GUI, logic, back end, etc. (often lumped together as MVC, though I prefer just to say "component based"), you may have to do some re-architecting to get to a more scriptable setup.

    Our automated tests (web environment, mind you) test for:

    1. Basic web server & Tomcat / Servlet function.
    2. Web/Tomcat Server -> Middleware connection (no actual request)
    3. Tomcat->Middleware->Legacy System connection.
    4. Tomcat->Middleware->Legacy System operations - i.e. actually do something.

    Given the complex and ever-changing data we operate with, we can't test everything that only a human eyeball can spot. However, these tests are re-used for live system monitoring, regression testing, and load testing. They also catch many of the "brain dead" issues - such as a total system meltdown or a specific function being broken by a code change - without requiring a tester to check everything by hand.

    I strongly recommend going for automated testing as much as possible - even if you're only getting at the low-hanging fruit. Breaking the problem down into manageable chunks with unit tests can also save serious hassle later, when trying to figure out why a tester got an error somewhere.
    • Question: How do you test features that are easy for humans to observe, but not as easy for software to detect (ie, the light came on, the GUI updated when I pressed the external input, etc)?

      Answer: I work at a company with a 30+ year mainframe application that runs exclusively via line mode commands followed by several paragraphs that have nothing to do with the original question, but instead tell how to use (yawn) unit testing.

      Relevancy: None.
      • Yeah, I was kinda wondering when he was going to relate his answer to the question or get back on track too... it's kinda funny, though, that he would waste his time typing all that hoping people would think he was smart, but half of intelligence is relevance and there were no clear examples of either there... hmmm...
      • [...]followed by several paragraphs that have nothing to do with the original question, but instead tell how to use (yawn) unit testing.

        He could have connected the dots a little better, but he described a perfectly good approach. The poster asked "How do you perform complete system regression testing?" and a good way to do that is to break the system down.

        Note that the post you're griping about talked not just about unit testing, but integration testing. Both are important to getting an app like this well
      • Hmmm... the ask /. question first laid out the problem in some detail, noted specifically that the current process was time consuming and error prone, then asked three questions.

        noahbagels' post laid out the process followed where he works, which addressed both the time-consuming and error-prone aspects as well as answering the first question. In noting specifically how they've gone about doing it, he answered the second question.

        You, of course, have tried (unsuccessfully I might add) to show y


  • A few years ago, MS had a testing aid called - creatively enough (for MS, at least...) - Microsoft Test.

    As you didn't even get it with MSDN Universal, I never got the chance to evaluate or use it, but perhaps there is a counterpart in OSS...?

    I'd love to find one... TIA
    • I think you mean MS Visual Test, which was (surprisingly) sold off to Rational Software. Rational has, I believe, in turn been sold to IBM. (At least, the first result of a Google search for "Rational Visual Test" is now an ibm.com page.) It was a Rational product when I first encountered it; I thought that it was pretty decent. However, it was based on Visual Basic; I would have preferred something based on C or C++ as those were the languages I was used to writing in at the time. There was a 30-day d
      • Ahh.

        Judging by your UID, you are not new here... but let me tell you how times have changed. Microsoft now advertises on Slashdot!

        Yes my friend. The distribution of Linux users on this site is swiftly declining. As Slashdot becomes more mainstream, you can expect to see this trend continue.
  • Mercury? (Score:4, Informative)

    by spacecowboy420 ( 450426 ) * <rcasteen.gmail@com> on Saturday April 09, 2005 @06:52PM (#12189492)
    Mercury has a decent suite.

    Quality Center: helps you write manual test scripts, create new test cases, and do defect tracking.

    QuickTest Pro: semi-decent integration with Quality Center, so you write your test cases there and then bust them out to code. You CAN just record a test and run it, but the lifetime of your script may vary. You need to build in tons of logic just to ensure the script can handle whatever is thrown at it and deal with various error conditions.

    Caveats:
    If you like open source, this is not for you.
    If you don't like VBScript, you may find it a difficult path to use other languages. There are options, but if you have a large automation team a common language is useful, and VBScript is sometimes the common denominator.

    If you ARE familiar with LoadRunner or their other performance tools, you'll love it.
    Even if you don't mind VBScript, you may find their tool unwieldy in the way it picks up objects.

    All in all, it's probably the BEST commercial offering - simply because a monkey can use it. There are good open-source ones if you can code, e.g. "Push To Test" if you like Java.

    PS: Sorry there are no links - I figure you'd just Google them anyway.
    • We use WinRunner and TestDirector for this at work. We built a WinRunner framework so that business analysts can basically "script" the tests using Excel data sheets. We run them at night using TestDirector and review the results in the morning. There is a lot of up-front work, but once you get everything working it is pretty easy to maintain and use with minimal resource expense. Of course, any major change in the application will break things. It is no panacea, but we are having decent results.
    • Re:Mercury? (Score:3, Informative)

      by lowmagnet ( 646428 )

      I have built testing frameworks in both WinRunner and QuickTest Pro, and I have to say the QTP framework was easier to write. Using the XLS-as-database structure built into everything COM, we were able to write tests in Excel files, and do it in about 1,500 lines of code.

      Anything is simple if you break down each step into a window, object, action, input, output format. One thing we found that makes our lives somewhat easier is to forego the entirety of QTP except as an execution engine. We don't use actio
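
      A hedged, language-agnostic sketch of what that row format boils down to (plain C++ standing in for QTP; every name here is hypothetical):

        #include <cassert>
        #include <string>
        #include <vector>

        // One spreadsheet row: window, object, action, input, expected output.
        struct TestStep {
            std::string window, object, action, input, expected;
        };

        // Stand-in execution engine: echoes the input back so the sketch is runnable;
        // a real framework would drive the application under test here.
        std::string Execute(const TestStep& s) {
            if (s.action == "type") return s.input;   // pretend the field now holds the typed text
            return "";
        }

        int main() {
            std::vector<TestStep> steps = {
                {"Login", "UserField", "type", "admin", "admin"},
            };
            for (const auto& s : steps)
                assert(Execute(s) == s.expected);     // compare actual result to the expected column
            return 0;
        }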

    • Yeah, my opinion on Mercury is mixed. On one hand, test script development is a time-consuming process that can involve nearly as much work as developing the product you intend to test, especially when you are starting from the ground up. This isn't Mercury's fault - it's just the way it is - but Mercury will have you believe that all you do is install WinRunner, click the Record button, and it records your test script as you use your product; save the result and your work is done - all you have to do from now
  • by pfdietz ( 33112 ) on Saturday April 09, 2005 @07:27PM (#12189639)
    Can you cleanly separate the internals of the system from the external interfaces? That way, interaction with the internals can be scripted without having to watch GUIs or blinking lights. You do have to test the part that drives the GUI/etc., but that is presumably easier than having to do all the system tests through it.
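
    One hedged sketch of that separation (all names invented for illustration): put the external outputs behind an interface, so tests drive the internals and read back a fake instead of watching a real light.

      #include <cassert>

      // External interface the internals talk to, instead of poking the hardware or GUI directly.
      struct IndicatorPanel {
          virtual void SetLight(bool on) = 0;
          virtual ~IndicatorPanel() = default;
      };

      // Test double: records what the internals asked for.
      struct FakePanel : IndicatorPanel {
          bool light = false;
          void SetLight(bool on) override { light = on; }
      };

      // The internals: pure logic, scriptable without watching anything blink.
      void HandleExternalInput(bool input, IndicatorPanel& panel) {
          panel.SetLight(input);   // e.g., "the light comes on when the input is asserted"
      }

      int main() {
          FakePanel panel;
          HandleExternalInput(true, panel);
          assert(panel.light);     // scripted check, no human watching a lamp
          return 0;
      }
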
  • I've seen it tried with a stopwatch... You are feeling sleepy... you are feeling sleepy...
  • AutoIt? (Score:5, Interesting)

    by Futurepower(R) ( 558542 ) on Saturday April 09, 2005 @08:00PM (#12189827) Homepage

    Looks like the free AutoIt [hiddensoft.com] would help.

    Also see AutoIt Script home page [autoitscript.com].
  • Good luck... (Score:5, Informative)

    by bluGill ( 862 ) on Sunday April 10, 2005 @01:31AM (#12191390)

    Good luck. I have found that regression tests often cost just as much as simply paying someone to do the test. The only time they are an advantage is when you run them against the daily builds. The problem is that when you do that, every day some test will fail because of a new feature, and the test then needs to be changed rather than a bug written up. (Of course you will catch bugs too.) So you have to have a full-time programmer whose only job is to keep the regression tests up to date. This can be more expensive than just hiring testers. (Kids are cheap, and most of them can operate a computer well enough to do your testing if you keep track of them.)

    Don't read too much into that downside. The more expensive programmer will in the end catch more bugs with the regression test suite, so your code has fewer bugs when you ship. However, it is expensive.

    Automatic regression tests are NOT a substitute for manual regression testing! You still need to plan on some time (though not the 3 weeks you did before) for regression testing, just to make sure the automatic tests didn't miss anything. Much of this will be done as part of your normal tests, but make sure you are watching for regressions when you test.

    • Re:Good luck... (Score:2, Interesting)

      by dubl-u ( 51156 ) *
      The more expensive programmer will in the end catch more bugs with the regression test suite, so your code has fewer bugs when you ship. However, it is expensive.

      I think this can be true if you write your regression tests afterward. But if you're doing test-driven development, so that your app is designed from the beginning for testability, then I think the costs are pretty reasonable. In my experience, the time spent testing is more than made up for by the time and stress saved on debugging.

      On a recent 9-month
    • I agree with you that testing is extremely expensive. At Microsoft, only about half of their full-time programmers contribute new code to products. The rest (literally thousands of programmers) are writing automated test code to make sure everything is working properly.
  • by Spoing ( 152917 ) on Sunday April 10, 2005 @02:07AM (#12191493) Homepage
    Keep in mind that if the UI or back end changes on a regular basis, you will also be making substantial changes to the automated tests. Part of this can be dealt with automatically by a good tool, though for most substantive changes, or ones that change the workflow in even minor ways, that will not be the case.

    Also, people tend to think that automated testing takes less time... it *CAN*, though expect that on many projects it will take much more time, since automated tests are detailed and implementation-specific; you can't create tests at the spec level unless your specs are detailed design documents too, and even then only in a limited way.

    The time savings kick in when you want to frequently repeat the tests across the whole project whenever even minor changes are made to the code in one place. It also allows you to be somewhat certain that only the things you expect to change do indeed change.

    If you do not have the budget or time to do complete manual tests, forget about automating it unless you are dealing with a very static project that requires excessively detailed testing.

    I expect people to disagree on much of what I wrote above...when they do, keep in mind that situations can differ. These are just general rules of thumb and worked for the vast majority of projects I've been on.
    • Well put.

      I am a software test engineer; I lead a small software test team, and I am a huge advocate for automation when it is the best solution to the problem.

      James Bach, a leading test automation consultant, wrote an excellent article [satisfice.com][PDF] debunking some of the reckless assumptions folks make about test automation.

      Generally speaking (and this is VERY context-dependent) I would like to have both skilled manual regression testing and regression suites written by skilled automation engineers, but if forc

  • by dubl-u ( 51156 ) * <2523987012&pota,to> on Sunday April 10, 2005 @02:39AM (#12191609)
    How do you test features that are easy for humans to observe, but not as easy for software to detect (i.e., the light came on, the GUI updated when I pressed the external input, etc.)?"

    For the GUI, I recommend instrumenting your app so you can programmatically tell what's going on. An API is one way, but a quick and dirty way is just to keep an internal event log and then probe that. Then, for free, you get a detailed log you can dump if there's an error in production.
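
    A bare-bones sketch of that quick-and-dirty event log (names invented; a real one would add timestamps and a size cap):

      #include <algorithm>
      #include <cassert>
      #include <string>
      #include <vector>

      // Internal event log the app appends to whenever something observable happens.
      class EventLog {
          std::vector<std::string> events_;
      public:
          void Record(const std::string& e) { events_.push_back(e); }
          bool Contains(const std::string& e) const {
              return std::find(events_.begin(), events_.end(), e) != events_.end();
          }
      };

      int main() {
          EventLog log;
          // Wherever the GUI code updates the display, it also records the fact:
          log.Record("gui.updated:pressure");
          // The test probes the log instead of scraping the screen:
          assert(log.Contains("gui.updated:pressure"));
          return 0;
      }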

    For the physical hardware, consider building a simulator. You could do it partly in hardware, adding simulation logic to your hardware controller, but running disconnected from the machinery. Or you could build another board that connects to your production board's inputs and outputs and simulates the machinery at an electrical level. Another option is to simulate your production board entirely, leaving the embedded code out of the testing loop.
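
    A toy version of the simulate-the-machinery idea (pin numbers and names are made up): the simulator implements the same I/O boundary the controller uses and closes the loop in software.

      #include <cassert>

      // The I/O boundary the controller code already talks through.
      struct MachineIO {
          virtual void SetOutput(int pin, bool level) = 0;
          virtual bool ReadInput(int pin) const = 0;
          virtual ~MachineIO() = default;
      };

      // Simulator: models a widget whose "turning" sensor (input 7)
      // reads high once the motor output (pin 3) is driven high.
      struct SimulatedMachinery : MachineIO {
          bool motor_on = false;
          void SetOutput(int pin, bool level) override { if (pin == 3) motor_on = level; }
          bool ReadInput(int pin) const override { return pin == 7 && motor_on; }
      };

      int main() {
          SimulatedMachinery sim;
          sim.SetOutput(3, true);        // what the controller logic would do
          assert(sim.ReadInput(7));      // "the widget turns," observed without any hardware
          return 0;
      }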

    The right choices depend a lot on where you can get the best bang for your test automation buck. Unfortunately, starting with a lot of untested legacy code means you have a long slog ahead of you. Start with the modules that generate the most bugs, or the bugs that are hardest to find during manual testing, and automate your tests of those. That will teach you a lot about good ways to test in your environment.
    • Hmm, but that's not really testing the GUI directly. Sometime back I read about an IBM tool that actually does OCR on a predefined section of the screen to verify the functionality of the user interface.

      There are also tools that query the actual components rendered on a window - sort of like the Spy++ tool included in Visual C++. This is the approach taken in the book "Effective GUI Test Automation" (ISBN 0-7821-4351-2)

      With cheap webcams, open-source software for detecting/comparing changes in images (sim
      • Hmm, but that's not really testing the GUI directly.

        This is true. If you're going for theoretical purity, that might be necessary. In a commercial effort, though, you're more interested in bang for the buck.

        If you instrument your controller code (which is often easiest), then you catch all bugs except those in the GUI toolkit itself and in the connection between the GUI library and your app - which, I understand, is not a big source of bugs, as it's simple stuff that programmers tend to hand-test pretty well.

        If you go further an
        • Yes, I agree that testing is a 'maximum bang for buck' engineering discipline.

          I suggested the webcam for the physical blinking-lights stuff the poster mentioned - this should be possible even today with the open-source surveillance software I mentioned in passing (set a zone, and the software continually analyses it for image differences). For the GUI, screen capture would be more efficient.
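
          For the curious, a minimal sketch of that zone-diff step on raw grayscale buffers (frame capture itself is outside the sketch; everything here is illustrative):

            #include <cassert>
            #include <cstdint>
            #include <cstdlib>
            #include <vector>

            // Count pixels inside a rectangular zone whose brightness changed by more than
            // `threshold` between two grayscale frames laid out row-major with the given width.
            int ChangedPixels(const std::vector<std::uint8_t>& prev,
                              const std::vector<std::uint8_t>& curr,
                              int width, int x0, int y0, int x1, int y1, int threshold) {
                int changed = 0;
                for (int y = y0; y < y1; ++y)
                    for (int x = x0; x < x1; ++x)
                        if (std::abs(int(curr[y * width + x]) - int(prev[y * width + x])) > threshold)
                            ++changed;
                return changed;
            }

            int main() {
                std::vector<std::uint8_t> dark(64 * 64, 0), lit(64 * 64, 0);
                lit[10 * 64 + 10] = 255;                         // one "LED" pixel turned on
                assert(ChangedPixels(dark, lit, 64, 0, 0, 64, 64, 50) == 1);
                return 0;
            }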

          True, creating a testing tool that did screen capture would be a hassle. However, IBM already Also, most informed people also
  • If you're not averse to proprietary software...

    Mercury Interactive makes a great set of tools for regression and load testing. If the app is Windows-based, WinRunner or QuickTest will allow you to script end-user actions for repetitive testing scenarios. LoadRunner is a great way to test a back-end system, not just for load but also for regression. And, if you're not using a bug-tracking mechanism already, TestDirector is a nice web-based tool for bug tracking that integrates well with the other tools.

    I u
  • I've used a product called QFTest http://www.qfs.de/en/qftestJUI/ [www.qfs.de] for performing long-term stress testing and functional testing on Java Swing applications.

    It is similar to many of the other products that do automated GUI testing, but will run on Linux and Windows (others are very Windows-centric). It is also scriptable using Jython.

    Doesn't relate directly to your needs but I thought others working with Java might find this information useful.
  • My previous project used JUnit [junit.org] very successfully, and had it integrated into their nightly build process. At the end of each day, all the developers checked in their modules and found out the next morning, via the regression test reports, whether their changes had caused someone else's module to blow up! :D
    • I am a fan of JUnit (and the xUnit family) as well. It's a fabulous toolset for unit testing...which complements but is not the same thing as regression testing.

      A good unit test suite will catch many errors which might break existing functionality. Regression testing on the other hand tests changes after integration, tends to be more GUI-driven and user-oriented, is often best done manually, and is likely to find a somewhat different set of bugs than unit testing will.

  • GUI, hunh? (Score:2, Insightful)

    by Intron ( 870560 )
    Reminds me of HP storage. They removed a scriptable, serial port CLI in favor of an IE-only interface. Guess which is harder to use and I imagine is a bitch to test. Layering the GUI on top of the CLI would have been my choice, as either a user or a tester.
  • I've used Rational TestStudio and EnterpriseStudio for doing just this sort of thing.

    Other than having to give up your left testicle to afford it (I volunteered my boss for it), it proved itself quite capable of handling unit, integration, system, regression, and full-bore load testing. If you're also planning to do any load testing the "virtual users" will cost you the other testicle (I'd suggest you use multiple levels of management's PHBs - they get skittish after a couple of roosters become hens).
  • Concerning tools, I'd only used the Compuware QA suite, and that was a long time ago. It was not so bad, even if the thing isn't exactly cheap. In the company where I work today, regression testing is performed using internal scripts (shell or other funky languages). I think each case should be considered differently... My understanding is that nobody can test a web application using the same methodology used to test a MATLAB-like application.

    A problem we have today is a conflict between R&D test suites (corner cases, basic functio
