
How Do You Prove Software Testing Saves Money?

Posted by samzenpus
from the taking-it-for-a-test-drive dept.
cdman52 writes "I work at a small software development company. We have one app that is used by a few hundred clients and was initially developed by a few undergrads about 10 years ago. The app is a collection of about 25 developers' preferences and ideas. Testing wasn't an initial concern, I guess, since it was created as an internal application. Anyway, the app is now large and used frequently. Part of my duties is to fix bugs users find. I'm on a team with a few other people, and at least once every 2-3 months I see some bug I fixed come back, and I can only assume it's because we don't have a formal test suite. The owner doesn't want to invest time or money in getting one set up, but I'm sure that in the long run it would save time and money. Can anyone offer suggestions for how to convince the owner that setting up a test suite is in his own best interest?"
  • by ls671 (1122017) * on Monday January 03, 2011 @03:44AM (#34741948) Homepage

    I would say the most important thing to save money is a good design and programming guidelines for the application.

    Test suites are obviously a good idea, but building one may end up a tedious task if the application had a poor or no real starting design and everybody just did things their own way.

    I have seen many applications that had grown larger than expected; they typically cost 10 times more than they should to maintain, bug-fix and implement change requests.

    It is sometimes possible to fit existing code in a new clean design and that saves a lot of money if the application is going to be used for a long time.

    Let's hope your application has a proper design! ;-)

    • by Chatsubo (807023)

      I agree with the above. When you set out to write code knowing you're going to have to test it, it influences how you do the design: you insulate functionality from the rest of the system so that it is easily testable.

      The problem, however, is you've just made this guy's problem even worse. He can't even get allocation for testing; now he'll need allocation for refactoring AND testing.

      My opinion is to break it down into small incremental changes that may take years to filter through the whole system. He

      • by ls671 (1122017) *

        > My opinion is to break it down into small incremental changes....

        I agree, I just posted a reply that is along the same line:

        http://slashdot.org/comments.pl?sid=1932550&cid=34742086 [slashdot.org]

      • by Kjella (173770)

        Then design wise, the thing to do would be for every problem found, see what can be re-factored and improved right there and then, and do it immediately (with test cases to prevent regression).

        Well, in my experience it goes like this: You're looking to refactor the code because the code already is quirky. Some 99% of the time, that's just because it's sloppy coding, poor structure and logic or it is simply undefined or irrelevant, the corner cases are never reached, the function is always called with data of the correct format and so on. Then in 1% of the cases, the application depends on this quirky behavior and all in all works correctly - or at least works - until you clean the code. If you ha

        • by mcvos (645701) on Monday January 03, 2011 @05:11AM (#34742216)

          Well, in my experience it goes like this: you're looking to refactor the code because the code already is quirky. Some 99% of the time, that's just because of sloppy coding or poor structure and logic, or it is simply undefined or irrelevant: the corner cases are never reached, the function is always called with data of the correct format, and so on. Then in 1% of the cases, the application depends on this quirky behavior and all in all works correctly - or at least works - until you clean the code. If you have to assume that everything that is there should be there, you end up with code almost as bad as what you started with. This is the problem with trying to reverse-engineer the intended behavior from existing code.

          1%? I think even if the code is buggy, it's quite likely that the code now depends on those bugs in various ways.

          In any case, the right way to refactor is to make sure you've got proper tests for the code you're going to refactor. That way, after refactoring, you can check whether the functionality is still unchanged.

          So yeah, the submitter has a problem. What he really needs from a code-quality point of view is pretty big and expensive. If there's no money for it, but there is money for continuing as before, then I guess that's the only option here.

  • First things first (Score:5, Insightful)

    by Anrego (830717) * on Monday January 03, 2011 @03:46AM (#34741954)

    Can anyone offer suggestions for how to convince the owner that setting up a test suite is in his own best interest?

    Are you sure that it will be?

    but I'm sure that in the long run it would save time and money

    How much time and how much money?

    What you need here is hard numbers to back up your opinion. Specifically, and I hate to say the word, but you need metrics.

    - How much do these bugs cost? (This is a mixture of hard numbers (dev time, etc.) and "fuzzy" numbers (customer relations and so forth).)
    - How much is setting up this test suite going to cost? How much is it going to cost to use it?
    - How effective do you think it will be (look at existing bugs.. would they have been caught)?

    That last one is especially important. Testing is good, but it doesn't solve all problems and there are other options. Really take a good look at whether previous bugs would have been found in your proposed system.

    Once you have this information, the math then becomes pretty simple.

    If your program is modular-ish, it might be possible to do a trial. This is probably a good idea anyway. There are different ways to do testing, some of which are incompatible with certain development mindsets. Best to find that out early on a small piece rather than on the entire code base.

    • The metrics are a good idea. This will allow the owner to see what it costs him to not have good QA. However, eventually the conversation will float to how expensive it will be to have QA. Like any security situation, you will want to find the happy medium where the trade-off in time/money balances the risk. Given you only have a few hundred users, you might wish to experiment by adding QA incrementally. Also, you might find out why those bugs are coming back. Poor documentation? Or are developers not maintainin
      • by Thing 1 (178996)

        However, eventually, the conversation will float to how expensive will it be to have QA?

        We had a similar discussion a few weeks ago, where the gist was: if you're paying your programmers to write documentation and perform tests, then you're paying way too much for doc and test. This sounds even worse; this sounds like they're "paying their users" to perform the testing.

    • by mcvos (645701)

      but I'm sure that in the long run it would save time and money

      How much time and how much money?

      And how long is that run exactly? I'm all for proper test frameworks, but if it takes 5 years before it starts paying off, and nobody knows where the app will be in 5 years, then it's really not such a great investment at all. So, like parent said: do the math. Figure out how much the test framework will cost, how much the recurring bugs cost, and then you've got some numbers to convince your boss. Or yourself.

      • by tibit (1762298)

        Frankly, it's not much of a business if they can't at least wish for where they'll be in 5 years.

    • by Splab (574204) on Monday January 03, 2011 @05:42AM (#34742304)

      Also, putting testing infrastructure into an existing system is a huge undertaking. And it requires very strict handling of all future commits: if you slip even a little on the test suite, you basically invalidate the whole thing.

      Many times I've tried to implement test suites, both at the request of employers and for my own sanity's sake, and every time it has failed because the workload was simply too overwhelming to support both testing and development.

      In the end, the basic economics of it has dictated that it is cheaper to hot fix than to make it right the first time around.

      • by Glonoinha (587375) on Monday January 03, 2011 @07:38AM (#34742648) Journal

        In the end, the basic economics of it has dictated that it is cheaper to hot fix than to make it right the first time around.

        What the OP failed to provide is the nature of the bugs his users are experiencing, the impact to the users and to the business, and the costs associated with the manifestation of those bugs.

        If we're talking about user interface glitches where a button is not aligned with the others, or the color of the button when pressed is wrong, then it is probably way cheaper to hot fix than to ensure 100% quality on the first pass.
        If we're talking about workflow and data errors in a financial application with regulatory implications, or bugs that bring the system down for an hour and the company loses five figures in revenue for each hour of downtime, that's a different story.

        Testing doesn't necessarily save money. It saves pride and credibility more often than not, but as others are saying, it isn't necessarily the most fiscally prudent choice. Neither is a wholesale rewrite of 10-year-old code (even though that's our first inclination when we inherit code).

        • by Lumpy (12016) on Monday January 03, 2011 @08:00AM (#34742736) Homepage

          Exactly. The cost of not testing may be an IT helpdesk position that could be eliminated by doing better software testing. It can also be loss of productivity, loss of work quality, etc.

          Honestly, just because it's software does not make this hard; replace the word "software" with "business process" or "machine" and you have the exact same problems. Would a company buy a $250,000 machine without research or testing to see if it was the best fit for their solution? No. Yet many buy the crap ticket system that is BMC Remedy for close to the same price just because everyone else does.

          Testing is not an IT process, it's a business process. Lack of software testing is simply an indicator of a failure of management in the company.

        • by Splab (574204)

          Absolutely agree.

          Right now I work for a telecom, a small one, but we still have some pretty strict demands on availability. Testing in our system is an absolute nightmare; what we can do is prove that our program works for well defined input, but once you add customers to the equation, all bets are off. We have at several points in our history looked into test driven development etc., but the fact is, the type of errors we see aren't easily detected by a fixed test. Most problems we experience in production is r

          • by rtaylor (70602) on Monday January 03, 2011 @08:21AM (#34742830) Homepage

            Incidentally, if anyone out there has suggestions on how to reliably test for race conditions, please speak up.

            It's not easy, but I have had good luck with pretty simple load generators and having the system put in random (from very long to zero) delays in the processes. You find lots of race conditions (short delays) and poor or missing interlocking (long delays).

            The tricky part is making it replayable: record the process step, the random pieces of data used, and the delays to a log which can be re-executed to prove the bug exists and to prove the correction.

            For really important systems I've run the random load generator for a month before sending the product out.

          • It is my experience that test driven development is good in that it makes sure the code is unit tested. But unit tests do not take the place of user testing. What many people fail to take into consideration is that just because the application works doesn't mean it is doing what it is supposed to. For example, just because it displays the results of a financial calculation doesn't mean the calculation is correct. Many times it requires someone with in-depth knowledge of the business in order to make that d
    • by arth1 (260657)

      Also, testing introduces bugs, both because there will be bugs in the test code itself and because of bias: even when not meaning to, developers skew new code towards passing the test cases, and skew the tests towards validating the code instead of finding bugs. I've seen examples of bugs that wouldn't have existed if it weren't for the testing.

      Then there's time wasted on testing things that simply don't go wrong, or where the impact would be so huge that it doesn't require a test case - if it prevents t

      • by msclrhd (1211086)

        Tests can fail due to the code under test, the test code, or the environment.

        Tests form a contract between teams (e.g. defining the way a protocol works, invalid and valid data with the expected results). They also serve as a form of documentation -- this is how you use this class; we need to support this corner case; this test checks that we don't regress this security bug.

        Tests also document what your assumptions are -- e.g. does this method allow NULL pointer strings or not? One of my primary test

    • by balloonhead (589759) <doncuan@NosPAM.yahoo.com> on Monday January 03, 2011 @07:19AM (#34742592)
      Why not just edit the splash screen with a big Google-esque "Beta!" stamped over the application title? Problem solved.
    • by bmajik (96670)

      You don't say this explicitly, but most people in the thread seem to be assuming it:

      why is there a focus on automated tests here?

      I would contend that step 1 is writing a document. It describes the test scenarios you think are important, why you think they are important, and their relative importance w.r.t. other scenarios.

      Shop the document around -- include it and update it when the cost/benefit exercise is done.

      Eventually, you will arrive at a point in time where people begrudgingly agree that "we shou

    • by Uzik2 (679490)
      Kudos! Well thought out and written response.
    • Can anyone offer suggestions for how to convince the owner that setting up a test suite is in his own best interest?

      Are you sure that it will be?

      This is the key, I think.

      There is a generally-held belief among coders that "doing it right, the first time" and "rewriting this mess" will save money in the long-term, and that managers are idiots for not seeing that. This can, of course, be true. But it isn't always true, and coders are sometimes projecting their OCD-desire to have nice code (and sometimes suffering from "I didn't write it so it must be crap [wikipedia.org]" syndrome) and assuming that this will translate into the dollars and cents that the company ca

  • Charge him by the bug. You'll see how quickly he'll make sure he's not paying to fix the same one twice.
  • by SuperKendall (25149) on Monday January 03, 2011 @03:57AM (#34741982)

    You need to think very hard about exactly how and why this will save the business money. What do YOU think will be gained, for your software, from a technical standpoint?

    Many people just think testing will make things better. I have seen testing make things better, but I have also seen it make software far harder to maintain and slower to deliver.

    What kinds of problems are you seeing that you think testing will solve? Do you see a lot of maintenance causing other errors? Do you want to do this in preparation for overhauling some part of the system?

    You see bugs come back every 2-3 months, but is preventing that really worse than the overhead of maintaining and adhering to a test suite that all of the developers would have to do? That alone may not be enough of a reason to impose the overhead that testing can cause.

    Since you are fixing bugs, you have access to the codebase. Start a test framework yourself covering the bugs you fix, and run it against the code day by day. See how often it catches issues vs. how often you have to alter the tests because the code has changed in ways that are valid. Then you will have real-world metrics about problems solved vs. costs...

    • by ls671 (1122017) *

      > Start a test framework yourself, of bugs you cover,
      > and run it against the code day by day.

      I second that. It is amazing sometimes how you just have to take the decision and bill it against normal bug-fix time or whatever task is assigned to you. Often, managers won't care as long as the average time to fix a bug or the time spent on your regular tasks remains the same.

      It is harder to do it this way because you then have to do it gradually. We have even redesigned application cores without letting t

  • start small (Score:5, Insightful)

    by sugarmotor (621907) on Monday January 03, 2011 @04:00AM (#34741986) Homepage

    Just start small, with "failing tests" for each new bug. Bug fixed, test passes. Keep expanding the test coverage.

    • Re:start small (Score:5, Insightful)

      by VirexEye (572399) on Monday January 03, 2011 @04:32AM (#34742098) Homepage

      +1 to modding parent up.

      You won't get far convincing a product owner that you should spend months writing tests for the entire system.

      Convincing someone that you should write unit tests for all new functionality to help guarantee the bug fix/new feature will continue to always work into the future is a much easier sell.

      • by arth1 (260657)

        Convincing someone that you should write unit tests for all new functionality to help guarantee the bug fix/new feature will continue to always work into the future is a much easier sell.

        Not necessarily. You need to factor in the time for creating a framework for actually running the unit tests, and for vetting the results (is a failure due to flaws in the code, flaws in the test (a false positive), or irrelevant?).
        And quite often you have to refactor at least parts of the code to accommodate the introduction of unit tests.

        Then there's the meta-cost of collecting and presenting metrics, and meetings with your boss(es) and developers.

        Yes, tests are good. But not necessarily cheap to introduce, even in a

        • Re:start small (Score:4, Informative)

          by msclrhd (1211086) on Monday January 03, 2011 @07:55AM (#34742716)

          #include "myclass.h"
          #include <cassert>

          void test_bug123()
          {
                  myclass a(10);
                  assert(a.value() == 10);
          }

          int main()
          {
                  test_bug123();
                  return 0;
          }

          ---

          How hard was it to create that test framework? You don't need something overly complex, just something that will pass if the test succeeds and fail if it doesn't. You can then improve the test cases/framework as you go along -- look at the Wine project for example. They didn't create http://test.winehq.org/data/ [winehq.org] overnight; it was built up gradually, starting with getting the tests running as part of the build and slowly defining a Wine test API as needed.

          Collecting metrics and reporting them should be done automatically, but not initially.

    • by Usagi_yo (648836)
      Best answer by far for small, non-mission-critical apps. If you can't justify the cost of testing vs. the cost of a bug, then you'll never get funding for a distinct testing project. When you do fix a bug, you invariably code review it and write a test app against it. When it comes to internal apps, testing is typically overlooked and bugs are fixed ad hoc, or workarounds become tribal knowledge. But even the tiniest mission critical application needs to have a test suite developed side by side with the a
    • Re:start small (Score:5, Informative)

      by johnjaydk (584895) on Monday January 03, 2011 @07:42AM (#34742668)
      Excellent advice. Don't try to start a BIG, EXPENSIVE testing program. Have a look at the book: Working Effectively with Legacy Code by Michael C. Feathers
  • WHY do you have to prove software testing saves money? Even a cheap and nasty electrical appliance from China is tested despite there being a lot of motivation to cut corners and not do so. Why should software be any different?
    • by Rosco P. Coltrane (209368) on Monday January 03, 2011 @04:19AM (#34742066)

      WHY do you have to prove software testing saves money? Even a cheap and nasty electrical appliance from China is tested despite there being a lot of motivation to cut corners and not do so. Why should software be any different?

      Rest assured that Chinese companies test their products precisely *because* it saves money. If it didn't, they wouldn't; they don't have a commitment to good quality for its own sake.

      An untested product leads to high rates of return (lost money), customer dissatisfaction, and brand-name damage (a HUGE loss of money in the medium to long term). Nobody puts up with bad products anymore. Software is one of the last kinds of products where it's still somewhat accepted, but people are quickly becoming intolerant of bugs nowadays.

      Still, what is true of physical products (extensive testing on top of proper design and good manufacturing practices) may not be true of software. I.e. the question is: is laborious, careful design and implementation with minimal testing more or less expensive than quicker, laxer design and coding with a strong test/correction feedback process? I'm not sure the answer is clear-cut. As a (former) programmer, I can see arguments for both approaches.

      • by Kjella (173770) on Monday January 03, 2011 @05:20AM (#34742236) Homepage

        Still, what is true of physical products (extensive testing on top of proper design and good manufacturing practices) may not be true of software. I.e. the question is: is laborious, careful design and implementation with minimal testing more or less expensive than quicker, laxer design and coding with a strong test/correction feedback process? I'm not sure the answer is clear-cut. As a (former) programmer, I can see arguments for both approaches.

        Didn't you just answer your own question: extensive testing on top of proper design and good manufacturing practices

        For the first I'll just quote Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it." And if you send piles of shit to testing/QA and expect them to be the impenetrable barrier between you and the customer, you're equally deluded.

        I've found that for anything that should last a while and be stable, there should be a test case. It's so easy to subtly break something. However, I've found that writing a proper test suite that deals with databases, network communication and the like, and not just the application itself, is pretty hard.

      • So you think American companies have a commitment to good quality for its own sake. Just out of curiosity, what do you drive?

      • by tibit (1762298)

        Nobody puts up with bad products anymore.

        Ha, ha, ha. The U.S. market obviously demands plenty of bad, nearly worthless crap, if one were to judge from what's on offer.

        There are simple things like cast-iron hand-cranked meat grinders where the market is saturated with extremely poorly made Chinese crap, with absolutely no alternatives. I know; I've tried to buy a new one and ended up getting a used Porkert instead.

        Another simple thing: portable DVD players. I've already had to fix two of them because they had liquids spilled on them. It's not

      • by gbjbaanb (229885) on Monday January 03, 2011 @07:17AM (#34742590)

        Software is one of the last kind of products where it's still somewhat accepted but people are quickly becoming intolerant to bugs nowadays.

        In consumer devices, perhaps, but that's a different area from most software. For most software sold, especially to business users, bugs are accepted as one of those things that happen. Really buggy software will eventually get replaced, but in most cases the important thing is to have good (or excellent) customer service that responds to, and fixes, the bugs. If you have that, your customers will not care so much about the bugs they find; you just need to be responsive and considerate in your communication with them. Once you have that in place, you'll often find that it's an asset to you: the more bugs the customer finds, the more they feel loved. And customers like feeling that you care for their business, especially when it's some executive/supervisor reporting them rather than the grunt who actually has to use your PoS software. It's how the big names can make such poor enterprise software and still sell billions of dollars a year in maintenance/support payments.

        I think I might have just explained how buggy software is good for your business. I feel dirty.

    • Because it is. Do you know the term "bananaware"? It's used internally by a huge German company. It means letting the goods mature at the customer's, just as you do with bananas: picking them while green and letting them ripen during transport and at the customer's shop.

      At the customer's shop, and at his expense, of course.

      The core reason is that people got so used to crappy software that they don't even expect it to work "out of the box" anymore. It's also trivially easy today to replace broken softwar

    • Because some people and companies like to have a quality product.

      • by dbIII (701233)
        Exactly. So why not test things properly and give it to them instead of the usual situation?
        Testing to determine whether something is fit for a purpose applies to pretty well everything apart from basket weaving and software.
  • by initialE (758110) on Monday January 03, 2011 @04:01AM (#34741992)

    Perhaps the app is just more important to you than it is to the company? First take your ego out of the equation, I guess, then make your case to the boss. Just sayin'. Also, if you have a time sheet of how much time you've spent on this app so far, it would be helpful in drumming up some numbers, plus a few scenarios where the failure of the app causes lost time for the company as a whole.

  • Cost-Benefit Time! (Score:5, Interesting)

    by wild_berry (448019) * on Monday January 03, 2011 @04:02AM (#34741994) Journal

    We've had to justify creating auto test suites where I work.

    Over the last decade our product has grown from one code-base into three strands, each with separate customer foci, and we've had a healthy amount of staff turnover so that there are still brilliant, creative and skilled people working on it but some of the original knowledge has left us.

    We found* numbers to justify that automated testing of existing features, applied later, will protect against regressive changes. Even where there are complicated features which were not modular in design, or which lack good interfaces, the tests have saved us massive amounts of time testing by hand. The real win is hidden in something we didn't realise until later: creating the tests has forced us to really document what the features are and how they work**, sometimes at a unit-test level, sometimes at the interface level, and sometimes in a top-to-bottom vertical slice. Once you have a record of what your software does, in a computer which is skilled at remembering exactly and repeating exactly what some former staff member told it a couple of years ago, you have a decent reason to be confident that your bug fixes won't cause more harm than good.

    *: ballpark figures / educated guesses / made up.
    **: We favour working code over comprehensive documentation, until our agile team is reassigned to other projects or leaves the company.

  • QA can be contracted out for 20 dollars an hour in-house; I would suggest you look into that. It is a full-time job for anything but the smallest of code bases.

    • QA is not the same as testing, in this context. The kind of tests he is referring to tell you whether the code is broken before it ever gets to the QA stage.
  • Start off easy: you must have test cases for the bugs you fix (so you'll know they're fixed) and some sort of script or procedure you used for that testing. Just hack together a quick script on top to run them in succession as part of your regular duties fixing bugs.

    Now, put them somewhere where other developers can access them. Mention it around the coffee pot.

    That's not exactly comprehensive testing, but it's a start and should only take a few minutes. You could argue that you had to do it anyway as part o

  • If all you do to make the test suite is add one test each time a new bug is discovered, you'll probably find it doesn't take much longer than fixing the bug and retesting manually anyway, and over time you'll build up a reasonable suite. Yes, the benefits won't be instant, but you can do it without much outlay of time (or money) and therefore it's easier to justify to the product owner.

    In case you haven't, you should read Michael Feathers' book Working Effectively With Legacy Code, which is full of advice

    • by gbjbaanb (229885)

      Unfortunately, you'll find that your test suite becomes bigger and more costly to maintain than the actual software it's supposed to test.

      I think there are better ways to achieve quality software, and while unit tests do have their place, an over-reliance on them is just too much overhead to justify.

  • The better question to ask yourself is, "Why am I still working here?"

    If I were working on that team, I would walk up to the owner and tell him straight out, "You hired me for my ability and experience in software development. It's my job to give you the most customer satisfying functionality for the money. You've seen my credentials. You've seen me work. Stop second guessing me and let me get on with my job." But my employers have historically hired me because of my experience in these kinds of are

    • Yeah, walk up to your boss and tell him you know more than he does. Great career move...

      Have you heard of diplomacy? You may know more than he does, but he doesn't want to hear it. Instead, you should convince him that what *you* want to do is *his* idea.

      • by mcvos (645701)

        If you need to use that kind of deception to do your job, maybe you're better off looking for a better job.

      • by tibit (1762298)

        I have subordinates and I actually like learning new stuff from them. So please don't make all bosses out to be stupid, self-centered jerks.

        • by gbjbaanb (229885)

          it came across like the guy himself was a stupid, self-centred nerd-jerk.

          I find managers are often just ignorant (which isn't a criticism, as they tend to know more about other stuff), but they still think they should know everything about the code/systems their staff work with, and they have too much input into the nuts and bolts when guidance and assurance are what's really needed from the boss.

      • by sosume (680416)

        obligatory big bang theory quote incoming ...

        Sheldon's Mom: Now, you listen here. I have been telling you since you were four years old: it's okay to be smarter than everybody, but you can't go around pointing it out.

        Sheldon: Why not?

        Sheldon's Mom: Because people don't like it. Don't you remember all the ass kickings you got from the neighbor kids? Now let's get cracking. Shower, shirt, shoes, and let's shove off.

        Sheldon: There wouldn't have been any ass kickings if that stupid death ray had worked.

    • by Anrego (830717) *

      That's a little... extreme. That kind of trust isn't assumed with credentials (at least not in my book) but builds over time.

      Not only that, but the boss may (and likely does) have a broader knowledge of the company and its objectives, and any time you're talking about spending more money I would expect to have to at least show the boss a few PowerPoint slides. At the very least, having to convince management that an idea is sound is a good exercise, and may cause you to look at things you would otherwise overlook.

  • Send the suits one of their cherished Excel sheets?

        ((downtime duration) * (customer's lost income per hour) * (customers))
    + ((bugfix duration) + (rollout duration)) * (hourly developer salary)
    + (overhead costs to appease angry customers)
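    That spreadsheet fits in a few lines of code. A hypothetical Python version follows; every number in the example is invented for illustration, so substitute figures from your own support tickets and timesheets:

```python
# Hypothetical back-of-the-envelope model of what one escaped bug costs,
# mirroring the spreadsheet formula above. All example figures are made up.

def bug_cost(downtime_hours, lost_income_per_hour, customers,
             bugfix_hours, rollout_hours, dev_hourly_rate,
             appeasement_overhead):
    """Rough cost of one escaped bug reaching production."""
    customer_losses = downtime_hours * lost_income_per_hour * customers
    internal_labor = (bugfix_hours + rollout_hours) * dev_hourly_rate
    return customer_losses + internal_labor + appeasement_overhead

# Example: a 2-hour outage for 300 clients losing $40/hour each, plus
# 6 dev-hours of fixing and 2 of rollout at $60/hour, plus $500 in
# goodwill calls to angry customers.
print(bug_cost(2, 40, 300, 6, 2, 60, 500))  # 24000 + 480 + 500 = 24980
```

    Even with deliberately conservative inputs, a handful of these per year usually dwarfs the cost of a basic regression suite.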

  • by SanityInAnarchy (655584) <ninja@slaphack.com> on Monday January 03, 2011 @04:35AM (#34742112) Journal

    As others are saying, if you can get hard numbers, get them. Nothing like actual evidence to back up your claim.

    However, there's a more important reason for doing this which is harder to find solid evidence for -- the concept of "paying down your technical debt." Often, we hack software together relatively quickly, without fully testing, etc -- every time we do that, we're effectively borrowing against the future stability and maintainability of the product, or realistically, against our own debugging and refactoring time in the future.

    Like all debt, it has interest -- the longer you let it fester, the worse of a mess it's going to be. So like all debt, it makes sense to pay it down when you can -- a little refactoring now can save a lot of headaches later.

    So, it may be difficult to get hard numbers -- though again, those are better. But if you can't do that, sell it as an investment. Invest a little time right now at least putting the automated tests in place, and maybe start with just some regression testing. Twenty minutes writing a regression test will almost certainly save you an hour the first time that bug appears -- and those are pessimistic numbers (though also made up on the spot -- find your own actual facts!)

    • by Mr Z (6791)

      Well, one could also start with a more narrow notion of "regression tests." Specifically, whenever somebody fixes a bug, they should also write a set of short tests that try to expose the bug they just fixed, and add them to the suite. After all, they need to ensure they fixed the bug, right? Don't worry about retrospectively writing tests just yet for everything that already works (or mostly works). If nothing else, such an approach would protect against regressions among the existing set of bug fixes.
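      As a concrete sketch of that habit, here is what a per-bugfix regression test might look like in Python. The function `parse_quantity`, the bug number, and the whitespace bug it guards against are all invented for the example:

```python
# Hypothetical "one regression test per bug fix" sketch. parse_quantity
# and the padded-input bug it guards against are invented for illustration.

def parse_quantity(text):
    """Fixed version: an earlier release supposedly crashed on padded
    input like '  42 ' because it called int() without stripping."""
    return int(text.strip())

def test_bug_1234_padded_input():
    # Reproduces the exact input from the original (hypothetical) bug
    # report, so the fix can never silently regress once this runs in CI.
    assert parse_quantity("  42 ") == 42
    # And a sanity check that the normal path still works.
    assert parse_quantity("7") == 7

test_bug_1234_padded_input()
```

      The test costs minutes to write while the bug is fresh, and from then on it documents the failure mode and guards the fix on every run of the suite.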

  • I guess I'm missing something, but don't you test your bug fix when you write it?

    You said that your bug fixes "come back"; it sounds like you are cutting some code and sending it off without testing that you actually fixed the problem?

    • Sometimes a scenario slips through, like testing on a machine without the required dependencies, or on a low-privileged user account, against a different versioned data source, or on a box that was used in last week's ritualistic goat sacrifice.

      The "but it works on my machine" syndrome is a common occurrence.

  • #!/usr/bin/perl -w
    #####################
    # File: money-test.pl
    # Desc: Tests if Money is Saved.
    #####################

    use strict;
    use FileHandle;

    # This Should Save Money!
    sub saveMoney {
        my $tout = new FileHandle( "> test-output.txt" ) or return 0;
        $tout->print( "Money\n" );
        $tout->close();
        $tout->open( "< test-output.txt" ) or return 0;
        die "Lost Money!" if ( <$tout> !~ /^Money/ );
        return 1;
    }

    saveMoney() or die "No Money Saved!";

  • Just do it and show him the tests running.  It always impresses people to see a bunch of automated tests happening--it makes it more intuitive what the value is.

    Anybody can understand how that can save valuable human tester time (particularly if that human is expensive and talented).
    • by Lennie (16154)

      It isn't very impressive if it hasn't found any bugs. Then it's just plain boring.

  • However you prove the need for a test suite, express it in a shiny PowerPoint. That way, even if the people to whom you'll show it are techs, they'll be able to pass it to their PHB and have him understand just what's needed: allocated time slots for tests.
  • The owner doesn't want to invest time or money in getting one set up, but I'm sure that in the long run it would save time and money.

    No, it will NEVER save money or time. It will just improve quality.
    Testing requires investment, and there is no ROI (Return On Investment) on it, except that the quality of the software improves, since you are able to avoid regression bugs.

    In my company, we spent a lot of time (several man-years) writing tests to strengthen our software, and I can assure you that this was very expensive, and I still believe that it was a waste of time and money (disclaimer: I worked on writing the tests), since this could have been spent elsewhere.

  • by Paul Johnson (33553) on Monday January 03, 2011 @05:37AM (#34742288) Homepage
    Basically you have to develop a mathematical model of the costs of the current situation, and compare it with a mathematical model of the costs of using tests. As part of this you will have to produce a plan for introducing tests, with the costs for each step itemised. Use the best numbers available, but don't worry if some of those numbers are "best guess". Just don't try to hide the fact. Put both models in a spreadsheet and come up with a number for how long it will take to recoup the initial investment (break even). Don't forget to discount future cash flows. In MBA-speak this is known as a "business case".
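    A minimal sketch of the break-even half of such a model might look like the following. All figures are placeholders, and a real business case would itemise the costs of each introduction step rather than lump them into one up-front number:

```python
# Toy business-case model: discounted monthly savings vs. an up-front
# investment in test infrastructure. Every figure below is a placeholder.

def months_to_break_even(upfront_cost, monthly_saving, annual_discount_rate):
    """Smallest month count at which discounted savings cover the investment,
    or None if it never breaks even within 50 years."""
    # Convert an annual discount rate to its monthly equivalent.
    monthly_rate = (1 + annual_discount_rate) ** (1 / 12) - 1
    total, month = 0.0, 0
    while total < upfront_cost:
        month += 1
        # Discount each month's saving back to present value.
        total += monthly_saving / (1 + monthly_rate) ** month
        if month > 600:
            return None
    return month

# E.g. $12,000 to set up tests, saving roughly $1,500/month in re-fix
# work, discounted at 8% a year:
print(months_to_break_even(12000, 1500, 0.08))  # 9
```

    Presenting the result as "months until break-even" rather than an abstract quality argument speaks the owner's language, and flagging which inputs are best guesses (as the comment above advises) keeps the case honest.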
  • One bug every 2-3 months. How long does it take to resolve that bug? An estimate of 4-6 bugs a year is not that severe an issue, and likely the owner is better off without the extensive bug testing.

  • at least once every 2-3 months I see some bug I fixed come back, and I can only assume it's because we don't have a formal test suite.

    Many have already mentioned that you need metrics if you want to prove that a test suite would be cost effective. You don't really say whether it's to be automated, and that's harder. You also don't say whether the intent is module unit testing or integration testing or some other slice, nor do you say what technology the system is built around. A 10-year-old app probably means VB6, and that limits your tooling options.

  • How Do You Prove Software Testing Saves Money?

    Generally, in the short term testing doesn't save money, though in the long term there are some benefits.

    0. Software testing doesn't save money for the software developer itself - it generally saves money for the customer. Having the software tested allows customers to cut the time and effort in the critical phases of larger project deployments. Some companies I know sell testing separately. You can buy their product and test the particular configuration yourself - or the ISV can do it for you.

  • once every 2-3 months I see some bug I fixed come back, and I can only assume it's because we don't have a formal test suite

    No, it's because you didn't fix it. A "bug" is usually described as a set of symptoms. A very simple test can determine whether the changes you made altered the behavior that was recognized as a bug; however, it's your responsibility to have an understanding of the product that allows you to recognize and fix problems. Tests merely help to determine whether some known aspect of the product's behavior is unaltered (if tests could find bugs, products would never be released with them).

    The owner doesn't want to invest time or money in getting one set up, but I'm sure that in the long run it would save time and money.

    You are on a fixed salary.

  • "at least once every 2-3 months I see some bug I fixed come back, and I can only assume it's because we don't have a formal test suite"

    I assume that what you really want is to improve the development process and reduce your frustration time. A shiny test suite seems to be the answer.

    Hmmmm... before you go buying a test suite you have to understand that the business should drive what you do. Business cases are all about research, analysis and documentation:

    1. Metrics: What is the value of the app? What is the cost of maintaining it? i.e.: How many person-hours are spent on development and maintenance? How many bugs are fixed?

  • Although a lot of consultants like to try to pin dollar costs on "quality", coding standards, development tools and other intangibles, the inescapable truth is that unless these processes and methods actually stop additional money from leaving the company, they have no real-world, measurable value. Sure, they might make your life easier (though most just increase the amount of time it takes to get something done), and they might reduce the number of bugs, or the time taken to fix one. However, if that time is already being paid for anyway, no money is actually saved.
  • How do you prove software testing saves money? Not going to happen in this case. The place testing saves money is during development. Once the software is deployed, it's too late.

    This is pretty much the same place I'm in with my current position: a 10-15 year old application that just "grew" into its current state, generally without formal testing. The decision to introduce formal testing into the process was probably made to meet the standards of external users in my case.

  • Civilization V was released with a ton of bugs, many of which were (are!) showstopping crashers, as long as you got to a few hundred turns. Add a trillion smaller, non-crasher bugs, and you have one of the buggiest releases of all time. And yet, between brick-and-mortar and Valve/Steam, they sold about a million units. That's about $50M in the bank.

    Testing? What testing?

    • by vlm (69642)

      Civilization V was released with a ton of bugs, many of which were (are!) showstopping crashers, as long as you got to a few hundred turns. Add a trillion smaller, non-crasher bugs, and you have one of the buggiest releases of all time. And yet, between brick-and-mortar and Valve/Steam, they sold about a million units. That's about $50M in the bank.

      Testing? What testing?

      The key difference is Civ5 is a short-term experience; it's just something you play until Civ6 comes out, assuming they survive.

      On the other hand, the original post

      We have one app that is used by a few hundred clients and was initially developed by a few undergrads about 10 years ago.

      implies every sale is vital since there have only been a couple hundred, and it's just one app, not a series.

      In other words, everyone knows Civ5 is super buggy and as long as it's fun for a while, who cares. On the other hand, over the previous decade word has gotten around about the original poster's software, thus only a couple hundred sales.

      Quality matters much more in that situation.

  • by dtmos (447842) on Monday January 03, 2011 @06:44AM (#34742478)

    If you truly are working for a small software development company, cost may not be the issue. Frequently (I may even say universally) the issue facing the owners of small companies is not cost, or even profitability, but cash flow -- making sure that there is enough money coming in to make the next big expense coming up (frequently payroll). In small companies there typically aren't the cash reserves available to spend on unanticipated expenses or program delays. The boss may even agree with you that the overall cost would be reduced if software testing were introduced, but may not "want to invest the time and money" because the company does not have the cash available to make the investment.

    For example, the software you're working on needs to ship by the end of January to receive payment from the customer by the end of February, so that payroll can be made in March. If bug-fixing stops in January for development of the test suite, the customer doesn't get his software until the end of February, so he doesn't pay until the end of March, and so payroll is missed that month. Having fewer bugs in the long term has to be balanced by the need to stay in business until the "long term" is reached.

    It's like anything else in finance: You have better options and get a better return on your investment if you invest $100,000 than if you invest $1,000, but if you don't have $100,000, it doesn't much matter -- you do what you can with the resources you have available. Similarly, you get a better return if you're able to invest your money for a year than if you must have it back in a week. (The guy who said "time is money" actually knew something.) This trade between cash flow and long-term efficiency (leading, one hopes, to profitability), i.e., knowing when and how much to invest, is the real art of business management.

    The solution to your problem? IMHO, incremental investment. I agree with those above who suggest implementing tests for new bugs found. This should enable you to begin to work more efficiently over time, without substantially delaying current work, while collecting metrics (I know, I know...) that can help quantify the cost of the bug re-work. Once a substantial body of tested bugs has been collected, one can institute a formal testing program (preferably by including it in the next project, so that it is a planned expense) as a cost reduction, since by then it should speed up development work over the ad hoc method then in place.

  • The owner doesn't want to invest time or money in getting one set up, but I'm sure that in the long run it would save time and money. Can anyone offer suggestions for how to convince the owner that setting up a test suite is in his own best interest?

    Why don't you simply explain to him why YOU are so sure that it will save time and money, and what testing you have in mind that will do so. Assuming your certainty is based on strong evidence, he should find it convincing as well.

  • Your question can never be answered definitively enough, especially to satisfy suits and bean-counters. As evidence I offer you the ONGOING tug-of-war over tech support: is it an expense to be minimized and avoided at all cost, or does it actually aid the bottom line by boosting goodwill, etc? Even after decades that question hasn't been answered, and customers often suffer because of it.

    The question you ask is just as insoluble. You might as well be debating religion.

  • One of the worst things that can happen while fixing bugs is introducing a new one in an unexpected part of the software, hidden somewhere deep inside the code.

    With a (big) set of automated tests, you can check that everything is still working as expected, and if a test fails, you know where to look. That is, if you've written enough tests.

    One problem I found in Scrum is refactoring. If at any point you want to change a piece of the code, you'll find you'll have to rewrite the tests as well.
    Another thing is that writing the tests themselves takes time.

  • If you're repeatedly seeing the same bugs come back that you've already fixed, it strongly suggests you have sloppy developers repeatedly merging old/outdated code back into the current trunk over fixes.

    It seems clear your real problem is a lack of software process and/or version management, so testing would be after-the-fact in this case. You need to fix your development process first. Adding a formal test environment on top of a bad process is just putting expensive lipstick on a pig.

  • The first issue is that you have to convince people that their business practices are wrong. This is not easy, and often requires lying through your teeth with statistics. Testing costs time and money and slows down releases. _Foolish_ testing, with lots of business-case meetings and PowerPoint slides and expensive test configurations that don't actually test anything but demoware, is even more expensive and wasteful. The testing needs to actually prevent support calls and save engineering time re-fixing the same bugs.

  • One thing I haven't seen mentioned/asked - is it an actual bug, or is it an area of ambiguity?

    I once worked on a project with ambiguous requirements (there's a shocker), and at least two programmers had different interpretations of what the requirements meant. Since they didn't write code that interacted directly, it didn't normally matter. But every few months there'd be a bit of a circus where their implementations would cause the software to behave poorly in a third area. And it would take a few days to track down each time.

  • You have two choices: 1) Test now or 2) Fix the bugs later. Okay, technically you could blow them off entirely, but then you won't be in business much longer. But I digress. The choice between 1 and 2 could depend on your business model. If your company charges for maintenance releases then it's in the company's best interest not to fix them until it's getting paid to fix them. This could also apply to the annual major-release business model, since you can blow off bug fixes until next year.

  • Just stick the word "Beta" at the end of the software version but sell it anyway. When bugs come back say "Thank you for your beta testing!" and then promise them some free, but useless addition to the software when it finally comes out of beta (which it never will.) It seems to work for everyone else out there, why not you?
  • Lose The Job (Score:5, Insightful)

    by assertation (1255714) on Monday January 03, 2011 @08:19AM (#34742808)

    If you are catching shit when a bug happens and they refuse to set up formal testing, get out of the job now.

    If you aren't catching shit of any kind, then look at the bugs as helping to keep you employed. If it annoys you, go find another job.

    I've worked for similar cheapskates. They aren't likely to change.

  • by mpsmps (178373) on Monday January 03, 2011 @08:22AM (#34742832)
    The Economic Impacts of Inadequate Infrastructure for Software Testing [nist.gov] finds an average ROI for software testing somewhere between about 100% and 1000%.
  • A couple things could happen:

    1) Code related issues lead to critical failure. For a revenue generating system this can have immediate effects on the bottom line. For non-revenue systems this can affect how many support hours are burned fixing problems.

    2) Financial impact from lawsuits or non-compliance can be costly. If a security bug or design flaw in your application leads to PCI non-compliance, you may find your cost of doing business with credit providers can rise dramatically.

  • Depending on how bad the code is, it might be more cost-effective to rewrite it from scratch (assuming you've got the resources to do it right this time).

    What a comprehensive test suite can do for you is verify that each new release of the product meets specs (i.e. prevent regressions). But for the tests to have real value, you need to do it right:

    1. If you don't have a comprehensive system specification, create one. The system specification should be written with an eye towards verification -- e.g. it needs to describe behavior in terms that can actually be tested.

  • What you are dealing with is a situation where the company is using "bench time", where the programmer isn't producing revenue or isn't doing the job that his salary is allocated against. To the company, bench time is basically free (yes, it does have a real cost, but it's a sunk cost, so it's perceived as being already paid for), and there isn't enough of it for a major project. So the application is updated by whoever is free, and the QA guy gets to test and bugfix when he is available. The situation sucks, but it looks cheap on paper.

  • by stewbacca (1033764) on Monday January 03, 2011 @03:09PM (#34746866)

    Testing doesn't "save" money any more or less than "training" saves money. They both cost the company money.

    Without either, however, you are less likely to receive future revenue because your software will garner the reputation of being bad (because it wasn't tested) and being hard to use (because there is no training).

    There's always the old adage that you've got to spend money to make money. I think that holds true in this case.
