How Can I Make Testing Software More Stimulating? 396

An anonymous reader writes "I like writing software. In fact, I revel in it. However, one thing has always kept me back from being able to write the best software I possibly can: testing. I consider testing to be the absolute bane of my existence. It is so boring and un-stimulating that I usually skip it entirely, pass the testing off to someone else, or even worse, if I absolutely have to test, I do a very poor job at it. I know I'm not that lazy, as I can spend hours on end writing software, but there's something about testing that makes my mind constantly want to wander off and think about something else. Does anyone have any tips on how I can make non-automated testing a little bit more stimulating so I can at least begin to form a habit of doing so?"
  • by bgibby9 ( 614547 ) on Monday August 16, 2010 @09:31PM (#33270906) Homepage
    Personally I feel that the dev should never do the testing of their own code as they are too close to the subject to test every angle.

    Sorry, this doesn't answer your question :P
  • Test Porn Software (Score:1, Insightful)

    by Anonymous Coward on Monday August 16, 2010 @09:31PM (#33270920)

    eom

  • Simple (Score:5, Insightful)

    by Anonymous Coward on Monday August 16, 2010 @09:32PM (#33270930)
    Add porn
  • by afaik_ianal ( 918433 ) * on Monday August 16, 2010 @09:35PM (#33270970)

    Does anyone have any tips on how I can make non-automated testing a little bit more stimulating so I can at least begin to form a habit of doing so?

    No, I don't. I strongly think you're directing your effort the wrong way, and duplicating work if you're spending too much time on non-automated testing.

    Software Engineers are not good at poking holes in their own work, so you should have someone else doing the bulk of that kind of testing anyway. You obviously need to do some cursory testing to avoid wasting someone else's time, but there are much better ways of directing your testing effort.

    Focus on developing unit tests both before and during the development effort. Avoid developing your unit tests after writing the code though - your mind will be tainted with your approach, and you'll miss the obvious stuff. Not only do unit tests reveal bugs, the act of writing them will also help you get interfaces right, and help ensure a better overall design for your code.
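    A minimal sketch of that test-first flow, using Python's built-in unittest. The function and its contract here are hypothetical; the point is that the three test cases are written before the body exists, so they pin down intended behavior rather than the implementation:

```python
import unittest

# Hypothetical function under test. In a test-first flow, the three
# test cases below are written before this body exists; they capture
# the intended contract, not the implementation.
def parse_port(text):
    """Parse a TCP port from a string; raise ValueError if invalid."""
    port = int(text)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTest(unittest.TestCase):
    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_zero(self):
        with self.assertRaises(ValueError):
            parse_port("0")

    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_port("http")
```

    Run with `python -m unittest <module>`. Writing the failing cases first forces the "what should invalid input do?" question before the code quietly answers it for you.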

  • by Chuck_McDevitt ( 665265 ) on Monday August 16, 2010 @09:41PM (#33271050) Homepage
    The developer should never be the ONLY one testing the code. But the developer should absolutely test the code before turning it over to QA. You should be convinced your code works before giving it to someone else to test.
  • by NorbrookC ( 674063 ) on Monday August 16, 2010 @09:42PM (#33271060) Journal
    When I wrote code, I knew how the program was supposed to work. I made the user interfaces "obvious" - to me. So my "testing" was along the lines of "does this compile properly," and "does it output what I expected it to?" The rude awakening came when I handed off the "finished" product to someone else. All sorts of errors I hadn't thought about handling happened, people were confused by the user interface, and more than one "oops" cropped up. While the "boring" testing you're doing on your code may catch the obvious things, it's always better to have someone else test it.
  • One word (Score:1, Insightful)

    by Anonymous Coward on Monday August 16, 2010 @09:47PM (#33271112)

    tele-dildonics hooked to the exception mechanism.

  • Read The Daily WTF (Score:3, Insightful)

    by Red Storm ( 4772 ) on Monday August 16, 2010 @09:57PM (#33271236)

    http://thedailywtf.com/Default.aspx [thedailywtf.com]

    The threat that one day someone will post your code and/or screenshots from your programs for everyone to ridicule should be motivation to either improve or to write even worse code.

  • Then you are lazy. (Score:4, Insightful)

    by Alcoholist ( 160427 ) on Monday August 16, 2010 @10:22PM (#33271446) Homepage

    "It is so boring and un-stimulating that I usually skip it entirely, pass the testing off to someone else, or even worse, if I absolutely have to test, I do a very poor job at it."

    Which sums up why software is so shitty today. I seriously hope that you don't write software for the aerospace industry, because I don't feel like falling out of the sky because you were too bored to test your code.

    Every job has its boring moments; testing your code is one of those things that programmers must do. Should do, even: it encourages discipline, and discipline is what makes good code. You can automate the testing to some degree, but at some point you've got to poke it and prod it yourself, because computers are stupider than even we are. If you can't hack that, find a different line of work.

  • by a9db0 ( 31053 ) on Monday August 16, 2010 @10:33PM (#33271544)

    I do software QA for a living. And if you're not a tester, don't try to be. It's your job to write code that meets spec, runs clean, is efficient and effective. Write it well. Write it secure. Write it to handle errors from data, users, networks, etc. Double check that you validate input. Make sure it doesn't leak memory. Write good unit tests. Test it enough to make sure it works. Then give it to a tester.

    Good software testers are a different breed. They are a sceptical, picky, pedantic, detail-oriented bunch who take new code as a personal challenge to find the inevitable bugs. They will test your code a dozen different ways you would never think of. They will find bugs that could not possibly exist. They don't care that your shiny new whistle or bell will be the next big thing that will make you all rich. They care that it doesn't barf when you pass it a string with more than 256 characters. Including special characters. In German. Or Japanese. They care that when it's been running for 12 days straight with automated stuff beating on it, the memory usage hasn't ballooned. They care how it deals with data files 10 times larger than you say it should handle, or runs on a machine with half the RAM it should have, or handles twice the workload it should - because somewhere out there is a user who will ask it to. They will chew it up, spit it out, and ask you to fix it. Then they will do it all again.

    Testers are a strange bunch, and good ones are hard to find. Find some good ones and cultivate them. They are a lot cheaper than a ticked off client.

  • by bgibby9 ( 614547 ) on Monday August 16, 2010 @10:37PM (#33271584) Homepage
    I think my point is that devs who do unit testing are contributing to the overall testing effort, which is sweet, but come on: who ever writes unit tests that are 'decent' enough to ensure that the testing is done correctly?

    IMHO testing should be done by people who CARE about testing. Devs who do shitty unit testing contribute to the facade that the testing is done and working properly but in reality a shitty unit test could mask other problems.

    At that point you're creating more problems than you are solving!
  • by Anonymous Coward on Monday August 16, 2010 @10:39PM (#33271608)
    I think many of the (as usual, interesting and informative) replies are skipping over some essential questions. For whom are you writing software? A capitalist company with deep pockets? A non-profit with shallow ones? A small-scale project (meaning, possibly, you're the only author/developer) with a limited audience?

    I'm a so-called computer scientist researcher at a national laboratory. Testing is really important, but it's one of the last things I do. For me, the "test driven development" scenario doesn't work. Why? Because most of the code I write is just playing around with algorithms and ideas, figuring out what works and what doesn't.

    Example: someone asks me to code up something in OpenMP to do XYZ efficiently, on a platform with 256G memory. My first effort *should* work, but it keeps blowing out memory. So I code versions 2, 3, 4, and 5. In the meantime the requirements have changed, because the person I'm working for is also a scientist, and she's learned something new and would like slightly different capabilities. I finally get something that appears to work, and then I write a few system-level tests.

    My point (see the first paragraph and ignore the others) is that you haven't given us nearly enough background to coherently answer your question.
  • by BLAG-blast ( 302533 ) on Monday August 16, 2010 @10:47PM (#33271690)
    Simple. Automate it!

    I used to dislike testing until I learned how to write code designed to be tested. Use a dependency injection framework (that will keep you busy for a while) and write testable code. Writing elegant, readable code that is scalable and testable is not an easy or boring task. If you cannot automate the tests, you are probably doing something wrong.
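    The core of the idea works even without a framework: plain constructor injection. In this sketch, `SignupService` and its mailer are hypothetical; the point is that the collaborator is passed in, so a test can substitute a fake for the real SMTP client:

```python
import unittest
from unittest import mock

# Hypothetical service demonstrating constructor injection: the mailer
# is handed in from outside, never constructed inside the class, so a
# test can swap in a fake.
class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer  # injected dependency

    def register(self, address):
        if "@" not in address:
            raise ValueError("invalid address")
        self.mailer.send(address, "Welcome!")
        return True

class SignupServiceTest(unittest.TestCase):
    def test_sends_welcome_mail(self):
        fake_mailer = mock.Mock()  # stands in for a real SMTP client
        service = SignupService(fake_mailer)
        self.assertTrue(service.register("user@example.com"))
        fake_mailer.send.assert_called_once_with("user@example.com", "Welcome!")
```

    A DI framework automates the wiring, but the testability comes from the shape of the code: dependencies arrive through the constructor instead of being hard-wired inside it.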

  • by xswl0931 ( 562013 ) on Monday August 16, 2010 @10:49PM (#33271706)

    The reality is that all software ships with bugs, some known and some unknown. Whether a given bug blocks shipping typically depends on how easy it is for the customer to find, the impact on the customer, the cost to fix, and the risk of regression. Given that software is typically patched after shipping anyway, even more bugs get shipped rather than slipping the ship date.

  • by RedLeg ( 22564 ) on Monday August 16, 2010 @10:55PM (#33271768) Journal

    It's inherently boring....

    SO, Build hooks into the 'ware as you write it, and automate the testing.

    Work smart, not hard.

    Red

  • by Anonymous Coward on Monday August 16, 2010 @11:10PM (#33271866)

    > Of course the developer can system test do

    Hmm, with that one the force strong is I feel.

    That aside, you do know one can't tickle oneself, don't you?

    I wonder why Mozart composed so much... could it be that hearing his own music was not as pleasing as creating new pieces?

  • by crimsontime ( 1666391 ) on Monday August 16, 2010 @11:11PM (#33271868)
    Do you enjoy finding the weak points in things? Do you use new devices, maps, etc. without reading the instructions, because you think you should just know how they work? Do you find that alarm bells go off in your head when you read a phrase that could possibly be interpreted in more than one way? Do you often use things in ways that they were never intended, yet those ways seem the most logical to you? Do you possess powers of intuition that lead you to unforeseen vulnerabilities? Do you find the needle in the haystack? Coding isn't inherently more interesting than testing. Anyway, if you identify with any of these, you can apply that to testing...
  • by lgw ( 121541 ) on Monday August 16, 2010 @11:15PM (#33271902) Journal

    Unit tests should exercise the corner cases in your code. If you know what they are, write tests for them.

    QA testing should break all design assumptions about how the software should be used. Having the programmer sitting there telling the QA guy what to click on (and I've seen that far too often) invalidates that. The most useful bugs are the ones where the QA guy says "I did what I thought would get the job done, and instead it formatted my hard drive", leaving the dev to sputter "but, but, you're not supposed to do that". Given enough users, every possible "stupid" thing you can do with your software will be done in the field, and you really want to know that you will at least fail safely in all those "but that's not how you're supposed to do it" cases.

  • by bertok ( 226922 ) on Monday August 16, 2010 @11:21PM (#33271950)

    UI design is one of those areas of expertise that is both an "Art" and a "Science" at the same time. Very few people are capable of excelling at both simultaneously. That's why you end up with ugly but capable interfaces, or beautiful but useless interfaces. There are several examples in other industries that are similarly difficult, and the people who do them tend to be highly paid specialists. Plastic surgery comes to mind, for example.

    The computer industry hasn't evolved to quite that level yet; people just don't realize that good UI design is hard, so it often ends up as some random task assigned to whoever is available.

  • by weicco ( 645927 ) on Monday August 16, 2010 @11:46PM (#33272100)

    Developers shouldn't necessarily participate in unit tests if you follow TDD and CI techniques. Automate your unit tests and the whole build environment and you get regression tests as a bonus.

    I think the developer shouldn't test his own code, at least. Whenever I test my own code it magically just works. When I pass it to someone else, everything suddenly breaks. If you have to test your own code, you need at least a dedicated test environment.

    In system tests, developers are a big no-no. We are currently running a system test for a production control system in a real production environment, which is a rather complex factory with robots and stuff. Developers tend to cut corners by doing hot-fixes in the middle of the test and altering data directly in the database. Often they don't even tell us who did what, or, most importantly, why! This makes it impossible to write decent bug reports, for instance. The worst thing is that they are interfering with the tests when they should be fixing the reported bugs.

    So those are a couple of reasons why I think the developer as a tester is (most likely) not a good thing. By hiring a couple of testers who know their stuff, you really increase the quality of your product.

  • by Goetterdaemmerung ( 140496 ) on Tuesday August 17, 2010 @12:19AM (#33272286)

    I am a test engineer (i.e. an electrical/computer engineer with an emphasis on development for test and QA, not the guy who runs equipment) and I write a lot of software. I'm going to answer the question of how to create stable software, rather than start from the mindset that all software needs to be unit or regression tested. That kind of testing definitely serves an important purpose in any complex system and can't be disregarded, but I see most implementations created to test conditions that will never fail. It's a matter of focusing your testing optimally.

    Correctly implemented code is more a state of mind and a matter of focus. I perpetually think of how code can fail, and I am also a minimalist. Those two instincts pull in opposite directions, but together they really make my code stable and reusable by numerous other groups. I've been complimented that my product "just works."

    Creating stable code is a matter of understanding your inputs and performing error and bounds checking as necessary, but not obsessively. The act of implementing test code often creates bugs when the goal is to reduce bugs. Correct error handling is another topic of consideration. Unit and regression testing is important when used in the right places.

    I suppose what I'm getting at is the process of creating good code takes time, effort and skill. Developing testing criteria is similar. If you feel you are wasting your time doing code testing, you aren't focusing your efforts in the right place.

  • by Firehed ( 942385 ) on Tuesday August 17, 2010 @12:48AM (#33272446) Homepage

    If your users aren't testing your code, then you don't have users.

    Of course, they shouldn't be the first people to test your code...

  • by donscarletti ( 569232 ) on Tuesday August 17, 2010 @02:20AM (#33272866)

    Personally I feel that the dev should never do the testing of their own code as they are too close to the subject to test every angle.

    No, the dev should never do final testing, since they have a subconscious incentive to do it badly in order to find fewer mistakes. However, nobody is as qualified to know the edge cases and weaknesses as the individual who just wrote it. So the best result comes from giving developers a large incentive to find problems in their own code before the reviewer gets to it. The system I have seen work very well in the past is to create a culture of accountability: as part of finding a bug, if nobody owns up to making a mistake, the SVN logs are traced to find the originator of the bug, and that is filed as part of the bug report. It sounds vindictive, but it is more a learning exercise to find out the weaknesses in our development process. What this does, however, is give someone a great incentive to test every case before committing, since their mistakes will soon be known to the whole team. Nobody tests better than the original dev when his pride is on the line.

    However, this system's result is very much dependent on culture. In Australia, people are used to being told "you fucked up mate" the minute their mistakes are uncovered; in other places, maybe not. The system very much depends on the individuals expected to set a good example, such as managers and senior staff, acknowledging when they themselves make a mistake without prompting.

  • by Phloebas ( 1621967 ) on Tuesday August 17, 2010 @03:32AM (#33273106)
    As a software tester by trade, I find that not a whole lot can make it nearly as interesting or stimulating as actually writing the software in the first place. However, properly written test cases do help. And by properly written I don't just mean exhaustive. Written with intelligence, test cases will test all possible code paths without boring the tester to death repeating the same steps over and over. Also, and I don't know if this will help, but I tend to populate my test cases with test data inspired by popular culture and films. My first person to use is DarthV@deathstar.com.
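    Table-driven test cases make that kind of data easy to vary without repeating the same steps. A hypothetical sketch, with made-up addresses in the spirit described above (example.com-style domains keep the mail from going anywhere real):

```python
import re

# Deliberately simple email shape check, used only to illustrate
# table-driven testing; it is not a full RFC-grade validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(address):
    return bool(EMAIL_RE.match(address))

# One table of (input, expected) pairs covers many cases in one loop;
# adding a case is one line, not a copied-and-pasted test procedure.
CASES = [
    ("darth.v@deathstar.example.com", True),
    ("ellen.ripley@nostromo.example.org", True),
    ("no-at-sign.example.com", False),
    ("two@@ats.example.com", False),
]

for address, expected in CASES:
    assert looks_like_email(address) == expected, address
```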
  • by ^Bobby^ ( 10366 ) on Tuesday August 17, 2010 @05:47AM (#33273568) Homepage

    My expectation would be that engineers are scientifically-minded and skeptical

    This is actually the problem; being scientifically minded means you tend to miss the bugs that are exposed by someone doing something that makes you go 'what were they thinking to try that?'

  • by MadJo ( 674225 ) on Tuesday August 17, 2010 @06:36AM (#33273814) Homepage Journal

    System testers have to know what the corner cases are; they can't guess them all.
    That's why you need proper documentation, like use cases and technical designs, often (and preferably) written by analysts and not the developers.

    Preferably you have a setup like this:
    Business analyst writes documentation based on requirements from the business.
    Developers build the application, based on the documentation.
    Testers write test cases based on the documentation and test the software as soon as it is released to them.

    Testing is a profession too.
    And there are many tools and methodologies (TMap, ISTQB, Testframe etc) to ensure proper test-coverage and to have anything meaningful to say about the quality of the tested application.

  • by SQLGuru ( 980662 ) on Tuesday August 17, 2010 @08:55AM (#33274576) Homepage Journal

    Yep. Much more fun to try to break someone else's code than it is to trudge back through your own. Pair up with someone (officially or unofficially) and swap code. Your goal is to find more bugs than he does. I've got a reputation for finding "gnats" that I'm quite proud of. Everyone "hates" that I find them, but the code from our group is usually pretty solid because of it; making the whole team look good.

  • by gestalt_n_pepper ( 991155 ) on Tuesday August 17, 2010 @10:08AM (#33275352)

    "but, but, you're not supposed to do that".
    Yeah, that one still makes my *hair* stand up. Most devs are superb at narrow domain problem solving. Of course, reality and users tend not to accommodate them. I can't tell you how many times I've pushed the limits of a software package to get something done and it fails miserably.

    So, yes, they will have 37 instances of the software running at once, and yes, they will try to save one or more projects/files with the same name at the same time, and yes, no matter how many times the dev says it's the users' fault, the dev is wrong. Machines are designed to accommodate people, not the other way around.

  • by CyberDong ( 137370 ) on Tuesday August 17, 2010 @11:44AM (#33276500)

    At the risk of sounding pedantic, I'd suggest that you limit your email testing to either addresses you own or domains like "example.com" that are reserved for testing. Domains like asdf.com are routinely flooded with unsolicited email due to people using them as bogus domain names. More importantly, by using real domain names while testing software, you risk inadvertently emailing sensitive data somewhere it should not go!
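    A guard in that spirit could whitelist the names RFC 2606 reserves for testing before any test mail goes out. The helper name and suffix list here are illustrative, not from any particular library:

```python
# RFC 2606 reserves example.com/.net/.org and the .test/.example/.invalid
# TLDs precisely so that mail to them never reaches a real inbox.
SAFE_SUFFIXES = (
    "@example.com", "@example.net", "@example.org",
    ".test", ".example", ".invalid",
)

def is_safe_test_address(address):
    """Hypothetical guard: allow only addresses in reserved test domains."""
    return address.lower().endswith(SAFE_SUFFIXES)

assert is_safe_test_address("qa@example.com")
assert is_safe_test_address("bot@mail.internal.test")
assert not is_safe_test_address("someone@asdf.com")  # real domain: unsafe
```

    Wiring such a check into the test harness's outgoing-mail path turns "oops, we emailed a stranger" into a failed assertion instead.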

  • by Abcd1234 ( 188840 ) on Tuesday August 17, 2010 @01:02PM (#33277618) Homepage

    The requirements are vague, such as "difficulty of problems presented to the user must increase gradually over the course of the session," where difficulty isn't defined rigorously.

    Uh, you define the requirement. If you've written code, you've *already* done that; you've just made the requirement inaccessible, as it's now encoded in a programming language. Write it down first and everyone will be happier.

    The application's specification includes a physical simulation with random, pseudorandom, chaotic, or otherwise nonlinear behavior

    That's not unit testing. Most code in a simulation is *not* random, it's fed inputs from other parts of the code, which ultimately result in the behaviour you describe.

    Automating system test for something like this is interesting. First, you *must* design the code to be testable. That means being able to, for example, define random number seeds and so forth, so that a run can be reproduced.

    Second, the implicit assumption is that there is *some* way to determine, based on inputs, if the outputs of your software are correct. If you can't do that, you have far bigger problems. Of course, those tests might be fuzzy ("value should be between X and Y", "value should be greater than zero", etc), but they're still tests.

    Third, realize automated testing isn't the be-all and end-all of testing. For some systems, you will be forced to have some amount of manual testing (for example, integration testing of large, complex software systems can be difficult to do in an automated fashion). That said, I'm willing to bet that for *most* systems, you can automate most if not all of the testing.

    You know your automated test suite's coverage isn't 100%, and you're doing non-automated testing to find things that your automated test suite is missing.

    Beg the question much? If your test suite is missing things, fill in the gaps. If you can't fill in the gaps, ask yourself why.
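    The two testability points above (pin the random seeds so a run can be reproduced; use fuzzy range assertions when exact outputs aren't defined) can be sketched like this, with a bounded random walk as a stand-in for a real simulation:

```python
import random

# Stand-in for a nonlinear simulation step: a bounded random walk.
# Injecting the RNG (rather than reaching for the global one) lets a
# test pin the seed and reproduce a run exactly.
def simulate(steps, rng):
    position = 0.0
    for _ in range(steps):
        position += rng.uniform(-1.0, 1.0)
    return position

# Reproducibility: same seed in, same trajectory out.
a = simulate(1000, random.Random(42))
b = simulate(1000, random.Random(42))
assert a == b

# Fuzzy property test: 1000 steps of at most +/-1 each can never
# leave this envelope, whatever the seed.
assert -1000.0 <= a <= 1000.0
```

    The exact trajectory is seed-dependent, but the envelope assertion holds for every seed, which is exactly the "value should be between X and Y" style of test described above.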
