How Can I Make Testing Software More Stimulating? 396
An anonymous reader writes "I like writing software. In fact, I revel in it. However, one thing has always kept me back from being able to write the best software I possibly can: testing. I consider testing to be the absolute bane of my existence. It is so boring and un-stimulating that I usually skip it entirely, pass the testing off to someone else, or even worse, if I absolutely have to test, I do a very poor job at it. I know I'm not that lazy, as I can spend hours on end writing software, but there's something about testing that makes my mind constantly want to wander off and think about something else. Does anyone have any tips on how I can make non-automated testing a little bit more stimulating so I can at least begin to form a habit of doing so?"
Too close to the subject... (Score:5, Insightful)
Sorry, this doesn't answer your question.
Test Porn Software (Score:1, Insightful)
eom
Simple (Score:5, Insightful)
Focus your attention elsewhere (Score:5, Insightful)
Does anyone have any tips on how I can make non-automated testing a little bit more stimulating so I can at least begin to form a habit of doing so?
No, I don't. I strongly think you're directing your effort the wrong way, and duplicating work if you're spending too much time on non-automated testing.
Software Engineers are not good at poking holes in their own work, so you should have someone else doing the bulk of that kind of testing anyway. You obviously need to do some cursory testing to avoid wasting someone else's time, but there are much better ways of directing your testing effort.
Focus on developing unit tests both before and during the development effort. Avoid developing your unit tests after writing the code though - your mind will be tainted with your approach, and you'll miss the obvious stuff. Not only do unit tests reveal bugs, the act of writing them will also help you get interfaces right, and help ensure a better overall design for your code.
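A minimal test-first sketch of that idea (the `parse_price` function and its cases are invented for illustration, not from the discussion): writing the tests first forces you to decide the interface up front.

```python
# Test-first sketch: these tests are (notionally) written before the
# implementation, so the interface gets decided before the code exists.
# Run with: pytest <this file>

def parse_price(text):
    """Parse a price string like '$1,234.56' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def test_plain_price():
    assert parse_price("$3.50") == 350

def test_thousands_separator():
    assert parse_price("$1,234.56") == 123456

def test_no_cents():
    assert parse_price("$7") == 700
```

Writing `test_no_cents` before the code, for instance, settles early whether the function must handle "$7" at all.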
Re:Too close to the subject... (Score:4, Insightful)
Re:Too close to the subject... (Score:5, Insightful)
One word (Score:1, Insightful)
tele-dildonics hooked to the exception mechanism.
Read The Daily WTF (Score:3, Insightful)
http://thedailywtf.com/Default.aspx [thedailywtf.com]
The threat that one day someone will post your code and/or screenshots from your programs for everyone to ridicule should be motivation to either improve or write worse code.
Then you are lazy. (Score:4, Insightful)
"It is so boring and un-stimulating that I usually skip it entirely, pass the testing off to someone else, or even worse, if I absolutely have to test, I do a very poor job at it."
Which sums up why software is so shitty today. I seriously hope that you don't write software for the aerospace industry, because I don't feel like falling out of the sky because you were too bored to test your code.
Every job has its boring moments; testing your code is one of those things that programmers must do. Should do, even: it encourages discipline, and discipline is what makes good code. You can automate the testing to some degree, but at some point you've got to poke it and prod it yourself, because computers are stupider than even we are. If you can't hack that, find a different line of work.
You can't, so don't try (Score:5, Insightful)
I do software QA for a living. And if you're not a tester, don't try to be. It's your job to write code that meets spec, runs clean, is efficient and effective. Write it well. Write it secure. Write it to handle errors from data, users, networks, etc. Double check that you validate input. Make sure it doesn't leak memory. Write good unit tests. Test it enough to make sure it works. Then give it to a tester.
Good software testers are a different breed. They are a sceptical, picky, pedantic, detail-oriented bunch who take new code as a personal challenge to find the inevitable bugs. They will test your code a dozen different ways you would never think of. They will find bugs that could not possibly exist. They don't care that your shiny new whistle or bell will be the next big thing that will make you all rich. They care that it doesn't barf when you pass it a string with more than 256 characters. Including special characters. In German. Or Japanese. They care that when it's been running for 12 days straight with automated stuff beating on it, the memory usage hasn't ballooned. They care how it deals with data files 10 times larger than you say it should handle, or runs on a machine with half the RAM it should have, or handles twice the workload it should - because somewhere out there is a user who will ask it to. They will chew it up, spit it out, and ask you to fix it. Then they will do it all again.
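A sketch of the kind of boundary probing such a tester would automate (Python; `store_name` and the 256-character limit here are illustrative assumptions, not anything from this thread):

```python
# Hypothetical function under test: stores a display name, truncating
# anything past a fixed limit.
def store_name(name, max_len=256):
    if not isinstance(name, str):
        raise TypeError("name must be a string")
    return name[:max_len]

# Table-driven boundary cases: exactly at the limit, just over it,
# special characters, German umlauts, Japanese text.
cases = [
    "a" * 255,
    "a" * 256,
    "a" * 257,           # one past the boundary
    "!@#$%^&*()" * 30,   # special characters
    "Straße" * 50,       # German
    "こんにちは" * 60,    # Japanese
]

for case in cases:
    result = store_name(case)
    assert len(result) <= 256, f"overflow for input of length {len(case)}"
```

The point is less the truncation logic than the habit: every boundary the spec mentions gets a row in the table.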
Testers are a strange bunch, and good ones are hard to find. Find some good ones and cultivate them. They are a lot cheaper than a ticked off client.
Re:Too close to the subject... (Score:2, Insightful)
IMHO testing should be done by people who CARE about testing. Devs who do shitty unit testing contribute to the facade that the testing is done and working properly, but in reality a shitty unit test could mask other problems.
At that point you're creating more problems than you are solving!
Re:Too close to the subject... (Score:1, Insightful)
how to make non-automated testing stimulating? (Score:4, Insightful)
I used to dislike testing until I learned how to write code designed to be tested. Use a dependency injection framework (that will keep you busy for a while) and write testable code. Writing elegant, readable code that is scalable and testable is not an easy or boring task. If you cannot automate the tests, you are probably doing something wrong.
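A hedged sketch of what "designed to be tested" via dependency injection can look like (the `Reminder` class and its collaborators are invented for illustration): the clock and the mailer are passed in, so a test can substitute deterministic fakes without patching globals.

```python
import datetime

class Reminder:
    def __init__(self, clock, mailer):
        self._clock = clock    # injected: anything with a .now() method
        self._mailer = mailer  # injected: anything with a .send() method

    def nag_if_friday(self, address):
        if self._clock.now().weekday() == 4:  # 4 == Friday
            self._mailer.send(address, "Don't forget the weekly report!")

# In tests, inject fakes instead of the real system clock and SMTP client:
class FixedClock:
    def __init__(self, when): self._when = when
    def now(self): return self._when

class SpyMailer:
    def __init__(self): self.sent = []
    def send(self, address, body): self.sent.append((address, body))

mailer = SpyMailer()
reminder = Reminder(FixedClock(datetime.datetime(2024, 1, 5)), mailer)  # a Friday
reminder.nag_if_friday("dev@example.com")
assert len(mailer.sent) == 1
```

With the dependencies injected, the "is it Friday?" branch is testable without waiting for Friday or sending real mail.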
Software always ships with bugs (Score:3, Insightful)
The reality is that all software ships with bugs, some known and some unknown. Whether a known bug gets fixed before release typically depends on how easy it is for the customer to find, the impact on the customer, the cost to fix, and the risk of regression. Given that software is typically patched after shipping, it often makes more sense to ship with known bugs than to slip the ship date.
You can't... work smarter (Score:3, Insightful)
It's inherently boring....
So, build hooks into the 'ware as you write it, and automate the testing.
Work smart, not hard.
Red
Re:Too close to the subject... (Score:1, Insightful)
> Of course the developer can system test do
Hmm, with that one the force strong is I feel.
That aside, you do know one can't tickle oneself, don't you?
I wonder why Mozart composed so much... could it be that hearing his own music was not as pleasing as creating new pieces?
Testing is a mindset more than anything else (Score:3, Insightful)
Re:the developer should participate in system test (Score:5, Insightful)
Unit tests should exercise the corner cases in your code. If you know what they are, write tests for them.
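As a sketch of "corner cases pinned as tests" (the `chunk` function here is an invented example): empty input, a chunk size larger than the list, an exact multiple, a remainder, and the invalid-argument path each get an assertion.

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

# Corner cases, written down explicitly:
assert chunk([], 3) == []                        # empty input
assert chunk([1, 2], 5) == [[1, 2]]              # size larger than list
assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]  # exact multiple
assert chunk([1, 2, 3], 2) == [[1, 2], [3]]      # remainder
try:
    chunk([1], 0)                                # invalid-argument path
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for size 0")
```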
QA testing should break all design assumptions about how the software should be used. Having the programmer sitting there telling the QA guy what to click on (and I've seen that far too often) invalidates that. The most useful bugs are the ones where the QA guy says "I did what I thought would get the job done, and instead it formatted my hard drive", leaving the dev to sputter "but, but, you're not supposed to do that". Given enough users, every possible "stupid" thing you can do with your software will be done in the field, and you really want to know that you will at least fail safely in all those "but that's not how you're supposed to do it" cases.
Re:Too close to the subject... (Score:4, Insightful)
UI design is one of those areas of expertise that is both an "Art" and a "Science" at the same time. Very few people are capable of excelling at both simultaneously. That's why you end up with ugly but capable interfaces, or beautiful but useless interfaces. There are several examples in other industries that are similarly difficult, and the people who do them tend to be highly paid specialists. Plastic surgery comes to mind, for example.
The computer industry hasn't evolved to quite that level yet; people just don't realize that good UI design is hard, so it often ends up as some random task assigned to whoever is available.
Re:Too close to the subject... (Score:3, Insightful)
Developers shouldn't necessarily participate in unit tests if you follow TDD and CI techniques. Automate your unit tests and the whole build environment and you get regression tests as a bonus.
I think a developer shouldn't test his own code, at least. Whenever I test my code, it magically just works. When I pass it to someone else, everything suddenly breaks. If you have to test your own code, you need at least a dedicated test environment.
In system tests, developers are a big no-no. We are currently running a system test for a production control system in a real production environment, which is a rather complex factory with robots and stuff. Developers tend to cut corners by doing hot-fixes in the middle of a test and altering data directly in the database. Half the time they don't even tell us who did what, or most importantly why! This makes it impossible to write decent bug reports, for instance. The worst thing is that they are interfering with the tests when they should be fixing the reported bugs.
So those are a couple of points on why I think a developer as a tester is (most likely) not a good thing. By hiring a couple of testers who know their stuff, you really increase the quality of your product.
Re:Too close to the subject... (Score:2, Insightful)
I am a test engineer (i.e., an electrical/computer engineer with an emphasis on development for test and QA, not the guy who runs equipment) and I write a lot of software. I'm going to answer the question of how to create stable software rather than address the mindset that all software needs to be unit or regression tested. Such testing definitely serves an important purpose in any complex system and can't be disregarded, but I see most implementations created to test conditions that will never fail. It's a matter of focusing your testing optimally.
Correctly implemented code is more a state of mind and what your focus is. I perpetually think of how code can fail and I am also a minimalist. This is an inherent conflict of interest, but it really makes my code stable and reusable by numerous other groups. I've been complimented that my product "just works."
Creating stable code is a matter of understanding your inputs and performing error and bounds checking as necessary, but not obsessively. The act of implementing test code often creates bugs when the goal is to reduce bugs. Correct error handling is another topic of consideration. Unit and regression testing is important when used in the right places.
I suppose what I'm getting at is the process of creating good code takes time, effort and skill. Developing testing criteria is similar. If you feel you are wasting your time doing code testing, you aren't focusing your efforts in the right place.
Re:Too close to the subject... (Score:4, Insightful)
If your users aren't testing your code, then you don't have users.
Of course, they shouldn't be the first people to test your code...
Re:Too close to the subject... (Score:3, Insightful)
No, the dev should never do final testing, since they have a subconscious incentive to do it badly in order to find fewer mistakes. However, nobody is as qualified to know the edge cases and weaknesses as the person who just wrote the code. So the best result comes from giving developers a strong incentive to find problems in their own code before the reviewer gets to it. The system I have seen work very well in the past is to create a culture of accountability: as part of investigating a bug, if nobody owns up to making a mistake, the SVN logs are traced to find the originator, and that is filed as part of the bug report. It sounds vindictive, but it is more a learning exercise to find the weaknesses in our development process. What it does, however, is give people a great incentive to test every case before committing, since their mistakes will soon be known to the whole team. Nobody tests better than the original dev when his pride is on the line.
However, this system's result is very much dependent on culture. In Australia, people are used to being told "you fucked up, mate" the minute their mistakes are uncovered; in other places, maybe not. The system also depends on the people expected to set a good example, such as managers and senior staff, acknowledging when they themselves make a mistake without prompting.
Re:Too close to the subject... (Score:2, Insightful)
Re:Focus your attention elsewhere (Score:2, Insightful)
My expectation would be that engineers are scientifically-minded and skeptical
This is actually the problem; being scientifically minded means you tend to miss the bugs that are exposed by someone doing something that makes you go 'what were they thinking to try that?'
Re:the developer should participate in system test (Score:3, Insightful)
System testers have to know what the corner cases are. They can't guess them all
That's why you need proper documentation, like use cases and technical designs, often (and preferably) written by analysts and not the developers.
Preferably you have a setup like this:
Business analyst writes documentation based on requirements from the business.
Developers build the application, based on the documentation.
Testers write testcases based on the documentation and test the software as soon as it is released to them.
Testing is a profession too.
And there are many tools and methodologies (TMap, ISTQB, Testframe etc) to ensure proper test-coverage and to have anything meaningful to say about the quality of the tested application.
Re:Too close to the subject... (Score:4, Insightful)
Yep. Much more fun to try to break someone else's code than it is to trudge back through your own. Pair up with someone (officially or unofficially) and swap code. Your goal is to find more bugs than he does. I've got a reputation for finding "gnats" that I'm quite proud of. Everyone "hates" that I find them, but the code from our group is usually pretty solid because of it, making the whole team look good.
Re:the developer should participate in system test (Score:3, Insightful)
"but, but, you're not supposed to do that".
Yeah, that one still makes my *hair* stand up. Most devs are superb at narrow-domain problem solving. Of course, reality and users tend not to accommodate them. I can't tell you how many times I've pushed the limits of a software package to get something done and had it fail miserably.
So, yes, they will have 37 instances of the software running at once, and yes, they will try to save one or more projects/files with the same name at the same time, and yes, no matter how many times the dev says it's the user's fault, the dev is wrong. Machines are designed to accommodate people, not the other way around.
Re:Too close to the subject... (Score:3, Insightful)
At the risk of sounding pedantic, I'd suggest that you limit your email testing to either addresses you own, or else domains like "example.com" that are reserved for testing. Domains like asdf.com are routinely flooded with unsolicited email due to people using them as bogus domain names. More importantly, by using real domain names while testing software, you risk inadvertently emailing sensitive data to somewhere it should not go!
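A small sketch of keeping test traffic inside the reserved namespace: RFC 2606 reserves example.com/.net/.org (and the .test, .example, .invalid, and .localhost TLDs) for exactly this purpose, so test mail can never reach a real stranger's inbox. The guard function below is an illustrative helper, not a standard API.

```python
# Addresses safe to use in tests: all use domains reserved by RFC 2606.
TEST_ADDRESSES = [
    "alice@example.com",
    "bob@example.org",
    "qa@mailserver.test",
]

def is_safe_test_address(address):
    """Return True if the address uses a domain reserved for testing."""
    domain = address.rsplit("@", 1)[-1].lower()
    return (domain in {"example.com", "example.net", "example.org"}
            or domain.endswith((".test", ".example", ".invalid")))

for addr in TEST_ADDRESSES:
    assert is_safe_test_address(addr), addr

# The classic mistake: a plausible-looking but real domain.
assert not is_safe_test_address("someone@asdf.com")
```

Wiring such a check into the test fixtures makes "oops, we mailed a real person" a test failure instead of an incident.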
Re:Three non-automated test cases (Score:3, Insightful)
The requirements are vague, such as "difficulty of problems presented to the user must increase gradually over the course of the session," where difficulty isn't defined rigorously.
Uh, you define the requirement. If you've written code, you've *already* done that; you've just made the requirement inaccessible, as it's now encoded in a programming language. Write it down first and everyone will be happier.
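For instance, "difficulty must increase gradually" can be pinned down as a checkable monotonicity property (a hedged sketch; `difficulty_for_round` and its linear ramp are invented for illustration):

```python
# Hypothetical difficulty curve: once written down as code, the vague
# requirement becomes a property the test suite can verify.
def difficulty_for_round(n):
    return 1.0 + 0.5 * n  # linear ramp, purely for illustration

# "Increase gradually" made precise: non-decreasing, and no jump of
# more than one level between consecutive rounds.
levels = [difficulty_for_round(n) for n in range(20)]
for previous, current in zip(levels, levels[1:]):
    assert current >= previous, "difficulty decreased"
    assert current - previous <= 1.0, "difficulty jumped too fast"
```

The exact thresholds are a design decision, but choosing them is precisely the act of defining the requirement.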
The application's specification includes a physical simulation with random, pseudorandom, chaotic, or otherwise nonlinear behavior
That's not unit testing. Most code in a simulation is *not* random, it's fed inputs from other parts of the code, which ultimately result in the behaviour you describe.
Automating system test for something like this is interesting. First, you *must* design the code to be testable. That means being able to, for example, define random number seeds and so forth, so that a run can be reproduced.
Second, the implicit assumption is that there is *some* way to determine, based on inputs, if the outputs of your software are correct. If you can't do that, you have far bigger problems. Of course, those tests might be fuzzy ("value should be between X and Y", "value should be greater than zero", etc), but they're still tests.
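Both points can be sketched together (the random-walk simulation is an invented stand-in for "nonlinear behavior"): an explicit seed makes any run reproducible, and the assertions are fuzzy bounds rather than exact values.

```python
import random

def simulate_walk(steps, seed):
    """A toy 1-D random walk, with randomness injected via an explicit seed."""
    rng = random.Random(seed)  # reproducible: same seed, same trajectory
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))
    return position

# Reproducibility: a run can be replayed exactly from its seed.
assert simulate_walk(1000, seed=42) == simulate_walk(1000, seed=42)

# Fuzzy properties: a 1000-step walk can never leave [-1000, 1000],
# and an even number of +/-1 steps always gives an even displacement.
result = simulate_walk(1000, seed=7)
assert -1000 <= result <= 1000
assert result % 2 == 0
```

A real simulation would have richer invariants (conservation laws, bounds on energy, etc.), but the pattern is the same: seed for reproducibility, then assert properties rather than exact outputs.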
Third, realize automated testing isn't the end-all and be-all of testing. For some systems, you will be forced to have some amount of manual testing (for example, integration testing large, complex software systems can be difficult to do in an automated fashion). That said, I'm willing to bet, for *most* systems, you can automate most if not all of the testing.
You know your automated test suite's coverage isn't 100%, and you're doing non-automated testing to find things that your automated test suite is missing.
Beg the question much? If your test suite is missing things, fill in the gaps. If you can't fill in the gaps, ask yourself why.