Software Technology

What Does a Software Tester's Job Constitute?

First time accepted submitter rolakyng writes "I got a call from a recruiter looking for a software test engineer. I'm a software engineer and my job is development and testing. I know I mentioned testing, but I'm pretty sure it's totally different from professional testing practices. Can anyone shed light on what a software test engineer's day-to-day responsibilities are? They said they'll call me back for a screening and I want to be ready for it. Any tips?"
This discussion has been archived. No new comments can be posted.


  • Re:Ummmm (Score:3, Informative)

    by tripleevenfall ( 1990004 ) on Friday February 10, 2012 @06:12PM (#39000381)

    Taking bulk testing responsibilities off developers so they can work on more important stuff.

  • by wwbbs ( 60205 ) on Friday February 10, 2012 @06:16PM (#39000441) Homepage
    I've never heard of a Software Engineer who did not know what a Software Test Engineer does. Perhaps they meant User Acceptance Testing? You're the middleman who takes requirements from the client and makes sure that what the developers produce works within the framework the client provided. Generally you create mock-ups and run through the data until all the results and the interface are what the client expects.
  • by Lashat ( 1041424 ) on Friday February 10, 2012 @06:16PM (#39000443)

    Anything from glorified mouse-clicker and result recorder on up to programming test cases and developing an automation framework.

    The line blurs depending on WHO you work for and WHAT you work on.

    My best suggestion is to ask the person offering the job what they have in mind for someone of your skillset.

  • by CannonballHead ( 842625 ) on Friday February 10, 2012 @06:20PM (#39000491)

    I'm a software tester for data moving products.

    While a lot of testing is repetitive, the repetitive stuff can often be automated. For example, there's functionality that exists in every release ... so automating those testcases such that they are easy to run hands-off is good. This automation is often something the tester will be doing.
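    The kind of hands-off automation described above can be sketched with Python's built-in unittest; the data-moving function here is a hypothetical stand-in for the product under test, not anything from the poster's actual job:

```python
import unittest

def move_record(record):
    """Hypothetical stand-in for one data-moving operation."""
    if "id" not in record:
        raise ValueError("record missing id")
    return dict(record, moved=True)

class ReleaseSmokeTests(unittest.TestCase):
    """Functionality that exists in every release, automated so it
    can run hands-off before each test cycle."""

    def test_basic_move(self):
        self.assertTrue(move_record({"id": 1, "payload": "x"})["moved"])

    def test_missing_id_is_rejected(self):
        with self.assertRaises(ValueError):
            move_record({"payload": "x"})

# Run the suite programmatically, so it can be wired into a nightly job
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReleaseSmokeTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

    Once testcases live in a runner like this, "easy to run hands-off" just means invoking it from a scheduler or build script.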

    For new features, what typically happens in my group is that the developers will explain how they implemented a given feature and how it should work. We are responsible for testing this feature - with any tips that dev gives us - as well as trying to put it through various scenarios that cause it to break.

    In my product line, for example, we do clean reboot and power cycle/crash testing. What happens to our product when the power goes out? What happens to the data that was being moved? Does it recover? That sort of thing. This requires thought. And, contrary to some comments here, development is happy when we find errors or scenarios that they did not plan for in their coding: we all want our business to SUCCEED and make money, which means customers need to be happy with the product. The earlier we find them, the better.

    Day to day activities? Well, I'd break it into two major sections.

    Planning

    The test group, in my case, is responsible for reading through the planned product's features and changes and coming up with a test plan accordingly. This is then reviewed by developers, the test group, etc. Usually, this period is when a lot of work on automating previously-manual testcases can be done, in preparation for the next release. There is also planning for what testing environments will need to be set up, and starting to set them up... it depends how big your group is, I guess. Since mine is relatively small, all the testers help out with setting up various machines for testing, too.

    Testing

    During testing, the test plan is executed. Day-to-day activities include test environment setup, manual testing, automated testing, discussing potential issues with developers, and opening work requests (WRs) if it's decided that something is really an issue and not a weird environmental problem, etc.

  • Variety Pack! (Score:5, Informative)

    by Isarian ( 929683 ) on Friday February 10, 2012 @06:21PM (#39000511)

    As a software tester at my job, my work includes:

    - Building test scripts for each application we develop (I use Google Docs spreadsheets)
    - Performing feature-specific or fix-specific regular testing of applications during the development cycle
    - Arguing with developers over the severity of bugs
    - Coordinating full-scale software testing before each release
    - Updating documentation when developers fail to do so
    - Arguing with developers over the importance of different features in terms of development time

    A big part of what makes or breaks you as a software tester is the willingness to go off the beaten path. For example, when I test, this is what I consider:

    - Hmm, that's an interesting text field, and it's meant for an IP address. I wonder what happens if I type "abc::1234**!!whymeeee" into it (input validation)
    - This is a resizeable dialog - if I resize it absurdly in vertical/horizontal, do elements in the dialog scale correctly?
    - Here's a text area that's meant for a paragraph or two of text. If I put the Iliad into it, does the text run off the page? (bounds checking, text limit checking)
    - Here's a dialog that has to validate text - what are all the possible errors it could encounter, and are the error dialogs properly implemented for each? (check all error condition handling possibilities)
    - This dialog is localized into 15 languages - is the page sized/formatted correctly in all languages?
    - This program is meant to be installed to C:\Program Files\Blahcompany\Product - what if I install it to a nonstandard location?
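    The first idea above, input validation on an IP-address field, can be sketched as a data-driven check using Python's stdlib ipaddress module; the validator here is an invented stand-in for whatever sits behind the real field:

```python
import ipaddress

def is_valid_ip(text):
    """Hypothetical validator like the one behind the IP-address field."""
    try:
        ipaddress.ip_address(text)
        return True
    except ValueError:
        return False

# The kinds of hostile inputs a tester throws at the field:
cases = {
    "192.168.0.1": True,
    "::1": True,                      # IPv6 shorthand
    "abc::1234**!!whymeeee": False,   # the garbage string from above
    "999.999.999.999": False,         # out-of-range octets
    "": False,                        # empty input
}

for text, expected in cases.items():
    assert is_valid_ip(text) == expected, text
```

    Keeping the hostile inputs as a table makes it cheap to add a new case every time a field surprises you.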

    This will ultimately put you at odds with a lot of developers because your job, every day, is to make the assumption that they have made mistakes that you will find. I enjoy it, and find it to be a rewarding experience, but that's because I work at a company that highly values its software testers and takes QA as a serious priority. Try to get a feel for how this company treats QA, because if all they're doing is using you as the fall guy for bugs you made them aware of before a release, it'll be no good.

  • by chuckfirment ( 197857 ) on Friday February 10, 2012 @06:26PM (#39000583)

    OP is correct - the job of a software tester is to try to break the software. I've enjoyed working in software quality assurance (SQA) for over a dozen years now. I get paid to break things all day, and when I do break it - I don't have to fix it.

    SQA is very different depending on where you go and what you're testing.

    Web Applications - you'll want programming experience so you can write flexible automated scripts. You can test manually for every supported browser/OS combination, but it's tedious.
    Desktop Applications - Sometimes manual testing is enough. If the software is large, you'll likely want to automate.

    Large companies that move fast will want automation. Small companies that move fast will want automation, but might not realize it. You can get away with manual testing at small slow companies.

    You don't need automation skills to be a software tester. You do if you want to become a software tester with a high income.

  • Re:scripts (Score:5, Informative)

    by chuckfirment ( 197857 ) on Friday February 10, 2012 @06:30PM (#39000619)

    I have to disagree. I've never worked somewhere like this, and I've been in SQA for many years at many jobs.

    If you want to be a low-paid button pusher, yes. Do the same thing over and over, all day long with no deviation. If you want to enjoy your job testing, test the software. Try to break it. Troubleshoot. Do things the developers wouldn't expect. (After all, who expects an apostrophe in a name field? "We only expect regular text, right Mr. O'Hanlon?")

    The job of a software tester (tester, not button pusher) is to try to find all the defects in the software and report them to development so they can be fixed.
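    That apostrophe trap is easy to demonstrate. A sketch, with invented validators standing in for what a developer might actually write before and after the bug report:

```python
import re

def naive_name_ok(name):
    """Hypothetical over-strict validator: 'regular text' only."""
    return re.fullmatch(r"[A-Za-z ]+", name) is not None

def fixed_name_ok(name):
    """Validator after the tester files the bug: allow ' and -."""
    return re.fullmatch(r"[A-Za-z' -]+", name) is not None

assert naive_name_ok("Smith")
assert not naive_name_ok("O'Hanlon")   # the bug a tester catches
assert fixed_name_ok("O'Hanlon")
assert fixed_name_ok("Day-Lewis")      # hyphenated names break it too
```

    The tester's value here is not the regex fix itself but knowing that real names contain apostrophes and hyphens in the first place.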

  • by perpenso ( 1613749 ) on Friday February 10, 2012 @06:32PM (#39000659)

    Taking bulk testing responsibilities off developers so they can work on more important stuff.

    Not quite. Developers often make poor testers. Software tends to get debugged and tuned for the way developers use the software, which is not necessarily how others (in particular customers) will use the software. How many developers have written a piece of code, tested it conscientiously themselves, presented it to others expecting no problems, and watched these other folks find serious bugs within minutes?

    Having dedicated testers between developers and customers yields better products, even when the developers take testing seriously.

  • by Firehed ( 942385 ) on Friday February 10, 2012 @06:48PM (#39000847) Homepage

    Indeed. Some bugs come from weird ways users try to do things that developers simply wouldn't think to test. Case in point, a bug I found today: someone bought a six-month subscription by entering 0.5 as the quantity on a 1-year subscription. Our code wasn't expecting non-integer quantities, but happily did the math to get the line-item subtotal. When the data was stored, the 0.5 quantity went into an integer column, which the database cast to an integer by rounding down, and suddenly qty * price != subtotal. It was caught and quickly fixed by a data integrity check, but a QA/dedicated testing person who thinks of weird user interactions like that could have prevented it from going out to production in the first place.

    Now we have one more thing added into unit tests so it won't happen in the future, but there you go. The code was not untested, it was just used in an unpredictable way.
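    A unit test guarding that class of bug might look like this sketch; the function name and cents-based pricing are invented, not the poster's actual code:

```python
def add_line_item(quantity, unit_price_cents):
    """Hypothetical order routine after the fix: reject anything
    that is not a whole, positive quantity."""
    if not isinstance(quantity, int) or isinstance(quantity, bool):
        raise ValueError("quantity must be a whole number")
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return {"qty": quantity, "subtotal": quantity * unit_price_cents}

# Happy path still works
item = add_line_item(2, 4999)
assert item["subtotal"] == 9998

# Regression check for the 0.5-quantity bug described above
for bad in (0.5, 0, -1, "1"):
    try:
        add_line_item(bad, 4999)
    except ValueError:
        pass
    else:
        raise AssertionError(f"accepted bad quantity {bad!r}")
```

    The point is that the invariant qty * price == subtotal is now enforced at the boundary, before anything reaches the integer column.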

  • Re:Boring test cases (Score:5, Informative)

    by Anonymous Coward on Friday February 10, 2012 @06:49PM (#39000861)

    Posting as AC but I've been a tester for over 10 years at different companies, many of them contract work. I very much enjoy the work.

    Let me clarify many of the things about being a software tester (which can also include embedded software/firmware). From my perspective:

    Following the test script as written is only a small amount of the big picture.

    Issue characterization. It's not good enough just to report the issue. How often is it reproducible? Is it device specific? Configuration specific? Timing specific? Provide line-by-line steps to reproduce, the observed behavior, and the expected behavior. Is it only on first-time launch? Does it reproduce on a variant? Localization form and fit: even if the language is not fully understood, when checking localized builds for form and fit, is there any truncation or overlap? Does text go outside the button areas? Does it always reproduce, or how many times out of how many attempts?

    Severity determination--not everything is going to be a showstopper but properly rate the ones that are showstoppers. Also, a low severity defect can still have a higher priority if more than 50% or so of users will see the issue.

    Exploratory testing skills are equally essential. Even after running the script line by line in order, what else wasn't covered? What if a different file is used, a purposely corrupted file--is the software still robust?
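    The purposely-corrupted-file idea can be sketched concretely; the loader here is a hypothetical stand-in, and the robustness claim being tested is "fail cleanly with an error, never crash":

```python
import json
import os
import tempfile

def load_config(path):
    """Hypothetical loader under test: returns (data, error)."""
    try:
        with open(path) as f:
            return json.load(f), None
    except (OSError, json.JSONDecodeError) as e:
        return None, str(e)

# Exploratory test: feed it a purposely corrupted (truncated) file
fd, path = tempfile.mkstemp()
os.write(fd, b'{"name": "test", "size":')   # cut off mid-document
os.close(fd)

data, err = load_config(path)
assert data is None and err is not None     # error reported, no crash
os.unlink(path)
```

    The same loop extends naturally to zero-byte files, binary garbage, and files that vanish mid-read.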

    Quick turnaround on resolved issues. Verify the issue is fixed and close it, or reopen it with additional information as to why it still occurs. However, if the issue was fixed and a new issue is a side effect of the fix, many testing teams close the resolved issue and submit a new one. They do not usually fold the new issue into the existing one except as a comment, so that closing one issue does not lose the other.

    Teamwork--collaborating with other testers, developers, managers, maybe even conference calls with outsource vendors.

    Test case authoring. Even in a pure manual testing environment, existing test cases need to be updated (or removed if a function no longer exists), new test cases need to be added for new functionality, best coverage in the least amount of test cases is the goal.

    Automated testing--developing test scripts in the automated testing environment. This is programming for testing purposes in many cases.

    It all varies from company to company, project to project--but in a lot of cases being a software tester is not just about going to the next test case and marking it pass, fail, or blocked/not possible, each working day, every day.

    I hope this helps.

  • by networkBoy ( 774728 ) on Friday February 10, 2012 @06:52PM (#39000885) Journal

    Yup.
    I write test software, specifically software that acts on firmware. The majority of our devs do unit tests, but once the whole thing is assembled and used as a system, interactions show up that are not ideal.
    You need some form of QA department between Devs and Customers.
    As someone who is already familiar with software engineering you likely would be an ideal candidate for test engineering, because you know how software generally works, and can write meaningful bug reports.
    For an interview, if they ask you for strengths, I would focus on your development background and ability to write meaningful bug reports.
    -nB

  • by asliarun ( 636603 ) on Friday February 10, 2012 @07:04PM (#39001051)

    Taking bulk testing responsibilities off developers so they can work on more important stuff.

    Not quite. Developers often make poor testers. Software tends to get debugged and tuned for the way developers use the software, which is not necessarily how others (in particular customers) will use the software. How many developers have written a piece of code, tested it conscientiously themselves, presented it to others expecting no problems, and watched these other folks find serious bugs within minutes?

    Having dedicated testers between developers and customers yields better products, even when the developers take testing seriously.

    Actually, that is not necessarily true. I get what you are trying to say, but you seem to gloss over the differences between QA, manual tester, and what the OP was referring to: Software Test Engineer.

    To highlight some of the differences:

    QA is responsible for "assuring quality". This is different from QC which is "checking quality". More often than not, a good QA is a process expert, with the assumption being that good processes ensure good quality. Their goal is to avoid the problem, not to detect the problem or fix the problem. Where the line gets blurred is the fact that a QA often performs the role of a manual tester. This usually depends on the size of the team.

    Manual testing is usually QC - understanding what to test, how to test, and going ahead and testing it. They start off by translating the requirement specification (or user stories if you are agile) into a suite of test cases, add other test cases that might be non-functional or regression related, and finally test the system manually every time before it is released to customers.

    Generally (although not always true), a "test engineer" is more of a developer than a tester. They are usually tasked to develop test frameworks using third-party tools, or even to create their own framework. The former usually involves scripting and lightweight coding; the latter can involve full-blown coding. They can be developing a test framework for executing and managing unit tests and functional tests (often white box), and integration tests, regression tests, and performance tests (often black box). While many project teams skimp on devoting this much engineering to testing, it can give huge returns, perhaps even better returns than development can after a certain point.

    To be fair, the OP has not mentioned anything beyond "software test engineer," so the role might very well be manual testing. However, the word "engineer" leads me to believe it is more of an automation role. Having said this, companies often embellish their titles with "engineer" to make them sound weighty.

  • Avoid it. (Score:4, Informative)

    by NilleKopparmynt ( 928574 ) on Friday February 10, 2012 @08:52PM (#39002047)

    I have been working in the testing field for almost 20 years, but after a five-year stint at Microsoft I found it to be such a horrible experience that I will never work in testing again. There are numerous problems; here is a selection.

    1. As a tester at Microsoft your main use is as a scapegoat. If you find a big bug then it is all your fault. No matter when you found it, you should have found it earlier. It is a pretty weird experience to do your job properly and well and still be blamed for doing a bad job.

    2. As a tester at Microsoft you really are a second class citizen. You are considered less competent and more stupid. You are also far less important than anyone else since what you do does not explicitly impact the product.

    3. As a tester at Microsoft you do not have a career. It is pretty easy for a newbie to reach SDET2 but very few reach senior level. Where I used to work there was a 1:1 ratio between testers and devs but it was a 1:7 ratio between senior testers and senior devs.

    4. When you point out the problem with testers not having careers, it only results in you having to listen to the director of test lying to you for an hour or two regarding how they are aware of the problem and how more testers are now going to be promoted. The result that year was that 12 devs reached senior level but not a single tester did.

    5. If you are good at your job, people are going to hate you. Your job (among many other things) is to find bugs in the product, and people really love having someone point out all the mistakes they make.

    6. If you have bad luck (like me) then you might end up in an automation swamp where the devs repeatedly break your tests and you spend an enormous amount of time fixing all the breaks. This really murders your competence.

    7. If you have really bad luck (like me) then you might find yourself with a test manager that has nothing but contempt for testers and their competence plus thinks it is really important for testers to do a lot of mundane manual testing.

    8. Also, having "tester" on your CV is bad if you want to pursue a career in software development. It will make it harder for you to get a job as a software developer.

  • Re:Boring test cases (Score:4, Informative)

    by shish ( 588640 ) on Friday February 10, 2012 @09:54PM (#39002387) Homepage
    If your test is simple enough that it can be turned into a script for a person to run, turn it into a script for a computer to run, and then go test some other, more interesting part of the system.
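    One way to read that advice: a written manual script is already data (step, action, expected result), so drive it through one loop. The product functions below are invented stand-ins for whatever interface the real software exposes:

```python
# Stand-ins for the product under test (hypothetical):
def login(user):
    return user == "alice"

def open_report(name):
    return name in ("daily", "weekly")

# The manual script, transcribed as data: (description, action, expected)
manual_script = [
    ("log in as alice",        lambda: login("alice"),       True),
    ("open the daily report",  lambda: open_report("daily"), True),
    ("reject unknown report",  lambda: open_report("x"),     False),
]

for step, action, expected in manual_script:
    assert action() == expected, f"step failed: {step}"
```

    Once the steps are data, adding a case is one line, and the human goes off to do the exploratory testing a script can't.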
  • by networkBoy ( 774728 ) on Saturday February 11, 2012 @12:07AM (#39002853) Journal

    I maintain the test infrastructure kernel. It's a constant battle to keep up with dev changes: they change a struct, I change the struct. For the most part we are well linked (we import their headers), but some things simply cascade downstream badly enough to prevent us from even compiling. It's a niche job, half maintenance coder, half lone cowboy. I really like it, but most people don't last on my team. We have three devs who have been at this one job for more than 3 years. Everyone else either goes to pure test development (our customers, and sounds like where you are) or to the dev side of the house (you can always tell which ones are going to leave outright because they bitch about the maintenance coding).

    One word of advice to new test devs out there: Parent post is right, most test development is spent on documentation, and a full 3/4 of your coding time is likely to be some form of maintenance coding. Deal with it well and you will not hurt for money or employment. Bitch about it and your co-workers will not like you all that much, which makes for very long working days.
    -nB
