
Ask Slashdot: Building a Software QA Framework? 58

New submitter DarkHorseman writes: I am looking into a new position with my employer and have the opportunity to work with the development and QA team to further the creation of a Quality Assurance Framework that will be used into the long-term future. This is software that has been in continuous development, in-house, for >10 years and is used company-wide (Fortune100, ~1000 locations, >10k users, different varieties based on discipline) as a repair toolset on a large variety of computers (high variability of SW/HW configuration). Now is the time to formalize the QA process. We have developed purpose-built tools and include vendor-specific applications based on business need. This framework will ideally provide a thorough and documentable means by which a team of testers could help to thoroughly ensure proper functionality before pushing the software to all locations. The information provided by Lynda.com along with other sources has been invaluable in understanding the software side of QA but I have seen very little in terms of actual creation of the framework of the process. What would you consider the best resources to prepare me to succeed? Even if your QA needs are for smaller projects, what advice do you have for formalizing the process?

  • by TheBilgeRat ( 1629569 ) on Monday September 28, 2015 @02:02PM (#50614387)
    Tons of marketing speak and a link to Lynda.com. Write a document explaining its use, get marketing to put a price tag on it, and call it good.
    • by Apocryphos ( 1222870 ) on Monday September 28, 2015 @02:26PM (#50614615)
      I'm about to accept an offer to take over the QA process at a Fortune 100 company.

      How do you test software?

      uh...
      • by Anonymous Coward

        From the submitter's article, the solution is obvious: give Lynda.com all your money!

        Considering the submitter's ability to think, sounds like a reasonable expectation to him! Considering the audience that will take up such a solution, sounds like fools and their money can still be easily parted!

    • Forget Lynda.com. Use Selenium, all the cool kids are doing it for QA.
      • Forget Lynda.com. Use Selenium, all the cool kids are doing it for QA.

        Not really. The coolest kids still use Watir (although admittedly it is now built around Selenium WebDriver.)

        • Is there any advantage of Watir other than it's built in Ruby?
          • It's probably important here to distinguish between Selenium, Selenium Webdriver, and Watir Webdriver.

            Selenium by itself is more of a passive testing framework. For years, Watir enabled testing by "driving" (automating) the browser, so that you could actually interact with the page (click buttons, select from dropdowns, etc.).

            Originally, that was done only in Internet Explorer using the COM interface. Later, a JavaScript interface layer was developed which worked with more browsers (and other OSes), a
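
            For anyone who hasn't used either, "driving" a browser through the Selenium WebDriver Python bindings looks roughly like the minimal sketch below; the URL and element IDs are invented for illustration, and Watir wraps the same WebDriver plumbing in a Ruby API.

            # Minimal sketch of WebDriver-style browser automation (Python bindings).
            # The URL and element IDs below are hypothetical.
            from selenium import webdriver
            from selenium.webdriver.common.by import By

            driver = webdriver.Firefox()   # or webdriver.Chrome(), etc.
            try:
                driver.get("https://example.com/login")
                driver.find_element(By.ID, "username").send_keys("tester")
                driver.find_element(By.ID, "password").send_keys("secret")
                driver.find_element(By.ID, "submit").click()
                assert "Dashboard" in driver.title   # verify we actually landed somewhere
            finally:
                driver.quit()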
    • by nmpg ( 4032029 )
      My thoughts exactly. This Lynda link doesn't seem innocent... oh well: guys, there's Pluralsight too, and Tuts+, and Udemy... and YouTube :)
  • Use your toolchain (Score:5, Insightful)

    by plover ( 150551 ) on Monday September 28, 2015 @02:18PM (#50614555) Homepage Journal

    Presumably you are using modern tools to compile and build your software, manage source code, and manage your project work. Many of these tools will either incorporate or integrate with bug tracking software and testing frameworks. If there's a native bug tracker available, select it. If there's a native test framework available, use it.

    What you need is a least-friction option, where testers, analysts, and developers can all see the bugs, write up the bugs, test the bugs, and fix the bugs. You don't need "The Most Advanced Framework Available Today", you don't need "The Best Test Tracking and Reporting Software Ever Produced", you need a solution that works well for all the people involved. With a third-party tool, the developer has to stop working, log in to the bug tracker, read the bug details, switch back to the development environment, make some changes, switch back to the bug tracker, write up the findings, switch to the test framework, execute a test, switch back ... All that switching is a huge productivity killer. The smoother the integration, the more effective and efficient the engineers will be - and that's where your expenses really lie.

    Here's the problem. Some organizations say "hey, let's evaluate and buy the bestest test software out there" without giving a thought to the developers. So the QA department runs off on their own, buys a tool, and starts building tests in it that the developers can't run. If the developers can't run the tests, they don't know if they're fixing the problems correctly, so they waste tons of time. Worse, if a developer makes a change that breaks some test, they won't know until that result is reported to them, possibly days, weeks or even months later, depending on your QA cycles. During the intervening time, the developer continues to write code based on their original faulty change, creating technical dependencies on what may be a completely flawed base assumption. When the test finally reveals the flaw, the developer's choices are limited to: A) rewrite everything according to the better architecture uncovered by the flaw, or B) make a scabby patch so the test passes. If you choose A, the software's release will be expensively delayed. If you choose B once, you'll likely choose it again, you're incurring technical debt, all your software is likely to be crap, and no good developers will want to work for you. The correct answer is of course C) don't produce tests the developers can't run themselves on demand, or tests that aren't automated as a part of the build process.
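
    As a minimal sketch of option C: the tests the build runs should be the very same tests a developer can run on demand, e.g. a plain pytest file that both the developer and the build server invoke with the same command. The function under test here is a made-up stand-in, not the submitter's actual toolset.

    # qa_smoke_test.py -- same file, same "pytest" command, whether a developer
    # runs it locally or the automated build runs it on every checkin, so a
    # broken test is visible immediately rather than weeks later.

    def classify_disk_health(reallocated_sectors: int) -> str:
        """Toy stand-in for a repair-toolset check."""
        return "failing" if reallocated_sectors > 0 else "healthy"

    def test_healthy_disk():
        assert classify_disk_health(0) == "healthy"

    def test_failing_disk():
        assert classify_disk_health(5) == "failing"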

    • by PPH ( 736903 )

      Depends on how anally retentive management is about compartmentalizing processes. There's a story I heard (can't vouch for its validity) about a guy who got a job as a coder at a (very) major s/w company. His job was to write code. Period. And then turn it over to be compiled/tested. Errors (even compile time typos) were counted and the package was handed back to the coder. Statistics were collected. Pay and promotions were based upon lines of code and number of errors.

      This was back in the days of DOS PCs

      • by plover ( 150551 )

        Having seen Waterfall inaction, I find that story sadly believable. The moment you start siloing what should be a single person's responsibility, the turf wars emerge to amplify the chaos. And a developer's responsibility should encompass everything from testing through coding to design.

    • Be careful which tests you use in a Continuous Integration build, as automated functional tests can often be slow and you don't want a CI build to take much more than 10 minutes. Have a separate build scheduled to run overnight that runs any long-duration tests. Also look at your coverage and try to turn any existing defects into regression tests (preferably automated).
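
      One way to make that split, sketched with pytest markers; the marker name and commands are just conventions, not anything pytest mandates.

      # Fast tests run on every commit; anything marked "slow" only runs in the
      # nightly job. Register the "slow" marker in pytest.ini to avoid warnings.
      import time
      import pytest

      def test_quick_sanity_check():
          assert 2 + 2 == 4              # cheap check, runs in the CI build

      @pytest.mark.slow
      def test_long_running_functional_flow():
          time.sleep(1)                  # stand-in for a slow end-to-end scenario
          assert True

      # CI build:      pytest -m "not slow"
      # Nightly build: pytest -m slow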
  • by Ungrounded Lightning ( 62228 ) on Monday September 28, 2015 @02:19PM (#50614563) Journal

    I am looking into a new position with my employer and have the opportunity to work with the development and QA team to further the creation of a Quality Assurance Framework that will be used into the long-term future.

    You haven't taken the position yet? Don't!

    Unlike other industries (such as the auto industry, where QA is often the highest ranking and most powerful professional short of plant manager, respected for their global understanding of both engineering and the math of probability and statistics, empowered to order things fixed and, if appropriate, stop the production until they are), the pointy-haired bosses of software development generally view QA people (even the write-the-tools types) as failed developers. "If you were good you'd be a developer." is a typical quote. A QA position on your resume is an albatross around your neck.

    One of the better programming talents I know (and I know a lot of the big names), when starting out, took a QA position as a stop-loss. After that she couldn't get a developer position - even those where they CLAIMED to give her a developer position actually put her in the QA department.

    QA is who they cut first, when things get tight. This is massively stupid - because it's who you need to get OUT of the jam. But understaffing QA shows up on the bottom line a couple years out, after the PHB has lined up his next position, while understaffing development shows up quickly.

    • by Anonymous Coward

      100% correct. Plus QA is frequently offshored as it is just as effective.

      • by Anonymous Coward

        Offshoring QA is a common mistake that companies make because they think it's a cut-and-dried task. But every company I've worked with that offshored the QA department has ended up with less quality and more developer time spent spinning their wheels on false-positive test failures than with a QA staff in house. The worst I've seen is when the offshore QA team stops their test plan after a single bug is found. Then it's a 24-hour turn-around before the test plan can continue. This causes huge delays in releasing the s

    • > QA is who they cut first, when things get tight. This is massively stupid - because it's who you need to get OUT of the jam.

      In general, debt is normally dumb. Except when it's for an offsetting investment. OCCASIONALLY, in business debt is smart. For example, being first to market for a rapidly growing sector may well justify debt. Survival mode may justify debt. Technical debt, poor code and systems, IS debt. Get the product out now, pay the cost of poor code later. That's NORMALLY a bad idea

      • Technical debt, poor code and systems, IS debt. Get the product out now, pay the cost of poor code later. That's NORMALLY a bad idea.

        It's also very rarely necessary. Writing code correctly is usually faster in the long term and the short term.
        A lot of times people think "correct" is slower because they think "correct" means "writing a lot of framework code."

    • by Anonymous Coward on Monday September 28, 2015 @05:37PM (#50616007)

      I've been a software tester for more than 10 years. I was regarded as an excellent software developer (by peers and managers) when I *chose* to become a software tester because I decided that I like science more than engineering, which in my opinion, is the key difference between software testing and software development.

      I have not yet come across any managers (pointy-haired bosses) who genuinely believe that testers are failed developers, but I have come across many developers who regard testers as lesser beings. One even told me with a blank look on his face - "I think you are good enough to be a developer?".

      It doesn't help that there are indeed many failed developers working as software testers, spouting nonsense like "I am not technical" or "Testing is so easy".

      Software testing is science. It's an intellectual activity. It's challenging. It's very difficult to perform efficiently and effectively. Anyone claiming otherwise simply misunderstands software testing. (If you are one of those people, I recommend that you look up James Bach.)

      Sure, I could have continued my career as a developer because I don't like others thinking of me as a failed developer. But in the end, I chose my path based on what I liked rather than what others (incorrectly) thought about the profession.

      We need more capable people to perform meaningful software testing to dispel the myths in the industry, including ones posted below such as "QA is just as effective when offshored". Telling people not to become testers because others incorrectly view them as failed developers is not helpful advice, either for those people or for the industry as a whole.

      • by Anonymous Coward

        It is true that we developers do look down on software testers. They are paid less as well. Frequently they are offshored. I'm glad you like it, but you chose the wrong path if you are looking to maximize your usefulness and earning power in the industry. No offense, but that is the truth.

        • by cowdung ( 702933 )

          It is true that we developers do look down on software testers. They are paid less as well. Frequently they are offshored. I'm glad you like it, but you chose the wrong path if you are looking to maximize your usefulness and earning power in the industry. No offense, but that is the truth.

          Wow... sounds like you come from a twisted corporate culture. Why dis someone whose work protects you from making a fool of yourself in front of customers?

          Testing is critically important in a successful software project.

    • by valles ( 2826761 )
      QA people need not apply.
  • by selectspec ( 74651 ) on Monday September 28, 2015 @02:19PM (#50614565)

    ...how to build a software company.

    Anyway, there is a lot to QA, but everything starts with the build and bug/feature tracking systems. QA is anchored to these two pillars. Formalizing (and documenting) how these systems work and interact is critical to correctly tracking issues, reproducibility, and inter-QA-to-developer communication. A strong nightly build system coupled with a properly used tracking tool (like Bugzilla, Jira, etc.) is pretty much the cornerstone of any framework.

    The build system itself breaks down into the nightly automated build process as well as the source revision control system. Testing should start before QA ever sees a build, by testing the build after checkins and running developer-implemented unit tests. Tagging release candidates and making sure that QA is working with the correct build is critical to optimizing QA efficiency (a rough sketch of such a pipeline appears at the end of this comment). Nothing is worse than having QA run tests on the wrong build.

    Next, QA testing itself gets divided into two major categories: automated and manual testing. Automation has a high initial cost of development, but pays off in the long run with lower labor, especially for mundane tasks that are too tedious to reliably test by hand.

    Automation typically is closely linked into the build system. Manual testing breaks down into a bunch of different sub-categories, but all manual testing should follow documented test plans. Test plan documentation can go overboard, but at least you want a wiki of some kind (Confluence, etc.) to document how development wants QA to perform the testing.

    Manual testing is often where labor is the biggest factor, and thus it tends to get the most attention from management. These folks tend to be either "testers" who are good at testing or system integrators who are good at staging environments that look like real-world customer environments. Much here of course depends on the nature of your product and your customer.

    However, with both manual and automated testing, it is important to have infrastructure for creating clean environments to test with. This usually involves some form of virtualization. QA needs to be able to produce clean images of systems to test with. This part of QA has never been easier, unless you are a hardware company and still have to use a lot of physical equipment instead of VMs.

    Release quality is determined by comparing the list of open issues targeting a given milestone and their status. Again, a well integrated tool for issue tracking is critical in the process of release management.

    Lastly, I would not wait 10 years to formalize a QA plan. Every software startup that I know of had a QA director starting to fill this plan out within a few months of getting seed money. QA is integral to software (and hardware) engineering.
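
    A very rough sketch of the nightly build-test-tag pipeline described above; the build command and the tag naming scheme are invented for illustration, not prescriptions.

    # nightly_build.py -- rough sketch only: build, run the same unit tests the
    # developers run, and tag a release candidate only if both steps pass.
    import subprocess
    import sys
    from datetime import date

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    def main():
        if not run(["make", "build"]):        # whatever your build entry point is
            sys.exit("nightly build failed")
        if not run(["pytest", "tests/"]):     # developer-implemented unit tests
            sys.exit("tests failed; not tagging a release candidate")
        tag = "rc-" + date.today().isoformat()
        run(["git", "tag", tag])              # QA picks up exactly this tag
        print("tagged release candidate", tag)

    if __name__ == "__main__":
        main()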

  • by Dino ( 9081 ) * on Monday September 28, 2015 @02:23PM (#50614593) Homepage

    I have built a variety of automated test frameworks for large enterprises, and I have seen them both succeed and crash-and-burn spectacularly. Let's start with the successes:

      - High availability / Robust
      - Staging environment for automated test developers
      - Performance metrics
      - Easy to understand test results

    The failures were due to:

      - Brittle and poorly designed tests which don't run the same in the CI system versus the tester's machine
      - Testers committing bugs into the test environment
      - No performance metrics
      - Hard-to-understand failure results require duplication and deep analysis

    As you can tell, the failures are the opposite of the successes. Allow me to further explain.

    The most important item is that the tests always work and are always running. This means test machines and back-up test machines. Running the same test on three different machines is even better because you can throw away or temporarily ignore outliers. Outliers need to be addressed eventually, but day-to-day, developers and managers just want to know if the code they committed causes failures. Having the test be in any way unreliable causes faith in the tests to disappear. The tests must run, and they must run well. Test environments go down or require maintenance, and you want to be able to continue to run tests during these downtimes. Treat the tests like a production system.

    Next, a big improvement I've seen is to have automated test developers contribute new work to a separate-but-equal staging environment. Automated test teams run on an Agile/Scrum iteration and only "release" their new tests at specific times. Another thing which reduces faith in test results is tests breaking due to the fault of automated test developers -- which happens all too often.

    Automated tests are the ideal platform for generating performance metrics.

    Lastly, a big pet peeve of mine is incomprehensible test failures. Test failures should obviously describe the set-up, expected, and actual result (see the sketch at the end of this comment). If test failures are obtuse and require a lot of time to analyze and triage -- that is wasted time that could have been spent fixing the root cause.

    Good luck! If past experience is any indicator, you will be spending far more time and money than you ever imagined to create a robust system that developers and managers will have faith in.
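
    To make the point about understandable failures concrete, here is a minimal sketch of a failure message that states set-up, expected, and actual in one place; the scenario, values, and helper name are invented for illustration.

    # When this assertion fails, the message already contains the set-up, the
    # expected result, and the actual result -- no digging through logs.
    def assert_with_context(setup, expected, actual):
        assert actual == expected, (
            "setup: " + setup +
            "\nexpected: " + repr(expected) +
            "\nactual: " + repr(actual)
        )

    def test_disk_repair_reports_fixed_sectors():
        setup = "image 'win10-base', 3 bad sectors injected"
        actual = {"repaired": 3, "remaining": 0}    # pretend result from the tool
        assert_with_context(setup, {"repaired": 3, "remaining": 0}, actual)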

  • The objective of QA is to verify that the developers have been doing proper testing and not to do the testing yourself.

  • Devs are their own testers, and QA is the end users.

    Also, we have that easy update system, so release now, patch/update later.

  • Getting buy-in from management and the developers to actually follow the process is the hard part.
  • by CokoBWare ( 584686 ) on Monday September 28, 2015 @03:48PM (#50615235)

    Perhaps look into the following standards and consider implementing one or both of them:

    * ISO/IEC 9126 https://en.wikipedia.org/wiki/ISO/IEC_9126 [wikipedia.org]
    * ISO/IEC/IEEE 29119 https://en.wikipedia.org/wiki/ISO/IEC_29119 [wikipedia.org]

  • It must be a slow news day for /.

    Rule of thumb is to use languages/technologies/frameworks similar to the ones the developers use in their own workflow. That way they'll help you when you break stuff.

    --bm

  • QA is not just about testing. Testing is Quality Control; just one aspect of a QA framework.
  • If you're writing a big, complicated testing framework, don't forget to write tests for the tests. I'm not even joking.
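
    Not a joke at all: if the framework grows non-trivial helpers (environment builders, log parsers, custom matchers), those deserve tests too. A made-up example of a helper plus a test for the helper itself:

    # A test-framework helper ... and a test for the helper itself.
    def parse_tool_log(line: str) -> dict:
        """Turns 'KEY=VALUE;KEY=VALUE' tool output into a dict for assertions."""
        return dict(pair.split("=", 1) for pair in line.split(";") if pair)

    def test_parse_tool_log_handles_empty_and_normal_lines():
        assert parse_tool_log("") == {}
        assert parse_tool_log("STATUS=OK;CODE=0") == {"STATUS": "OK", "CODE": "0"}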

  • "in continuous development, in-house, for >10 years"

    A little late for the rodeo, aren't you?

    Now is not the time to think of this sort of thing; that time was 10 years ago.
