Testing Products for Web Applications?

hellbunnie asks: "I work with a team of developers, many of whom spend much of their time writing web-based front-ends for DBs or other applications. Now, while we enjoy programming, we're pretty lazy when it comes to testing. Even if we weren't so lazy, I think we'd still miss a number of problems, 'cos there are just so many different screens that use any particular method/function you might change. That means there's a lot to be tested after each change. So, my question is: does anyone have experience with automated systems for testing web applications?"

"I've seen a lot of automated test suites advertised and I've always assumed that they were no substitute for careful testing by a human. However, as the number of web pages that we need to maintain grows, I've begun to wish that we had something that we could kick off at night, that would follow all links on our system and fill in values for the various forms it encountered, then when we arrived in the next morning there'd be some sort of report available detailing its findings. It could flag any pages that returned something obviously incorrect, such as a SQL error, a blank page or just the word 'error'.

Does such a thing exist or am I just engaging in wishful thinking to imagine that there might be something flexible enough to do the job? What do other people do to test their software?"
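The "obviously incorrect" heuristics the asker lists are easy to sketch. Here is a minimal Python version; the specific SQL error patterns and the idea of returning a human-readable reason are illustrative assumptions, not taken from any particular tool:

```python
import re

# Heuristic patterns for "obviously incorrect" pages; illustrative only.
SQL_ERROR_PATTERNS = [
    r"ORA-\d{5}",                   # Oracle error codes
    r"SQL syntax.*MySQL",           # MySQL syntax errors
    r"unterminated quoted string",  # PostgreSQL
]

def looks_broken(html):
    """Return a short reason if the page looks broken, else None."""
    body = html.strip()
    if not body:
        return "blank page"
    for pattern in SQL_ERROR_PATTERNS:
        if re.search(pattern, body, re.IGNORECASE):
            return "SQL error leaked into page"
    if re.search(r"\berror\b", body, re.IGNORECASE):
        return "page contains the word 'error'"
    return None
```

A nightly driver would crawl the site, run each response through `looks_broken`, and write the non-None results into the morning report.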

  • mercury (Score:4, Informative)

    by psychopenguin ( 228012 ) on Friday September 13, 2002 @02:17PM (#4252467)
    Mercury Interactive - www.mercuryinteractive.com has some products that will do this. I used them for a short while and they seemed pretty good.
    • Re:mercury (Score:1, Informative)

      by Anonymous Coward
      I've used Mercury LoadRunner and WinRunner for a couple of years and they are very good. The scripts created can be manipulated with C. They also have very good customer support, with people who actually know their s***.

      Downside: Expensive (but it is all relative to the savings to be made.)
      • The "Expensive" bit is a BIG downside with Mercury. I believe the last invoice I saw from them was charging about $100 per virtual user. This is on top of the $50G or so for the core software (LoadRunner Controller, VUser Generator, a few extra monitors). And then there's a yearly maintenance charge to keep all those VUsers running in the latest and greatest versions.

        So, if you're looking to test 5,000 simultaneous users, open the pocketbook.

        If everything can be exercised via HTTP calls, give OpenSTA a try. http://www.opensta.org

        That won't cover interface changes, necessarily, but it will provide load testing.
    • Re:mercury (Score:4, Informative)

      by SpeakerOfTruth ( 524938 ) on Friday September 13, 2002 @04:07PM (#4253340)
      My company was using Mercury Interactive's WinRunner for a while, but has since switched to Seapine's QA Wizard. We found that with WinRunner it took a long time to create the scripts - which sometimes took even longer because it crashed semi-frequently. We ended up purchasing the advanced training class offered for WinRunner, but the training we received seemed only rudimentary. It also requires that your QA people have the ability to program - which I am not sure is a fair requirement. After about 12-14 months of WinRunner, we gave up on it. At the moment we are using QA Wizard. We haven't been using it for very long, but it does seem to be much easier for the QA people to create their test scripts and test the web pages.
      • Re:mercury (Score:2, Interesting)

        by teasea ( 11940 )
        It also requires that your QA people have the ability to program - which I am not sure is a fair requirement.

        This depends entirely on how important QA is to you. I see QA and development as two sides of the same coin. QA people should be accustomed to scripting. Loops, variables, arguments, procs and functions: this is coding. Everything else is just perspective. Simple black-box stuff is fine for training, but QA people need to learn more to effectively describe the deeper issues.

        Inevitably, development pushes the due date for their code, but the final date does not change. Automation is the best way to do regression tests. The human eye can then focus on new functionality.
    • Re:mercury (Score:3, Interesting)

      by SHiVa0 ( 608471 )
      I used Mercury Interactive's products for some time, two years ago, mainly for load testing our web application.

      The scripting tools are nice - recording with a browser. But the best part of the software is being able to script (*read: program*) using a "pseudo" C language.

      The libraries at your disposal are awesome. You can post random data, or data from an include file, and then compare every value received from your post.

      You can throw transaction failures and log them.

      Doing so will even let you stress test it. Let's say you build one script checking every function of your web app, then add some randomization for values, logins, passwords, etc.

      Then put 100 clients doing those things at the same time.

      The report generator is neat and easy to read.

      There are many ways to test your application, from DCOM, socket and HTTP requests.

      Check out LoadRunner from Mercury Interactive.

      This software will probably give you all you need, and maybe more at times. The learning curve is steep but worth every bit of it.
    • Too bad it's so damn expensive.

      If you have a copy of VS.NET, you can use the included ACT to test applications. You basically click on the links you want to test as it records, and then you can run the scripts, simulating a number of users for however long you like. Since it outputs to a .vbs file, you can easily modify the script and add dynamic variables, such as making a list of 1000 keywords, picking a random number and testing your search box [as one example]. Not good for stress testing, but good for finding memory leaks and generating pseudo-traffic.

      I wouldn't have mentioned this on /., but since they run ads [perhaps unknowingly, through DoubleClick] for VS.NET, I thought it would be appropriate. Not to mention, you can run ACT against any web application; it doesn't matter, because it just simulates clicking around.
    • Mercury is great. If you're Java hackers, also take a look at HttpUnit and HtmlUnit. They aren't as quick and easy to use, and they tend not to have JS support.

  • With the right Perl modules, or even perhaps in shell script with things like curl, you could roll your own HTTP/HTML regression tester. It won't handle JavaScript, of course, and it won't notice browser-dependent problems. You might be able to find a generic JavaScript/DOM library for Unix to do the JS thing from your regression tester, though.
    • The right Perl modules? Both of which I just happen to maintain.
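A roll-your-own tester along these lines needs little more than a link extractor plus a fetch-and-diff loop. Here is a minimal sketch of the link-gathering half, in Python rather than Perl; the class and function names are invented for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets so a driver script can re-fetch each one
    and diff the response against a saved golden copy."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return every <a href> target found in an HTML document."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

A nightly cron job would fetch the start page, extract its links, fetch each of those, and flag any response that differs from the stored known-good copy.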
  • by Boss, Pointy Haired ( 537010 ) on Friday September 13, 2002 @02:17PM (#4252469)
    Web Application Stress Tool (freebie from M$)

    http://webtool.rte.microsoft.com/

    • My previous comment [slashdot.org] posted to the Ask /. about Website Load Testing tools.
    • Jerk moderators modding me offtopic but why? [slashdot.org]

      It's the same information (I missed this post earlier), plus an additional link to WCAT, which is not easily found.

    • I found WAST to be of extremely limited value. I was forced to use it because my manufacturer client saw it was free and didn't feel like having me write Perl test programs, even though that is exactly what we should have used.

      We also could have used PureLoad, which I recommended: Java-based and not too expensive. (Wind River's load runner (sorry if I am off on names here) was extremely, extremely expensive.)

      The uselessness of WAST came because I was testing a Tomcat-based web proxy, and (for one example) error-message pages would simply contribute to the average bytes transferred without telling me there was an error. So it is very hard to tell what is going on unless everything is working perfectly. I used sar (Linux) to get data from the (Linux) server. We would have been much better off making scientific tests than trying to outguess such a squishy app as WAST. Get PureLoad!
  • Cactus (Score:3, Informative)

    by DevilM ( 191311 ) <devilm@@@devilm...com> on Friday September 13, 2002 @02:17PM (#4252473) Homepage
    You should check out Apache Cactus http://jakarta.apache.org/cactus/.
  • try a latka (Score:2, Informative)

    by Anonymous Coward
    http://jakarta.apache.org/commons/latka/index.html

    It's a Java/XML solution for writing automated suites of functional tests. And it's free.
  • by McCart42 ( 207315 ) on Friday September 13, 2002 @02:18PM (#4252483) Homepage
    Well you hit upon one good way, you just forgot to post the link...of course if you did you'd be more worried about your server overloading than your web frontends not working correctly...
  • Cactus (Score:4, Informative)

    by sterno ( 16320 ) on Friday September 13, 2002 @02:19PM (#4252497) Homepage
    Well if you are working in Java, I've used Cactus before with success. It's based on junit, and allows you to do unit testing on servlets/jsp's in a nicely automated way. As long as you take the time to create good test cases, it can do quite a good job.
  • by mustangdavis ( 583344 ) on Friday September 13, 2002 @02:20PM (#4252506) Homepage Journal
    I run a couple free web games ... and let me tell you ... if it has a security flaw, these people will find it! Hire a couple people that play my game! I'm sure they'll find any security flaws you may have!

    Seriously, I don't know of any software that does that, but if you find one, I'M INTERESTED!

    I don't know if you're looking for advice or not, but try putting in negative numbers or things like #(-3+1000) ... or end a SQL query in a text box and try to execute another query (or put in a subquery) ... and edit your query strings if you use GET (or build a query string and make sure that your program doesn't take a GET where it is looking for a POST) ... just a couple of basics to try. You might want to write a "validate_input" function for your forms as well.

    Hopefully that helps a little ...
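The suggested "validate_input" function can be as simple as a whitelist of per-field patterns. A sketch; the field kinds and regexes below are illustrative assumptions:

```python
import re

def validate_input(value, kind):
    """Whitelist validator: accept only values matching the field's pattern.
    The kinds and rules here are illustrative, not exhaustive."""
    rules = {
        "positive_int": r"^[0-9]+$",            # rejects "-3", "1; DROP ..."
        "username":     r"^[A-Za-z0-9_]{1,32}$",
        "plain_text":   r"^[^'\";<>]{0,200}$",  # crude: no quote/angle chars
    }
    pattern = rules.get(kind)
    if pattern is None:
        return False                # unknown field kind: reject by default
    return re.match(pattern, value) is not None
```

Whitelisting (describe what is allowed) beats blacklisting (enumerate attacks), because the attacker only has to think of one case you didn't.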

    • I don't know if you're looking for advice or not, but try putting in negative numbers or things like #(-3+1000) ... or end a SQL query in a text box and try to execute another query (or put in a sub query) ... and edit your query strings if you use GET (or build a query string and make sure that your program doesn't take a GET where it is looking for a POST) .... just a couple basics to try ... You might want to write a "validate_input" function for your forms as well ....


      Never ever, ever trust user-supplied data. <input type="hidden"> fields are user-supplied, cookies are user-supplied, etc. It shouldn't matter if they modify a GET param when you expect a POST. They can forge the POST nearly as easily as the GET.
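One way to honor that rule is to recompute anything security-relevant on the server and never consult the client-supplied copy. A sketch with invented field and catalog names:

```python
# Server-side handler sketch: ignore the client's hidden "price" field and
# recompute the total from the server's own catalog. Names are illustrative.
CATALOG = {"widget": 999, "gadget": 1250}   # prices in cents

def order_total(form):
    """Compute an order total from trusted data only."""
    item = form.get("item")
    qty = form.get("qty", "0")
    if item not in CATALOG or not qty.isdigit() or int(qty) < 1:
        raise ValueError("invalid order")
    # Note: form.get("price") is deliberately never consulted; a hidden
    # price field is exactly as forgeable as any other parameter.
    return CATALOG[item] * int(qty)
```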
  • By far the best tools out there - use Google to find them - but they cost quite a bit. All of these solutions require quite a bit of scripting and customization for full testing. For completely automated testing, well, try hiring someone. --ellis
  • Cactus or HTTPUnit (Score:5, Informative)

    by revscat ( 35618 ) on Friday September 13, 2002 @02:22PM (#4252523) Journal

    Both Cactus [apache.org] and HttpUnit [sourceforge.net] allow you to do unit tests on web components. Both are extensions of JUnit. Cactus allows you to do unit tests of servlets and JSPs, while HttpUnit allows for unit tests of the resulting HTML code. (Cactus also integrates HttpUnit to a certain degree.)

    Obviously, these tools are targeted at Java development. I have less experience with HttpUnit than with Cactus, but I imagine it could be used as a general test suite.

    • by slagdogg ( 549983 )
      IIRC, HttpUnit interacts with the site it is testing entirely at the HTTP protocol level. There is an object model for parsing elements of web pages to check values, set form elements, etc. While the tests themselves are written in Java (not to mention the tool), I believe the tool is capable of testing sites created using other tools and frameworks. I've not used it myself, but it does seem pretty capable from the docs and things I've heard from folks who have.

    • HttpUnit is good for probing web pages, parsing content, and verifying that links off them work. Cactus is more for testing the classes behind the web page, if that makes sense - a kind of RPC back door into the beans.

      HttpUnit works against any web site, not just Java; it just gets, posts, and analyses the results.

      The best book on HttpUnit is 'Java Tools for Extreme Programming'; 'Java Development with Ant' also looks at it from the context of automated build and test processes. You can see that book applied in Apache Axis, under test/httpunit.

      -steve (who co-authored Java Development with Ant)

  • Web Site Test Tools (Score:3, Informative)

    by m_ilya ( 311437 ) <ilya@martynov.org> on Friday September 13, 2002 @02:23PM (#4252528) Homepage
    Take a look at the Web Site Test Tools and Site Management Tools [softwareqatest.com] page. And of course a shameless plug: HTTP-WebTest [martynov.org]. If you check the latter, make sure to try its beta version.
  • by Anaphilius ( 146909 ) <.moc.liamg. .ta. .yenvogcm.nairb.> on Friday September 13, 2002 @02:23PM (#4252530)
    I used to work on such a system, and I did a lot of the testing for it as well. We found quite a few bugs, but those were overwhelmed by the number of interface design flaws we encountered. There were literally hundreds of unnecessary mouse-clicks per user per day in the original interface, simply because the programmers never had to use their own software for days at a time.

    So I guess my point is, make sure you don't simply rely on automated testing. A bot won't get sick of clicking unnecessary buttons, and won't develop RSI injuries. Humans will, and you'll get great feedback because of it. At my old company, the programmers were very nice about fixing these flaws once I brought them to their attention, and grateful for our input.

    Cheers,
    Anaphilius

  • Automated testing. (Score:3, Informative)

    by FortKnox ( 169099 ) on Friday September 13, 2002 @02:24PM (#4252540) Homepage Journal
    Quick lesson in automated testing.
    The only automated testing tools you can find are for regression tests. Basically, you make "build 1". You use the tool to 'record' the tests you currently run, and have it check for successes and failures. You make "build 2" and run the tests to ensure everything that once worked still works. Now you test the new stuff, record those tests with the tool, make "build 3", etc...

    There are three major companies with good automated regression tools. Mercury Interactive's WinRunner, Rational's Robot, and Compuware's QA Center. All of them are great tools (and you can get them packaged with load testing tools if you'd like).
      The only automated testing tools you can find are for regression tests.

      Excluding, of course, things like HttpUnit [sourceforge.net], where you write code that drives a simulated browser and then check the results.

      I've used it for automated testing of a couple of sites, and I like it plenty. Between HttpUnit, JUnit, and Test-Driven Development [objectmentor.com], we launched a complicated web site and have had it in production for six months with a total of one user-reported bug. And that bug was when the graphic designer broke a link.
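The record-then-replay cycle described in this thread reduces to storing known-good responses and diffing later runs against them. A toy harness, with the HTTP client abstracted as a plain callable so the sketch stays self-contained:

```python
class RegressionSuite:
    """Tiny record/replay harness in the spirit described above.
    `fetch` is any callable mapping a URL to response text, e.g. a
    urllib wrapper in real use; here it is injected for testability."""

    def __init__(self, fetch):
        self.fetch = fetch
        self.recorded = {}

    def record(self, urls):
        """Capture the current (assumed-good) response for each URL."""
        for url in urls:
            self.recorded[url] = self.fetch(url)

    def replay(self):
        """Return the URLs whose responses changed since recording."""
        return [url for url, old in self.recorded.items()
                if self.fetch(url) != old]
```

Real tools add fuzzy matching (timestamps, session IDs) so that only meaningful changes are reported; an exact diff like this one is the degenerate case.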
  • Several come to mind - Test Perspective, LoadPro, ActiveTest, etc. You can also buy your own software to do this, or write something in a scripting language.

    I'm most familiar with LoadPro and Test Perspective..and of course scripting it.

    With Test Perspective, you can record the way the web app works, then have them play it back for you with lots of variations, with however many users you want.

    LoadPro (http://www.keynote.com/solutions/html/keyreadiness_works.html) has the ability to randomly fill in forms with lists of data you give it. It will figure out which form it is, select the right list of data, submit the form and go on to the next one. It can validate that the form returns the correct data, too.

    Scripting it yourself is pretty easy too, but you want to make sure you use something that does HTTP 1.1 (Perl's LWP doesn't), and you want to model your users accurately.

    As for purchasing a tool, there are also Segue's SilkPerformer and SilkTest, both traditional functionality-testing tools.

    Donald E. Foss
  • by Christianfreak ( 100697 ) on Friday September 13, 2002 @02:26PM (#4252561) Homepage Journal
    Just post the link to your website on /., if it doesn't crash from the load then it's probably pretty good. Hey maybe Taco should look into this! He could start offering it as a service :)
  • HttpUnit [sourceforge.net] Is a component that you can use with (or without) the jUnit [junit.org] framework to test your sites. It's basically a library that simulates all of the browser features, and you can automate it to load up web pages and compare the result to some base case.

    I've never used it, but it seems like it would probably be pretty helpful.

    -NiS

  • OMG (Score:1, Troll)

    by geekoid ( 135745 )
    If you can't figure this out, please stop programming.

    Man, this is basic stuff anybody with 2 years experience should be able to handle.

    This is supposed to be part of the design, coding, and implementation. Maybe I should do an Ask Slashdot:
    "Dear Slashdot readers, I've been programming for a long time, and now I have to write an accounting package. Can anybody show me how?"

    sheesh.
  • One of many... (Score:2, Informative)

    by voxlator ( 531625 )
    There are many, and as others have rightly posted, finding them on Google is easier than posting to /.

    SilkTest [segue.com] from Segue is good at both scripted testing & stress testing.

    --#voxlator
    • Great...now we get the

      Segue vs. Mercury vs. Rational vs. Radview debates!

      As if emacs vs. vi and KDE vs. GNOME weren't enough.

      Personally most of our testers at the lab prefer the Rational stuff but we use whatever the customer has purchased licenses for.

  • by consumer ( 9588 ) on Friday September 13, 2002 @02:27PM (#4252571)
    My experience with commercial load-testing apps is that they are outrageously expensive, a pain to program, don't really scale all that well, and mostly have to run on Windows with someone sitting at the mouse. Some work better than others, but the free stuff in this area is quite good.

    I recommend httperf and http_load for banging on lists of URLs really hard. At one place I worked, one of our developers rigged up some shell scripts that would play back log files through httperf, and that worked pretty well.

    If you want to record browser sessions for testing specific paths through the site, look at http-recorder [sourceforge.net] or roboweb [sourceforge.net]. There's also webchatpp [cpan.org], HTTP::WebTest [cpan.org], and HTTP::MonkeyWrench [cpan.org] on CPAN. More info on this can be found on the mod_perl mailing list [apache.org] or on PerlMonks [perlmonks.org].

    • (* and mostly have to run on Windows *)

      What if it relies on Windows-specific IE features? (Such as for intranets in an MS shop.)

      Any non-MS product would have to mirror IE bug-for-bug to test right.

      The quick answer is don't rely on proprietary stuff, but often the tester/developer does not make that call.
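The log-playback trick mentioned above (shell scripts feeding httperf) starts with pulling replayable requests out of the access log. A sketch for Apache's common log format; only GETs are kept, since replaying POSTs would mutate state:

```python
import re

# Match the request portion of an Apache common-log-format line,
# e.g. "GET /index.html HTTP/1.0". Illustrative sketch.
LOG_LINE = re.compile(r'"(GET|POST) (\S+) HTTP/[\d.]+"')

def urls_from_log(lines):
    """Extract the URLs of GET requests, suitable for load replay."""
    urls = []
    for line in lines:
        match = LOG_LINE.search(line)
        if match and match.group(1) == "GET":   # only GETs are safe to replay
            urls.append(match.group(2))
    return urls
```

The resulting URL list can be written to a file and handed to httperf or http_load, giving a load test whose traffic mix matches real production usage.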
  • Get a good QA person (Score:4, Informative)

    by gosand ( 234100 ) on Friday September 13, 2002 @02:29PM (#4252583)
    If you are a developer, do what you do best. If you want a tester, go out and find a good one. They are worth their weight in gold.

    OK, maybe I am a little biased, as I have been in QA for 8 years. :-) But my comments still stand.

    That said, we are currently using Rational's products to test our application, which includes a web piece. Hint: don't use JavaScript if you plan on using Rational. They have SiteLoad, which I believe is free, but rest assured the rest of their products are NOT. Their licensing scheme is nothing short of trying to balance the budget of a small country. If you want to implement their products in a big project, to handle requirements (RequisitePro), bugs (ClearQuest) and test plans (Test Manager), then prepare yourself for headaches. If you just want Rational Robot to record/playback user actions for testing, it is pretty solid. Rational purchased the different components of their system, so they aren't the smoothest to integrate. I have spent many hours with their phone support people.

    I have also worked with Mercury and SilkTest, but to a lesser degree.

    Oh, and if you are constantly changing critical code, you need to worry more about your development practices and not your testing.

    • by sohp ( 22984 )
      if you are constantly changing critical code, you need to worry more about your development practices and not your testing.

      Not true in many development shops. With short iterations, refactoring, rigorous unit testing, collective code ownership, and continuous integration, code can be constantly changing but stable. Take for example the Mozilla Tinderbox [mozilla.org]. Development proceeds on many components, and the builds and tests are run continuously. There are daily build smoketests (download a daily build and you'll see the smoketest menu item), and sometimes things are broken for an hour or a day, but overall things just get better.

      Embrace Change.
      • Not true in many development shops. With short iterations, refactoring, rigorous unit testing, collective code ownership, and continuous integration, code can be constantly changing but stable. Take for example the Mozilla Tinderbox [mozilla.org]. Development proceeds on many components and the builds and tests are run continuously.

        I have to really wonder how efficient this is in the long run. Sure, I understand that this *can* work in some instances, but it won't in all. The prototype/spin cycle approach isn't the right one for every project. In this case, tests are reactionary. How on earth are you advancing your testing if the code is constantly changing (especially if the UI changes)? If that is the case, forget system test automation, it won't work. You have to have a reasonably stable, unchanging base in order to automate testing or you will spend all your time re-automating it. The entire purpose of automating your testing is to *save* time in the long run. In this model, there is no long run, everything is done in the short term.

        Embrace Change.

        I do embrace change, but not simply for the sake of changing. I have to have a good reason to change.

        • How on earth are you advancing your testing if the code is constantly changing (especially if the UI changes)? If that is the case, forget system test automation, it won't work. You have to have a reasonably stable, unchanging base in order to automate testing or you will spend all your time re-automating it.

          If all your test automation is top-level, yes. For unit testing, however, the UI is irrelevant to tests targeted at lower-level systems. Even better, if you use XP methodologies, developers are obligated to update the tests before updating the code - so you don't have issues with the test code forever racing to keep up.

          Yes, you need to be able to do some QA on the top-level interface - but if the lower levels are stable, the UI itself is much less of a problem.
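A below-the-UI test of the kind described needs no browser at all. A sketch, where shipping_cost stands in for any lower-level business function that some page merely displays (the function and its rates are invented for illustration):

```python
# shipping_cost is a hypothetical lower-level function behind a web page;
# tests against it survive any redesign of the page that displays it.
def shipping_cost(weight_kg):
    """Flat 500 (cents) under 1 kg, then 100 per whole kg on top."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 500 if weight_kg < 1 else 500 + int(weight_kg) * 100

def test_shipping_cost():
    """Unit tests pinned to the business rule, not the HTML around it."""
    assert shipping_cost(0.5) == 500
    assert shipping_cost(3) == 800
    try:
        shipping_cost(0)
        assert False, "zero weight should be rejected"
    except ValueError:
        pass
```

When the UI changes, only the thin top-level tests need re-recording; this layer keeps running untouched.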
  • Contract-based programming and all its relatives.
    This in fact enables you to dismiss all tests.
    Remember: the key to successful programming is not to find all errors, but not to make them in the first place.
  • Java Tools (Score:4, Informative)

    by GOD_ALMIGHTY ( 17678 ) <curt DOT johnson AT gmail DOT com> on Friday September 13, 2002 @02:31PM (#4252604) Homepage
    A couple of people have already mentioned the Jakarta project and cactus in particular.
    I'd highly recommend picking the book:
    Java Tools for eXtreme Programming [slashdot.org]

    This is a great reference for all of the tools being mentioned, and it shows you how to integrate them into the development cycle if you're using Java. You should still be able to write the functional tests if your app is not written in Java.

    As an aside, if you're not developing these apps in Java, you really should look at using Tomcat, XDoclet and Struts for simple DB frontends, and then move to EJBs with JBoss, Jetty or Tomcat, Struts and XDoclet. If you're lazy and don't want to write a lot of code, you'll love these tools. Reuse is high in Java, and code-generation tools like XDoclet take away most of the pain of using frameworks like EJB and Struts. Besides, JSP taglibs allow me to have good-looking pages made pretty by people who care about the differences between browsers for CSS, DHTML and whatnot.

    Good Luck.

  • Come on guys, this is no way to help a lady. :) Quit being kings of your particular castles and lend a helping hand.
  • For a limited time only, the newest Robust Web Application Testing Program is available to you free of charge*. For this week only you can have your very own Slashdot Web Application Analyzer!

    To use this miracle of modern computing, simply submit a story link to your Web Application and the webmaster's e-mail at the bottom of the page! Not only will you be able to test your server bandwidth, but every know-it-all Slashdot Web Guru(tm) will e-mail you with exactly why your Application is not worth the electrons it's stored on!

    For an added bonus, have your site flame one of the following groups for extremely extensive testing: any government, Adobe, Microsoft, Intel, Creative Labs, CowboyNeal.

    Call Now!
    Operators are standing by!
  • Just thought I'd note the obvious...

    By reviewing how you work with the actual code, you can avoid making a lot of the bugs in the first place. When building solutions where more than one module/frontend depends on various backend functions, I find that I usually avoid most problems with the API changing if I simply carefully map out what's needed of the API and decide how I want to access it, once and for all. Once that's decided, you can change the code as much as you want, as long as you leave the actual API alone.

    One of the things made easier by OOP for example :)

    I know I might be pointing out the obvious here, but experience has shown me that thinking about how you design your actual development cycle is a topic which is too often overlooked, with painful results.
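The "decide the API once and for all" idea is easiest to see with a small interface class. The names below are invented for illustration:

```python
# Callers depend only on the agreed-upon interface, so the backend can be
# rewritten freely without retesting every screen that uses it.
class UserStore:
    """The API decided on 'once and for all'."""
    def get_email(self, user_id):
        raise NotImplementedError

class DictUserStore(UserStore):
    """First implementation, backed by an in-memory dict. It could later be
    swapped for a real DB backend without touching any frontend that codes
    against UserStore."""
    def __init__(self, rows):
        self._rows = rows

    def get_email(self, user_id):
        return self._rows[user_id]["email"]
```

Every frontend screen takes a `UserStore`, so a change from dict to database is invisible above this line, and the regression surface shrinks to one class.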
  • Silk (Score:2, Informative)

    by thvv ( 92519 )
    Segue (http://www.segue.com/) makes a set of commercial tools that support extensive script-driven testing of web applications. SilkTest is the testing tool.

    At my previous startup, we bought and used these tools and developed extensive test libraries for our product.

    There are also companies that will test your product for usability on many different platforms. Look at http://www.otivo.com/ for one such.

  • It's fine for a simple application, sure. But the more complicated the app is, or if it is tailored to a customer's eBusiness requirements (for instance), you will never beat a set of human eyes aggressively finding the bugs. It can take as long to write the script or set up the robot as it does to just do the testing by hand.

    *Commercial endorsement is not intended.
  • Rather than just searching for "+automated +test +tool" and picking what looks good, download and install them ALL and see what works for you.

    Some words of advice if you care to follow them.

    First off, ignore anything with the words "stress" or "performance" in the titles or descriptions. They are not the tools you want; they are focused primarily on simulating multiple clients rather than simulating users.

    Second off, separate the kinds of testing you want to do. Simple form-validation requirements will most likely mean you can get away with a tool that bypasses the browser interface (typically a unit-testing tool). More complicated user simulation should be done by a tool that actually drives the browser, such as SilkTest or Rational's.

    Finally - hire a dedicated resource just for this purpose: a QA Engineer with experience in automated testing. REAL experience, not just record-and-playback experience. (My resume is available on demand.)

  • You might look up my friends over at F-Test [f-test.com]. By focusing only on functionality testing, they're able to do it more efficiently than almost anyone. They can do it more thoroughly and cheaply than most companies can do themselves, even small shops like ours (and probably yours). They've done great work with our stuff, as well as for big corporate clients like Sony. Nothing beats a team of well-trained, experienced testers banging away at keyboards, but there aren't many people around focusing on just that. Look 'em up. They're in Los Angeles.
  • Check out WWW::Automate [uwinnipeg.ca] if you need to code a web application test in perl.
  • by Anonymous Coward
    So, do you charge more per hour for your development time, or for your whoring time?
  • I have a Java application I wrote using open-source technology, such as HTTPClient, PerlTools, and the JavaMail API. It can be used for load testing as well. You can download it here: http://www2.netdoor.com/~delonad/WebTestSuite_1.02.zip It uses an XML config file, in which you specify notification addresses, URLs, and regular expressions... variables can be assigned when regular expressions match as well. Basically, the program runs through the test cases, and if something doesn't match, an email notification is sent with the HTML received as an attachment. This application is most useful for regression testing, in my opinion.
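The config-driven idea described here (URLs paired with regular expressions, failures reported) can be sketched as follows; the XML element names are assumptions for illustration, not the linked tool's actual schema:

```python
import re
import xml.etree.ElementTree as ET

# Each <case> pairs a URL with a regex its response body must match.
# This schema is invented for the sketch.
CONFIG = """
<suite>
  <case url="/login" expect="Welcome"/>
  <case url="/search" expect="[0-9]+ results"/>
</suite>
"""

def run_cases(config_xml, fetch):
    """Run every case; return (url, body) pairs whose regex did not match.
    `fetch` is any callable mapping a URL to response text."""
    failures = []
    for case in ET.fromstring(config_xml).findall("case"):
        body = fetch(case.get("url"))
        if not re.search(case.get("expect"), body):
            failures.append((case.get("url"), body))
    return failures
```

In the real tool the failure list would become the email notification, with the offending HTML attached.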
  • have them "use it like it'll get used" for a while. Note all the problems, fix them. From there, you can create use cases that'll let sobody else do the testing.

    It's CRITICAL, IMHO, that the people requesting the application get directly involved with how the front ends should work. If they don't, you're just asking for UI rework pain.
  • by maraist ( 68387 )
    Recently we started using maxq [bitmechanic.com].
    It has two modes. One is a proxy server which you point your browser at; it records the post/get arguments you used for the page into a Jython file. The backend then uses HttpUnit to fire off the pages.

    It isn't a complete solution. I had to create a Perl filter to work with MIME multipart data, or indeed any form data that has carriage returns.

    But since it's a simple mixture of Python and Java, it was relatively easy to apply statistics to the processes and search for all sorts of possible error types.

    The problem with simple non-human web crawlers is twofold. First, there are pages that require valid form data. Second, a "nightly sanity test" is going to be operating on production data, so you'll need to carefully manage that data.
  • Mercury Interactive makes tools that will do this. We have used LoadRunner and WinRunner to test our SAP environment (including the ITS web environment), and it seems to work well. The trouble is that it is very expensive.
  • It isn't bad programming. It isn't lazy programmers. It is called REAL LIFE in a REAL PRODUCTION EVIRONMENT. If there is a programmer today who has followed book instructed design methods to the letter then they are not programmers. They are college professors or working for some governmental instituion.

    One advantage of web applications is that changes can be implemented QUICKLY and CHEAPLY. If I use an include to build drop-down boxes and I change the core include, then EVERYWHERE I included that code I have to test to make sure it won't screw anything up. In the real world you're not able to test everywhere when they need the change NOW.

    As well, the errors might not be 'true errors' in programming, but simply make workflow harder or impossible with the new changes.

    If you have a set of screens designed to do A, individuals start using the set of screens to do B through F, and then the set of screens is modified to do A+1, but that stops B from being done through that set of screens while C through F are severely hampered. THAT happens not just with web applications but with all applications. THAT is still considered an error by the end users.

    There is no true replacement for humans in the testing of an application. Data and workflow issues take humans to determine the problem.
  • by jea6 ( 117959 ) on Friday September 13, 2002 @02:45PM (#4252688)
    Your testing priority shouldn't necessarily rely on automation, because what you are bound to find is that the application works perfectly, as long as it's following the script you've programmed. When somebody on my team brings me code/functionality to review, the first thing I try to do is "do the wrong thing" (e.g. letters in a field meant to be interpreted as numeric). Thorough testing requires unbridled human ingenuity.

    Frankly, what you probably need are consistent programming methods (because your front-ends are probably being written by liberal arts majors who taught themselves --insert language here--), thorough error handling, documentation, a consistent testing methodology, and much more upfront requirements analysis.

    This stuff ain't cheap and you need to factor it into your pricing. I'd say that 10% to 20% of your budget should be QA and testing and you should insist that the budget be used for that. Too often QA time is used for actual development, leaving no QA.
    • Of course, any decent regression test suite ensures the negative cases fail as often as the positive cases pass.

      You can't test simply one subset of the API.
    • Your testing priority shouldn't necessarily rely on automation, because what you are bound to find is that the application works perfectly, as long as it's following the script you've programmed. When somebody on my team brings me code/functionality to review, the first thing I try to do is "do the wrong thing" (e.g. letters in a field meant to be interpreted as numeric). Thorough testing requires unbridled human ingenuity.

      There is absolutely nothing that'll find a bug as well as a good QA person who thinks "how can I break this?" However, that QA person should have recorded the sequence of events that breaks the code for two reasons...

      1. Reproducing the problem to show the programmer.
      2. Regression testing to test that bugfix+1000 doesn't re-introduce the bug.
      To me, reason 2 is why automated tools are valuable. Give me a QA person who can break my code in novel ways and knows how to run regressions and I'll crank solid code 3 times faster. Productivity and code quality around here dove when they laid off our single QA person.
  • Well, if it were just load-based, there are hundreds of programs that will automate and simulate to your heart's desire. That being said, I believe the question was geared more toward how to test that when I hit the submit button, everything works like it should.

    The best thing to do is to ensure your testers are familiar enough with the back end and the transaction processes to be able to run cross-checks on the database, to ensure everything is working as it should. Common things like missing WHERE clauses on deletes, or IN statements like 'a,b,c' rather than 'a','b','c'. Just simple things that automated tools could never catch. The bad part is that things like this take time and bodies. At least where I am sitting, there are not nearly as many of those around these days :) Everyone wants to claim QA in place, ISO whatever in place, etc. The reality is, those were the first things to go.
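    That IN-statement mistake is easy to demonstrate, and bound parameters sidestep it entirely. A small sketch (sqlite3 chosen purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("a",), ("b",), ("c",)])

names = ["a", "b", "c"]

# Buggy: joining the list inside one pair of quotes yields IN ('a,b,c'),
# a single string that matches nothing.
buggy = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name IN ('%s')" % ",".join(names)
).fetchone()[0]

# Correct: one placeholder per value, values bound as parameters.
placeholders = ",".join("?" * len(names))
correct = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name IN (%s)" % placeholders,
    names,
).fetchone()[0]
# buggy == 0, correct == 3
```

    A human tester who knows the schema spots the silent zero-row result; a crawler that only looks at HTTP status codes never will.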
  • If you design your code in the right way, testing is a straightforward process. I design my web applications using an object for each task the site needs to do. Then I can just write a test function that will run that object through a typical scenario to make sure everything works that should work and nothing works that shouldn't (security tests). This is a fairly reasonable way to check for obvious problems with your site and is good to make sure you don't shoot yourself in the foot, but you still need to test everything by hand now and then too. Just don't treat your web applications any differently than any other application, and you can test using the same methods any programmer uses.
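    A skeletal sketch of that pattern (all names hypothetical; the "database" is just a set for the demo):

```python
class DeleteUserTask:
    """One object per site task; trivially drivable from a test script."""

    def __init__(self, db, current_user):
        self.db = db                    # a set of usernames, for the demo
        self.current_user = current_user

    def run(self, target):
        # Security check first: only admins may delete accounts.
        if self.current_user.get("role") != "admin":
            raise PermissionError("admins only")
        self.db.discard(target)

def test_delete_user_task():
    db = {"alice", "bob"}
    # Everything that should work, works...
    DeleteUserTask(db, {"role": "admin"}).run("bob")
    assert db == {"alice"}
    # ...and nothing that shouldn't work, does (the security test).
    try:
        DeleteUserTask(db, {"role": "user"}).run("alice")
        assert False, "security check missing"
    except PermissionError:
        pass
    assert "alice" in db
```

    One test function per task object, run after every change, catches the obvious regressions before a human ever sees the page.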
  • I hope you guys don't slaughter me for saying that Microsoft did a decent job, but check out:
    WAST [microsoft.com]

    and

    WCAT [microsoft.com]

    They both seem to work really well and are freely available if you agree to the license. It's been a while since I've used them but I think they'll work fine with testing an apache or any other web server.
  • by codeking24 ( 608446 ) on Friday September 13, 2002 @03:11PM (#4252891)
    Our company, Sentiat Technologies, Inc., has developed a better solution than having several humans banging on keyboards or using very expensive products from Mercury or Tonic. We have a new product that automates the testing process for web-based applications called XMSGuardian [sentiat.com].

    XMSGuardian's feature list includes:
    • Crawl your site testing every component on every page
    • Give you accurate metrics related to performance and errors
    • Show you the related impact of error conditions
    • Auto-complete forms dynamically to test server side functionality
    • Execute pre-recorded paths through your application.
    • Tons more...
    I would invite anyone who is in need of quality, relative test results for your web applications to look into XMSGuardian at http://www.sentiat.com/ [sentiat.com].
    • To quote your system requirements:

      "The XMSGuardian(TM) Console requires Microsoft Internet Explorer 5.0 or higher running on Windows 95/98/NT, 2000 or XP.....

      Pricing and Availability:
      XMSGuardian(TM) is now available as a monthly subscription. Pricing begins at $1,995 per month for a single URL...."

      And not a downloadable demo in sight. Buh-bye.

  • Sigh.. (Score:3, Insightful)

    by Inoshiro ( 71693 ) on Friday September 13, 2002 @03:18PM (#4252930) Homepage
    Your entire question, well, sucks. If you think you can test at the end of a product cycle, you're smoking the kind of crack cocaine that leads to things like this [somethingawful.com].

    When you write a function for your program, you need to write a test unit that lives in the debug project. How it works is that you write some tests in which you take an input, perform the operation, and compare the output against a constant answer. Have one of these for each case the unit handles. That way, you can always compile the test unit and examine its output versus the constant known-good value. That's good software engineering practice.
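    Concretely, such a test unit might look like this (Python shown; `parse_price` is a made-up function standing in for one of yours):

```python
def parse_price(text):
    """Toy function under test: turn '$1,234.50' into 1234.5."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price():
    # One input -> perform the operation -> compare the output
    # against a constant known-good answer, one line per case.
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("$0.99") == 0.99
    assert parse_price("12") == 12.0
```

    Compile (or run) the test unit with every build and the constants do the QAing for you at each step.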

    What you're asking, well, is a joke. Nothing's going to save your project if you've been just adding functionality without QAing at each step to verify correctness.

    hellbunnie [hellbunnie.org] asks "I work with a team of developers who spend most of their time adding functionality to code. While we enjoy just cramming more code onto a source tree, we really never test anything. But even if we tested it, I think we'd miss a lot of bugs because we have no design policy. It's a lot to be tested, and it's all interrelated! So my question is, does anyone have a quick and easy solution that will save us from rewriting things with a proper design?"

    "I've read a lot of freshmeat listings for testing, but I've always assumed that they were merely 'Hello, World' programs because nothing beats real testing by real humans. However, as the amount of code grows, I've begun to wish that we wrote a carefully designed set of unit tests as we added functionality, rather than trying to magically make it all work 2 weeks before our shipping deadline. I'm hoping we have some magic QA program which will do everything for us, except actually fix our squirrely code.

    Does such a thing exist, or should I start updating my resume? How fucked am I?
    "
    • Your entire question, well, sucks. If you think you can test at the end of a product cycle, you're smoking the kind of crack cocain[e] that leads to things like this.

      Amen!

      If you find a bug at the end of a development cycle, you have months of changes to rummage through and try to find the problem. This sucks; you'll never get them all out; you just get the biggest ones and then you ship.

      The right way is to write the tests first, before you write the code. Going back and retrofitting good tests will take time and careful thought, two commodities in short supply in the pre-ship rush.
  • Last year I worked at a startup that was writing an instant messaging app consisting of a bunch of web pages and JSPs. They created their own automated testing application in C++. I didn't examine the code closely, but it was essentially a screen scraper that navigated through the pages using a WebBrowser control and manipulated the DOM programmatically: entering text into input boxes, clicking checkboxes and buttons, checking results against expected results, and writing out a log file. The test sequences were stored in a database, which a full-time person updated as the app changed.
  • QA Wizard (Score:4, Informative)

    by igiveup ( 267632 ) on Friday September 13, 2002 @03:58PM (#4253273)
    Seapine Software produces a product called QA Wizard that is a fully scriptable testing tool for web applications using Internet Explorer. Netscape/Java support is coming soon. A Windows application testing tool should be available by the end of the year, as well as a load testing tool.
  • The nearly exact same question was asked a while back [slashdot.org], which turned up many excellent suggestions.

    Since that article was posted, I was asked by my company to do some load and scalability testing, and I've had great success with OpenSTA [opensta.org]. Give it a chance. It's awkward at first, but once you get a feel for the HTTP/S (HTTP scripting) language, you can do some very complicated scripting with it.

    For example, I wrote a script which interacts with one of our web products and navigates through several pages, submitting queries, retrieving 'wait' pages, and continuing on when the results are ready. Can't do that with wget... heh. And it gives excellent feedback on timing and can remotely monitor CPU and memory usage.

    As far as I know it is only available on Windows, though it is open source.

  • Plenty of test tools exist to automate testing of a Web application. I really like the idea of having an automated test system that would tell you in the morning what it found wrong on your site during a nightly check. I have built several such systems and they provide a big benefit back to the company in decreased down-time and improved user satisfaction. You will find details on how these test automation systems work in my upcoming book Testing Web Services. Try http://www.pushtotest.com/ptt/thebook.html to download the chapters. It's free and I would appreciate your feedback.

    I would advise you to not take a decision to implement an automated test system lightly. Your decision commits your business to maintain the system and that can be expensive and complicated. All of the commercial test tools require an engineer to instrument all of the Web pages to be tested. They give you GUI tools to click through a Web site and the tool writes a test script that the test system can run. Eventually you wind up with a library of test scripts that need to be kept up-to-date as the Web site changes.

    Additionally, these tools are reading Web pages to build scripts. One of HTML's shortcomings is that it mixes presentation data (font sizes, paragraph locations, etc.) with the actual content. HTML is very loosely formatted so test tools often fail to automate the script-writing process.

    I've been building and testing complex interoperable systems for the past 15 years. In my experience the best way to build an automated test system is to give your software developers a test tool that lets them build tests while they are coding. The same tests may then be brought out of the developer's lab and used to check the service in production for scalability, performance and functionality.

    One other thing to point out: there is little difference in functionality between the commercial test tools (which cost $20,000 to $50,000) and the free open-source test tools. I recommend you look at my open-source TestMaker project (http://www.pushtotest.com/ptt) and JMeter (http://www.apache.org.)

    TestMaker comes with a graphic environment, script language, library of test objects (TOOL), sample test agents and a LOT of documentation. Plus my company PushToTest is the "go to" company for enterprises that need to test systems in Web environments. We're here to add functions needed by our customers, to run tests and to train your team in how to use the tool for their own needs.

    Hope this helps. Feel free to drop me a line (fcohen@pushtotest.com) if you need additional help.

    -Frank
  • Who is this company? I've never been given testing time and/or a budget...
  • I use a home-grown browser object for testing my web apps. The object is designed to work like a web browser: you can load a page, go back in the history, check for the existence of certain objects on the page, "click" on links, fill out forms and "click" on submit buttons.

    Whenever I write a web app I begin by creating a script just for that app that uses the browser object. As I add features, I add routines to the script that check that the features work. When I change anything, all I have to do is run the test script.

    I don't have the Browser object on CPAN yet, but if you email me at miko at idocs dot com, I'll be happy to send you the package. Put WWW::Browser in the subject line.

    :-)
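    The interface of such a browser object looks roughly like this (a toy in-memory sketch in Python; the real module would do actual HTTP and HTML parsing):

```python
class FakeBrowser:
    """Minimal browser-shaped object for driving page tests.

    Pages here are plain dicts; a real version would fetch over HTTP.
    """

    def __init__(self, site):
        self.site = site      # url -> {"links": {...}, "form": {...}}
        self.history = []
        self.page = None

    def get(self, url):
        self.history.append(url)
        self.page = self.site[url]

    def back(self):
        self.history.pop()
        self.page = self.site[self.history[-1]]

    def click(self, link_name):
        self.get(self.page["links"][link_name])

    def submit(self, form_values):
        # Submitting "navigates" to the form's action page.
        self.get(self.page["form"]["action"])

site = {
    "/":      {"links": {"login": "/login"}, "form": {}},
    "/login": {"links": {}, "form": {"action": "/home"}},
    "/home":  {"links": {}, "form": {}},
}
b = FakeBrowser(site)
b.get("/")
b.click("login")
b.submit({"user": "miko", "pass": "secret"})
assert b.history == ["/", "/login", "/home"]
```

    The per-app test script is then just a sequence of get/click/submit calls with assertions about what ends up on the page.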

  • Automated testing is a tricky thing. At the onset, it sounds great. But in reality, there's a lot of work that goes into it. For some projects, automated testing is the "right thing," but for the majority of projects, it is not.

    **Writing and maintaining automated test scripts takes a lot of time.** Someone else posted a metric of 10-1, which I believe is quite fair. You really need to treat those scripts as their own mini-development project. You need to map out scenarios for each script and what goal each should accomplish. Then there's coding (yes, even for those record/playback tools... you need to spend quite a bit of time tweaking them). And testing. Testing test scripts? Absolutely. If your test scripts are wrong, you could end up masking real bugs and creating false confidence.

    Now the questions you need to ask yourself along these lines are: What is the life expectancy of my application? How often do I release new code to production? The relevance of these two questions comes down to a cost/benefit ratio. If I'm going to spend x amount of man-weeks (yes, weeks) to create an automated test suite, am I going to get the cost savings back when I know v2.0 is 8 months away? Maybe. What if I only do two releases in those 8 months? Most likely not. (If you're releasing code to a production system on a per-fix basis... well, that's another Slashdot topic.)

    In lieu of automated testing, I do have a few suggestions for improving testing.

    1) incorporate "impact analysis" as part of your design/code reviews. If someone is planning on touching function y in module x, your architect / tech lead / rest of developers should be able to identify what other areas are going to be affected. When it comes time to test, you know exactly what areas you need to really focus on and which areas can do with a spot check.

    2) come up with a sensible schedule for bundling multiple code fixes into incremental releases. Every time you touch production, there's an inherent testing overhead. Bundle multiple fixes together and that overhead is better distributed.

    3) hire dedicated testers. Having someone full time on QA (or part time, split across multiple projects) does wonders. The good ones bring both a great deal of experience for finding "common errors" as well as a fresh perspective to the table to see things that the developers overlook because they're too deep in the trenches. Now of course, dedicated testers may not fit into the budget. Even if you can afford them, developers should always be on the hook for testing. Which brings me to my next point...

    4) tell your developers that they had better learn to test, or fire them. Sounds harsh, but testing's part of the game. I don't want anyone who doesn't understand the value of testing -- and isn't willing to put in the effort to test -- on my team.

    my 2 cents and then some...
    • If I'm going to spend x amount of man-weeks (yes, weeks) to create an automated test suite, am I going to get the cost savings back when I know v2.0 is 8 months away?

      About 30% of my time and 50% of my code goes into automated testing. I write unit tests, integration tests, and end-to-end functional tests.

      So you're right that testing takes time. But the payback is immense. If I write the tests as I go, I spend almost no time debugging. I have almost no bug reports. And when v2.0 comes, I don't throw out the old source code; I get to use it all again, as I know it's solid, and thanks to the test suite, I can change it radically without fear of breaking anything.

      I don't want anyone who doesn't understand the value of testing -- and isn't willing to put in the effort to test -- on my team.

      I'd agree with that, but manual testing takes a lot of time. Much better to spend that time on automating the tests. People are bad at doing boring, repetitive things; computers are good at it. Teach them how and let the developers focus on developing!
  • As featured on IBM's devWorks site... Puffin Automation Framework [puffinhome.org]

    What is it? See for yourself. :) [puffinhome.org]

    As its name implies, Puffin allows you to automate "actions." An action, in terms of Puffin, is any "high level" execution item that may require inputs, may produce outputs, and whose results may be validated for success or failure. For example, an action may involve making an HTTP request to a dynamic web page. It may involve grabbing a file's contents or even retrieving a specific email based on a keyword in the subject line. All of these are actions in the sense that they can be automated by Puffin. Puffin will manage all inputs, outputs, and validation for these actions.
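    That inputs/outputs/validation abstraction can be sketched generically like so (my own illustration, not Puffin's actual API):

```python
class Action:
    """Generic automatable action: inputs in, outputs out, then validate."""

    def __init__(self, run, validate):
        self._run = run
        self._validate = validate

    def execute(self, **inputs):
        output = self._run(**inputs)
        return self._validate(output), output

# A pretend "HTTP request" action: the run step fakes a page fetch and
# the validate step checks the body for a keyword.
fetch_greeting = Action(
    run=lambda name: "<html>Hello, %s!</html>" % name,
    validate=lambda body: "Hello" in body,
)
ok, body = fetch_greeting.execute(name="world")
# ok == True, body == "<html>Hello, world!</html>"
```

    Swap the lambdas for a real HTTP fetch, a file read, or a mailbox search, and the surrounding test runner never has to change.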

  • I know you aren't asking about load testing tools but I think that it's a crime to not mention the best open source web testing tool out there:

    OpenSTA

    OpenSTA is primarily designed to be a pluggable test rig with a lot of plugins designed for stress testing. It has served us very well, and with a bit of scripting it can be adapted to do functional regression tests too.

    I urge everyone to give OpenSTA a try especially if you're after a load testing solution. It's just a tool that's really powerful and well respected in the industry. And the best part is that it's Free as in OpenSource :).

  • My experience (Score:3, Informative)

    by bluGill ( 862 ) on Friday September 13, 2002 @07:11PM (#4254504)

    I've used a few. I strongly recommend you invest in one. However, you need to beware of the limitations of these tools. They only test what you tell them to test, to make sure it works the same as last time. You will have trouble with dynamic data (even dates; the tool can be told to ignore things, but then it is ignoring data, so make sure it is ignoring the right thing).

    These tools do NOT substitute for first-time testing. You will still need a QA person to examine all known changes and verify that they work right, and then tell the tool how to test for the new change.

    It is a daily job (often full-time) to update the tool. In fact, you should not let the tool guy go on vacation until he has a replacement (or several) who will do the job while he is away. In little time, enough changes accumulate that by the time you catch up you are often better off starting over from scratch. Do not let your updates slide, no matter what, or you will regret it.

    The tool is not a substitute for first-time testing. In fact, if you want something that will only test your pages the first time you write them, you are better off doing it by hand; part of teaching the tool how to test a page is to test it while the tool watches. However, once you have tested the page once, the tool has no problem testing it every day to make sure nobody accidentally changed something on it. Fortunately, this latter testing is the boring part nobody wants to do. Just make sure that everyone takes the time to write the test for each change (or at least has the tools guy write the test, depending on your process).

    We found that it was as much effort to write the test automation as to do the test for each version change (this was software not web pages), but once the test for each version was written you would press the button and run the test each time a patch was released, and everything would be tested. Once in a while bugs were found, but not very often. Many of the "bugs" found were not bugs, but changes in the way the product worked and we needed to change the script.

    Finally, the payoff, if there is one, will take more than a year. Warn your management right now about that. Somehow you need to keep metrics (and I'm not convinced any reasonable metrics exist to take) to compare the before and after case. Not everyone who has done test automation is convinced it was worth it. If you think it will take away a lot of the work you are doing now, then no, it is not. If you want it to find a lot of bugs you are currently finding much later, then yes, it is.

    Overall, test automation is MORE work than you are doing now (just a guess, but likely), but it will catch more bugs faster. Try it, but remember a fair trial is a lot of work and it will take some time for the pay out.

    • I'd just like to say: CLAP CLAP CLAP CLAP....

      Very well said. Automation is not for everyone and if you are in a RAD environment it certainly isn't for you. Even worse if you have any intention of shipping in the next month or less.

      If you are? You're Doomed (tm)

      Sorry, but it's a proven fact.

      Signed, someone who has been there (is there still), and strongly advises companies to avoid automation unless they know what they are getting into.

He has not acquired a fortune; the fortune has acquired him. -- Bion
