Software Test Automation - Myths and Facts


Introduction
Today software test automation is becoming more and more popular in both client/server and web environments. As requirements keep changing constantly (new requirements are often introduced on a daily basis) and the testing window keeps getting smaller every day, managers are realizing a greater need for test automation. This is good news for us (people who do test automation). But I am afraid this is the only good news.


Myths & Facts
A number of articles and books have been written on different aspects of software test automation. “Test Automation Snake Oil” by James Bach is an excellent article on some of the myths of automation. I would like to discuss some of these myths and try to point out the facts behind them. I would also like to share some of my observations and, where possible, suggest solutions. These are based on my experience with a number of automation projects I have been involved with.


Myth 1: Find more bugs

Fact: Some QA managers think that by doing automation they should be able to find more bugs. It’s a myth. Let’s think about it for a minute. The process of automation starts from a set of written test cases. In most places the test cases are written by test engineers who are familiar with the application they are testing. The test cases are then handed to the automation engineers, who in most cases are not very familiar with the test cases they are automating. In going from test cases to test scripts, automation does not add anything to the process that would find more bugs. The test scripts are only as good as the test cases when it comes to finding bugs. So it’s the test cases that find bugs (or don’t find bugs), not the test scripts.
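The point can be seen in miniature. An automated check simply mechanizes the assertions the written test case already specifies; it cannot check anything the test case never mentioned. A minimal sketch (the `withdraw` function and its rules are hypothetical, invented here for illustration):

```python
def withdraw(balance, amount):
    """Hypothetical function under test: debit an account."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

# The script encodes exactly the steps and expected results of the
# written test case -- no more, no less. If the manual test case never
# mentioned overdrafts, the automated suite would be silent on them too.
def test_normal_withdrawal():
    assert withdraw(100, 30) == 70

def test_overdraft_rejected():
    try:
        withdraw(100, 150)
    except ValueError:
        pass  # rejection is the documented, expected behaviour
    else:
        raise AssertionError("overdraft was not rejected")

test_normal_withdrawal()
test_overdraft_rejected()
```

Running the same two checks a thousand times faster does not make them check anything new; the bug-finding power still lives in the test design.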

Myth 2: Eliminate or reduce manual testers

Fact: In order to justify automation, some point out that it should make it possible to eliminate or reduce the number of manual testers in the long run and thus save money. Absolutely not true. Eliminating or reducing manual testers is not an objective of test automation. Here is why: as I pointed out earlier, the test scripts are only as good as the test cases, and the test cases are written primarily by manual testers. They are the ones who know the application inside out. If the word gets out (it usually does) that the number of manual testers will be reduced by introducing automation, then most, if not all, of the manual testers will walk out the door, and quality will walk out with them.

Some Classic Testing Mistakes

The role of testing

  • Thinking the testing team is responsible for assuring quality.
  • Thinking that the purpose of testing is to find bugs.
  • Not finding the important bugs.
  • Not reporting usability problems.
  • No focus on an estimate of quality (and on the quality of that estimate).
  • Reporting bug data without putting it into context.
  • Starting testing too late (bug detection, not bug reduction).

Planning the complete testing effort

  • A testing effort biased toward functional testing.
  • Underemphasizing configuration testing.
  • Putting stress and load testing off to the last minute.
  • Not testing the documentation.
  • Not testing installation procedures.
  • An overreliance on beta testing.
  • Finishing one testing task before moving on to the next.
  • Failing to correctly identify risky areas.
  • Sticking stubbornly to the test plan.

Personnel issues

  • Using testing as a transitional job for new programmers.
  • Recruiting testers from the ranks of failed programmers.
  • Testers are not domain experts.
  • Not seeking candidates from the customer service staff or technical writing staff.
  • Insisting that testers be able to program.
  • A testing team that lacks diversity.
  • A physical separation between developers and testers.
  • Believing that programmers can’t test their own code.
  • Programmers are neither trained nor motivated to test.

The tester at work

  • Paying more attention to running tests than to designing them.
  • Unreviewed test designs.
  • Being too specific about test inputs and procedures.
  • Not noticing and exploring “irrelevant” oddities.
  • Checking that the product does what it’s supposed to do, but not that it doesn’t do what it isn’t supposed to do.
  • Test suites that are understandable only by their owners.
  • Testing only through the user-visible interface.
  • Poor bug reporting.
  • Adding only regression tests when bugs are found.
  • Failing to take notes for the next testing effort.
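The “doesn’t do what it isn’t supposed to do” mistake in the list above can be made concrete. In this sketch (the `parse_age` validator is hypothetical), the positive check alone would pass even if the function silently accepted garbage; only the negative checks probe the forbidden behaviour:

```python
def parse_age(text):
    """Hypothetical validator: parse an age in whole years (0-150)."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Positive check: the product does what it's supposed to do.
assert parse_age("42") == 42

# Negative checks: the product must NOT do what it isn't supposed to do.
for bad in ["-1", "200", "abc", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # rejection is the expected behaviour
    else:
        raise AssertionError(f"parse_age accepted invalid input {bad!r}")
```

A suite that only mirrors the specification’s happy paths tells you nothing about the unspecified paths users will inevitably exercise.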

Test automation

  • Attempting to automate all tests.
  • Expecting to rerun manual tests.
  • Using GUI capture/replay tools to reduce test creation cost.
  • Expecting regression tests to find a high proportion of new bugs.

Code coverage

  • Embracing code coverage with the devotion that only simple numbers can inspire.
  • Removing tests from a regression test suite just because they don’t add coverage.
  • Using coverage as a performance goal for testers.
  • Abandoning coverage entirely.
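Why pruning tests that “add no coverage” is risky can be shown with two checks that execute exactly the same line, only one of which catches a plausible bug (the `discount` function and the typo are hypothetical):

```python
def discount(price, rate):
    """Hypothetical: return the price after a percentage discount."""
    return price - price * rate / 100

# Both tests cover the same single line, so a coverage-driven pruning
# pass would flag one of them as redundant...
assert discount(200, 0) == 200    # boundary: zero discount
assert discount(200, 50) == 100   # mid-range rate

# ...yet if a typo returned the discount amount (price * rate / 100)
# instead of the discounted price, the mid-range test would still pass,
# because 200 * 50 / 100 happens to equal the expected 100. Only the
# boundary test would fail. Coverage measures which lines ran, not
# which behaviours were checked.
```

Two tests with identical coverage can check entirely different behaviours, which is exactly why coverage numbers make a poor pruning criterion.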

Testing Mistakes continued...

From the “find important bugs” standpoint, the first testing effort was superior. It found 100 bugs before release, whereas the second found only 74. But I think you can make a strong case that the second effort is more useful in practical terms. Let me restate the two situations in terms of what a test manager might say before release:
1. “We have tested subsystem 1 very thoroughly, and we believe we’ve found almost all of the priority 1 bugs. Unfortunately, we don’t know anything about the bugginess of the remaining five subsystems.”
2. “We’ve tested all subsystems moderately thoroughly. Subsystem 1 is still very buggy. The other subsystems are about 1/10th as buggy, though we’re sure bugs remain.”

This is, admittedly, an extreme example, but it demonstrates an important point. The project manager has a tough decision: would it be better to hold on to the product for more work, or should it be shipped now? Many factors - all rough estimates of possible futures - have to be weighed: Will a competitor beat us to release and tie up the market? Will dropping an unfinished feature to make it into a particular magazine’s special “Java Development Environments” issue cause us to suffer in the review? Will critical customer X be more annoyed by a schedule slip or by a shaky product? Will the product be buggy enough that profits will be eaten up by support costs or, worse, a recall?

The testing team will serve the project manager better if it concentrates first on providing estimates of product bugginess (reducing uncertainty), then on finding more of the bugs that are estimated to be there. That affects test planning, the topic of the next theme.

It also affects status reporting. Test managers often err by reporting bug data without putting it into context. Without context, project management tends to focus on one graph:
[Graph omitted: cumulative count of bugs found over time, flattening as release approaches.]
The flattening in the curve of bugs found will be interpreted in the most optimistic possible way unless you as test manager explain the limitations of the data:
  • “Only half the planned testing tasks have been finished, so little is known about half the areas in the project. There could soon be a big spike in the number of bugs found.”
  • “That’s especially likely because the last two weekly builds have been lightly tested.”

“Furthermore, based on previous projects with similar amounts and kinds of testing effort, it’s reasonable to expect at least 45 priority-1 bugs remain undiscovered. Historically, that’s pretty high for a successful product.”