A checklist of model requirements

These requirements are grouped according to testing activity.

A test model should:
  1. force a testing reaction to every code handoff in the project.
  2. require the test planner to take explicit, accountable action in response to dropped handoffs, new handoffs, and changes to the contents of handoffs.
  3. explicitly encourage the use of sources of information other than project
    documentation during test design.
  4. allow the test effort to be degraded by poor or late project documentation, but prevent it from being blocked entirely.
  5. allow individual tests to be designed using information combined from various sources.
  6. allow tests to be redesigned as new sources of information appear.
  7. include feedback loops so that test design takes into account what’s learned by running tests.
  8. allow testers to consider the possible savings of deferring test execution.
  9. allow tests of a component to be executed before the component is fully assembled.

Summary

The V model is fatally flawed, as is any model that:
  1. Ignores the fact that software is developed in a series of handoffs, where each handoff changes the behavior of the previous handoff,
  2. Relies on the existence, accuracy, completeness, and timeliness of development documentation,
  3. Asserts a test is designed from a single document, without being modified by later or earlier documents, or
  4. Asserts that tests derived from a single document are all executed together.
I have sketched – but not elaborated – a replacement model. It organizes the testing effort around code handoffs or milestones. It takes explicit account of the economics of testing: that the goal of test design is to discover inputs that will find bugs, and that the goal of test implementation is to deliver those inputs in any way that minimizes lifecycle costs.

The model assumes imperfect and changing information about the product. Testing a product is a learning process. In the past, I haven’t thought much about models. I ostensibly used the V model. I built my plans according to it, but seemed to spend a lot of my time wrestling with issues that the model didn’t address. For other issues, the model got in my way, so I worked around it.

I hope that thinking explicitly about requirements will be as useful for developing a testing model as it is when developing a product. I hope that I can elaborate on the model presented in this paper to the point that it provides as much explicit guidance as the V model seems to.

A different model

Let’s step back for a second. What is our job?

There are times when some person or group of people hands some code to other people and says, “Hope you like it.” That happens when the whole project puts bits on a CD and gives them to customers. It also happens within a project:
  • One development team says to other teams, “We’ve finished the XML enhancements to the COMM library. The source is now in the master repository; the executable library is now in the build environment. The XARG team should now be unblocked – go for it!”
  • One programmer checks in a bug fix and sends out email saying, “I fixed the bug in allocAttList. Sorry about that.” The other programmers who earlier stumbled over that code can now proceed.
In all cases, we have people handing code to other people, possibly causing them damage. Testers intervene in this process. Before the handoff, testers execute the code, find bugs (the damage), and ask the question, “Do you really want to hand this off?” In response, the handoff may be deferred until bugs are fixed.

This act is fundamental to testing, regardless of the other things you may do. If you don’t execute the code to uncover possible damage, you’re not a tester.

Our test models should be built around the essential fact of our lives: code handoffs. Therefore, a test model should force a testing reaction to every code handoff in the project. I’ll use the XML-enhanced COMM library as an example. That’s a handoff from one team to the rest of the project. Who could be damaged?
  • It might immediately damage the XARG team, who will be using those XML enhancements in their code.
  • It might later damage the marketing people, who will be giving a demonstration of the “partner release” version of the product at a trade show. XML support is an important part of their sales pitch.
  • Still later, it might damage a partner who adopts our product.
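The “testing reaction to a handoff” can be made concrete with a small sketch. This is purely illustrative: the names (Handoff, react_to_handoff, xml_parse_roundtrip, the COMM and XARG teams’ artifacts) are invented for this example, and the test is a stand-in, not a real check of any library.

```python
# Illustrative sketch only. All names here are hypothetical, invented to show
# the shape of "force a testing reaction to every code handoff."

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Handoff:
    """A delivery of code from one group to others who might be damaged by it."""
    name: str
    producer: str
    consumers: List[str]


def react_to_handoff(handoff: Handoff, tests: List[Callable[[], bool]]) -> bool:
    """The testing reaction: execute tests before the handoff proceeds.

    Returns True if the handoff can go ahead, False if it should be
    deferred until the failing tests (the damage) are dealt with.
    """
    failures = [t.__name__ for t in tests if not t()]
    if failures:
        print(f"Defer {handoff.name!r}: failing tests {failures}")
        return False
    print(f"{handoff.name!r}: no damage found, hand it off")
    return True


def xml_parse_roundtrip() -> bool:
    # Stand-in for a real test run against the handed-off code.
    return True


comm = Handoff("COMM XML enhancements", "COMM team", ["XARG team", "marketing"])
react_to_handoff(comm, [xml_parse_roundtrip])
```

The point of the sketch is the structure, not the code: every handoff names its potential victims, and no handoff escapes without some testing reaction, even if that reaction is a decision to defer the tests.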

We immediately have some interesting test planning questions. The simplest approach would be to test the XML enhancements completely at the time of handoff. (By “completely,” I mean “design as many tests for them as you ever will.”) But maybe some XML features aren’t required by the XARG team, so it makes sense to test them through the integrated partner release system. That means moving some of the XML-inspired testing to a later handoff. Or we might move it later for less satisfying reasons, such as other testing tasks taking precedence in the near term. The XARG team will have to resign itself to stumbling over XML bugs for a while.

Our testing plan might be represented by a testing schedule that annotates the
development timeline.
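Such a schedule can be sketched as a simple mapping from handoffs (points on the development timeline) to the test tasks planned around each one. The handoff names and task descriptions below are invented for illustration; only the idea of deferring some XML-inspired testing to a later handoff comes from the discussion above.

```python
# Illustrative sketch only. Handoff names and test tasks are hypothetical;
# the structure shows a test plan annotating the development timeline.

schedule = {
    "COMM XML handoff": [
        "design tests for the XML features the XARG team needs",
        "execute those tests before unblocking the XARG team",
    ],
    "partner release integration": [
        # Deferred from the earlier handoff: cheaper to exercise through
        # the integrated system, at the cost of XARG stumbling over bugs.
        "execute tests for XML features the XARG team doesn't use",
    ],
}

for handoff, tasks in schedule.items():
    print(handoff)
    for task in tasks:
        print("  -", task)
```

A plan in this shape makes the deferral decisions visible: each test task is attached to the handoff at which it will actually be executed, not merely to the document that inspired it.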
