What’s wrong with the V model? (Concluded...)

In the previous diagram, we had arrows pointing upward (effectively, later in time). It can also make sense to have arrows pointing downward (earlier in time):


In that case, the boxes on the left might be better labeled “Whatever test design can be done with the information available at this point”.

Therefore, when test design is derived from a description of a component of the system, the model must allow such tests to be executed before the component is fully assembled.

I have to admit my picture is awfully ugly – all those arrows going every which way. I have two comments about that:
  1. We are not in the business of producing beauty. We’re in the business of finding as many serious bugs as possible as cheaply as possible.
  2. The ugliness is, in part, a consequence of assuming that the order in which developers produce system description documents, and the relationships among those documents, is the mighty oak tree about which the slender vine of testing twines. If we adopt a different organizing principle, things get a bit more pleasing. But they’re still complicated, because we’re working in a complicated field.
The V model fails because it divides system development into phases with firm boundaries between them. It discourages people from carrying testing information across those boundaries. Some tests are executed earlier than makes economic sense. Others are executed later than makes sense.


Moreover, it discourages you from combining information from different levels of system description. For example, organizations sometimes develop a fixation on “signing off” on test designs. The specification leads to the system test design. That’s reviewed and signed off. From that point on, it’s done. It’s not revised unless the specification is. If information relevant to those tests is uncovered later – if, for example, the architectural design reveals that some tests are redundant – well, that’s too bad. Or, if the detailed design reveals an internal boundary that could easily be incorporated into existing system tests, that’s tough: separate unit tests need to be written.

Therefore, the model must allow individual tests to be designed using information combined from various sources.

And further, the model must allow tests to be redesigned as new sources of information appear.

What’s wrong with the V model? (Continued...)

There’s always some dispute over how big a unit should be (a function? a class? a collection of related classes?), but that doesn’t affect my argument. I believe a unit is the smallest chunk of code that the developers can stand to talk about as an independent entity.

The V model says that someone should first test each unit. When all the subsystem’s units are tested, they should be aggregated and the subsystem tested to see if it works as a whole.


So how do we test the unit? We look at its interface as specified in the detailed design, or at the code, or at both, pick inputs that satisfy some test design criteria, feed those inputs to the interface, then check the results for correctness. Because the unit usually can’t be executed in isolation, we have to surround it with stubs and drivers, as shown at the right. The arrow represents the execution trace of a test.

That’s what most people mean when they say “unit testing”.
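To make the picture concrete, here is a minimal sketch in Python; the `price_with_discount` function and its tax-service collaborator are hypothetical stand-ins for a unit and the neighbour it depends on. The test method plays the driver, and a mock object plays the stub.

```python
import unittest
from unittest import mock

# Hypothetical unit under test: it depends on a collaborating tax service.
def price_with_discount(base_price, discount, tax_service):
    """Apply a fractional discount, then add tax obtained from the collaborator."""
    discounted = base_price * (1 - discount)
    return discounted + tax_service.tax_for(discounted)

class PriceWithDiscountTest(unittest.TestCase):
    """The test case acts as the driver; the mock below acts as the stub."""

    def test_discount_applied_before_tax(self):
        stub_tax_service = mock.Mock()
        stub_tax_service.tax_for.return_value = 5.0   # canned answer from the stub

        result = price_with_discount(100.0, 0.25, stub_tax_service)

        self.assertEqual(result, 80.0)                 # 75.0 after discount + 5.0 tax
        stub_tax_service.tax_for.assert_called_once_with(75.0)

if __name__ == "__main__":
    unittest.main()
```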


I think that approach is sometimes a bad idea. The same inputs can often be delivered to the unit through the subsystem, which thus acts as both stub and driver. That looks like the picture to the right.
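Continuing the same hypothetical example, the subsystem below already wires the unit to a real collaborator, so the test drives the unit through the subsystem’s interface and no unit-specific stub or driver is written:

```python
# Same hypothetical unit as in the earlier sketch.
def price_with_discount(base_price, discount, tax_service):
    discounted = base_price * (1 - discount)
    return discounted + tax_service.tax_for(discounted)

class FlatTaxService:
    """A real (if simple-minded) collaborator supplied by the subsystem."""
    def __init__(self, rate):
        self.rate = rate

    def tax_for(self, amount):
        return amount * self.rate

class CheckoutSubsystem:
    """Acts as both driver and stub: it calls the unit and supplies its collaborator."""
    def __init__(self):
        self.tax_service = FlatTaxService(rate=0.25)

    def total_due(self, base_price, discount):
        return price_with_discount(base_price, discount, self.tax_service)

def test_discount_through_subsystem():
    checkout = CheckoutSubsystem()
    # Same inputs as the isolated unit test, delivered via the subsystem's interface.
    assert checkout.total_due(100.0, 0.25) == 93.75   # 75.0 after discount + 18.75 tax

if __name__ == "__main__":
    test_discount_through_subsystem()
    print("subsystem-level test passed")
```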

The decision about which approach to take is a matter of weighing tradeoffs. How much would the stubs cost? How likely are they to be maintained? How likely are failures to be masked by the subsystem? How difficult would debugging through the subsystem be? If tests aren’t run until integration, some bugs will be found later. How does the estimated cost of that compare to the cost of stubs and drivers? And so forth.

The V model precludes these questions. They don’t make sense. Unit tests get executed when units are done. Integration tests get executed when subsystems are integrated. End of story. It used to be surprising and disheartening to me how often people simply wouldn’t think about the tradeoffs – they were trapped by their model.

Therefore, a useful model must allow testers to consider the possible savings of deferring test execution.


A test designed to find bugs in a particular unit might be best run with the unit in isolation, surrounded by unit-specific stubs and drivers. Or it might be tested as part of the subsystem – along with tests designed to find integration problems. Or, since a subsystem will itself need stubs and drivers to emulate connections to other subsystems, it might sometimes make sense to defer both unit and integration tests until the whole system is at least partly integrated. At that point, the tester is executing unit, integration, and system tests through the product’s external interface. Again, the purpose is to minimize total lifecycle cost, balancing cost of testing against cost of delayed bug discovery. The distinction between “unit”, “integration”, and “system” tests begins to break down. In effect, you have the above picture.

It would be better to label each box on the right “execution of appropriate tests” and be done with it. What of the left side? Consider system test design, an activity driven by the specification. Suppose that you knew that two units in a particular subsystem, working in conjunction, implemented a particular statement in the specification. Why not test that specification statement just after the subsystem was integrated, at the same time as tests derived from the design?

If the statement’s implementation depends on nothing outside the subsystem, why wait until the whole system is available? Wouldn’t finding the bugs earlier be cheaper?

What’s wrong with the V model?

I will use the V model as an example of a bad model. I use it because it’s the most familiar.


A typical version of the V model begins by describing software development as following the stages shown here:


That’s the age-old waterfall model. As a development model, it has a lot of problems. Those don’t concern us here – although it is indicative of the state of testing models that a development model so widely disparaged is the basis for our most common testing model. My criticisms also apply to testing models that are embellishments on better development models, such as the spiral model.

Testing activities are added to the model as follows:

Unit testing checks whether code meets the detailed design. Integration testing checks whether previously tested components fit together. System testing checks whether the fully integrated product meets the specification. And acceptance testing checks whether the product meets the final user requirements.


To be fair, users of the V model will often separate test design from test implementation. The test design is done when the appropriate development document is ready.

This model, with its appealing symmetry, has led many people astray.

Software Test Life Cycle (Part 3 of 3)

4. Test Execution (Unit / Functional Testing Phase):
By this time, the development team would have completed creation of the work products. Of course, the work products would still contain bugs. So, in the execution phase, developers would carry out unit testing, with the testers' help if required. Testers would execute the test plans. Automated testing scripts would be completed. Stress and performance testing would be executed. White box testing, code reviews, etc. would be conducted. As and when bugs are found, they would be reported.
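As a minimal, hypothetical sketch of what execution-phase automation might look like, the script below runs a couple of functional checks against an invented `apply_coupon` function and records any failures as bug entries in a JSON file (the file name and record fields are assumptions, not a prescribed format):

```python
import json
import traceback
from datetime import date

# Hypothetical work product under test.
def apply_coupon(total, code):
    """Subtract a flat 10 from the total when a valid coupon code is supplied."""
    return total - 10 if code == "SAVE10" else total

# Stand-ins for the planned functional test cases.
def test_valid_coupon():
    assert apply_coupon(200.0, "SAVE10") == 190.0

def test_unknown_coupon_ignored():
    assert apply_coupon(200.0, "BOGUS") == 200.0

def run_and_report(tests, report_path="bug_report.json"):
    """Execute each test; record any failure as a bug entry, as is done in this phase."""
    bugs = []
    for test in tests:
        try:
            test()
        except AssertionError:
            bugs.append({
                "test": test.__name__,
                "date": str(date.today()),
                "details": traceback.format_exc(),
            })
    with open(report_path, "w") as fh:
        json.dump(bugs, fh, indent=2)
    return bugs

if __name__ == "__main__":
    failures = run_and_report([test_valid_coupon, test_unknown_coupon_ignored])
    print(f"{len(failures)} bug(s) reported")
```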

5. Test Cycle (Re-Testing Phase):

By this time, at least one test cycle (one round of test execution) would have been completed and bugs would have been reported. Once the development team fixes the bugs, a second round of testing begins. This testing could be mere correction verification testing, that is, checking only the part of the code that has been corrected. It could also be Regression Testing, where the entire work product is tested to verify that corrections to the code have not affected other parts of the code.

Hence this process of:
Testing --> Bug reporting --> Bug fixing (and enhancements) --> Retesting
is carried out as planned. This is where automated tests are extremely useful for repeating the same test cases again and again. During this phase, a review of the test cases and test plan could also be carried out.
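A minimal sketch of how automation supports this cycle, assuming pytest and two hypothetical markers (`correction` and `regression`, which would be registered in pytest.ini): the correction-verification run re-executes only the tests around the fix, while the regression run repeats the whole suite.

```python
import pytest

# Hypothetical work product: a fix was just applied to the discount rule.
def discounted_price(price, percent):
    return price - price * percent // 100   # integer prices, whole-number percentages

@pytest.mark.correction   # correction verification: only the area that was fixed
def test_fixed_bug_zero_percent_discount():
    assert discounted_price(500, 0) == 500

@pytest.mark.regression   # regression suite: repeated in full after every fix
@pytest.mark.parametrize("price, percent, expected", [
    (500, 10, 450),
    (500, 50, 250),
    (999, 100, 0),
])
def test_existing_behaviour_unchanged(price, percent, expected):
    assert discounted_price(price, percent) == expected
```

Running `pytest -m correction` repeats only the targeted check; `pytest -m regression` repeats the full suite, which is where automation pays for itself.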

6. Final Testing and Implementation (Code Freeze Phase):
When the exit criteria are achieved or the planned test cycles are completed, final testing is done. Ideally, this is System or Integration testing. Any remaining Stress and Performance testing is also carried out. Inputs for process improvements, in terms of software metrics, are given. Test reports are prepared. If required, a test release note releasing the product for rollout could be prepared. Other remaining documentation is completed.
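As a hedged illustration of an exit-criteria check, the sketch below assumes two invented thresholds (a 95% pass rate and no open critical bugs); the actual criteria come from the test plan.

```python
# Hypothetical exit-criteria check performed before the final test release.
def exit_criteria_met(results, open_bugs, required_pass_rate=0.95):
    """results: list of 'pass'/'fail' outcomes; open_bugs: list of open-bug severities."""
    pass_rate = results.count("pass") / len(results)
    no_critical_open = all(severity != "critical" for severity in open_bugs)
    return pass_rate >= required_pass_rate and no_critical_open

if __name__ == "__main__":
    outcomes = ["pass"] * 97 + ["fail"] * 3
    print(exit_criteria_met(outcomes, open_bugs=["minor", "major"]))   # True
```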

7. Post Implementation (Process Improvement Phase):

This phase, though it looks good on paper, is seldom carried out. In this phase, the testing is evaluated and lessons learnt are documented. Software Metrics (Bug Analysis Metrics) are analyzed statistically and conclusions are drawn. Strategies to prevent similar problems in future projects are identified. Process Improvement Suggestions are implemented. The testing environment is cleaned up, and test cases, records and reports are archived.
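A small, hypothetical example of the kind of bug-analysis metric this phase produces: counting reported bugs per module to decide where prevention effort should go (the module names and records are invented).

```python
from collections import Counter

# Hypothetical bug records gathered over the project.
bugs = [
    {"module": "login",   "severity": "major"},
    {"module": "login",   "severity": "minor"},
    {"module": "billing", "severity": "critical"},
    {"module": "billing", "severity": "major"},
    {"module": "billing", "severity": "major"},
]

bugs_per_module = Counter(bug["module"] for bug in bugs)
print(bugs_per_module.most_common())   # [('billing', 3), ('login', 2)] -> focus prevention on billing
```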

Software Test Life Cycle (Part 2 of 3)

2. Test Analysis (Documentation Phase):
The Analysis phase is more an extension of the planning phase. Whereas the planning phase pertains to high-level plans, the Analysis phase is where detailed plans are documented. This is when actual test cases and scripts are planned and documented.

This phase can be further broken down into the following steps:

  • Review Inputs: The requirement specification document, feature specification document and other project planning documents are taken as inputs, and the test plan is further broken down into lower-level test cases.
  • Formats: Generally at this phase a functional validation matrix based on Business Requirements is created. Then the test case format is finalized. Software Metrics are also designed at this stage. Using software such as Microsoft Project, the testing timeline along with its milestones is created.
  • Test Cases: Based on the functional validation matrix and other input documents, test cases are written. Some mapping is also done between the features and the test cases (a minimal sketch of such a mapping follows this list).
  • Plan Automation: While creating test cases, those cases that should be automated are identified. Ideally, the test cases that are relevant for Regression Testing are identified for automation. Areas for performance, load and stress testing are also identified.
  • Plan Regression and Correction Verification Testing: The testing cycles, i.e. the number of times testing will be redone to verify that fixed bugs have not introduced newer errors, are planned.
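Here is a minimal sketch of such a functional validation matrix and test case records; the requirement IDs, test case IDs and fields are hypothetical, not a prescribed format.

```python
# Functional validation matrix: each business requirement maps to the test cases
# that will validate it (all IDs are hypothetical).
validation_matrix = {
    "REQ-001 User can log in with valid credentials": ["TC-101", "TC-102"],
    "REQ-002 Login locks after three bad attempts":   ["TC-103"],
    "REQ-003 Password reset email is sent":           ["TC-104", "TC-105"],
}

# Test case records in the finalized format, tagged for automation and regression planning.
test_cases = {
    "TC-101": {"title": "Login with valid user",           "automate": True,  "regression": True},
    "TC-102": {"title": "Login is case-sensitive",         "automate": True,  "regression": True},
    "TC-103": {"title": "Lockout after three failures",    "automate": False, "regression": True},
    "TC-104": {"title": "Reset email for known user",      "automate": True,  "regression": False},
    "TC-105": {"title": "No reset email for unknown user", "automate": True,  "regression": False},
}

# Simple completeness check: every requirement has at least one mapped test case.
uncovered = [req for req, cases in validation_matrix.items() if not cases]
assert not uncovered, f"Requirements without test cases: {uncovered}"
```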

3. Test Design (Architecture Document and Review Phase):

One has to realize that the testing life cycle runs parallel to the software development life cycle. So by the time one reaches this phase, the development team would have created some code, or at least a prototype, or at minimum a design document.

Hence in the Test Design (Architecture Document Phase), all the plans, test cases, etc. from the Analysis phase are revised and finalized. In other words, looking at the work product or design, the test cases, test cycles and other plans are finalized. Newer test cases are added. Some kind of Risk Assessment Criteria is also developed. Writing of automated testing scripts also begins. Finally, the testing reports (especially unit testing reports) are finalized. Quality checkpoints, if any, are included in the test cases based on the SQA Plan.
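One way to make the risk assessment criteria concrete is sketched below, assuming a simple likelihood-times-impact score (both on an invented 1 to 5 scale) used to order the finalized test cases:

```python
# Hypothetical risk scores attached to finalized test cases during test design.
test_cases = [
    {"id": "TC-201", "area": "payment", "likelihood": 4, "impact": 5},
    {"id": "TC-202", "area": "reports", "likelihood": 2, "impact": 2},
    {"id": "TC-203", "area": "login",   "likelihood": 3, "impact": 4},
]

for case in test_cases:
    case["risk"] = case["likelihood"] * case["impact"]

# Higher-risk cases are scheduled (and automated) first.
for case in sorted(test_cases, key=lambda c: c["risk"], reverse=True):
    print(case["id"], case["area"], case["risk"])
```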

(Continued...)

Software Test Life Cycle (Part 1 of 3)

Usually, testing is considered a part of the System Development Life Cycle, but it can also be treated as its own process, termed the Software Testing Life Cycle or Test Development Life Cycle.

Software Testing Life Cycle consists of the following phases:
1. Planning
2. Analysis
3. Design
4. Execution
5. Cycles
6. Final Testing and Implementation
7. Post Implementation


1. Test Planning (Product Definition Phase):
The test planning phase mainly signifies preparation of a test plan. A test plan is a high-level planning document, derived from the project plan (if one exists), that details the future course of testing. Sometimes a quality assurance plan, which is broader in scope than a test plan, is also made.

Contents of a Test Plan are as follows:

  • Scope of testing
  • Entry Criteria (when will testing begin?)
  • Exit Criteria (when will testing stop?)
  • Testing Strategies (Black Box, White Box, etc.)
  • Testing Levels (Integration testing, Regression testing, etc.)
  • Limitations (if any)
  • Planned Reviews and Code Walkthroughs
  • Testing Techniques (Boundary Value Analysis, Equivalence Partitioning, etc.; see the sketch after this list)
  • Testing Tools and Databases (Automatic Testing Tools, Performance testing tools)
  • Reporting (how bugs will be reported)
  • Milestones
  • Resources and Training
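To make two of the listed techniques concrete, here is a hedged sketch applying Equivalence Partitioning and Boundary Value Analysis to a hypothetical rule that accepts ages from 18 to 60 inclusive (the rule and values are invented for illustration):

```python
import pytest

# Hypothetical rule under test: applicants aged 18 to 60 (inclusive) are eligible.
def is_eligible(age):
    return 18 <= age <= 60

# Equivalence Partitioning: one representative value per partition.
@pytest.mark.parametrize("age, expected", [
    (10, False),   # partition: below the valid range
    (35, True),    # partition: inside the valid range
    (75, False),   # partition: above the valid range
])
def test_equivalence_partitions(age, expected):
    assert is_eligible(age) == expected

# Boundary Value Analysis: values on and immediately around each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True), (19, True),   # lower boundary
    (59, True),  (60, True), (61, False),  # upper boundary
])
def test_boundary_values(age, expected):
    assert is_eligible(age) == expected
```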

Contents of an SQA Plan, which is broader than a test plan, are as follows:

The IEEE standard for SQA Plan preparation (IEEE Std 730) contains the following outline:

  • Purpose
  • Reference Documents
  • Management
  • Documentation
  • Standards, Practices and Conventions
  • Reviews and Audits
  • Software Configuration Management
  • Problem Reporting and Corrective Action (Software Metrics to be used can be identified at this stage)
  • Tools, Techniques and Methodologies
  • Code Control
  • Media Control
  • Supplier Control
  • Records Collection, Maintenance and Retention

(Continued...)

 
