Sample Questions for IBM Rational Unified Process (RUP) Exam continued...

21. Which of the following is not a phase in the Rational Unified Process?
  • Elaboration
  • Inception
  • Implementation
  • Transition
Correct Answer: C

22. What statement is true of iterations?
  • You establish plans for each phase, but not for iterations.
  • A lifecycle phase may contain many iterations.
  • A construction iteration cannot include any activities from the Requirements workflow.
  • Iterations may contain many phases.
  • A major milestone marks the end of every iteration.
Correct Answer: B

23. What does the term "artifact" indicate in the Rational Unified Process?
  • A unit of work to be performed.
  • A piece of information that the process produces, modifies, or uses.
  • That a software engineer must perform the activity.
  • The specific job position that must perform the activity.
Correct Answer: B

24. Which of the following statements is not true regarding risk management in the Rational Unified Process?
  • You must update the risk list in each iteration.
  • The risk list should include technical, business, and management risks.
  • It is best to develop low-risk portions of the system in early iterations to ensure the schedule is met.
  • Risk reduction drives iterations.
Correct Answer: C

25. Which of the following is an activity of the Tester?
  • Test integrated components
  • Test developed components as units
  • Define the organization of the code in terms of implementation subsystems
  • Implement classes and objects in terms of components
  • Integrate the results produced by individual implementers (or teams) into an executable system.
Correct Answer: A

Sample Questions for IBM Rational Unified Process (RUP) Exam continued...

16. Which of the following statements is true?

  • Most systems do not need any supplementary specifications.
  • Supplementary specifications capture requirements that cannot easily be expressed as use cases.
  • Legal and regulatory requirements are usually best captured in use cases.
  • Most systems have no performance requirements.

Correct Answer: B

17. Which of the following concepts does not belong to the Configuration & Change Management discipline?

  • Mapping from Design to Code
  • Product Directory Structure
  • Change Request Management
  • Baselining

Correct Answer: A

18. Which of the following is not a software engineering best practice recommended by Rational?

  • Manage change.
  • Develop iteratively.
  • Use component architectures.
  • Maximize reuse.
  • Freeze requirements at project start.

Correct Answer: E

19. What is a development case?

  • Another name for a key mechanism
  • A development cycle specifically devoted to maintenance
  • A sample architectural design used to guide architectural decisions
  • The development process that you have chosen to follow in your project

Correct Answer: D

20. In which lifecycle phase is software architecture the primary focus?

  • Design
  • Inception
  • Elaboration
  • Transition
Correct Answer: C

Sample Questions for IBM Rational Unified Process (RUP) Exam continued...

11. Which of the following statements does not describe iterative assessment?
  • Provides insight into the technical progress of the project.
  • Is optional and often eliminated during early iterations.
  • Can result in updates to the plan for the subsequent iteration.
  • Includes a comparison of the iteration plan with the actual cost, schedule, and content accomplished.
Correct Answer: B

12. Which of the following statements is false concerning the development case?
  • It tells you which artifacts to produce.
  • You can use it to specify the tools used to produce an artifact.
  • You can use it to specify the degree of formality associated with an artifact.
  • It is fixed during Inception and does not generally change over the course of a project.
Correct Answer: D

13. Which of the following does not relate to project planning for iterative development?
  • Produces a coarse-grained plan identifying the number of iterations per phase, their objectives, and duration.
  • At the most detailed level, consists of an iteration plan establishing the fine-grained activities and milestones for each iteration.
  • Incorporates risk management.
  • Drives to a detailed, complete plan by the end of Inception.
Correct Answer: D

14. Which of the following statements does not characterize a typical iteration in the Elaboration phase?
  • Involves more analysis and design than testing effort.
  • Updates the risk list.
  • Contributes significantly to the Software Architecture Document.
  • Spends a significant amount of effort in developing the vision and business case.
  • Requires the leadership of the software architect.
Correct Answer: D

15. Which of the following statements regarding requirements is true?
  • Only external requests for change need to be approved by the change review team.
  • To avoid confusion, all requirements should be standalone with no links to other requirements.
  • Requirements attributes eliminate the need for configuration and change management of requirements.
  • Traceability links and requirements attributes are useful in impact assessment of proposed changes.
Correct Answer: D

Sample Questions for IBM Rational Unified Process (RUP) Exam continued...

6. Which of the following is not a purpose of the Requirements discipline?
  • To agree with the customer and users on what the system should do.
  • To give system developers a better understanding of the requirements of the system.
  • To establish a complete requirements baseline before starting the Design phase.
  • To delimit the system.
Correct Answer: C

7. Which of the following statements is false concerning the Implementation discipline?
  • Implementation occurs only in the Construction phase.
  • Implementers unit test the developed components.
  • You implement classes and objects in terms of components.
  • The software architect structures the implementation model.
Correct Answer: A

8. Which of the following has the least impact on the scope of an iteration?
  • Required system functionality
  • The project's specific objectives
  • The phase the project is in
  • The project's top risks
Correct Answer: B

9. Which of the following is true with respect to an iterative project?
  • Planning is done only at the start of each phase.
  • Planning is rarely done.
  • All planning, including iteration plans, must be completed during the Inception Phase.
  • Planning is done incrementally with detailed iteration planning done first.
  • Planning is done incrementally with coarse-grained planning done first.
Correct Answer: E

10. Which of the following statements does not characterize a typical iteration in the Inception phase?
  • Includes development of the Development Case for the project.
  • Spends a significant amount of effort in developing the vision and business case.
  • Includes a significant amount of testing.
  • Includes creation of a risk list for the project.
Correct Answer: C

Sample Questions for IBM Rational Unified Process (RUP) Exam

Hi all,
I have started this new series in which I will be presenting some of the certification exam questions for IBM Rational Unified Process (RUP). Please note that I have taken the utmost care to provide accurate answers; however, I am not responsible for any unforeseen errors.

1. What statement is correct concerning change management?

  • You need it only on the later iterations, after the end of Elaboration.
  • You must change manage all project artifacts, not just source code.
  • All the changes to be managed originate with the customer and other external entities.
  • It is the practice of preventing changes to the requirements after the end of Elaboration.
Correct Answer: B

2. Which of the following statements does not characterize a use case? (Select all that apply.)
  • Is a complete and meaningful flow of events.
  • Is initiated by an actor to invoke functionality.
  • A description of the user interface
  • Models a dialogue between an actor and the system.
Correct Answer: C

3. Which of the following does not help to define an iteration plan?
  • A list of risks you must address by the end of the iteration
  • Development-organization assessment
  • A list of scenarios or use cases you must complete by the end of the iteration
  • The current status of the project
Correct Answer: B

4. Which of the following statements does not characterize project metrics?
  • Should not be collected on early iterations.
  • Are useful in tracking trends in important variables such as rework.
  • Are the basis for iteration assessment.
  • Provide insight into progress and quality.
Correct Answer: A

5. What does continuous integration mean in the context of the iterative lifecycle?
  • Integration only for every external release
  • Integration only at the end of Elaboration, Construction, and Transition
  • Integration only at the end of Elaboration and Construction
  • Integration at the end of every iteration
Correct Answer: D

Where Do You See Yourself AFTER 5 YEARS

How many times have you cringed on being asked this question? Kunal Guha explains the logic behind it

If a poll were conducted on ‘worst interview questions’, this one would probably top the list. And as pointless as it may seem, recruiters don’t seem to tire of asking the question: ‘Where do you see yourself five years from now?’ Let’s find out why your answer to the above is so important.

TESTING SINCERITY

Contrary to technical questions that could be answered with mechanical ease, this one requires one to think, unless you’ve subscribed to ‘Accepted Interview Answers Digest’. Recruiters, however, can easily see through rehearsed answers and pick out the genuine ones. “A lot of interview questions are based on the person’s accomplishments and choices. Answers to these questions demonstrate a pattern, on the basis of which one can evaluate the applicant’s future aspirations. The trick is to observe how well the applicant’s responses tie in with what one has done, and how well one is able to logically break down one’s career steps. I have seen candidates who broke down their long-term goals into small logical steps,” explains Manoj Varghese, HR Head, Google India.

Recruiters also pay a lot of attention to a candidate’s career record. “Responses backed by a documented track record are valued most. The response may be rehearsed, but the same can be easily validated against the candidate’s track record. Also, an employer’s own experience helps assess whether the candidate’s aspirations are realistic,” explains M V Subramanian, Director-Staffing, HP India. Satish Venkatachaliah, HR Head, SAP Labs, seconds the thought, “We believe that past behaviour is the most reasonable predictor of future behaviour. In case of freshers, we try and look back at the nature of the candidates’ college behaviour - whether he/she was interactive and enterprising, has participated in extra-curricular activities and worked in teams for campus projects.”

JUDGING CHARACTER
Since where you see yourself in the future is based on your imagination, your answer could reveal a great deal about your thinking pattern and how you would respond to any given situation. These factors could be crucial in evaluating a candidate’s potential. “In our case, if we ask this question, the reason would be to understand the individual’s thinking and quality of reasoning, self-awareness, etc. For example, the person may not know the mechanism of the organisation. Therefore, if he/ she responds to the question in the context of the organisation without asking exploratory questions regarding career growth in that organisation, then the person is obviously shooting from the hip. On the other hand, if the person responds generically, then it is important to understand why the person considers that path and not other alternatives. This will give an idea of the person’s self-evaluation,” explains Pankaj Bhargava, HR Head, Marico Limited.

MATCHING EXPECTATIONS
It is crucial to understand how realistic a candidate’s expectations are and whether there is a match between the candidate’s goals and the organisation’s. “Investigating a prospective candidate’s future aspirations tells us in advance about the candidate’s personal goals. This helps us match the candidate’s personal goals with those that we have set out for the role and the organisation. If we find a strong dissonance between the two, it would definitely affect the selection process,” explains Mona Cheriyan, General Manager, Employee Engagement & Europe Liaison, i-flex solutions limited.

Subramanian elaborates, “This question, though perhaps not the most significant, actually attempts to co-relate the candidate’s aspirational level with that of his/her potential. The belief here is that an employee is motivated when he/she sees the role in question as a path toward a larger career goal.”

PRESENTING ASPIRATIONS
You may say that you see yourself as a delivery head or a project director in 3 years, but unless you are able to logically break down your aspirations, it will be a lost cause. So, where you see yourself in the future must have a strong link with the current capacity being offered to you. “Aspirations should be always linked with the role and what the applicant would like to do within a period of time. I have seen candidates create a very good impression with the interviewers by spending time in understanding what their current role is going to be and thinking about ways to add value to that. Such candidates plan their long-term aspirations around mastering their current role, contributing in that role and then checking about the next level of contributions. Even titles can be misleading from organisation to organisation; hence an applicant expressing a desire to be a senior manager in five years may sometimes not make sense in the context of the organisation one is interviewing with,” elucidates Varghese.

It is certain that there is no right or wrong answer when asked how you envision yourself in the future. But answers can be weak or strong, and if your answer is backed by strong conviction, a weak response can be transformed into a strong one. So, the next time you appear for an interview, make sure you’re honest and reasonably ambitious but not overly aggressive, and it should work out just fine!

Software Test Automation - Myths and Facts continued...

Observations
I have met a number of QA managers who are frustrated with their automation. According to them, the tool is not doing what it is supposed to do. Here is a true story: one client (I had the opportunity to work with them for some time) found out that the tool they had just bought did not support the application they were testing (I am not making this up). How can this happen? It does happen, more often than one would think. I will come back to this when I discuss possible solutions. A manager at one of the major telecom companies, with whom I had a recent interview, told me that after three years and more than a million dollars he is still struggling with automation. This is pretty sad, and I get the feeling that he is not alone.

Solutions/Suggestions

Let’s discuss some of the reasons for this frustration and some of the solutions to this problem.

  • Unrealistic expectations: Most managers have their first encounter with an automation tool when they watch the demo, and everything looks nice and simple. But everything is not so nice and simple when you try to use the tool with your own application. The vendors will only tell you the things you want to hear (how easy it is to use, how simple it is to set up, how it will save time and money, how it will help you find more bugs, etc.). This builds a false set of hopes and expectations.
  • Lack of planning: A great deal of planning is required, from selection to implementation of the tool. “Evaluating Tools” by Elisabeth Hendrickson is a very good article on the step-by-step process of selecting a tool. She talks about the “Tool Audience” as one of the steps. This would be an ideal way to select a tool. It may not happen everywhere because of the everyday workload of the people involved, but the participation of the users in the process is very important, because they are the ones who will use the tool day in and day out. I am almost certain that what happened to one of my clients (the tool they had bought did not support the application they were testing) would not have happened if the users had been involved in the selection process.
  • Lack of a process: The lack of a process may also contribute to the failure of automation. Most places do have some kind of process in place. In most cases (although it differs from place to place) developers write code against a set of requirements. If the requirements do not call for a change in the GUI, then there should not be any change in the GUI. But if the GUI keeps changing from one release to another without any requirement driving that change, then there is a problem in the process. You may have the best tool and the best architecture (for your environment) in place, and you will still have problems with your automation because of a faulty process, as the sketch below illustrates.
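To make the point concrete, here is a minimal, hypothetical sketch in Python. The "app" object and its methods are illustrative stand-ins, not any real tool's API; the point is the brittleness of a script keyed to literal widget labels, not the tool itself.

    # Hypothetical sketch: a GUI-driven test keyed to literal widget labels.
    # "app" and its methods stand in for whatever your automation tool exposes.
    def test_save_order(app):
        app.open_window("Order Entry")
        app.type_into("Customer Name", "ACME Corp")
        # If a release renames this button to "Submit Order" with no requirement
        # driving the change, this line fails and the script reports a "bug"
        # that is really a process problem.
        app.click_button("Save Order")
        assert app.status_bar_text() == "Order saved"

Keeping such labels in one shared lookup reduces the rework when they change, but it does not cure the underlying process problem.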

Software Test Automation - Myths and Facts


Introduction
Today software test automation is becoming more and more popular in both client/server and web environments. As requirements keep changing constantly (new requirements are introduced almost daily) and the testing window gets smaller and smaller every day, managers are realizing a greater need for test automation. This is good news for us (the people who do test automation). But I am afraid this is the only good news.


Myths & Facts
A number of articles and books have been written on different aspects of Software Test Automation. “Test Automation Snake Oil” by James Bach is an excellent article on some of the myths of automation. I would like to discuss some of these myths and try to point out the facts behind them. I would also like to share some of my observations and, hopefully, point out possible solutions. These are based on my experience with a number of automation projects I was involved in.


Myth 1: Find more bugs

Fact: Some QA managers think that by doing automation they should be able to find more bugs. It’s a myth. Let’s think about it for a minute. The process of automation starts with a set of written test cases. In most places the test cases are written by test engineers who are familiar with the application they are testing. The test cases are then given to the automation engineers, who in most cases are not very familiar with the test cases they are automating. In going from test cases to test scripts, automation does not add anything to the process that finds more bugs. When it comes to finding bugs, the test scripts are only as good as the test cases. So it’s the test cases that find bugs (or don’t find bugs), not the test scripts.
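As a minimal illustration in Python (the test case "TC-17", the toy "login" function and all names are hypothetical), here is what automating such a test case typically amounts to: the script checks exactly what the written case says to check, and nothing more.

    import unittest

    # Hypothetical stand-in for driving the real application.
    def login(username, password):
        return "Dashboard" if (username, password) == ("alice", "s3cret") else "Error"

    class TestLogin(unittest.TestCase):
        # Manual test case "TC-17: a valid login shows the dashboard",
        # translated one-for-one into a script.
        def test_tc17_valid_login_shows_dashboard(self):
            # If TC-17 never probes a locked-out account, neither does this script.
            self.assertEqual(login("alice", "s3cret"), "Dashboard")

    if __name__ == "__main__":
        unittest.main()

The automation buys repeatability and speed, not new bug-finding ideas.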

Myth 2: Eliminate or reduce manual testers

Fact: To justify automation, some point out that they should be able to eliminate or reduce the number of manual testers in the long run and thus save money. Absolutely not true. Eliminating or reducing manual testers is not one of the objectives of test automation. Here is why: as I pointed out earlier, the test scripts are only as good as the test cases, and the test cases are written primarily by manual testers. They are the ones who know the application inside out. If the word gets out (it usually does) that the number of manual testers will be reduced by introducing automation, then most if not all of the manual testers will walk out the door, and quality will go with them.

Some Classic Testing Mistakes

The role of testing

  • Thinking the testing team is responsible for assuring quality.
  • Thinking that the purpose of testing is to find bugs.
  • Not finding the important bugs.
  • Not reporting usability problems.
  • No focus on an estimate of quality (and on the quality of that estimate).
  • Reporting bug data without putting it into context.
  • Starting testing too late (bug detection, not bug reduction)

Planning the complete testing effort

  • A testing effort biased toward functional testing.
  • Underemphasizing configuration testing.
  • Putting stress and load testing off to the last minute.
  • Not testing the documentation
  • Not testing installation procedures.
  • An overreliance on beta testing.
  • Finishing one testing task before moving on to the next.
  • Failing to correctly identify risky areas.
  • Sticking stubbornly to the test plan.

Personnel issues

  • Using testing as a transitional job for new programmers.
  • Recruiting testers from the ranks of failed programmers.
  • Testers are not domain experts.
  • Not seeking candidates from the customer service staff or technical writing staff.
  • Insisting that testers be able to program.
  • A testing team that lacks diversity.
  • A physical separation between developers and testers.
  • Believing that programmers can’t test their own code.
  • Programmers are neither trained nor motivated to test.

The tester at work

  • Paying more attention to running tests than to designing them.
  • Unreviewed test designs.
  • Being too specific about test inputs and procedures.
  • Not noticing and exploring “irrelevant” oddities.
  • Checking that the product does what it’s supposed to do, but not that it doesn’t do what it isn’t supposed to do.
  • Test suites that are understandable only by their owners.
  • Testing only through the user-visible interface.
  • Poor bug reporting.
  • Adding only regression tests when bugs are found.
  • Failing to take notes for the next testing effort.
Test automation
  • Attempting to automate all tests.
  • Expecting to rerun manual tests.
  • Using GUI capture/replay tools to reduce test creation cost.
  • Expecting regression tests to find a high proportion of new bugs.
Code coverage
  • Embracing code coverage with the devotion that only simple numbers can inspire.
  • Removing tests from a regression test suite just because they don’t add coverage.
  • Using coverage as a performance goal for testers.
  • Abandoning coverage entirely.

Testing Mistakes continued...

From the “find important bugs” standpoint, the first testing effort was superior. It found 100 bugs before release, whereas the second found only 74. But I think you can make a strong case that the second effort is more useful in practical terms. Let me restate the two situations in terms of what a test manager might say before release:
1. “We have tested subsystem 1 very thoroughly, and we believe we’ve found almost all of the priority 1 bugs. Unfortunately, we don’t know anything about the bugginess of the remaining five subsystems.”
2. “We’ve tested all subsystems moderately thoroughly. Subsystem 1 is still very buggy. The other subsystems are about 1/10th as buggy, though we’re sure bugs remain.”

This is, admittedly, an extreme example, but it demonstrates an important point. The project manager has a tough decision: would it be better to hold on to the product for more work, or should it be shipped now? Many factors - all rough estimates of possible futures - have to be weighed: Will a competitor beat us to release and tie up the market? Will dropping an unfinished feature to make it into a particular magazine’s special “Java Development Environments” issue cause us to suffer in the review? Will critical customer X be more annoyed by a schedule slip or by a shaky product? Will the product be buggy enough that profits will be eaten up by support costs or, worse, a recall?

The testing team will serve the project manager better if it concentrates first on providingestimates of product bugginess (reducing uncertainty), then on finding more of the bugsthat are estimated to be there. That affects test planning, the topic of the next theme.

It also affects status reporting. Test managers often err by reporting bug data without putting it into context. Without context, project management tends to focus on one graph:


The flattening in the curve of bugs found will be interpreted in the most optimistic possible way unless you as test manager explain the limitations of the data:
  • “Only half the planned testing tasks have been finished, so little is known about half the areas in the project. There could soon be a big spike in the number of bugs found.”
  • “That’s especially likely because the last two weekly builds have been lightly tested.”
  • “Furthermore, based on previous projects with similar amounts and kinds of testing effort, it’s reasonable to expect at least 45 priority-1 bugs remain undiscovered. Historically, that’s pretty high for a successful product.”

Testing Mistakes continued...

What’s an important bug? Important to whom? To a first approximation, the answer must be “to customers”. Almost everyone will nod their head upon hearing this definition, but do they mean it? Here’s a test of your organization’s maturity. Suppose your product is a system that accepts email requests for service. As soon as a request is received, it sends a reply that says “your request of 5/12/97 was accepted and its reference ID is NIC-051297-3”.

A tester who sends in many requests per day finds she has difficulty keeping track of which request goes with which ID. She wishes that the original request were appended to the acknowledgement. Furthermore, she realizes that some customers will also generate many requests per day, so would also appreciate this feature. Would she:

  1. file a bug report documenting a usability problem, with the expectation that it will be assigned a reasonably high priority (because the fix is clearly useful to everyone, important to some users, and easy to do)?
  2. file a bug report with the expectation that it will be assigned “enhancement request” priority and disappear forever into the bug database?
  3. file a bug report that yields a “works as designed” resolution code, perhaps with an email “nastygram” from a programmer or the development manager?
  4. not bother with a bug report because it would end up in cases (2) or (3)?

If usability problems are not considered valid bugs, your project defines the testing task too narrowly. Testers are restricted to checking whether the product does what was intended, not whether what was intended is useful. Customers do not care about the distinction, and testers shouldn’t either.

Testers are often the only people in the organization who use the system as heavily as an expert. They notice usability problems that experts will see. (Formal usability testing almost invariably concentrates on novice users.) Expert customers often don’t report usability problems, because they’ve been trained to know it’s not worth their time.

Instead, they wait (in vain, perhaps) for a more usable product and switch to it. Testers can prevent that lost revenue. While defining the purpose of testing as “finding bugs important to customers” is a step forward, it’s more restrictive than I like. It means that there is no focus on an estimate of quality (and on the quality of that estimate). Consider these two situations for a product with five subsystems.

  1. 100 bugs are found in subsystem 1 before release. (For simplicity, assume that all bugs are of the highest priority.) No bugs are found in the other subsystems. After release, no bugs are reported in subsystem 1, but 12 bugs are found in each of the other subsystems.
  2. Before release, 50 bugs are found in subsystem 1. 6 bugs are found in each of the other subsystems. After release, 50 bugs are found in subsystem 1 and 6 bugs in each of the other subsystems.

Theme One: The Role of Testing

A first major mistake people make is thinking that the testing team is responsible for assuring quality. This role, often assigned to the first testing team in an organization, makes it the last defense, the barrier between the development team (accused of producing bad quality) and the customer (who must be protected from them).

It’s characterized by a testing team (often called the “Quality Assurance Group”) that has formal authority to prevent shipment of the product. That in itself is a disheartening task: the testing team can’t improve quality, only enforce a minimal level. Worse, that authority is usually more apparent than real. Discovering that, together with the perverse incentives of telling developers that quality is someone else’s job, leads to testing teams and testers who are disillusioned, cynical, and view themselves as victims. We’ve learned from
Deming and others that products are better and cheaper to produce when everyone, at every stage in development, is responsible for the quality of their work.

In practice, whatever the formal role, most organizations believe that the purpose of testing is to find bugs. This is a less pernicious definition than the previous one, but it’s missing a key word. When I talk to programmers and development managers about testers, one key sentence keeps coming up: “Testers aren’t finding the important bugs.” Sometimes that’s just griping, sometimes it’s because the programmers have a skewed sense of what’s important, but I regret to say that all too often it’s valid criticism.

Too many bug reports from testers are minor or irrelevant, and too many important bugs are missed.

Classic Testing Mistakes

It's easy to make mistakes when testing software or planning a testing effort. Some mistakes are made so often, so repeatedly, by so many different people, that they deserve the label Classic Mistake.

Classic mistakes cluster usefully into five groups, which I’ve called “themes”:
· The Role of Testing: who does the testing team serve, and how does it do that?
· Planning the Testing Effort: how should the whole team’s work be organized?
· Personnel Issues: who should test?
· The Tester at Work: designing, writing, and maintaining individual tests.
· Technology Rampant: quick technological fixes for hard problems.

I have two goals for this paper. First, it should identify the mistakes, put them in context, describe why they’re mistakes, and suggest alternatives. Because the context of one mistake is usually prior mistakes, the paper is written in a narrative style rather than as a list that can be read in any order. Second, the paper should be a handy checklist of mistakes. For that reason, the classic mistakes are printed in a larger bold font when they appear in the text, and they’re also summarized at the end.

Although many of these mistakes apply to all types of software projects, my specific focus is the testing of commercial software products, not custom software or software that is safety critical or mission critical.

Software Development Lifecycles - Iterative Lifecycle Models

An iterative lifecycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the four phases in sequence, as illustrated by following figure.



· A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase which produces a complete and final specification of requirements.
· Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.
· An Implementation and Test phase, when the software is coded, integrated and
tested.
· A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required, and a fresh start has to be made.

The iterative lifecycle model can be likened to producing software by successive
approximation. Drawing an analogy with mathematical methods which use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution.

Continuing the analogy, successive approximation may never find a solution. The iterations may oscillate around a feasible solution or even diverge. The number of iterations required may become so large as to be unrealistic. We have all seen software developments which have made this mistake!
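The analogy can be made concrete with a tiny numerical sketch in Python (purely illustrative): iterating x = cos(x) settles on a fixed point, while a badly chosen iteration keeps oscillating indefinitely, much like a project whose cycles never converge on stable requirements.

    import math

    def iterate(step, x0, rounds):
        # Apply one refinement step repeatedly, as successive approximation does.
        x = x0
        for _ in range(rounds):
            x = step(x)
        return x

    converging = math.cos                      # x = cos(x) homes in on ~0.739
    oscillating = lambda x: 3.5 * x * (1 - x)  # never settles on a single value

    print(iterate(converging, 0.5, 20), iterate(converging, 0.5, 21))    # both ~0.739
    print(iterate(oscillating, 0.5, 20), iterate(oscillating, 0.5, 21))  # still far apart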

The key to successful use of an iterative software development lifecycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model. The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each cycle of the model produces software which requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.

Software Development Lifecycles - Progressive Development Lifecycle Models

The sequential V and waterfall lifecycle models represent an idealised model of software development. Other lifecycle models may be used for a number of reasons, such as volatility of requirements, or a need for an interim system with reduced functionality when long timescales are involved. As an example of other lifecycle models, let us look at progressive development and iterative lifecycle models.


A common problem with software development is that software is needed quickly, but it will take a long time to fully develop. The solution is to form a compromise between timescales and functionality, providing "interim" deliveries of software with reduced functionality, but serving as stepping stones towards the fully functional software. It is also possible to use such a stepping-stone approach as a means of reducing risk.

The usual names given to this approach to software development are progressive
development or phased implementation. The corresponding lifecycle model is referred to as a progressive development lifecycle. Within a progressive development lifecycle, each individual phase of development will follow its own software development lifecycle, typically using a V or waterfall model. The actual number of phases will depend upon the development.

Software Development Lifecycles - Sequential Lifecycle Models

The software development lifecycle begins with the identification of a requirement for software and ends with the formal verification of the developed software against that requirement. Traditionally, the models used for the software development lifecycle have been sequential, with the development progressing through a number of well defined phases. The sequential phases are usually represented by a V or waterfall diagram. These models are respectively called a V lifecycle model and a waterfall lifecycle model.


There are in fact many variations of V and waterfall lifecycle models, introducing different phases to the lifecycle and creating different boundaries between phases. The following set of lifecycle phases fits in with the practices of most professional software developers.


· The Requirements phase, in which the requirements for the software are gathered and analyzed, to produce a complete and unambiguous specification of what the software is required to do.


· The Architectural Design phase, where a software architecture for the implementation of the requirements is designed and specified, identifying the components within the software and the relationships between the components.



· The Detailed Design phase, where the detailed implementation of each component is specified.
· The Code and Unit Test phase, in which each component of the software is coded and tested to verify that it faithfully implements the detailed design.
· The Software Integration phase, in which progressively larger groups of tested software components are integrated and tested until the software works as a whole.
· The System Integration phase, in which the software is integrated to the overall product and tested.
· The Acceptance Testing phase, where tests are applied and witnessed to validate that the software faithfully implements the specified requirements.


Software specifications will be products of the first three phases of this lifecycle model. The remaining four phases all involve testing the software at various levels, requiring test specifications against which the testing will be conducted as an input to each of these phases.

Software Testing & Software Development Lifecycles

The various activities which are undertaken when developing software are commonly modelled as a software development lifecycle. The software development lifecycle begins with the identification of a requirement for software and ends with the formal verification of the developed software against that requirement.

The software development lifecycle does not exist by itself; it is in fact part of an overall product lifecycle. Within the product lifecycle, software will undergo maintenance to correct errors and to comply with changes to requirements. The simplest overall form is where the product is just software, but it can become much more complicated, with multiple software developments each forming part of an overall system that comprises a product.

There are a number of different models for software development lifecycles. One thing which all models have in common is that at some point in the lifecycle, software has to be tested. This paper outlines some of the more commonly used software development lifecycles, with particular emphasis on the testing activities in each model.

The models under consideration here are:

  • Sequential Lifecycle Model - V Lifecycle Model & Waterfall Lifecycle Model
  • Progressive Development Lifecycle Model
  • Iterative Lifecycle Model

Let's have a brief look at all of these models...

QA Process Management - The Challenges Encountered (Summary)

In summary, the challenges faced during these stages fell into three categories:

Category I- Team Collaboration: Conducting an effective Testing Cycle, generating good-quality Test Cases, and creating and sharing the numerous indispensable information assets of the QA process required a high degree of collaboration among Quality Managers, Developers, Test Case Reviewers and Testers.

Category II- Information Assets Management: A host of documents formed an integral part of the QA Cycle:
  • Test Plan
  • Detailed Feature Specifications
  • Test Case
  • Test Case Review Template

Documents generated as an output of the QA Cycle included:

  • Test Logs
  • Defects Tracking Sheet
  • Session Reports
  • QA Reports
For process consistency and uniformity, well-defined standards in the form of published templates or documentation guidelines were needed for each of these assets. Also mandatory for each was a create-approve-publish document lifecycle. After the QA Cycle, the assets had to be archived in a secure, well-organized repository; search, security and tracking of assets were basic requirements for such a repository.
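As a minimal sketch of that create-approve-publish lifecycle in Python (the state names and transitions below are illustrative assumptions, not a prescribed tool or workflow), the rules can be expressed as a small state machine:

    # Illustrative create-approve-publish-archive lifecycle for a QA document.
    ALLOWED = {
        "draft":     {"submitted"},           # author completes a draft
        "submitted": {"approved", "draft"},   # reviewer approves or sends it back
        "approved":  {"published"},           # QM/PM publishes the asset
        "published": {"archived"},            # archived in the repository after the QA Cycle
    }

    def transition(state, new_state):
        if new_state not in ALLOWED.get(state, set()):
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

    state = "draft"
    for step in ("submitted", "approved", "published", "archived"):
        state = transition(state, step)
    print(state)  # archived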

Category III- Scheduling, Execution and Tracking of Test Plans: The actual Test Cycle needed to be planned, approved, efficiently executed, and tracked.
In the existing work scenario, collaboration and planning happened through formal or informal meetings, and asset sharing and management through e-mails or shared file servers, all of which were cumbersome to track, slow and time-inefficient.

QA Process Management - The Challenges Encountered

Case Study Example:

An Offshore Service Center in Wipro was providing Testing Services to a desktop software development corporation named ABC* Corp. Three modules of their desktop suite - DeskPro, DeskNet and DeskSecure - had to be tested at the Offshore Center. Most of the time the testing of the modules was performed independently, but a release of the desktop suite mandated that they work in sync, thus multiplying the managers’ problems.

The problems being faced by this team were analyzed and a QA Process Management Solution was modeled. The tribulations faced are elucidated below.

The Problem Statement

Phase 1: Test Case Generation
ABC* Corp. used to send detailed feature and policy documentation for features to be tested at the Off Shore Center. It would also send guidelines for writing Test Cases (TC) for the feature. Two or three engineers had to collaborate on Test Case generation for each feature as per the guidelines sent by ABC* Corp. The first draft of the Test Cases had to undergo review and approval by peer-teams within or across modules. Review comments had to be incorporated by the authors and the modifications had to be reviewed again.

After the first draft and peer review, Test Cases had to be reviewed by the QA leads of each module and a consolidated TC Review Report for the module was generated. QM had to inspect the modular TC Review Reports, consolidate them and generate a single complete Test Case Review Report. After this the Project Manager (PM) had to approve the test report and dispatch the TCs as well as TC Review Report to ABC* Corp for approval. The client then approved or suggested modifications to Test Cases. Modifications, if needed, were made, reviewed and finally with ABC* Corp’s approval the Test Cases were published.

Phase 2: Test Cycle Planning
QA leads of all three modules generated their respective test plans and communicated them to the Quality Manager (QM). The QM used to check the feasibility of the plans and the plausibility of the estimates. Inputs, if any, on plan modification had to be communicated to the QA leads, and the modifications made had to be reviewed again. Issues, if any, relating to resource availability, reconsideration of deadlines, etc., had to be escalated to the Project Manager by the QM. Inputs, if any, from the PM had to be incorporated into the execution plan by the QM and reviewed by the PM again. With the Project Manager’s approval the Test Plans were published.

Phase 3: Test Cycle Execution
The Test Plans, Test Case documents, Log Sheet Templates and guidelines for filling in the Log Sheets had to be communicated to the test engineers. Testing was carried out and defects were logged into a shared Defect Database. Any ad hoc problems faced by testers during the QA process had to be addressed quickly. The progress and pace of testing were tracked. At the end of the test cycle, the Test Logs were inspected. Any missing pertinent data in the logs was pointed out to the Testers and complete Log Sheets solicited. Test Logs had to be archived in a shared repository. Module leads were to generate reports based on the test results logged.

The Quality Manager reviewed each module’s report and consolidated them into a final QA Report. The final Report was submitted to the Project Manager. Tick-off discussions ensued between the QM, the PM and the client. Based on the severity of the bugs reported, the product modules - DeskNet, DeskPro and DeskSecure - were declared ready for release. The release information and QA Reports were published.

(Continued...)

QA Process Management - Significance of QA Process

Procedurally, the entire QA cycle consists of three stages:
  • Test Case Generation: This stage starts quite early in the SDLC, just after the Design Phase. After the feature requirements are frozen and the design agreed upon, Test Cases are generated. These Test Cases are used for feature verification at the end of the Development Phase. However, this initial draft may undergo several modifications based on tester and end-user feedback. Succinctly put, the test cases must test for expected functionality while also ensuring that the software functions as users expect; the former depends on the captured requirement specifications, while the latter depends on the end users’ expectations of the software. Thus the maturing of Test Cases is a long-drawn-out process requiring continuous review, updating, and tracking of updates.
  • Test Cycle Planning: This stage involves drawing of schedules and resource allocation charts for testing and their approval by supervisors before execution. It gains significance because effectiveness of the Test Cycle depends on balanced deployment of manpower and testing resources like servers, peripherals etc based on criticality of tested features. A timely release requires planning an efficient test schedule.
  • Test Cycle Execution: The actual testing takes place in this stage. Activities may include logging of defect(s)/suggestions encountered in testing, test case modifications, analysis of test logs by leads and generation of QA reports based on findings. The test reports are crucial to the actual release of the software.
Significance of QA Process
The QA Phase validates the engineering effort and ensures the completeness of the generated software before it is released for general availability. It forms a very critical node of the Software Development Lifecycle, as can be deduced from the activities enumerated above.
  • As its chief objective, the QA Phase brings out the lacunae in requirement gathering, design and development of the software through reporting of defects and suggestion of improvements.
  • Beyond its main purpose of catching and reporting bugs in the software, QA ensures compliance of functionality with pre-set expectations.
  • It also ensures reliability of software over and above the expected functionality.
  • It guarantees scalability, concurrent accessibility and satisfactory performance of software under different usage conditions.
  • Specialized testing such as Beta Testing validates that the software not only functions as per the defined specifications but also meets customer expectations and market needs. Receiving and incorporating feedback from end users belonging to different market segments also makes the feature set more competitive.

The actual testing of an application may be of different types, viz. Black Box Testing, White Box Testing, Graphical User Interface Testing, etc., depending on the type of application. It can be carried out either manually by a team of Test Engineers or through the use of Test Automation Tools with minimal manual effort.

Irrespective of these specifics, the QA Phase poses several management challenges, as is illustrated through an example in the next posts.

QA Process Management - QA Process Activities

The activities and implications of the QA Phase can vary based on the various software development paradigms in the industry. These are outlined below:
  1. Software Product Development - A software product release is preceded by an internal QA cycle wherein Test Engineers from the product development group verify the product functionality against the feature specifications gathered from market sources/sales partners. This is followed by Beta Testing of the product by external users - customers, partners, patrons or volunteering testers.
  2. Software Application Development or Software Servicing - In this case three levels of testing may be carried out. The party involved in software servicing or application development carries out the first level of testing at the end of the development phase. At this level the developed application’s functionality is verified strictly against client’s specific requirements. This is typically followed by System Integration and testing of the service or application at the client site, either by test engineers from the servicing project or by test engineers hired by the client. If the client is servicing a customer in turn, testing may be done at the end-customer site also by test engineers hired at the customer end.
  3. Software Testing Services - This is a category of software services in which the development partner outsources the complete testing of its product, application or service to a third party. Dedicated teams of Test Engineers at the third party site solely perform testing of the developed software.
The example elucidated in this paper falls into the third category, but the challenges faced and the solution presented are applicable to the other two categories as well.

A checklist of model requirements

These requirements are grouped according to testing activity.

A test model should:
  1. force a testing reaction to every code handoff in the project.
  2. require the test planner to take explicit, accountable action in response to dropped handoffs, new handoffs, and changes to the contents of handoffs.
  3. explicitly encourage the use of sources of information other than project documentation during test design.
  4. allow the test effort to be degraded by poor or late project documentation, but prevent it from being blocked entirely.
  5. allow individual tests to be designed using information combined from various sources.
  6. allow tests to be redesigned as new sources of information appear.
  7. include feedback loops so that test design takes into account what’s learned by running tests.
  8. allow testers to consider the possible savings of deferring test execution.
  9. allow tests of a component to be executed before the component is fully assembled.

Summary

The V model is fatally flawed, as is any model that:
  1. Ignores the fact that software is developed in a series of handoffs, where each handoff changes the behavior of the previous handoff,
  2. Relies on the existence, accuracy, completeness, and timeliness of development documentation,
  3. Asserts a test is designed from a single document, without being modified by later or earlier documents, or
  4. Asserts that tests derived from a single document are all executed together.
I have sketched – but not elaborated – a replacement model. It organizes the testing effort around code handoffs or milestones. It takes explicit account of the economics of testing: that the goal of test design is to discover inputs that will find bugs, and that the goal of test implementation is to deliver those inputs in any way that minimizes lifecycle costs.

The model assumes imperfect and changing information about the product. Testing a product is a learning process. In the past, I haven’t thought much about models. I ostensibly used the V model. I built my plans according to it, but seemed to spend a lot of my time wrestling with issues that the model didn’t address. For other issues, the model got in my way, so I worked around it.

I hope that thinking explicitly about requirements will be as useful for developing a testing model as it is when developing a product. I hope that I can elaborate on the model presented in this paper to the point that it provides as much explicit guidance as the V model seems to.

A different model

Let’s step back for a second. What is our job?

There are times when some person or group of people hands some code to other people and says, “Hope you like it.” That happens when the whole project puts bits on a CD and gives them to customers. It also happens within a project:
  • One development team says to other teams, “We’ve finished the XML enhancements to the COMM library. The source is now in the master repository; the executable library is now in the build environment. The XARG team should now be unblocked – go for it!”
  • One programmer checks in a bug fix and sends out email saying, “I fixed the bug in allocAttList. Sorry about that.” The other programmers who earlier stumbled over that code can now proceed.

In all cases, we have people handing code to other people, possibly causing them damage. Testers intervene in this process. Before the handoff, testers execute the code, find bugs (the damage), and ask the question, “Do you really want to hand this off?” In response, the handoff may be deferred until bugs are fixed.

This act is fundamental to testing, regardless of the other things you may do. If you don’t execute the code to uncover possible damage, you’re not a tester. Our test models should be built around the essential fact of our lives: code handoffs. Therefore, a test model should force a testing reaction to every code handoff in the project. I’ll use the XML-enhanced COMM library as an example. That’s a handoff from one team to the rest of the project. Who could be damaged?
  • It might immediately damage the XARG team, who will be using those XML enhancements in their code.
  • It might later damage the marketing people, who will be giving a demonstration of the “partner release” version of the product at a trade show. XML support is an important part of their sales pitch.
  • Still later, it might damage a partner who adopts our product.

We immediately have some interesting test planning questions. The simple thing to do would be to test the XML enhancements completely at the time of handoff. (By “completely,” I mean “design as many tests for them as you ever will.”) But maybe some XML features aren’t required by the XARG team, so it makes sense to test them through
the integrated partner release system. That means moving some of the XML-inspired testing to a later handoff. Or we might move it later for less satisfying reasons, such as that other testing tasks must take precedence in the near term. The XARG team will have to resign itself to stumbling over XML bugs for a while.

Our testing plan might be represented by a testing schedule that annotates the
development timeline.

What’s wrong with the V model? (Concluded...)

In the previous diagram, we had arrows pointing upward (effectively, later in time). It can also make sense to have arrows pointing downward (earlier in time):


In that case, the boxes on the left might be better labeled “Whatever test design can be done with the information available at this point”.

Therefore, when test design is derived from a description of a component of the system, the model must allow such tests to be executed before the component is fully assembled.

I have to admit my picture is awfully ugly – all those arrows going every which way. I have two comments about that:
  1. We are not in the business of producing beauty. We’re in the business of finding as many serious bugs as possible as cheaply as possible.
  2. The ugliness is, in part, a consequence of assuming that the order in which developers produce system description documents, and the relationships among those documents, is the mighty oak tree about which the slender vine of testing twines. If we adopt a different organizing principle, things get a bit more pleasing. But they’re still complicated, because we’re working in a complicated field.
The V model fails because it divides system development into phases with firm
boundaries between them. It discourages people from carrying testing information across those boundaries. Some tests are executed earlier than makes economic sense. Others are executed later than makes sense.


Moreover, it discourages you from combining information from different levels of system description. For example, organizations sometimes develop a fixation on “signing off” on test designs. The specification leads to the system test design. That’s reviewed and signed off. From that point on, it’s done. It’s not revised unless the specification is. If information relevant to those tests is uncovered later – if, for example, the architectural design reveals that some tests are redundant – well, that’s too bad. Or, if the detailed design reveals an internal boundary that could easily be incorporated into existing system tests, that’s tough: separate unit tests need to be written.

Therefore, the model must allow individual tests to be designed using information combined from various sources.

And further, the model must allow tests to be redesigned as new sources of information appear.

What’s wrong with the V model? (Continued...)

There’s always some dispute over how big a unit should be (a function? a class? a collection of related classes?), but that doesn’t affect my argument. I believe a unit is the smallest chunk of code that the developers can stand to talk about as an independent entity.

The V model says that someone should first test each unit. When all the subsystem’s units are tested, they should be aggregated and the subsystem tested to see if it works as a whole.


So how do we test the unit? We look at its interface as specified in the detailed design, or at the code, or at both, pick inputs that satisfy some test design criteria, feed those inputs to the interface, then check the results for correctness. Because the unit usually can’t be executed in isolation, we have to surround it with stubs and drivers, as shown at the right. The arrow represents the execution trace of a test.

That’s what most people mean when they say “unit testing”.
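
As a purely illustrative sketch (the unit price_with_tax, its collaborator, and the stub are invented names, not taken from any particular project), an isolated unit test with a stub and a driver might look like this in Python:

# Illustrative only: price_with_tax is a toy unit under test and
# TaxServiceStub replaces a collaborator that is not yet integrated.
import unittest

class TaxServiceStub:
    """Stub: supplies a canned answer in place of the real tax lookup."""
    def rate_for(self, region):
        return 0.20

def price_with_tax(net_price, region, tax_service):
    """The unit under test (a made-up example)."""
    return net_price * (1 + tax_service.rate_for(region))

class PriceWithTaxTest(unittest.TestCase):
    # The test class acts as the driver: it feeds inputs and checks results.
    def test_applies_region_rate(self):
        self.assertAlmostEqual(price_with_tax(100.0, "EU", TaxServiceStub()), 120.0)

if __name__ == "__main__":
    unittest.main()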


I think that approach is sometimes a bad idea. The same inputs can often be delivered to the unit through the subsystem, which thus acts as both stub and driver. That looks like the picture to the right.
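
For contrast, here is an equally hypothetical sketch of the same check driven through a subsystem's external interface, so that the surrounding subsystem plays the role of both stub and driver:

# Illustrative only: checkout_total is a made-up subsystem entry point that
# internally exercises the pricing unit, so no unit-specific stub is needed.
import unittest

def checkout_total(items, region):
    """Toy subsystem function; applies a flat 20% tax for the EU region."""
    net = sum(items)
    return net * 1.20 if region == "EU" else net

class CheckoutSubsystemTest(unittest.TestCase):
    def test_tax_applied_through_subsystem(self):
        # Same expectation as the isolated unit test, but reached through
        # the subsystem rather than through hand-written stubs.
        self.assertAlmostEqual(checkout_total([60.0, 40.0], "EU"), 120.0)

if __name__ == "__main__":
    unittest.main()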

The decision about which approach to take is a matter of weighing tradeoffs. How much would the stubs cost? How likely are they to be maintained? How likely are failures to be masked by the subsystem? How difficult would debugging through the subsystem be? If tests aren’t run until integration, some bugs will be found later. How does the estimated cost of that compare to the cost of stubs and drivers? And so forth.

The V model precludes these questions; within it, they don’t make sense. Unit tests get executed when units are done. Integration tests get executed when subsystems are integrated. End of story. It used to be surprising and disheartening to me how often people simply wouldn’t think about the tradeoffs – they were trapped by their model.

Therefore, a useful model must allow testers to consider the possible savings of deferring test execution.


A test designed to find bugs in a particular unit might be best run with the unit in isolation, surrounded by unit-specific stubs and drivers. Or it might be tested as part of the subsystem – along with tests designed to find integration problems. Or, since a subsystem will itself need stubs and drivers to emulate connections to other subsystems, it might sometimes make sense to defer both unit and integration tests until the whole system is at least partly integrated. At that point, the tester is executing unit, integration, and system tests through the product’s external interface. Again, the purpose is to minimize total lifecycle cost, balancing the cost of testing against the cost of delayed bug discovery. The distinction between “unit”, “integration”, and “system” tests begins to break down. In effect, you have the above picture.

It would be better to label each box on the right “execution of appropriate tests” and be done with it. What of the left side? Consider system test design, an activity driven by the specification. Suppose that you knew that two units in a particular subsystem, working in conjunction, implemented a particular statement in the specification. Why not test that specification statement just after the subsystem was integrated, at the same time as tests derived from the design?

If the statement’s implementation depends on nothing outside the subsystem, why wait until the whole system is available? Wouldn’t finding the bugs earlier be cheaper?

What’s wrong with the V model?

I will use the V model as an example of a bad model. I use it because it’s the most familiar.


A typical version of the V model begins by describing software development as
following the stages shown here:


That’s the age-old waterfall model. As a development model, it has a lot of problems. Those don’t concern us here – although it is indicative of the state of testing models that a development model so widely disparaged is the basis for our most common testing model. My criticisms also apply to testing models that are embellishments on better development models, such as the spiral model.

Testing activities are added to the model as follows:

Unit testing checks whether code meets the detailed design. Integration testing checks whether previously tested components fit together. System testing checks if the fully integrated product meets the specification. And acceptance testing checks whether the product meets the final user requirements.


To be fair, users of the V model will often separate test design from test implementation. The test design is done when the appropriate development document is ready.

This model, with its appealing symmetry, has led many people astray.

Software Test Life Cycle (Part 3 of 3)

4. Test Execution (Unit / Functional Testing Phase):
By this time, the development team would have completed creation of the work products. Of course, the work products would still contain bugs. So, in the execution phase, developers would carry out unit testing, with the testers' help if required. Testers would execute the test plans. Automated testing scripts would be completed. Stress and performance testing would be executed. White-box testing, code reviews, etc. would be conducted. As and when bugs are found, they would be reported.

5. Test Cycle (Re-Testing Phase):

By this time, at least one test cycle (one round of test execution) would have been completed and bugs reported. Once the development team fixes the bugs, a second round of testing begins. This testing could be mere correction verification testing, that is, checking only the part of the code that has been corrected. It could also be Regression Testing, where the entire work product is tested to verify that the correction to the code has not affected other parts of the code.

Hence this process of:
Testing --> Bug reporting --> Bug fixing (and enhancements) --> Retesting
is carried out as planned. This is where automated tests are extremely useful, since the same test cases are repeated again and again. During this phase, a review of the test cases and the test plan could also be carried out.
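
For illustration, a minimal regression test of the kind that makes these repeated cycles cheap might look like the following sketch (the function calculate_discount and the bug scenario are hypothetical, not taken from any real product):

# Illustrative regression test; calculate_discount stands in for the fixed code.
import unittest

def calculate_discount(order_total):
    """Toy implementation of the corrected work product."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.10 * order_total if order_total >= 100 else 0.0

class DiscountRegressionTests(unittest.TestCase):
    def test_boundary_at_100_gets_discount(self):
        # Correction verification: the exact case that was reported as a bug.
        self.assertAlmostEqual(calculate_discount(100), 10.0)

    def test_below_boundary_gets_no_discount(self):
        # Regression check: the fix must not break the neighbouring case.
        self.assertEqual(calculate_discount(99.99), 0.0)

    def test_negative_total_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1)

if __name__ == "__main__":
    unittest.main()

Once such a suite exists, every later test cycle simply re-runs it, which is exactly the repetition that manual testing makes expensive.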

6. Final Testing and Implementation (Code Freeze Phase):
When the exit criteria are met or the planned test cycles are completed, final testing is done. Ideally, this is system or integration testing. Any remaining stress and performance testing is also carried out. Inputs for process improvement, in terms of software metrics, are given. Test reports are prepared. If required, a test release note releasing the product for rollout could be prepared. Other remaining documentation is completed.

7. Post Implementation (Process Improvement Phase):

This phase, which looks good on paper, is seldom carried out. In it, the testing effort is evaluated and lessons learnt are documented. Software metrics (bug analysis metrics) are analyzed statistically and conclusions are drawn. Strategies to prevent similar problems in future projects are identified. Process improvement suggestions are implemented. Finally, the testing environment is cleaned up, and test cases, records, and reports are archived.

Software Test Life Cycle (Part 2 of 3)

2. Test Analysis (Documentation Phase):
The analysis phase is more an extension of the planning phase. Whereas the planning phase pertains to high-level plans, the analysis phase is where detailed plans are documented. This is when actual test cases and scripts are planned and documented.

This phase can be further broken down into the following steps:

  • Review Inputs: The requirement specification document, feature specification document, and other project planning documents are taken as inputs, and the test plan is broken down into lower-level test cases.
  • Formats: Generally, at this stage a functional validation matrix based on the Business Requirements is created. Then the test case format is finalized. Software Metrics are also designed at this stage. Using a tool such as Microsoft Project, the testing timeline and milestones are created.
  • Test Cases: Based on the functional validation matrix and other input documents, test cases are written. Some mapping is also done between the features and the test cases (see the sketch after this list).
  • Plan Automation: While creating test cases, those that should be automated are identified. Ideally, the test cases that are relevant for Regression Testing are chosen for automation. Areas for performance, load, and stress testing are also identified.
  • Plan Regression and Correction Verification Testing: The testing cycles, i.e. the number of times testing will be repeated to verify that bug fixes have not introduced new errors, are planned.
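
As a purely illustrative sketch of the feature-to-test-case mapping mentioned above (the feature names and test case IDs are invented), a traceability matrix can be kept as simple structured data and checked for coverage gaps:

# Illustrative traceability sketch; feature names and test IDs are made up.
FEATURE_TO_TESTS = {
    "login":          ["TC-001", "TC-002"],
    "password-reset": ["TC-003"],
    "report-export":  [],   # no test cases mapped yet -> a coverage gap
}

def coverage_gaps(matrix):
    """Return the features that have no test cases mapped to them."""
    return [feature for feature, tests in matrix.items() if not tests]

if __name__ == "__main__":
    for feature in coverage_gaps(FEATURE_TO_TESTS):
        print(f"WARNING: no test cases mapped to feature '{feature}'")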

3. Test Design (Architecture Document and Review Phase):

One has to realize that the testing life cycle runs parallel to the software development life cycle. So by the time one reaches this phase, the development team would have created some code, or at least a prototype, or at minimum a design document.

Hence, in the Test Design (Architecture Document) phase, all the plans, test cases, etc. from the analysis phase are revised and finalized. In other words, looking at the work product or design, the test cases, test cycles, and other plans are finalized. New test cases are added. Some kind of risk assessment criteria is also developed, and the writing of automated testing scripts begins. Finally, the testing reports (especially unit testing reports) are finalized. Quality checkpoints, if any, are included in the test cases based on the SQA Plan.

(Continued...)

Software Test Life Cycle (Part 1 of 3)

Usually, testing is considered a part of the System Development Life Cycle, but it can also be treated as its own Software Testing Life Cycle (or Test Development Life Cycle).

Software Testing Life Cycle consists of the following phases:
1. Planning
2. Analysis
3. Design
4. Execution
5. Cycles
6. Final Testing and Implementation
7. Post Implementation


1. Test Planning (Product Definition Phase):
The test planning phase mainly involves preparation of a test plan. A test plan is a high-level planning document, derived from the project plan (if one exists), that details the future course of testing. Sometimes a quality assurance plan, which is broader in scope than a test plan, is also made.

Contents of a Test Plan are as follows:

  • Scope of testing
  • Entry Criteria (when will testing begin?)
  • Exit Criteria (when will testing stop?)
  • Testing Strategies (Black Box, White Box, etc.)
  • Testing Levels (Integration testing, Regression testing, etc.)
  • Limitations (if any)
  • Planned Reviews and Code Walkthroughs
  • Testing Techniques (Boundary Value Analysis, Equivalence Partitioning, etc.; see the sketch after this list)
  • Testing Tools and Databases (automated testing tools, performance testing tools)
  • Reporting (how bugs will be reported)
  • Milestones
  • Resources and Training
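
As an illustration of one of the techniques named above, boundary value analysis places tests at, just below, and just above the edges of a valid range. The field and limits in this sketch are hypothetical:

# Illustrative boundary value analysis; the "age" field and its 18..65 range are invented.
import unittest

def is_valid_age(age):
    """Toy validation rule: accept ages from 18 to 65 inclusive."""
    return 18 <= age <= 65

class AgeBoundaryTests(unittest.TestCase):
    def test_boundary_values(self):
        self.assertFalse(is_valid_age(17))   # just below the lower boundary
        self.assertTrue(is_valid_age(18))    # on the lower boundary
        self.assertTrue(is_valid_age(65))    # on the upper boundary
        self.assertFalse(is_valid_age(66))   # just above the upper boundary

if __name__ == "__main__":
    unittest.main()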

Contents of an SQA Plan, which is broader than a test plan, are as follows:

The IEEE standard for SQA Plan preparation contains the following outline:

  • Purpose
  • Reference Documents
  • Management
  • Documentation
  • Standards, Practices and Conventions
  • Reviews and Audits
  • Software Configuration Management
  • Problem Reporting and Corrective Action (Software Metrics to be used can be identified at this stage)
  • Tools, Techniques and Methodologies
  • Code Control
  • Media Control
  • Supplier Control
  • Records Collection, Maintenance, and Retention

(Continued...)

SDLC Model: Progressive Development Lifecycle Model

One of the common problems with software development is that software is needed quickly, but it will take a long time to fully develop.


The solution is to form a compromise between time scales and functionality, providing "interim" deliveries of software with reduced functionality, but serving as stepping stones towards the fully functional software. It is also possible to use such a stepping-stone approach as a means of reducing risk.


The usual names given to this approach to software development are progressive development or phased implementation. The corresponding lifecycle model is referred to as a progressive development lifecycle. Within a progressive development lifecycle, each individual phase of development will follow its own software development lifecycle, typically using a V or waterfall model.

The actual number of phases will depend upon the development. Each delivery of software will have to pass acceptance testing to verify that the software fulfils the relevant parts of the overall requirements. The testing and integration of each phase will require time and effort, so there is a point at which an increase in the number of development phases actually becomes counterproductive, giving increased cost and timescale, which must be weighed carefully against the need for an early solution. The software produced by an early phase of the model may never actually be used; it may just serve as a prototype.

A prototype will take short cuts in order to provide a quick means of validating key requirements and verifying critical areas of design. These short cuts may be in areas such as reduced documentation and testing. When such short cuts are taken, it is essential to plan to discard the prototype and implement the next phase from scratch, because the reduced quality of the prototype will not provide a good foundation for continued development.

SDLC Model: V Model



A variant of the waterfall model — the V-model — associates each development activity with a test or validation at the same level of abstraction.

Each development activity builds a more detailed model of the system than the one before it, and each validation tests a higher abstraction than its predecessor.

SDLC Model: Typical Spiral Model



Developed by Barry Boehm in 1988, the spiral model provides the potential for rapid development of incremental versions of the software. Software is developed in a series of incremental releases; during early iterations, the incremental release might be a paper model or prototype. Each iteration consists of Customer Communication, Planning, Risk Analysis, Engineering, Construction & Release, and Customer Evaluation.

  • Customer Communication: Tasks required to establish effective communication between developer and customer.
  • Planning: Tasks required to define resources, timelines, and other project-related information.
  • Risk Analysis: Tasks required to assess both technical and management risks.
  • Engineering: Tasks required to build one or more representations of the application.
  • Construction & Release: Tasks required to construct, test, install, and provide user support (e.g., documentation and training).
  • Customer Evaluation: Tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.

SDLC Model: Classic Waterfall Model




In a typical model, a project begins with feasibility analysis. On successfully demonstrating the feasibility of the project, requirements analysis and project planning begin.

The design starts after the requirements analysis is complete, and coding begins after the design is complete. Once the programming is completed, the code is integrated and testing is done. On successful completion of testing, the system is installed.

After this, the regular operation and maintenance of the system takes place.
 
