Testing Glossary - Volume Testing

The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers or across disk partitions on one server.

Testing Glossary - Benchmarking

To a certain degree, load testing, stress testing, and benchmarking all describe the same kind of process: each uses a community of virtual users to submit a workload to real hardware and software. Benchmarking has been used for decades to describe
  • real application stressing
  • sizing of undeveloped applications
  • competitive positioning by vendors of servers and networks

Sizing is an estimate of the vendor's proposed configuration, usually produced in response to an RFP (Request for Proposal).

Testing Glossary - Stress Testing

The purpose of Stress Testing is to show that the system has the capacity to handle large numbers of transactions during peak periods. Stress testing pushes the system beyond its maximum design load and tests its defect or failure behavior. Defects may come to light that are unlikely to cause system failures in normal usage. Stress testing checks that overloading causes the system to “fail soft” rather than collapse under its load.

How many simultaneous users can my site handle without slowing down significantly or crashing? Stress testing is nothing more than applying a steadily increasing load to your site until it reaches the breaking point (when site performance degrades to unacceptable levels). What this test tells you is how well your site will handle unexpected loads that may come as a result of unplanned events, such as changing market forces, the failure of your competitors to fill the needs of their customers, or serendipitous national publicity.
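
The ramp-up idea above can be sketched as follows. The response-time model here is invented purely for illustration (a real stress test would drive the live site, not a formula):

```java
// Hypothetical sketch: increase the simulated user count step by step
// until response time degrades past an acceptable threshold.
public class StressRamp {
    // Invented model: response time stays flat at 100 ms up to 500 users,
    // then degrades sharply as load grows.
    static long simulatedResponseMs(int users) {
        return users <= 500 ? 100 : 100 + (users - 500) * 10L;
    }

    // Apply a steadily increasing load; return the user count at which
    // the "site" breaks (response time exceeds the acceptable limit).
    static int findBreakingPoint(long acceptableMs, int step) {
        for (int users = step; ; users += step) {
            if (simulatedResponseMs(users) > acceptableMs) {
                return users;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("Breaking point: "
                + findBreakingPoint(1000, 100) + " users");
    }
}
```

In a real tool, simulatedResponseMs would be replaced by measured response times from the system under test.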

Testing Glossary - Load Testing

The creation of a simulated "load" on a real computer system by using virtual users who submit work as real users would at real client workstations, thus "testing" the system's ability to support such a workload. The virtual users are implemented in software running on a "driver machine" or "injector". Testing of critical software during its development and before its deployment has three components:
  • Functional testing: Does it conform to the design specifications?
  • Performance testing: Does each unit offer acceptable response time?
  • Load testing: What hardware and software system configuration will be required to provide acceptable response times and handle the "load" that will be created by the entire community of users when deployed?

Load testing cannot be accomplished before the application software is available. Unlike other modeling methods for estimating future performance, load testing uses real hardware and software. Historically, load testing was done with real people at real terminals following a script and recording response times with stopwatches.

For systems with hundreds of users, a load testing tool can today be used instead of real people: the tool captures the activities of real users, creates scripts that "virtual" users submit to the SUT (system under test), and measures the resulting response times.
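
A minimal sketch of such a driver machine is shown below. The doRequest method is a hypothetical stand-in for one scripted interaction with the SUT; each virtual user runs on its own thread, submits the work, and reports its response time:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a "driver machine": one thread per virtual user.
public class LoadDriver {
    // Stand-in for one scripted interaction with the system under test.
    static void doRequest() throws InterruptedException {
        Thread.sleep(10); // simulate network + server time
    }

    // Run the workload and collect one response time (ms) per virtual user.
    static List<Long> run(int virtualUsers) {
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < virtualUsers; i++) {
            futures.add(pool.submit(() -> {
                long start = System.nanoTime();
                doRequest();
                return (System.nanoTime() - start) / 1_000_000;
            }));
        }
        List<Long> times = new ArrayList<>();
        try {
            for (Future<Long> f : futures) times.add(f.get());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return times;
    }

    public static void main(String[] args) {
        List<Long> times = run(20);
        System.out.println("Collected " + times.size() + " response times");
    }
}
```

A real load testing tool adds scripting, think times, ramp-up schedules, and result aggregation on top of this basic pattern.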

Software Test Automation Myths & Facts

Introduction
Today, software test automation is becoming more and more popular in both client/server and web environments. As requirements keep changing constantly (new requirements are often introduced on a daily basis) and the testing window gets smaller every day, managers are realizing a greater need for test automation. This is good news for us (people who do test automation). But I am afraid this is the only good news.

Myths & Facts
A number of articles and books have been written on different aspects of software test automation. “Test Automation Snake Oil” by James Bach is an excellent article on some of the myths of automation. I would like to discuss some of these myths and try to point out the facts behind them. I would also like to discuss some of my observations and point out possible solutions. These are based on my experience with a number of automation projects I was involved in.

Myth 1 - Find more bugs: Some QA managers think that by doing automation they should be able to find more bugs. It’s a myth. Let’s think about it for a minute. The process of automation involves a set of written test cases. In most places the test cases are written by test engineers who are familiar with the application they are testing. The test cases are then given to the automation engineers. In most cases the automation engineers are not very familiar with the test cases they are automating. From test cases to test scripts, automation does not add anything to the process that would find more bugs. The test scripts will work only as well as the test cases when it comes to finding bugs. So it is the test cases that find bugs (or don’t find bugs), not the test scripts.

Myth 2 - Eliminate or reduce manual testers: In order to justify automation, some point out that they should be able to eliminate or reduce the number of manual testers in the long run and thus save money in the process. Absolutely not true. Elimination or reduction of manual testers is not one of the objectives of test automation. Here is why: as pointed out earlier, the test scripts are only as good as the test cases, and the test cases are written primarily by manual testers. They are the ones who know the application inside out. If word gets out (it usually does) that the number of manual testers will be reduced by introducing automation, then most if not all manual testers will walk out the door, and quality will go with them.

Observations

I have met a number of QA managers who are frustrated with their automation. According to them, the tool is not doing what it is supposed to do. Here is a true story: a client (I had the opportunity to work with them for some time) found out that the tool they had just bought did not support the application they were testing (I am not making this up). How can this happen? It happens more often than one would think. I will come back to this when I discuss possible solutions. A manager at one of the major telecom companies, with whom I had a recent interview, told me that after three years and more than a million dollars he was still struggling with automation. This is pretty sad, and I get the feeling that he is not alone.

Solutions / Suggestions

Let’s discuss some of the reasons for this frustration and some of the solutions to this problem.

Fact 1 - Unrealistic expectations: Most managers have their first encounter with an automation tool when they see the demo, and everything looks nice and simple. But everything is not so nice and simple when you try to use the tool with your own application. The vendors will only tell you the things you want to hear (how easy it is to use, how simple it is to set up, how it will save time and money, how it will help you find more bugs, etc.). This builds a false set of hopes and expectations.

Fact 2 - Lack of planning: A great deal of planning is required, from selection to implementation of the tool. “Evaluating Tools” by Elisabeth Hendrickson is a very good article on the step-by-step process of selecting a tool. She talks about the “Tool Audience” as one of the steps. This would be an ideal way to select a tool. It may not happen everywhere because of the everyday workload of the people involved. But the participation of the users in the process is very important, because they are the ones who will use the tool day in and day out. I am almost certain that what happened to one of my clients (the tool they bought did not support the application they were testing) would not have happened if the users had been involved in the selection process.

Fact 3 - Lack of a process: Lack of a process may also contribute to the failure of automation. Most places do have some kind of process in place. In most cases (although it differs from place to place), developers write code against a set of requirements. If a requirement does not call for a change in the GUI, then there should not be any change in the GUI. But if the GUI keeps changing from one release to another without any requirement for the change, then there is a problem in the process. You may have the best tool and the best architecture for your environment in place and still have problems with your automation because of a faulty process.

Conclusion

I think there is a need to educate QA managers about the benefits and limitations of automation. There is a need to separate fact from fiction. But here is the problem: in most cases consultants are brought in to fix the problems of prior attempts rather than to help with the initial setup. By that point the managers have already learned (painfully) about the pitfalls of automation. To avoid this painful experience, I would recommend (and most automation engineers will agree with me) spending more time up front researching the styles and techniques of automation and finding an architecture that fits the environment.

There is no doubt that automation adds great value to the overall QA process, but a shortage of knowledge and understanding about automation, and a lack of planning, can also cause a nightmare.

Testing Tool - Rational ClearCase

The Rational ClearCase tool empowers teams to manage all of the changes their projects encounter. This comprehensive software configuration management (SCM) solution provides version control, workspace management, process configurability, and build management. With Rational ClearCase, the development team gets a scalable, best-practices-based development process that simplifies change: shortening development cycles, ensuring the accuracy of releases, and delivering reliable builds and patches for previously shipped products.

1. Simplifying the Process of Change

Software development and change: the two go hand in hand. To succeed at the first, you need tools and processes for managing the second. Rational ClearCase, when combined with Rational ClearQuest, a flexible defect and change tracking tool, is the market-leading software configuration management solution for managing change and complexity. To software teams of all sizes, Rational ClearCase offers tools and processes they can implement today and tailor as they grow. Rational ClearCase provides a family of products that scale from small project workgroups to the distributed global enterprise, enabling teams to:

  • Accelerate release cycles by supporting unlimited parallel development
  • Unify the change process across the software development lifecycle
  • Scale from small teams to the enterprise without changing tools or processes

2. Software Configuration Management

Software development is an inherently complex process. There are so many protocols, mixed platforms, distributed teams, and various roles to contend with that teams invariably get tangled in administrative minutiae. Rational ClearCase, a robust software artifact management tool, combined with Rational ClearQuest, the most flexible defect and change tracking tool on the market, creates a software configuration management (SCM) solution that helps teams handle the rigors of software development. Rational's SCM solution helps manage complex change throughout the development lifecycle.

  • Share code effortlessly and automate error-prone processes: Rational's SCM solution offers the essential functions of transparent code sharing, version control, and advanced workspace and build management. By automating many of the necessary yet error-prone tasks associated with software development, Rational's SCM solution frees teams of all sizes to build better software faster.
  • Unite teams with a process that optimizes efficiency: Process is critical to streamlining software development. A sound process will improve quality, increase development speed, and ultimately enhance overall team collaboration and productivity. Rational's SCM solution offers Unified Change Management (UCM), a best-practices process for managing change at the activity level and controlling workflow.
  • Choose a solution that scales, and make it your last SCM decision: Rational has an SCM solution that meets the needs of development teams of all sizes. From small project teams to the global enterprise, Rational has the right-size solution for the team. Using the same proven technology, processes, and protocols, teams can select the right product today and seamlessly grow with the product tomorrow: no conversion headaches, data disasters, or process changes. Just smooth scalability.


3. Unified Change Management
Managing the ongoing process of change is important for any development team. However, the issue is further complicated as specialized, distributed teams strive to build high-quality software in less time. Rational has responded with a solution that simplifies the process of change. Unified Change Management (UCM) integrates artifact and activity management. It is among Rational's "best practices" for software development.

UCM is delivered through an integration of Rational ClearCase, for software asset management, and Rational ClearQuest for defect and change tracking – both part of Rational Suite. UCM is a powerful, out-of-the-box workflow for automating change across the software lifecycle and across distributed multi-functional development teams.

UCM helps managers reduce risk by coordinating and prioritizing the activities of developers and by ensuring that they work with the right sets of artifacts. Extending across the lifecycle to accommodate all project domain information – requirements, visual models, code, and test artifacts, it helps teams effectively "baseline" requirements together with code and test assets. The result: accelerated team development in which quality standards are met or exceeded on time and on budget.

4. Rational ClearCase Highlights

  • Offers version control, workspace management, build management and process configurability
  • Versions all development artifacts
  • Enables nonstop parallel development — even across geographically distributed sites
  • Provides transparent workspaces for global data access
  • Integrates with Rational ClearQuest to provide a seamless approach to defect and change tracking
  • Enables Unified Change Management — Rational's activity-based process for managing change
  • Scales from small project teams to the global enterprise
  • Ships with Rational Suite for a complete change process across the lifecycle
  • Offers process configurability without expensive customization
  • Provides advanced build auditing
  • Provides Web interface for universal data access
  • Features graphical interface for easier focus on priority tasks
  • Integrates with leading IDEs and development tools, and Web development and authoring tools

Capability Maturity Model Integration (CMMI) Interview Questions

1. Give examples of situations for which causal analysis can be done
Ans:
  • High/low schedule variance,
  • High/low effort variance,
  • Poor rating,
  • Too many re-opened calls,
  • High defect density,
  • Very low resource utilization, etc.
2. How are the goals categorized in each process area?
Ans: Goals are categorized as generic and specific goals in each process area.

3. Which Process Area deals with team structure in project and how team goals are aligned to project goals?
Ans: Integrated Teaming

4. Which Process Area deals with doing causal analysis and take actions?
Ans: Causal Analysis and resolution

5. What is the appraisal methodology of CMMI called?
Ans: SCAMPI (Standard CMMI Appraisal Method for Process Improvement)

6. Which is the process area that deals with objective evaluation of processes and associated work products?
Ans: Process and Product QA

7. What are the common features for the generic goals of any Process Area?
Ans:
  • Commitment to Perform,
  • Ability to Perform,
  • Directing Implementation, and
  • Verifying Implementation & Generic Practices
8. There are ____________ Process Areas at Maturity Level 2.
Ans: 7

9. List down the acquisition types for which you have a contract as of today.
Ans: Buy, Sell, Rent, Outsourcing

10. To provide measurement results, first _______ the measurement data; then _______ the measurement data.
Ans: COLLECT, ANALYZE

11. There are ____________ Process Areas at Maturity Level 3.
Ans: 11

12. List down the process areas from Level 3.
Ans:
  • Requirement Development,
  • Technical Solution,
  • Product Integration,
  • Verification,
  • Validation,
  • OPF,
  • OPD,
  • Organizational Training,
  • Risk Management,
  • Decision Analysis and Resolution,
  • Integrated Supplier Management
13. There are ___ Process Areas at Maturity Level 4; list down the names of the Process Areas.
Ans: 2

14. List down the process areas from Level 4.
Ans:
  • Quantitative Project Management,
  • Organizational process performance

Capability Maturity Model Integration (CMMI) Interview Questions

1. How many representations does the CMMI Model have? Name the representations.
Ans: 2 - Staged and continuous

2. Which are the 5 levels in the CMMI framework in the staged representation?
Ans: The 5 levels in the staged representation are:
  • Initial,
  • Managed,
  • Defined,
  • Quantitatively Managed, and
  • Optimizing
3. Which are the disciplines in the base model of CMMI?
Ans:
  • Software Engineering,
  • System Engineering,
  • Integrated product development,
  • Software acquisition
4. Which Process Area deals with doing causal analysis and take actions?
Ans: CAR (Causal Analysis and Resolution)

5. State True or False - In Staged representations all Process Areas are at Level 5.
Ans: TRUE

6. Go through the process areas listed below and identify the measures that can be collected to implement the concepts of quantitative management in each of the practices. The process areas are:
  • Project Planning
  • Requirement Management
  • Verification
  • Configuration Management
Ans:
  • Project Planning – Effort Variance, Schedule Variance, Effort distribution & productivity
  • Requirement Management – Requirement Stability Index
  • Verification – Number of defects, Defect density, % defect distribution by phase, %effort spent on review and testing, review and testing effectiveness
  • Configuration Management – Effort spent on configuration audits
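
As an illustration of how such measures are computed, here is one common definition of the variance metrics named above (the formula is a conventional one, not taken from this text):

```java
// Sketch of a common variance formula (an assumption, not from the text):
//   variance % = (actual - planned) / planned * 100
public class VarianceMetrics {
    static double variancePercent(double planned, double actual) {
        return (actual - planned) / planned * 100.0;
    }

    public static void main(String[] args) {
        // 100 person-days planned, 120 actually spent: 20% effort variance
        System.out.println(variancePercent(100, 120));
    }
}
```

The same formula applies to schedule variance with planned versus actual duration in place of effort.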
7. Write down the 7 process areas.
Ans:
  • Requirement Management,
  • Project Planning,
  • Project Monitoring and Control,
  • Supplier Agreement Management,
  • Measurement and Analysis,
  • Process and Product Quality Assurance,
  • Configuration Management
8. Estimates are to be done during the ______ phase.
Ans: PLANNING

Software Testing - Frequently Asked Questions (FAQ) Part 3

1. What is SEI- CMM? How many levels are there? What are they?
SEI = 'Software Engineering Institute' at Carnegie Mellon University, initiated by the U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', developed by the SEI. It is a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software.

The five levels are:

  • Initial
  • Repeatable
  • Defined
  • Managed
  • Optimizing.
2. What is ISO standard for Software? Explain.
ISO = 'International Organization for Standardization'. ISO 9001 is a standard often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes.

ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products - it indicates only that documented processes are followed.


3. What is load testing?
Load testing is a performance test that subjects the target-of-test to varying workloads to measure and evaluate the performance behaviors and the ability of the target-of-test to continue to function properly under these different workloads. The goal of load testing is to determine and ensure that the system functions properly at and beyond the expected maximum workload. Additionally, load testing evaluates performance characteristics such as response times, transaction rates, and other time-sensitive issues. The main purpose of this testing is to verify whether the system is capable of handling a large number of users at a particular time.


4. When to stop testing?
  • Deadlines (release deadlines, testing deadlines, etc.)
  • Test cases completed, with a certain percentage passed
  • Test budget depleted / test group decides
  • Coverage of code/functionality/requirements reaches a specified point
  • Bug rate falls below a certain acceptance level (using metrics)
  • Beta or alpha testing period ends
  • Sometimes the user decides

Software Testing - Frequently Asked Questions (FAQ)

1. What is static testing and dynamic testing?
Static testing is verifying that all documents in the project are as per the organizational standards. This type of testing can be done by reviews, walkthroughs, and inspections. Dynamic testing is the process of executing a program or system with the intent of finding errors or making sure that the system meets its intended requirements.

2. What are the metrics you collect during testing life cycle?
Defect resident time, defect removal efficiency, test case effectiveness, testing efficiency, defect rate, defect origin, defect severity type.

3. What are the different approaches to integration testing?
Top-down approach, Bottom-up approach, Incremental Integration.

4. How are defect severities normally defined?
System failure, Major, Minor/Normal/Moderate, Suggestion; or Critical, High priority, Medium priority, Low priority.

5. What is defect density?
Defect density = total number of defects found during testing / size of the software product.
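
A quick sketch of this formula (the numbers are made up for illustration):

```java
// Defect density = defects found during testing / product size.
// Size is commonly expressed in KLOC or function points.
public class DefectDensity {
    static double density(int defectsFound, double sizeKloc) {
        return defectsFound / sizeKloc;
    }

    public static void main(String[] args) {
        // 45 defects found in a 30 KLOC product: 1.5 defects per KLOC
        System.out.println(density(45, 30.0));
    }
}
```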

6. How is software size normally defined?
Function points (FP), KLOC (kilo lines of code), SLOC (source lines of code), man-hours.

7. Write the sequential order in which the following tests are conducted for a project: Integration, Acceptance, Unit, System Testing.
Ans: Unit, Integration, System, and Acceptance Testing.

8. Name some automated testing tools available in the market.
Rational Test Suite, Mercury WinRunner, LoadRunner, Empirix e-Test Suite, Segue SilkTest, QA Load, Astra LoadTest / QuickTest.


9. What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc.

Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

10. What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented toward defect 'prevention'.

Six Rules of Unit Testing

1. Write the test first
2. Never write a test that succeeds the first time
3. Start with the null case, or something that doesn't work
4. Don't be afraid of doing something trivial to make the test work
5. Loose coupling and testability go hand in hand
6. Use mock objects

1. Write the test first

This is the Extreme Programming maxim, and my experience is that it works. First you write the test, and enough application code that the test will compile (but no more!). Then you run the test to prove it fails (see point two, below). Then you write just enough code that the test is successful (see point four, below). Then you write another test.

The benefits of this approach come from the way it makes you approach the code you are writing. Every bit of your code becomes goal-oriented. Why am I writing this line of code? I'm writing it so that this test runs. What do I have to do to make the test run? I have to write this line of code. You are always writing something that pushes your program towards being fully functional.

In addition, writing the test first means that you have to decide how to make your code testable before you start coding it. Because you can't write anything before you've got a test to cover it, you don't write any code that isn't testable.

2. Never write a test that succeeds the first time

After you've written your test, run it immediately. It should fail. The essence of science is falsifiability. Writing a test that works first time proves nothing. It is not the green bar of success that proves your test, it is the process of the red bar turning green. Whenever I write a test that runs correctly the first time, I am suspicious of it. No code works right the first time.

3. Start with the null case, or something that doesn't work

Where to start is often a stumbling point. When you're thinking of the first test to run on a method, pick something simple and trivial. Is there a circumstance in which the method should return null, or an empty collection, or an empty array? Test that case first. Is your method looking up something in a database? Then test what happens if you look for something that isn't there.

Often, these are the simplest tests to write, and they give you a good starting-point from which to launch into more complex interactions. They get you off the mark.

4. Don't be afraid of doing something trivial to make the test work

So you've followed the advice in point 3, and written the following test:

public void testFindUsersByEmailNoMatch()
{
    assertEquals("nothing returned", 0,
        new UserRegistry().findUsersByEmail("not@in.database").length);
}

The obvious, smallest amount of code required to make this test run is:

public User[] findUsersByEmail(String address) {
    return new User[0];
}

The natural reaction to writing code like that just to get the test to run is "But that's cheating!". It's not cheating because, almost always, writing code that looks for a user and sees that he isn't there would be a waste of time - it will be a natural extension of the code you write when you actively start looking for users.

What you're really doing is proving that the test works, by adding the simple code and changing the test from failure to success. Later, when you write testFindUsersByEmailOneMatch and testFindUsersByEmailMultipleMatches, the test will keep an eye on you and make sure that you don't change your behaviour in the trivial cases - that you don't suddenly start throwing an exception instead, or returning null.

Together, points 3 and 4 combine to provide you with a bedrock of tests that make sure you don't forget the trivial cases when you start dealing with the non-trivial ones.

5. Loose coupling and testability go hand in hand

When you're testing a method, you want the test to only be testing that method. You don't want things to build up, or you'll be left with a maintenance nightmare. For example, if you have a database-backed application then you have a set of unit tests that make sure your database-access layer works. So you move up a layer and start testing the code that talks to the access layer. You want to be able to control what the database layer is producing. You may want to simulate a database failure.

So it's best to write your application in self-contained, loosely coupled components, and have your tests generate dummy components (see mock objects, below) in order to test the way each component talks to the others. This also allows you to write one part of the application and test it thoroughly, even when the other parts that your component depends on don't exist yet.

Divide your application into components. Represent each component to the rest of the application as an interface, and limit the extent of that interface as much as possible. When one component needs to send information to another, consider implementing it as an EventListener-like publish/subscribe relationship. You'll find all these things make testing easier and not-so-coincidentally lead to more maintainable code.
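
A minimal sketch of this idea, with invented names: the component under test depends only on a small interface, so the test can substitute a dummy store that simulates a database failure.

```java
// Hypothetical example: the data-access layer sits behind an interface.
interface UserStore {
    String findUser(String id);
}

// Test-time dummy: no database required, and failures can be scripted.
class FailingUserStore implements UserStore {
    public String findUser(String id) {
        throw new IllegalStateException("simulated database failure");
    }
}

// Component under test depends only on the interface, not on a database.
class UserService {
    private final UserStore store;

    UserService(UserStore store) {
        this.store = store;
    }

    // Degrades gracefully when the store fails.
    String describeUser(String id) {
        try {
            return store.findUser(id);
        } catch (IllegalStateException e) {
            return "unavailable";
        }
    }
}

public class LooseCouplingDemo {
    public static void main(String[] args) {
        UserService service = new UserService(new FailingUserStore());
        System.out.println(service.describeUser("cmiller"));
    }
}
```

Because UserService never names a concrete store, the same test can swap in an in-memory store, a failing store, or the real database layer.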

6. Use mock objects

A mock object is an object that pretends to be a particular type, but is really just a sink, recording the methods that have been called on it. One implementation of mock objects can be written in Java using the java.lang.reflect package.

A mock object gives you more power when testing isolated components, because it gives you a clear view of what one component does to another when they interact. You can clearly see that yes, the component you're testing called "removeUser" on the user registry component, and passed in an argument of "cmiller", without ever having to use a real user registry component.
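
A minimal recording mock along these lines can be built with a java.lang.reflect dynamic proxy (the interface and names here are invented for the sketch):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface the mock will pretend to implement.
interface UserRegistryIfc {
    void removeUser(String name);
}

// A dynamic proxy handler that is just a sink: it records every
// method call made on the mock, with its first argument.
public class RecordingMock implements InvocationHandler {
    final List<String> calls = new ArrayList<>();

    public Object invoke(Object proxy, Method method, Object[] args) {
        calls.add(method.getName() + "(" + args[0] + ")");
        return null;
    }

    // Create a mock implementing the interface, backed by this recorder.
    @SuppressWarnings("unchecked")
    static <T> T mock(Class<T> ifc, RecordingMock recorder) {
        return (T) Proxy.newProxyInstance(
                ifc.getClassLoader(), new Class<?>[]{ifc}, recorder);
    }

    public static void main(String[] args) {
        RecordingMock recorder = new RecordingMock();
        UserRegistryIfc registry = mock(UserRegistryIfc.class, recorder);
        registry.removeUser("cmiller"); // the component under test would do this
        System.out.println(recorder.calls);
    }
}
```

After the component under test runs, the test inspects recorder.calls to verify that "removeUser" was invoked with "cmiller", without ever touching a real user registry.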

One useful application of mock objects comes when testing session EJBs without the hassle of going through the EJB container to do it: a test class can check that a session EJB correctly rolls back the containing transaction when something goes wrong. Passing a factory into the EJB is also something that happens quite often when you want to be able to alternate implementations between test time and deployment.

QTP Interview Questions continued...

  1. Explain the Test Fusion Report of QTP.
  Ans: Once a tester has run a test, a Test Fusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screenshots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining Test Fusion reports with QuickTest Professional, you can share reports across an entire QA and development team.

  2. Which environments does QTP support?
  Ans: QuickTest Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.

  3. What is QTP?
  Ans: QuickTest is a graphical-interface record-playback automation tool. It is able to work with any web, Java, or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QuickTest Professional also enables you to test Java applets and applications, multimedia objects in applications, standard Windows applications, Visual Basic 6 applications, and .NET Framework applications.

  4. Explain the QTP testing process.
  Ans: The QuickTest testing process consists of the following main phases:
    Create your test plan - Prior to automating there should be a detailed description of the test including the exact steps to follow, data to be input, and all items to be verified by the test. The verification information should include both data validations and existence or state verifications of objects in the application.
    Recording a session on your application - As you navigate through your application, Quick Test graphically displays each step you perform in the form of a collapsible icon-based test tree. A step is any user action that causes or makes a change in your site, such as clicking a link or image, or entering data in a form.
    Enhancing your test - Inserting checkpoints into your test lets you search for a specific value on a page, object or text string, which helps you identify whether or not your application is functioning correctly. NOTE: Checkpoints can be added to a test as you record it or after the fact via the Active Screen; it is much easier and faster to add checkpoints during the recording process. Broadening the scope of your test by replacing fixed values with parameters lets you check how your application performs the same operations with multiple sets of data. Adding logic and conditional statements to your test enables you to add sophisticated checks.
    Debugging your test - If changes were made to the script, you need to debug it to check that it operates smoothly and without interruption.
    Running your test on a new version of your application - You run a test to check the behavior of your application. While running, Quick Test connects to your application and performs each step in your test.
    Analyzing the test results - You examine the test results to pinpoint defects in your application.
    Reporting defects - As you encounter failures in the application when analyzing test results, you will create defect reports in Defect Reporting Tool.


  9. Explain the QTP Tool interface.
  10. It contains the following key elements:
    Title bar, displaying the name of the currently open test.
    Menu bar, displaying menus of Quick Test commands.
    File toolbar, containing buttons to assist you in managing tests.
    Test toolbar, containing buttons used while creating and maintaining tests.
    Debug toolbar, containing buttons used while debugging tests. Note: The Debug toolbar is not displayed when you open Quick Test for the first time; you can display it by choosing View > Toolbars > Debug.
    Action toolbar, containing buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow. Note: The Action toolbar is not displayed when you open Quick Test for the first time; you can display it by choosing View > Toolbars > Action. If you insert a reusable or external action in a test, the Action toolbar is displayed automatically.
    Test pane, containing two tabs to view your test: the Tree View and the Expert View.
    Test Details pane, containing the Active Screen.
    Data Table, containing two tabs, Global and Action, to assist you in parameterizing your test.
    Debug Viewer pane, containing three tabs to assist you in debugging your test: Watch Expressions, Variables, and Command. (The Debug Viewer pane can be opened only when a test run pauses at a breakpoint.)
    Status bar, displaying the status of the test.

  11. How does QTP recognize Objects in AUT?
  12. Quick Test stores the definitions for application objects in a file called the Object Repository. As you record your test, Quick Test will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by Quick Test), and will contain a set of properties (type, name, etc) that uniquely identify each object. Each line in the Quick Test script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.
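The mapping described above can be pictured as a dictionary from logical names to identifying properties; the sketch below is illustrative only (the object names and properties are invented, not QTP's actual storage format):

```python
# Sketch: an object repository maps a logical name to the identifying
# properties of an application object, so that recorded steps refer to the
# logical name rather than to any physical description.
object_repository = {
    "LoginButton": {"type": "WebButton", "name": "Login"},
    "UserField":   {"type": "WebEdit",   "name": "username"},
}

def describe(logical_name):
    """Return the identifying properties recorded for a logical name."""
    props = object_repository[logical_name]
    return ", ".join(f"{k}={v}" for k, v in props.items())

# Recorded steps reference only the logical name, e.g. Set / Click calls:
print("UserField.Set 'alice' ->", describe("UserField"))
print("LoginButton.Click     ->", describe("LoginButton"))
```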

General Interview Questions on Software Testing (Part 3)

  1. What is alpha and beta testing?

  2. Alpha Test: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by the end-users or others, not by the programmers or testers.
    Beta Test: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by the end-users or others, not by the programmers or testers.

  3. Write the testing life cycle / test work flow for a project?

  4. Risk analysis,
    Test planning,
    Test case design,
    Test execution,
    Test logs,
    Defect tracking and management,
    Test reporting,
    Test metrics collection.


  5. What is Test Plan and what are the contents of a test plan?

  6. A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. Test Plan contains Objective/ Purpose, Scope, Project identification, Reference documents, Test Strategy, Test environment, level of test and test coverage, Types of testing, Acceptance criteria, Testing milestones, Defect tracking and management, Tools, Resources (System & Workers), Test deliverables (Test cases, Test logs, Test Summary reports, Test scripts), Metrics collection.

  7. What are the techniques used for writing test cases?

  8. Test Cases are written for each requirement.
    Boundary value analysis,
    Equivalence partitioning,
    Error guessing.
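The first two techniques can be shown concretely. For a hypothetical field accepting integers from 1 to 100 (the range is assumed for illustration), boundary value analysis picks values at and just beyond each edge, while equivalence partitioning picks one representative per class:

```python
# Sketch: deriving test inputs for a hypothetical field that accepts
# integers in the range 1..100 (the range is assumed for illustration).
LOW, HIGH = 1, 100

# Boundary value analysis: test at and just beyond each boundary.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

# Equivalence partitioning: one representative value per class.
equivalence_classes = {
    "below range (invalid)": 0,
    "in range (valid)": 50,
    "above range (invalid)": 101,
}

def is_valid(value):
    """The behaviour under test: accept only values inside the range."""
    return LOW <= value <= HIGH

for v in boundary_values:
    print(f"boundary {v}: valid={is_valid(v)}")
for name, v in equivalence_classes.items():
    print(f"{name} -> {v}: valid={is_valid(v)}")
```

Six boundary cases plus three class representatives cover the field far more efficiently than testing all 100 valid inputs.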

  9. What's a 'test case'?
  10. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

General Interview Questions on Software Testing (Part 2)

  1. What is Unit Testing?
    Unit testing is the testing of individual components (units) of the software. It is the most 'micro' scale of testing, exercising particular functions or code modules. It is typically done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design, logic and code. It is not always easy unless the application has a well-designed architecture with tight code, and may require developing test driver modules or test harnesses. The tests are based on coverage of code statements, paths, branches and conditions.
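As a minimal sketch, a unit test exercises one function in isolation; the function and its values here are invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99, 0), 99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Note how the tests cover a typical case, an edge case and an error path, mirroring the statement/branch/condition coverage mentioned above.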

  2. What is Integration Testing?
    It is the testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

  3. Describe System Testing.
    System testing tests the system's overall behaviour against the requirements. It is black-box testing, which involves checking the system for functional requirements, response time/performance, security, load/stress/volume, usability, interoperability, etc. (depending on the technology: eCom, Web, client/server, embedded). Software, once validated for meeting the functional requirements, must be verified for proper interface with other system elements like hardware, database and people. System testing verifies that all these system elements mesh properly and that the software achieves its overall function and performance.
  4. What is acceptance testing?
    Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. The goals of these tests are to verify actual data acceptance, processing, and retrieval, and the appropriate implementation of the business rules.

  5. What is regression testing?
    Re-testing the application after bug fixes or modifications of the software or its environment.

  6. How testing is different from debugging?
    Testing is concerned with establishing that defects exist in a software component or a system, whereas debugging is the activity of isolating where the defect is. Thus testing and debugging are complementary activities.

General Interview Question on Software Testing

  1. What is the 'software life cycle'?

    The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

  2. What is software testing? What are the stages of testing available in SDLC? (Software Development Life Cycle).
    Testing involves operating a system or application under controlled conditions and evaluating the results. The purpose of testing is to find defects. The stages of testing are unit, integration, system and user acceptance testing.

  3. What are the different types of testing you know?
    Black box testing / functional testing, White box testing, load testing, stress testing, volume testing, Sanity testing, performance testing, compatibility testing, security testing, database testing, installation testing, recovery testing, and component testing.

  4. What is the difference between black box testing and white box testing?
    In black box testing, the test is based on requirements and functionality. The test is based on inputs, the required action and the output response without considering how the application works. In white box testing, the test is based on internal logic /design of the application code. The test is based on coverage of code statements, paths, branches and conditions.
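The distinction can be made concrete with a small example (the function is invented for illustration): black-box cases come from the stated requirements alone, while white-box cases are chosen so that every branch in the code executes:

```python
# Hypothetical function under test: classify a triangle from its side lengths.
def triangle_type(a, b, c):
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"          # branch 1
    if a == b == c:
        return "equilateral"             # branch 2
    if a == b or b == c or a == c:
        return "isosceles"               # branch 3
    return "scalene"                     # branch 4

# Black-box: inputs chosen from the requirements, ignoring the code.
assert triangle_type(3, 3, 3) == "equilateral"
assert triangle_type(3, 4, 5) == "scalene"

# White-box: inputs chosen so that every branch above is executed.
assert triangle_type(1, 2, 3) == "not a triangle"   # branch 1
assert triangle_type(2, 2, 3) == "isosceles"        # branch 3
```

A purely black-box suite might never exercise branch 1, which is why branch coverage from the code itself is a useful complement.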

  5. What is verification and validation?, and IV & V?
    Verification refers to the set of activities that ensure that software correctly implements a specific function. It typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications; this can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation is achieved through a series of black-box tests that demonstrate conformity with requirements. IV&V is Independent Verification and Validation.

QTP Interview Questions...

  1. What are the Features & Benefits of Quick Test Pro (QTP 8.0)?
    Operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center. Introduces next-generation zero-configuration Keyword Driven testing technology in Quick Test Professional 8.0 allowing for fast test creation, easier maintenance, and more powerful data-driving capability. Identifies objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution. Collapses test documentation and test creation to a single step with Auto-documentation technology. Enables thorough validation of applications through a full complement of checkpoints.


  2. How to handle the exceptions using recovery scenario manager in QTP?
    There are four trigger events during which a recovery scenario can be activated: a pop-up window appears in an open application during the test run; a property of an object changes its state or value; a step in the test does not run successfully; an open application fails during the test run. These triggers are treated as exceptions. You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three parts: 1. Triggered events 2. Recovery steps 3. Post-recovery test run.


  3. What is the use of Text output value in QTP?
    Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values that the application takes for each run and output them to the Data Table.


  4. How to use the Object spy in QTP 8.0 version?
    There are two ways to spy on objects in QTP: 1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat). 2) Through the Object Repository dialog: click the Object Spy button, then in the Object Spy dialog click the button showing a hand symbol. The pointer changes into a hand symbol, and you point it at the object whose state you want to inspect. If the object is not visible or its window is minimized, hold the Ctrl key, activate the required window, and then release the Ctrl key.


  5. How is run-time data (parameterization) handled in QTP?
    You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations without programming, expanding test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
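The underlying idea, one test procedure driven once per data row, can be sketched outside QTP as well; the login function and the data values below are invented for illustration:

```python
# Sketch of data-driven testing: one test procedure executed once per data
# row, mimicking how a QTP Data Table drives iterations.
import csv
import io

TEST_DATA = io.StringIO(
    "username,password,expected\n"
    "alice,secret,ok\n"
    "alice,wrong,denied\n"
    ",secret,denied\n"
)

def login(username, password):
    """Stand-in for the application under test."""
    return "ok" if (username, password) == ("alice", "secret") else "denied"

results = []
for row in csv.DictReader(TEST_DATA):   # one iteration per data row
    actual = login(row["username"], row["password"])
    results.append(actual == row["expected"])

print(f"{sum(results)}/{len(results)} iterations passed")
```

Adding a new test case is then just adding a new data row, with no change to the test logic.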


  6. What is keyword view and Expert view in QTP?
    With Quick Test’s Keyword Driven approach, test automation experts have full access to the underlying test and object properties via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View. Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that Quick Test Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.

Abstract Test Suites Development - A standard approach continued...

Abstract Test Suite - Contents

TTCN is part of the ISO/IEC 9646 conformance-testing framework and is specially designed for the specification of tests of communication systems. The standard introduces the concept of abstract test suites (consisting of abstract test cases): a description of a set of test cases that should be executed for a system. The abstract tests are to be described using a formal language rather than an informal one; TTCN is defined in order to describe these abstract test cases.

Referencing Abstract Syntax Notation One (ASN.1) definitions from protocol specifications in TTCN ensures consistency between the information transferred in the system specification and the test specification.

An abstract test suite consists of four parts:
· Suite Overview part
· Declarations part
· Constraints part
· Dynamic part

(a) Suite Overview part
The suite overview part is a documentary feature comprising indexes and page references. It contains a table of contents and a description of the test suite; its purpose is mainly to document the test suite to increase clarity and readability, making a quick overview of the entire test suite possible. The test suite overview consists of four tables:
· Test Suite Structure
· Test Case Index
· Test Step Index
· Default Index

(b) Declarations part
The declarations part is used for declaring types, variables, timers, PCOs (points of control and observation) and test components. All the data types used in the test suite are declared in this section. The types can be TTCN or ASN.1 types. Declaring types in TTCN or ASN.1 is similar to type declarations in other programming languages, except that in TTCN or ASN.1 tables are used instead of files. The declarations part is concerned both with the definition of new (i.e. not predefined) data types and operations and with the declaration of all the test suite components.

(c) Constraints Part
The constraints part is used for describing the values sent or received. The structured types (PDUs and ASPs) defined in the declarations part are used as models to describe the messages sent on a PCO. The instances used for sending must be complete, but for receiving it is possible to define incomplete values using wildcards, ranges and lists. The constraints part contains the tables for all the ASP, PDU, structure and CM constraints, both in tabular form and in ASN.1.
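The idea of a receive constraint with wildcards can be sketched outside TTCN; the ANY marker, field names and message below are inventions for illustration, not TTCN syntax, and the matching semantics are deliberately simplified:

```python
# Simplified sketch of TTCN-style receive matching: a constraint describes
# the expected message, with a wildcard standing for "any value".
ANY = object()  # wildcard: matches any value in that field

def matches(received, constraint):
    """True if every constrained field matches; ANY matches anything."""
    return all(
        expected is ANY or received.get(field) == expected
        for field, expected in constraint.items()
    )

# Receive constraint: accept any connection id, but require result == 0.
connect_ack = {"type": "CONNECT_ACK", "conn_id": ANY, "result": 0}

assert matches({"type": "CONNECT_ACK", "conn_id": 42, "result": 0}, connect_ack)
assert not matches({"type": "CONNECT_ACK", "conn_id": 42, "result": 1}, connect_ack)
```

A send constraint, by contrast, would have to supply a concrete value for every field, including conn_id.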

(d) Dynamic Part

The actual tests are described in the dynamic part. It contains all test cases, test steps, and default tables with test events and verdicts. The building blocks are test groups, test cases, test steps and test events. The dynamic part contains all the test cases, all the test steps in the test step library and all the defaults in the default library.

Test Component

The test components comprise the test group, test case, test step and test event.

Test event – the smallest unit of a test suite. It corresponds to the sending and receiving of messages and to operations for manipulating timers.

Test step – a grouping of test events, similar to subroutines and procedures in other programming languages.

Test case – the fundamental building block of a test suite. A test case tests a particular feature or function of the implementation under test (IUT) and assigns a verdict depending on its outcome.

Test group – a grouping of test cases. The grouping can be done based on functionality or features.

Test suite – the highest level, encompassing all the test components and serving as the root of the tree.
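The hierarchy above can be pictured as a tree of nested containers; the sketch below uses invented names purely for illustration:

```python
# Sketch of the test component hierarchy: a test suite is the root of a
# tree of groups, cases, steps and events. All names are illustrative only.
suite = {
    "name": "ExampleATS",
    "groups": [{
        "name": "ConnectionEstablishment",
        "cases": [{
            "name": "TC_CONNECT_01",
            "steps": [{
                "name": "preamble",
                # test events: the smallest units - send/receive/timer ops
                "events": ["send CONNECT_REQ", "start timer T1",
                           "receive CONNECT_ACK", "cancel timer T1"],
            }],
            "verdict": "pass",
        }],
    }],
}

# Walk the tree: suite -> group -> case -> step -> event
for group in suite["groups"]:
    for case in group["cases"]:
        events = [e for step in case["steps"] for e in step["events"]]
        print(case["name"], "->", len(events), "events, verdict:", case["verdict"])
```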

Abstract Test Suites Development - A standard approach continued...

ISO 9646 and TTCN
ISO 9646 is a framework for conformance testing, and it incorporates the TTCN language as ISO 9646-3. ISO 9646 is a seven-part standard covering general concepts, abstract test suite specification, the Tree and Tabular Combined Notation (TTCN), test realization, requirements on test laboratories and clients, protocol profile test specification, and implementation conformance statements.
Testing Configuration or Architecture
One part of the standard covers the test specification/configuration: how the test is set up, the responsibilities of the different entities involved, and the handling of the connections between them are all regulated in the standard.


PICS and PIXIT
An important part of configuring a test is to extract abstract information from the test so that it can be provided at the start of execution. The Protocol Implementation Conformance Statement (PICS) and the Protocol Implementation eXtra Information for Testing (PIXIT) are structured as informal questionnaires. The answers can be mapped to parameters in TTCN and imported. PICS and PIXIT contain different information.

PICS – contains information regarding the protocol, such as optional parts, specific restrictions or add-ons. This information serves as the basis for determining which test cases are applicable.

PIXIT – contains information regarding the physical set-up and connection of the test that is not part of the protocol, such as details of the system under test's hardware or sockets.

Abstract Test Suites Development - A standard approach

INTRODUCTION

Over the years, the size and complexity of software in multi-process, distributed-architecture systems running in different environments have grown dramatically. Verification and validation have become all-important activities in this context. Conformance and interoperability are both important characteristics of implementations. An implementation conforms if it fully meets all of the mandatory requirements of the written specification, and also meets those conditional requirements that it claims to meet. Conforming implementations have a high probability of interoperating with other implementations conforming to the same specification.

Test suites formalize the means with which to establish conformance or interoperability. Test suites come in two forms: abstract and executable. A test suite comprises multiple test cases, each designed to test a single requirement or option. An abstract test suite is like un-compiled source code. ISO (the International Organization for Standardization) and ITU-T, while addressing this issue, suggest the use of TTCN (Tree and Tabular Combined Notation), which is test-equipment independent. The TTCN language is part of the ISO/IEC (International Electrotechnical Commission) 9646 conformance-testing framework and is specially designed for the specification of tests of communication systems.

We will discuss a standard approach to abstract test suite development, define the key concepts associated with abstract test suites, and describe the form and maintenance of an ATS (Abstract Test Suite).

TTCN is the de facto standard test environment/language for communication systems. It is used worldwide to test telecommunication and data-communication equipment ranging from built-in communication chips to huge switches and intelligent network services. TTCN is widely used for conformance testing. Conformance testing is the process of verifying that an implementation performs in accordance with a particular standard or specification. It is concerned with external behaviour (black box), not with performance, reliability or fault tolerance.

ISO/IEC 9646 (the ITU X.290 series) is a seven-part standard which defines a framework and methodology for conformance testing of implementations of OSI and ITU protocols. TTCN is the third part of this standard, i.e. ISO/IEC 9646-3. Versions 1 and 2 were developed by ISO as part of the widely used ISO/IEC 9646 conformance-testing standard; updates and maintenance of ISO/IEC 9646-3 and ITU-T X.292 are done by ETSI, and Version 3 was developed by ETSI.

Conformance testing

Conformance testing is verification that an implementation meets the formal requirements of the referenced standards and, more precisely, that it meets the conformance clauses contained in the standards. During the test phase, the implementation is referred to as the Implementation Under Test (IUT). The primary objective of conformance testing is to increase the probability that different product implementations actually interoperate. Testing of performance and robustness is not part of the conformance testing process. No amount of testing can give a full guarantee of successful inter-working: exhaustive testing of every possible aspect of protocol behaviour is unrealistic and impractical for technical and economic reasons.

Conformance testing can, however, give a reasonable degree of confidence that an implementation which passes the tests will comply with the requirements in its communication with other systems. As such, conformance testing can be regarded as a prerequisite for inter-working. Any test can easily be contentious; when comparing a product with its specification using testing tools, we could ask, in case of discrepancies: Is the product wrong? Is the specification ambiguous? Is the test biased? Is the testing method suitable? Is the testing process agreed and understood?


One way to resolve some of these questions in advance is to standardize: use standard protocol specifications, standard tests, standard methods and a standard testing process. This is what conformance testing is about, based upon a standard testing methodology as defined by ISO and CCITT. The benefits of conformance testing can be increased further: the use of standard methods, based on approved test suites developed for each OSI standard protocol and on standard testing procedures, leads to the comparability of results produced by different testers, and thereby to the mutual recognition of test reports. This minimizes the need for repeated conformance testing and the associated costs.