Testing Glossary - Volume Testing
The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.
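As an illustration, here is a minimal sketch in Python of pushing a large batch of records through a system in a short window and verifying that nothing is dropped. The standard-library sqlite3 module, the row count, and the schema are stand-ins chosen for the example, not values from the glossary; a real volume test would target a production-like data store and exercise the physical and logical boundaries mentioned above.

```python
# Minimal volume-test sketch: insert a large batch of rows quickly and
# confirm the system under test kept all of them. SQLite in memory is a
# stand-in for the real data store; ROW_COUNT is an assumed "large" volume.
import sqlite3
import time

ROW_COUNT = 1_000_000  # assumed "large amount of data" for this sketch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
conn.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    ((f"record-{i}",) for i in range(ROW_COUNT)),
)
conn.commit()
elapsed = time.perf_counter() - start

stored = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(f"Inserted {ROW_COUNT} rows in {elapsed:.1f}s; table holds {stored} rows.")
assert stored == ROW_COUNT  # verify no data was lost under volume
```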
Testing Glossary - Benchmarking
To a certain degree, load testing, stress testing, and benchmarking all describe the same kind of process. They all use a community of virtual users to submit a workload to real hardware and software. Benchmarking has been used for decades to describe:
- real application stressing
- sizing of undeveloped applications
- competitive positioning by vendors of servers and networks
Sizing is an estimate of the configuration a vendor proposes, usually in response to an RFP (Request for Proposal).
Testing Glossary - Stress Testing
The purpose of Stress Testing is to show that the system has the capacity to handle a large number of transactions during peak periods. Stress testing exercises the system beyond its maximum design load and examines its defect and failure behavior. Defects may come to light that would be unlikely to cause system failures in normal usage. Stress testing checks that overloading causes the system to “fail soft” rather than collapse under its load.
How many simultaneous users can my site take without slowing down significantly or crashing? Stress testing is nothing more than applying a steadily increasing load to your site until it reaches the breaking point (when site performance degrades to unacceptable levels). What this test tells you is how well your site will handle unexpected loads that may come as a result of unplanned events, such as changing market forces, a competitor's failure to meet its customers' needs, or serendipitous national publicity.
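A minimal sketch of this idea, using only the Python standard library: the number of simulated users is increased step by step until the average response time crosses an assumed acceptability threshold. The URL, step size, and two-second threshold are illustrative placeholders, not values from the text.

```python
# Stress-test sketch: ramp up concurrent virtual users until the average
# response time exceeds an assumed "unacceptable" threshold.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/"   # hypothetical site under test
MAX_ACCEPTABLE_SECONDS = 2.0         # assumed cut-off for "unacceptable"
REQUESTS_PER_USER = 5

def one_request(url):
    """Issue a single GET and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def run_step(users):
    """Simulate `users` simultaneous users and return the average response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(one_request, TARGET_URL)
                   for _ in range(users * REQUESTS_PER_USER)]
        timings = [f.result() for f in futures]
    return sum(timings) / len(timings)

if __name__ == "__main__":
    users = 10
    while True:
        avg = run_step(users)
        print(f"{users:4d} users -> average response {avg:.2f}s")
        if avg > MAX_ACCEPTABLE_SECONDS:
            print(f"Breaking point reached at roughly {users} simultaneous users.")
            break
        users += 10  # steadily increase the load for the next step
```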
Testing Glossary - Load Testing
The creation of a simulated "load" on a real computer system using virtual users who submit work just as real users would at real client workstations, thus "testing" the system's ability to support such a workload. The virtual users are implemented in software running on a "driver machine" or "injector". Testing of critical software during its development and before its deployment has three components:
- Functional testing: Does it conform to the design specifications?
- Performance testing: Does each unit offer acceptable response time?
- Load testing: What hardware and software system configuration will be required to provide acceptable response times and handle the "load" that will be created by the entire community of users when deployed?
Load testing cannot be accomplished before the application software is available. Unlike other modeling methods for estimating future performance, load testing uses real hardware and software. Historically, load testing was done with real people at real terminals following a script and recording response times with stopwatches.
For systems with hundreds of users, one can today use a load-testing tool rather than real people: the tool captures the activities of real users, creates scripts that "virtual" users submit to the SUT (system under test), and measures the resulting response times.
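As a rough sketch of the virtual-user idea, the Python fragment below replays a "recorded" script (here just an assumed list of URLs) from many concurrent virtual users and reports response times per step. A real load-testing tool would add ramp-up, pacing, data correlation, and far richer reporting.

```python
# Load-test sketch: each virtual user replays a scripted session with think
# time between steps; the driver aggregates response times per step.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SCRIPT = [                      # hypothetical recorded user session
    "http://example.com/login",
    "http://example.com/search?q=widget",
    "http://example.com/cart",
]
VIRTUAL_USERS = 50
THINK_TIME_SECONDS = 1.0

def virtual_user(script):
    """Replay one scripted session and return the response time of each step."""
    timings = []
    for url in script:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                resp.read()
        except Exception:
            pass  # a real tool would record the failure, not ignore it
        timings.append(time.perf_counter() - start)
        time.sleep(THINK_TIME_SECONDS)  # emulate user think time
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(lambda _: virtual_user(SCRIPT),
                                range(VIRTUAL_USERS)))
    # Aggregate response times per script step across all virtual users.
    for step, url in enumerate(SCRIPT):
        step_times = [r[step] for r in results]
        print(f"{url}: mean {statistics.mean(step_times):.2f}s, "
              f"max {max(step_times):.2f}s")
```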
Software Test Automation Myths & Facts
Introduction
Today, software test automation is becoming more and more popular in both client/server and web environments. As requirements keep changing constantly (with new requirements introduced almost daily) and the testing window keeps shrinking, managers are realizing a greater need for test automation. This is good news for us (the people who do test automation). But I am afraid it is the only good news.
Myths & Facts
A number of articles and books have been written on different aspects of software test automation. “Test Automation Snake Oil” by James Bach is an excellent article on some of the myths of automation. I would like to discuss some of these myths and try to point out the facts behind them. I would also like to share some of my observations and, hopefully, point out possible solutions. These are based on my experience with a number of automation projects I have been involved with.
Myth 1 - Find more bugs: Some QA managers think that by doing automation they should be able to find more bugs. It’s a myth. Let’s think about it for a minute. The process of automation starts from a set of written test cases. In most places the test cases are written by test engineers who are familiar with the application they are testing. The test cases are then handed to the automation engineers, who in most cases are not very familiar with the test cases they are automating. In going from test cases to test scripts, automation does not add anything to the process that would find more bugs. The test scripts are only as good as the test cases when it comes to finding bugs. So it is the test cases that find bugs (or don’t find bugs), not the test scripts.
Myth 2 - Eliminate or reduce manual testers: In order to justify automation, some point out that they should be able to eliminate or reduce the number of manual testers in the long run and thus save money in the process. Absolutely not true. Eliminating or reducing manual testers is not one of the objectives of test automation. Here is why: as I pointed out earlier, the test scripts are only as good as the test cases, and the test cases are written primarily by manual testers. They are the ones who know the application inside out. If word gets out (and it usually does) that the number of manual testers will be reduced by introducing automation, then most if not all of them will walk out the door, and quality will go with them.
Observations
I have met a number of QA managers who are frustrated with their automation. According to them, the tool is not doing what it is supposed to do. Here is a true story: a client (whom I had the opportunity to work with for some time) found out that the tool they had just bought did not support the application they were testing (I am not making this up). How can this happen? It happens more often than one would think; I will come back to this when I discuss possible solutions. A manager at one of the major telecom companies, with whom I recently interviewed, told me that after three years and more than a million dollars he is still struggling with automation. This is pretty sad, and I get the feeling he is not alone.
Solutions / Suggestions
Let’s discuss some of the reasons for this frustration and some of the solutions to this problem.
Fact 1 - Unrealistic expectations: Most managers have their first encounter with an automation tool at the vendor's demo, where everything looks nice and simple. But everything is not so nice and simple when you try to use the tool with your own application. Vendors will only tell you the things you want to hear (how easy it is to use, how simple it is to set up, how it will save time and money, how it will help you find more bugs, etc.). This builds a false set of hopes and expectations.
Fact 2 - Lack of planning: A great deal of planning is required, from selection through implementation of the tool. “Evaluating Tools” by Elisabeth Hendrickson is a very good article on a step-by-step process for selecting a tool; she describes “Tool Audience” as one of the steps. This would be an ideal way to select a tool. It may not happen everywhere because of the everyday workload of the people involved, but the participation of the users in the process is very important, because they are the ones who will use the tool day in and day out. I am almost certain that what happened to one of my clients (the tool they bought did not support the application they were testing) would not have happened if the users had been involved in the selection process.
Fact 3 - Lack of a process: Lack of a process may also contribute to the failure of automation. Most places do have some kind of process in place. In most cases (although it differs from place to place) developers write code against a set of requirements. If the requirements do not call for a change in the GUI, then there should not be any change in the GUI. But if the GUI keeps changing from one release to another without any requirement for that change, then there is a problem in the process. You may have the best tool and the best architecture (for your environment) in place and still have problems with your automation because of a faulty process.
Conclusion
I think there is a need to educate QA managers about the benefits and limitations of automation, and to separate fact from fiction. But here is the problem: in most cases, consultants are brought in to fix the problems of prior attempts rather than to help with the initial setup. By that point, the managers have already learned (painfully) about the pitfalls of automation. To avoid this painful experience, I would recommend (and most automation engineers will agree with me) spending more time up front researching the styles and techniques of automation and finding an architecture that fits the environment.
There is no doubt that automation adds great value to the overall QA process, but a shortage of knowledge and understanding about automation, combined with a lack of planning, can also turn it into a nightmare.