INDEPENDENT TEST CASES
Pattern summary
Make each automated test case self-contained
Category
Design
Context
This pattern is necessary if you want to implement long-lasting and efficient test automation.
An exception is when one test specifically checks the results from the prior test (e.g., using a separate test to ensure data was written into a database and not just retrieved from working memory).
It is not necessary for just writing disposable scripts.
Description
Automated test cases run independently of each other, so that they can be started separately and are not affected if tests run earlier have failed. A single test may still consist of a large number of different scripts or actions: some set up the conditions needed for the test, others execute the test steps.
Automated tests should be short and well-defined. For example, if you have one long test that takes 30 minutes to run, maybe it fails after 5 minutes the first time, after 10 minutes the next time and after 15 minutes the third time. So far you have spent half an hour and found 3 bugs (which you have fixed), but you are only halfway through the test. If you split that test into 10 tests of 3 minutes each, you can start all of them at once. If you can run them in parallel, execution takes only 3 minutes. Maybe 3 or 4 of them fail, but you now know that all of the others have passed; after you fix those failures, the whole set of tests should pass, and it has taken a lot less time.
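To make the idea concrete, here is a minimal sketch in Python using pytest. The tool choice is an assumption (the pattern itself is tool-agnostic), and the OrderService class and its methods are hypothetical placeholders standing in for your real SUT; the point is only that each test creates the state it needs itself instead of relying on an earlier test.

```python
# Minimal sketch: independent, self-contained test cases.
# OrderService is a hypothetical placeholder for the real system under test (SUT).

class OrderService:
    """Hypothetical stand-in for the SUT."""

    def __init__(self):
        self.orders = {}

    def place_order(self, order_id, item):
        self.orders[order_id] = {"item": item, "status": "open"}

    def cancel_order(self, order_id):
        self.orders[order_id]["status"] = "cancelled"


# Each test builds the state it needs itself, so it can be started on its own
# and is not affected if another test failed earlier in the run.

def test_place_order_creates_open_order():
    sut = OrderService()                 # fresh SUT instance (FRESH SETUP)
    sut.place_order("A-1", "widget")
    assert sut.orders["A-1"]["status"] == "open"


def test_cancel_order_marks_order_cancelled():
    sut = OrderService()                 # does not rely on the previous test
    sut.place_order("A-2", "widget")     # recreates its own precondition
    sut.cancel_order("A-2")
    assert sut.orders["A-2"]["status"] == "cancelled"
```

Because the tests share no state, they can be scheduled in any order or run in parallel (for example with pytest-xdist via pytest -n auto), which is what makes the 3-minute wall-clock time in the example above achievable.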
Implementation
- Every test starts with a FRESH SETUP before performing any action, and each test has exclusive access to its own resources (see the sketch after this list).
- If a test fails (stops before completion), it must reset the Software Under Test (SUT) and/or the tool so that the following tests can run normally (see the pattern FAIL GRACEFULLY).
- A self-contained test does NOT mean that one test case tests the whole application! On the contrary, each test should have ONE CLEAR PURPOSE derived from a single business rule.
- If you PRIORITIZE TESTS, you will also be able to run or rerun the tests independently of each other.
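The first two bullets can be sketched with a pytest fixture (again an assumption about tooling; the FakeDatabase class and its methods are hypothetical placeholders for whatever resource your tests really own, such as a database schema, a user account or a working directory):

```python
import pytest


class FakeDatabase:
    """Hypothetical resource that a test needs exclusive access to."""

    def __init__(self):
        self.rows = []
        self.connected = False

    def connect(self):
        self.connected = True

    def insert(self, row):
        self.rows.append(row)

    def reset(self):
        self.rows.clear()
        self.connected = False


@pytest.fixture
def db():
    """FRESH SETUP: every test gets its own, newly initialised resource."""
    database = FakeDatabase()
    database.connect()
    yield database
    # Teardown after 'yield' runs even if the test body raised an exception,
    # so the resource is reset and the following tests can run normally
    # (see FAIL GRACEFULLY).
    database.reset()


def test_insert_stores_exactly_one_row(db):
    # ONE CLEAR PURPOSE: one business rule per test, not the whole application.
    db.insert({"id": 1, "item": "widget"})
    assert db.rows == [{"id": 1, "item": "widget"}]
```

Because the teardown after yield runs even when the assertion fails, a broken test leaves the resource clean and the rest of the suite keeps running normally.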
Possible problems
Manual tests are often performed sequentially to save set-up time. They should be redesigned before being automated, so that the automation is as efficient as possible.
If the initial set-up is very complicated or takes too much time and there is no other option, do not use this pattern; use CHAINED TESTS instead.
Issues addressed by this pattern
FALSE FAIL
FLAKY TESTS
INEFFICIENT EXECUTION
INFLEXIBLE AUTOMATION
INTERDEPENDENT TEST CASES
OBSCURE TESTS
Experiences
If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!