INDEPENDENT TEST CASES

From Test Automation Patterns
Revision as of 09:07, 4 April 2018 by Cathal
Main Page / Back to Design Patterns / Back to Test Automation Patterns

Pattern summary

Make each automated test case self-contained

Category

Design

Context

This pattern is necessary if you want to implement long-lasting and efficient test automation; it is not needed for disposable scripts.
An exception is when one test deliberately checks the results of a prior test (e.g., a separate test that confirms data was actually written to the database rather than merely held in working memory).

Description

Automated test cases should run independently of each other, so that they can be started separately and are not affected when earlier tests fail. A test may consist of many different scripts or actions: some set up the conditions needed for the test, others execute the test steps.

Automated tests should be short and well defined. For example, suppose one long test takes 30 minutes to run: it fails after 5 minutes the first time, after 10 the second, and after 15 the third. You have now spent half an hour, found (and fixed) three bugs, and are still only halfway through the test. If you instead split it into ten independent tests of 3 minutes each, you can start all of them at once. Run in parallel, they finish in about 3 minutes. Perhaps three or four of them fail, but you now know that all the others have passed, so after you fix those failures the whole set should pass, and it takes far less time.
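The parallel-execution argument above can be sketched in Python. This is a minimal illustration, not a real test runner: `run_test` is a hypothetical stand-in for one short, self-contained test that builds its own preconditions, so the ten tests share no state and can safely run concurrently.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(n):
    # Fresh setup owned by this test only; no other test touches it.
    data = {"id": n}
    time.sleep(0.01)  # stands in for the ~3 minutes of test steps
    # Return (test number, pass/fail) instead of raising, to keep
    # one failure from stopping the others.
    return n, data["id"] == n

# Because the tests share no state, running them in parallel is safe:
# wall-clock time is roughly one test's duration, not the sum of all ten.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_test, range(10)))

failed = [n for n, ok in results if not ok]
print(failed)  # → []
```

Any subset of the failed tests can then be rerun on its own, which is exactly what interdependent tests do not allow.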

Implementation

  • Every test starts with a FRESH SETUP before performing any action, and each test has exclusive access to its resources.
  • If a test fails (stops before completion), it must reset the Software Under Test (SUT) and/or the tool so that the following tests can run normally (see the pattern FAIL GRACEFULLY).
  • A self-contained test does NOT mean that one test case tests the whole application! On the contrary, each test should have ONE CLEAR PURPOSE, derived from a single business rule.
  • If you PRIORITIZE TESTS, you will also be able to run or rerun tests independently of each other.
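The first two points can be sketched with Python's standard unittest framework, whose per-test setUp gives every test its own fresh fixture. `FakeAccountStore` is a hypothetical in-memory stand-in for the SUT; in practice the setup would build e.g. a database session or an application instance.

```python
import unittest

class FakeAccountStore:
    # Hypothetical stand-in for the Software Under Test (SUT).
    def __init__(self):
        self.accounts = {}

    def add(self, name, balance):
        self.accounts[name] = balance

    def balance(self, name):
        return self.accounts[name]  # raises KeyError for unknown names

class AccountTests(unittest.TestCase):
    def setUp(self):
        # FRESH SETUP: a brand-new store is built before every test,
        # so no test depends on state left behind by another, and a
        # failure in one test cannot corrupt the next test's fixture.
        self.store = FakeAccountStore()

    def test_deposit_creates_account(self):
        self.store.add("alice", 100)
        self.assertEqual(self.store.balance("alice"), 100)

    def test_unknown_account_raises(self):
        # Passes whether the other test ran first, ran later, or failed.
        with self.assertRaises(KeyError):
            self.store.balance("bob")
```

Because each test owns its fixture, the runner may execute these in any order, individually, or in parallel, which is the property this pattern is after.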

Possible problems

Manual tests are often performed sequentially to save set-up time. They should be redesigned before automating to make sure that the automation is as efficient as possible.
If the initial set-up is very complicated or takes too long and there is no other option, do not use this pattern; use CHAINED TESTS instead.

Issues addressed by this pattern

FALSE FAIL
FLAKY TESTS
INEFFICIENT EXECUTION
INFLEXIBLE AUTOMATION
INTERDEPENDENT TEST CASES
OBSCURE TESTS

Experiences

If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!
