FAIL GRACEFULLY

From Test Automation Patterns
Revision as of 16:03, 21 August 2018 by Dorothy

Pattern summary

If a test fails, it should restore the system and the environment so that subsequent tests are not affected.

Category

Execution

Context

This pattern is applicable if you want your test automation to run unattended and the system under test is mature, so you are not expecting many failures.
This pattern is not appropriate if you need to do failure analysis after a test has completed (or failed), or for one-off scripts.

Description

Ensure that when a test fails, it cleans up and exits so that the following tests can run normally.

Implementation

Build error-catching functionality into your scripts that resets the system and the environment and exits the failed test.
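A minimal sketch of this idea in Python, assuming a hypothetical `reset_environment()` helper that stands in for whatever restore steps your system needs; the key point is the `finally` clause, which runs the cleanup whether the test passed or failed, so a failure cannot leak state into the next test:

```python
import logging

def reset_environment():
    """Hypothetical helper: return the system under test to a known state."""
    logging.info("environment reset")

def run_test(test):
    """Run one test; on failure, log it and clean up so the next test starts clean."""
    try:
        test()
        return True
    except Exception as exc:
        logging.error("test %s failed: %s", test.__name__, exc)
        return False
    finally:
        # Runs on success and on failure alike: the failed test exits,
        # the environment is reset, and the suite carries on.
        reset_environment()

def run_suite(tests):
    """Run every test even if earlier ones fail; return name -> pass/fail."""
    return {t.__name__: run_test(t) for t in tests}
```

Because `run_test` catches the exception and always resets, a crashing test case becomes one recorded failure instead of aborting the whole unattended run.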

Potential problems

This pattern is the opposite approach to FRESH SETUP, where the tests do not clean up after they have run.

Issues addressed by this pattern

FLAKY TESTS
LITTER BUG

Experiences

Bryan Bakker says: I have two experiences for this pattern:

  1. One product I was testing could fail with a crash of the whole system. The information at the time of the crash (logging, crash dumps) had to be preserved, but I also wanted to continue testing; otherwise there would be only one failure over the whole weekend. When the test framework detected such a failure, it collected all the necessary information, stored it in a safe place, and then reset the whole system. The system was a medical device (consisting of software but also electronics and mechanics); the reset was done by switching the mains supply off and on, triggered by the framework. This way we were able to collect several failure situations over the weekend.
  2. The same product contained a generator, which was used by several automated test cases. This generator could overheat, causing test cases to fail. Of course, the generator shutting down when too hot was intended functionality (otherwise it would be damaged). The framework detected when the generator was too hot and selected test cases for execution that did not rely on it, so that the generator could cool down in the meantime. The framework also detected when the generator was operational again, after which test cases that needed it could be scheduled once more. This way the failure situation could be dealt with, resulting in optimal use of the system under test.
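The second experience can be sketched as a simple resource-aware scheduler. All names here are hypothetical (the wiki does not show Bakker's framework); each test case declares whether it needs the generator, and a flag models the framework's temperature check:

```python
def schedule(tests, generator_hot):
    """Split tests into those runnable now and those deferred while the
    generator cools down. Each test is a dict with a 'needs_generator' key."""
    runnable = [t for t in tests if not (t["needs_generator"] and generator_hot)]
    deferred = [t for t in tests if t["needs_generator"] and generator_hot]
    return runnable, deferred

# Example: while the generator is hot, only the independent test runs;
# once it has cooled, the deferred tests become schedulable again.
tests = [
    {"name": "power_output", "needs_generator": True},
    {"name": "ui_smoke",     "needs_generator": False},
]
runnable, deferred = schedule(tests, generator_hot=True)
```

The design point is that the framework, not the individual test, owns the knowledge of which resources are degraded, so a failing resource reorders the run instead of failing every test that touches it.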



If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.
