FRESH SETUP

From Test Automation Patterns
Main Page / Back to Design Patterns / Back to Test Automation Patterns

Pattern summary

Before executing, each test prepares its initial conditions from scratch; tests do not clean up afterwards.

Category

Design

Context

Use this pattern for long-lasting, maintainable automation; it can also be useful when writing disposable scripts.

Description

Each test prepares its initial conditions from scratch before executing, ensuring that it runs under defined initial conditions. This way, each test can run independently of all other tests. Tests do not clean up afterwards, so if you want to check the results you can immediately inspect the state of the Software Under Test (SUT).

Implementation

Initial conditions can be very diverse. Here are some suggestions:

  • Database configuration:
    1. Copy the table structure of the database for the current release of the SUT: this way you will always have the current database. Since this may take some time, do it only once, at the beginning of a test suite.
    2. Insert into the database the standard configuration data that you will need for each of the following test cases. This should also be done only once, at the beginning of the test suite.
    3. For each test case: make sure the relevant variable database entries are empty. If not, remove or initialise the content (after ensuring that you have not wiped out the traces of prior test errors).
    4. For each test case: insert the variable data that your test case expects.

  • File configuration:
    • Copy input or comparison files to a predefined standard directory.

  • SUT:
    • To be sure that the SUT is in the required state, start it anew for each test case and drive it to the foreseen starting point.

  • Virtual machines:
    • For complex environments it may pay to start each time from a known VM snapshot. This has the added benefit that, in the case of an erratic test, the failed VM can be automatically stored away for further analysis. Storage and time limitations would probably make this impractical for non-regression scenarios where you expect many failures.
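The database steps above can be sketched as a small fresh-setup flow. This is a minimal illustration, assuming an in-memory SQLite database; the table names (`config`, `orders`) and the configuration rows are invented for the example, not taken from any particular SUT.

```python
import sqlite3

def create_schema(conn):
    # Step 1: copy the table structure for the current release (once per suite).
    conn.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT)")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

def load_standard_config(conn):
    # Step 2: insert the standard configuration data (once per suite).
    conn.execute("INSERT INTO config VALUES ('currency', 'EUR')")

def fresh_setup(conn, rows):
    # Step 3: make sure the variable tables are empty, but check first so
    # traces of a prior test error are noticed before being wiped out.
    leftovers = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    if leftovers:
        print(f"warning: {leftovers} leftover row(s) from a previous test")
    conn.execute("DELETE FROM orders")
    # Step 4: insert the variable data this test case expects.
    conn.executemany("INSERT INTO orders (item) VALUES (?)", rows)

conn = sqlite3.connect(":memory:")
create_schema(conn)          # once per test suite
load_standard_config(conn)   # once per test suite
fresh_setup(conn, [("widget",), ("gadget",)])  # once per test case
```

Note that `create_schema` and `load_standard_config` run once per suite, while `fresh_setup` runs before every test case and there is no corresponding teardown.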


Leave the SUT as it is after the test is run. In this way, after running the test, you can immediately check the state of the SUT, database contents etc. without having to restart the test and stopping it before it cleans up. Even if you do have to restart it, e.g. because it was followed by other tests, you don’t have to change the scripts in any way to repeat the test and check the results.
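A minimal sketch of this "no clean-up" idea, using an invented dictionary as a stand-in for SUT state: the test guarantees its start state in setup instead of teardown, so its results remain inspectable afterwards and it can be re-run without any script changes.

```python
# Stand-in for SUT state (hypothetical; a real SUT would be a database,
# file system, application session, etc.).
sut_state = {}

def test_place_order():
    sut_state.clear()                 # fresh setup: defined initial conditions
    sut_state["orders"] = ["widget"]  # exercise the SUT
    assert sut_state["orders"] == ["widget"]
    # no teardown: sut_state is left as-is for failure analysis

test_place_order()
print(sut_state)    # state still available for inspection after the run
test_place_order()  # repeating the test requires no script changes
```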

Potential problems

If the setup is very slow, consider using a SHARED SETUP instead.

Issues addressed by this pattern

FALSE FAIL
INCONSISTENT DATA
INEFFICIENT FAILURE ANALYSIS
INTERDEPENDENT TEST CASES
HARD-TO-AUTOMATE RESULTS

Experiences

Experiences contrasting this pattern (FRESH SETUP) and SHARED SETUP
From Doug Hoffman:

  • I prefer to use a shared setup approach (and have generally avoided the Fresh Setup approach altogether after a few experiments a few decades ago). It's a personal preference rather than a professional judgement because both are reasonable approaches.
  • With a shared setup approach each of the automated tests can be run independently, but the expectation may be that the entire data set has to be set in place for a clean run. A Fresh Setup approach should always work for test independence, while a Shared Setup works most of the time. Most of the time the precondition of a test isn't affected by the post-condition, so it doesn't really matter. I prefer to extract the few cases where the Fresh Setup works better and handle them as exceptions:
  • I'll contrast the two approaches for the example of a test for resetting preferences. (The test doesn't do the same thing the second time unless the preferences are reset back to the initial state somewhere. Changing from "don't notify" to "notify" isn't the same test as changing from "notify" to "notify.")
  • My Fresh Setup approach is to create a test that sets the preferences to the initial state, verifies them, resets them, and verifies they've been reset. (The one test initializes, changes, and verifies the data.) This is compact and makes the test independent of others, although it doesn't guarantee that the modified data was saved. With this approach it's possible to cover up evidence of previous bugs and incompletely verify the correctness of the saved data (it may not have been actually saved).
  • My preferred Shared Setup approach is to assume the data was initialized with the initial data set, create a test that resets the preferences, and then have a second test that verifies the changed preferences and sets them back to the original state. (The paired tests change, [independently] verify, and reinitialize the data.) This is less compact, but a bit more comprehensive, while the pair run together are still independent from other tests. If the first test was successful, the second is extremely likely to reset the state correctly. If the test fails because the initial state isn't correct, it's most likely because some other test mucked with data it shouldn't have.
  • I've found that assigning a 'data czar' to aggregate required input and identify expected output is much cleaner than having any more than two or three people managing set-up, tear-down, and results-checking 'independently' in the same data areas. Even with very small teams, over time the test developers roll over, and after two or three years there have been too many people with their hands in the data to reliably know which data is used by other tests and which is filler. Over time the interference gets subtler and more frequent; even when the team has gone into maintenance mode (and so isn't adding tests) this still happens. In heavy development mode with Fresh Setup, the data gets stepped on frequently, or the tests get distorted because only records within allocated ranges can be used.
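The two styles contrasted above can be sketched side by side. This is a hedged illustration, not Doug Hoffman's actual code: the `prefs` dictionary is an invented stand-in for a preference store.

```python
# Invented stand-in for the SUT's preference store.
prefs = {"notify": False}

# Fresh Setup style: one self-contained test that initializes the state,
# verifies it, changes it, and verifies the change.
def test_reset_preferences_fresh():
    prefs["notify"] = False          # set the initial state explicitly
    assert prefs["notify"] is False  # verify the initial state
    prefs["notify"] = True           # reset (change) the preference
    assert prefs["notify"] is True   # verify the change

# Shared Setup style: a pair of tests that assume the shared initial data set.
def test_change_preferences_shared():
    prefs["notify"] = True           # change only; initial state is assumed

def test_verify_and_restore_shared():
    assert prefs["notify"] is True   # independent verification of the change...
    prefs["notify"] = False          # ...then restore the shared initial state

test_reset_preferences_fresh()
test_change_preferences_shared()
test_verify_and_restore_shared()
```

The fresh-style test is compact and self-contained; the shared-style pair separates the change from its verification, so the second test catches the case where the change was never actually persisted.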



If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.
