SHARED SETUP

Pattern summary

Data and other conditions are set for all tests before beginning the automated test suite.

Category

Design

Context

Use this pattern for long-lasting and maintainable automation. By keeping the data required by each test separate within a common data set, the tests can still be run independently.

Description

Leave the Software Under Test (SUT) as it is after each test is run. In this way, after running a test, you can immediately check the state of the SUT, database contents, etc. without having to restart the test and stop it before it cleans up.

The initial conditions (primarily data) are set for all tests before executing the automated suite. The data required by each test is already populated, so no further setup is required. Tests don't clean up afterwards, so if you want to check the results you can immediately inspect the state of the SUT.
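
To make this concrete, here is a minimal sketch of what an individual test can look like under this pattern. It uses sqlite3 from the Python standard library as a stand-in for the real SUT database; the database file, table, and record id are illustrative assumptions, not part of the pattern itself.

  # Sketch of a test that relies on shared setup: the record it works on is
  # expected to already exist in the shared data set, and nothing is cleaned up.
  import sqlite3

  DB_PATH = "shared_testdata.db"   # assumed location of the shared data set
  CUSTOMER_ID = 1042               # record reserved for this test in the initial data

  def test_update_customer_email():
      # No per-test setup: the shared setup already populated this record.
      conn = sqlite3.connect(DB_PATH)
      conn.execute(
          "UPDATE customers SET email = ? WHERE id = ?",
          ("new.address@example.com", CUSTOMER_ID),
      )
      conn.commit()

      # Verify against the database and stop. There is deliberately no teardown,
      # so the state of the data can be inspected immediately if this fails.
      (email,) = conn.execute(
          "SELECT email FROM customers WHERE id = ?", (CUSTOMER_ID,)
      ).fetchone()
      conn.close()
      assert email == "new.address@example.com"

The shape of the test is the point: no setup step, no teardown step, just use of data that the shared setup already put in place.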

Implementation

Initial conditions can be very diverse. Here are some suggestions (a setup sketch follows the list):

  • Database configuration:
    1. Create the initial database data by adding independent data required by each test and padding as necessary
    2. Avoid reusing data except for cases where tests are checking one another
    3. Copy the initial database data once at the beginning of the test suite
  • File configuration:
    • Copy input or comparison files to a predefined standard directory
  • SUT: each test should leave the SUT in the same state as the starting point for the next test case
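
As an illustration of the suggestions above, the sketch below shows one way the one-time setup could be wired into a pytest suite with a session-scoped fixture. The dump file, reference directory, and psql restore command are assumptions made for the example; substitute whatever tooling your project uses.

  # conftest.py -- load the shared data set once, before the whole suite runs.
  import shutil
  import subprocess
  from pathlib import Path

  import pytest

  INITIAL_DATA_DUMP = Path("testdata/initial_dataset.sql")  # assumed dump of the initial data
  REFERENCE_FILES = Path("testdata/reference_files")        # assumed input/comparison files
  STANDARD_DIR = Path("/tmp/sut_reference_files")           # assumed standard directory

  def restore_database(dump_file: Path) -> None:
      # Hypothetical restore step: adapt the command to your own database.
      subprocess.run(["psql", "-f", str(dump_file), "testdb"], check=True)

  @pytest.fixture(scope="session", autouse=True)
  def shared_setup():
      # 1. Copy the initial database data once, at the beginning of the suite.
      restore_database(INITIAL_DATA_DUMP)
      # 2. Copy input and comparison files to the predefined standard directory.
      if STANDARD_DIR.exists():
          shutil.rmtree(STANDARD_DIR)
      shutil.copytree(REFERENCE_FILES, STANDARD_DIR)
      # Deliberately no code after the yield: tests leave the SUT and data in place.
      yield

Because the fixture is session-scoped and autouse, it runs exactly once before the first test and never again, which matches setting everything up before the suite rather than before each test.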

Potential problems

If the data set is dynamic (changing independently of the tests), you should consider using FRESH SETUP instead.

Issues addressed by this pattern

LONG SET-UP

Experiences

Experiences contrasting FRESH SETUP and this pattern (SHARED SETUP)
From Doug Hoffman:
I prefer to use a shared setup approach (and have generally avoided the Fresh Setup approach altogether after a few experiments a few decades ago). It's a personal preference rather than a professional judgement because both are reasonable approaches.

  • With a shared setup approach each of the automated tests can be run independently, but the expectation may be that the entire data set has to be set in place for a clean run. A Fresh Setup approach should always work for test independence, while a Shared Setup works most of the time. Most of the time the precondition of a test isn't affected by the postcondition, so it doesn't really matter. I prefer to extract the few cases where Fresh Setup works better and handle them as exceptions:
  • I'll contrast the two approaches for the example of a test for resetting preferences. (The test doesn't do the same thing the second time unless the preferences are reset back to the initial state somewhere. Changing from "don't notify" to "notify" isn't the same test as changing from "notify" to "notify.")
  • My Fresh Setup approach is to create a test that sets the preferences to the initial state, verifies them, resets them, and verifies they've been reset. (The one test initializes, changes, and verifies the data.) This is compact and makes the test independent of others, although it doesn't guarantee that the modified data was saved. With this approach it's possible to cover up evidence of previous bugs and incompletely verify the correctness of the saved data (it may not have been actually saved).
  • My preferred Shared Setup approach is to assume the data was initialized with the initial data set, create a test that resets the preferences, and then have a second test that verifies the changed preferences and sets them back to the original state. (The paired tests change, independently verify, and reinitialize the data; see the sketch after this list.) This is less compact but a bit more comprehensive, while the pair, run together, are still independent from other tests. If the first test was successful, the second is extremely likely to reset the state correctly. If the test fails because the initial state isn't correct, it's most likely because some other test mucked with data it shouldn't have.
  • I've found that assigning a 'data czar' to aggregate required input and identify expected output is much cleaner than having any more than two or three people managing set-up, tear-down, and results-checking 'independently' in the same data areas. Even with very small teams, the test developers turn over, and after two or three years there have been too many people with their hands in the data to reliably know which data is used by other tests and which is filler. Over time the interference gets subtler and more frequent; this still happens even when the team has gone into maintenance mode (and so isn't adding tests). In heavy development mode with Fresh Setup, the data gets stepped on frequently, or the tests are distorted because only records within allocated ranges can be used.
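
As a rough illustration of the paired-test idea above (not Doug's actual code), here is a sketch in Python/pytest. The SUT's real preferences API isn't given on this page, so a tiny in-memory dictionary stands in for it; the "notify" preference and its initial value are likewise assumptions.

  # Paired tests: the first changes a preference, the second independently
  # verifies the saved change and puts the shared data back to its initial state.
  INITIAL_NOTIFY = False              # value assumed to be in the shared initial data set
  prefs = {"notify": INITIAL_NOTIFY}  # stand-in for the SUT's saved preferences

  def test_change_notify_preference():
      # Fails loudly if some other test has disturbed the shared data.
      assert prefs["notify"] == INITIAL_NOTIFY, "shared data not in its initial state"
      prefs["notify"] = not INITIAL_NOTIFY  # the change under test

  def test_verify_and_reset_notify_preference():
      # Independently verifies that the change made by the first test was saved,
      # then reinitializes the shared data for the rest of the suite.
      assert prefs["notify"] == (not INITIAL_NOTIFY)
      prefs["notify"] = INITIAL_NOTIFY

With pytest's default ordering the two tests run in file order, so the pair stays together while remaining independent of the rest of the suite.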


If you have also used this pattern and would like to contribute your experience to the wiki, please go to Experiences to submit your experience or comment.


.................................................................................................................Main Page / Back to Design Patterns / Back to Test Automation Patterns