FRESH SETUP
Pattern summary
Before executing, each test prepares its initial conditions from scratch. Tests don’t clean up afterwards.
Category
Design
Context
Use this pattern for long-lasting and maintainable automation, but it can also be useful when writing disposable scripts.
Description
Each test prepares its initial conditions from scratch before executing, so it is sure to run under defined initial conditions. In this way each test can run independently of all other tests. Tests don’t clean up afterwards, so that if you want to check the results you can immediately inspect the state of the Software Under Test (SUT).
Implementation
Initial conditions can be very diverse. Here are some suggestions (two minimal sketches follow the list):
- Database configuration:
- Copy the table structure of the database for the current release of the SUT: in this way you will always have the current database. Since this may take some time you should do it only once at the beginning of a test suite.
- Insert into the database the standard configuration data that you will need for each of the following test cases. This should also be done only once, at the beginning of the test suite.
- For each test case: make sure the relevant variable database entries are empty. If not, remove or initialise content (after ensuring that you have not wiped out the traces of prior test errors).
- For each test case: insert the variable data that your test case expects.
- File configuration:
- Copy input or comparison files to a predefined standard directory.
- SUT:
- To be sure that the SUT is in the required state, it should be started anew for each test case and driven to the foreseen starting point.
- Virtual machines:
- For complex environments it may pay to start each time from a known VM snapshot. This has the added benefit that, in the case of an erratic test, the failed VM can be automatically stored away for further analysis. Storage and time limitations would probably make this impractical for non-regression scenarios where you expect a lot of failures.
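A minimal sketch of the database steps above, using pytest and the standard-library sqlite3 module. The table names, seed rows and configuration values are placeholders for illustration only; substitute your own schema, standard data and database driver.

```python
import sqlite3
import pytest

# Hypothetical table structure for the current release of the SUT.
SCHEMA = """
CREATE TABLE IF NOT EXISTS config    (key TEXT, value TEXT);
CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE IF NOT EXISTS orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
"""

STANDARD_CONFIG = [("currency", "EUR")]  # hypothetical per-suite configuration rows


@pytest.fixture(scope="session")
def suite_db(tmp_path_factory):
    # Steps 1 and 2: done only once per test suite -- create the current
    # table structure and insert the standard configuration data.
    db_file = tmp_path_factory.mktemp("data") / "sut.db"
    conn = sqlite3.connect(str(db_file))
    conn.executescript(SCHEMA)
    conn.executemany("INSERT INTO config VALUES (?, ?)", STANDARD_CONFIG)
    conn.commit()
    yield conn
    conn.close()


@pytest.fixture
def fresh_db(suite_db):
    # Steps 3 and 4: done before every test -- empty the variable tables
    # (only after any analysis of a previous failure is finished) and insert
    # the data this test expects. Deliberately no teardown afterwards, so the
    # database can be inspected immediately after the test.
    suite_db.execute("DELETE FROM orders")
    suite_db.execute("DELETE FROM customers")
    suite_db.execute("INSERT INTO customers VALUES (1, 'Alice')")
    suite_db.commit()
    return suite_db


def test_order_total(fresh_db):
    fresh_db.execute("INSERT INTO orders VALUES (1, 1, 42.0)")
    fresh_db.commit()
    total = fresh_db.execute(
        "SELECT SUM(amount) FROM orders WHERE customer_id = 1").fetchone()[0]
    assert total == 42.0
```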
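For the virtual-machine variant, a minimal sketch assuming VirtualBox's VBoxManage command line is installed; the VM and snapshot names are placeholders, and other hypervisors offer equivalent restore/start commands.

```python
import subprocess

VM_NAME = "sut-vm"           # hypothetical VM name
SNAPSHOT = "clean-baseline"  # hypothetical snapshot taken once, in a known-good state


def restore_baseline_vm():
    # The VM must be powered off before a snapshot can be restored;
    # check=False because the VM may already be off.
    subprocess.run(["VBoxManage", "controlvm", VM_NAME, "poweroff"], check=False)
    subprocess.run(["VBoxManage", "snapshot", VM_NAME, "restore", SNAPSHOT], check=True)
    subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"], check=True)
```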
Leave the SUT as it is after the test is run. In this way, after running the test, you can immediately check the state of the SUT, database contents, etc. without having to restart the test and stop it before it cleans up. Even if you do have to restart it, e.g. because it was followed by other tests, you don’t have to change the scripts in any way to repeat the test and check the results.
Potential problems
If the setup is very slow, you should consider using a SHARED SETUP instead.
Issues addressed by this pattern
FALSE FAIL
INCONSISTENT DATA
INEFFICIENT FAILURE ANALYSIS
INTERDEPENDENT TEST CASES
HARD-TO-AUTOMATE RESULTS
Experiences
Experiences contrasting this pattern (FRESH SETUP) and SHARED SETUP
From Doug Hoffman:
- I prefer to use a shared setup approach (and have generally avoided the Fresh Setup approach altogether after a few experiments a few decades ago). It's a personal preference rather than a professional judgement because both are reasonable approaches.
- With a shared setup approach each of the automated tests can be run independently, but the expectation may be that the entire data set has to be set in place for a clean run. A Fresh Setup approach should always work for test independence while a Shared Setup works most of the time. Most of the time the precondition of a test isn't affected by the postcondition, so it doesn't really matter. I prefer to extract the few cases where the Fresh Setup works better and handle them as exceptions:
- I'll contrast the two approaches for the example of a test for resetting preferences. (The test doesn't do the same thing the second time unless the preferences are reset back to the initial state somewhere. Changing from "don't notify" to "notify" isn't the same test as changing from "notify" to "notify.")
- My Fresh Setup approach is to create a test that sets the preferences to the initial state, verifies them, resets them, and verifies they've been reset. (The one test initializes, changes, and verifies the data.) This is compact and makes the test independent of others, although it doesn't guarantee that the modified data was saved. With this approach it's possible to cover up evidence of previous bugs and incompletely verify the correctness of the saved data (it may not have been actually saved).
- My preferred Shared Setup approach is to assume the data was initialized with the initial data set, create a test that resets the preferences and then have a second test that verifies the changed preferences and sets them back to the original state. (The paired tests change, [independently] verify, and reinitialize the data.) This is less compact, but a bit more comprehensive while the pair run together are still independent from other tests. If the first test was successful, the second is extremely likely to reset the state correctly. If the test fails because the initial state isn't correct it's most likely because some other test mucked with data it shouldn't have.
- I've found that assigning a 'data czar' to aggregate required input and identify expected output is much cleaner than having any more than two or three people managing set-up, tear-down, and results-checking 'independently' in the same data areas. Even with very small teams, over time the test developers roll over, and after two or three years there have been too many people with their hands in the data to reliably know which data is used by other tests and which is filler. Over time the interference gets subtler and more frequent, even when the team has gone into maintenance mode (and so isn't adding tests) this still happens. In a heavy development mode and Fresh Setup the data gets stepped on frequently or the tests distorted because only records within allocated ranges can be used.
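A minimal sketch contrasting the two approaches Doug describes for the preferences-reset example. The sut fixture and its set_notify/get_notify methods are hypothetical stand-ins for whatever the SUT actually exposes, not part of any real framework.

```python
# FRESH SETUP: one self-contained test that initialises, changes and verifies.
def test_reset_notify_fresh(sut):
    sut.set_notify(False)              # force the known initial state
    assert sut.get_notify() is False   # verify the precondition explicitly
    sut.set_notify(True)               # the change under test ("don't notify" -> "notify")
    assert sut.get_notify() is True


# SHARED SETUP: assume the shared data set was initialised with notify = False.
def test_change_notify_shared(sut):
    assert sut.get_notify() is False   # fails if another test mucked with the shared data
    sut.set_notify(True)


def test_verify_and_restore_notify_shared(sut):
    # Paired second test: independently verify the saved value, then put the
    # shared data back to its original state for the rest of the suite.
    assert sut.get_notify() is True
    sut.set_notify(False)
```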
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.