ONE CLEAR PURPOSE
Latest revision as of 15:56, 21 August 2018
Pattern summary
Each test has only one clear purpose
Category
Design
Context
Use this pattern to build efficient, modular, and maintainable testware.
It is not necessary for disposable scripts.
Description
Examples and the resulting tests should have a single clear purpose derived from one business rule.
Implementation
Make sure that each automated test has just one clearly defined job to do. If you have SET STANDARDS, there should be guidelines for this, and how to describe the purpose of the test.
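As a minimal sketch of what "one clearly defined job" can look like, here is a pytest-style example in Python. The function `apply_discount`, the 10% business rule, and the amounts (in cents) are all hypothetical, purely for illustration:

```python
# A minimal sketch: each test verifies exactly one business rule, and its
# purpose is stated in the name and docstring. All names are hypothetical.

def apply_discount(total_cents):
    """Stand-in for real logic: orders of 100.00 or more get 10% off."""
    return total_cents * 90 // 100 if total_cents >= 10_000 else total_cents

def test_order_of_100_or_more_gets_10_percent_discount():
    """Purpose: verify ONLY the discount rule for qualifying orders."""
    assert apply_discount(20_000) == 18_000

def test_order_below_100_is_not_discounted():
    """Purpose: verify ONLY that small orders keep their full price."""
    assert apply_discount(5_000) == 5_000

# Anti-pattern to avoid: one test that checks the discount rule, the page
# layout, and the confirmation e-mail in a single script.
```

The test names double as the documentation of purpose that SET STANDARDS asks for: a failure report already tells the reader which single rule broke.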
Potential problems
Issues addressed by this pattern
GIANT SCRIPTS
MANUAL MIMICRY
OBSCURE TESTS
Experiences
Added by Wietze Veld
In our company we started automating the tests for our existing application by rewriting the manual tests as automated tests. The manual tests were all scripted in documents, and the test steps were generally copied one by one from the manual script to the automated script (we use an in-house tool built on Selenium WebDriver that lets our testers write command-driven test scripts in MS Excel). Our first goal in this process was to get as much coverage with automated tests as possible.
After converting the manual test scripts to automated tests, we ran into some problems:
- Other teams executing these tests often could not easily determine why a test failed
- Tests failed for reasons outside the scope of the test script
Reasons why:
- Too much was being validated
- Multiple scenarios in one test script
- Unrelated scenarios in one test script
- Purpose of the test was not clearly documented
Identified causes:
- Original manual tests already covered too much; tests were not reevaluated before being converted to automated tests (MANUAL MIMICRY)
- Testers tended not to stick to the scenario, but also validated whatever happened to be in the user interface at that test step
- Test efficiency was misunderstood: putting multiple scenarios in one test script was often chosen as the way to save setup time
Solution:
Write test scripts with one clearly documented scenario.
The testers no longer needed to concern themselves with the efficiency of the test setup steps (which in our application follow the same pattern with variable input); this was improved by the Test Automation team.
This is an ongoing process in which the teams will reevaluate their test scripts and optimize where needed.
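The division of labour described above can be sketched in Python: the Test Automation team owns one reusable setup routine, while each test script keeps a single documented scenario. All names here (`standard_test_session`, the session fields) are hypothetical; the team's real tooling is Excel/Selenium based:

```python
# Sketch: shared setup owned by the Test Automation team, so individual
# test scripts stay focused on one scenario. All names are hypothetical.

def standard_test_session(variant="default"):
    """Reusable setup routine (same pattern, variable input), maintained
    centrally rather than repeated in every script."""
    return {"user": f"user-{variant}", "logged_in": True, "cart": []}

def test_logged_in_user_can_add_item_to_cart():
    """Scenario: a logged-in user adds one item to an empty cart."""
    session = standard_test_session()
    session["cart"].append("item-42")
    assert session["cart"] == ["item-42"]

def test_fresh_session_starts_with_empty_cart():
    """Scenario: a newly set-up session has nothing in the cart."""
    session = standard_test_session("fresh")
    assert session["cart"] == []
```

Because setup efficiency lives in one place, a tester who needs a second scenario writes a second script rather than bolting it onto the first.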
Solution for the near future:
- Introduce the same in-house test framework, but for coded tests instead of MS Excel.
- Use Behavior-Driven Development with SpecFlow, so that scenarios are clearly defined and documented.
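For illustration only: with SpecFlow the scenario would be written in Gherkin and bound to C# step definitions. The same single-scenario discipline can be mirrored in plain Python, with the Given/When/Then structure as comments; `log_in` and the credentials are hypothetical stand-ins for the system under test:

```python
# Illustrative sketch of the Gherkin scenario
#   Scenario: Registered user can log in
#     Given a registered user with valid credentials
#     When the user logs in
#     Then access is granted
# expressed as one single-purpose Python test. All names are hypothetical.

def log_in(username, password):
    """Hypothetical stand-in: accepts one known credential pair."""
    return username == "alice" and password == "s3cret"

def test_registered_user_can_log_in():
    # Given a registered user with valid credentials
    username, password = "alice", "s3cret"
    # When the user logs in
    logged_in = log_in(username, password)
    # Then access is granted
    assert logged_in
```

The point is not the tooling but the shape: one scenario, spelled out in business terms, per test.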
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.