ONE CLEAR PURPOSE
Revision as of 14:53, 4 July 2018
Pattern summary
Each test has only one clear purpose
Category
Design
Context
Use this pattern to build efficient, modular, and maintainable testware.
It is not necessary for disposable scripts.
Description
Examples and the resulting tests should have a single clear purpose derived from one business rule.
Implementation
Make sure that each automated test has just one clearly defined job to do. If you have SET STANDARDS, they should include guidelines for this and for how to describe the purpose of each test.
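As an illustration of the implementation advice (not taken from the wiki itself), the sketch below contrasts a test with one documented purpose against the multi-purpose alternative. The `Account` class and its rules are hypothetical, invented only to make the example runnable:

```python
# Minimal sketch of ONE CLEAR PURPOSE. The Account class and its business
# rules are hypothetical, used only to illustrate the pattern.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# AVOID: one script that validates deposits, withdrawals, and error handling
# together -- when it fails, the reason for the failure is unclear.

# PREFER: each test has one clearly documented job, derived from one rule.

def test_deposit_increases_balance():
    """Purpose: a deposit adds exactly its amount to the balance."""
    account = Account(balance=100)
    account.deposit(50)
    assert account.balance == 150

def test_withdrawal_beyond_balance_is_rejected():
    """Purpose: withdrawals larger than the balance raise an error."""
    account = Account(balance=10)
    try:
        account.withdraw(20)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

test_deposit_increases_balance()
test_withdrawal_beyond_balance_is_rejected()
```

When one of these tests fails, its name and docstring already state which single rule is broken, which is the point of the pattern.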
Potential problems
Issues addressed by this pattern
GIANT SCRIPTS
MANUAL MIMICRY
OBSCURE TESTS
Experiences
Added by Wietze Veld
In our company we started to automate the tests for our existing application by rewriting the manual tests as automated tests. The manual tests were all scripted in documents, and the test steps were generally copied one by one from the manual script to the automated script (we use an in-house tool built on Selenium WebDriver that allows our testers to write command-driven test scripts in MS Excel). Our first goal in this process was to get as much coverage with automated tests as possible.
After this entire process of converting manual test scripts to automated tests, we experienced several problems:
- Other teams executing these tests were often not able to easily determine why a test failed
- Tests failed for reasons not in the scope of the test script
Reasons why:
- Too much was being validated
- Multiple scenarios in one test script
- Unrelated scenarios in one test script
- Purpose of the test was not clearly documented
Identified causes:
- The original manual tests already covered too much, and tests were not reevaluated before being converted to automated tests (MANUAL MIMICRY)
- Testers tended not to stick to the scenario, but also validated whatever happened to be in the user interface at that test step
- Test efficiency was misunderstood: putting multiple scenarios in one test script was often chosen as a way to save setup time
Solution:
Write test scripts with one clearly documented scenario.
The testers no longer needed to concern themselves with the efficiency of the test setup steps (which in our application follow the same pattern with variable input); this was optimized by the Test Automation team.
This is an ongoing process in which the teams reevaluate their test scripts and optimize where needed.
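The idea of taking setup efficiency out of the testers' hands can be sketched as a single shared, parameterized setup helper, so each single-scenario script stays short while the setup logic is maintained in one central place. All names here (`OrderContext`, `setup_order`, the item data) are hypothetical, not taken from the in-house tool:

```python
# Sketch: common setup extracted into one parameterized helper, so each
# single-purpose test stays cheap to write and the setup logic is owned
# centrally. All names and data are hypothetical.

class OrderContext:
    def __init__(self, customer, items):
        self.customer = customer
        self.items = items
        self.status = "new"

def setup_order(customer="default-customer", items=()):
    """Shared setup: the same pattern every time, with variable input.
    Maintained centrally, so scenario tests never duplicate it."""
    return OrderContext(customer, list(items))

def test_new_order_starts_in_new_status():
    """Purpose: a freshly created order has status 'new'."""
    order = setup_order(items=["book"])
    assert order.status == "new"

def test_order_keeps_requested_items():
    """Purpose: the order records exactly the requested items."""
    order = setup_order(items=["book", "pen"])
    assert order.items == ["book", "pen"]

test_new_order_starts_in_new_status()
test_order_keeps_requested_items()
```

With this split, adding another single-purpose test costs one short function rather than another copy of the setup, which removes the temptation to pile scenarios into one script.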
Solution for the near future:
- Introduce the same in-house test framework for coded tests, without MS Excel.
- Use Behavior Driven Development with SpecFlow, so the scenarios are clearly defined and documented.
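In SpecFlow, scenarios are written as Gherkin feature files bound to C# step definitions; as a language-neutral sketch of how a Given/When/Then scenario documents one clear purpose, the same shape can be mimicked in plain Python (the login scenario, user data, and step names below are all invented for illustration):

```python
# Hedged sketch of a BDD-style scenario. Real SpecFlow uses Gherkin feature
# files with C# step bindings; this Python version only mimics the
# Given/When/Then shape. All names and data are hypothetical.

# Scenario: Successful login redirects to the dashboard
#   Given a registered user "alice"
#   When she logs in with the correct password
#   Then she lands on the dashboard

USERS = {"alice": "secret"}  # hypothetical test data

def given_a_registered_user(name):
    assert name in USERS
    return name

def when_user_logs_in(name, password):
    return "dashboard" if USERS.get(name) == password else "login-error"

def then_page_is(page, expected):
    assert page == expected

user = given_a_registered_user("alice")
page = when_user_logs_in(user, "secret")
then_page_is(page, "dashboard")
```

The scenario text at the top doubles as the documentation of the test's single purpose, which is what makes the BDD route attractive for this pattern.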
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Experiences to submit your experience or comment.