Design Patterns
Design patterns show how to design test automation testware so that it is efficient and easy to maintain.
The table below lists the Design Patterns with a short description. The Design Patterns Mind Map shows which other patterns are used by the design patterns.
| Pattern | Description |
|---|---|
| ABSTRACTION LEVELS | Build testware that has one or more abstraction layers. |
| AUTOMATE GOOD TESTS | Automate only the tests that bring the most Return on Investment (ROI). |
| AUTOMATE THE METRICS | Automate metrics collection. |
| CAPTURE-REPLAY | Capture a manual test with an appropriate tool and replay it to run the test again. |
| CHAINED TESTS | Automate the tests so that they run in a predefined sequence. |
| COMPARISON DESIGN | Design the comparison of test results to be as efficient as possible, balancing Dynamic and Post-Execution Comparison, and using a mixture of Sensitive and Robust/Specific comparisons. |
| DATA-DRIVEN TESTING | Write the test cases as scripts that read their data from external files (example below). |
| DATE INDEPENDENCE | Write your test cases to be date independent. |
| DEFAULT DATA | Use default data to simplify data input (example below). |
| DESIGN FOR REUSE | Design reusable testware. |
| DOMAIN-DRIVEN TESTING | Develop a domain-specific language for testers to write their automated test cases with. |
| DON'T REINVENT THE WHEEL | Use available know-how, tools and processes whenever possible. |
| FRESH SETUP | Each test prepares its initial conditions from scratch before executing. Tests don't clean up afterwards. |
| INDEPENDENT TEST CASES | Make each automated test case self-contained. |
| KEEP IT SIMPLE | Use the simplest solution you can imagine. |
| KEYWORD-DRIVEN TESTING | Tests are driven by keywords that represent actions of a test, and that include input data and expected results (example below). |
| MAINTAINABLE TESTWARE | Design your testware so that it does not have to be updated for every little change in the Software Under Test (SUT). |
| MODEL-BASED TESTING | Derive test cases from a model of the SUT, typically using an automated test case generator. |
| ONE CLEAR PURPOSE | Each test has only one clear purpose. |
| READABLE REPORTS | Design the reports to be easy for the recipient to read and understand. |
| RIGHT INTERACTION LEVEL (3) | Be aware of the interaction level of the test approach on the SUT and its risks (intrusion). |
| SENSITIVE COMPARE | Expected results are sensitive to changes beyond the specific test case. |
| SHARED SETUP | Data and other conditions are set for all tests before beginning the automated test suite. |
| SINGLE PAGE SCRIPTS (2) | Develop an automation script for each window or page (example below). |
| SPECIFIC COMPARE | Expected results are specific to the test case, so changes to objects not processed in the test case don't affect the test results. |
| TEMPLATE TEST | Define template test cases as a standard from which you can derive all kinds of test case variations. |
| TEST AUTOMATION FRAMEWORK | Use a test automation framework. |
| TEST SELECTOR | Implement your test cases so that you can turn on various selection criteria for whether or not a given test is included in an execution run (example below). |
| TESTWARE ARCHITECTURE | Design the structure of your testware so that your automators and testers can work as efficiently as possible. |
| THINK OUT-OF-THE-BOX | The best automation solutions are often the most creative. |
| TOOL INDEPENDENCE | Separate the technical implementation that is specific to the tool from the functional implementation. |
| VERIFY-ACT-VERIFY (4) | The action under test is surrounded by two verifications that check the initial and the final state (example below). |
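To make DATA-DRIVEN TESTING concrete, here is a minimal sketch using pytest (Python is used for all examples on this page purely as illustration). The file name `login_cases.csv`, its column names, and the `login()` stub are hypothetical stand-ins for your own data files and SUT:

```python
# DATA-DRIVEN TESTING sketch: the test script stays fixed, the data
# lives in an external CSV file. File and column names are hypothetical.
import csv
import pytest

def load_cases(path):
    """Read test data rows from an external CSV file."""
    with open(path, newline="") as f:
        return [(r["username"], r["password"], r["expected"])
                for r in csv.DictReader(f)]

def login(username, password):
    # Stand-in for the real SUT call; replace with your application code.
    return "ok" if username and password else "error"

@pytest.mark.parametrize("username,password,expected",
                         load_cases("login_cases.csv"))
def test_login(username, password, expected):
    assert login(username, password) == expected
```

Adding a new test case then means adding a row to the CSV file, not writing a new script.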
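DEFAULT DATA can be as simple as a builder function with sensible defaults, so each test spells out only the fields it actually cares about. The `make_customer` helper and its field names are invented for this sketch:

```python
# DEFAULT DATA sketch: defaults cover the boring fields, the test
# overrides only what is relevant to its purpose.
def make_customer(**overrides):
    data = {"name": "Default Customer", "country": "DE", "active": True}
    data.update(overrides)
    return data

def test_inactive_customer_cannot_order():
    customer = make_customer(active=False)  # only the relevant field differs
    assert customer["active"] is False
```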
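A toy interpreter shows the idea behind KEYWORD-DRIVEN TESTING: keywords name actions, and each table row supplies the input data and expected result. Real frameworks do this at scale; the keyword names and registry here are invented for the sketch:

```python
# KEYWORD-DRIVEN TESTING sketch: a registry maps keyword names to
# actions; a test is a table of (keyword, data, expected/target) rows.
ACTIONS = {}

def keyword(name):
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@keyword("enter")
def enter(field, value, state):
    state[field] = value

@keyword("check")
def check(field, expected, state):
    assert state.get(field) == expected, \
        f"{field}: {state.get(field)!r} != {expected!r}"

def run_test(rows):
    """Execute each row by dispatching on its keyword."""
    state = {}
    for kw, arg1, arg2 in rows:
        ACTIONS[kw](arg1, arg2, state)

run_test([
    ("enter", "amount", "100"),
    ("check", "amount", "100"),
])
```

Testers can then write new tests as keyword tables without touching the implementation of the actions.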
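SINGLE PAGE SCRIPTS is the PAGE OBJECT pattern from the Selenium community (see footnote 2): one class encapsulates one page, so tests never touch locators directly. The Selenium calls below are the standard Selenium 4 Python API, but the locator IDs and URL are hypothetical:

```python
# SINGLE PAGE SCRIPTS / PAGE OBJECT sketch: one class per page.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        # All knowledge of this page's locators lives here, so a UI
        # change touches one class instead of every test script.
        self.driver.find_element(By.ID, "user").send_keys(username)
        self.driver.find_element(By.ID, "pass").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_valid_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # hypothetical URL
        LoginPage(driver).login("alice", "secret")
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```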
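One common way to implement TEST SELECTOR is with pytest marks, choosing at the command line which categories to run. The marker names `smoke` and `regression` are conventions you would define yourself:

```python
# TEST SELECTOR sketch: marks act as selection criteria.
import pytest

@pytest.mark.smoke
def test_service_is_up():
    assert True

@pytest.mark.regression
def test_full_workflow():
    assert True

# Select at run time, e.g.:
#   pytest -m smoke              -> run only the smoke tests
#   pytest -m "not regression"   -> exclude the regression tests
# Register the markers in pytest.ini to avoid warnings:
#   [pytest]
#   markers =
#       smoke: quick health checks
#       regression: full-coverage tests
```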
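VERIFY-ACT-VERIFY in miniature: assert the known initial state, perform the single action under test, assert the expected final state. The `Account` class is a hypothetical SUT stand-in:

```python
# VERIFY-ACT-VERIFY sketch: two verifications bracket one action.
class Account:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount

def test_withdraw():
    account = Account(balance=100)
    assert account.balance == 100   # verify: known initial state
    account.withdraw(30)            # act: the behaviour under test
    assert account.balance == 70    # verify: expected final state
```

The first verification ensures a failure in the final check points at the action itself, not at a broken precondition.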
Main Page
Back to Test Automation Patterns
Back to Management Patterns
Forward to Execution Patterns
[1] Suggested by Thorsten Schönfelder.
[2] This pattern is known as PAGE OBJECT in the Selenium community and was suggested by Lisa Crispin.
[3] Suggested by Bryan Bakker.
[4] Suggested by the testing team at BREDEX.