Automation scripts have to be reworked for any small change to the Software Under Test (SUT)
Scripts are created using the capture (record) functionality of an automation tool. If the application has changed in the meantime, the tests will break unless they are recorded anew.
A small change to the application (such as moving something to a different screen, or changing the text of a button) causes many scripts to fail, because this information is embedded in the scripts of many tests.
How do you develop automation scripts? (Capture is not good long-term though ok for short complex actions.)
Is there repeated code in any scripts? (Keep your automation code DRY - Don't Repeat Yourself - put common code and actions into scripts called by other scripts.)
What kinds of changes are most likely to happen to the application? (Make your automation most flexible for those changes.)
- TESTWARE ARCHITECTURE: This pattern encompasses the overall approach to the automation artefacts, and is best thought about right from the start of automation so that you can avoid having this issue. This pattern is implemented using:
- ABSTRACTION LEVELS: This is the pattern to apply if you want to delegate some of the maintenance effort to the testers. It will enable you to write test cases that are independent of both the SUT and the technical implementation of the automation (including the tool(s)).
- MAINTAINABLE TESTWARE: This is the pattern to apply if you want to get rid of the issue once and for all. If you haven't implemented it yet, you may want to apply at least some aspects of this pattern.
- MANAGEMENT SUPPORT: This is the pattern to apply if you are missing support or resources that you need in order to develop MAINTAINABLE TESTWARE.
- MODEL-BASED TESTING: This pattern involves considerable effort at the beginning, but is the most efficient in the long run. Using a test model, the test cases can be cleanly separated from the technical details. Frequently used sequences of test steps can be defined as reusable and parametrisable building blocks. If the SUT changes, usually only a few building blocks need to be adapted while the test scripts are updated automatically.
- COMPARISON DESIGN: Design the comparison of test results to be as efficient as possible, balancing Dynamic and Post-Execution Comparison, and using a mixture of Sensitive and Robust/Specific comparisons.
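ABSTRACTION LEVELS can be sketched as a simple keyword-driven layer. The keyword names and the adapter functions below are invented for illustration: testers write high-level keyword rows that mention neither the tool nor screen details, and a thin adapter maps each keyword onto the current tool's actions. If the SUT or the tool changes, only the adapter is reworked; the test cases stay as they are.

```python
# High-level test case: tool- and implementation-independent keyword rows.
test_case = [
    ("login", "alice", "secret"),
    ("open_screen", "accounts"),
    ("check_title", "Accounts"),
]

actions_run = []  # records what the adapter did, standing in for real tool calls

# Low-level adapter: how each keyword is carried out with the current tool.
# Only this layer changes when the SUT or the automation tool changes.
def kw_login(user, password):
    actions_run.append(f"login({user})")

def kw_open_screen(name):
    actions_run.append(f"open({name})")

def kw_check_title(expected):
    actions_run.append(f"check({expected})")

KEYWORDS = {"login": kw_login, "open_screen": kw_open_screen, "check_title": kw_check_title}

def run(test_case):
    for keyword, *args in test_case:
        KEYWORDS[keyword](*args)

run(test_case)
print(actions_run)  # prints ['login(alice)', 'open(accounts)', 'check(Accounts)']
```

This is also the mechanism behind the reusable, parametrisable building blocks mentioned under MODEL-BASED TESTING: each keyword is a building block that many test cases share.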
Other useful patterns:
- GOOD PROGRAMMING PRACTICES: This pattern should already be in use! If not, you should apply it for all new automation efforts. Apply it also every time you have to change current testware.
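The Sensitive versus Robust comparison trade-off in COMPARISON DESIGN can be illustrated with a small sketch (the field names are invented for the example): a sensitive comparison checks the whole output and flags any difference, while a robust comparison checks only the fields the test cares about, so volatile data such as a timestamp does not cause false failures.

```python
# Actual output from the SUT, including a volatile timestamp field.
actual = {"balance": "100.00", "currency": "EUR", "timestamp": "2024-05-01T10:00:00"}

def sensitive_compare(actual, expected):
    # Sensitive: fails on ANY difference, including volatile fields.
    return actual == expected

def robust_compare(actual, expected, fields):
    # Robust: checks only the listed fields, ignoring the rest.
    return all(actual.get(f) == expected.get(f) for f in fields)

expected = {"balance": "100.00", "currency": "EUR", "timestamp": "2024-04-30T09:00:00"}

print(sensitive_compare(actual, expected))                        # prints False: timestamp differs
print(robust_compare(actual, expected, ["balance", "currency"]))  # prints True: relevant fields match
```

A practical mix uses a few sensitive comparisons to catch unexpected changes anywhere, and robust comparisons for the bulk of the tests to keep maintenance low.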