Maintenance expectations not met

From Test Automation Patterns
Revision as of 09:48, 26 June 2018 by Seretta (talk | contribs)
Please select what you think is the main reason for the maintenance expectations not being met:

  1. Maintenance costs too high. Maintaining the automation is (or is considered to be) too expensive.
  2. Not reusing existing data, or other problems with test data. If it's faster (and easier) to create new data than to reuse existing data, or no one knows what data is available to use, the issue to look at is TEST DATA LOSS. If test data can only be used once before it gets corrupted, is only valid for one release, or has to be updated from release to release, the issue to look at is INCONSISTENT DATA.
  3. Other people have trouble understanding what the tests are, what the scripts do, or how the automation works. The issue to look at is INADEQUATE DOCUMENTATION.
  4. You have written all your scripts in the language and structure of your current tool, but that tool is no longer appropriate for you. The issue to look up is TOOL DEPENDENCY.
  5. Documentation is non-existent, so only the developer of the automation can work with it. The issue OBSCURE TESTS will give you a good start on how to solve this problem.
  6. It's difficult to match the automated scripts to the correct release of the Software Under Test (SUT). Look up the issue INADEQUATE REVISION CONTROL for suggestions.


Return to Improve or revive test Automation