HARD-TO-AUTOMATE RESULTS


Issue Summary

Preparing the expected results is slow and difficult

Category

Design

Examples

  1. Results can only be compared through the GUI
  2. Results are spread across different databases or media, and it is difficult to combine them in a meaningful way
  3. Results change randomly (Examples 3 and 4; see the comparison sketch after this list)
  4. Results change from release to release
  5. Results depend on which test cases have been run before
  6. API or AI tests deliver unpredictable results
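
One common way to cope with results that change randomly or from release to release (Examples 3 and 4) is to normalise the actual output before comparing it with the expected result, masking volatile values such as timestamps or generated IDs. The sketch below is only an illustration, not part of this wiki's patterns; the regular expressions and placeholders are assumptions you would replace with whatever actually varies in your own results.

  import re

  # Assumed volatile values: replace these patterns with the ones that vary in your results.
  VOLATILE_PATTERNS = [
      (re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),  # date-time stamps
      (re.compile(r"order-\d+"), "order-<ID>"),                                # generated identifiers
  ]

  def normalise(text: str) -> str:
      """Replace volatile values with stable placeholders so the comparison ignores them."""
      for pattern, placeholder in VOLATILE_PATTERNS:
          text = pattern.sub(placeholder, text)
      return text

  def results_match(actual: str, expected: str) -> bool:
      """Compare actual and expected results after both have been normalised."""
      return normalise(actual) == normalise(expected)

The same idea scales up to whole report files or database extracts: decide which parts of the result really matter and compare only those, instead of trying to predict every volatile detail in advance.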

Questions

How complex are the automated test cases? Can they be split up?
Are complex "scenario" test cases really necessary?

Resolving Patterns

Most recommended:

  • FRESH SETUP: apply this pattern if you have issues like Example 5 (see the sketch after this list)
  • DEDICATED RESOURCES: look up this pattern if you have to share resources
  • THINK OUT-OF-THE-BOX: try to look at the problem from unusual viewpoints. This is especially important for issues like Example 6
  • WHOLE TEAM APPROACH: if your team follows an agile development process, this is the pattern to use to avoid these kinds of problems from the beginning
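
To make FRESH SETUP concrete, here is a minimal sketch of an autouse pytest fixture that rebuilds a known starting state before every test, so results no longer depend on which test cases ran earlier (Example 5). The helper functions are hypothetical stubs; substitute whatever set-up your system under test actually needs.

  import pytest

  # Hypothetical helpers: replace these stubs with your own set-up code.
  def restore_reference_database():
      """Reload the baseline data set that the expected results were prepared against."""

  def clear_caches():
      """Drop any state left behind by earlier tests."""

  @pytest.fixture(autouse=True)
  def fresh_setup():
      # FRESH SETUP: every test starts from the same known state,
      # so its results cannot depend on which tests ran before it.
      restore_reference_database()
      clear_caches()
      yield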


Other useful patterns:

