FALSE FAIL

From Test Automation Patterns

Issue summary

The tests fail not because of defects in the SUT, but because of errors in the test automation testware or because of issues in the test environment.

Category

Execution

Examples

  1. Tests fail because a window pops up that was not considered when developing the automation
  2. Tests fail because the initial conditions have been corrupted by another test
  3. A test always passes if it runs by itself but always fails when run in the test suite
  4. Tests fail because something else is running on the same machine
  5. Tests fail because the database has been corrupted or changed by another application
  6. Tests fail because you are using a SENSITIVE COMPARE and so changes that have nothing to do with your test case affect the results

Questions

Are the initial conditions set correctly?
Are there dependencies between the test cases?
Have the tests been sufficiently tested?

Resolving Patterns

Most recommended:

  • COMPARISON DESIGN: design the comparison of test results to be as efficient as possible, balancing Dynamic and Post-Execution Comparison and using a mixture of Sensitive and Robust/Specific comparisons.
  • DEDICATED RESOURCES: Use this pattern if you have issues similar to Examples 4. or 5.
  • FRESH SETUP: This pattern guards against issues like Example 2.
  • INDEPENDENT TEST CASES: apply this pattern to get rid of issues like Example 3.
  • RIGHT INTERACTION LEVEL: make sure that your tests interact with the SUT at the most effective level
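FRESH SETUP and INDEPENDENT TEST CASES work together: rebuild the preconditions before every test instead of inheriting state from the previous one. A minimal sketch using Python's standard `unittest` module (the `Cart` stand-in and test names are illustrative assumptions, not part of any pattern definition):

```python
import unittest

class Cart:
    """Illustrative stand-in for a small piece of the SUT."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class CartTests(unittest.TestCase):
    def setUp(self):
        # FRESH SETUP: every test gets a brand-new Cart, so no test can
        # corrupt the initial conditions of another (Example 2), and each
        # test behaves the same alone or in the suite (Example 3).
        self.cart = Cart()

    def test_add_item(self):
        self.cart.add("book")
        self.assertEqual(self.cart.items, ["book"])

    def test_cart_starts_empty(self):
        self.assertEqual(self.cart.items, [])
```

Because `setUp` runs before each test method, the tests are independent by construction and can run in any order.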


Other useful patterns:

  • MAINTAIN THE TESTWARE: apply this pattern to make sure that your automation keeps working even if the SUT is being changed
  • SHARE INFORMATION: this pattern helps you find out in good time when development is planning changes, big or small, that affect the automation
  • SPECIFIC COMPARE: Expected results are specific to the test case, so changes to objects not processed in the test case don't affect the test results. Use this pattern for issues like Example 6
  • TEST THE TESTS: use this pattern always (even for quick fixes!)
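The difference between a SENSITIVE COMPARE and a SPECIFIC COMPARE can be shown with a small sketch. The record and field names below are hypothetical, chosen only to illustrate Example 6:

```python
# Actual output from the SUT: it now carries an extra "last_login" field
# that this test case never touched.
actual = {"name": "Ada", "role": "admin", "last_login": "2018-04-04"}

# SENSITIVE COMPARE (fragile): the whole record must match, so the new,
# unrelated field would make the test fail -- a FALSE FAIL (Example 6).
#     assert actual == {"name": "Ada", "role": "admin"}   # would fail

# SPECIFIC COMPARE (robust): check only the fields this test case is about.
expected = {"name": "Ada", "role": "admin"}
assert all(actual[k] == v for k, v in expected.items())
```

The specific comparison still catches a wrong `name` or `role`, but unrelated changes to the record no longer break the test.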
