FALSE PASS


Issue summary

The tests pass even when the SUT actually behaves erroneously

Category

Execution

Examples

  1. The automated tests drive the SUT but don't compare the actual results with expected results, so they always pass (see the sketch after this list)
  2. The new results are erroneously written over the expected results, so the comparisons always pass
  3. Test automation has been developed against an erroneous SUT, and its results have been accepted as the expected results without first checking that they are correct
  4. The tests aren't picking up bugs even though they are comparing the right results and should find them
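A minimal sketch of the first example, using a stand-in SUT invented here (calculate_invoice_total and its figures are not from the wiki): the first test drives the SUT but never compares the result, so it can never fail; the second adds the comparison and catches the seeded bug.

  # Sketch only: calculate_invoice_total is a stand-in SUT invented for this example.
  def calculate_invoice_total(items):
      """Stand-in SUT with a deliberate bug: it ignores quantities."""
      return sum(price for _name, _qty, price in items)

  def test_invoice_total_false_pass():
      # Drives the SUT but never checks the result: this test can never fail.
      calculate_invoice_total([("widget", 2, 9.99)])

  def test_invoice_total_checked():
      # Same action, but the result is compared with an expected value,
      # so the buggy stand-in SUT makes this test fail as it should.
      total = calculate_invoice_total([("widget", 2, 9.99)])
      assert total == 19.98

Run under pytest, the unchecked test passes despite the bug (a FALSE PASS), while the checked test fails, which is the behaviour you want.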

Questions

How are the expected results collected?
Have the tests been sufficiently tested?

Resolving Patterns

Most recommended:

  • COMPARISON DESIGN: design the comparison of test results to be as efficient as possible, balancing Dynamic and Post-Execution Comparison and using a mixture of Sensitive and Robust/Specific comparisons.
  • RIGHT INTERACTION LEVEL: make sure that your tests interact with the SUT at the most effective Level
  • TEST THE TESTS: This is an important pattern to make sure that your tests are doing what you think they are doing (see the sketch after this list).
  • VERIFY-ACT-VERIFY: use this pattern if you cannot apply FRESH SETUP.
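
One way to apply TEST THE TESTS is to seed a known fault and confirm that the comparison reports it. The sketch below uses an assumed testware helper (compare_report) and sample data invented for illustration; it is not from the pattern itself.

  # Illustrative sketch: compare_report is an assumed testware helper, not from the wiki.
  def compare_report(actual: dict, expected: dict) -> bool:
      """Return True when the actual report matches the expected one."""
      return actual == expected

  def test_the_comparison_detects_a_seeded_fault():
      expected = {"customer": "ACME", "total": 100}
      seeded_fault = {"customer": "ACME", "total": 0}  # deliberately wrong result
      # If this assertion fails, the comparison is too weak and would let a FALSE PASS through.
      assert compare_report(seeded_fault, expected) is False

If the seeded fault slips through, the comparison itself is broken and every test that relies on it is suspect.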


Other useful Patterns:

  • MAINTAIN THE TESTWARE: use this pattern to keep your automation workable
  • SENSITIVE COMPARE: If the tests are missing bugs because they compare too narrow a set of results (e.g. only the changed fields), using sensitive comparison can catch unexpected results, so that the test (correctly) fails (illustrated below).
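
A rough illustration of that difference, with a record layout and helper names assumed for this sketch only: a specific comparison that checks just the field the test changed lets a corrupted field slip through, while a sensitive comparison of the whole record exposes it.

  # Sketch only: the record layout and helpers are assumed for illustration.
  def specific_compare(actual: dict, expected: dict) -> bool:
      # Checks only the field this test deliberately changed.
      return actual["status"] == expected["status"]

  def sensitive_compare(actual: dict, expected: dict) -> bool:
      # Checks the whole record, so unexpected side effects are caught.
      return actual == expected

  expected = {"status": "SHIPPED", "quantity": 5}
  actual = {"status": "SHIPPED", "quantity": 0}  # quantity corrupted by a bug

  assert specific_compare(actual, expected)        # FALSE PASS: the bug is missed
  assert not sensitive_compare(actual, expected)   # sensitive comparison exposes it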
