SPECIFIC COMPARE


Pattern summary

Expected results are specific to the test case, so changes to objects not processed by the test case don't affect the test results.

Category

Design

Context

This pattern is applicable when your automated tests will be around for a long time, and/or when there are frequent changes to the SUT.
This pattern is not applicable for one-off or disposable scripts.

Description

The expected results check only that what has been performed in the test is correct. For example, if a test changes just two fields, only those fields are checked, not the rest of the window or screen containing them.
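
For instance, a test of a screen that changes two fields might look like this minimal Python sketch, where the app fixture and its field helpers are hypothetical stand-ins for whatever driver your framework provides:

  def test_update_contact_details(app):
      app.set_field("email", "j.smith@example.com")
      app.set_field("phone", "555-0100")
      app.save()
      # SPECIFIC COMPARE: assert only the two fields this test changed,
      # not the rest of the window, so unrelated UI changes don't fail it.
      form = app.read_form()
      assert form["email"] == "j.smith@example.com"
      assert form["phone"] == "555-0100"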

Implementation

Implementation depends strongly on what you are testing. Some ideas:

  • Extract from a database only the data that is processed by the test case
  • When checking a log, first delete all entries that don't directly pertain to the test case (see the sketch after this list)
  • On the GUI, check only the objects touched by the test case
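
A minimal sketch of the log-filtering idea, assuming a hypothetical log format of one "timestamp component message" entry per line; the helper and fixture names are invented for illustration:

  def relevant_entries(log_lines, components):
      # Keep only the entries emitted by components the test case touches.
      kept = []
      for line in log_lines:
          parts = line.split(maxsplit=2)
          if len(parts) == 3 and parts[1] in components:
              # Drop the timestamp so reruns compare equal.
              kept.append(parts[1] + " " + parts[2])
      return kept

  def test_order_logging(sut_log):
      # SPECIFIC COMPARE: entries from all other components are ignored.
      actual = relevant_entries(sut_log.lines(), {"OrderService"})
      assert actual == [
          "OrderService order 4711 created",
          "OrderService order 4711 confirmed",
      ]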

Potential problems

If all your test cases use this pattern, you could miss important changes and get a FALSE PASS. It makes sense to have at least some test cases that use a SENSITIVE COMPARE.
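
For contrast, a SENSITIVE COMPARE might look like this sketch, where the db and golden helpers are hypothetical: snapshot the whole options table and compare it against a stored golden copy, so any change, intended or not, is flagged:

  def test_options_table_full_snapshot(db, golden):
      # SENSITIVE COMPARE: every row and column takes part in the check,
      # so even changes this test never made will surface here.
      actual = sorted(db.fetch_all("options"))
      assert actual == golden.load("options_table.json")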

Issues addressed by this pattern

BRITTLE SCRIPTS
FALSE FAIL

Experiences

Seretta:
I had to test whether some options were set or not. To check whether the test case had passed, I simply extracted the complete options table from the database. That worked fine for a while, but then the developers added an option and all my test cases failed. Why? Because the changed table was no longer identical to the expected result, even though the test case had actually passed!
I then applied this pattern: I extracted from the table only the option that the test case had handled, and voilà, all my test cases were passing again!
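
Roughly what that fix looks like in code, as a sketch; the database file, table, and column names are invented for illustration:

  import sqlite3

  def option_value(conn, name):
      # Read only the one option the test case handled, instead of
      # dumping and diffing the whole options table.
      row = conn.execute(
          "SELECT value FROM options WHERE name = ?", (name,)
      ).fetchone()
      return row[0] if row else None

  # Usage: after the test steps have switched the option on, the check
  # no longer breaks when developers add new options to the table.
  conn = sqlite3.connect("sut.db")
  assert option_value(conn, "auto_save") == "on"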