SENSITIVE COMPARE

Pattern summary

Expected results are sensitive to changes beyond the specific test case

Category

Design

Context

This pattern is applicable when your automated tests will be around for a long time, and/or when there are frequent changes to the SUT.
This pattern is not applicable for one-off or disposable scripts.

Description

The expected-results comparison covers a large amount of information, more than just what the test case might have changed: for example, comparing an entire screen or window (possibly masking out some data). Sensitive tests are likely to find unexpected differences and regression defects.

Implementation

Implementation depends strongly on what you are testing. Some ideas:

  • Extract from the database the entire contents of every table touched by processing the test case (see the sketch after this list)
  • Check the whole log, not only the parts directly pertaining to the test case
  • On the GUI, check all the objects on each page
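
As an illustration of the first idea, here is a minimal sketch in Python, assuming a SQLite database and a JSON baseline captured from a known-good run; the table name and file paths are hypothetical examples.

 # Minimal sketch of a sensitive compare against a whole database table.
 # Assumes SQLite and a previously captured JSON baseline; "orders" and
 # the baseline path are hypothetical.
 import json
 import sqlite3

 def snapshot_table(db_path, table):
     """Capture every row of the table, not just rows the test case touched."""
     with sqlite3.connect(db_path) as conn:
         return conn.execute(f"SELECT * FROM {table} ORDER BY rowid").fetchall()

 def sensitive_compare(db_path, table, baseline_path):
     actual = snapshot_table(db_path, table)
     with open(baseline_path) as f:
         expected = [tuple(row) for row in json.load(f)]
     # Any difference anywhere in the table fails the check, even in rows
     # the test case never deliberately changed.
     assert actual == expected, f"Unexpected differences in table {table}"

 sensitive_compare("sut.db", "orders", "baselines/orders.json")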


If you are checking the whole of a window or screen, you may want to mask out data that you are not interested in, such as the date and time of the test run. Otherwise the date/time would show up as a difference in every comparison, which tells you nothing useful.
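
A minimal sketch of such masking, assuming the volatile data is a timestamp in a known format; adapt the pattern to whatever your SUT actually emits.

 # Mask volatile data before a whole-log (or whole-screen text) compare.
 # The timestamp format is an assumed example.
 import re

 TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

 def mask_volatile(text):
     """Replace date/time values so they cannot show up as differences."""
     return TIMESTAMP.sub("<TIMESTAMP>", text)

 def compare_whole_log(actual, expected):
     # Everything except the masked-out volatile fields must match exactly.
     assert mask_volatile(actual) == mask_volatile(expected)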

Potential problems

If all your test cases use this pattern, you will probably get frequent FALSE FAIL results. It makes sense to have at least some test cases use this pattern, for example in a smoke test or high-level regression test; other tests should use SPECIFIC COMPARE.
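
For contrast, a specific compare asserts only what the test case itself changed; a minimal sketch with hypothetical field names:

 def specific_compare(order):
     # Only the value this test case set is checked, so unrelated changes
     # elsewhere in the SUT cannot trigger a FALSE FAIL here.
     assert order["status"] == "shipped"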

Issues addressed by this pattern

FALSE PASS

Experiences

If you have used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.

