SPECIFIC COMPARE


Pattern summary

Expected results are specific to the test case, so changes to objects that are not processed in the test case do not affect the test results.

Category

Design

Context

This pattern is applicable when your automated tests will be around for a long time, and/or when there are frequent changes to the SUT.
This pattern is not applicable to one-off or disposable scripts.

Description

The expected results check only that what was performed in the test is correct. For example, if a test changes just two fields, only those fields are checked, not the rest of the window or screen containing them.
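A minimal sketch of such a field-level compare in Python is shown below; the read_customer_screen() helper and the field names are illustrative assumptions, not part of the pattern.

  def read_customer_screen():
      # Stand-in for however your tool reads the fields currently shown on the screen.
      return {"name": "J. Smith", "street": "Main Road 7",
              "city": "Springfield", "phone": "555-0100"}

  def test_update_address():
      # ... test steps that change the street and city fields would go here ...
      actual = read_customer_screen()
      expected = {"street": "Main Road 7", "city": "Springfield"}

      # SPECIFIC COMPARE: assert only on the fields this test changed;
      # the other fields on the screen (name, phone, ...) are ignored.
      for field, value in expected.items():
          assert actual[field] == value, f"{field}: expected {value!r}, got {actual[field]!r}"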

Implementation

Implementation depends strongly on what you are testing. Some ideas:

  • Extract from a database only the data that is processed by the test case (see the sketch after this list)
  • When checking a log, delete first all entries that don't directly pertain to the test case
  • On the GUI check only the objects touched by the test case
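
A sketch of the first idea, assuming a small SQLite options table, is given below; the table layout, the option name and the helper function are illustrative assumptions only.

  import sqlite3

  def get_option(conn, name):
      # Read a single option instead of dumping the whole options table.
      row = conn.execute("SELECT value FROM options WHERE name = ?", (name,)).fetchone()
      return row[0] if row else None

  def test_enable_logging_option():
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE options (name TEXT PRIMARY KEY, value TEXT)")
      # ... the test steps that switch the 'logging' option on would go here;
      # the insert below merely stands in for the SUT's own behaviour ...
      conn.execute("INSERT INTO options VALUES ('logging', 'on')")

      # SPECIFIC COMPARE: check only the option handled by this test case,
      # so options added later by the developers do not break the test.
      assert get_option(conn, "logging") == "on"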

Potential problems

If all your test cases use this pattern, you could miss important changes and get a FALSE PASS. It makes sense to have at least some test cases that use a SENSITIVE COMPARE (see the sketch below).
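One way to keep that safety net, continuing the SQLite sketch above (table layout and values are illustrative assumptions), is to dedicate a single test case to comparing the whole options table against a maintained baseline.

  import sqlite3

  def test_options_table_baseline():
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE options (name TEXT PRIMARY KEY, value TEXT)")
      conn.executemany("INSERT INTO options VALUES (?, ?)",
                       [("logging", "on"), ("tracing", "off")])

      # SENSITIVE COMPARE: compare the complete table against a maintained baseline,
      # so any added, removed or changed option fails this one test case
      # instead of breaking every specific test.
      expected_baseline = [("logging", "on"), ("tracing", "off")]
      actual = conn.execute("SELECT name, value FROM options ORDER BY name").fetchall()
      assert actual == expected_baseline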

Issues addressed by this pattern

BRITTLE SCRIPTS
FALSE FAIL

Experiences

Seretta:
I had to test whether some options were set or not. To check if the test case had passed, I simply extracted the complete options table from the database. That worked fine for a while, but then the developers added an option and all my test cases failed. Why? Because the changed table was no longer identical to the expected result, even though the test case had actually passed!
I then applied this pattern: I extracted from the table only the option that the test case had handled, and voilà, all my test cases were passing again!


If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.


.................................................................................................................Main Page / Back to Design Patterns / Back to Test Automation Patterns