COMPARISON DESIGN


Pattern summary

Design the comparison of test results to be as efficient as possible, balancing Dynamic and Post-Execution Comparison and using a mixture of Sensitive and Robust/Specific comparisons.

Category

Design

Context

This pattern is applicable to any automated test.

Description

Automated comparison of test results can be done while a test is running (Dynamic Comparison) or after a test has completed (Post-Execution Comparison). Comparisons whose results could influence the progress of a test should be done during the test, while comparing the contents of a file or database is best done after the test has completed. Choosing the right type of comparison makes the automation more efficient.

Test sensitivity is related to the amount that is compared in a single comparison. A SENSITIVE COMPARE compares as much as possible, e.g. a whole screen, and a SPECIFIC COMPARE compares the minimum that is useful for the test, e.g. a single field.

Implementation

Dynamic comparisons are programmed into the script of a test so that they are carried out during the execution of that test.
Post-execution comparisons are carried out as a separate step after a test has completed execution, either as part of the post-processing for that test or as a separate activity.
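
For illustration, here is a minimal sketch in Python (the app object, its methods and the file names are hypothetical stand-ins for whatever your tool and application provide): the dynamic comparison is an assertion inside the test script, while the post-execution comparison runs over saved output once the test has finished.

 # Minimal sketch: 'app' stands in for whatever driver or fixture your tool provides.

 def test_place_order(app):
     app.login("buyer1")
     # Dynamic comparison: checked while the test runs, because the result
     # decides whether it is worth carrying on with the remaining steps.
     assert app.current_screen() == "Order Entry", "wrong screen - abort test"

     app.place_order(item="A100", quantity=2)
     app.export_orders("actual_orders.csv")  # output is saved now, compared later

 def compare_order_files(actual="actual_orders.csv",
                         expected="expected_orders.csv"):
     # Post-execution comparison: a separate step run after the test has
     # completed, e.g. from a post-processing script or an overnight job.
     with open(actual) as got, open(expected) as want:
         return got.read() == want.read()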

Sensitive tests look at a large amount of information, such as an entire screen or window (possibly using masks or filters to exclude any output that this test is not interested in). See SENSITIVE COMPARE.

Specific tests look at only the specific information that is of interest to a particular test. See SPECIFIC COMPARE.
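
As a rough sketch of the difference (again the app object and its read_field method are hypothetical), a Sensitive compare checks the whole captured screen, masking out values that legitimately vary, while a Specific compare checks just one field:

 import re

 # Sketch: 'app' and read_field() are hypothetical stand-ins for your tool's API.

 def sensitive_compare(actual_screen, expected_screen):
     # SENSITIVE COMPARE: compare as much as possible (the whole screen),
     # masking out values that legitimately vary, such as times.
     mask = re.compile(r"\d{2}:\d{2}:\d{2}")
     return mask.sub("<TIME>", actual_screen) == mask.sub("<TIME>", expected_screen)

 def specific_compare(app):
     # SPECIFIC COMPARE: compare only the minimum this test is interested in,
     # e.g. a single field, so unrelated changes cannot make it fail.
     return app.read_field("order_total") == "42.00"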

Use Sensitive tests for high-level Smoke tests or Breadth tests, and use Specific tests for Depth tests or detailed tests of functions and features.

Potential problems

If you have Dynamic comparisons that would be better as Post-Execution comparisons, your tests will take much longer to run than is necessary.

If you have Post-Execution comparisons that would be better as Dynamic comparisons, you won't be able to use the intermediate results of tests to skip over irrelevant steps or to abort tests that are not worth continuing, so you will waste time and the tests will run for longer than is necessary.
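
For example (a sketch with hypothetical step names), a Dynamic comparison lets the script react to an intermediate result straight away, which a Post-Execution comparison cannot do:

 # Sketch: step names are hypothetical.
 def test_batch_import(app):
     # Dynamic comparison used to abort early: if the upload failed, the
     # remaining steps would only waste time, so stop the test here.
     assert app.upload_file("customers.csv"), "upload failed - aborting test"

     # Dynamic comparison used to skip an irrelevant step: only check the
     # duplicate-warning screen if duplicates were actually reported.
     if app.duplicates_reported():
         assert app.current_screen() == "Duplicate Warning"

     assert app.record_count() == 100
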
If all of your tests are Specific, you will miss unexpected changes, which could be serious bugs, so your tests will pass even though there are problems that should fail them (FALSE PASS).

If all of your tests are Sensitive, every unexpected problem will trip up all of your tests, even though you are not interested in it after the first time (FALSE FAIL). You could use EXPECTED FAIL STATUS to overcome this if you can't change enough of the tests to be Specific. Sensitive tests will also take longer to run, as there is more checking to do for each test.
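
One way to approximate EXPECTED FAIL STATUS is to mark the tests affected by a known problem so that their failure is reported as expected rather than as a fresh failure on every run; in pytest, for instance, this could look like the following sketch (the helper load_expected and the bug number are illustrative only):

 import pytest

 # Sketch only: load_expected() and the bug number are illustrative. The
 # sensitive compare still runs, but a failure caused by the already-known
 # problem is reported as an expected failure rather than a new one.
 @pytest.mark.xfail(reason="known cosmetic change on the summary screen (bug #123)")
 def test_summary_screen(app):
     assert app.screen_text("Summary") == load_expected("summary_screen.txt")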

Issues addressed by this pattern

FALSE FAIL
FALSE PASS
INEFFICIENT FAILURE ANALYSIS

Experiences

If you have used this pattern and would like to contribute your experience to the wiki, please go to Experiences to submit your experience or comment.
