COMPARISON DESIGN


Pattern summary

Design the comparison of test results to be as efficient as possible, balancing Dynamic and Post-Execution Comparison, and using a mixture of Sensitive and Robust/Specific comparisons.

Category

Design

Context

This pattern is applicable to any automated test.

Description

Automated comparison of test results can be done while a test is running (Dynamic Comparison) or after a test has completed (Post-Execution Comparison). Comparisons whose results could influence the progress of a test should be done during the test, but comparing the contents of a file or database is best done after the test has completed. Choosing the right type of comparison for each check gives more efficient automation.

Test sensitivity refers to the amount of information compared in a single comparison. A SENSITIVE COMPARE compares as much as possible, e.g. a whole screen, and a SPECIFIC COMPARE compares the minimum that is useful for the test, e.g. a single field.

Implementation

Dynamic comparisons are programmed into the script of a test so that they are carried out during the execution of that test.
Post-execution comparisons are carried out after a test has completed execution, either as part of the post-processing for that test or as a separate activity.
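
As a rough illustration of the two comparison points, here is a Python sketch; the AppDriver stub, its methods, and the file names are hypothetical placeholders rather than anything prescribed by the pattern:

 import filecmp

 class AppDriver:
     """Stand-in for whatever drives the system under test (hypothetical)."""
     def submit_order(self, item, quantity): pass
     def read_status(self): return "Order accepted"
     def confirm_payment(self): pass

 # Dynamic comparison: programmed into the test script and executed mid-test,
 # so its outcome is available while the test is still running.
 def test_order_submission():
     app = AppDriver()
     app.submit_order("widget", quantity=3)
     assert app.read_status() == "Order accepted"   # checked immediately
     app.confirm_payment()

 # Post-execution comparison: a separate step run after the test has finished,
 # comparing a file the test produced with a stored expected-results file.
 def compare_order_report(actual="output/orders.csv", expected="expected/orders.csv"):
     return filecmp.cmp(actual, expected, shallow=False)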

Sensitive tests look at a large amount of information, such as an entire screen or window (possibly using masks or filters to exclude any outputs that this particular test is not interested in). See SENSITIVE COMPARE.

Specific tests look at only the specific information that is of interest to a particular test. See SPECIFIC COMPARE.
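
A minimal Python sketch of the difference, assuming the screen has already been captured as a dictionary of field names and values; the field names and the mask are illustrative only:

 # Sensitive compare: check the whole captured screen, masking out fields
 # (such as timestamps) that this test is deliberately not checking.
 def sensitive_compare(actual_screen, expected_screen, mask=("timestamp",)):
     actual = {k: v for k, v in actual_screen.items() if k not in mask}
     expected = {k: v for k, v in expected_screen.items() if k not in mask}
     return actual == expected

 # Specific compare: check only the single field this test is interested in.
 def specific_compare(actual_screen, field, expected_value):
     return actual_screen.get(field) == expected_value

 screen = {"total": "42.00", "status": "OK", "timestamp": "2019-02-22 10:19"}
 expected = {"total": "42.00", "status": "OK", "timestamp": "(ignored)"}
 print(sensitive_compare(screen, expected))          # True: everything outside the mask matches
 print(specific_compare(screen, "total", "42.00"))   # True: only the field of interest is checked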

Use Sensitive tests for high-level Smoke tests or Breadth tests, and use Specific tests for Depth tests or detailed tests of individual functions and features.

Potential problems

If you have Dynamic comparisons that would be better as Post-Execution comparisons, your tests will take much longer to run than necessary.

If you have Post-Execution comparisons that would be better as Dynamic comparisons, the main problem is that your tests may continue to run when it would be more efficient to stop them. For example, with Dynamic comparison you can use an intermediate result to skip over irrelevant steps, or to abort a test that has got into a state where it is not worth continuing, as in the sketch below. Checking this type of result only after the test has finished wastes time, and the tests run for longer than necessary.
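
A minimal Python sketch of that idea; the BankAppStub driver and its steps are hypothetical placeholders:

 class BankAppStub:
     """Hypothetical driver for the system under test."""
     def login(self, user, password): return True
     def account_balance(self): return 50
     def transfer(self, amount, to_account): pass
     def logout(self): pass

 class TestAborted(Exception):
     pass

 # A Dynamic comparison makes the intermediate result available while the
 # test is still running, so the script can skip steps that no longer apply
 # or abort a test that is not worth continuing.
 def run_transfer_test(app):
     if not app.login("tester", "secret"):
         raise TestAborted("login failed; the remaining steps are pointless")
     if app.account_balance() == 0:
         print("SKIP: nothing to transfer, skipping the transfer steps")
     else:
         app.transfer(amount=10, to_account="savings")
         assert app.account_balance() >= 0            # checked mid-test
     app.logout()

 run_transfer_test(BankAppStub())
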
If all of your tests are Specific, you will miss unexpected changes that could be serious bugs, so your tests will pass even though there are problems that should have failed them (FALSE PASS).

If all of your tests are Sensitive, every unexpected problem will trip up all of your tests, even though you are not interested in seeing it again after the first time (FALSE FAIL). You could use EXPECTED FAIL STATUS to overcome this if you can't change enough of the tests to be Specific. Sensitive tests also take longer to run, as there is more checking to do for each test.

Issues addressed by this pattern

FALSE FAIL
FALSE PASS
INEFFICIENT FAILURE ANALYSIS

Experiences

If you have used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.
