SENSITIVE COMPARE
Pattern summary
Expected results are sensitive to changes beyond the specific test case
Category
Design
Context
This pattern is applicable when your automated tests will be around for a long time, and/or when there are frequent changes to the SUT.
This pattern is not applicable for one-off or disposable scripts.
Description
The expected result covers a large amount of information, more than just what the test case itself might have changed. For example, it might compare an entire screen or window (possibly masking out some data). Sensitive tests are likely to find unexpected differences and therefore regression defects.
Implementation
Implementation depends strongly on what you are testing. Some ideas (illustrated in the sketches below):
- Extract from the database the entire contents of every table touched by processing the test case
- Check the whole log, not only the parts directly pertaining to the test case
- On the GUI, check all the objects on each page
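For the database idea, a minimal sketch in Python, assuming a SQLite database; the database path, table names, and golden-dump file are hypothetical:

    import sqlite3

    def dump_tables(db_path, tables):
        """Dump every row of each table the test case touches."""
        conn = sqlite3.connect(db_path)
        try:
            lines = []
            for table in tables:
                # Order rows deterministically so comparisons are stable.
                for row in conn.execute(f"SELECT * FROM {table} ORDER BY 1"):
                    lines.append(f"{table}|{row}")
            return "\n".join(lines)
        finally:
            conn.close()

    def test_order_processing_sensitive():
        # Compare the entire tables, not just the rows this test changed.
        actual = dump_tables("app.db", ["orders", "order_lines", "audit_log"])
        with open("expected/order_processing.dump") as f:
            expected = f.read()
        assert actual == expected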
If you are checking the whole of a window or screen, you may want to mask out data that you are not interested in, such as the date and time of the test run. Otherwise the date/time would show up as a difference on every comparison, which is noise you don't want.
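The masking itself can be as simple as substituting stable placeholders for volatile fields before comparing. A sketch, again in Python; the regular expressions are illustrative and must be adapted to your own log or screen formats:

    import re

    # Illustrative patterns for volatile data; adapt to your formats.
    MASKS = [
        (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
        (re.compile(r"session=[0-9a-f]+"), "session=<ID>"),
    ]

    def mask(text):
        """Replace data we are not interested in with stable placeholders."""
        for pattern, token in MASKS:
            text = pattern.sub(token, text)
        return text

    def assert_whole_log_matches(log_path, golden_path):
        with open(log_path) as f:
            actual = mask(f.read())
        with open(golden_path) as f:
            expected = mask(f.read())
        assert actual == expected

Masking both sides keeps the golden file readable and means the expected results never need updating just because the test ran at a different time.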
Potential problems
If all your test cases use this pattern, you will probably get frequent FALSE FAILs. It makes sense to have at least some test cases use this pattern, for example in a smoke test or high-level regression test; other tests should use SPECIFIC COMPARE.
Issues addressed by this pattern
Experiences
If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!