SPECIFIC COMPARE
Revision as of 14:46, 28 April 2018
Pattern summary
Expected results are specific to the test case, so changes to objects not processed by the test case don't affect the test results.
Category
Design
Context
This pattern is applicable when your automated tests will be around for a long time, and/or when there are frequent changes to the SUT.
This pattern is not applicable for one-off or disposable scripts.
Description
The expected results check only that what has been performed in the test is correct. For example, if a test changes just two fields, only those fields are checked, not the rest of the window or screen containing them.
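The field-level check described above can be sketched as a small comparison helper. This is a minimal illustration, not code from the pattern itself; the screen-as-dict representation and all field names are hypothetical.

```python
def specific_compare(actual: dict, expected: dict) -> list:
    """Compare only the keys present in `expected` (the objects touched by
    the test case); any other fields in `actual` are ignored."""
    mismatches = []
    for field, want in expected.items():
        got = actual.get(field)
        if got != want:
            mismatches.append((field, want, got))
    return mismatches

# A test that changed two fields checks only those two; the other
# fields on the screen ("theme", "lang") are deliberately not compared.
screen = {"name": "Ada", "email": "ada@example.com", "theme": "dark", "lang": "en"}
assert specific_compare(screen, {"name": "Ada", "email": "ada@example.com"}) == []
```

If developers later add or change "theme" or "lang", this check is unaffected, which is exactly the point of the pattern.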
Implementation
Implementation depends strongly on what you are testing. Some ideas:
- Extract from a database only the data that is processed by the test case
- When checking a log, first delete all entries that don't directly pertain to the test case
- On the GUI, check only the objects touched by the test case
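The log-checking idea above can be sketched as a pre-filter applied before the comparison. This assumes the log lines carry some marker of the originating test case (here a hypothetical `[TC-7]` tag); adapt the filter to however your SUT correlates log entries with tests.

```python
def filter_log(log_lines: list, test_id: str) -> list:
    """Keep only the log entries that directly pertain to the test case,
    so unrelated entries can't cause a spurious mismatch."""
    return [line for line in log_lines if test_id in line]

log = [
    "[TC-7] order created",
    "[housekeeping] cache flushed",  # unrelated noise, would break a full compare
    "[TC-7] order confirmed",
]
assert filter_log(log, "TC-7") == ["[TC-7] order created", "[TC-7] order confirmed"]
```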
Potential problems
If all your test cases use this pattern you could miss important changes and get a FALSE PASS. It makes sense to have at least some test cases using a SENSITIVE COMPARE.
Issues addressed by this pattern
BRITTLE SCRIPTS
FALSE FAIL
Experiences
Seretta:
I had to test whether some options were set or not. To check whether the test case had passed, I just extracted the complete options table from the database. That worked fine for a time, but then the developers added an option and all my test cases failed. Why? Because the changed table was no longer identical to the expected result, even though the test case had actually passed!
I then applied this pattern: I extracted from the table only the option that had been handled in the test case, and voilà, all my test cases were passing again!
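Seretta's fix can be reproduced in miniature with an in-memory database. The table layout, option names, and queries below are hypothetical; the contrast is between snapshotting the whole table (which breaks when an unrelated option is added) and extracting only the option the test case handled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE options (name TEXT PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO options VALUES (?, ?)",
                 [("autosave", "on"), ("spellcheck", "off")])

# SENSITIVE COMPARE (fragile): snapshot the whole table as the expected result.
expected_snapshot = conn.execute("SELECT * FROM options ORDER BY name").fetchall()

# Developers add an unrelated option...
conn.execute("INSERT INTO options VALUES ('newfeature', 'off')")

# ...and the whole-table comparison now fails, even though nothing the
# test case touched has changed.
current_snapshot = conn.execute("SELECT * FROM options ORDER BY name").fetchall()
assert current_snapshot != expected_snapshot

# SPECIFIC COMPARE: extract only the option the test case handled.
row = conn.execute("SELECT value FROM options WHERE name = ?",
                   ("autosave",)).fetchone()
assert row == ("on",)  # still passes despite the unrelated new option
```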