TEST THE TESTS


Pattern summary

Test the scripts just as you would test production code.

Category

Process

Context

This pattern is needed if you want to have reliable automation (or if you don't believe in luck).

Description

Test your scripts individually, but also make sure that a failure in one test doesn’t cause the following tests to fail too.

If you don't pay attention to testing your automated tests, you will probably spend a lot of time looking for bugs in the software under test that are actually bugs in your testware. You may well also have tests that pass when they should fail and vice versa, so your automation will become unreliable.
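
One way to catch a false pass is to test the checking logic itself: feed it input that is known to be wrong and confirm that it reports a failure. Below is a minimal, hypothetical sketch using Google Test (the framework mentioned in the Experiences section); the TotalLooksCorrect helper and its bug are invented for illustration.

  #include <gtest/gtest.h>

  // Hypothetical checking helper used by the automated tests (invented for illustration).
  // The bug: it only checks that the total is non-negative, so it "passes" for any
  // plausible-looking value - a classic source of false passes.
  bool TotalLooksCorrect(double actualTotal, double expectedTotal) {
      (void)expectedTotal;             // BUG: the expected value is never compared
      return actualTotal >= 0.0;
  }

  // "Test the test": drive the checking logic with a value that is known to be wrong
  // and assert that the check reports a failure. This test fails until the bug above
  // is fixed to compare actual against expected.
  TEST(TestTheChecks, FailsForWrongTotal) {
      EXPECT_FALSE(TotalLooksCorrect(42.0, 99.0));
  }

  // And confirm the check still passes for a genuinely correct value.
  TEST(TestTheChecks, PassesForCorrectTotal) {
      EXPECT_TRUE(TotalLooksCorrect(99.0, 99.0));
  }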

'James Tony': A broken test is best fixed soon after the check-in that broke it (otherwise you will spend much more time working out which change caused the failure).

Implementation

New automation scripts should be tested just as any other software would be.
Scripts can also be reviewed or Inspected before they are run, and can be assessed with a static analysis tool that automatically looks for common types of script error.
Automation scripts can also be tested by running them regularly and always checking the results. This is also the way to avoid SCRIPT CREEP: when the Software Under Test (SUT) changes, the scripts can be updated or, if they no longer bring value, removed.
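
As a sketch of "always checking the results", the hypothetical C++ program below refuses to treat a run as passed if the result summary is missing or shows that zero tests were executed; the summary file name and format are invented for illustration.

  #include <fstream>
  #include <iostream>
  #include <sstream>
  #include <string>

  // Hypothetical sanity check over a test-run summary file (path and format are
  // invented for illustration). A run that executed zero tests, or that produced
  // no summary at all, is treated as a failed run rather than a pass.
  int main() {
      std::ifstream summary("results/summary.txt");   // e.g. "executed=120 failed=3"
      if (!summary) {
          std::cerr << "No result summary found - treating the run as failed\n";
          return 1;
      }

      std::string line;
      std::getline(summary, line);
      int executed = 0, failed = 0;
      std::istringstream in(line);
      std::string token;
      while (in >> token) {
          if (token.rfind("executed=", 0) == 0) executed = std::stoi(token.substr(9));
          if (token.rfind("failed=", 0) == 0)   failed   = std::stoi(token.substr(7));
      }

      if (executed == 0) {
          std::cerr << "Zero tests executed - the suite silently did nothing\n";
          return 1;
      }
      std::cout << "Executed " << executed << " tests, " << failed << " failed\n";
      return failed == 0 ? 0 : 1;
  }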

There are various actions to take in order to make your testware more robust:

  • Implement INDEPENDENT TEST CASES, so that a test cannot fail just because a preceding one did (see the sketch after this list).
  • Make sure that when the tests run, they have all the resources they need (for instance enough memory, CPU, etc.) and that nothing else is using the same resources (databases, files, etc.).
  • If you regularly PAIR UP you can avoid many problems right from the beginning. As the saying goes: two pairs of eyes see better than one!
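
The sketch below (referred to in the first bullet above) shows one way to keep test cases independent, assuming Google Test: each test builds and tears down its own data through a fixture, so no test relies on state left behind by an earlier one. The OrderStore class is invented for illustration.

  #include <gtest/gtest.h>
  #include <string>
  #include <vector>

  // Hypothetical data store used by the tests (invented for illustration).
  class OrderStore {
  public:
      void Add(const std::string& order) { orders_.push_back(order); }
      std::size_t Count() const { return orders_.size(); }
      void Clear() { orders_.clear(); }
  private:
      std::vector<std::string> orders_;
  };

  // The fixture builds a fresh store for every test and cleans it up afterwards,
  // so a failure in one test cannot leave state behind that breaks the next one.
  class OrderTests : public ::testing::Test {
  protected:
      void SetUp() override { store_.Add("baseline order"); }
      void TearDown() override { store_.Clear(); }
      OrderStore store_;
  };

  TEST_F(OrderTests, AddingAnOrderIncreasesTheCount) {
      store_.Add("new order");
      EXPECT_EQ(store_.Count(), 2u);
  }

  TEST_F(OrderTests, FreshFixtureStartsWithOnlyTheBaselineOrder) {
      // Passes even if the previous test failed, because the fixture is rebuilt.
      EXPECT_EQ(store_.Count(), 1u);
  }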

Comment from Hans Buwalda: I like to distinguish between testing the test and testing the automation (like the keywords implementations, interface mappings and technologies used).
Very good point - both need to be tested!

Potential problems

Testing the tests will take time, and you may find that you get into an almost recursive situation: tests use other tests that should themselves be tested, which in turn use other tests, so where do you stop?

Issues addressed by this pattern

FALSE FAIL
FALSE PASS
FLAKY TESTS

Experiences

Jochim Van Dorpe writes:
We use the same reviewing technique for automated tests as for the software code itself.

The software code is reviewed by the senior developer, and when he is OK with it, the whole thing is sent to the test analyst, who reviews and reruns the automated tests.

Within our good "agile practices", developers aren't allowed to pass their stories to me for review if any of their tests aren't passing. So, with the automated tests already defined and running green, the only bugs that I can discover are bugs in the tests themselves, or bugs in the code because faulty tests didn't check for the expected result.

Note:
If you have also used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!


Michael Stahl:
Lately I started looking into using the Google Test unit-test framework (http://code.google.com/p/googletest/) to test our internally developed test tools (I am talking about small tools that implement very specific actions and cannot be found externally; not large test frameworks). I tried it on one tool and it seems to be a good direction.

We use Visual Studio for developing test tools. The process is to add a "test" project to the same solution as the test tool, and to write the tests there. The framework creates a .exe containing the test tool's tests. The tool's developer is expected to run these tests often, and definitely before releasing a new version of the test tool.

The tests call the test tool executable using the system() call. Test results are directed to a file and then analyzed. A bit of a kludge, but very simple to implement. The Google Test framework takes less than a day to learn and helps remove the overhead of managing the tests, creating logs, etc.
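
A minimal sketch of this approach, assuming Google Test: the test launches the tool with system(), redirects its output to a file, and then analyzes the file. The tool name, command line and expected output below are invented for illustration.

  #include <gtest/gtest.h>
  #include <cstdlib>
  #include <fstream>
  #include <iterator>
  #include <string>

  // Helper: read a whole file into a string (returns an empty string if missing).
  static std::string ReadFile(const std::string& path) {
      std::ifstream in(path);
      return std::string((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());
  }

  // Invoke the test tool via system(), redirecting stdout to a file, then
  // analyze the output. "checksum_tool.exe" and its expected output are
  // hypothetical - substitute your own tool and checks.
  TEST(ChecksumToolTests, ReportsExpectedChecksumForKnownInput) {
      int rc = std::system("checksum_tool.exe known_input.bin > tool_output.txt");
      ASSERT_EQ(rc, 0) << "Tool exited with a non-zero status";

      std::string output = ReadFile("tool_output.txt");
      EXPECT_NE(output.find("checksum=0xDEADBEEF"), std::string::npos)
          << "Unexpected tool output: " << output;
  }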

PS - for those using Visual Studio - there is a small gotcha you need to know about to avoid a failing compilation. See here:
http://stackoverflow.com/questions/12558327/google-test-in-visual-studio-2012


If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.


.................................................................................................................Main Page / Back to Process Patterns / Back to Test Automation Patterns