TEST THE TESTS
Pattern summary
Test the scripts just as you would test production code.
Category
Process
Context
This pattern is needed if you want to have reliable automation (or if you don't believe in luck).
Description
Test your scripts individually, but also make sure that a failure in one test doesn’t cause the following tests to fail too.
If you don't pay attention to testing your automated tests, you will probably end up spending a lot of time chasing what look like bugs in the software being tested but are actually bugs in your testware. You may well also have tests that pass when they should fail, and vice versa, so your automation becomes unreliable.
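As a hypothetical illustration of such a false pass, a single slip in the test code is enough to produce a test that can never fail. The sketch below uses Google Test (the framework mentioned under Experiences); the price_with_vat() function is invented for the example:

```cpp
#include <gtest/gtest.h>

// Hypothetical function under test, invented for this example:
// it should add VAT at 20% to a net price given in pence.
int price_with_vat(int net_pence) { return net_pence * 120 / 100; }

// Buggy test: it compares the SUT's result with itself, so it can never
// fail, even if price_with_vat() is completely wrong (a false pass).
TEST(PricingTest, AddsVatFalsePass) {
    EXPECT_EQ(price_with_vat(1000), price_with_vat(1000));
}

// Corrected test: the expected value is stated independently of the SUT.
TEST(PricingTest, AddsVat) {
    EXPECT_EQ(1200, price_with_vat(1000));
}
```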
James Tony: A broken test is best fixed soon after the check-in that broke it (otherwise you will spend more time trying to work out which check-in that was).
Implementation
New automation scripts should be tested just like any other software.
Scripts can also be reviewed or Inspected before they are run, and can be assessed with a static analysis tool that automatically looks for common types of script error.
Automation scripts can also be tested by running them regularly and always checking the results. This also helps to avoid SCRIPT CREEP: when the Software Under Test (SUT) changes, the scripts can be updated or, if they no longer bring value, removed.
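Checking the results only helps if the checking code in your testware works in the first place. One way to test it is to feed it a result that is known to be wrong and make sure it reports a mismatch. The sketch below uses Google Test (the framework mentioned under Experiences); the outputs_match() helper and its inputs are invented for illustration:

```cpp
#include <gtest/gtest.h>
#include <string>

// Hypothetical testware helper: decides whether the SUT's output
// matches the expected output (here just a string comparison).
bool outputs_match(const std::string& expected, const std::string& actual) {
    return expected == actual;
}

// Test the testware: a deliberately wrong output must be flagged,
// otherwise the helper would wave false passes through.
TEST(CheckerSelfTest, DetectsWrongOutput) {
    EXPECT_FALSE(outputs_match("expected result", "corrupted result"));
}

// And a correct output must still be accepted.
TEST(CheckerSelfTest, AcceptsCorrectOutput) {
    EXPECT_TRUE(outputs_match("expected result", "expected result"));
}
```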
There are various actions to take in order to make your testware more robust:
- Implement INDEPENDENT TEST CASES, so that a test cannot fail just because a preceding one did (see the sketch after this list).
- Make sure that when the tests run they have all the resources they need (for instance enough memory, CPU, etc.) and that nothing else is using the same resources (databases, files, etc.).
- If you regularly PAIR UP, you can avoid many problems right from the beginning. As the saying goes: two pairs of eyes see better than one!
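As a sketch of the first point, a test fixture can give every test its own freshly built state instead of relying on whatever an earlier test left behind. The example below uses Google Test (the framework mentioned under Experiences); the Account class is invented purely for illustration:

```cpp
#include <gtest/gtest.h>

// Hypothetical SUT: a tiny account class, invented for illustration.
class Account {
public:
    void deposit(int amount) { balance_ += amount; }
    void withdraw(int amount) { balance_ -= amount; }
    int balance() const { return balance_; }
private:
    int balance_ = 0;
};

// The fixture builds a brand-new Account for every test, so the tests can
// run in any order and a failure in one cannot knock over the others.
class AccountTest : public ::testing::Test {
protected:
    void SetUp() override { account.deposit(100); }  // fresh starting state
    Account account;
};

TEST_F(AccountTest, DepositIncreasesBalance) {
    account.deposit(50);
    EXPECT_EQ(150, account.balance());
}

TEST_F(AccountTest, WithdrawDecreasesBalance) {
    account.withdraw(30);
    EXPECT_EQ(70, account.balance());  // does not rely on the deposit test
}
```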
Comment from Hans Buwalda:
I like to distinguish between testing the tests and testing the automation (such as the keyword implementations, interface mappings and technologies used).
Very good point - both need to be tested!
Potential problems
Testing the tests will take time, and you may find that you get into an almost recursive situation: tests use other tests that should themselves be tested, which in turn use other tests. Where do you stop?
Issues addressed by this pattern
FALSE FAIL
FALSE PASS
FLAKY TESTS
Experiences
Jochim Van Dorpe writes:
We use the same reviewing technique for automated tests as for the software code itself.
The software code is reviewed by the senior developer and, when he is OK with it, the whole thing is sent to the test analyst, who reviews and reruns the automated tests.
Within our good "agile practices", developers aren't allowed to pass their stories to me for review if any of their tests aren't passing. So, with the automated tests already defined and running green, the only bugs that I can discover are bugs in the tests themselves, or bugs in the code that slipped through because a faulty test didn't check for the expected result.
Note:
If you have also used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!
Michael Stahl:
Lately I started looking into using the Google Test unit test framework (http://code.google.com/p/googletest/) to test our internally developed test tools (I am talking about small tools implementing very specific actions that cannot be found externally, not large test frameworks). I tried it on one tool and it seems to be a good direction.
We use Visual Studio for developing test tools. The process is to add a "test" project to the same solution as the test tool and write the tests there. The framework creates a .exe with the test tool's tests. The tool's developer is expected to run these tests often, and definitely before releasing a new version of the test tool.
The tests call the test tool executable using the system() call. Test results are directed to a file and then analyzed. A bit of a kludge, but very simple to implement. The Google Test framework takes less than a day to learn and helps remove the overhead of managing the tests, creating logs, etc.
PS - for those using Visual Studio - there is a small gotcha you need to know about to avoid failing compilation. See here:
http://stackoverflow.com/questions/12558327/google-test-in-visual-studio-2012
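A rough sketch of the system()-based approach described above (this is not the actual tool or test code; the tool name, its command-line option, the output file name and the expected text are placeholders):

```cpp
#include <gtest/gtest.h>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Read back the file that the tool was told to write its output to.
static std::string read_file(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream contents;
    contents << in.rdbuf();
    return contents.str();
}

// Placeholder tool name, option and file name: run the test tool through
// system(), redirect its output to a file, then check exit code and output.
TEST(ToolSmokeTest, ReportsVersion) {
    int rc = std::system("mytool.exe --version > tool_output.txt");
    ASSERT_EQ(0, rc);  // the tool should exit cleanly

    std::string output = read_file("tool_output.txt");
    EXPECT_NE(std::string::npos, output.find("mytool version"));
}
```

Checking both the exit code and the captured output means the test fails if the tool crashes as well as if it prints the wrong thing.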
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.