AUTOMATE WHAT'S NEEDED
Automate what the developers or the testers need, even if it isn’t tests!
This pattern is appropriate when your automated tests will be around for a long time, but also when you just want to write one-off or disposable scripts.
Automating what isn't needed is never a good idea!
Automate the tests that will give the most value. "Smoke tests" that are run every time there is any change, for example, would be good candidates for automation, as they would be run frequently and give confidence that the latest change has not destroyed everything.
When you think about test automation, the first tests that usually come to mind are regression tests that run themselves at night or on weekends. In fact, automation can support testers in many other ways: even when testing manually (including exploratory testing) there are lots of repetitive tasks that could easily be automated. Sometimes a small script in Python or SQL is all it takes to make a tester's life that much easier.
James Tony: Keep a balance between trying to test all the code paths (which is generally good) and trying to test every possible combination of inputs (which is generally bad, because it means the same lines of code get executed thousands of times and the test suite takes so long that it doesn't get used). The aim should be to maximize the (customer-relevant) "bugs for your buck", i.e. the maximum number of customer-relevant issues highlighted for the smallest expenditure of time and money.
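The arithmetic behind that warning is easy to sketch. Here is a minimal Python illustration, with invented parameter counts, of how exhaustive input combinations explode compared with simply exercising every value at least once:

```python
# Hypothetical input model: 6 independent parameters, 4 values each.
# The numbers are made up purely to show the growth rate.
values_per_param = 4
num_params = 6

# Every possible combination of inputs: exponential growth.
exhaustive = values_per_param ** num_params

# Enough tests to exercise each value of each parameter at least once:
# linear growth.
one_test_per_value = values_per_param * num_params

print(exhaustive)          # 4096 combinations
print(one_test_per_value)  # 24 tests
```

The same lines of code get run over four thousand times in the exhaustive suite, which is exactly the "same lines executed thousands of times" trap the quote describes.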
- AUTOMATE GOOD TESTS: Automate only the tests that bring the most Return on Investment (ROI)
- KNOW WHEN TO STOP: Not all test cases can or should be automated
- SHARE INFORMATION: What do testers need? What could you deliver to them? Start supporting them there.
- Try to get at least some developers on board.
Good candidates for automation in addition to tests are:
- Complex set-ups: they can be easily automated and are great time savers for testers.
- DB-Data: Database-data can be automatically extracted or loaded for use in creating initial conditions or checking results. Such support is valued by developers and testers alike.
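As a sketch of that kind of database support, here is a minimal Python example using an in-memory SQLite database; the `orders` table and its columns are invented for illustration, and a real project would point at its own database instead:

```python
import sqlite3

# Stand-in for the project's test database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "OPEN"), (2, "SHIPPED"), (3, "OPEN")])

# Extract the data a tester needs, either as an initial condition
# or as an expected result to check against.
rows = conn.execute(
    "SELECT id FROM orders WHERE status = 'OPEN' ORDER BY id").fetchall()
print([r[0] for r in rows])  # [1, 3]
```

A script this small, parameterized with the query a tester actually needs, already saves the repeated manual work of digging the same data out of the database by hand.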
When people get "stuck in" to automation, they can get carried away with what can be done, and may want to automate tests that aren't really important enough to automate.
Issues addressed by this pattern
Jochim Van Dorpe writes:
I like the idea/theory of automated tests vs. automated testing. That's why I look as much as possible for things in our test process that we could automate: things I used to do manually but that are repetitive, boring and uninteresting. So we automated some of them, some right from the beginning of the project; others I still come up with.
Some solutions are interesting for me as a test-analyst, others are for my Project Lead (PL) or even for the client. Some are offered by the tools we use, some are custom-made by our team.
Some examples are:
- Before any tests begin, we automatically drop the test Database (DB)
- We populate the DB automatically with 'the things we want': typecodes, dataset, ...
- Of course we have the scripts that run the tests ...
- Tests (Unitils, Selenium, soapUI) are automatically run after a new build
- Test results are automatically transferred from the continuous integration server that executes them (Jenkins) to the tool that contains the test suites with high-level test cases (TestLink)
- HTML test reports are made automatically on a scheduled basis, or on demand
- A readable document of all the test cases can be composed on demand (TestLink)
- A TSH (test scenario/case hierarchy) can be composed automatically
The custom-made things are re-usable by other projects in our company.
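A drop-and-repopulate step like the first two items in the list might look roughly like this in Python; SQLite and the `typecodes` table are stand-ins for the project's real database and schema:

```python
import sqlite3

def reset_test_db(conn):
    """Drop the test data and repopulate it with 'the things we want'.

    Sketch only: a real project would drop/recreate its own schema and
    seed its own reference data instead of this invented typecodes table.
    """
    conn.execute("DROP TABLE IF EXISTS typecodes")
    conn.execute("CREATE TABLE typecodes (code TEXT, label TEXT)")
    conn.executemany("INSERT INTO typecodes VALUES (?, ?)",
                     [("A", "active"), ("C", "closed")])

conn = sqlite3.connect(":memory:")
reset_test_db(conn)
print(conn.execute("SELECT COUNT(*) FROM typecodes").fetchone()[0])  # 2
```

Because the reset runs before any tests begin, every test run starts from the same known state, which is what makes the later automated runs after each build trustworthy.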
Gaby Spengler describes how she helps testers find out if a test case has already been implemented:
| Sometimes testers don’t know if a test case for a specific requirement has already been implemented. I have written a macro in Excel that allows testers to search the test case directory for a specified keyword and that creates a list of all the test cases that contain the keyword.
Here is how the macro is coded:
'Public full_name As String
Public TFDir As String
Public Searchtext As String
'Public Makro As String

'The directory to be searched and the search argument are taken from the sheet "Testcase search".
TFDir = Cells(2, 1)
Searchtext = Cells(5, 1)

'The following procedure browses through the test case directory, looks for test cases that contain the Searchtext, and lists them in the sheet "Testcase search".
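The same idea can also be sketched outside Excel. Here is a hedged Python version that searches a directory of test case files for a keyword; the `.txt` file layout and the sample file names are assumptions for illustration, not part of Gaby's macro:

```python
import pathlib
import tempfile

def find_test_cases(directory, keyword):
    """Return the names of test case files that mention the keyword."""
    hits = []
    for path in sorted(pathlib.Path(directory).rglob("*.txt")):
        if keyword.lower() in path.read_text(errors="ignore").lower():
            hits.append(path.name)
    return hits

# Demo with two throwaway files in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "tc_login.txt").write_text(
        "Verify login with expired password")
    pathlib.Path(d, "tc_report.txt").write_text(
        "Verify monthly report totals")
    print(find_test_cases(d, "login"))  # ['tc_login.txt']
```

Either way, the tester types a keyword and gets back the list of matching test cases instead of opening files one by one.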
Sergiusz Golec writes:
One day a set of sample clients/partners observed an undesired end state for their transactions. It was hard to find and reproduce: the critical details and triggers were missing, and apart from the day it happened, there was little to go on.
A dump of the web application's debug logs comprised 9 files totalling around 90 MB. The JMS message response contained an issue that none of the known log analyzers would cover. Every file was around 30,000 lines and 10,486,203 characters long, and the interesting details were spread among different hours (timestamps), contexts and transactions: like an extremely thin layer of butter on a 90 MB slice of bread.
With automation it was half a day's work to write the script and check that it produced the needed outcome.
The script read the input file(s) line by line and, using a function (fWorthOrNotWriting) based on keywords (such as interesting XML tags, partner IDs shown in different formats, environment, transaction type, state or known errors), wrote the desired lines to an output file. The script also collected the complete XML messages into a separate file: 808 lines out of roughly 9 × 30,000 (about 270,000) lines.
Those 808 lines were then manually trimmed and boiled down to a single perfect example: a 36-line XML document. The issue was isolated and identified, and easy for the developers to work with and fix.
Looking for that one specific issue manually would have made my soul cry for a week.
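A script along those lines might be sketched in Python like this. This is a hedged reconstruction of the idea, not Sergiusz's actual script: the keyword list and the sample log content are invented, and `worth_writing` merely plays the role of the `fWorthOrNotWriting` function from the story:

```python
import pathlib
import tempfile

# Invented keywords standing in for the real XML tags, partner IDs,
# transaction types and known errors mentioned in the story.
KEYWORDS = ("<transaction", "partnerId", "ERROR")

def worth_writing(line):
    """Decide whether a log line is worth keeping (fWorthOrNotWriting)."""
    return any(k in line for k in KEYWORDS)

def filter_log(in_path, out_path):
    """Copy only the worthwhile lines to out_path; return how many."""
    kept = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if worth_writing(line):
                dst.write(line)
                kept += 1
    return kept

# Demo on a tiny fabricated log file.
with tempfile.TemporaryDirectory() as d:
    log = pathlib.Path(d, "debug.log")
    log.write_text("boot ok\n<transaction id='7'>\nheartbeat\nERROR timeout\n")
    print(filter_log(log, pathlib.Path(d, "trimmed.log")))  # 2
```

Scaled up from this toy demo to nine 30,000-line files, a filter like this is what turned 270,000 lines into the 808 worth reading.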
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.