AUTOMATE WHAT'S NEEDED
Revision as of 09:28, 3 July 2018
Pattern summary
Automate what the developers or the testers need, even if it isn’t tests!
Category
Process
Context
This pattern is appropriate when your automated tests will be around for a long time, but also when you just want to write one-off or disposable scripts.
Automating what isn't needed is never a good idea!
Description
Automate the tests that will give the most value. "Smoke tests" that are run every time there is any change, for example, would be good candidates for automation, as they would be run frequently and give confidence that the latest change has not destroyed everything.
When you think about test automation, the first tests that usually come to mind are regression tests that run themselves overnight or at weekends. In fact, automation can support testers in many other ways, because even in manual testing (including exploratory testing) there are lots of repetitive tasks that could easily be automated. Often a small script in Python or SQL is all it takes to make a tester's life that much easier.
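As an illustration of such a "little script", here is a minimal Python sketch of the kind of helper a tester might want: generating throwaway user records for a manual test session instead of typing them by hand. All names, fields and the domain are hypothetical, not taken from this pattern.

```python
import csv
import io

def make_test_users(n, domain="example.com"):
    """Generate n throwaway user records for a manual test session.
    (Hypothetical helper; the field names are illustrative only.)"""
    return [{"username": f"tester{i:03d}",
             "email": f"tester{i:03d}@{domain}"}
            for i in range(1, n + 1)]

def users_as_csv(rows):
    """Render the records as CSV so they can be pasted into a test tool."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["username", "email"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

users = make_test_users(3)
csv_text = users_as_csv(users)
```

A script like this takes minutes to write but removes a boring, error-prone chore from every test session.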
James Tony: Keep a balance between trying to test all the code paths (which is generally good) and trying to test every possible combination of inputs (which is generally bad, because the same lines of code get executed thousands of times and the test suite takes so long that it doesn't get used). The aim should be to maximize the (customer-relevant) "bugs for your buck", i.e. the largest number of customer-relevant issues highlighted for the smallest expenditure of time and money.
Implementation
Some suggestions:
- AUTOMATE GOOD TESTS: Automate only the tests that bring the most Return on Investment (ROI)
- KNOW WHEN TO STOP: Not all test cases can or should be automated
- SHARE INFORMATION: What do testers need? What could you deliver to them? Start supporting them there.
- Try to get at least some developers on board.
Good candidates for automation in addition to tests are:
- Complex set-ups: they can be easily automated and are great time savers for testers.
- DB-Data: Database-data can be automatically extracted or loaded for use in creating initial conditions or checking results. Such support is valued by developers and testers alike.
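The database support mentioned above can be sketched in a few lines of Python. This is a minimal example against SQLite (an assumption; the pattern names no particular database), showing a loader that creates known initial conditions and an extractor that pulls results back out for checking. Table and column names are invented for illustration.

```python
import sqlite3

def load_initial_data(conn, rows):
    """Create a hypothetical 'accounts' table and load known initial
    conditions, so every test run starts from the same state."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts "
                 "(id INTEGER PRIMARY KEY, name TEXT, balance REAL)")
    conn.executemany(
        "INSERT INTO accounts (id, name, balance) VALUES (?, ?, ?)", rows)
    conn.commit()

def extract_balances(conn):
    """Extract the data again so a tester can check actual results
    against expected ones."""
    return dict(conn.execute("SELECT name, balance FROM accounts ORDER BY id"))

conn = sqlite3.connect(":memory:")
load_initial_data(conn, [(1, "alice", 100.0), (2, "bob", 0.0)])
balances = extract_balances(conn)
```

The same two operations, wrapped behind a command-line switch, serve developers reproducing bugs just as well as testers setting up preconditions.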
Potential problems
When people get "stuck in" to automation, they can get carried away with what can be done, and may want to automate tests that aren't really important enough to automate.
Issues addressed by this pattern
INADEQUATE COMMUNICATION
NO PREVIOUS TEST AUTOMATION
UNFOCUSED AUTOMATION
Experiences
Example 1
Jochim Van Dorpe writes:
I like the idea/theory of automated tests vs. automated testing. That's why I look as much as possible for things we could automate in our test process: things I used to do manually but that are repetitive, boring and uninteresting. So we have automated some of them, some right from the beginning of the project; others I still come up with.
Some solutions are interesting for me as a test-analyst, others are for my Project Lead (PL) or even for the client. Some are offered by the tools we use, some are custom-made by our team.
Some examples are:
- Before any tests begin, we automatically drop the test Database (DB)
- We populate the DB automatically with 'the things we want': typecodes, dataset, ...
- Of course we have the scripts that run the tests ...
- Tests (Unitils, Selenium, SoapUI) are automatically run after a new build
- Test results are automatically transferred from the continuous integration server that executes them (Jenkins) to the tool that holds the test suites with high-level test cases (TestLink)
- HTML test reports are generated automatically on a scheduled basis, or on demand
- A readable document of all the test cases can be composed on demand (TestLink)
- A TSH (test scenario/case hierarchy) can be composed automatically
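The first two items in the list above (drop the test database, then repopulate it with "the things we want") can be sketched as a single idempotent reset script. This is an assumption-laden illustration against SQLite, not the project's actual setup; the `typecodes` table and its contents are invented for the example.

```python
import sqlite3

def reset_test_db(conn):
    """Drop and recreate the test schema, then load reference data.
    Idempotent, so it is safe to run before every test session."""
    conn.execute("DROP TABLE IF EXISTS typecodes")
    conn.execute("CREATE TABLE typecodes (code TEXT PRIMARY KEY, label TEXT)")
    # populate 'the things we want': hypothetical reference codes
    conn.executemany("INSERT INTO typecodes VALUES (?, ?)",
                     [("A", "active"), ("I", "inactive")])
    conn.commit()

conn = sqlite3.connect(":memory:")
reset_test_db(conn)
reset_test_db(conn)  # running it twice must leave the same clean state
count = conn.execute("SELECT COUNT(*) FROM typecodes").fetchone()[0]
```

Making the reset idempotent is the design point: testers can run it whenever they are unsure of the database state, without coordinating with anyone.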
The custom-made things are reusable by other projects in our company.
Example 2
Gaby Spengler describes how she helps testers find out if a test case has already been implemented:
Sometimes testers don't know whether a test case for a specific requirement has already been implemented. I have written a macro in Excel that allows testers to search the test case directory for a specified keyword and that creates a list of all the test cases containing the keyword. Here is how the macro is coded:

'Public full_name As String
Public TFDir As String
Public Searchtext As String
'Public Makro As String

Sub TF_Search()
    'The directory to be searched and the search argument are taken from the sheet "Testcase search".
    TFDir = ""
    Worksheets("Testcase search").Activate
    TFDir = Cells(2, 1)
    Searchtext = Cells(5, 1)
    'The following procedure browses through the test case directory, looks for test cases
    'that contain the Searchtext and lists them in the sheet "Testcase search"
    Rows(LineSource).Select
    'Workbooks(TCFile(i)).Worksheets("Testcases").Activate
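For readers who don't work in Excel, a rough Python analogue of this keyword search might look as follows. It is a sketch under assumptions: test cases are taken to be plain-text files in one directory, which is simpler than the Excel-workbook layout the macro actually searches.

```python
import os
import tempfile

def find_testcases(tf_dir, searchtext):
    """List the test case files in tf_dir whose text contains searchtext
    (case-insensitive). A loose analogue of the Excel macro above;
    the plain-text file layout is an assumption for this sketch."""
    hits = []
    for name in sorted(os.listdir(tf_dir)):
        path = os.path.join(tf_dir, name)
        if os.path.isfile(path):
            with open(path, encoding="utf-8", errors="ignore") as f:
                if searchtext.lower() in f.read().lower():
                    hits.append(name)
    return hits

# tiny demo against a throwaway directory with two invented test cases
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "tc_login.txt"), "w") as f:
        f.write("Verify login with one-time password")
    with open(os.path.join(d, "tc_report.txt"), "w") as f:
        f.write("Export monthly report as PDF")
    matches = find_testcases(d, "login")
```

Either version answers the same tester question ("does a test case for this already exist?") in seconds rather than minutes of manual browsing.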
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Experiences to submit your experience or comment.