AUTOMATE THE METRICS
Revision as of 09:44, 28 June 2018
Pattern summary
Automate metrics collection.
Category
Design
Context
This pattern helps you collect metrics efficiently and reliably. If you only write disposable scripts, you will not need it.
Description
By automating metrics collection, your metrics will be more reliable: they will be collected consistently and will not be as easily biased as manually collected ones.
Implementation
If your tool doesn’t support collecting metrics, consider implementing a TEST AUTOMATION FRAMEWORK.
Some suggestions for what to collect with each test run:
- Number of tests available
- Number of tests executed
- Number of tests passed
- Number of tests failed (possibly classified by error severity)
- Execution time
- Date
- SUT Release
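The per-run metrics above can be recorded with a small helper. A minimal sketch in Python, assuming CSV storage; the record type, field names, and file layout are illustrative choices, not part of the pattern:

```python
import csv
import os
from dataclasses import dataclass, asdict, fields

# Hypothetical record type covering the per-run metrics listed above;
# the field names are illustrative, not prescribed by the pattern.
@dataclass
class TestRunMetrics:
    tests_available: int
    tests_executed: int
    tests_passed: int
    tests_failed: int       # could be split by severity in a real setup
    execution_time_s: float
    run_date: str           # e.g. "2018-06-28"
    sut_release: str

def append_run(path: str, metrics: TestRunMetrics) -> None:
    """Append one test run's metrics to a CSV file, writing a header
    row the first time the file is created."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    field_names = [f.name for f in fields(TestRunMetrics)]
    with open(path, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=field_names)
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(metrics))
```

Calling such a helper automatically at the end of every run gives a consistent history that no one has to remember to update by hand.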
You should also try to associate bug-fix information with your test run metrics. For instance:
- Number of errors removed
- Number of errors not yet removed
- Number of retests
- Number of tests failed after retest
- Average time to remove an error
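The bug-fix metrics above can be derived once each bug carries a found date and a fixed date. A sketch of that derivation, with made-up bug records purely for illustration:

```python
from datetime import date

# Hypothetical bug records: (id, date found, date fixed or None if open).
bugs = [
    ("BUG-1", date(2018, 6, 1), date(2018, 6, 4)),
    ("BUG-2", date(2018, 6, 2), None),
    ("BUG-3", date(2018, 6, 3), date(2018, 6, 10)),
]

# Split into removed and still-open errors.
removed = [b for b in bugs if b[2] is not None]
open_bugs = [b for b in bugs if b[2] is None]

# Average time to remove an error, in days.
avg_days = sum((fixed - found).days for _, found, fixed in removed) / len(removed)

print(f"errors removed: {len(removed)}")
print(f"errors not yet removed: {len(open_bugs)}")
print(f"average days to remove an error: {avg_days:.1f}")
```

In practice these dates would come from your bug tracker rather than a hard-coded list; the point is that the averages fall out mechanically once the raw dates are captured.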
Potential problems
If possible, keep track of which bugs were found by the test automation: this will help you retain support from management and testers.
Issues addressed by this pattern
Experiences
If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!