AUTOMATE THE METRICS
Pattern summary
Automate metrics collection.
Category
Design
Context
This pattern allows you to collect metrics efficiently and reliably. If you only write disposable scripts, you will not need it.
Description
By automating metrics collection, your metrics will be more reliable: they are collected consistently and are not as easily biased as manually collected metrics.
Implementation
If your tool doesn’t support collecting metrics, consider implementing a TEST AUTOMATION FRAMEWORK.
Some suggestions for what to collect with each test run (see the sketch after this list):
- Number of tests available
- Number of tests executed
- Number of tests passed
- Number of tests failed (optionally classified by error severity)
- Execution time
- Date
- SUT Release
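A minimal sketch of how these figures could be appended to a results file at the end of each run, independent of any particular tool. The class, field and file names (TestRunMetrics, metrics.csv, the SUT release string) and the example values are illustrative assumptions, not part of any existing framework.

import csv
import dataclasses
import datetime
from pathlib import Path


@dataclasses.dataclass
class TestRunMetrics:
    # One record per test run, covering the items listed above.
    sut_release: str
    tests_available: int
    tests_executed: int
    tests_passed: int
    tests_failed: int
    execution_time_s: float
    run_date: str = dataclasses.field(
        default_factory=lambda: datetime.date.today().isoformat())


def append_metrics(metrics: TestRunMetrics,
                   csv_path: Path = Path("metrics.csv")) -> None:
    # Append one run's metrics; write the header the first time the file is created.
    row = dataclasses.asdict(metrics)
    write_header = not csv_path.exists()
    with csv_path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if write_header:
            writer.writeheader()
        writer.writerow(row)


if __name__ == "__main__":
    append_metrics(TestRunMetrics(
        sut_release="1.4.2",        # hypothetical release identifier
        tests_available=250,
        tests_executed=240,
        tests_passed=228,
        tests_failed=12,
        execution_time_s=1823.5,
    ))

Appending to a simple CSV keeps every run comparable over time and lets you chart trends (pass rate per release, execution time per run) without any manual copying.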
You should also try to associate bug-fix information with your test-run metrics (see the sketch after this list). For instance:
- Number of errors removed
- Number of errors not yet removed
- Number of retests
- Number of tests failed after retest
- Average time to remove an error
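One way to derive these figures is from per-defect records exported from your bug tracker. The sketch below assumes a hypothetical defects.csv with columns status, retests, failed_retests and fix_days; all of these names are assumptions chosen for illustration, not an existing export format.

import csv
from pathlib import Path
from statistics import mean


def bugfix_summary(defects_csv: Path = Path("defects.csv")) -> dict:
    # Summarise defect records: one row per defect with the assumed columns
    # status ("removed" or "open"), retests, failed_retests and fix_days.
    removed, still_open, retests, failed_retests, fix_days = 0, 0, 0, 0, []
    with defects_csv.open(newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] == "removed":
                removed += 1
                fix_days.append(float(row["fix_days"]))
            else:
                still_open += 1
            retests += int(row["retests"])
            failed_retests += int(row["failed_retests"])
    return {
        "errors_removed": removed,
        "errors_not_yet_removed": still_open,
        "retests": retests,
        "tests_failed_after_retest": failed_retests,
        "avg_days_to_remove_error": mean(fix_days) if fix_days else None,
    }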
Possible problems
If possible, keep track of which bugs were found by the test automation: it will help you retain support from management and testers.
Issues addressed by this pattern
INSUFFICIENT METRICS
Experiences
If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!