AUTOMATE THE METRICS

From Test Automation Patterns
Revision as of 15:51, 21 August 2018 by Dorothy (talk | contribs) (→‎Experiences)

Pattern summary

Automate metrics collection.

Category

Design

Context

This pattern helps you collect metrics efficiently and reliably. If you only write disposable scripts, you will not need it.

Description

By automating metrics collection, your metrics will be more reliable: they will be collected consistently and will not be as easily biased as manually collected metrics.

Implementation

If your tool doesn’t support collecting metrics, consider implementing a TEST AUTOMATION FRAMEWORK.
Some suggestions for what to collect with each test run:

  • Number of tests available
  • Number of tests executed
  • Number of tests passed
  • Number of tests failed (optionally classified by error severity)
  • Execution time
  • Date
  • SUT Release
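
As a minimal sketch of how such per-run metrics could be recorded automatically, the snippet below appends one row per test run to a CSV file. The record type, field names, and file name are assumptions for illustration, not part of the pattern itself:

```python
import csv
import os
from dataclasses import dataclass, asdict

# Hypothetical record of one automated test run, covering the
# metrics listed above (field names are assumptions).
@dataclass
class TestRunMetrics:
    run_date: str          # Date
    sut_release: str       # SUT Release
    tests_available: int   # Number of tests available
    tests_executed: int    # Number of tests executed
    tests_passed: int      # Number of tests passed
    tests_failed: int      # Number of tests failed
    execution_time_s: float  # Execution time in seconds

def append_run_metrics(metrics: TestRunMetrics, path: str = "run_metrics.csv") -> None:
    """Append one run's metrics to a CSV file, writing a header on first use."""
    row = asdict(metrics)
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)
```

Calling `append_run_metrics` at the end of every automated run yields a consistent, tool-independent metrics history that can be charted or diffed across SUT releases.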

You should also try to associate bug-fix information with your test run metrics. For instance:

  • Number of errors removed
  • Number of errors not yet removed
  • Number of retests
  • Number of tests failed after retest
  • Average time to remove an error
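
The bug-fix figures above can be derived from per-bug records attached to a test run. A minimal sketch, assuming a simple dictionary shape per bug (the keys are hypothetical, chosen only to mirror the list above):

```python
# Each bug record is assumed to look like:
#   {"fixed": bool, "retests": int, "passed_retest": bool, "fix_time_h": float or None}
def bug_fix_summary(bugs: list) -> dict:
    """Compute the bug-fix metrics listed above from per-bug records."""
    fixed = [b for b in bugs if b["fixed"]]
    open_bugs = [b for b in bugs if not b["fixed"]]
    retests = sum(b["retests"] for b in bugs)
    # Tests that were re-run after a fix attempt but still fail.
    failed_after_retest = sum(
        1 for b in bugs if b["retests"] > 0 and not b["passed_retest"]
    )
    fix_times = [b["fix_time_h"] for b in fixed if b["fix_time_h"] is not None]
    return {
        "errors_removed": len(fixed),
        "errors_open": len(open_bugs),
        "retests": retests,
        "failed_after_retest": failed_after_retest,
        "avg_fix_time_h": sum(fix_times) / len(fix_times) if fix_times else None,
    }
```

Feeding the summary into the same per-run record keeps the test-execution and bug-fix views of a release side by side.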

Potential problems

If possible, keep track of which bugs were found by the test automation: it will help you retain support from management and testers.

Issues addressed by this pattern

INSUFFICIENT METRICS

Experiences

If you have used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.
