AUTOMATE THE METRICS

From Test Automation Patterns
Revision as of 08:20, 4 May 2018 by Seretta (talk | contribs) (Put topic titles in capital letters)

Pattern summary

Automate metrics collection.

Category

Design

Context

This pattern helps you collect metrics efficiently and reliably. If you only write disposable scripts, you will not need it.

Description

By automating metrics collection, your metrics will be more reliable: they will be collected consistently and will not be as easily biased as manually collected metrics.

Implementation

If your tool doesn’t support collecting metrics, consider implementing a TEST AUTOMATION FRAMEWORK.
Some suggestions for what to collect with each test run:

  • Number of tests available
  • Number of tests executed
  • Number of tests passed
  • Number of tests failed (optionally classified by error severity)
  • Execution time
  • Date
  • SUT Release

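As a minimal sketch of how such per-run metrics could be recorded, the snippet below appends one row per test run to a CSV file. The field names, the `TestRunMetrics` class, and the file name are assumptions for illustration, chosen to match the suggested metrics above; a real framework might write to a database or a reporting tool instead.

```python
import csv
import datetime
from dataclasses import dataclass, asdict

# Hypothetical record of one automated test run; the field names mirror
# the metrics suggested in this pattern and are not from any real tool.
@dataclass
class TestRunMetrics:
    date: str
    sut_release: str
    tests_available: int
    tests_executed: int
    tests_passed: int
    tests_failed: int
    execution_time_s: float

def append_metrics(path, metrics):
    """Append one run's metrics to a CSV file, writing a header on first use."""
    row = asdict(metrics)
    try:
        new_file = open(path).readline() == ""
    except FileNotFoundError:
        new_file = True
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)

run = TestRunMetrics(
    date=datetime.date.today().isoformat(),
    sut_release="1.2.0",
    tests_available=120,
    tests_executed=118,
    tests_passed=110,
    tests_failed=8,
    execution_time_s=432.5,
)
append_metrics("test_run_metrics.csv", run)
```

Calling `append_metrics` at the end of every automated run yields a growing, consistently formatted history that can be charted or queried without manual effort.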
You should also try to associate bug-fix information with your test run metrics. For instance:

  • Number of errors removed
  • Number of errors not yet removed
  • Number of retests
  • Number of tests failed after retest
  • Average time to remove an error

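One of the derived metrics above, the average time to remove an error, can be computed directly from per-defect report and fix dates. The records below are invented sample data, assuming each defect is tracked with a reported date and a fixed date:

```python
import datetime

# Hypothetical bug-fix records: (reported, fixed) dates for each defect.
fixes = [
    (datetime.date(2018, 4, 2), datetime.date(2018, 4, 5)),
    (datetime.date(2018, 4, 3), datetime.date(2018, 4, 10)),
    (datetime.date(2018, 4, 9), datetime.date(2018, 4, 11)),
]

def average_fix_time_days(fixes):
    """Average number of days from defect report to fix."""
    if not fixes:
        return 0.0
    total = sum((fixed - reported).days for reported, fixed in fixes)
    return total / len(fixes)

print(average_fix_time_days(fixes))  # (3 + 7 + 2) / 3 = 4.0 days
```

If the defect tracker exports these dates, this calculation can run automatically alongside the test run metrics, keeping the two in step.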
Possible problems

If possible, keep track of which bugs were found by the test automation: it will help you retain support from management and testers.
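Tracking this can be as simple as tagging each defect with how it was found and summarising the automation's share. The `found_by` field and the defect records below are assumptions for illustration:

```python
# Hypothetical defect records; the "found_by" field is an assumed tag used
# to distinguish bugs caught by the automated tests from manually found ones.
defects = [
    {"id": "BUG-101", "found_by": "automation"},
    {"id": "BUG-102", "found_by": "manual"},
    {"id": "BUG-103", "found_by": "automation"},
    {"id": "BUG-104", "found_by": "automation"},
]

def automation_share(defects):
    """Fraction of all recorded defects found by the automated tests."""
    found = sum(1 for d in defects if d["found_by"] == "automation")
    return found / len(defects) if defects else 0.0

print(f"{automation_share(defects):.0%}")  # 3 of 4 defects -> 75%
```

A figure like "75% of this release's defects were caught by the automated tests" is exactly the kind of evidence that keeps management and testers invested.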

Issues addressed by this pattern

INSUFFICIENT METRICS

Experiences

If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!