SET CLEAR GOALS

From Test Automation Patterns


Pattern summary

Define the automation goals from the very beginning in a way that is clear and understandable to all.

Category

Management

Context

This pattern is always applicable, although the goals may be different at different stages. You need to know what your objectives and goals are when you first consider test automation, when an automation initiative is getting started, when your automation is going well, and when you want to re-vitalise a stalled automation effort.

If you don't know what your goals are, how do you know you are going in the right direction?

This pattern applies to the goals and objectives for the automated tests, but there should also be goals and objectives for monitoring the on-going health of the automation as a whole.

Also note that the goals for testing should be different to the goals for automation!

Description

If the automation objectives are defined clearly up front, people are less likely to be disappointed later on because they expected different results.

Inform your managers about what is feasible and what is not. Show them magazine articles or papers that support your view. If you can, bring in an expert to clear up any misunderstandings.

Goals should be measurable so that you can tell if you have achieved them or not.

Implementation

Before you develop the goals you should SHARE INFORMATION with management, testers and developers in order to understand what they need and what they are expecting from test automation. This is also the time to inform management and testers about what test automation can deliver and what it cannot.

Goals vary in different contexts and at different times, but here are some suggested examples (a minimal sketch of turning one such goal into an automated check follows the list):

  • Selected regression tests should run x-times faster and y-times more often.
  • Tests too complex to perform manually are to be automated. Define exactly which ones.
  • Support activities for manual testing are to be automated.
  • Increase the parts of the system that are tested in a release (coverage).
  • Make it easy for business users to write and run automated tests.
  • Ensure repeatability of automated regression tests.
  • Free testers from repetitive and boring test running so they can spend more time doing exploratory testing.
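
The first goal, for instance, can be made measurable by turning it into an automated check. Below is a minimal sketch, assuming the team records suite timings in a JSON file; the file name, field names and target factors are invented for illustration and are not part of the pattern.

  # goal_check.py - sketch of checking the "x-times faster, y-times more often" goal.
  # All names and numbers are illustrative assumptions, not prescribed by this pattern.
  import json
  import sys

  SPEEDUP_TARGET = 10        # goal: selected regression tests run 10x faster than manually
  RUNS_PER_WEEK_TARGET = 5   # goal: and run 5x more often than the old weekly manual run

  def goals_met(metrics_file="automation_metrics.json"):
      """Read recorded suite metrics and report whether the stated goals are met."""
      with open(metrics_file) as f:
          m = json.load(f)
      speedup = m["manual_run_minutes"] / m["automated_run_minutes"]
      runs_per_week = m["automated_runs_last_week"]
      print(f"speed-up {speedup:.1f}x (target {SPEEDUP_TARGET}x), "
            f"runs/week {runs_per_week} (target {RUNS_PER_WEEK_TARGET})")
      return speedup >= SPEEDUP_TARGET and runs_per_week >= RUNS_PER_WEEK_TARGET

  if __name__ == "__main__":
      sys.exit(0 if goals_met() else 1)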


Define from the beginning the scope of the automation: which tests are going to be automated and which are not, based on the identified goals. Be specific!
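
One way to keep the agreed scope visible is to record it directly in the test code. The sketch below uses pytest markers; the marker names "in_scope" and "manual_only" are invented for this example and would need to be registered in your pytest configuration.

  # test_scope_example.py - sketch of recording the agreed automation scope as pytest markers.
  # Marker names are illustrative; register them in pytest.ini to avoid warnings.
  import pytest

  @pytest.mark.in_scope        # selected for automation, per the agreed goals
  def test_price_calculation_regression():
      assert round(19.99 * 3, 2) == 59.97   # placeholder for a real regression check

  @pytest.mark.manual_only     # explicitly out of scope: needs human usability judgement
  def test_checkout_look_and_feel():
      pytest.skip("assessed manually during exploratory testing")

  # Run only the automated scope with:  pytest -m in_scope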

It may also be useful to consider the investment needed to achieve these goals, asking questions such as the following (a rough break-even sketch follows the list):

  • What roles do you need on your test automation team?
  • What return on investment do you expect?
  • How high will the investments have to be? How much time and resources will you need?
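
As a rough illustration of the break-even arithmetic behind these questions, consider the sketch below; all figures are invented assumptions to be replaced with your own estimates.

  # roi_sketch.py - rough break-even arithmetic for an automation investment.
  # All figures are illustrative assumptions, not data from any real project.
  build_cost_hours = 200       # effort to automate the selected tests
  maintenance_per_cycle = 4    # upkeep effort per release cycle
  manual_run_per_cycle = 30    # manual execution effort saved per cycle

  saving_per_cycle = manual_run_per_cycle - maintenance_per_cycle
  break_even_cycles = build_cost_hours / saving_per_cycle

  print(f"saving per cycle: {saving_per_cycle} hours")
  print(f"break-even after roughly {break_even_cycles:.1f} release cycles")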

Not good goals
There are some goals or objectives that seem attractive when you first encounter them, but they are not actually good goals for automation.

A common mistake is to confuse the goals for automation with goals for testing. This leads to goals like "Automated tests should find lots of bugs". This is fine for some types of automation, but not for regression test automation (which is what most people automate). Regression tests by their nature don't find many bugs, as they are running tests that passed the last time they were run, and are checking to see that nothing has changed.

Another poor goal for automation is "Automate all of our manual tests". First, not all manual tests should be automated: some usability aspects need a person to assess them, and it is a waste of effort to automate a test that takes 2 weeks to automate, takes 1 hour to run manually, and is only needed once a quarter. We should only AUTOMATE WHAT'S NEEDED. Second, this erroneous goal leads people to think that only manual tests can be automated, but automation can do much more than replicate tests that can be run manually. For example, an automated test can check object states which are not visible to a human tester; a minimal sketch of such a check follows.
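
Here is a minimal sketch of asserting on state a human tester cannot see on screen, using unittest; the Order class and its internal fields are hypothetical stand-ins for your own system under test.

  # internal_state_test.py - sketch of asserting on state invisible to a manual tester.
  # The Order class and its fields are hypothetical; substitute your SUT's real API.
  import unittest

  class Order:
      """Stand-in for a system-under-test object with hidden internal state."""
      def __init__(self):
          self._status = "NEW"
          self._audit_log = []

      def submit(self):
          self._status = "SUBMITTED"
          self._audit_log.append("submitted")

  class OrderInternalStateTest(unittest.TestCase):
      def test_submit_updates_hidden_state(self):
          order = Order()
          order.submit()
          # A manual tester only sees the UI; the automated test can also assert
          # on internal status and audit entries that never appear on screen.
          self.assertEqual(order._status, "SUBMITTED")
          self.assertIn("submitted", order._audit_log)

  if __name__ == "__main__":
      unittest.main()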

Example goals for tool selection* (see also RIGHT TOOLS):
The automated tests should be:

  • Maintainable – to reduce the amount of test maintenance effort;
  • Relevant – to provide clear traceability of the business value of automation;
  • Reusable – to provide modularity of test cases and function libraries;
  • Manageable – to provide effective test design, execution and traceability;
  • Accessible – to enable collaboration on concurrent design and development;
  • Robust – to provide object/event/error handling and recovery;
  • Portable – to be independent of the SUT and be completely scalable;
  • Reliable – to provide fault tolerance over a number of scalable test agents;
  • Diagnosable – to provide actionable defects for accelerated defect investigation;
  • Measurable – to provide a testing dashboard along with customisable reporting.

Potential problems

Be sure to remind management that it may not be possible or advisable to automate every test case.

It may be difficult to get everyone to agree on what the goals should be, and particularly how they can be measured. Beware of inappropriate goals, such as "find lots of defects" when regression tests are automated.

Issues addressed by this pattern

AD-HOC AUTOMATION
HIGH ROI EXPECTATIONS
INADEQUATE TEAM
NO PREVIOUS TEST AUTOMATION
UNREALISTIC EXPECTATIONS

Experiences

If you have used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.




* The goals for tool selection have been suggested by Jonathon Wright. They are taken from 'The Big Picture of Test Automation: Test Trustworthiness' – Alan Page, Microsoft (2012).