DO A PILOT

From Test Automation Patterns

Pattern summary

Start a pilot project to explore how to best automate tests on the application.

Category

Management

Context

This pattern is useful when you start an automation project from scratch, but it can also be very useful when trying to find the reasons your automation effort is not as successful as you expected.
This pattern is not needed for one-off or disposable scripts.

Description

You start a pilot project to explore how to best automate tests on your application. The advantage of such a pilot is that it is time-boxed and limited in scope, so you can concentrate on finding out what the problems are and how to solve them. In a pilot project nobody expects you to automate a lot of tests; they expect you to find out which tools suit your application best, which design strategy works, and so on.

You can also deal with problems that occur and will affect everyone doing automation, and solve them in a standard way before rolling out automation practices more widely. You will gain confidence in your approach to automation. Alternatively you may discover that something doesn't work as well as you thought, so you find a better way - this is good to do as early as possible! Tom Gilb says: "If you are going to have a disaster, have it on a small scale"!

Implementation

Here are some suggestions and additional patterns to help:

  • First of all SET CLEAR GOALS: with the pilot project you should achieve one or more of the following goals:
    • Prove that automation works on your application
    • Choose a test automation architecture
    • Select one or more tools
    • Define a set of standards
    • Show that test automation delivers a good return on investment
    • Show what test automation can deliver and what it cannot deliver
    • Get experience with the application and the tools
  • Try out different tools in order to select the RIGHT TOOLS that fit best for your SUT, but if possible PREFER FAMILIAR SOLUTIONS because you will be able to benefit from available know-how from the very beginning.
  • Do not be afraid to MIX APPROACHES
  • AUTOMATION ROLES: make sure you have people with the necessary skills right from the beginning
  • TAKE SMALL STEPS, for instance start by automating a STEEL THREAD: in this way you can get a good feeling about what kind of problems you will be facing, for instance check if you have TESTABLE SOFTWARE
  • Take time for a debriefing when you are through, and don't forget to LEARN FROM MISTAKES
  • In order to get fast feedback adopt SHORT ITERATIONS

What kind of areas are explored in a pilot? This is the ideal opportunity to try out different ways of doing things, to determine what works best for you. These three areas are very important:

  • Building new automated tests. Try different ways to build tests, using different scripting techniques (DATA-DRIVEN TESTING, KEYWORD-DRIVEN TESTING). Experiment with different ways of organising the tests, i.e. different types of TESTWARE ARCHITECTURE. Find out how to most efficiently interface from your structure and architecture to the tool you are using. Take 10 or 20 stable tests and automate them in different ways, keeping track of the effort needed.
  • Maintenance of automated tests. When the application changes, the automated tests will be affected. How easy will it be to cope with those changes? If your automation is not well structured, with a good TESTWARE ARCHITECTURE, then even minor changes in the application can result in a disproportionate amount of maintenance to the automated tests - this is what often "kills" an automation effort! It is important in the pilot to experiment with different ways to build the tests in order to minimise later maintenance. Putting into practice GOOD PROGRAMMING PRACTICES and a GOOD DEVELOPMENT PROCESS is key to success. In the pilot, use different versions of the application - build the tests for one version, then run them on a different version, and measure how much effort it takes to update the tests. Plan your automation to cope best with the application changes that are most likely to occur.
  • Failure analysis. When tests fail, they need to be analysed, and this requires human effort. In the pilot, experiment with how the failure information will be made available for the people who need to figure out what happened. What you want to have are EASY TO DEBUG FAILURES. A very important area to address here is how the automation will cope with common problems that may affect many tests. This would be a good time to put in place standard error-handling that every test can call on.

Potential problems

Trying to do too much: Don’t bite off more than you can chew - if you have too many goals you will have problems achieving them all.
Worthless experiment: Do the pilot on something that is worth automating, but not on the critical path.
Under-resourcing the pilot: Make sure that the people involved in the pilot are available when needed - managers need to understand that this is "real work"!

Issues addressed by this pattern

AD-HOC AUTOMATION
CAN'T FIND WHAT I WANT
COMPLEX ENVIRONMENT
HARD-TO-AUTOMATE
HIGH ROI EXPECTATIONS
LIMITED EXPERIENCE
STALLED AUTOMATION
SUT REMAKE
TOO EARLY AUTOMATION
UNREALISTIC EXPECTATIONS

Experiences

If you have used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.
