TOOL INDEPENDENCE

From Test Automation Patterns

Pattern summary

Separate the technical implementation that is specific for the tool from the functional implementation of the tests.

Category

Design

Context

This pattern is applicable if you want to run automated tests on multiple platforms or environments, or if the tool you are using might change at some point (which becomes more likely the longer your automated tests are in use).
This pattern is not applicable for short-term automation, e.g. disposable scripts.

Description

Design the structure of the testware so that tool-specific elements are kept to a minimum. Make the scripts modular, so that tool-specific scripts are called by the scripts that implement the tests.
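
For example, a minimal sketch of this separation in Python (hypothetical names; the pattern does not prescribe a language):

from typing import Protocol

class ToolCommands(Protocol):
    # Tool-independent command interface; one implementation is written per tool.
    def input(self, control: str, value: str) -> None: ...
    def select(self, control: str) -> None: ...

def test_set_priority(cmd: ToolCommands) -> None:
    # Functional test script: it only uses the command interface,
    # so it stays unchanged when the tool behind 'cmd' is replaced.
    cmd.input("cboPriority", "High")
    cmd.select("btnConfirm")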

Implementation

Some suggestions:

  • Use a TEST AUTOMATION FRAMEWORK that supports KEYWORD-DRIVEN TESTING or DOMAIN-DRIVEN TESTING. Separating the functional scripts from the tool-dependent ones means that if you change the tool, you only have to rewrite the tool-specific command scripts; all the others can be used without change.
  • Use an OBJECT MAP to name the GUI elements in your application (see the sketch after this list). If you change your tool, you will have to map the GUI elements again in the new tool, but you will not need to change the functional scripts.
  • Use GOOD PROGRAMMING PRACTICES to keep tool-specific aspects in a small number of scripts that can be called by other scripts.
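
As an illustration of the OBJECT MAP idea, here is a minimal sketch in Python (hypothetical logical names and locator strings, not taken from any particular tool):

# Object map: logical names used by the functional scripts -> tool-specific locators.
# When the tool changes, only this table and the low-level command scripts are redone.
OBJECT_MAP = {
    "cboPriority": "id=cboPriority",
    "btnConfirm": "xpath=//button[@name='confirm']",
}

def locate(logical_name: str) -> str:
    # Functional scripts only know the logical name; this lookup is the single
    # place that knows the tool-specific locator.
    return OBJECT_MAP[logical_name]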

Potential problems

The effort spent on separating the testware from tool-specific aspects may be seen as a waste of time by those who do not expect the tool "engine" ever to change.

Issues addressed by this pattern

HIGH ROI EXPECTATIONS
OBSCURE TESTS
TOOL DEPENDENCY

Experiences

If you have used this pattern, please add your name and a brief story of how you used this pattern: your context, what you did, and how well it worked - or how it didn't work!
Example from Seretta:

In my company we use Command-Driven Testing, a variation of KEYWORD-DRIVEN TESTING that enables us to write scripts that are in fact tool independent.
The trick is that our keywords are just plain commands that don’t contain any domain-specific information. The test scripts use the keywords to specify how the SUT has to be driven for a particular test.
To show how this works, here is an extract from one of the scripts that we use to test our own test automation framework:
…….
GOTO,FTestSuite
INPUT,ComboBox,cboPriority,<Priority>
INPUT,ComboBox,cboTestType,<TestType>
SELECT,Button,<ButtonDRIVER>
GOTO,<SelectDirDRIVER>
INPUT,edtDirectory,<DRIVERDirName>
SELECT,Button,<ConfirmSelectionDRIVER>
…….

The commands GOTO, INPUT and SELECT are then implemented in the specific script language of our tool. If we want to use another tool, we have to duplicate this implementation (GOTO, INPUT and SELECT) in the script language of the second tool, but our test scripts don’t have to be changed. (In practice we must also make sure that the GUI objects that we need to drive get the same names in both tools; for further explanation see the pattern OBJECT MAP.)
Since the number of commands usually doesn’t exceed 30-50, this is much cheaper than having to rewrite all the functional test scripts!
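
As a rough illustration (a sketch in Python with hypothetical argument values in place of the <...> placeholders; the actual implementation is written in the tool's own script language), dispatching such command rows to a tool-specific layer could look like this:

import csv
import io

class PrintingCommands:
    # Stand-in for a tool-specific command layer; a real one would call the tool's API.
    def goto(self, screen):
        print("navigate to", screen)

    def input(self, control_type, name, value):
        print("type", repr(value), "into", control_type, name)

    def select(self, control_type, name):
        print("click", control_type, name)

def run_script(script_text, commands):
    # Each row is "COMMAND,arg1,arg2,..."; dispatch it to the matching method
    # on whichever tool-specific command layer is plugged in.
    for row in csv.reader(io.StringIO(script_text)):
        if not row or not row[0].strip():
            continue
        keyword = row[0].strip().lower()
        args = [a.strip() for a in row[1:]]
        getattr(commands, keyword)(*args)

run_script("GOTO,FTestSuite\nINPUT,ComboBox,cboPriority,High\nSELECT,Button,btnOK",
           PrintingCommands())

Swapping tools then means providing another command object with the same method names, implemented against the new tool.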



