KEYWORD-DRIVEN TESTING
Pattern Summary
Tests are driven by keywords (also called action words) that represent actions of a test, and may include input data and expected results.
Category
Design
Context
This pattern is appropriate:
- When you want to write test cases that are practically independent of the Software under Test (SUT). If the SUT changes, the functionality behind the keyword must be adapted, but most of the time the test cases themselves remain valid.
- When testers should be able to write and run automated tests even if they are not proficient with the automation tools.
- When you want testers to start writing test cases for automation before the SUT is available to test.
The pattern is not appropriate for very small-scale or one-off automation efforts.
Description
Keywords are the verbs of the language that the tester uses to specify tests, typically from a business or domain perspective. A keyword specifies a sequence of actions together with any required input data and/or expected results.
| Keyword | Input Data 1 | Input Data 2 | ... | Input Data n | Expected Result 1 | ... |
|---|---|---|---|---|---|---|
| Action 1 | Data 1.1 | Data 1.2 | ... | | | |
| Action 2 | Data 2.1 | Data 2.2 | ... | Data 2.n | Result 1 | ... |
| ... | ... | ... | ... | ... | ... | ... |
Keywords are most powerfully used at a high level, representing a business domain. Different domains would have different keywords. High level keywords for an insurance application, for example, might include "Create New Policy", "Process Claim" or "Renew Policy". High level keywords for a mobile phone, for example, might include "Make Call", "Update Contact" or "Send Text Message". There may be some keywords that are common across more than one domain, particularly at lower levels, such as "Log In" and "Print Page".
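For illustration only, a test case for the insurance example might be written as follows; the keywords come from the examples above, while the data values and column layout are invented for this sketch:

| Keyword | Input Data 1 | Input Data 2 | Expected Result 1 |
|---|---|---|---|
| Log In | agent01 | secret | Main menu shown |
| Create New Policy | Jane Doe | Motor | Policy number issued |
| Renew Policy | POL-1234 | 2025-01-01 | Status = Renewed |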
Implementation
Each keyword is processed by an associated script, which may call other reusable scripts from a library. A keyword script can be written in a common programming language, as calls to other keywords, or in the scripting language of the tool. It is a good idea to keep tool-specific scripts to a minimum, as this helps to reduce script maintenance costs. The keyword script reads and processes the input data for the keyword and/or checks the expected output. The implementation architecture for the keywords is often referred to as a TEST AUTOMATION FRAMEWORK, which determines the choice of scripting language and how composite keywords are defined.
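As a minimal sketch of such an architecture in Python, assuming a simple CSV test-case format and invented keyword names, a keyword interpreter could map each keyword to the script that implements it:

```python
import csv

# Keyword scripts: each keyword maps to a function that performs the
# corresponding action on the SUT. The keyword names, signatures and the
# CSV test-case format used here are illustrative assumptions.
def log_in(user, password):
    print(f"Logging in as {user}")

def create_new_policy(customer, product):
    print(f"Creating a {product} policy for {customer}")

def check_policy_status(policy_id, expected_status):
    actual_status = "ACTIVE"  # a real keyword script would query the SUT here
    assert actual_status == expected_status, (
        f"{policy_id}: expected {expected_status}, got {actual_status}")

# The keyword library: maps each keyword to the script that implements it.
KEYWORDS = {
    "Log In": log_in,
    "Create New Policy": create_new_policy,
    "Check Policy Status": check_policy_status,
}

def run_test_case(path):
    """Read a test case (one keyword per row, input data and expected
    results in the remaining columns) and dispatch each row to its script."""
    with open(path, newline="") as test_case:
        for row in csv.reader(test_case):
            keyword, *data = [cell for cell in row if cell.strip()]
            KEYWORDS[keyword](*data)

# Example usage: run_test_case("renew_policy_test.csv")
```

Because the test cases refer only to keyword names and data, a change in the SUT is absorbed by editing the keyword scripts, not the test cases.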
Potential problems
Take care that the keywords describe a clear and cohesive action. If the keywords form the "words" of a domain-specific language (as in DOMAIN-DRIVEN TESTING), your testers will find it easy to write automated test cases and will be able to start even before the SUT is available for running tests.
Another problem can come up when a keyword needs too many parameters (input data): it becomes unwieldy if you have to scroll repeatedly in order to see or enter all the parameters. Break the unwieldy keyword into shorter independent keywords, which can call and be called by the others, as sketched below.
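For example, a hypothetical "Create Customer" keyword with a dozen parameters could be split into smaller keywords that testers can use on their own and that a composite keyword can still call (names and parameters here are assumptions):

```python
# Before: one keyword that needs too many parameters at once (hypothetical).
# def create_customer(name, birth_date, phone, email, street, city,
#                     zip_code, policy_type, payment_method, ...): ...

# After: shorter, independent keywords.
def enter_personal_data(name, birth_date, phone, email):
    print(f"Entering personal data for {name}")

def enter_address(street, city, zip_code):
    print(f"Entering address {street}, {zip_code} {city}")

def create_customer(name, birth_date, phone, email, street, city, zip_code):
    # Composite keyword built from the smaller ones.
    enter_personal_data(name, birth_date, phone, email)
    enter_address(street, city, zip_code)
```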
Issues addressed by this pattern
BRITTLE SCRIPTS
LATE TEST CASE DESIGN
MANUAL MIMICRY
NON-TECHNICAL-TESTERS
TOO EARLY AUTOMATION
Experiences
Example from Seretta:
We use a variation of this pattern that we call Command-Driven Testing. Here is how it works: first of all (and that is why we gave it its name), our keywords are not words in a domain-specific language, but plain commands such as "SELECT,Button" or "INPUT". These commands drive the test tool and have to be written in the proprietary script language of the tool. The functional scripts are written in a kind of meta-language that calls the commands, and an interpreter script in the tool's script language parses the meta-scripts and executes the appropriate commands. In this way you can extract the test and application know-how from the proprietary tool scripts.

This meant that when we had to migrate to a different tool, from QARun to TestComplete, we only needed to rewrite the interpreter script for the basic commands in TestComplete and to remap our GUI objects in the OBJECT MAP; the tests themselves did not have to be touched. Another advantage was that we were able to use the same "commands" (the interpreter scripts) for all our different applications, since they all run in the same environment.

Having studied the advantages of the pattern DATA-DRIVEN TESTING, we devised a way to get the same benefits for Command-Driven Testing: we split our command scripts into a DRIVER part and a DATA part. In the DRIVER file the variable data is replaced with placeholders, and in the DATA file the placeholders are substituted with the actual data. In this way we can have any number of DATA files for one DRIVER file. To make the scripting even more flexible, the interpreter script ignores statements that contain placeholders that are not found in the corresponding DATA file (SKIP VOID INPUTS).

Here are some examples of our meta-language:

GOTO,FTestSuite
INPUT,ComboBox,cboPriority,<Priority>
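As a minimal sketch of the DRIVER/DATA split and the SKIP VOID INPUTS behaviour, in Python rather than the contributor's tool scripts (the file contents and names are assumptions for illustration):

```python
import re

# Hypothetical DRIVER file: comma-separated commands with <placeholders>.
DRIVER = [
    "GOTO,FTestSuite",
    "INPUT,ComboBox,cboPriority,<Priority>",
    "INPUT,Edit,txtComment,<Comment>",
]

# Hypothetical DATA file: placeholder name -> actual value.
# <Comment> is deliberately missing to show SKIP VOID INPUTS.
DATA = {"Priority": "High"}

def run(driver, data):
    for statement in driver:
        placeholders = re.findall(r"<(\w+)>", statement)
        # SKIP VOID INPUTS: ignore statements whose placeholders have
        # no value in the DATA file.
        if any(name not in data for name in placeholders):
            continue
        for name in placeholders:
            statement = statement.replace(f"<{name}>", data[name])
        print("EXECUTE:", statement)  # a real interpreter would drive the tool here

run(DRIVER, DATA)
# Output:
# EXECUTE: GOTO,FTestSuite
# EXECUTE: INPUT,ComboBox,cboPriority,High
```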
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.