Build testware that has one or more abstraction layers.
Apply this pattern for long-lasting and maintainable automation. It is not necessary for disposable scripts.
The most effective way to build automated testware is to construct it with one or more abstraction layers (or levels). For example, in software development, the code that drives the GUI is usually separated from the code that implements the business functionality, which in turn is separated from the code that accesses the database. Each part communicates with the others only through an interface. In this way, each part can be changed individually without breaking the whole, as long as the interface stays the same (this is the theory; it is not always practiced).
For test automation this means building the testware so that the tool's scripting language is used only for technical implementations (for instance, scripts that drive a window or a GUI component of the SUT). This is the lowest layer. The test cases call these scripts and supply the necessary data; this is the next layer. If you implement a kind of meta-language, you get yet another layer. As with software code, the charm of abstraction layers is that, because the layers are independent of each other, any one of them can be substituted without touching the others. The only thing that has to be maintained is the interface between them (how the test cases are supposed to call the scripts).
By separating the technical implementations in the tool's scripting language from the functional tests, you can later change automation tools with relative ease, because you will only need to rewrite the tool-specific scripts. Also, if you keep the development technicalities apart from the test cases, even testers with no development knowledge will be able to write and maintain the testware, and they can start writing the tests even before the SUT has been completely developed. Another advantage is that you can reuse the technical scripts for other test automation efforts.
There are different ways to implement abstraction layers. Which to choose depends on how evolved your test automation framework is.
- DATA-DRIVEN TESTING means that you separate the data from the execution scripts (drivers). In this case you must take care to pair the data correctly with the drivers.
- In KEYWORD-DRIVEN TESTING you specify a keyword that controls how the data is to be processed. Generally, keywords correspond to words in a domain-specific language (for instance, insurance or manufacturing). This approach is usually used for DOMAIN-DRIVEN TESTING.
- In MODEL-BASED TESTING you create a test model of the SUT. Typically, the modelling of test sequences starts at a very abstract level and is later refined step by step. From the model, a generation tool can automatically create test cases, test data, and even executable test scripts.
- Use TOOL INDEPENDENCE to separate the technical implementation that is specific to the tool from the functional implementation.
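The keyword-driven variant can be sketched in a few lines of Python. The keywords and the insurance-flavoured actions below are invented for illustration; in a real framework, the keyword table would dispatch to the tool-specific scripts of the lowest layer rather than to stub functions.

```python
# Hypothetical low-level actions; in real testware these would call
# the tool-specific scripts that drive the SUT.
def open_policy(policy_id):
    return f"opened policy {policy_id}"

def add_claim(claim_id):
    return f"added claim {claim_id}"

# The keyword table is the interface between the domain language of
# the test cases and the technical implementation.
KEYWORDS = {
    "OpenPolicy": open_policy,
    "AddClaim": add_claim,
}

def run_test(steps):
    """Execute a test case given as (keyword, data) pairs."""
    return [KEYWORDS[keyword](data) for keyword, data in steps]

# A test case written purely in domain terms:
insurance_test = [
    ("OpenPolicy", "P-123"),
    ("AddClaim", "C-9"),
]
```

Testers with domain knowledge write only the (keyword, data) pairs; the keyword table and the functions behind it are maintained separately, which is exactly the layering described above.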
Building a good TESTWARE ARCHITECTURE takes time and effort, and should be planned from the beginning of an automation effort. There is a temptation (sometimes encouraged by management) to "just do it". This is fine as a way of experimenting and getting started, for example if you DO A PILOT. However, you will soon find that you have many poorly structured tests and end up with STALLED AUTOMATION and high maintenance costs; if you are not aware of the importance of abstraction levels, you are by then "locked in" to the wrong solution.
Issues addressed by this pattern
Bryan Bakker says: I see that test engineers often have difficulties in defining a good abstraction level (or levels). Software developers can help here. Ask them for help, or let them define the abstraction levels. An experienced test engineer can then implement technical layers, and test engineers with less automation experience (but often with more domain knowledge) can implement the actual test cases.
By using abstraction layers the complexities in the test cases (caused by the complex interface to the software under test) are hidden from the actual test cases. The test cases become understandable and maintainable.
There are some useful implementation patterns (at code level) for this high-level pattern by Vinoth Selvaraj on his blog testautomationguru.com. The link takes you to his series of blog posts about these patterns. [].
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.