RIGHT INTERACTION LEVEL

From Test Automation Patterns
Revision as of 14:40, 28 April 2018 by Seretta (talk | contribs) (Topic titles in capital letters)

Pattern summary

Be aware of the interaction level of the test approach with the Software Under Test (SUT) and its risks (intrusion)
This pattern has been added by Bryan Bakker

Category

Design

Context

This pattern is useful when test automation of the SUT is being started or changed, while the level of interaction can still be influenced.

Description

It is important to realize that there are different ways, or levels, at which to interact with the SUT. The different levels imply different implementations, but also different risks of false positives and false negatives. High intrusiveness into the SUT increases the risk of false test results.

Implementation

When automating test cases you must choose how to interact with the SUT. This can be done at multiple levels, and the choice has an impact on the intrusiveness of the test cases. Some examples:

  • Automating test cases in the same way the end user would use the system (via the same interfaces). The SUT is not adapted in this approach. This can be done by testing via the hardware interfaces of the SUT (e.g. USB or TCP/IP interfaces; see also chapter 13 of Experiences of Test Automation). This is a typical approach in embedded environments (where software interacts with other disciplines). Although it can be quite complex and expensive, the intrusion of the test cases on the SUT is (almost) zero, resulting in reliable test results (assuming the test cases have been implemented correctly): the SUT will not behave differently because of the test approach. This level is rarely used for software-only applications.
  • User Interface (UI) automation: when performing test automation via the GUI, the environment of the SUT is changed to allow this way of testing: a specific tool or libraries need to run on the SUT (which will affect its timing). Although the user is simulated realistically, the level of intrusion is higher than in the previous point, increasing the risk of false positives and false negatives.
  • Test automation is often performed at the API (Application Programming Interface) level; this makes it quite easy to implement automated test cases. Often dedicated test interfaces are implemented to support the test automation approach (Design for Testability). Although this way of testing is very powerful and efficient, the level of intrusion is very high: the SUT is changed for testing purposes. Failures may be found that end users cannot reproduce; these failures are caused by the test approach itself. Test cases do not necessarily reflect realistic user behavior.

Which level of interaction to use should be a conscious choice when test automation starts (any level can be a good choice), considering not only the benefits of that choice but also its risks.
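The trade-off between the levels can be sketched in a few lines of code. This is a hypothetical toy SUT (the VendingMachine class, its methods, and the _credit field are all illustrative, not from the pattern text): one test drives it only through its public interface, as a user would, while the other reaches into an internal state that no end user can observe.

```python
class VendingMachine:
    """Toy SUT: a minimal vending machine controller."""

    def __init__(self):
        self._credit = 0  # internal state, normally hidden from the user

    # Public interface: what a real user (or hardware driver) sees.
    def insert_coin(self, cents):
        self._credit += cents

    def dispense(self, price):
        if self._credit >= price:
            self._credit -= price
            return "item"
        return None


def test_via_external_interface():
    """Low intrusion: check only externally observable behaviour."""
    sut = VendingMachine()
    sut.insert_coin(100)
    assert sut.dispense(80) == "item"


def test_via_internal_state():
    """High intrusion: assert on internal state (a dedicated test
    interface). Efficient, but a failure here may not correspond to
    anything an end user could ever reproduce."""
    sut = VendingMachine()
    sut.insert_coin(100)
    sut.dispense(80)
    assert sut._credit == 20
```

The first test can only fail for reasons a user could also see; the second can fail (or pass) for reasons invisible at the user level, which is exactly the false-positive/false-negative risk the pattern describes.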

Potential Problems

Different levels of interaction result in different levels of intrusion. See above for examples.

Issues addressed by this pattern

FALSE FAIL
FALSE PASS
FLAKY TESTS

Experiences

Bryan Bakker says: I have experience with all three described approaches. All three in fact worked out fine, but it is important to realize the risks and accept the consequences (e.g. numerous false positives/negatives).

Bas Dijkstra says: One example where test automation at the API level is a particularly good idea is when the UI of the SUT itself uses the same API to communicate with the back end and (optionally) related systems. In this case, you emulate the API calls that a user interacting with the system at the UI level would generate.

What you gain in this case: your test automation scripts will be much easier to set up and maintain (no more brittle UI-level test automation) and will execute much faster. What you lose: your test cases do not cover the logic built into the screens, but properly designed systems usually have minimal UI logic anyway.
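A minimal sketch of this idea, with entirely hypothetical names (the /api/orders endpoint, payload fields, and the fake transport are illustrative, not a real product API): the test submits the same request the UI screen would send on "Submit", with the HTTP layer injected so the sketch stays self-contained.

```python
import json


def submit_order(post, item_id, quantity):
    """Send the order request the UI would send when the user clicks Submit.
    `post` is the transport function (injected so tests can replace it)."""
    payload = {"item": item_id, "quantity": quantity}
    status, body = post("/api/orders", json.dumps(payload))
    return status, json.loads(body)


def fake_post(path, body):
    """Stand-in for the real HTTP layer, so this sketch runs offline."""
    data = json.loads(body)
    if data["quantity"] > 0:
        return 201, json.dumps({"status": "accepted", "item": data["item"]})
    return 400, json.dumps({"status": "rejected"})


def test_order_accepted():
    status, resp = submit_order(fake_post, "A42", 2)
    assert status == 201
    assert resp["status"] == "accepted"
```

In a real suite the transport would be an actual HTTP client pointed at the SUT; the test stays fast and stable because it never drives the screens, at the cost (as noted above) of not covering the UI logic itself.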

I used this approach when building an automated test suite for applications built on the Cordys (now OpenText) Business Process Management System. It provides a web service API that enables you to execute nearly all relevant actions on your SUT and on the engine itself without needing a UI. In fact, the UI is nothing more than a user-friendly way to communicate with this API, so you come very close to end-user emulation this way.
