The Software Under Test (SUT) supports automated testing poorly or not at all.
The range of problems includes:
1. GUI objects that cannot be uniquely identified, because they lack a unique name or other distinguishing property.
2. Third-party or customized GUI objects that are not recognized properly or are not 'open' (i.e. do not expose their methods and properties).
3. Random processing.
4. Fluctuating response times.
5. The expected results change with each new version of the SUT.
6. Different systems (hardware or software) that interact in some way.
7. Tests have to run on a steadily growing number of browsers or environments.
8. Test results consist of entries in many database tables or files that are related to each other.
9. For embedded systems or mobile devices, results are difficult to check because of timing issues or because they are barely visible (e.g. intermediate results).
10. Test data is hard to create because of the number of systems involved or a limited refresh cycle.
11. Unexpected pop-ups.
12. The SUT is so slow that testing the automation scripts takes a lot of time.
13. APIs or AI deliver unpredictable results.
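For changing or non-deterministic results (Examples 5 and 13), one common tactic is to mask volatile fields before comparing actual and expected output, so that timestamps, generated IDs and similar values do not cause false failures. The following is a minimal sketch; the helper names and the two masking patterns are illustrative assumptions, not part of any specific tool:

```python
import re

# Illustrative list of volatile values to mask before comparison
# (extend with whatever varies in your SUT's output).
VOLATILE_PATTERNS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"id=\d+"), "id=<ID>"),
]

def normalize(text: str) -> str:
    """Replace volatile values with stable placeholders."""
    for pattern, placeholder in VOLATILE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def results_match(actual: str, expected: str) -> bool:
    """Compare two outputs after masking volatile fields."""
    return normalize(actual) == normalize(expected)

# These two outputs differ only in generated id and timestamp,
# so the masked comparison treats them as equal.
actual = "order id=4711 created 2024-05-01 10:32:17"
expected = "order id=915 created 2024-04-30 09:00:00"
assert results_match(actual, expected)
```

The same idea applies to database rows or files (Example 8): normalize each record before the comparison, rather than hand-maintaining expected results for every run.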
Where does the complexity lie? In preparing the initial conditions? Test execution? Checking the results? Dependencies?
Do you have the right resources? What do you need?
Can you simulate the hardware or the software?
Has the current tool been selected especially for the SUT or has it been "inherited" from previous applications?
Do the developers know about the automation problems? Do they care?
Do the developers have guidelines/standards for building in 'testability'? If not, why not?
How do manual testers check the results?
How complex are the test cases?
How do you collect the necessary data?
- DO A PILOT: use this pattern to find out what the problems are and ways to tackle them
- RIGHT TOOLS: use this pattern if you are confronting issues like Examples 1 or 2.
- TAKE SMALL STEPS: use this pattern to break the problems up into more manageable chunks
- TESTABLE SOFTWARE: use this pattern for issues similar to Examples 1, 2, and 3
- THINK OUT-OF-THE-BOX: try to look at the problem from unusual viewpoints. This is especially important for issues like Example 13
- VARIABLE DELAYS: use this pattern for issues like Example 4
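The core of VARIABLE DELAYS is to replace fixed sleeps with a poll-until-ready loop, so scripts tolerate fluctuating response times (Example 4) without wasting time on fast runs. A minimal sketch, with illustrative function and parameter names rather than any specific tool's API:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.2):
    """Poll condition() until it is truthy or the timeout expires.

    Returns True as soon as the condition holds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return bool(condition())  # one final check at the deadline

# Example: wait for a (simulated) slow SUT response; the first two
# polls find no result, the third sees the final value.
responses = iter([None, None, "done"])
latest = {"value": None}

def poll_sut():
    latest["value"] = next(responses, latest["value"])
    return latest["value"] == "done"

assert wait_until(poll_sut, timeout=2.0, poll_interval=0.01)
```

Most GUI test tools offer a built-in equivalent (e.g. explicit waits on an object's existence); prefer that where available, and keep the timeout configurable per environment rather than hard-coded.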
Other useful patterns:
- EASY TO DEBUG FAILURES: use this pattern for issues like Example 8.
- GET ON THE CLOUD: this is the pattern of choice for issues like Example 7.
- KEEP IT SIMPLE: this pattern is always helpful
- SHARE INFORMATION: use this pattern for better communication between development, testing and automation (Examples 1, 2, 4, 5, 11 and 12)
Examples 1 and 2 were contributed by Jim Hazen, Example 10 by Thorsten Schönfelder.