HARD-TO-AUTOMATE

From Test Automation Patterns
Revision as of 15:48, 4 April 2018 by Cathal
Main Page / Back to Design Issues / Back to Test Automation Issues

Issue summary

The Software Under Test (SUT) supports automated testing poorly, or not at all.

Category

Design

Examples

The range of problems includes:

  1. GUI objects that cannot be uniquely identified, because they lack a unique name or other distinguishing property.
  2. Third-party or customized GUI objects that are not recognized properly, or that are not 'open' (do not expose their methods and properties).
  3. Random or non-deterministic processing.
  4. Fluctuating response times.
  5. Expected results that change with each new version of the SUT.
  6. Different systems (hardware or software) that interact in some way.
  7. Tests that have to run on a steadily growing number of browsers or environments.
  8. Test results that consist of entries in many interrelated database tables or files.
  9. Embedded systems or mobile devices whose results are difficult to check, because of timing issues or results that are not readily visible (e.g. intermediate results).
  10. Test data that is hard to create, because of the number of systems involved or a limited refresh cycle.
  11. Unexpected pop-ups.
  12. A SUT so slow that trying out the automation scripts takes a lot of time.
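Example 4 (fluctuating response times) is usually handled by polling for the expected condition up to a timeout, rather than pausing for a fixed time. The following is a minimal sketch of that idea in Python; the names `wait_until` and `SlowService` are invented for illustration and do not come from any particular tool:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns truthy or `timeout` seconds pass.

    Returns True if the condition was met, False on timeout. Replacing a
    fixed `time.sleep(...)` with this means the script waits only as long
    as the SUT actually needs, however much the response time fluctuates.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return bool(condition())  # one final check at the deadline

# Illustrative stand-in for a SUT whose response time varies from run to run.
class SlowService:
    def __init__(self, ready_after):
        self._ready_at = time.monotonic() + ready_after

    def is_ready(self):
        return time.monotonic() >= self._ready_at

service = SlowService(ready_after=0.3)
assert wait_until(service.is_ready, timeout=2.0)
```

A fixed delay long enough for the worst case would slow every run; the polling version finishes as soon as the SUT is ready.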


Questions

Where does the complexity lie? In preparing the initial conditions? Test execution? Checking the results? Dependencies?
Do you have the right resources? What do you need?
Can you simulate the hardware or the software?
Has the current tool been selected especially for the SUT or has it been "inherited" from previous applications?
Do the developers know about the automation problems? Do they care?
Do the developers have guidelines/standards for building in 'testability'? If not, why not?
How do manual testers check the results?
How complex are the test cases?
How do you collect the necessary data?

Resolving Patterns

Most recommended:

  • DO A PILOT: use this pattern to find out what the problems are and how to tackle them.
  • RIGHT TOOLS: use this pattern if you are confronting issues like Examples 1 or 2.
  • TAKE SMALL STEPS: use this pattern to break the problems up into more manageable chunks.
  • TESTABLE SOFTWARE: use this pattern for issues similar to Examples 1, 2, and 3.
  • THINK OUT-OF-THE-BOX: try to look at the problem from unusual viewpoints.
  • VARIABLE DELAYS: use this pattern for issues like Example 4.
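For Examples 1 and 2, applying TESTABLE SOFTWARE typically means the developers give every GUI object a stable, unique identifier that the automation tool can query. This toy Python sketch shows the effect; the `Widget` class and `automation_id` attribute are invented for illustration, not part of any real GUI toolkit:

```python
class Widget:
    """Toy GUI object that carries a stable identifier for automation."""

    _registry = {}  # maps automation id -> widget instance

    def __init__(self, automation_id, label):
        self.automation_id = automation_id
        self.label = label
        Widget._registry[automation_id] = self

    @classmethod
    def find(cls, automation_id):
        """Look a widget up by its unique id.

        Unlike matching on the visible label or screen position, this
        lookup survives relabelling, translation, and layout changes.
        """
        return cls._registry[automation_id]

# Two buttons share the visible label "Save", so a label-based locator
# would be ambiguous -- but each carries its own automation id.
Widget("btn_save_draft", "Save")
Widget("btn_save_final", "Save")

assert Widget.find("btn_save_draft").label == "Save"
assert Widget.find("btn_save_final") is not Widget.find("btn_save_draft")
```

The same principle underlies real-world conventions such as `data-testid` attributes in web applications or accessibility identifiers in desktop and mobile toolkits.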


Other useful patterns:






Points 1 and 2 in Examples were contributed by Jim Hazen, and Point 10 by Thorsten Schönfelder.