INEFFICIENT FAILURE ANALYSIS


Issue summary

Failure Analysis is slow and difficult

Category

Execution

Examples

  1. Automated test cases perform complex scenarios, so it's difficult to pinpoint the exact cause of a failure.
  2. Automated test cases depend on previous tests. If a failure originates in an earlier test, it is very difficult to trace.
  3. Failures are caused by differences in long, amorphous text files that are difficult to parse manually (even with compare tools); a masking approach is sketched after this list.
  4. In order to examine the status of the SUT after a test has been run, it's often necessary to rerun it. If tests routinely clean up after themselves, you have to temporarily remove the clean-up actions before rerunning the test.
  5. Results can only be compared on the GUI.
  6. Results are spread across different databases or media, and it's difficult to put them together in a meaningful way.
  7. Results change randomly.
  8. Results change from release to release.
  9. A minor bug that isn't going to be fixed makes too many of the automated tests fail, but the failures still need to be looked at (and then ignored).
  10. Comparison of results takes a long time, or results that should abort a test run are not picked up until the test has finished.
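
Where the failing artefact is a long text file (Examples 3 and 7), masking volatile fields before comparing often turns an unreadable diff into a short, meaningful one. The sketch below is one possible approach in Python; the VOLATILE patterns are assumptions and must be adapted to your SUT's actual output.

  import difflib
  import re

  # Hypothetical masking rules: adapt the patterns to your SUT's output.
  VOLATILE = [
      (re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
      (re.compile(r"\bid=\d+\b"), "id=<ID>"),
  ]

  def normalise(text):
      """Replace volatile fields so only meaningful differences remain."""
      for pattern, placeholder in VOLATILE:
          text = pattern.sub(placeholder, text)
      return text

  def diff_results(expected, actual):
      """Unified diff of the normalised texts; an empty list means a match."""
      return list(difflib.unified_diff(
          normalise(expected).splitlines(),
          normalise(actual).splitlines(),
          fromfile="expected", tofile="actual", lineterm="",
      ))

If diff_results() reports a difference that should abort the whole run (Example 10), the harness can stop immediately instead of discovering the failure only after the run has finished.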

Questions

How complex are the automated test cases? Can they be split up?
Are the failures in the SUT or in the automation?
Are complex "scenario" test cases really necessary?
What information do developers need to facilitate bug-fixing?
What information do automators need to isolate why an automated test has failed?
What information do testers need to determine the cause of a test failure?

Resolving Patterns

Most recommended:


Other useful patterns:

COMPARE WITH PREVIOUS VERSION: this pattern makes it easy to find failures, but not to analyse them
FRESH SETUP: apply this pattern if you have issues like Example 2 (a minimal sketch follows this list)
TAKE SMALL STEPS: this pattern is always useful

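As a concrete illustration of FRESH SETUP, here is a minimal sketch using pytest; reset_database() and seed_reference_data() are hypothetical stand-ins for whatever restores your own SUT to a known state.

  import pytest

  def reset_database():
      """Hypothetical helper: restore the SUT's data from a clean snapshot."""

  def seed_reference_data():
      """Hypothetical helper: load the reference data every test expects."""

  @pytest.fixture(autouse=True)
  def fresh_setup():
      # Reset *before* each test instead of cleaning up after it: no test
      # depends on what the previous one left behind (Example 2), and a
      # failed test leaves the SUT's state intact for inspection (Example 4).
      reset_database()
      seed_reference_data()
      yield

Resetting before each test rather than after also sidesteps the problem in Example 4: there are no clean-up actions to remove before rerunning a failed test.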
Main Page / Back to Execution Issues / Back to Test Automation Issues