INEFFICIENT FAILURE ANALYSIS
Issue summary
Failure Analysis is slow and difficult
Category
Execution
Examples
1. Automated test cases perform complex scenarios, so it's difficult to pinpoint the exact cause of a failure.
2. Automated test cases depend on previous tests. If a failure originates in an earlier test, it is very difficult to find.
3. Failures are caused by differences in long, amorphous text files that are difficult to parse manually, even with compare tools (a masking sketch follows this list).
4. In order to examine the status of the SUT after a test has been run, it's often necessary to rerun it. If tests routinely clean up after themselves, you have to temporarily remove the clean-up actions before rerunning the test (see the keep-state sketch after the Resolving Patterns lists).
5. Results can only be compared on the GUI.
6. Results are spread across different databases or media, and it's difficult to put them together in a meaningful way.
7. Results change randomly.
8. Results change from release to release.
9. A minor bug that isn't going to be fixed is making too many of the automated tests fail, but the failures still need to be looked at (and then ignored).
10. Comparison of results takes a long time, or results which should abort a test run are not picked up until the test has finished.
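
Examples 3 and 7 often come down to volatile fields (timestamps, run identifiers) burying the few real differences in otherwise stable output. Here is a minimal masking-before-diff sketch in Python; the regular expressions are purely illustrative and would have to be replaced with patterns matching whatever your SUT actually emits:

 import difflib
 import re

 # Illustrative patterns for output fields that legitimately differ between
 # runs (timestamps, run identifiers); extend to match your SUT's output.
 VOLATILE = [
     (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
     (re.compile(r"run-id: \S+"), "run-id: <ID>"),
 ]

 def normalise(text):
     """Replace volatile fields with stable placeholders."""
     for pattern, placeholder in VOLATILE:
         text = pattern.sub(placeholder, text)
     return text

 def compare_outputs(expected, actual):
     """Return a unified diff of the normalised outputs; empty means a match."""
     return list(difflib.unified_diff(
         normalise(expected).splitlines(),
         normalise(actual).splitlines(),
         fromfile="expected", tofile="actual", lineterm="",
     ))

Only genuine differences survive the masking, so the remaining diff is usually short enough to analyse by eye.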
Questions
How complex are the automated test cases? Can they be split up?
Are the failures in the SUT or in the automation?
Are complex "scenario" test cases really necessary?
What information do developers need to facilitate bug-fixing?
What information do automators need to isolate why an automated test has failed?
What information do testers need to determine the cause of a test failure?
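
The last three questions all concern what information a failure report should carry. One way to deliver it, sketched here for a pytest-based suite with a hypothetical collect_sut_diagnostics() helper, is to attach that context to every failing test's report from conftest.py:

 # conftest.py -- a sketch for a pytest-based suite
 import pytest

 def collect_sut_diagnostics():
     """Hypothetical helper: gather whatever the team agrees is useful
     (SUT version, configuration in use, relevant log excerpts, ...)."""
     return "SUT version, active configuration, last 50 log lines, ..."

 @pytest.hookimpl(hookwrapper=True)
 def pytest_runtest_makereport(item, call):
     outcome = yield
     report = outcome.get_result()
     # Annotate only the test call phase, and only when it failed.
     if report.when == "call" and report.failed:
         report.sections.append(("SUT diagnostics", collect_sut_diagnostics()))

The diagnostics then travel with the test result itself, rather than having to be collected by hand after the run.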
Resolving Patterns
Most recommended:
- EASY TO DEBUG FAILURES: This is the pattern you need to solve this issue.
- COMPARISON DESIGN: Are you using the right type and sensitivity of result comparisons? Use this pattern to make sure your test result comparisons are as efficient as possible.
- KEEP IT SIMPLE: Always apply this pattern.
- EXPECTED FAIL STATUS: Use this pattern when a minor bug is causing many automated tests to fail (Example 9); a sketch follows this list.
- ONE-CLICK RETEST: Apply this pattern to solve issues like Example 3.
- READABLE REPORTS: This pattern, together with EASY TO DEBUG FAILURES, should help to solve this issue.
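
To make EXPECTED FAIL STATUS concrete for Example 9, here is a minimal pytest sketch; generate_footer() and the defect it stands in for are invented placeholders:

 import pytest

 def generate_footer():
     """Stand-in for the real SUT call; returns the known-bad output."""
     return "Page 1 of  1"   # double space: the minor cosmetic defect

 EXPECTED_FOOTER = "Page 1 of 1"

 # Marking the test xfail keeps the known, won't-fix defect out of the
 # failure list; pytest reports it separately as "xfailed".
 @pytest.mark.xfail(reason="Known minor defect, will not be fixed (Example 9)")
 def test_report_footer():
     assert generate_footer() == EXPECTED_FOOTER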
Other useful patterns:
- COMPARE WITH PREVIOUS VERSION: This pattern makes it easy to find failures, but not to analyse them.
- FRESH SETUP: Apply this pattern if you have issues like Example 2.
- TAKE SMALL STEPS: This pattern is always useful.
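
For Example 4, one possible approach, sketched here for a pytest suite with a hypothetical KEEP_SUT_STATE environment variable, is a fixture that skips its own clean-up on demand instead of forcing the tester to edit the test:

 import os
 import shutil
 import tempfile
 import pytest

 # KEEP_SUT_STATE is an invented switch, not a standard pytest option.
 KEEP_STATE = os.environ.get("KEEP_SUT_STATE") == "1"

 @pytest.fixture
 def sut_workdir():
     """Directory the SUT writes its output into during a test."""
     path = tempfile.mkdtemp(prefix="sut_")
     yield path
     if KEEP_STATE:
         # Leave everything in place so the tester can examine the SUT state
         # without editing the test or rerunning it.
         print("SUT state kept for inspection:", path)
     else:
         shutil.rmtree(path, ignore_errors=True)

A run started as KEEP_SUT_STATE=1 pytest leaves the working directories in place for inspection; a normal run cleans up exactly as before.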
Main Page / Back to Execution Issues / Back to Test Automation Issues