INEFFICIENT FAILURE ANALYSIS
Main Page / Back to Execution Issues / Back to Test Automation Issues
Issue summary
Failure Analysis is slow and difficult
Category
Execution
Examples
1. Automated test cases perform complex scenarios, making it difficult to pinpoint the exact cause of a failure.
2. Automated test cases depend on previous tests. If a failure originates in an earlier test, it is very difficult to trace.
3. Failures are caused by differences in long, amorphous text files that are difficult to parse manually, even with compare tools (see the diff sketch after this list).
4. In order to examine the status of the SUT after a test has run, it is often necessary to rerun it. If tests routinely clean up after themselves, you have to temporarily remove the clean-up actions before rerunning the test.
5. Results can only be compared in the GUI.
6. Results are spread across different databases or media, and it is difficult to put them together in a meaningful way.
7. Results change randomly.
8. Results change from release to release.
9. A minor bug which isn't going to be fixed makes too many of the automated tests fail, but the failures still need to be looked at (and then ignored).
10. Comparison of results takes a long time, or results which should abort a test run are not picked up until the test has finished.
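Examples 3, 7, and 8 are often tackled by normalizing volatile content before comparing. Below is a minimal Python sketch, assuming the results are plain-text files; the volatile patterns shown (timestamps, memory addresses, session ids) are placeholders to be replaced with whatever actually varies in your output.

```python
import difflib
import re

# Hypothetical normalization rules: patterns for volatile fields that make
# two otherwise equivalent result files look different on every run.
VOLATILE = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"0x[0-9a-fA-F]+"), "<ADDRESS>"),
    (re.compile(r"session-\d+"), "<SESSION-ID>"),
]


def normalize(text: str) -> list[str]:
    """Mask volatile fields so the diff shows only real differences."""
    for pattern, token in VOLATILE:
        text = pattern.sub(token, text)
    return text.splitlines()


def meaningful_diff(expected: str, actual: str) -> str:
    """Return a unified diff of the normalized texts; empty if equivalent."""
    return "\n".join(difflib.unified_diff(
        normalize(expected), normalize(actual),
        fromfile="expected", tofile="actual", lineterm="",
    ))
```

Two log lines differing only in their timestamps now compare as equal, while a genuine difference still shows up in the diff. Running the comparison as results arrive, rather than once at the end, also helps with Example 10: a mismatch that should abort the run is detected immediately instead of after the test has finished.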
Questions
How complex are the automated test cases? Can they be split up?
Are the failures in the SUT or in the automation?
Are complex "scenario" test cases really necessary?
What information do developers need to facilitate bug-fixing?
What information do automators need to isolate why an automated test has failed?
What information do testers need to determine the cause of a test failure?
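The last three questions suggest capturing that information automatically at the moment of failure. Below is a minimal sketch using pytest's standard reporting hook; collect_diagnostics() is a hypothetical, project-specific helper, and what it gathers depends on what your developers, automators, and testers actually ask for.

```python
# conftest.py
import pytest


def collect_diagnostics() -> str:
    """Hypothetical helper: gather what each audience needs to analyze a
    failure, e.g. SUT version and configuration for developers, the failing
    automation step for automators, recent SUT log lines for testers."""
    return "SUT version: ...\nlast SUT log lines: ...\n"


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Standard pytest hook; attach diagnostics to every failed test."""
    outcome = yield
    report = outcome.get_result()
    # Only for failures of the test body itself, not of setup or teardown.
    if report.when == "call" and report.failed:
        report.sections.append(("SUT diagnostics", collect_diagnostics()))
```

The extra section then appears in the test report next to the traceback, so failure analysis starts with the context already collected.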
Resolving Patterns
Most recommended:
- EASY TO DEBUG FAILURES: This is the pattern you need to solve this issue.
- COMPARISON DESIGN: Are you using the right type and sensitivity of result comparisons? Use this pattern to make sure your test result comparisons are as efficient as possible.
- KEEP IT SIMPLE: Always apply this pattern.
- EXPECTED FAIL STATUS: Use this pattern when a minor bug is causing many automated tests to fail (sketched after this list).
- ONE-CLICK RETEST: Apply this pattern to solve issues like Example 4.
- READABLE REPORTS: This pattern, together with EASY TO DEBUG FAILURES, should help solve this issue.
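As an illustration of EXPECTED FAIL STATUS: most test frameworks can mark a test as expected to fail against a known bug. The sketch below uses pytest's built-in xfail marker; format_amount and the bug id BUG-123 are hypothetical stand-ins.

```python
import pytest


def format_amount(value: float) -> str:
    """Hypothetical SUT helper; the comma decimal separator stands in
    for the real, known won't-fix defect."""
    return f"{value:.2f}".replace(".", ",")


# EXPECTED FAIL STATUS: the run stays green and reports this as an
# expected failure (xfail), so nobody re-analyzes the same known bug
# every day; if the bug is ever fixed, the marker is simply removed.
@pytest.mark.xfail(reason="BUG-123 (won't fix): comma used as decimal separator")
def test_amount_uses_dot_as_decimal_separator():
    assert format_amount(10.5) == "10.50"
```

For ONE-CLICK RETEST, pytest's built-in --lf (--last-failed) option is a close fit: it reruns only the tests that failed in the previous run.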
Other useful patterns:
- COMPARE WITH PREVIOUS VERSION: This pattern makes it easy to find failures, but not to analyze them.
- FRESH SETUP: Apply this pattern if you have issues like Example 2 (sketched after this list).
- TAKE SMALL STEPS: This pattern is always useful.
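FRESH SETUP in code form: give every test its own pristine state so no failure can be inherited from an earlier test (Example 2). A minimal sketch using a pytest fixture; the in-memory SQLite database is a stand-in for whatever state your SUT accumulates.

```python
import sqlite3

import pytest


@pytest.fixture
def fresh_db():
    # FRESH SETUP: every test gets its own throwaway database, so a
    # failure can never be caused by leftovers from an earlier test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    yield conn
    conn.close()


def test_insert_order(fresh_db):
    fresh_db.execute("INSERT INTO orders (total) VALUES (12.5)")
    assert fresh_db.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```

Because the state is rebuilt for each test, a failing test can also be rerun in isolation without first undoing another test's clean-up (compare Example 4).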