Design Issues
Design issues are the test automation problems that can occur when an efficient testware architecture and maintainability are not built in from the very beginning. The table below gives a short list of the issues; clicking on an issue shows more detail and the patterns needed to resolve it.
The Design Issues Mind Map shows an overview of the Design Issues, together with the top level of resolving patterns, both the Most Recommended and other Useful patterns.
| Issue | Description |
|---|---|
| BRITTLE SCRIPTS | Automation scripts have to be reworked for any small change to the Software Under Test (SUT); see the sketch below the table. |
| CAN'T FIND WHAT I WANT | A script, file, or dataset exists, but you don't remember what it is called or where to find it. |
| COMPLEX ENVIRONMENT | The environment in which the Software Under Test (SUT) has to run is complex. |
| DATE DEPENDENCY [2] | Tests depend on a specific date; see the sketch below the table. |
| GIANT SCRIPTS [2] | Scripts span thousands of lines. |
| HARD-TO-AUTOMATE | The Software Under Test (SUT) supports automated testing poorly or not at all. |
| HARD-TO-AUTOMATE RESULTS | Preparing the expected results is slow and difficult. |
| INCONSISTENT DATA | The data needed for the automated test cases changes unpredictably. |
| INTERDEPENDENT TEST CASES | Test cases depend on each other, that is, they can only be executed in a fixed sequence. |
| LONG SET-UP | Set-up of the initial conditions for the test cases is long and complicated. |
| MANUAL MIMICRY [1] | Automation mimics manual tests without searching for more efficient solutions. |
| MULTIPLE PLATFORMS | The same tests are supposed to run on many different operating systems or browsers. |
| OBSCURE TESTS | Automated tests are very complex and difficult to understand. |
| REPETITIOUS TESTS | Test cases repeat the same actions on different data. |
| TOO EARLY AUTOMATION | Test automation starts too early, on an immature application or on the wrong aspect of an application, and produces only "noise". |
| TOOL DEPENDENCY | Test automation is strongly dependent on one particular tool. |
| UNAUTOMATABLE TEST CASES | Existing test cases are "unautomatable", i.e. difficult if not impossible to automate. |
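To make BRITTLE SCRIPTS more concrete, here is a minimal sketch in Python, assuming the Selenium WebDriver library; the LoginPage class, its locators, and the URL are hypothetical. Routing every UI interaction through one page class means a small change to the SUT's login screen is fixed in a single place rather than in every script that touches that screen.

```python
# A minimal sketch, assuming Selenium WebDriver for Python.
# The page class, locators, and URL are hypothetical, for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Keeps every locator in one place, so a UI change is fixed once."""

    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # placeholder URL
        LoginPage(driver).log_in("alice", "secret")
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```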
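Similarly for DATE DEPENDENCY, a minimal sketch, again in Python, of a test that supplies its own date instead of relying on the system clock; the is_contract_expired function and the dates used are hypothetical examples, not part of any particular SUT.

```python
# A minimal sketch of removing a hard-coded date from a test.
# is_contract_expired and the dates are hypothetical examples.
from datetime import date


def is_contract_expired(expiry: date, today: date = None) -> bool:
    """SUT logic accepts 'today' as a parameter instead of reading the system clock."""
    today = today or date.today()
    return expiry < today


def test_expired_contract():
    # The test injects its own 'today', so it passes on any calendar date.
    assert is_contract_expired(expiry=date(2018, 6, 27), today=date(2018, 7, 1))


def test_active_contract():
    assert not is_contract_expired(expiry=date(2018, 6, 27), today=date(2018, 6, 1))
```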
Main Page
Back to Test Automation Issues
Back to Management Issues
Forward to Execution Issues
[1] Suggested by Michael Stahl
[2] Suggested by Dave Martin