TOOL-DRIVEN AUTOMATION
Revision as of 06:52, 5 May 2018
Issue Summary
Test cases are automated by using the features of a test automation tool “as is”.
Category
Process
Examples
In older tools this means using the capture functionality to develop test cases: while a test case is executed manually the tool records all the tester actions in a proprietary script. By replaying the script the test case can be executed again automatically (look up the pattern CAPTURE-REPLAY for more details).
In more modern tools you can record "keyword" scripts, but the mode of operation is actually much the same as in capture-replay.
Both approaches harbour serious problems:
The speed and ease of recording test cases can induce testers to record more and more tests (keyword scripts) without considering GOOD PROGRAMMING PRACTICES such as modularity and standards that enhance reuse and thus maintainability. Without reuse, the effort to keep the automation up to date with changes in the SUT (System Under Test) becomes more and more burdensome until you end up with STALLED AUTOMATION.
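The maintenance problem described above can be illustrated with a small, purely hypothetical sketch (the action tuples and the `create_order` helper are invented for illustration, not taken from any real tool). Each recorded script repeats every low-level step verbatim, so a renamed control in the SUT forces an edit in every copy; a single reusable function concentrates the flow in one place.

```python
# Hypothetical recorded scripts: every low-level step is duplicated verbatim,
# so a UI change (e.g. a renamed submit button) must be fixed in each copy.
recorded_script_1 = [
    ("click", "menu_orders"),
    ("type", "customer_field", "Alice"),
    ("click", "submit_button"),
]
recorded_script_2 = [
    ("click", "menu_orders"),
    ("type", "customer_field", "Bob"),
    ("click", "submit_button"),
]

# Modular alternative: the flow is captured once, parameterised by customer.
def create_order(customer):
    """Return the action list for creating an order for `customer`."""
    return [
        ("click", "menu_orders"),
        ("type", "customer_field", customer),
        ("click", "submit_button"),
    ]

# If the submit button is renamed, only create_order() needs updating,
# not every recorded script.
assert create_order("Alice") == recorded_script_1
assert create_order("Bob") == recorded_script_2
```

The point is not the data structure itself but the single point of change: reuse turns N script edits per SUT change into one.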
Resolving Patterns
Most recommended:
- TOOL INDEPENDENCE: Separate the technical implementation that is specific for the tool from the functional implementation.
- GOOD PROGRAMMING PRACTICES: Use the same good programming practices for test code as in software development for production code.
- LAZY AUTOMATOR: Lazy people are the best automation engineers.
- TEST AUTOMATION OWNER: Appoint an owner for the test automation effort.
- TESTWARE ARCHITECTURE: Design the structure of your testware so that your automators and testers can work as efficiently as possible.
- WHOLE TEAM APPROACH: Testers, coders and other roles work together on one team to develop test automation along with production code.
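TOOL INDEPENDENCE, the first pattern above, can be sketched as a thin interface between the functional test layer and the tool. The interface, the `FakeDriver` stand-in and the `login` flow below are all invented for illustration; a real adapter would wrap the concrete tool's API behind the same interface.

```python
from abc import ABC, abstractmethod

# Hypothetical tool-independent interface: functional test code talks only
# to UiDriver; each automation tool gets its own small adapter class.
class UiDriver(ABC):
    @abstractmethod
    def click(self, element_id: str) -> None: ...

    @abstractmethod
    def type_text(self, element_id: str, text: str) -> None: ...

class FakeDriver(UiDriver):
    """Stand-in adapter that records actions; a real adapter would
    forward each call to the concrete tool's API instead."""
    def __init__(self):
        self.log = []

    def click(self, element_id):
        self.log.append(("click", element_id))

    def type_text(self, element_id, text):
        self.log.append(("type", element_id, text))

# Functional implementation: no tool-specific code, only the interface.
def login(driver: UiDriver, user: str, password: str) -> None:
    driver.type_text("user_field", user)
    driver.type_text("password_field", password)
    driver.click("login_button")

driver = FakeDriver()
login(driver, "alice", "secret")
assert driver.log[-1] == ("click", "login_button")
```

Swapping tools then means writing one new adapter, while all functional test flows such as `login` remain untouched.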
Other useful patterns:
- LOOK FOR TROUBLE: Keep an eye on possible problems in order to solve them before they become unmanageable.
- MAINTAINABLE TESTWARE: Design your testware so that it does not have to be updated for every little change in the Software Under Test (SUT).
- SET STANDARDS: Set and follow standards for the automation artefacts.
- GET TRAINING: Plan to get training for all those involved in the test automation project.
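One common way to apply MAINTAINABLE TESTWARE is to centralise element locators so that a renamed control in the SUT means a one-line change. The mapping and selector strings below are invented examples, not part of any real tool.

```python
# Hypothetical central locator map: tests refer to logical names only,
# never to raw selectors, so a renamed control is a one-line change here.
LOCATORS = {
    "login_button": "button#login",
    "user_field": "input[name='username']",
}

def locator(name: str) -> str:
    """Look up a logical element name; raises KeyError for unknown names,
    which surfaces missing locators early instead of in mid-test."""
    return LOCATORS[name]

assert locator("login_button") == "button#login"
```

Combined with TOOL INDEPENDENCE, this keeps both the tool-specific calls and the SUT-specific selectors out of the functional test flows.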