MANUAL MIMICRY
Issue Summary
Automation mimics manual tests without searching for more efficient solutions
Category
Design
Examples
The story, which we have from Michael Stahl, who called this issue the Sorcerer's Apprentice Syndrome, goes as follows:
Everyone, I assume, is familiar with Disney’s Fantasia. The piece about the Sorcerer's Apprentice is probably the best known part. Let’s look at what happens in this scene with professional eyes:
Mickey is assigned a repetitive, boring task: he has to carry water from a big well to a smaller pool located many steps lower. After doing it for a while, he figures out that this problem can be solved by automation. He takes a broom and quickly writes a script in his favorite programming language to make the broom execute the job automatically. The design of the automated system mimics exactly the actions Mickey would use to perform the same task: two hands, two buckets, walk to the well, fill the buckets, walk to the destination pool, empty the buckets.
What's this got to do with automation?
This is a common occurrence in test automation: manual test cases are taken step by step, and each step is translated to code that performs the exact same action.
And here lies the mistake. Mimicking human actions sounds straightforward, but it is sometimes hard to accomplish programmatically and is frequently inefficient. Consider the problem Mickey is faced with: “Transfer water from one location to another; the water well is at a higher elevation than the destination pool”. Mickey’s solution is mechanically unstable and complex: a tall, unbalanced structure; mechanical arms that move in a number of directions and must support the weight of the water; the ability to go up and down stairs. Ask a mechanical engineer – building this machine is pretty difficult; some magic would probably help. Compare it to the trivial solution: a pipe and gravity.
However, arriving at this efficient solution means departing from the written test steps – which calls for an ability to distance oneself from the immediate task.
Many automation solutions – definitely those that started small and mushroomed – suffer from this problem. The step-by-step conversion of a manual test to an automated one results in an inefficient, complex and brittle test automation system.
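To make the difference concrete, here is a minimal sketch in pytest style contrasting a test that mimics the manual steps through the GUI with one that achieves the same purpose through a leaner interface. The browser and api_client fixtures, the order application and its prices are all hypothetical, invented only to show the shape of the two approaches.

 # Hypothetical example only: "browser", "api_client" and the order application are
 # invented stand-ins, not a real library API.

 # Mimicking the manual test step by step: every human action becomes a GUI action.
 def test_order_total_via_gui(browser):
     browser.open("/login")
     browser.fill("username", "alice")
     browser.fill("password", "secret")
     browser.click("Log in")
     browser.click("New order")
     browser.fill("item", "widget")
     browser.fill("quantity", "3")
     browser.click("Add to order")
     # assuming a widget costs 10.00 in the hypothetical test data
     assert browser.read_text("order-total") == "30.00"

 # The "pipe and gravity" version: set up the state and check the one thing the test
 # exists to check, through the cheapest reliable interface.
 def test_order_total_via_api(api_client):
     order = api_client.create_order(user="alice", items=[("widget", 3)])
     assert order.total == 30.0

The second test is shorter, faster and far less brittle – but writing it requires exactly the step back from the written manual procedure described above.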
Derek Bergin explains further:
The automation team often tackles technical debt by just blindly automating the manual test suite.
Most manual test suites have evolved over time with the development of the product under test. Rarely, if ever, are they refactored to remove redundancy. Furthermore, even a well-designed manual test is planned around the time and boredom limitations of a human. Simply taking these test cases and automating them wastes a huge opportunity to use automation properly.
The existing tests should be examined for their purpose and for the validation criteria they use. Evaluate the tests to decide the order in which to automate them. My preference is to use ‘amount of tester time saved’ as a primary indicator of ‘value’, and thus allow the warm brains to focus on the things they do best – like exploratory testing.
Comments from Dot:
Trying to "automate all manual tests" is a mistake in two ways:
- Not all manual tests should be automated! Tests that take a long time to automate and are not run often, tests for usability issues (do the colours look nice? is this the way the users will do it?), and some technical aspects (e.g. CAPTCHA) are better left as manual tests.
- If you automate ONLY your manual tests, you are missing some important benefits of automation (as Derek mentions). These include additional verification, ways of testing other values around a central test point, and some new forms of automated testing using pseudo-random input generation and heuristic oracles (a sketch follows below).
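As a concrete illustration of that last point, here is a small, self-contained sketch of pseudo-random input generation combined with a heuristic oracle. The function under test, slow_sort, is an invented stand-in; the point is the technique, not the function.

 import random
 from collections import Counter

 def slow_sort(values):
     # Invented stand-in for the real code under test.
     return sorted(values)

 def test_sort_with_random_inputs():
     rng = random.Random(42)                       # fixed seed keeps failures reproducible
     for _ in range(200):
         data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
         result = slow_sort(data)
         # Heuristic oracle: we never compute the exact expected output for each random
         # input; we only check properties that every correct output must have.
         assert all(a <= b for a, b in zip(result, result[1:]))  # output is ordered
         assert Counter(result) == Counter(data)                 # same elements, nothing lost or invented

 test_sort_with_random_inputs()

Two hundred generated cases of this kind would be tedious to run by hand, yet cost the machine almost nothing – exactly the kind of test a manual suite will never contain.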
Questions
Who designs the test cases to be automated? Are the automated tests just a copy of the manual test?
Do the automators "understand" the application they are automating? Can they see ways of achieving their goals with automation that might differ from the way a manual test would be run?
Have different ways of organising the automated tests been considered, taking advantage of things that are easier to do with a computer than with human testers? (e.g. longer test runs but with many short independent tests)
Have the tests considered additional verification that could be done with automated tests that would be difficult or impossible with manual tests? (e.g. checking the state of a GUI object "behind the scenes")
Resolving Patterns
Most recommended:
- KEEP IT SIMPLE: Use the simplest solution you can imagine.
- KEYWORD-DRIVEN TESTING: Tests are driven by keywords that represent actions of a test, and that include input data and expected results (see the sketch after this list).
- LAZY AUTOMATOR: Lazy people are the best automation engineers.
- ONE CLEAR PURPOSE: Each test has only one clear purpose.
- THINK OUT-OF-THE-BOX: Try to look at the problem from unusual viewpoints.
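Here is a minimal, self-contained sketch of the keyword-driven idea mentioned in the list above. The keywords and the toy calculator are invented for illustration; real keyword-driven frameworks (Robot Framework, for example) supply this machinery ready-made.

 class Calculator:
     # Invented stand-in for the application under test.
     def __init__(self):
         self.value = 0
     def add(self, n):
         self.value += n
     def multiply(self, n):
         self.value *= n

 def check_equal(actual, expected):
     assert actual == expected, f"expected {expected}, got {actual}"

 def run_keyword_test(test_table):
     calc = Calculator()
     keywords = {                       # each keyword maps to one action on the application
         "add": lambda arg: calc.add(int(arg)),
         "multiply": lambda arg: calc.multiply(int(arg)),
         "result should be": lambda arg: check_equal(calc.value, int(arg)),
     }
     for keyword, argument in test_table:
         keywords[keyword](argument)

 # Testers describe tests as tables of keywords, input data and expected results,
 # without touching the automation code itself:
 run_keyword_test([
     ("add", "2"),
     ("multiply", "5"),
     ("result should be", "10"),
 ])

Because the keywords describe intent rather than GUI clicks, the implementation underneath each keyword is free to take the most efficient route – the pipe, not the buckets.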
Other useful patterns:
- DOMAIN-DRIVEN TESTING: Develop a domain-specific language in which testers write their automated test cases (a small sketch follows).
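A minimal sketch of what such a domain-specific language might look like, assuming a hypothetical online-shop domain. Testers write in the vocabulary of the business; the methods are free to drive the product through whatever interface the automators find most efficient (here just an in-memory list, so the example actually runs).

 class ShopTest:
     # Invented, hypothetical domain language for an online shop.
     def __init__(self):
         self.basket = []

     def a_customer_with_an_empty_basket(self):
         self.basket = []
         return self

     def who_adds(self, item, quantity=1):
         self.basket.extend([item] * quantity)
         return self

     def should_have_items_in_the_basket(self, expected_count):
         assert len(self.basket) == expected_count, (
             f"expected {expected_count} items, found {len(self.basket)}")
         return self

 # The test reads almost like the manual test description it replaces:
 (ShopTest()
     .a_customer_with_an_empty_basket()
     .who_adds("widget", 2)
     .who_adds("gadget")
     .should_have_items_in_the_basket(3))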
A related issue is INTERDEPENDENT TEST CASES, where tests depend on the results of previous tests.