MANUAL MIMICRY

Issue Summary

Automation mimics manual tests without searching for more efficient solutions

Category

Design

Examples

The story, which we have from Michael Stahl, who called this issue the Sorcerer's Apprentice Syndrome, goes as follows:

Everyone, I assume, is familiar with Disney’s Fantasia. The piece about the Sorcerer's Apprentice is probably the best known part. Let’s look at what happens in this scene with professional eyes:

Mickey is assigned a repetitive, boring task: he has to carry water from a big well to a smaller pool located many steps lower. After doing it for a while, he figures out that this problem can be solved by automation. He takes a broom and quickly writes a script in his favorite programming language to make the broom execute the job automatically. The design of the automated system mimics exactly the actions Mickey would use to perform the task himself: two hands, two buckets, walk to the well, fill the buckets, walk to the destination pool, empty the buckets.

What's this got to do with automation?
This is a common occurrence in test automation: manual test cases are taken step by step, and each step is translated to code that performs the exact same action.

And here lies the mistake. Mimicking human actions sounds straightforward, but it is sometimes hard to accomplish programmatically and is frequently inefficient. Consider the problem Mickey is faced with: “Transfer water from one location to another; the water well is at a higher elevation than the destination pool”. Mickey’s solution is mechanically unstable and complex: a tall, unbalanced structure; mechanical arms that move in a number of directions and must support the weight of the water; the ability to go up and down stairs. Ask a mechanical engineer: building this machine is pretty difficult; some magic would probably help. Compare it to the trivial solution: a pipe and gravity.

However, arriving at this efficient solution means departing from the written test steps, which calls for the ability to distance oneself from the immediate task.

Many automation solutions, certainly those that started small and mushroomed, suffer from this problem. The step-by-step conversion of a manual test to an automated one results in an inefficient, complex and brittle test automation system.
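
To make the contrast concrete, here is a minimal sketch in Python, written as pytest-style test functions and assuming the selenium and requests packages. The web shop, its URLs, element ids, credentials and expected total are all invented for illustration; the point is only the shape of the two tests. The first mimics the manual script click by click through the GUI; the second reaches the same verification point through the application's API, the test automation equivalent of the pipe and gravity.

    import requests
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BASE = "https://shop.example.test"   # hypothetical application under test

    def test_basket_total_manual_mimicry():
        """Mickey's broom: replays every step of the manual test through the GUI."""
        driver = webdriver.Firefox()
        try:
            driver.get(BASE + "/login")
            driver.find_element(By.ID, "username").send_keys("alice")   # manual step 1
            driver.find_element(By.ID, "password").send_keys("secret")  # manual step 2
            driver.find_element(By.ID, "login-button").click()          # manual step 3
            driver.get(BASE + "/catalogue")
            driver.find_element(By.ID, "add-widget").click()            # manual step 4
            driver.find_element(By.ID, "add-gadget").click()            # manual step 5
            driver.get(BASE + "/basket")
            # slow and brittle: any cosmetic change to the pages above breaks it
            assert driver.find_element(By.ID, "basket-total").text == "42.00"
        finally:
            driver.quit()

    def test_basket_total_pipe_and_gravity():
        """The pipe: same purpose, reached directly through the application's API."""
        session = requests.Session()
        session.post(BASE + "/api/login",
                     json={"user": "alice", "password": "secret"}, timeout=10)
        basket = session.post(BASE + "/api/baskets",
                              json={"items": ["widget", "gadget"]}, timeout=10)
        assert basket.json()["total"] == "42.00"

Only a test whose purpose is the GUI itself needs to go through the GUI; everything else can take the pipe.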

Derek Bergin explains further:
The automation team often tackles technical debt by just blindly automating the manual test suite.

Most manual test suites have evolved over time with the development of the product under test. Rarely, if ever, are they refactored to remove redundancy. Furthermore, even a well-designed manual test is planned around the time and boredom limitations of a human. Simply taking these test cases and automating them wastes a huge opportunity to use automation properly.

The existing tests should be examined for their purpose and the validation criteria they use, and then evaluated to decide the order of automation. My preference is to use ‘amount of tester time saved’ as the primary indicator of ‘value’, which lets the warm brains focus on the things they do best, like exploratory testing.
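
The ‘amount of tester time saved’ heuristic is easy to turn into a rough ranking. A minimal sketch in Python, with all figures invented for illustration:

    # Rank candidate tests by tester hours saved per month for each hour
    # invested in automating them. All numbers are invented for illustration.
    candidates = [
        # (name, minutes per manual run, runs per month, hours to automate)
        ("regression: invoice totals", 30, 20, 16),
        ("smoke: login and landing",    5, 60,  4),
        ("usability: colour scheme",   15,  1, 40),  # rarely run, hard to automate
    ]

    def value(minutes_manual, runs_per_month, hours_to_automate):
        hours_saved_per_month = minutes_manual * runs_per_month / 60.0
        return hours_saved_per_month / hours_to_automate  # payback rate; higher is better

    for name, mins, runs, cost in sorted(candidates, key=lambda c: value(*c[1:]),
                                         reverse=True):
        print(f"{name}: {value(mins, runs, cost):.2f}")

On these numbers the smoke test pays back first (1.25) and the usability check last (0.01), which matches the intuition that the usability work should stay with the warm brains.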

Comments from Dot:
Trying to "automate all manual tests" is a mistake in two ways:

  1. Not all manual tests should be automated! Tests that take a long time to automate and are not run often, tests for usability issues (do the colours look nice? is this the way the users will do it?), and some technical aspects (e.g. captcha) are better as manual tests.
  2. If you automate ONLY your manual tests, you are missing some important benefits of automation (as Derek mentions). This includes additional verification, ways of testing other values around a central test point, and some new forms of automated testing using pseudo-random input generation and heuristic oracles (see the sketch below).
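
As a concrete illustration of point 2, here is a minimal sketch in Python of pseudo-random input generation with a heuristic oracle. The calculate_discount function is a hypothetical stand-in for the code under test; instead of replaying the one or two fixed values a manual script would use, the test generates a thousand inputs across and around the interesting boundary and checks properties that must hold for every one of them.

    import random

    def calculate_discount(order_value):
        """Hypothetical stand-in for the application code under test."""
        return order_value * 0.10 if order_value >= 100 else 0.0

    random.seed(42)  # fixed seed so any failure is reproducible
    for _ in range(1000):
        value = round(random.uniform(0, 200), 2)  # spans the 100 boundary region
        discount = calculate_discount(value)
        # heuristic oracle: we don't know the exact expected result for every
        # input, but these properties must hold for all of them
        assert 0 <= discount <= value, (value, discount)
        assert discount > 0 or value < 100, (value, discount)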


Questions

Who designs the test cases to be automated? Are the automated tests just a copy of the manual tests?
Do the automators "understand" the application they are automating? Can they see ways of achieving their goals with automation that might differ from the way a manual test would be run?
Have different ways of organising the automated tests been considered, taking advantage of things that are easier to do with a computer than with human testers? (e.g. longer test runs but with many short independent tests)
Has additional verification been considered that could be done with automated tests but would be difficult or impossible with manual tests? (e.g. checking the state of a GUI object "behind the scenes"; see the sketch below)
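
For the last question, a minimal sketch of "behind the scenes" verification using Selenium; the page, element id and window.basket object are hypothetical. A human tester can only see that the button looks greyed out; the automated test can also inspect the state the GUI object is actually in.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://shop.example.test/basket")  # hypothetical page
        checkout = driver.find_element(By.ID, "checkout-button")
        # the visible check a manual tester would also make
        assert not checkout.is_enabled()
        # behind-the-scenes checks a manual tester cannot make
        assert checkout.get_attribute("aria-disabled") == "true"
        assert driver.execute_script("return window.basket.items.length") == 0
    finally:
        driver.quit()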

Resolving Patterns

Most recommended:

* LAZY AUTOMATOR: Lazy people are the best automation engineers.
* ONE CLEAR PURPOSE: Each test has only one clear purpose.
* THINK OUT-OF-THE-BOX: try to look at the problem from unusual viewpoints

Other useful patterns:

* DOMAIN-DRIVEN TESTING
A related issue is INTERDEPENDENT TEST CASES, where tests depend on the results of previous tests.
