LEARN FROM MISTAKES
Pattern summary
Use mistakes to learn to do better next time.
Category
Process
Context
This pattern is appropriate when a previous test automation project failed (you can also learn from mistakes other people made!) or when the current automation effort is not getting anywhere.
This pattern is also useful when test automation doesn't deliver the results you expect in some area (for instance reporting, tool use, complexity).
Since we don't know anyone who has never made a mistake, we think this pattern is always applicable.
Description
Analyse any failures and deficiencies to find out what went wrong so that next time you don't make the same mistakes.
Implementation
Get all the concerned people together (managers, testers, the test automation team etc.) and examine what was good and what went wrong. Discuss how you could do better next time and adapt accordingly. This may affect processes, procedures, responsibilities, training, standards, etc.
This is not something that you do only once; regularly look back at your recent or past experiences and see how you can improve.
Potential problems
Sometimes what seems to be a disaster can be a useful catalyst for change. When something goes very wrong, it is much easier to convince people that something must change, be it the process, the tool, or whatever. Be sure to exploit this advantage!
Be very careful to establish a "learning culture" not a "blame culture". If you "point the finger" at individuals, you will only encourage better hiding of mistakes next time. Mistakes are very rarely down to one person; the context and culture also contribute, so everyone is responsible if mistakes are made.
Issues addressed by this pattern
DATA CREEP
INADEQUATE TOOLS
OBSCURE MANAGEMENT REPORTS
OBSCURE TESTS
SCRIPT CREEP
STALLED AUTOMATION
SUT REMAKE
UNFOCUSED AUTOMATION
UNMOTIVATED TEAM
Experiences
Jochim Van Dorpe writes:
Here are some mistakes that we learned from:
Checking too much
Putting too many assertions (checks) in my automated tests caused more maintenance on the one hand, and on the other hand made tests fail because of bugs that weren't actually related to the item under test.
For example: suppose a record has columns A, B, C, D and E. The test should pass if the values for columns C & D are correctly calculated and stored. I now add only C & D to my expected results, so changes to columns A, B & E have no maintenance cost, but neither will the test fail when a problem arises in A, B or E.
(Note from Dot: We call this a "robust" test. If you had, say, one or two tests that checked all five columns, those would pick up a problem in columns A, B & E - we call that a "sensitive" test (look up SENSITIVE COMPARE). For high-level regression it is good to have sensitive tests, to pick up any unexpected changes, but for a large number of detailed tests it is better to have robust tests, so that you are only checking what you are interested in (look up SPECIFIC COMPARE) and you don't have dozens or hundreds of tests failing for the same reason (one that you aren't interested in anyway). Hence you are saving failure analysis time as well as maintenance time.)
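To make the robust/sensitive distinction concrete, here is a minimal sketch in Python (pytest style). The calculate_record() function and its column values are hypothetical stand-ins for whatever really produces and stores the record; only the shape of the checks matters:

```python
def calculate_record():
    # Placeholder for the real system under test; in the example above the
    # record would come from the application's calculation and storage step.
    return {"A": 1, "B": 2, "C": 30, "D": 40, "E": 5}

def test_c_and_d_robust():
    """Robust test: check only the columns this test is about (C and D)."""
    record = calculate_record()
    assert record["C"] == 30
    assert record["D"] == 40
    # A change or bug in A, B or E neither breaks this test nor needs
    # any maintenance here.

def test_whole_record_sensitive():
    """Sensitive test: compare every column, to catch unexpected changes
    anywhere in the record."""
    record = calculate_record()
    assert record == {"A": 1, "B": 2, "C": 30, "D": 40, "E": 5}
```

A handful of sensitive tests like the second one can guard the record as a whole, while the many detailed tests stay robust like the first one.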
No comments
Automators should comment their automated tests just as developers should comment their program code. Because once our automators were gone, nobody understood any more what the tests did ...
So now reviewers check if the tests are well commented for readability and maintainability.
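A small sketch of the kind of commenting reviewers could look for: a docstring stating the test's intent and comments explaining why each step is there. The loyalty-discount rule and the function under test are invented purely for illustration:

```python
def apply_loyalty_discount(price_cents, previous_orders):
    """Invented business rule: returning customers get 10% off."""
    return price_cents * 90 // 100 if previous_orders > 0 else price_cents

def test_returning_customer_gets_loyalty_discount():
    """A customer with at least one previous order pays 10% less."""
    # Arrange: a price (in cents) and an order history that qualifies
    # for the discount.
    price_cents, previous_orders = 10_000, 1
    # Act: apply the pricing rule under test.
    discounted = apply_loyalty_discount(price_cents, previous_orders)
    # Assert: check only the discounted amount (keeping the test robust).
    assert discounted == 9_000
```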
Scenario tests
In the past I would have had one massive test case that executed all the different sub-'test cases' for a use case or piece of functionality with one dataset. So if that test passed, I knew that all the possible outcomes of the use case that were covered by the test case were correct. But when something went wrong, I could only be sure that at least one of the possible outcomes had failed to behave as expected; I didn't know which one(s), so I had to figure that out manually.
So we rebuilt our automated testing in such a way that I can now define a specific test case, with a specific result and a specific, well-defined dataset. I can still start the whole thing as before with one click, but if something fails now, I can immediately see how many test cases went wrong, where they went wrong, and with which part of the dataset.
In this way we saved the time we would have needed to investigate exactly what went wrong.
(Good way to minimise failure analysis time!)
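A minimal sketch of this kind of split, using pytest-style parameterised tests; the use case, the datasets and the expected outcomes below are invented for illustration:

```python
import pytest

def process_order(order):
    # Placeholder for the real use case under test.
    return "accepted" if order["quantity"] > 0 else "rejected"

TEST_CASES = [
    # (case id, dataset, expected outcome)
    ("valid_order",       {"quantity": 3},  "accepted"),
    ("zero_quantity",     {"quantity": 0},  "rejected"),
    ("negative_quantity", {"quantity": -1}, "rejected"),
]

@pytest.mark.parametrize("case_id, order, expected", TEST_CASES,
                         ids=[case[0] for case in TEST_CASES])
def test_order_outcomes(case_id, order, expected):
    # Each case passes or fails on its own, and the report names the case
    # and shows its dataset, so a failure points straight at the culprit.
    assert process_order(order) == expected
```

Running pytest on this file is still the single "one click" start, but a failure now shows up under its own case id (for example negative_quantity) together with the parameter values that triggered it.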
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Experiences to submit your experience or comment.