LEARN FROM MISTAKES

From Test Automation Patterns

Pattern summary

Use mistakes to learn to do better next time.

Category

Process

Context

This pattern is appropriate when a previous test automation project failed (you can also learn from mistakes other people made!) or when the current automation effort is not getting anywhere.
This pattern is also useful when test automation doesn't deliver the results you expect in some area (for instance reporting, tool use, complexity).
Since we don't know anyone who has never made a mistake, we think this pattern is always applicable.

Description

Analyse any failures and deficiencies to find out what went wrong so that next time you don't make the same mistakes.

Implementation

Get all the people concerned together (managers, testers, the test automation team, etc.) and examine what went well and what went wrong. Discuss how you could do better next time and adapt accordingly. This may affect processes, procedures, responsibilities, training, standards, etc.
This is not something that you do only once; regularly look back at your recent or past experiences and see how you can improve.

Potential problems

Sometimes what seems to be a disaster can be a useful catalyst for change. When something goes very wrong, people are much more easily convinced that something must change, whether it is the process, the tool, or something else. Be sure to exploit this advantage!
Be very careful to establish a "learning culture" not a "blame culture". If you "point the finger" at individuals, you will only encourage better hiding of mistakes next time. Mistakes are very rarely down to one person; the context and culture also contribute, so everyone is responsible if mistakes are made.

Issues addressed by this pattern

DATA CREEP
INADEQUATE TOOLS
OBSCURE MANAGEMENT REPORTS
OBSCURE TESTS
SCRIPT CREEP
STALLED AUTOMATION
SUT REMAKE
UNFOCUSED AUTOMATION
UNMOTIVATED TEAM

Experiences

Jochim Van Dorpe writes:
Here are some mistakes that we learned from:

Checking too much
Putting too many assertions (checks) in my automated tests caused more maintenance on the one hand, and on the other hand made tests fail due to bugs that weren't actually related to the item under test.

For example: suppose a record has columns A, B, C, D and E, and the test should pass if the values for columns C and D are correctly calculated and stored. I now only add C and D to my expected results, so changes to columns A, B and E have no maintenance cost, but neither will the test fail when a problem arises in A, B or E.

(Note from Dot: We call this a "robust" test. If you had, say, one or two tests that checked all five columns, they would pick up a problem in columns A, B & E - we call that a "sensitive" test (look up SENSITIVE COMPARE). For high-level regression, it is good to have sensitive tests, to pick up any unexpected changes, but for a large number of detailed tests, it is better to have robust tests, so that you are only checking what you are interested in (look up SPECIFIC COMPARE) and you don't have dozens or hundreds of tests failing for the same reason (one that you aren't interested in anyway). Hence you save failure analysis time as well as maintenance time.)
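
A minimal sketch of the two styles in pytest (the fetch_record() helper, the column names and the values are invented here purely for illustration; the original suite need not be in Python):

 # Hypothetical stand-in for reading a stored record back from the SUT.
 def fetch_record(record_id):
     return {"A": "x", "B": "y", "C": 100, "D": 250, "E": "z"}

 def test_calculation_of_c_and_d():
     # Robust / SPECIFIC COMPARE: check only the columns this test is about,
     # so changes to A, B or E cause no maintenance and no spurious failures.
     record = fetch_record(42)
     assert record["C"] == 100
     assert record["D"] == 250

 def test_whole_record_regression():
     # Sensitive / SENSITIVE COMPARE: one high-level test that checks every
     # column, so any unexpected change in A, B or E is still picked up.
     record = fetch_record(42)
     assert record == {"A": "x", "B": "y", "C": 100, "D": 250, "E": "z"}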

No comments
Automators should comment their automated tests, just as developers should comment their program code. Once our automators were gone, nobody understood any more what the tests did ...

So now reviewers check if the tests are well commented for readability and maintainability.
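
As an illustration of what such a review might look for, here is a hypothetical, self-contained pytest example (the ordering API and the figures are made up) where the comments record the purpose of the test and the reasoning behind the check:

 from dataclasses import dataclass

 @dataclass
 class Order:
     total: float

 def place_order(customer_id, amount):
     # Hypothetical stand-in for the real ordering API of the SUT:
     # customer 7 is set up in the test data as a returning customer.
     discount = 0.10 if customer_id == 7 else 0.0
     return Order(total=round(amount * (1 - discount), 2))

 def test_discount_for_returning_customer():
     # Purpose: a returning customer ordering over 100 EUR gets a 10% discount.
     order = place_order(customer_id=7, amount=120.00)
     # Check the final total, because the discount is applied at checkout
     # rather than on the individual line items.
     assert order.total == 108.00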

Scenario tests
In the past I would have had one massive test case that executed all the different sub-‘test cases’ for a use case or piece of functionality with one dataset. If that test passed, I knew that all the possible outcomes covered by the test case were correct. But when something went wrong, I could only be sure that at least one of the possible outcomes had failed to behave as expected; I didn’t know which one(s), so I had to figure that out manually.

So we rebuilt our automated testing so that I can now define a specific test case, with a specific expected result and a specific, well-defined dataset. I can still start everything with one click as before, but if something fails now, I can immediately see how many test cases went wrong, where they went wrong, and with which part of the dataset.

In this way we saved the time we would have needed to investigate exactly what went wrong.
(Good way to minimise failure analysis time!)
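
One common way to get this effect is to parameterise a single test over a table of specific cases, sketched here with pytest (the VAT calculation and the figures are invented for illustration): each row is a separate test case with its own data, every case passes or fails on its own, and the whole set still runs from one command.

 import pytest

 # Hypothetical calculation under test; in practice this would call the SUT.
 def apply_vat(net_amount, rate):
     return round(net_amount * (1 + rate), 2)

 # Each row is one specific, well-defined test case with its own dataset,
 # so a failure points directly at the case and the data that went wrong.
 @pytest.mark.parametrize("net, rate, expected", [
     (100.00, 0.21, 121.00),
     (100.00, 0.06, 106.00),
     (0.00,   0.21, 0.00),
 ])
 def test_vat_calculation(net, rate, expected):
     assert apply_vat(net, rate) == expected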


If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.

