SET STANDARDS
Pattern summary
Set and follow standards for the automation artefacts.
Category
Process
Context
This pattern is appropriate for long-lasting automation. It is essential for larger organisations and for large-scale automation efforts. This pattern is not needed for single-use scripts.
Description
Set and follow standards: otherwise, when many people work on the same project, everyone can easily end up using their own methods and processes. Team members do not “speak the same language” and so cannot share their work efficiently; you get OBSCURE TESTS or SCRIPT CREEP.
As an extra bonus, standards make it easier for new team members to integrate into the team.
Implementation
Some suggestions for what you should set standards for:
Naming conventions (a sketch follows this list):
- Suites: the names should convey what kind of test cases are contained in each test suite.
- Scripts: if the names are not consistent, an existing suitable script may not be found, so a duplicate may be written.
- Keywords: it’s important that the name immediately conveys the functionality implemented by the keyword.
- Data files: it should be possible to recognize from the name what the file is for and its status.
- If you implement OBJECT MAP, the right names facilitate understanding of the scripts and enable you to change tools without having to rewrite all of your automation scripts (only the tool-specific ones).
- Test data: if possible use the same names or IDs in all data files, as this will facilitate reuse.
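As a sketch, such conventions might look like this in a keyword-driven Java framework (all names below are invented for illustration):

    // Suite name says what it contains:       CustomerAccount_Smoke_Suite
    // Script name says what it does:          CreateCustomerAccount
    // Data file name says purpose and status: customer_accounts_approved.csv

    public class CustomerAccountKeywords {

        /** Keyword: the name immediately conveys the implemented functionality. */
        public void loginAsAdmin() { /* tool-specific steps behind a stable name */ }

        /** Keyword: verb first, object second, the same pattern everywhere. */
        public void openCustomerRecord(String customerId) { /* ... */ }
    }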
Organisation of testware:
- Test Definition: Define a standard format or template to document all automated tests, for example as a standard comment block in which the information is presented in a consistent way across all tests (e.g. same titles and levels of indentation); a sketch follows this list. This should contain the following:
- Test case ID or name
- What this test does
- Materials used (scripts, data files etc)
- Set-up instructions
- How it is called (input variables if any)
- Execution instructions
- What it returns (including output variables)
- Tear-down instructions
- Length (how long it takes to run)
- Related tests
- TEST SELECTOR tags
- Any other useful information such as EMTE (Equivalent Manual Test Effort - how long this test would have taken manually)
- File and folder organisation and naming conventions
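As a sketch, such a standard comment block might look like this at the top of a test script (the field names are the ones listed above; all values are invented):

    /*
     * Test case ID:  UC07TC02_CreateOrder
     * What it does:  Verifies that a standard order can be created and confirmed
     * Materials:     CreateOrder script, order_data_approved.csv
     * Set-up:        Run reset_test_db.bat to restore the baseline database
     * Called as:     runTest(environment, browser); no other input variables
     * Execution:     Part of the nightly regression run
     * Returns:       PASS/FAIL plus the generated order ID (output variable)
     * Tear-down:     Deletes the created order via the admin interface
     * Length:        ~3 minutes
     * Related tests: UC07TC01_CreateCustomer
     * TEST SELECTOR: regression, orders
     * EMTE:          45 minutes
     */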
Other standards:
- Documentation conventions for scripts and batch files
- Coding conventions for scripts
- Data anonymization rules
- Develop a TEMPLATE TEST
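A TEMPLATE TEST can be as simple as a skeleton that every new script is copied from, so that the standard structure and documentation are in place before any test logic is written. A minimal sketch using JUnit 4 (all names invented):

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    /* Standard header block goes here (see the Test Definition template above). */
    public class TemplateTest {

        @Before
        public void setUp() {
            // Standard set-up: restore data, log in, navigate to the start state.
        }

        @Test
        public void runTest() {
            // Test steps: one logical action per line, using the shared keywords.
        }

        @After
        public void tearDown() {
            // Restore the system so that the next test starts from a known state.
        }
    }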
Other advice:
- Document the standards!
- Standards should be reviewed periodically in order to adjust or enhance them.
- Put your standards in a Wiki so that everybody can access them at any time.
- If something has to be changeable, use a translation table so that the scripts can stay stable (see the sketch after this list).
- Allow exceptions when needed.
- Setting standards for test data, for example using the test ID as the customer's name, can help to VISUALIZE EXECUTION as a way of monitoring test progress.
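The translation table can be a single lookup between the stable logical names used in the scripts and the values that are allowed to change. A sketch (names and values invented):

    import java.util.Map;

    // Scripts refer only to the stable logical names on the left; when a
    // concrete value changes, only this table is edited.
    public final class TranslationTable {

        private static final Map<String, String> TABLE = Map.of(
            "LOGIN_URL",     "https://test-env.example.com/login",
            "ADMIN_USER",    "admin_test",
            "CUSTOMER_NAME", "UC20TC04"  // test ID as customer name: test
                                         // progress is visible in the application
        );

        public static String lookup(String logicalName) {
            return TABLE.get(logicalName);
        }
    }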
Potential problems
When devising standards, it is useful to get input from a selection of people who will be using the automation, to make sure that the conventions adopted will serve all of its users well. An additional benefit of getting others involved is that they will be more supportive of something that they helped to devise. Not many people like having standards imposed arbitrarily when they can't see the reason for them.
Once you have settled on your standards, however, then you need to make sure that everyone does use them in a consistent way, and this will take effort.
Issues addressed by this pattern
CAN'T FIND WHAT I WANT
INADEQUATE DOCUMENTATION
LOCALISED REGIMES
OBSCURE TESTS
SCRIPT CREEP
TOOL-DRIVEN AUTOMATION
Experiences
Jochim Van Dorpe writes:
One day I was put on a project that had been going on for more than two years. The first tests had been automated in one of the first releases, but for the preceding months the team had only been keeping the tests 'green' without adding new ones, although development was still ongoing.
The first reason: staffing had changed dramatically over the last two months:
- The old PL was a consultant who had been replaced by a new one
- The developers were consultants who had since left
- Most of the analysts were sacked.
The second reason: the automated tests were chaotic!
- Test suites were organised per test designer/automator, not by any functional grouping.
- Naming was meaningless: test01(), test02(), ...
- No comments or documentation had been added
Those two reasons combined meant that nobody understood what the tests did or what they tested; the client, however, saw a grass-green chart every time he asked for test results.
I drew up some standards (together with the architect and an analyst) and explained them to all the automators. They contained the following simple rules:
- the lowest-level test suites (each of which translates to one Java test class) contain the tests of one use case, one data flow document, one end-to-end test, ...
- naming conventions:
- the first characters (UC/DF/E2E) define whether we are testing a use case, a data flow, ...
- followed by the number of the document
- followed by TC
- followed by the number of the case in the lowest level test suite
- followed by the identifier in the test tool (which is generated automatically)
For example, UC20TC04_abc124() is the fourth test case of use case 20, which is test abc124 in the tool.
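Under these conventions a test class might look like the following sketch (the second test and all identifiers are invented):

    import org.junit.Test;

    // Lowest-level suite: all tests of use case 20 in one Java test class.
    public class UC20Tests {

        // UC = use case, 20 = document number, TC04 = fourth case in this
        // suite, abc124 = identifier generated by the test tool.
        @Test
        public void UC20TC04_abc124() { /* test steps */ }

        @Test
        public void UC20TC05_abc131() { /* test steps */ }
    }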
A similar set of standards was made for test analysis and design, so that we could map the high-level test cases exactly onto those implemented by the automators.
They included:
- a nested structure of the test suites:
- type on the highest level
- use case, data flow, functionality or other grouping and identifying element on the lowest level
- naming conventions of the test cases
- numbering of the test cases
- a way to present the prerequisites
- a way to present the actions to be taken
- a way to present the expected results
- a test importance level for execution (ranging from 1 to 4)
- keywords to add for searching (like: automated, MSS, ...)
- some extra info like the issue, change request, ... or any other reason why the test was added
We also added a set of standards for commenting in and around the tests, so that even a non-technical person could easily understand what a certain line of test code does or what it asserts.
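A sketch of what such commented test code might look like (the shared keywords are stubbed so that the example is self-contained; everything here is invented):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CommentedTestExample {

        @Test
        public void UC20TC04_abc124() {
            // Log in as an administrator and open customer 'UC20TC04'.
            loginAsAdmin();
            openCustomer("UC20TC04");

            // Change the customer's street address and save.
            changeAddress("New Street 1");
            save();

            // Check: the overview page now shows the new street address.
            assertEquals("New Street 1", readStreetFromOverview());
        }

        // Shared keywords, stubbed for the sketch.
        private void loginAsAdmin() { }
        private void openCustomer(String id) { }
        private void changeAddress(String street) { }
        private void save() { }
        private String readStreetFromOverview() { return "New Street 1"; }
    }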
These simple rules brought the following advantages:
- readability: everybody knows what a certain test does, and how it does what it does
- maintainability: the automated tests can now be adapted by others, not only by the original automator
- Now we know what the tests test
- We have a clear view of which high level test cases are automated and which aren't
We also added similar guidelines for:
- the size and content (like ID-naming conventions) of datasets
- the number of 'things' a script or test case should contain.
If you have also used this pattern and would like to contribute your experience to the wiki, please go to Feedback to submit your experience or comment.