The most important thing to learn about test automation is the difference between manual and automated tests.
Manual tests can:
- depend on each other if that makes it simpler for the tester (for instance one test creates the data for the next test)
- fail, but the tester, who knows what he or she has done, can easily describe the results. The tester can also carry on with the next test case even if the failed test left the system in an invalid state
- take different times to perform specific actions, but a tester knows when to wait and when to go on
- be performed on specific dates (testers can set and reset the date if needed)
Automated tests should:
- be written as INDEPENDENT TEST CASES so that they don't have to be executed together every time. See also PRIORITIZE TESTS, FRESH SETUP and TEST SELECTOR.
- FAIL GRACEFULLY so that the next test cases can execute normally
- have ONE CLEAR PURPOSE to make it easier to check the results. See also READABLE REPORTS, COMPARISON DESIGN, EXPECTED FAIL STATUS.
- reuse data whenever possible (see DEFAULT DATA, TESTWARE ARCHITECTURE)
- use VARIABLE DELAYS to wait until an action in the System under Test (SUT) has finished, instead of moving on immediately and losing synchronisation
- be able to perform date sensitive test cases (look up DATE INDEPENDENCE)
- be written in such a way that changing the testing tool doesn't mean starting again from scratch. The patterns to look up in this case are TOOL INDEPENDENCE and OBJECT MAP
- be maintainable, for instance by writing modular scripts: an action that is performed in many test cases is written once in a dedicated script that is called when needed, so a change affects only that one script and not the whole testware (see GOOD PROGRAMMING PRACTICES, KEYWORD-DRIVEN TESTING, SINGLE PAGE SCRIPTS). Other important patterns are MAINTAINABLE TESTWARE, DESIGN FOR REUSE and ABSTRACTION LEVELS
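The INDEPENDENT TEST CASES and FRESH SETUP ideas above can be sketched in Python. The `FakeStore` class here is a hypothetical stand-in for the SUT, not a real tool's API; the point is that each test builds its own data and makes no assumption about which tests ran before it:

```python
class FakeStore:
    """Stand-in for the System under Test: a minimal key-value store."""
    def __init__(self):
        self.data = {}

    def add(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)


def fresh_store():
    # FRESH SETUP: each test builds its own store, so no test
    # depends on data that another test happened to create.
    return FakeStore()


def test_add_customer():
    # ONE CLEAR PURPOSE: only checks that adding a record works.
    store = fresh_store()
    store.add("cust-1", "Alice")
    assert store.get("cust-1") == "Alice"


def test_missing_customer():
    # Passes whether or not test_add_customer ever executed.
    store = fresh_store()
    assert store.get("cust-1") is None
```

Because neither test reads data written by the other, they can be run in any order or selected individually (compare TEST SELECTOR and PRIORITIZE TESTS).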
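The VARIABLE DELAYS point can be illustrated with a small polling helper, a common alternative to a fixed `sleep`. The timeout and poll interval values are illustrative defaults:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    # VARIABLE DELAY: repeatedly check the SUT until the condition
    # holds or the timeout expires, instead of sleeping for a fixed
    # guessed duration and hoping the SUT is ready.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False
```

A test would call something like `wait_until(lambda: order_is_visible())` before checking results: on a fast system it continues almost immediately, on a slow one it keeps waiting up to the timeout, so synchronisation with the SUT is preserved either way.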
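DATE INDEPENDENCE can be achieved by letting the test inject the date instead of reading the machine clock inside the business logic. A minimal sketch, assuming a hypothetical invoice check:

```python
import datetime

def is_invoice_overdue(due_date, today=None):
    # DATE INDEPENDENCE: the test passes "today" explicitly, so a
    # date-sensitive case can run on any real calendar date without
    # resetting the system clock.
    if today is None:
        today = datetime.date.today()
    return today > due_date
```

The same function serves production (no `today` argument) and date-sensitive automated tests, which pass dates before and after the due date explicitly.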
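The modular-script and OBJECT MAP ideas can be sketched together. The locator strings and the `FakeDriver` below are illustrative, not any real tool's API; what matters is that locators live in one table and the login action lives in one function:

```python
# OBJECT MAP: screen-element locators collected in one place, so a
# change to the user interface is fixed here, not in every script.
OBJECT_MAP = {
    "login.user": "id=username",
    "login.password": "id=password",
    "login.submit": "xpath=//button[@type='submit']",
}

class FakeDriver:
    """Minimal stand-in for a GUI test tool driver; records actions."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))


def login(driver, user, password):
    # Modular script: every test case that needs a login calls this
    # one function, so a change to the login screen is fixed once.
    driver.type(OBJECT_MAP["login.user"], user)
    driver.type(OBJECT_MAP["login.password"], password)
    driver.click(OBJECT_MAP["login.submit"])
```

Switching testing tools then means rewriting the driver layer behind the same `login` call, not every test case, which is the point of TOOL INDEPENDENCE.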