The most important thing to learn about test automation is the difference between manual and automated tests.
Manual tests can:
- depend on each other if that makes it simpler for the tester (for instance one test creates the data for the next test)
- fail without blocking the session: the tester, knowing what he or she has done, can easily describe the results and can carry on with the next test case even if the failed test left the system in an invalid state
- take a varying amount of time for specific actions, because the tester knows when to wait and when to go on
- be performed on specific dates (testers can set and reset the date if needed)
Automated tests should:
- be written as INDEPENDENT TEST CASES so that they don't have to be executed together every time. See also PRIORITIZE TESTS, FRESH SETUP and TEST SELECTOR (a sketch of an independent test case with a fresh setup that also fails gracefully follows after this list).
- FAIL GRACEFULLY so that the next test cases can execute normally
- have ONE CLEAR PURPOSE to make it easier to check the results. See also READABLE REPORTS, COMPARISON DESIGN, EXPECTED FAIL STATUS.
- reuse data whenever possible (see DEFAULT DATA, TESTWARE ARCHITECTURE)
- use VARIABLE DELAYS to wait until some action in the System under Test (SUT) is finished, instead of immediately moving on and thereby losing synchronisation (a sketch of a polling wait follows after this list)
- be able to perform date-sensitive test cases regardless of the current date (look up DATE INDEPENDENCE; a sketch of an injectable date follows after this list)
- be written in such a way that changing the testing tool doesn't mean starting again from scratch. The patterns to look up in this case are TOOL INDEPENDENCE and OBJECT MAP (a sketch of an object map follows after this list)
- be maintainable, for instance by writing modular scripts: an action that is performed in many test cases is written once in a dedicated script that can be called wherever it is needed, so a change affects only that one script and not the whole testware (see GOOD PROGRAMMING PRACTICES, KEYWORD-DRIVEN TESTING, SINGLE PAGE SCRIPTS; a keyword-driven sketch follows after this list). Other important patterns are MAINTAINABLE TESTWARE, DESIGN FOR REUSE and ABSTRACTION LEVELS.
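The first sketch below (in Python, chosen only for illustration) shows one way an INDEPENDENT TEST CASE with a FRESH SETUP can also FAIL GRACEFULLY: each test builds and removes its own data, so a failing test cannot corrupt the starting state of the next one. The OrderApi class, its methods and the test data are assumptions invented for the example, not part of any real SUT.

```python
import unittest


class OrderApi:
    """Stand-in for the real System under Test client (an assumption for this sketch)."""

    def __init__(self):
        self.orders = {}

    def create_order(self, order_id, item):
        self.orders[order_id] = item

    def delete_order(self, order_id):
        self.orders.pop(order_id, None)

    def get_order(self, order_id):
        return self.orders.get(order_id)


class CreateOrderTest(unittest.TestCase):
    def setUp(self):
        # FRESH SETUP: every test starts from a known, clean state of its own.
        self.api = OrderApi()
        self.api.create_order("T-1", "book")

    def tearDown(self):
        # FAIL GRACEFULLY: the cleanup runs even when the test fails,
        # so the next test case can execute normally.
        self.api.delete_order("T-1")

    def test_order_is_retrievable(self):
        # ONE CLEAR PURPOSE: a single, focused check per test case.
        self.assertEqual(self.api.get_order("T-1"), "book")


if __name__ == "__main__":
    unittest.main()
```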
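The next sketch illustrates a VARIABLE DELAY: instead of a fixed sleep, which is either too short (losing synchronisation) or too long (wasting time), the script polls a condition and moves on as soon as the SUT is ready. The function name, the timeout values and the commented usage are assumptions, not the API of any particular tool.

```python
import time


def wait_until(condition, timeout=30.0, poll_interval=0.5):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True            # the SUT is ready: carry on immediately
        time.sleep(poll_interval)  # not ready yet: wait a little and retry
    raise TimeoutError(f"SUT did not reach the expected state within {timeout} s")


# Hypothetical usage: wait for a report file that the SUT produces asynchronously.
# wait_until(lambda: os.path.exists("report.pdf"), timeout=60)
```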
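For DATE INDEPENDENCE, one common approach (sketched below under the assumption that the code under test can receive the "current" date as a parameter) is to inject the date instead of reading the system clock inside the business rule, so a date-sensitive test case gives the same result whenever it is run. The overdue-invoice rule is an invented example.

```python
from datetime import date


def is_overdue(due_date, today=None):
    """Business rule under test; `today` can be injected by a test case."""
    today = today or date.today()
    return today > due_date


# Production code uses the real clock:
print(is_overdue(date(2000, 1, 1)))  # True on any day after 2000-01-01
# A test case pins the date explicitly, so it is repeatable on any day:
assert is_overdue(date(2018, 5, 5), today=date(2018, 5, 4)) is False
assert is_overdue(date(2018, 5, 5), today=date(2018, 5, 6)) is True
```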
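The object map sketch below supports TOOL INDEPENDENCE: test scripts refer to GUI objects by logical names, and only this one map knows the tool-specific locators, so swapping the automation tool means replacing the map (and the small lookup helper), not rewriting every script. The locator strings and the `find` helper are illustrative assumptions, not the syntax of any particular tool.

```python
# Logical name -> tool-specific locator. Only this table changes when the tool changes.
OBJECT_MAP = {
    "login.username": "id=user-name-field",
    "login.password": "id=pass-field",
    "login.submit": "xpath=//button[@type='submit']",
}


def find(logical_name):
    """Translate a logical name into whatever the current tool needs to locate the object."""
    strategy, value = OBJECT_MAP[logical_name].split("=", 1)
    # A real implementation would hand `strategy` and `value` to the
    # automation tool's own lookup call at this point.
    return strategy, value


# Test scripts only ever use the logical names:
print(find("login.submit"))  # ('xpath', "//button[@type='submit']")
```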
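Finally, a minimal KEYWORD-DRIVEN TESTING sketch: each reusable action lives in exactly one function (a modular script), and a test case is just data, a sequence of keywords with their arguments. A change to, say, the login procedure then touches only the `login` function. All keyword names, arguments and the printed actions are assumptions made up for the example.

```python
def login(user, password):
    print(f"logging in as {user}")  # would drive the SUT in a real suite


def create_order(item):
    print(f"creating an order for {item}")


def check_order_count(expected):
    print(f"checking that there are {expected} order(s)")


# One place that maps keywords to the modular scripts that implement them.
KEYWORDS = {
    "login": login,
    "create order": create_order,
    "check order count": check_order_count,
}

# A test case expressed as data: keyword plus arguments.
TEST_CASE = [
    ("login", ("alice", "secret")),
    ("create order", ("book",)),
    ("check order count", (1,)),
]

for keyword, args in TEST_CASE:
    KEYWORDS[keyword](*args)
```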