DATA-DRIVEN TESTING
Pattern summary
Write the test cases as scripts that read their data from external files
Category
Design
Context
One of the most widely used patterns for building modular, long-lasting automation scripts
Description
Write the test cases as scripts that read their data from external files. In this way you have only one script to drive the tests, but by changing the data you can create any number of test cases. The appeal is that when you have to update the script because of a change in the Software Under Test (SUT), you usually don't have to change the data as well, so maintenance effort stays low.
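As a minimal sketch of the idea in code (the CSV layout, the login scenario, and the sut.login call are illustrative assumptions, not part of the pattern itself):

 import csv

 # login_tests.csv holds one test case per row, e.g.:
 # username,password,expected_result
 # alice,secret123,success
 # alice,wrong-pass,failure

 def run_login_tests(sut, data_file="login_tests.csv"):
     """One script drives all the tests; the data file defines the cases."""
     with open(data_file, newline="") as f:
         for row in csv.DictReader(f):
             actual = sut.login(row["username"], row["password"])
             assert actual == row["expected_result"], (
                 f"login as {row['username']}: got {actual}, "
                 f"expected {row['expected_result']}")

Adding a test case is now a one-line change to the data file; a change to the login dialog touches only the script.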
Implementation
You write a script with variables whose content is read sequentially from a file such as a spreadsheet. Every line in the file delivers the data for a different test case.
An easy way to implement this pattern is to use CAPTURE-REPLAY to capture the tests initially. The captured test will have constant data (i.e. specific test inputs for every field). You can then replace these constants with variables whose values are read from an external file.
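To make that refactoring step concrete, here is a hedged before-and-after sketch (the ui.type_into and ui.click calls stand in for whatever API your capture-replay tool generates; they are assumptions for illustration):

 # As captured: the tool hard-codes the recorded inputs.
 #   ui.type_into("edtName", "Smith")
 #   ui.type_into("edtAmount", "100")
 #   ui.click("btnOK")

 # Data-driven: the same steps, with the constants replaced by
 # fields read line by line from an external file.
 import csv

 def run_captured_test(ui, data_file="cases.csv"):
     with open(data_file, newline="") as f:
         for row in csv.DictReader(f):   # one row = one test case
             ui.type_into("edtName", row["name"])
             ui.type_into("edtAmount", row["amount"])
             ui.click("btnOK")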
Potential problems
If your data is spread across more than one data file, you must make sure that the script and the data are correctly matched.
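One way to guard against such a mismatch is to have the script validate each data file's header against the fields it expects before running any tests. A sketch, assuming CSV data files with a header row (the field names are hypothetical):

 import csv

 EXPECTED_FIELDS = {"name", "amount", "expected_result"}   # fields the script reads

 def check_data_matches_script(data_file):
     """Fail fast if the data file lacks fields the script needs."""
     with open(data_file, newline="") as f:
         header = set(csv.DictReader(f).fieldnames or [])
     missing = EXPECTED_FIELDS - header
     if missing:
         raise ValueError(f"{data_file} is missing fields: {sorted(missing)}")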
Issues addressed by this pattern
BRITTLE SCRIPTS
REPETITIOUS TESTS
Experiences
Derek Bergin has some good advice, based on his experience, for selecting and managing test data - thanks!
Derek says:

Selection of test data
A case can be made for a 3-tier system when selecting test data.
For ‘smoke tests’, simply providing known good data is probably sufficient – after all, you’re just trying to prove that the build isn’t fundamentally broken.
For regression tests, each variable should have successive entries that fall into the following categories: typical, just short of the limit, on the limit, and over the limit. The limit can be a field size, an input value range, etc. This is still testing on only a single axis, though.
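For instance, for a field with an assumed 20-character limit, the four categories for one variable might look like this (a hypothetical illustration):

 FIELD_LIMIT = 20   # assumed maximum field length in the SUT

 boundary_values = {
     "typical":    "Smith",                   # well inside the limit
     "just_short": "x" * (FIELD_LIMIT - 1),   # 19 characters
     "on_limit":   "x" * FIELD_LIMIT,         # 20 characters
     "over_limit": "x" * (FIELD_LIMIT + 1),   # 21 characters: should be rejected
 }

Each of these values becomes one entry for that variable in the data file.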
For more complete testing you should attempt to have multiple failures happening at once. Using pairwise testing you should be able to set up, fairly easily, a test sequence that covers every pair of failure points. Warning – this level of testing can take some time to run. I have had some very interesting cases of fault recovery routines ‘colliding’ when faced with this type of testing, and it’s the sort of thing that drives support crazy when it’s encountered in the field.
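As a rough sketch of how such a sequence can be generated (a naive greedy all-pairs generator; dedicated pairwise tools produce considerably smaller sets, and the parameter values below are invented for illustration):

 from itertools import combinations, product

 def pairwise_cases(parameters):
     """Pick test cases until every pair of parameter values
     has appeared together in at least one case."""
     uncovered = {((a, va), (b, vb))
                  for a, b in combinations(parameters, 2)
                  for va in parameters[a]
                  for vb in parameters[b]}
     cases = []
     for values in product(*parameters.values()):
         case = dict(zip(parameters, values))
         newly = {((a, va), (b, vb)) for ((a, va), (b, vb)) in uncovered
                  if case[a] == va and case[b] == vb}
         if newly:                  # keep only cases that cover something new
             cases.append(case)
             uncovered -= newly
         if not uncovered:
             break
     return cases

 params = {"browser": ["Chrome", "Firefox"],
           "os":      ["Windows", "Linux"],
           "db":      ["MySQL", "Oracle", "None"]}
 print(pairwise_cases(params))   # covers all 16 value pairs in fewer rows than the 12-case full product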
Data Management
Once you move away from the smoke test level of testing, it becomes important to be able to manage the data sets you are using. Failure combination data sets can be large and may well be specific to a particular build and its limits. Similarly, you may well have customer-specific data sets which have to be validated at User Acceptance Testing. Your choice of test tools and framework should be informed by this requirement. At the very least, you should expect the ability to manipulate the data in a spreadsheet-like grid and to link the data files to a specific test cycle/revision.
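One lightweight approach is a version-controlled manifest that ties each data set to the build and test cycle it belongs to (the file and field names here are assumptions for illustration):

 import csv

 # manifest.csv, kept under version control next to the scripts:
 # data_file,sut_build,test_cycle
 # failure_combos_v3.csv,2.4.1,regression-aug
 # customer_acme.csv,2.4.1,uat-acme

 def data_files_for(build, cycle, manifest="manifest.csv"):
     """Return the data sets registered for a given build and test cycle."""
     with open(manifest, newline="") as f:
         return [row["data_file"] for row in csv.DictReader(f)
                 if row["sut_build"] == build and row["test_cycle"] == cycle]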
Example
Seretta:
In my company we use a variation of KEYWORD-DRIVEN TESTING, but with an easy trick we get the same advantages as in DATA-DRIVEN TESTING: for any one (DRIVER-)script we can write any number of (DATA-)scripts. We achieved this by substituting the data in the (DRIVER-)script with variables. To show how this works, here are some extracts from the scripts that we use to test our own test automation framework.

In the DRIVER-script we have for instance:

 …….
 GOTO,FTestSuite
 INPUT,edtDirectory,<DRIVERDirName>
 SELECT,Button,<ConfirmSelectionDRIVER>
 …….

The words in the angle brackets represent the variables. In one of the corresponding DATA-scripts you would find the following data:

 …….
 <Priority>,High
 <TestType>,Automatic
 <ButtonDRIVER>,btnDRIVERDir
 <SelectDirDRIVER>,TSelectDirDlg
 <DRIVERDirName>,c:\General\Data\ScriptData
 <ConfirmSelectionDRIVER>,btnOK
 …….
If you have also used this pattern and would like to contribute your experience or comments to the wiki, please go to Feedback.