INCONSISTENT DATA

From Test Automation Patterns
Revision as of 16:17, 4 April 2018 by Cathal
[[Main Page]] / Back to [[Design Issues]] / Back to [[Test Automation Issues]]

Issue summary

The data needed for the automated test cases changes unpredictably

Category

Design

Examples

  • Test data is obtained by anonymising customer data. This has some major disadvantages:
    - The data must be anonymised (this can be tricky)
    - You must be able to use different IDs or search criteria for every run
    - You probably won't get exactly the data combination you need for a special test case, and if you add it, it will be overwritten the next time you get a new batch of customer data
    - Different input data means different output data, so it will be more difficult to compare results


  • Test automation shares databases with testing or development
  • After an update to the Software Under Test (SUT), the data is no longer compatible
  • Tests alter the data needed by subsequent tests

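The first example above suggests an alternative: instead of anonymising a batch of customer data, each test case can construct exactly the record it needs, with IDs made unique per run. A minimal sketch (the field names and the `build_customer` helper are illustrative assumptions, not part of the pattern catalogue):

```python
import uuid

def build_customer(run_id: str, case: str) -> dict:
    """Construct exactly the customer record a test case needs.

    Unlike anonymised production data, the record is built per test
    case, so the required data combination is always present and is
    never overwritten by the next batch of customer data.
    (Field names are illustrative assumptions.)
    """
    return {
        # Unique per run, so repeated runs never collide on IDs.
        "customer_id": f"{case}-{run_id}",
        "name": f"Test Customer {case}",
        "status": "active",
    }

# Each run gets a fresh ID suffix, satisfying "different IDs for every run".
run_id = uuid.uuid4().hex[:8]
record = build_customer(run_id, "overdue-invoice")
```

Because the input record is fixed apart from its ID, the expected output is also fixed, which keeps result comparison simple.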

Questions

How do you collect the necessary data?


Resolving Patterns

Most recommended:

  • FRESH SETUP should help you in most cases
  • DEDICATED RESOURCES: look up this pattern if you have to share resources
  • WHOLE TEAM APPROACH: if your team follows an agile development process, this is the pattern to use to avoid this kind of problem from the beginning

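The idea behind FRESH SETUP can be sketched as rebuilding the test data from scratch before each run, so that stale data left by earlier tests or SUT updates cannot leak in. A minimal sketch using an in-memory SQLite database (the schema and values are illustrative assumptions):

```python
import sqlite3

def fresh_setup() -> sqlite3.Connection:
    """FRESH SETUP sketch: recreate the test data before every run.

    Earlier tests cannot alter this data for later ones, and an SUT
    update only requires changing this one setup function.
    (Schema and values are illustrative assumptions.)
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id TEXT PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO customer VALUES ('C1', 'active')")
    conn.commit()
    return conn

# Every test starts from the same known state:
conn = fresh_setup()
rows = conn.execute("SELECT status FROM customer WHERE id = 'C1'").fetchall()
```

In a real suite this would typically run as a per-test setup step (e.g. a test-framework fixture) rather than at module level.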

Other useful patterns:

