TOOL MUSHROOMING

From Test Automation Patterns

Failure Pattern summary

A small test tool evolves into a full test framework without ever going through proper scoping and architecture phases.

This failure pattern has been added by Michael Stahl. Failure patterns are also called "anti-patterns", as they are things you shouldn't do. They are Issues in this wiki.

Context

Look out for this failure pattern when you develop test tools in-house.

Description

A tool created by a single tester to solve a simple automation goal is augmented and patched to support more and more features – including test management features that are generic (not related to the core technology of the developing organization).
The evolution from utility to full-fledged framework goes through different stages. The problems that eventually turn up require different countermeasures.

  1. Small and Localized
    A single tester, who has some programming or scripting skills, gets tired of re-running the same manual regression tests week after week. In her spare time, the tester writes some code to automate parts of her workload. Magically, a set of manual commands that took an hour to execute now runs in less than a minute.
    The result is a simple, small automation tool, targeted at one very specific task.
    The tester is naturally (and justifiably) proud of this achievement, and shows the results to the other team members. Although this is a good thing, it will not scale up to fuller automation unless action is taken now.
    Potential problems/Characteristics (what signals to look for to know that an automation effort has reached this stage):
    - The tool creator is the tool’s user (single user)
    - The tool is usually created unofficially as a personal initiative; many times it is “skunk works” that no one discussed or approved
    - Keywords (look for these in status reports or water-cooler conversations):
  2. Generalization
    Team members quickly realize how this solution applies to their own daily work and ask the initiator to help them achieve the same: “if you add this simple capability, I will be able to use your tool as well!”
    It’s flattering to get these requests and they are usually easy to implement. Our tester implements some additional capabilities and more team members can now use the tool.
    By now the tool is a bit more complicated but is still small enough to be supported by the original writer without a noticeable impact on the daily deliveries: instead of running manual tests, the time is put into the automation tool. The tests themselves run automatically, so less time is needed to get the work done.
    Management is quite happy with this development: Test Automation was something that was always on the to-do list, and its grassroots emergence is delightful. It looks good in the lab and it definitely looks good in the monthly reports. This too is good, but the seeds of future problems are often sown here.
    Potential problems/Characteristics (what signals to look for to know that an automation effort has reached this stage):
    - The tool serves more than one feature[2]
    - There are multiple users for the tool, but still a single owner/developer
    - Maintenance and development of the tool takes >25% of the owner’s time
    - There is an Automation Web Site[1]
    - Keywords:
  3. Staffing
    Life goes on: more code is added to the tool, more users (testers) come on board, and more features are covered. At this point, completing a test cycle on time depends on the automation tool.
    And it does happen that sometimes the tool reports incorrect failures, or that a new release is so buggy that it blocks everyone from getting much work done.
    The tool’s author is increasingly busy with automation work and has a hard time meeting her commitments to the test cycle. Worse, many requests for added capabilities are delayed, since there is only so much a single person can do.
    Eventually, management realizes that this automation thing is too important to remain the side job of a single person. Additional heads are added and a new “test automation team” is officially created to continue the development of the automation tool.
    Potential problems/Characteristics (what signals to look for to know that an automation effort has reached this stage):
    - There are requests for additional manpower to work on automation
    - A test automation team exists or is in the process of being formed
    - Automation face-to-face meetings take place[3]
    - Tool-related issues delay the test execution cycles (this is an indication the tool is becoming complex and brittle)
    - Keywords:
  4. Development of Non-Core Features
    As more capabilities are added to the tool, and more tests are automated, it becomes clear that some test-management capabilities are needed in order to take full advantage of the automated tests.
    Tests must be grouped together into test cycles; pass/fail results need to be collected and reported efficiently; automating the logistics of assigning test cases to test machines emerges as a dire need.
    Additionally, testers ask for generic features: things that are not related to a specific technology, but to test automation in general. For example: “When a test fails, run it again on another system”; “Implement a timeout, so that when a test is stuck, the system aborts the test and moves on”.
    Code is written to automate the installation and configuration of the application under test.
    Code is written to allow links between tests (“if this test fails, mark these tests as blocked”).
    More and more of the automation team’s time is invested in developing a Test Framework: code that manages test cases and test execution, rather than code that automates actual test cases. This is code that addresses something other than your Core Technology. Additionally, the automation team puts a rather large effort into keeping the system running, fixing bugs and solving the problems that lead to false fails. This is where problems not addressed earlier will "come home to roost". (A sketch of this kind of generic framework plumbing appears after the list of stages.)
    Potential problems/Characteristics (what signals to look for to know that an automation effort has reached this stage):
    - Much of the development effort is going into developing generic features (test-case management, test cycle management, data collection features)
    - The system creates enough false fails to be a concern
    - A lot of time is spent on analysis of test logs
    - Keywords:
  5. Overload
    By now, you have a large testing framework that was developed internally. The framework is central to the daily life of your test and development organizations but suffers from many problems; most of the automation team’s time is spent on keeping the system running, instead of developing new capabilities.
    In fact, so many people are so unhappy with the system that they start blaming it for all kinds of problems, some of which it has nothing to do with. It becomes clear that localized fixes won’t do: the system needs to be redesigned from the ground up.
    Potential problems/Characteristics (what signals to look for to know that an automation effort has reached this stage):
    - The automation team suffers from maintenance & logistics overload
    - Users and customers overplay the system’s limitations
    - The system loses credibility: test failures are suspected to be a test problem, not a product problem, and manual reproduction is required as a standard procedure
    - Some engineers start developing their own stage-1 initiatives to solve their specific problems
    - Keywords:
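
To make stage 4 concrete, here is a minimal sketch of the kind of generic, non-core plumbing such a framework accumulates: retry on failure, a timeout for stuck tests, and “blocked” links between tests. It is purely illustrative; the test names, the retry and timeout policy, and the one-script-per-test layout are assumptions, and the “run it again on another system” request is simplified to a local retry.

    import subprocess

    # Illustrative sketch only: test names, policy values and the
    # one-script-per-test layout are assumptions, not from this article.
    TIMEOUT_SECONDS = 300   # "implement timeout": abort a stuck test and move on
    MAX_ATTEMPTS = 2        # "when a test fails, run it again" (local retry here)

    # "if this test fails, mark these tests as blocked"
    BLOCKS = {"test_login": ["test_checkout", "test_profile"]}

    def run_test(name):
        """Run one test script, retrying on failure and enforcing a timeout."""
        for _ in range(MAX_ATTEMPTS):
            try:
                proc = subprocess.run(["python", name + ".py"],
                                      timeout=TIMEOUT_SECONDS)
                if proc.returncode == 0:
                    return "pass"
            except subprocess.TimeoutExpired:
                pass  # a hung test counts as a failure; retry it
        return "fail"

    def run_cycle(tests):
        """Run a cycle in list order; dependents must come after their blockers."""
        results = {}
        for name in tests:
            if results.get(name) == "blocked":
                continue  # a test this one depends on already failed
            results[name] = run_test(name)
            if results[name] == "fail":
                for blocked in BLOCKS.get(name, []):
                    results[blocked] = "blocked"
        return results

    if __name__ == "__main__":
        print(run_cycle(["test_login", "test_checkout", "test_profile"]))

Note that nothing in this sketch exercises the product itself: it is exactly the test-management layer the pattern warns about, and every line of it must now be maintained alongside the real tests.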


We have now come "full circle" and are back at Stage 1, Small and Localized. Unless the resolving patterns are actioned early in the cycle, it will repeat itself.

Category

Failure

Experiences

[1] A web site being used is a good signal that more than one person is using the tool; it shows there is a need for disseminating information or for an update-delivery mechanism. If the tool is still used by only one person, why would you need a web site?

[2] As long as a tool serves a single feature, there is no need for common libraries of code. Once more than one feature is served, it makes design sense to collect the code that is used by all features into a common library (see the sketch below).
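
As a small illustration of this point, the code shared by two feature suites moves into one common module. Module, function and feature names here are hypothetical:

    # common/testlib.py: a hypothetical shared module collecting the code
    # that every feature suite needs (all names are illustrative only).

    def setup_device(serial):
        """Prepare the unit under test (stub)."""
        print(f"setting up device {serial}")

    def collect_logs(serial):
        """Pull logs for later analysis (stub)."""
        print(f"collecting logs from {serial}")

    # audio_tests.py and video_tests.py would then both begin with
    #     from common.testlib import setup_device, collect_logs
    # instead of each carrying its own copy of these helpers.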

[3] In geo-dispersed organizations, automation developers are spread across geographical locations, and it is common for the isolated teams to have different opinions on how to proceed (which programming language to use, design decisions, etc.). When the argument heats up, a common solution is to “get them in a room and don’t leave until you see white smoke”, that is, a face-to-face (F2F) meeting. Thus, calling an F2F is firstly an indication that many people are now involved with automation, and possibly an indication of friction between automation engineers across geographical locations.

If you have also experienced this failure pattern (aka anti-pattern), go to Feedback to provide a brief story of what happened, and if and how you were able to recover from it (showing links to any patterns from this wiki).
