Discussions and Challenges
Here are some interesting comments provided (by email, Apr 2013?) by Gerard Meszaros, author of "xUnit Test Patterns: Refactoring Test Code" (see References), and some email discussion that followed.
1) You are trying to boil the oceans. You are trying to address a huge scope. Each of the 5 Patterns areas could be a separate book!
2) Your "patterns" are "practices" written in pattern form, but they are not a true pattern language. Why? There are very few alternate patterns to choose from to solve any particular problem, and there is no discussion of when you should use one alternative pattern over another. I have no doubt that these solutions worked for you and your co-authors on several different projects. But does that really imply that no other way would be a better solution in a particular context?
My biggest learning when writing my book came when I tried to explain in which situation(s) one should use the (anti)patterns that other people preferred and I eschewed. This forced me to understand the forces that drive the decision. You'll notice that each of the patterns in my book has several alternatives that each solve the same named problem (see the Problem-Solution summary on the inside back cover).
Responses and more discussion:
- Thanks a lot for your comments on our Test Automation Patterns Wiki. I have considered them for the last few days and I think that you are right, but also wrong. About trying to boil the oceans (what a wonderful expression!): you are right that it’s a lot of stuff, but automation of black-box tests has to face problems on all these kinds of issues, and it wouldn’t be much help for testers and test automators to have to look for answers in lots of different books.
To fully cover this topic (since you aren't afraid of "boiling the ocean"), you should include sections (books?) on "Agile Test Automation" and "Test Automation on Agile Projects".
The former is applying agile techniques to the test automation project: things like prioritizing the test cases (or better, the test automation capabilities/keywords which enable specific test cases) you want to automate, then automating them one by one in priority order.
The latter is how we automate tests on agile projects using ATDD (AKA "Executable Specification" or "Specification by Example") tools and integrating the test automation into the same sprint & team as the corresponding product development work.
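To make the ATDD idea concrete, here is a minimal sketch of "Specification by Example" in plain Python. Real projects would use a dedicated tool (e.g. FitNesse or Cucumber); the discount rules, function name, and example table below are purely hypothetical, chosen only to show the shape of an executable specification:

```python
# Minimal "Specification by Example" sketch: business-readable examples
# drive an automated check. Everything here is hypothetical illustration.

def apply_discount(order_total, customer_type):
    """Toy domain logic under test (hypothetical rules)."""
    if customer_type == "gold":
        return order_total * 0.90
    if customer_type == "silver":
        return order_total * 0.95
    return order_total

# The "executable specification": a table of examples that business
# stakeholders can read, and that the automation executes as tests.
EXAMPLES = [
    # (order total, customer type, expected price)
    (100.0, "gold",   90.0),
    (100.0, "silver", 95.0),
    (100.0, "new",   100.0),
]

def run_spec():
    """Execute every example row; fail loudly on any mismatch."""
    for total, ctype, expected in EXAMPLES:
        actual = apply_discount(total, ctype)
        assert abs(actual - expected) < 1e-9, (total, ctype, actual)
    return len(EXAMPLES)
```

The point of the pattern is that the example table is the specification: adding a row both documents a rule and extends the automated test suite.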
- And you are right that our “patterns” are actually “best practices”, but all the same they haven’t been structured and catalogued yet, so most people who start test automation fall into each and every “hole”, every one of them “reinventing the wheel” on their own!
- You are correct that we don’t usually suggest different solutions for different contexts (yet). I started to write the book mainly from my own experiences, and they cover only a small slice of applications. In fact, that was the main reason why I wanted to have Dorothy on board: she knows everybody, and so I hope that we will be able to contact people with other experiences, which will enable us to flesh out our “patterns”.
My point is that there are likely many other "patterns" that you don't include, so it's not just a matter of "fleshing out" the current patterns. The missing patterns are ones that would help people just as much or more than the current set, which is based on the context "Test automation is done by a different team (from development) in a traditional (waterfall) way and is done mostly after the product is built." The challenges of this approach are many (hence all the "holes" you mention earlier). Many of those challenges are avoided by changing the approach and thereby entering a different context (the whole-team, highly incremental approach).
Our "patterns" are a work in progress and I hope that you will have again time to look at them when we have added some more "meat"!
I'd be happy to look at them once they are developed further.
New topic in discussion - Jon Hagar - Missing Patterns
So I don't see several patterns I might have expected for automation. For example:
1. Data creation
2. Test case data selection (e.g. combinatorial, normal case, worst case, stress case, soap opera, etc.)
3. Test data mutation
4. Model-based testing patterns (there is a general pattern listed, but I might expect a number of these as sub-patterns, right?)
5. Patterns for some specific domains (e.g. database, web, embedded, mobile, etc.)
So, rather than just starting to add these, I am looking for some feedback as to whether anyone sees these as patterns in automation. There are tools that can do or help with all of these, so they are part of automation; but given the current categories of this wiki, I am not sure where I'd put them either.
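As an aside, item 2 above (test case data selection) is the kind of pattern that lends itself to a small illustration. The Python sketch below contrasts exhaustive combinatorial selection with a naive greedy pairwise reduction; the parameter names and values are made up for illustration, and real all-pairs tools produce considerably smaller sets than this greedy pass:

```python
# Combinatorial test-data selection sketch (hypothetical parameters).
import itertools

params = {
    "browser": ["Firefox", "Chrome"],
    "os":      ["Windows", "Linux", "macOS"],
    "locale":  ["en", "de"],
}

# Exhaustive selection: every combination (2 * 3 * 2 = 12 test cases).
exhaustive = list(itertools.product(*params.values()))

def pairwise(combos):
    """Greedy pairwise reduction: keep a combination only if it covers
    at least one parameter-value pair not yet seen. Not optimal, but it
    shows the idea behind all-pairs selection."""
    covered, selected = set(), []
    for combo in combos:
        # Pairs are tagged with their parameter position so that values
        # from different parameters never collide.
        pairs = set(itertools.combinations(enumerate(combo), 2))
        if pairs - covered:
            covered |= pairs
            selected.append(combo)
    return selected

reduced = pairwise(exhaustive)
```

Every pair of parameter values still appears in `reduced` at least once, with fewer test cases than the full product; that coverage-per-cost trade-off is what a data-selection pattern would need to discuss.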
I guess some of the patterns you mention could be useful here (I'm thinking of 1 or 2), but I don't think that they are specific to test automation. Still, you have a point: where do we draw the line? On the other hand, as far as I know there are no Test Patterns as such yet, so we could start another wiki ;-))
Patterns for Model-Based Testing or other domains are not my speciality; this is one of the areas where I hope you guys will help out.
Dot: Are these test patterns more than automation patterns?
Well, many of the patterns on the wiki now are not specific (in name) to test automation, but the "solution pattern details" are specific to using automation concepts. So I think at some point (I am leaving on a long biz trip), I will try to create patterns for at least 1 and 2, and maybe parts of 5. Test automation in embedded may be different, since embedded projects often use extensive hardware-in-the-loop labs. Can we reference other sites?
For the wiki you can reference whatever you want; for the book, we will see later.
Jerry Durant (TECTL1):
It's interesting with test patterns, but there are two opposing situations. On one side are the 'test patterns associated with the use of test automation (inclusive of test processes)', and on the other hand the 'test patterns associated with the system(s) under test' (in other words, where to focus your efforts, whether manually or through automation). Systemically, the application of a particular test practice is based on the premise of focusing on areas of known defects, created as a result of known failure behaviors (e.g. programmatic skill level, design adherence, requirements fluctuation, etc.). These are best referred to as root causes, but they also reflect patterns of behavior whose consequences are undesirable results. Based on these situations, one then has to decide which method might be most appropriate to get at and measure the potential level of occurrence, and not just mere existence. Sometimes this can only be had by automation; in other cases automation can in fact mask the condition and give a false-positive result.
Dot: I think this is more to do with test patterns rather than automation patterns. For example, deciding to test more where known defects have already occurred would be a great test pattern (I would call it "Bugs are Social Creatures") but in this wiki we are focusing on automation patterns. Deciding when to apply automation is an automation pattern - we have AUTOMATE GOOD TESTS and AUTOMATE WHAT'S NEEDED which go some way towards that.