We recently read an article on the QA Revolution website, titled “7 Great Reasons to Write Detailed Test Cases”, which claims to give “valid justification to write detailed test cases” and goes as far as to “encourage you to write more detailed test cases in the future.” We strongly disagree with both the premise and the “great reasons”, and we’ll argue our counter-position in a series of blog posts.
Our first blog covered the article’s claims around test planning and our second those on offshore testing teams.
In this third and final part of our series, our focus now turns to the points made in the article around training.
Training: I have found that it is extremely helpful to have detailed test cases in order to train new testing resources. I typically will have the new employees start understanding how things work by executing the functional test cases. This will help them come up to speed a lot faster than they would be able to otherwise.
Let’s go through the assertions made by these statements.
I have found that it is extremely helpful to have detailed test cases in order to train new testing resources.
We note that, again, our experience suggests the exact opposite. While this probably seems like a sound idea – and Lee also once advocated for such an approach – it soon became clear that there were significant downsides to driving learning via an existing test case library, including:
- Different people learn in different ways and following written instructions is not a learning style that works well for everyone.
- Performing testing by following detailed test cases is boring. Some of the key drivers of learning – such as genuine engagement and curiosity – are dampened or obliterated by simply following instructions in this way.
- Tacit knowledge held by the test case author rarely makes it into the written steps, leaving big gaps and unclear instructions when the test case comes to be executed by another person (even one of similar experience).
- When following detailed instructions, the ability to observe and memorise is severely compromised – you’ll have no doubt experienced this when driving to a destination using GPS as compared to when you follow signs and landmarks to reach the same destination. Mental capability is used up trying to follow a map rather than learning and navigating the terrain, and following written instructions is mentally tiring (perhaps partly due to suppressing the innate human desire to explore and learn rather than living to a script).
I typically will have the new employees start understanding how things work by executing the functional test cases. This will help them come up to speed a lot faster than they would be able to otherwise.
Paul has, for some years now, used exploratory models to train people new to the software being tested. This enables them to use their curiosity while learning how things work. Following other people’s directions (via detailed test scripts) is simply following a map, which invites confusing the map for the terrain. Due to their detailed nature, such test cases quickly become out of sync with the product as it is developed, and Lee has seen many instances of testers new to a team trying to use such test cases and becoming very confused by the inevitable mismatches between the test case and the reality of the product.
A further observation of ours is that when testers learn through exploration, they ask a lot of questions. As they get feedback on their questions, they are also getting constructive feedback on the quality and relevance of those questions. This helps new testers to practice framing important questions about the software, their approach to testing it, their current lack of knowledge and potential areas of system risk. These are all attributes that help to create an excellent tester.
We’d like to point out that following instructions and understanding are not the same thing. Rote learning of the software produces a “one-dimensional” view, as you are only following one-way paths. In reality, software testing is often more like a freeway with multiple lanes, off-ramps, on-ramps, potholes and barricades. You need all your senses available to you to understand the terrain, spot signs of potential trouble and get them repaired before your customers are troubled by them. Notice that, while we have a focus on training and learning, we are doing this in the context of system testing and potentially uncovering new sources of risk. This more holistic approach to training is a much closer approximation to what we believe good testers do when testing.
We note that the article’s author suggests that the tester will “come up to speed a lot faster than they would be able to otherwise”, but there are no alternative ways of “coming up to speed” offered against which to compare. Our experience of trying to force learning via following existing test cases is that the resulting understanding is shallow and what might look like a good level of understanding of the software is later revealed to be quite poor when it comes to finding deeper, more important issues.
Summing up our views
In our opinion, while you could use detailed test cases as a training and learning tool, our experience suggests that this approach is neither engaging nor effective compared to allowing the tester to learn through exploration, support and questioning.
If you’ve read all three blogs in our case against test cases, you have probably come to the conclusion that we really do not agree with the assertions made by the article we’re responding to. Detailed test cases, in our view, provide very few advantages and a lot of disadvantages. It’s hard to support any approach that reduces a tester’s time interacting with the software and asks them to detail what they should test based on a specification that will change and render many test cases pointless. Testers are intelligent people (at least the ones we know well) with boundless curiosity and an appetite for exploring and asking questions. Asking them to suppress these talents in favour of following detailed test cases is a massive disservice to testers. If the context you are engaged in demands detailed test scripts, well, that sucks, but at the end of the day you’re stuck with it. However, there is no reason why you can’t actively advocate for better approaches and seek to run small experiments that slowly move your testing away from detailed test scripts.
Our suggestions for further reading:
- James Bach/Aaron Hodder “Test Cases Are Not Testing” https://www.satisfice.com/download/test-cases-are-not-testing
- Michael Bolton “The Test Case Is Not The Test” https://www.developsense.com/blog/2017/02/the-test-case-is-not-the-test/
- Michael Bolton’s “Breaking The Test Case Addiction” blog series, e.g. https://www.developsense.com/blog/2019/12/breaking-the-test-case-addiction-part-8/
3 replies on “The case against detailed tests cases (part three)”
Note that there is an equivalence between detailed test cases and test automation. Shouldn’t the criticism of test cases also apply to test automation?
It might also be interesting to review any published insight on testing from development thought leaders. Is writing test cases worse than an absence of testing?
Hi Nilanjan, thanks for taking the time to read our blog and to send through feedback. For this blog series we focused only on testing performed by humans. I dare say at some point we will blog on automation and explore any equivalence. I suspect we both believe that there are a lot of bad automation practices which need to be opened up for discussion (although others blog quite a bit about this problem). The “writing test cases vs an absence of testing” question is an interesting thought, and you have phrased it in a way that is open to interpretation and exploration. Will we blog on it? Not sure, but it will be a discussion point for us.
Again, thanks for reading our blog and taking the time to offer your thoughts.
Lee and Paul