How we do pre-release testing on our Kanban team

Scrum master Dirk describes how he gets the most value out of his testing effort.

Today, a friend working at a different company e-mailed me this question:

"Our tester doesn't like the PQR (product quality report) format suggested by the QA manager. He is looking for another format that he'd like better. Perhaps you have some ideas?"

This got me thinking. In my testing work, I don't use test cases or test scripts. By that I mean: there is no document specifying any kind of 'click here, click there' steps. I would consider that 'output checking', not testing. (Read James Bach's blog for more about testing vs. checking: it is an excellent read.) Repeating the same script over and over again provides very little value: it only tells you that that specific path through the product still works. It will not give you information about all kinds of unexpected behaviour in your product. So don't judge the health of your product on scripts.

We do exploratory testing before we release. In this testing, we use a very general checklist which serves as a reminder of the focus areas in the product. For lack of a better name we call this the 'testing script' (mostly because the name 'checklist' feels too mechanical). In this blog post I'll describe how we use this document to get the most value out of our testing effort, and how we get away with having no formal product quality report.

General outline of our 'testing script'

Our 'testing script' is a general description of the main functions or areas of the product. It consists of high-level topics like 'Logging in', 'Onboarding' or 'Making a videocall'. Each of these topics consists of subtopics. For example, we use three different protocols for videocalling (depending on who is trying to call whom, and on which devices they are at the moment), so each protocol is a subtopic of the topic 'Making a videocall'. It would look something like this:
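(The outline below is only an illustration; the actual subtopic names, and the place where the note sits, are made up here.)

    Making a videocall
      - Videocall via protocol A
      - Videocall via protocol B
        Note: there is a backoffice setting that can overrule this
      - Videocall via protocol C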

We keep it vague

Note that I don't describe which accounts to use, or even how to initiate the call (there are actually four ways to do that). That is on purpose: every time this subtopic is tested, it will be done in a different way. This gives us far more coverage than repeating the exact same steps over and over again, as you would do with very specific test cases.

We add extra info when needed

You might have noticed that sometimes I like to add remarks, such as 'Note: there is a backoffice setting that can overrule this'. Those notes are a reminder for the tester, usually describing something important, or something that is not obvious from the interface. The notes are updated when the feature changes. Since we release in small batches it is quite easy to keep them up to date; this is not part of any formal process. And any time I run into questions about the functionality, I might add some extra info, enriching the 'testing script'.

Testing flow

Please note that what I describe in this blog post is our final pre-release testing (regression testing, integration testing, acceptance testing, or whatever term you prefer). At this point, a major quality effort has already gone into the individual stories: pair programming, unit testing, code reviews and exploratory testing. All of this is part of the story. The individual stories have been tested to the point that the dev team (including the tester) is quite sure they work as intended before they are added to the release.

Then the pre-release testing begins. This is a very pragmatic process. I mark each of the subtopics in our 'testing script' with one of three statuses (there is a small sketch of this after the list):

  1. thumbs-down (that was the most convenient symbol in Confluence) for subtopics that I chose NOT to test. I skip these because I judge the risk to be low (no changes, tested often, very straightforward), or more precisely: because the cost of testing is too high in relation to the risk and impact of an error.
  2. thumbs-up for subtopics that work as expected. With the thumbs-ups and the thumbs-downs, everybody can see the progress of the test and which parts have been covered. That is especially handy since sometimes I'll use scenarios or take 'tours' through the product (see Cem Kaner or James Whittaker for more on that), hitting several topics in one go. I'll mark these as 'done' when I finish the tour, and then look for (sub)topics I haven't hit hard enough yet.
  3. question mark for subtopics that are not OK in my opinion. (I used to mark them with a big red cross, but soon found out that in some cases my own misunderstanding of the product was the 'error'. So now I'm a bit more modest, and mark them with a question mark, without judgement. Yet.) These question marks get triaged with a developer and/or product owner. This results in either a) a hotfix for the current release, b) a new story (for a future release), c) a 'not worthy of our attention', or d) an update of the testing script (if I misunderstood the function or it has been changed on purpose).
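We track all of this directly on a Confluence page, not in code, but if it helps to picture the shape of the data, here is a minimal sketch in Python. The topic names, subtopic names and the statuses assigned to them are made up for illustration.

    # A rough model of our 'testing script': topics map to subtopics, and each
    # subtopic carries one of the three statuses. All names here are made up.
    from enum import Enum

    class Status(Enum):
        SKIPPED = "thumbs-down"     # deliberately not tested (risk judged low)
        OK = "thumbs-up"            # works as expected
        QUESTION = "question mark"  # something unexpected, to be triaged

    testing_script = {
        "Logging in": {
            "email + password": Status.OK,
            "single sign-on": Status.SKIPPED,
        },
        "Making a videocall": {
            "protocol A": Status.OK,
            "protocol B": Status.QUESTION,
        },
    }

    # Anyone glancing at this sees the progress and coverage of the test run.
    for topic, subtopics in testing_script.items():
        for subtopic, status in subtopics.items():
            print(f"{topic} / {subtopic}: {status.value}")

In practice the overview is simply the Confluence page itself, with an icon next to each subtopic.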

Ready to release

So at the end of this test we'll have a list of icons (thumbs-up, thumbs-down, question mark) indicating the 'health' of each topic and subtopic at a (very general) level. Rinse and repeat until the whole team is comfortable with shipping. The total time for this entire process is hours or days, not weeks!

Note: usually I'll do a quick run-through with the team somewhere near the end, to explain which tests I did and did not do. Sometimes the developers will point out areas that need specific attention (e.g. because something changed there), leading to additional targeted testing. I usually do pair testing with a dev too; that really helps a lot in assessing the risks and skipping unnecessary testing.

Conclusion

So this is the process I use, in a specific context, with a specific team and product. I'm sure there are many other (and better) ways out there, but this has served us well. I decided to share it because I see quite a lot of teams focusing entirely on test automation; while that is very valuable, it does not replace exploratory testing in my opinion. (Maybe I'll write about the relation between the two some other time.) I hope to have made clear that exploratory testing can be both quick and thorough, providing a lot of value without adding much lead time to your release.

Feel free to contact me [dirk@infi.nl] if you want to discuss the above, or if you have tips!

[Dirk is a project manager and scrum master at Infi.]
