At the European Testing Conference in Valencia in 2019, I presented my first talk on “microheuristics”. I’ve been an avid and skilled exploratory tester for years, but it wasn’t until the summer of 2018, when I did a pairing session with Lisi Hocke and a workshop with Thomas Rinke, that I started to notice and explore something rather interesting:

Often, two testers performing exploratory testing will have the same idea for what the next step or action could be. This happens even when that next step is not the obvious, “logical” next move (such as saving after completing a dialog).

This was intriguing. If they’re not following a script, how do two testers who don’t work together – and don’t even work in the same domain – come to the same idea at the same time?

My suspicion: there are shared models, assumptions and patterns that result from learning about testing, knowing how software is built, experiencing errors, and hours of testing (whether hands-on, automated, or simply asking questions in various development meetings). These come quickly to mind when we see something that reminds us of them – and they suggest what to do next (ideally an action that provokes interesting behaviour).

To name this phenomenon, I coined the term “microheuristic”.

The definition is:

A quick way to determine “what’s my next action” while testing. A microheuristic is our brain applying what we’ve just learned to decide on the next step or experiment. The result of a microheuristic being applied will usually prompt an immediate action. Such actions are rooted in snap judgements that we make explicit and strategic by describing them.

They are heuristics because they help us (in a fallible way) to make a decision. I called them “micro” in contrast to the currently well-known testing heuristics, which more often guide decisions about charters or risks – not the very next action in a testing session.

How does this help?

It upsets and worries me when people describe exploratory testing as “just clicking around”. In fact, exploratory testing is very systematic – but we as testers are not very good at explaining that system. Methods like testopsies (term coined by James Bach) help us to make explicit what we’re doing when we’re testing. Describing microheuristics is one way of being explicit about the systematic learning and applying of information.

If we can talk about what we’re doing when we’re testing, then we can improve. We can teach others how to become better testers. We can also recognise when a new method/technology/approach is missing something that exploratory testing can give us.

If we can’t describe what we’re doing, exploratory testing remains a black box. We rely on “experience and intuition” as the sole explanations for our actions and decisions. That’s not teachable – and it can’t be deliberately practised.

Collected microheuristics