Why Was This Check Created?

As I've been thinking more about Checking and Testing, and how to get them working harmoniously, I'm wondering if we are missing something from our checks. This post will focus on automated checks, but I believe the same applies to non-automated checks.

Some teams have become really adept at writing automated checks. They follow good practices. Classes, methods and objects are all well named, and it's obvious what they do. Assertions are clear, and have a well-structured message for when they fail. There are good layers of abstraction and code re-use. The checks are performant, execute quickly, and are designed to reduce flakiness. It all sounds rather good.

But why is that well designed, well written, easy to read check there? Why does it exist? Why was this check written, over all the other possible checks? I can read the check; it's well written, as mentioned, and I can clearly see what it is checking, but that is all I have. How do I know that the steps and the assertion(s) there match the initial intention for it? What was it about this check, this system behaviour, that was worthy of having an automated check created for it? I don't know that.

Why should we care about the why? I believe the results of automated checks impact the way we test, especially in an environment that has adopted continuous integration. Before you test (by test here I mean testing once the developer believes she is "code complete"), all the automated checks are run, and the build is either red or green. A generalisation for now, as I am still giving this more thought, but when the build is red, we tend to focus on that immediately, by chasing green. We will then usually read over the other checks in that area to see what else is covered, design and execute some tests to see what else we can learn, and then return to the new piece of work. When the build is green, we tend to focus our testing efforts on and around the new piece of work. As I said, it's a generalisation; I know I/we don't always do this, but hopefully most can relate.

I believe we aren't always aware of how much trust we put in our automated checks, and we place all that trust without always knowing why a check exists or its importance. We all have a lot of knowledge about our systems, and a lot of that knowledge is interwoven; this is why we create automated checks, because we can't remember everything. We need to make some of this tacit knowledge explicit. It's also why we create mindmaps and checklists: to prompt us to remember things. To consider things.

If the why was also included, I feel it would aid us with test design. It would also aid us when reviewing our automated checks, when deciding to amend some, or delete some. Reviewing your checks and questioning their value is something I encourage teams to do regularly. Just because a check is green doesn't mean it helped you in any way, or that it added any value to your testing efforts. Going back to test design, let's say a check failed that had the following why message somewhere: "This check was created as we had a major issue in live where the system did X and led to Y downtime". If I saw such a failed check, I believe I would probably do more testing in that area than if that message wasn't there. If I was reviewing my checks and saw such a message, I would be able to assess its value far more easily and quickly.
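For illustration only (the basket scenario, names and incident are all hypothetical), the why can travel in the assertion message itself, so it surfaces at exactly the moment the check goes red:

```python
# Hypothetical example: the why rides along in the assertion message.
WHY = ("This check was created after a live incident where the system "
       "accepted empty basket submissions and caused checkout downtime.")

def submit_basket(items):
    """Toy stand-in for the system under check."""
    return "rejected" if not items else "accepted"

# If this assertion ever fails, the tester sees the why immediately,
# not just the what.
result = submit_basket([])
assert result == "rejected", f"Empty basket was {result}. {WHY}"
print("check passed")
```

A tester who sees that failure message knows straight away this area has bitten the team before, and can weight their testing accordingly.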

Here are several ways we could add the why in:

  1. Code Comment - No doubt a lot of you have turned your nose up reading that. But I'm not talking about using comments to explain what the code does; as stated, we can read that. I'm talking about a few lines above a check, explaining why it's been created.
  2. BDD Tool Lovers - While I discourage people from using BDD tools to write automated checks, especially in places that aren't practising BDD, I know many of you are using such tools. So you could add the why to the scenario section of the feature file.
  3. Commit message - Perhaps we ensure we add excellent commit messages when new checks are created, clearly indicating the why there. We could then look at the commit history of the file. This has flaws if checks are moved around a lot during refactoring.
  4. External document - Or perhaps we could store the why in a document somewhere. Perhaps a mindmap with IDs for the checks.
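To sketch option 1 (everything here is invented for illustration: the incident, the function and the check), the why comment sits directly above the check, separate from the what that the code already expresses:

```python
def apply_discount(price, percent):
    """Toy implementation so the example is self-contained."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return price * (1 - percent / 100)

# WHY: Created after a (hypothetical) live incident where discounts over
# 100% were accepted, charging customers negative amounts. If this check
# fails, test more deeply around discount validation before chasing green.
def check_discount_cannot_exceed_100_percent():
    try:
        apply_discount(50.0, 150)
    except ValueError:
        return True
    return False

assert check_discount_cannot_exceed_100_percent()
```

The comment adds nothing to what the code does, and that's the point: it records the one thing the code can never tell you, which is why it was worth writing.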

Even though my thoughts are early days, I don't believe adding the why is a huge deal; the fact that you are creating the check means you already know why, it's just not there later in the check's life. Or available for new team members to read. Or anyone. But I do believe it could play a significant part in assisting our testing efforts, especially in check reviews and test design.

These are some early thoughts; I just had an urge to write something after several conversations at Euro Testing Conference on this subject. Would love to hear some of your thoughts if you have the time to engage.



  1. Great article!
    1. Automation in itself is of less value unless complemented with some documentation that tells what it does.
    2. Apart from code comments, check-in comments, external documents etc., I've personally found that descriptive log messages are also helpful.
    3. In my personal experience, benchmarking software solely based on automation has been challenging. Many times, failures are attributed to intentional changes on the software side that break the automation. These failures need not just analysis but also a fix to the automation.

    1. Your third point is of great importance.

      We - testers, developers, management - must always keep in mind that a black-box automation suite is another piece of software, one which has the "main" product as its requirements. When the "main" product changes, the automation suite must change as well - it adds one more layer of cost to requirements evolution. If this cost is not taken into consideration, over time the automation suite will either be extremely buggy or fail to cover new features - losing its value as a source of rapid feedback.

  2. Hi Richard - nice post, thanks for sharing!

    I like your ideas for where you can include the intent behind the check.

    I agree - I typically strive to understand why we're doing something to help ascertain what value it's giving us.

    An example I have is conversations discussing which automated checks we can get rid of once a feature is live (similar to your test design example).

    Typically the team is aiming to get rid of as many of those GUI user journeys as possible, because you know, they take so long to run, are brittle & tricky to maintain.

    When a Programmer sees a journey that seems to make a similar assertion to a unit check they have written, they move to get the journey removed as the functionality is suitably covered "lower down the pyramid".

    Having the intention (or why) behind the user journey helps me & the team to understand & put forward the case why that journey should live or die.

    Knowing the intention behind the check has saved me from both unnecessary duplication & gaps in the overall testing coverage. Very useful indeed


  3. That's a great question to ask! After a couple of months of automating tests on an ongoing dev project, there is not much of a trace that would give the answer. Thinking about it a bit gave me an idea to put it in reports. So each step saying WHAT it checks would additionally contain the answer to WHY. This way the "responsible person" analysing the reports could also decide whether the check is still needed or not. Or even better, the original WHY could change, and based on those traces the checks could be adapted and extended. Thanks for asking, Richard!

  4. I'm working on something similar, and your thoughts echo and expand on my own. I think that the why isn't just useful, but important. I come from the perspective of risk mitigation. What risks do we believe we are mitigating with the tests? And how does the value of those tests change with a change in context? Are the risks reduced by established history, and therefore the costs exceed the benefit? Are the risks based on environment, and should they be reviewed based on environmental changes?

    Thanks for the post!

  5. Hi Richard,

    thank you for this post. I think the intent for each check is really important. In addition to your four places to put this information, I found a fifth solution for my project: since my Gherkin tool allows tagging the checks, I created a new tag "Intent" for my checks.

    The benefit is that the information now shows up in the most accessible place I have: the report.
    So now everybody reading it knows why I created this specific check. Furthermore, putting the intent on the checks really helped me clean up again. :-)
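    Outside Gherkin, the same tagging idea can be sketched in plain Python (the decorator name, tag text and report format here are all invented): attach the intent as metadata on each check, so whatever produces the report can print it next to the result.

```python
def intent(why):
    """Attach the reason a check exists, so a report can display it."""
    def wrap(check):
        check.intent = why
        return check
    return wrap

@intent("Created after a live incident where duplicate orders were charged twice")
def check_no_duplicate_order_ids():
    orders = ["A1", "A2", "A1"]          # toy data with one duplicate
    seen, duplicates = set(), []
    for order_id in orders:
        if order_id in seen:
            duplicates.append(order_id)
        seen.add(order_id)
    assert duplicates == ["A1"], "duplicate detection failed"

# The report line now carries the why alongside the what:
check_no_duplicate_order_ids()
print(f"PASS {check_no_duplicate_order_ids.__name__}: "
      f"{check_no_duplicate_order_ids.intent}")
```

    As with the Gherkin tag, the win is that the why ends up in the most accessible place: the report itself.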

    Best regards,