Testability Question at CAST2015

I was watching the live stream of CAST2015 earlier; in particular, I was listening to Maria Kedemo talking about "Visualising Testability". Having done her and Ben Kelly's workshop at LetsTest, I was interested to hear Maria talk about this topic again, and also to see if anything from the workshop made it into the talk.

Wanting to get more involved, I posted a question to the Twitter printer, which, on a side note, is an awesome idea.

If I could rewrite it, I would write: "Should testers with coding skills focus some of their time on increasing the testability of the product/testing, instead of focusing on creating automated checks, which I believe is where the majority spend their time?" Sadly, that doesn't fit in 140 characters.

I believe they should, as I believe automation is just a tool. Its most common use is reducing the time testers spend checking, by automating those checks, and that is where I believe most focus their efforts. However, some testers' skills are now on par with some developers', especially those occupying roles such as SDET. So surely we could use those skills to increase some aspects of testability. For example, referencing James Bach's model, someone with those skills could spend time improving intrinsic testability by altering the product itself. This could mean adding in some logging they require, writing some hooks to make accessing the system easier, and much more.
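To make the intrinsic-testability idea concrete, here is a minimal sketch in Python. The `Checkout` class and its fields are entirely hypothetical, standing in for any piece of product code; the point is the two small additions a coding tester could make: logging on every state change, and a hook (`dump_state`) that exposes internal state in a machine-readable form so tests don't have to scrape the UI.

```python
import json
import logging

logging.basicConfig(format="%(asctime)s %(name)s %(message)s")
log = logging.getLogger("checkout")

class Checkout:
    """Hypothetical product code, with testability additions marked below."""

    def __init__(self):
        self._basket = []

    def add(self, item, price):
        self._basket.append((item, price))
        # Testability addition: log every state change so a tester can
        # trace behaviour after the fact without a debugger.
        log.debug("added item=%s price=%s", item, price)

    def total(self):
        return sum(price for _, price in self._basket)

    # Testability addition: a hook exposing internal state as JSON, so
    # checks and tools can inspect the basket directly.
    def dump_state(self):
        return json.dumps({"items": self._basket, "total": self.total()})
```

Neither addition changes what the product does; they only make what it did observable, which is exactly the kind of work a tester with coding skills could take on.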

But for me, I want to see more testers focus on what James titled "Project-Related Testability". I encourage people with coding skills, testers or developers alike, to create tools that really support the testing effort. For example, they could write tools for reading log files, creating data, manipulating state, manipulating data and much more.
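As a sketch of one such tool, here is a small log-file summariser. The line format in the regex is an assumption (timestamp, level, message); any real tool would match whatever format your product's logs actually use. The idea is simply that a few lines of code can turn a wall of log text into something a tester can act on.

```python
import re
from collections import Counter

# Assumed log line format, e.g. "2015-08-05 12:00:01 ERROR payment timed out"
LINE = re.compile(r"^(?P<date>\S+) (?P<time>\S+) (?P<level>\w+) (?P<message>.*)$")

def summarise(lines):
    """Count log entries by level and collect the ERROR messages,
    so a tester can see at a glance what went wrong in a run."""
    levels = Counter()
    errors = []
    for raw in lines:
        match = LINE.match(raw.strip())
        if not match:
            continue  # skip lines that don't fit the assumed format
        levels[match.group("level")] += 1
        if match.group("level") == "ERROR":
            errors.append(match.group("message"))
    return levels, errors
```

A tool like this takes an afternoon to write, but it pays back every time someone has to work out why a test run failed.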

Of course, as with any automation pursuit, it should be clear what you are trying to achieve, and you should be wary of falling into the automation trap. If something is taking too long, will the value be returned by continuing, or should you just accept defeat?

Anyhow, I encourage everyone to watch Maria's talk once it's published on the AST YouTube page, and to think about what tools you could create, or someone on your team could create, to increase an aspect of testability.


  1. I think it depends on how you view automated checks. Some of those that I have written in WebDriver are just that: pretty one-dimensional, serving only one purpose, to check a single aspect of the software.

    But most of the time I try to make my automation serve more than just checking: it will involve some sort of randomised data input and a lot of logging, so that it is not *always* repeating the exact task each time. This in itself makes the product more testable, as I can have confidence that my automated *checks* have a testing aspect to them.

    I guess good logging, screenshots and so on are a given when writing effective automated checks, but increasing the amount, and the way in which they are written and thought about, can make the product more testable too.

  2. As said, there are a number of testers with coding skills. If they are aligned properly, they themselves can create apps to increase the testability of the product.
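The randomised-input idea in the first comment can be sketched briefly. The function names and fields below are hypothetical; the one essential technique is printing the seed into the check's log, so that any failing run can be replayed with exactly the same "random" data.

```python
import random
import string

def random_name(rng, length=8):
    # Build a random alphanumeric string for a form field.
    return "".join(rng.choice(string.ascii_letters + string.digits)
                   for _ in range(length))

def make_check_data(seed=None):
    """Build randomised input for an automated check, logging the seed
    so any failure can be reproduced exactly."""
    if seed is None:
        seed = random.randrange(2**32)
    print(f"check data seed={seed}")  # ends up in the check's log output
    rng = random.Random(seed)
    return {"username": random_name(rng), "age": rng.randint(18, 99)}
```

Without the logged seed, a randomised check that fails once and then passes tells you almost nothing; with it, the variation adds testing value at no cost to repeatability.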