Software Testers Clinic - A Mentor Experience Report

I recently attended the first Software Testers Clinic, an initiative created by Mark Winteringham and Dan Ashby. You can read more about the idea on their website.

I thoroughly enjoyed the evening, especially the second half, where attendees were encouraged to actually do some testing. As the sessions are aimed at people new to testing as well as people looking to expand their testing knowledge, the attendees are a combination of students and mentors. I was attending as a mentor, so the testing exercise was arranged so that three students were paired with each mentor. My first student was Demetra Cucueanu, a budding new tester, proving to all that it's never too late to try a new career. Demetra is actively seeking a junior testing position. The second was Bhagya Mudiyanselage and the third was Joe McGuinness.

This post is a response to the experience report created by Bhagya, which you can read here. It's an attempt to explain the approach I took with the students, as a mentor.

The challenge set for us was simply to test http://www.drawastickman.com. We immediately asked how long we had, and were told about 30 minutes.

So before we got started, I asked them a simple question: why are we testing this? None of us knew, so we asked, and the response was to learn more about testing; drawastickman.com was simply a vehicle to aid that. We then briefly discussed the importance of knowing why we are testing something.

So we got going with me asking "What do you know about drawastickman.com?". It turns out we/they knew very little; the obvious is all we knew, "it's a website". So I introduced them to the concept of a Scouting/Recon session. I was first introduced to the concept in the book Explore It! by Elisabeth Hendrickson, a fantastic testing book if you haven't read it yet. The session is intended to help us set the context a little, specifically around the application. Ideally you would spend a full session doing this, so about 90 minutes to 2 hours.

I believe it's very hard to test something you know nothing about, and I mean nothing; all we had was a URL. So I encouraged the students to spend 5 minutes just exploring the application, looking for things they could identify with. For example, immediately after opening the URL they realised it's a game. Joe immediately saw a link for a native mobile version of the game, which could be an interesting avenue to explore. Bhagya went straight into the game, to see if she could get a basic understanding of how it worked. Demetra explored the site's navigation and discovered numerous forms that she thought could be interesting to test. Plus a whole lot more.

After 5 minutes, I asked them to explain to each other what they had discovered about drawastickman.com, and I collated the list on a nearby whiteboard, I loves me a whiteboard. I then recapped with them how quickly we were able to get a better understanding of this application. We went from a URL to a 10-15 item list of things we actually knew about the website.

I then suggested they pick something from their list or the main list to explore further, something that was of interest to them, explaining to them how this approach now sets the focus of their testing, it frames it. I introduced the idea of charters to them, and again encouraged them to read Explore It! We could have continued to explore and just shallowly tested things as we came across them, however I was keen to see them attempt to test an area deeply. So they all selected an area to explore further. The reason I was keen on this was to relate it to testing in their jobs, where I assumed they would need to test deeply.

I shadowed them for about 5 minutes, watching them all test, and a pattern emerged: no notes, or very little. So after another 5 minutes of testing, I stopped them to discuss what they had learnt after just 15 minutes of testing. They had all learnt a lot, however my earlier observation had come to fruition; they were eagerly telling me about what they had found, but the majority of it was from memory. There were even a few confirmations of this, such as "there was this one thing, but I've forgot". So we had a brief discussion about the importance of taking notes whilst testing, and how they can help guide future testing, but also aid you in telling the story of the testing you have done thus far.

They continued to test, at which point I decided to take the approach of chatting to them individually, to see how they were finding the approach of using charters. This gave me the opportunity to offer some one-to-one feedback and directly suggest some resources to them based on what they had done or were doing. It also gave them the opportunity to quiz me.

One of the topics that came up was: when do we move on to the next charter? Time was short, so we had a brief discussion about it. We discussed the idea of feeling like you've found enough information, or that you've exhausted all the ideas you had. This allowed us a brief moment to discuss the relationship between charters, test ideas and actual tests. We then very briefly touched on the idea of heuristics and some popular mnemonics, and I suggested some resources for them to explore, including Karen Johnson's card deck and Test Insane's MindMaps. Also, with drawastickman being a public application, I suggested exploring social media for comments on the game, as well as reviews in the app stores. These can be a fantastic source of test ideas for public-facing applications.

The final discussion we had was specifically related to testing drawastickman.com. If you're not familiar with it, you draw a character with the touchpad or mouse, and the site will bring it to life, depending on what you draw. The discussion was about reproducing bugs: how could we reproduce issues we observe, seeing as re-drawing the exact same stickman would be tricky? So we discussed some ideas, such as recording the screen and using a mouse cursor recorder, highlighting the use of tools.

That's pretty much that. I feel in this instance that my mentoring/coaching went rather well. I could have perhaps let them test a bit longer than I did, however all the students seemed really engaged in learning more about charters and sessions. I had a lengthy discussion with Joe specifically about using sessions in the workplace and encouraged him to google Session Based Test Management.

I tried my best to be the facilitator of discussions, instead of telling them what to do, allowing them to ask me why I was suggesting X. Such an approach also allows me to collect more information from them, which may highlight a different approach I could take. Without the discussion though, it isn't really mentoring/coaching, it's telling.

As a sole tester at the moment, it was great to be able to mentor and coach some testers, while they actually tested something. I really enjoyed the event.

I would encourage anyone in the London area to check out a future Testers Clinic, regardless of your testing level, as you can participate as a student or mentor, both full of potential learnings. The link to their site is at the start of the post and details of the next meetup are on their home page.

A Four Week Approach to Creating Abstracts

I'm often asked how I go about creating abstracts, and it's actually the theme of one of my workshops at LetsTest this year with Martin Hynie. So I thought I would share a timeline with you of how I tend to do it.

Most CFPs give you around two months to submit your abstracts, so there is plenty of time to come up with and formulate those awesome ideas of yours. I tend to take a four-week approach.

Week 1
As most of you know, I love my whiteboard. But if you don't have one, there are many other mediums you could use. So what I do in week one is create a mind map of potential ideas for a talk, workshop or tutorial. I spend no more than 15 minutes on this initially, as I'm looking for things that are on my mind right now. These ideas could be anything, e.g.:

  • A blogpost/podcast/video you have seen recently, that you could expand on or argue against.
  • An experience at work that you feel could make a good story.
  • Something you have been blogging about that could be turned into a talk.
  • Something related to a book you have been reading or read recently.
Then after 15 minutes I stop. For the remainder of that week, new ideas and experiences will come to me, so I add those to the mind map. This is one of the reasons it's important to always carry something to take notes on, a small notebook or your phone, to capture these ideas.

Week 2
Now at this point I have a mind map containing some ideas. It's now time to try and elaborate on some of them. So I take the nodes one by one and spend no more than 10 minutes on each, elaborating on it and noting key bits of information. For example, if it was an experience at work, I would write down the key people, the problem, quotes, a timeline of events, and my learnings.

Once I've done this for each node, I stop and keep adding to it over the next few days when I remember new things. 

Now at this stage, I have a visualisation of my potential talks, and some may stand out more than others. Perhaps the one with the most child nodes means you have more ideas about it; it resonates with you more than the others. Perhaps you can spot a nice theme or pattern in one that you feel would structure a good talk.

So for the remainder of week 2, I take my top three ideas and elaborate on those even further. To continue the example above: what is it about the key people that is important? What role do they play? Are they a positive or negative part of the story, or both? What is the problem? How did you identify the problem in the first place? What was this problem impacting? How did you know the problem had been solved? I continue to do this as above over the remainder of the week, adding to it when I remember new things, or have new ideas.

So at the end of week two, we have three ideas that we have now expanded two levels deep.

Week 3
So in week three it's time to try and create some abstracts. Take your expanded ideas and try to create a snippet of your story, to entice reviewers to it. In my opinion this is one of the hardest parts, especially if the art of writing doesn't come naturally to you, like me.

I tend to create a document in Google Drive; the reasons for this come later. I take a picture of my mind map, or a screenshot if you did it electronically, and add that to the top of my document for ease of reference. I start my abstract by spending no more than 5 minutes trying to think of some good titles, and I note them all down, no matter how crazy some are. Then it's time to write that gripping, sock-knocking-off, enticing abstract.

Again my time-boxing theme continues, it's how I tend to work. I spend no more than 60 minutes writing my first draft. I take the parent node and all its children and try to translate that into some words to explain why it was added to my map. So to continue my example, I may write something like: "this story contains many characters; during my story I will introduce them and their importance, expanding on how their actions impacted my approach to solving this problem, and how their characteristics led to me changing my interactions with them". Something like this.

Once all the nodes are done, we should hopefully have a collection of relevant paragraphs and sentences that form the core of our story; it's now time to add some stitching to turn them into one congruent abstract. Repeat the process for all three.

Now we are three weeks in at this point, that's a long time, and that's a lot of thinking. You're probably getting a stronger feeling towards one of the abstracts, or maybe two of them. So I tend to spend some extra time on those to make sure I've included all I can think of in my first draft.

Week 4
This is probably the most important week. We have invested a lot of time by this point, we believe we have some fantastic talks to give, and you believing in it is the most important thing. However, so far it's just you, your ideas, your thoughts on what is interesting. So it's time to get some reviews. This is why I tend to use Google Drive, as it's easy to share and track comments.

The testing community is a very friendly space, most of the time at least, but especially when I am around :D. There are lots of people willing to help others out. But what exactly is it you are looking to have reviewed?

The least important thing, in my opinion, is your story or the theme of your talk. That may surprise some people, but getting this far with it means you care about it, you believe it's interesting. That doesn't mean you shouldn't ask for feedback on it, or change it based on the feedback offered, but for me, it's not the main thing I am after.

The most important thing is the words. Spelling and grammar are of course up there. After that though, it's about its enticement. Is it congruent? Does it pull your reviewer in? Would they attend your talk because of the abstract, not because it's you? Get their feedback on those things, then tweak and amend accordingly.

Also, read it several times yourself, with sufficient time in between, like a day or so. As I mentioned already, I find time in between allows my brain to give me all it has to offer. 

At the end of week four? Well, you pick one of those titles and you get it submitted of course, then patiently, but excitedly, wait for the decision, knowing that you gave it all you could, your best efforts.

Additional notes

Now of course you could do this a lot quicker than four weeks, however I find the time in between the weeks really allows my brain to process all the ideas and inform me of all it has to offer.

Also you don't have to wait until week 4 to get some external input, it could be useful to ask some close peers or work colleagues if they have something they believe you could talk about. 

Once you've done this process multiple times, you will also have a backlog of potential talk ideas. I tend to store these in one big central map, which in turn could speed up weeks one and two. However, I try not to go straight to this map, unless I hear about a CFP for something interesting too late to adopt the four-week approach.

Why Was This Check Created?

As I've been thinking more about Checking and Testing, and how to get them working harmoniously, I'm wondering if we are missing something from our checks. This post will focus on automated checks, but I believe the same applies to non-automated checks.

Some teams have become really adept at writing automated checks. They are following good practices. Classes, methods and objects are all well named, and it's obvious what they do. Assertions are clear, and have a well-structured message for when they fail. There are good layers of abstraction and code reuse. They are performant, execute fast and are designed to reduce flakiness. It all sounds rather good.

But why is that well designed, well written, easy-to-read check there? Why does it exist? Why was this check written, over all the other possible checks? I can read the check, it's well written as mentioned, I can clearly see what it is checking, but that is all I have. How do I know that the steps and the assertion(s) there match the initial intention for it? What was it about this check, this system behaviour, that was worthy of having an automated check created for it? I don't know that.
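To make that concrete, here's a minimal sketch of the kind of check I mean, written in Python with a pytest-style assert; the apply_discount function and the SAVE10 code are entirely made up for illustration. It's tidy, well named and the assertion message is clear, but nothing in it tells you why this particular behaviour earned an automated check.

    # A tidy, readable check. What it verifies is obvious;
    # why it was worth automating is nowhere to be seen.
    # (apply_discount and the SAVE10 code are made up for this sketch.)

    def apply_discount(total, code):
        """Toy implementation so the example runs on its own."""
        return round(total * 0.9, 2) if code == "SAVE10" else total


    def test_save10_code_reduces_basket_total_by_ten_percent():
        assert apply_discount(100.00, "SAVE10") == 90.00, \
            "SAVE10 should knock 10% off the basket total"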

Why should we care about the why? I believe the results of automated checks are impacting the way we test, and I believe this is especially true in an environment that has adopted continuous integration, as before you test (by test here I mean testing once the developer believes she is "code complete"), all the automated checks are run, and the build is either red or green. A generalisation for now, as I am still giving this more thought, but when the build is red, we tend to immediately focus on that, by chasing green. We will then usually read over the other checks in that area to see what else is covered, and then design and execute some tests to see what else we can learn. Then we return to the new piece of work. When the build is green, we tend to focus our testing efforts on and around the new piece of work. As I said, it's a generalisation for now, I know I/we don't always do this, but hopefully most can relate.

I believe we aren't always aware of how much trust we put in our automated checks, and all that trust without always knowing why the check exists or its importance. We all have a lot of knowledge about our systems, and a lot of that knowledge is interwoven; this is why we create automated checks, because we can't remember everything. We need to make some of this tacit knowledge explicit. It's also why we create mind maps and checklists, to prompt us to remember things. To consider things.

If the why was also included, I feel it would aid us with test design. It would also aid us when reviewing our automated checks, when deciding to amend some, or delete some. Reviewing your checks and questioning their value is something I encourage teams to do regularly. Just because a check is green doesn't mean it helped you in any way, doesn't mean it added any value to your testing efforts. Going back to test design, let's say a check failed that had the following why message somewhere: "This check was created as we had a major issue in live where the system did X and led to Y downtime". If I saw such a failed check, I believe I would probably do more testing in that area than if that message wasn't there. If I was reviewing my checks and saw such a message, I would be able to assess its value a lot more easily and quickly.

Here are multiple ways we could add the why in.

  1. Code Comment - No doubt a lot of you have turned your noses up reading that. But I'm not talking about using comments to explain what the code does; as stated, we can already read that. I'm talking about a few lines above a check, explaining why it's been created (see the sketch after this list).
  2. BDD Tool Lovers - While I discourage people from using BDD tools to write automated checks, especially in places that aren't practising BDD, I know many of you are using such tools. So you could add the why to the scenario section of the feature file.
  3. Commit message - Perhaps we make sure to write excellent commit messages when new checks are created, clearly indicating the why there. We could then look at the commit history of the file. This has flaws if checks are moved around a lot during refactoring.
  4. External document - Or perhaps we could store the why in a document somewhere, perhaps a mind map with IDs for the checks.
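To illustrate the first option, here's the same made-up check from the sketch above, with a few lines of comment capturing the why. The incident it refers to is invented; the point is simply that anyone reading a failing build, or reviewing the suite later, gets that context for free.

    def apply_discount(total, code):
        """Toy implementation, as in the earlier sketch, so this runs standalone."""
        return round(total * 0.9, 2) if code == "SAVE10" else total


    # WHY: created after a (hypothetical) live incident where discount codes
    # silently stopped applying and support were flooded within the hour.
    # If this fails it's guarding real, known-painful behaviour, so don't just
    # chase green, and think twice before amending or deleting it.
    def test_save10_code_reduces_basket_total_by_ten_percent():
        assert apply_discount(100.00, "SAVE10") == 90.00, \
            "SAVE10 should knock 10% off the basket total"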
Even though my thoughts are at an early stage, I don't believe adding the why is a huge deal; the fact you are creating the check means you already know why, it's just not there later in the check's life, or available for new team members to read. Or anyone. But I do believe it could play a significant part in assisting our testing efforts, especially in check reviews and test design.

These are some early thoughts; I just had an urge to write something after several conversations on this subject at the Euro Testing Conference. I would love to hear some of your thoughts if you have the time to engage.

Thanks.