How I Get Technical Workshops Up and Running


A few days ago I saw this tweet from Maaret. At the time of reading it, I felt I had a 100% success rate with technical workshops. I had the formula.
I stated on Twitter that I would write about the formula. Then last Tuesday happened.

Last Tuesday, I visited a client to do some internal WebDriver training, and well, let's just say it didn't go to plan. We adapted the plan until lunch, then threw all the plans out after lunch and did something completely different. I'm fortunate to have alternatives at my disposal and to be comfortable enough with my knowledge to deal with it; others may not feel the same.

So I'm going to write about some of the approaches I've taken in order to get technical workshops up and running faster, allowing us more time to actually learn.

Virtual Machines (VM)

I haven't used this approach for a while. In fact, I only ever tried it once. However, I know several people who swear by this approach. Simple concept: create a VM with all the dependencies on it, have the attendees download a specific VM client, distribute the VM image, set the image up and off we go!

My class didn't go to plan. People's machines were not powerful enough to run a VM. They got the VM up and running, but interacting with it was painful. Like using IE on a 56k modem painful, or watching AOL connect back in the day painful.

Another issue I've seen with this approach is distributing the image. These images can be big files, some running into gigabytes depending on the operating system. Downloading them in advance is an option, but could be a slow, data-consuming process for some. If they don't download them in advance, it's time to pray to the conference wifi while someone, or several people, download this image. Or we distribute the image on USB pens, an option not everyone would be comfortable with, given the security concerns around flash pens.

My final issue with VMs is that in some contexts, the actual process of setting up the environment is important, especially if a student intends to take their newly found skills forward and use them at work. They will no doubt have to set up an environment, and by using a VM, they won't know how to do this, or not all of it. It's easy for the trainer to mitigate by providing an environment setup guide, but I've only ever seen one person do this.

Docker

It wouldn't be right to talk about VMs and not include Docker. Now, let's be clear, I'm not stating they are the same thing, they are not. However, one use of Docker I've tried on a course I teach with Mark Winteringham is to dockerise the test application. So we ask attendees to install Docker, then we distribute the image. This is because the application is not the focus of the training, it's just the vehicle. We took this approach as the application has several dependencies and rather poor deployment instructions, but more importantly, because it was quick to get up and running. We'd also already dockerised the application for our own deployment to AWS, so it was a bonus really.

So I would recommend containers for a test application. But they don't really work for tools that the students are going to interact with, because you cannot get inside them as easily as a VM.

I'm going to move away from tools now and talk about approaches I've taken, prior to and during the class.

Advanced Prerequisites

Send the students a list of prerequisites as far in advance as you can. Not always easy, especially if using certain tools, as new versions could be released during that window. So be aware of that.

These prerequisites should be more than just a list of tools. Where possible include installation guides or at least link to existing ones. Give the student as much support as you can. Consider writing a list of instructions, referencing specific guides for tools if required, giving the student something to follow. 

Checklists. Provide a checklist for the student to run through to ensure they have everything they need. For example, at the end of installing all the prerequisites for a Java WebDriver class, you might instruct a student to execute 'javac -version' in the console, to ensure the JDK was installed successfully.

I've more recently taken to recording a short video that students can follow. It details where to find the prerequisites, how to download and install them, and how to check everything is as expected. It might seem like overkill, however it's something I was doing before each class to check my own environment was prepared, so recording it wasn't much more work.

Now, some of you will have been reading that with a little voice in the back of your head saying 'but no one ever does them Richard!!!'. True, there will always be a few people who don't. My only advice here is to ensure they get your message. So ask students to respond to tell you they have received your email, allowing you to chase those that don't respond in the run-up to the event. Another step you can take is to give students a way of informing you they have indeed done it. This could be a simple email, or a Google spreadsheet/form where they can tick their name off.

Also, stress the importance of these to the students. They allow the class to get up and running faster, meaning we can all maximise the time we have together.

Finally, make yourself available to help students debug their setups before the class. I've only had to do this a few times, but it really did make the class go smoother on the day. It helped my focus, and importantly started all students on the same page, keeping the flow going.

Pretest with your contact

This advice is more related to internal client visits, and is one I received recently from Alan Richardson. Arrange a call/hangout with your contact at the company, and do a pretest on their machines. So have them install the prerequisites, then use the call to distribute the specific code for the class and ensure they can compile and run it.

This would have saved me recently: installing the prerequisites was all good, however when it came to actually importing the project, it turned out the client's internal network blocked all Maven calls, meaning none of the dependencies could be downloaded, something I wasn't prepared for.

Start immediately

For both internal and public events, I tend to get to my room 30-45 minutes before I need to. I like to get my machine set up, the desk how I like it, and just get a general feel for the room. This is also the case for some students. In my experience students tend to arrive 15 minutes or so before the official start time of classes. I like to take advantage of this time.

I will ask those who arrive early if they have downloaded all the prerequisites and have everything set. If someone says no, I can now try and get them set up in the free time. By free, I mean it isn't going to eat into the class time. This isn't always an option, but it's one I like to take advantage of if it arises.

Class support

When running a class, especially at a conference, I like to gauge the level of the students, as there is usually someone who has experience with the environment, tool or programming language. If I believe I've found someone, I will approach them to see if they are willing to help me with getting people set up. They can be incredibly valuable to me, the class, and themselves, as it will speed us up in getting to the fun bits!

Backup plan

Have a backup plan. I could end it there, but let me elaborate with some experiences. Wifi... as we all know, it can be very hit and miss. So if your class requires access to a specific site, and the internet goes down, you're a bit stuck. If you have control of the application, consider ensuring you can run the site locally. This could then allow you to turn your machine into a hotspot and have people connect and use an instance running on your machine, instead of going out over the web.

Another option is to have some theory activities ready, which you can get students to work through while you try and resolve issues. Visualisation tasks work great for this. Or a retrospective on what's been covered thus far.

Have installers, remembering to include all operating systems, on a USB pen ready to distribute if the internet has gone. Of course, this comes with the risks mentioned above.

Pairing

If someone is really struggling with their environment/laptop, see if someone in the class is willing to pair with them. This can be a great way to keep the class flowing. Then at a break or lunch, you can work with the student on the machine to see if you can get it up and running for them.

Pairing also comes with multiple other benefits which lots of people have written about.

Your machine

If pairing isn't an option, I've actually given my machine to a student before. This works well in a show-and-tell-then-practise format. In the instance I did this, it was a class where we were exploring how to use proxies with your mobile device.

Not always a valid option but one to consider.

If you have multiple laptops, this could also be a possibility. Give the student a spare machine. I personally don't have a spare, but some of you might!

Stay after the class

I still like to work to the principle that every attendee of my classes will leave with working/running code on their machine. So with that in mind, I always try and ensure that I'm free immediately after my class. I do this anyhow for questions etc, but it also gives me more time to debug a machine if needed.

Post the class, conference

I like to say that my classes come with lifetime support. They do, and I mean it. Funny how very few people take me up on this. Anyhow, what I mean is, if we were still unable to get a machine or something up and running, I will give them access to me after the event. This could be via email, Skype or a hangout etc.

So the student can work on the issue in their own time with my support. Or, when they have got their environment working, they may want to re-run through specifics from the class; I'm always happy to do this.

Videos / Handouts

Another backup plan is to have videos or handouts of the activities available for attendees after the class. This is especially useful when talking over code, or explaining how to use a specific tool, allowing the student to work through the material again in their own time, pausing where they need to.

Summary

Technical workshops are hard. They are made more complex by the very fact we have to use laptops. We can do a lot to make this easier, but a lot of it requires work prior to the event. As a teacher of technical classes, I feel we have to go that extra mile.

If I design a class on a specific tool, I want to ensure I can teach about that tool, not spend the first half hour to two hours battling admin permissions or other common issues. I want to teach the class.

If you have any advice on the above, or other ways you try to ensure the smooth running of a technical workshop, I would love to hear it.

Thanks for reading.

Win A Ticket To European Testing Conference 2017

So, the folks at European Testing Conference are awesome, because they share the profits with the people who play a large part in making the event happen: the speakers. This year I did a workshop and a talk, and therefore earned a free conference ticket for the 2017 event in Helsinki, or the equivalent money; I opted for the ticket.

I'm taking a sabbatical next year, and therefore unable to use this ticket, so I'm going to give it away!

The conference is taking place in Helsinki on the 9th-10th February 2017. This competition is for the ticket ONLY; I'm not offering travel and accommodation.

So, how do you enter, and importantly give yourself a chance to win?

Well, I started a YouTube channel called Whiteboard Testing; the purpose of the channel is to offer short, informative videos on testing, no longer than 10 minutes. So to enter, I want you to create a video for the channel.

The rules:-

  • This can be on any topic you believe relates to testing
  • IMPORTANT, please don't mention that this is for the competition during the video
  • You do not need to use a whiteboard specifically, it could be a piece of paper stuck on a wall or a chalkboard, but there should be some visualisation to support the talk
  • Submit your video to me at richard<donotincludethisbitinmyemailaddress>bradshaw@gmail.com along with a title and description of the video for YouTube
  • I suggest you watch some of the existing videos to get an idea of the pattern
  • You can enter as many different videos as you wish

The competition ends on the 31st October 2016

I will then form a team of testing excellence to determine which we believe is the best video, and its creator wins the ticket!!! Simples.

Charles Proxy To The Rescue of Adobe Bloodhound

The client I'm currently working at has decided to switch from Google Analytics to Adobe Analytics for our native mobile application. This created a new testing problem for me: how do I test this?

Our previous solution for Google Analytics (GA) was to use a development key, combined with the Real-Time feature in GA and checking back the next day to see the final propagated results.

But Adobe Analytics offers a tool to help test them called Bloodhound. Bloodhound is simply a proxy. You configure your device to route all your traffic via Bloodhound running on your machine, just like you would with any proxy tool such as Charles / Fiddler. So I gave it a go.

Initially I got no results in Bloodhound. It turns out that was due to the SSL restrictions on iOS; no problem, Bloodhound comes with a cert you just need to install. I installed it and, just like magic, analytics started appearing in Bloodhound! The magic didn't last long though, because while Bloodhound seemed happy enough to pick up all the analytics calls, it seemed to kill all other calls, rendering my app pretty much useless! I could interact with our navigation and see the app making the tap and page calls to analytics, but I couldn't get to 50% of the screens as they required server calls!

I discovered two ways to fix my issue.

The first one is very specific to our context. The iOS developers of our app had built an offline version of the API calls, essentially a mock. So I was able to configure our app to use the offline mode and could now navigate to all the pages and see if the analytics calls were correct. Sadly, though, I ran into an issue with our offline mode: it didn't quite have all the scenarios I needed to be happy with my testing. So, back to the problem of Bloodhound eating my app's requests.

I did some googling and found some forums, but most of them were Adobe forums telling you to contact support and they would solve your problem with you. Nice support, but not something I was interested in at this stage; I kind of expected it to take a while to get a response from such a big company, probably a bias that could do with being tested again.

Anyhow, I pondered for a few more minutes and decided, well, if Bloodhound can intercept these calls, surely any proxy could. Sure enough, Charles could see all the requests to the Adobe servers, and I could see the specific analytics calls being made. However, it wasn't as easy as Bloodhound.

Bloodhound was designed to show these analytics, Charles wasn't. So in Charles I got calls with lots of analytics in them, in their raw form. Bloodhound was designed to strip the individual analytics from the calls, meaning I could easily find a specific analytic caused by the action I had just taken, such as clicking a button or landing on a specific page.

So while I could now use the app against real servers, see all the analytics and test them, it just wasn't as easy as I wanted. As said, Bloodhound made it really easy to test individual analytics. The process of tapping/navigating in the app, then checking Bloodhound, was quick and efficient. It was a bit more cumbersome in Charles as I had to read over the raw call to find the exact analytic.

So again I pondered for a few more minutes, and remembered a feature of request forwarding/mapping in Charles. Something I hadn't actually used for a while, so it required a few googles to refresh my memory. But there it was, Map Remote, the missing piece. Charles allows you to map requests to a remote server. A feature I'd actually used in the past to test against different versions of an API, tricking the app into using a version of the API it wasn't yet coded against, a great way to test early.

However, as Bloodhound was running on my local machine, instead of mapping remotely, I wanted to map locally. Not a problem: instead of a remote IP you just enter localhost or your IP. So I enabled 'Map Remote' in Charles and added a new rule. I should add here that you can configure the port Bloodhound runs on; in this instance it was 50000.

I configured my device to point to Charles and instructed it to map any calls for our Adobe server (you can find this out from the Adobe SDK config, or just from looking at the recorded traffic in Charles) to my IP address on port 50000, which was where Bloodhound was running. I saved my config and gave it a go.

Voila! It worked. I was now able to see all my analytics in Bloodhound and my app was also able to hit the server, allowing me to hit every page in the app, and verify the analytics in Bloodhound. Win.

So there you have it, a nice combination of tools. I really do love proxies! Such a powerful tool.

Give Your Automated Checks a Voice

We have robots doing a lot for us now, well when I say robots, I mean automation, but robots sounded well cooler. We have them running automated checks for us, we have them deploying builds into production, we have them creating test data, spinning up machines and environments, plus much much more. We love tools, and rightly so, they're awesome, most of them.

However, the context for this post is automated checks. They offer us much more than pass and fail, but only if you ask them to tell you. I've listed four things that I've used in the past, and how they've helped me.

Execution time

A lot of the projects I used to work on several years ago were for digital agencies. Short to mid-term projects, three to six months or so. I used to write automated checks as frequently as I could, as in such an environment the fast feedback was invaluable. That environment meant regular last-minute changes, hotfixes here, some over there. Something I'm not against, but you need to be able to deal with it. The downside of all these changes was that some focused testing would always slip; in this case it was always performance testing. It wasn't a skill of mine, it still really isn't, but I know enough to get by now.

So I changed my approach. I started storing the execution time of all my automated checks. Sadly, there was no CI on this project, not that it's an excuse, but this was about six years ago. So the automation was being run from my machine whenever a new version was deployed, and after each run, I would add the time to a spreadsheet. I was probably running them twice a day, I would say, so I soon collated a decent data set.

My thinking was that this metric might alert me to spikes in the execution time, which could potentially be a performance issue, or a performance improvement that someone might want to know about.
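
These days a CI server will chart this kind of data for you, but if you did want to capture the numbers yourself, a rough sketch of the idea using a JUnit 4 rule might look like the following (the class and CSV file names are just examples):

    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Appends "check name, duration in milliseconds" to a CSV after every check,
    // slowly building up a data set you can scan for spikes.
    public class ExecutionTimeLogger extends TestWatcher {

        private long startMillis;

        @Override
        protected void starting(Description description) {
            startMillis = System.currentTimeMillis();
        }

        @Override
        protected void finished(Description description) {
            long elapsedMillis = System.currentTimeMillis() - startMillis;
            String line = description.getDisplayName() + "," + elapsedMillis + System.lineSeparator();
            try {
                Files.write(Paths.get("execution-times.csv"),
                        line.getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

Each check class would then declare the rule with @Rule public ExecutionTimeLogger timer = new ExecutionTimeLogger(); and the CSV grows with every run.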

Did it work? It did, in that it found two issues. The first issue was actually related to the product. Some of the SQL statements had been refactored, with the aim of improvement, but sadly the opposite was seen. Now sure, on reflection we should have had other ways to find this issue, but we didn't, and reusing an existing artifact allowed me to find it.

The second issue, was caused by me! :D

I refactored some of my Selenium code and updated the version, and well, it didn't go well: the execution time increased by 50%. It turned out I'd written some sloppy code, but the release also had a bug in it, which I was able to find thanks to my spreadsheet informing me.

So I'm not telling you that you should do all your performance testing using the build execution time, that would be ridiculous, especially in 2016, but keep your eye on your build time, and specifically each component of the build; it may be waving a big sign at you, requesting you take a closer look. Plus, most CI tools will visualise this data for you these days.

Assertions

"Expected true, but was false"
"Expected 6, but was 5"

I'm sure, just like me, you've all seen some similar failed assertions. Stop writing them. All the test frameworks I've used now allow you to pass in a message, use it!

So in the above examples, what was expected to be true? What was expected to be 6? A simple contextual message in the assertion can really speed up the debugging.

"I was expecting the number of users to be 6, but it was 5". A simple String.Format can achieve this.

Now, some would argue that the name of the check should provide you with some information on what the assertion relates to. Sure, I've seen that, but at the same time I don't think it does any harm to add a contextual message to the assertion. It has personally saved me a lot of time in debugging failed checks.

Tell me all you know

A common practice I see is getting your Selenium checks to take a screenshot on failure. A nice pattern; the screenshot can be really useful in understanding the problem. But most applications have a lot more to offer you.

Take advantage of the code that the Selenium project offers you. Hook into the event listeners to write things out to the console. Give the robot a voice. "I clicked button <locator>", "I typed 'name' into element <locator>", "I waited X seconds for <element>". These tell the story of the check; again, all this speeds up the debugging process when they fail.
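
A rough sketch of that in Java, using Selenium's EventFiringWebDriver (the listener API has shifted between Selenium versions, and the wrapper class name here is my own, so treat this as illustrative):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
    import org.openqa.selenium.support.events.EventFiringWebDriver;

    public class TalkativeDriver {

        public static WebDriver create() {
            // Wrap the real driver so key events are narrated to the console
            EventFiringWebDriver driver = new EventFiringWebDriver(new ChromeDriver());
            driver.register(new AbstractWebDriverEventListener() {
                @Override
                public void beforeClickOn(WebElement element, WebDriver driver) {
                    System.out.println("I am about to click " + element);
                }

                @Override
                public void afterNavigateTo(String url, WebDriver driver) {
                    System.out.println("I navigated to " + url);
                }

                @Override
                public void onException(Throwable throwable, WebDriver driver) {
                    System.out.println("Something went wrong: " + throwable.getMessage());
                }
            });
            return driver;
        }
    }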

On the application-specific side: does your application have log files? If so, get the failed check to pull them down to a central location, so you can quickly refer to them when debugging, instead of having to go and get them manually. I did such a thing, and a nice trick I added was to only grab a specific window of the log, using the time noted at the start of the check to determine it. That saved a lot of time traversing log files.
Other application-specific things may be useful too, such as the user you were logged in as, the version of the application being checked, and the environment the checks were executed against.
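
As a sketch of the log-window trick (the log path and the assumption that each line starts with an ISO timestamp are mine, not any real API):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.LocalDateTime;
    import java.util.List;
    import java.util.stream.Collectors;

    public class LogGrabber {

        // Keep only the lines logged after the check started, assuming each line
        // begins with a timestamp such as "2016-10-12T14:05:31 ..."
        public static List<String> linesSince(Path logFile, LocalDateTime checkStart) throws IOException {
            return Files.readAllLines(logFile).stream()
                    .filter(line -> line.length() >= 19)
                    .filter(line -> {
                        try {
                            return LocalDateTime.parse(line.substring(0, 19)).isAfter(checkStart);
                        } catch (Exception e) {
                            return false; // stack traces and wrapped lines have no timestamp
                        }
                    })
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) throws IOException {
            LocalDateTime checkStart = LocalDateTime.now().minusMinutes(2); // noted at the start of the check
            linesSince(Paths.get("logs/app.log"), checkStart).forEach(System.out::println);
        }
    }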

Why does nobody love me?

"I never get an attention, I'm lonely". Alright, I agree that's a weird thing for your automation to tell you, but what I'm getting at is, when was the last time this check was changed or had its value reviewed.

I've done it: I've had checks that lasted the whole duration of my employment. I never looked at them, they were green, all gravy. Michael Bolton wrote a nice piece on green. However, this doesn't mean those checks were returning me any value. Having not read them for so long, I probably couldn't even have told you what they were checking.
So get the automation to tell you. You could put a date stamp on each check, then write a simple script to read over them and flag any that are more than X days/weeks old. Or you could use your version control tool to see when the last commit was.
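
A crude sketch of such a script in Java, using file modification times (the path and age threshold are made up, and commit history, which is what I ended up using, is a better signal, but it shows the idea):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.stream.Stream;

    public class StaleCheckFinder {

        public static void main(String[] args) throws IOException {
            Instant cutoff = Instant.now().minus(60, ChronoUnit.DAYS);
            // Walk the test sources and flag any check file untouched for 60+ days
            try (Stream<Path> paths = Files.walk(Paths.get("src/test/java"))) {
                paths.filter(path -> path.toString().endsWith("Test.java"))
                     .filter(path -> lastTouched(path).isBefore(cutoff))
                     .forEach(path -> System.out.println("Worth a review: " + path));
            }
        }

        private static Instant lastTouched(Path path) {
            try {
                return Files.getLastModifiedTime(path).toInstant();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }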

The point being, we should only have automated checks that are returning us value, and in my opinion, for that to be the case, we should be regularly reviewing them so we understand their value. This could just be a nice way of letting them help us with that process.

In the context I used this in, I opted for the source control approach in the end. I never ended up deleting any checks, however I did extend some to check more than they originally did. The best thing I got out of doing this, though, was the regular review. When discussing risks on the project, all the checks were fresh in my mind, so I was able to mitigate some risks because I knew we had some coverage from the checks, allowing me to plan my additional testing accordingly.

Conclusion

Think about what else your automated checks could be telling you, think about the data they produce that could be really useful in guiding your testing.

So there we have it, sorry it was a bit long. I hope it was an interesting read.

If you've extended your automation to tell you more than just pass and fail, I would be keen to hear about it. I may write some more examples up in the future, but these were the main four that initially came to mind.

Webinar Follow-up: New Testing Battlefields

I recently had the privilege of taking part in a webinar on the topic of ‘New Testing Battlefields’, which in the context of this webinar and post means Mobile and IoT. The webinar was arranged by Telerik. I was joined by three other testing minds:

  • Jim Holmes – Jim was our host, but also active in the discussions, being a tester himself and currently doing some interesting work in the automotive industry.
  • Daniel Knott – Daniel is a tester mostly working on Android over at Xing. He is also the author of ‘Hands-On Mobile App Testing’.
  • Iliyan Panchev – An ex-tester, who is now Program Manager for Test Studio at Progress.

I took some notes during the webinar, which I’m simply going to expand on during this post. If you want to watch the webinar before reading the rest, a recording is available over on YouTube.

Interfaces

I spoke a lot during the webinar about how, in my opinion, most mobile applications are just interfaces to the main system, that system being the backend, behind all the APIs. This, again IMO, is the product, not the mobile app. In some cases the APIs could be viewed as being the product. But the point I was trying to make is that the best apps I've used and worked on are those where the front end is as dumb as possible, keeping the majority, if not all, of the business logic in the backend. As we will talk about later in the post, this also makes testing significantly easier, especially when looking to add some automated checks into your testing approach.
In such a fast-moving industry this also allows you to stay ahead of the competition and keep up to speed with all the latest trends in UX, as you can redesign the app without having to focus as much on the business logic.

All companies are software companies

The theme at Davos 2016 was “The Fourth Industrial Revolution”, referring to the advances of ‘economy-changing’ technologies. Unfortunately, I cannot find the post, but I recall hearing an interview with a CEO saying that all companies are now software companies; it appears that it’s software that is giving companies their edge these days. With this in mind, I think we’re at the beginning of this boom, and the range of interfaces and applications of this technology we are going to be testing is mind-boggling. I personally embrace technology, so I can’t wait!

Internet of Shit

I was thinking about this during the webinar, and Daniel had the courage to bring it up, so I’m just adding a link here. If you haven’t seen this Twitter account, it’s brilliant, it’s hilarious, it’s also terrifying!!!

Mobile, it’s personal

We mentioned many user aspects during the webinar, mostly focusing on how a mobile device is personal. Firstly, users configure their devices any which way they like; not such a huge problem on iOS, but it can be very problematic on Android. Some system settings, such as fonts, can actually change your app, amongst many other things.

We also mentioned speed a fair amount, and it’s important. People are normally doing 10 things on their phone. Reading the news, whilst sending a tweet, whilst WhatsApping Dave and checking social media. So when they switch to your app, it needs to work, immediately, or they will leave. In the context of an app where there are multiple alternatives you can download, that’s exactly what they will do: delete your app and download another one!

Interruptions: we also explored how other apps, and the fact that it’s a mobile phone, can impact your testing. A user receives a call whilst using your app, what does your app do? They get a notification from another app and tap it; when your app resumes, what does it do? When the user’s connection drops and recovers, how does your app behave?

We had some really interesting conversations around the above, but they were all in the context of testing mobile, and the shortcomings of most automation tools. However, this is OK IMO. As mentioned during the webinar, I love the quote from Dhanasekar ~ “It’s a sin to test a mobile app at your desk”, which I’ve adapted to “It’s a sin to only test a mobile app at your desk”. We have to get out there when testing our apps, out into a real environment. We need to test them on a real phone, with real other apps and a SIM card so calls can be received. This is how a user is going to use your app. Simulating all this is difficult, but more importantly very time consuming, time which most teams simply do not have.

Tools

We hit on a few tools during the webinar, but my conclusion on the majority of mobile tools aimed at testing/checking is that they are very immature. This isn’t surprising; the platforms themselves are immature. For example, iOS was only released in 2007, making it only 9 years old. It’s been in a constant state of change ever since, with big architecture changes in most major releases, due to the very context of mobile and mobile hardware. It’s evolving at such a rate, change is inevitable. So this means the tool vendors are always playing catch-up. There is hope though: tools from the platform vendors themselves (XCUITest, Espresso) have become significantly better in the last year, and I hope this trend continues.

Proxies

I mentioned the importance of testing with proxies several times during the webinar, making a point of repeating it. It’s important. I find it virtually impossible to test a mobile application without using a proxy. The reason being, I need to see what data the app is getting. I need to see what data the app is sending. I need to see what APIs the app is calling, and when. A proxy allows me to see this information. It’s the most important tool when testing mobile/IoT, IMO. Also, from a testing context, it allows you to test multiple scenarios with ease, such as status codes and different lengths of data, as you can alter the requests leaving and the responses arriving on your device.
If you haven’t tested using a proxy before, please try one! I use Charles Proxy on my Mac, but Fiddler is also a great option for Windows.

Twitter Driven Testing

I first heard about Twitter Driven Testing from the Panda, Pradeep Soundararajan. He spoke about how he was testing a public-facing website, and turned to Twitter to read what users were saying about his product. Of course, as with most things, there were positive and negative comments, but all this information turned out to be a great source of test ideas for Pradeep. This is something I’ve continued to do, however I’ve expanded beyond Twitter now, and look on forums and Facebook. But as we are talking mobile, I also take advantage of the app stores; the reviews people go to the effort of leaving there are gold for a tester, or someone looking to do some testing.

Doing some basic searches on social media for your product, to see what people are saying about it, could lead to some very interesting testing.

Data Builder Pattern

Someone asked a question during the webinar about how to manage test data. I mentioned this is a place where I rely heavily on automation. I use a pattern called the Test Data Builder Pattern; I’ve blogged about this already, which you can read here.
In addition to that post, I suggested adding a common interface to your data creation code, such as an HTTP API. This would allow you to take advantage of this code from many places. Your automated checks could call it, and you could use a tool like Postman to call it whilst testing, meaning you don’t have to keep repeating the code that creates data.
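
For anyone who hasn't met the pattern, a tiny hypothetical sketch in Java looks something like this: sensible defaults, with each check overriding only the fields it cares about (the domain object and field names are invented for illustration).

    // Hypothetical domain object and builder, just to illustrate the pattern
    public class TestUsers {

        public static class User {
            public final String name;
            public final String email;

            public User(String name, String email) {
                this.name = name;
                this.email = email;
            }
        }

        public static class UserBuilder {
            // Sensible defaults, so most checks need to specify nothing
            private String name = "Default Dave";
            private String email = "default.dave@example.com";

            public UserBuilder withName(String name) {
                this.name = name;
                return this;
            }

            public UserBuilder withEmail(String email) {
                this.email = email;
                return this;
            }

            public User build() {
                return new User(name, email);
            }
        }
    }

A check then creates exactly the data it needs, e.g. new TestUsers.UserBuilder().withName("Admin Alice").build(), and if that builder sits behind an HTTP endpoint, your automated checks and a tool like Postman can both reuse it.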

Modelling

The final thing I made a note of was my advice to model your system. Initially a high-level model: just a box for interfaces, a box for APIs and a box for databases. This allows you to see the system boundaries with ease. These boundaries can help identify where mocking could be introduced to assist with testing, but also show where data is moving in the system. I encourage everyone to have such a model; they are a fantastic tool to assist conversations about the product.

So if you got this far, congratulations, I hope you enjoyed.

Software Testers Clinic - A Mentor Experience Report

I recently attended the first Software Testers Clinic, an initiative created by Mark Winteringham and Dan Ashby. You can read more about the idea on their website.

I thoroughly enjoyed the evening, especially the second half, where attendees were encouraged to actually do some testing. Now, as the sessions are aimed at people new to testing as well as people looking to expand their testing knowledge, the attendees are a combination of students and mentors. I was attending as a mentor. So the testing exercise was arranged so that 3 students were paired with a mentor. My first student was Demetra Cucueanu, a budding new tester, proving to all that it's never too late to try a new career. Demetra is actively seeking a junior testing position. The second was Bhagya Mudiyanselage and the third was Joe McGuinness.

This post is a response to the experience report created by Bhagya, which you can read here; an attempt to explain the approach I took with the students as a mentor.

The challenge set to us was simply to test http://www.drawastickman.com. We immediately asked them how long we had, and were told about 30 minutes.

So before we got started, I asked them a simple question: why are we testing this? None of us knew, so we asked; the response was to learn more about testing. Drawastickman was simply a vehicle to aid that. We then briefly discussed the importance of knowing why we are testing something.

So we got going by me asking "What do you know about drawastickman.com?". It turns out we/they knew very little; the obvious is all we knew, "it's a website". So I introduced them to the concept of a Scouting/Recon session. I was first introduced to the concept in the book Explore It! by Elisabeth Hendrickson, a fantastic testing book if you haven't read it yet. The session is intended to help us set the context a little, specifically around the application. Ideally you would spend a full session doing this, so about 90 minutes to 2 hours.

I believe it's very hard to test something you know nothing about, and I mean nothing; all we had was a url. So I encouraged the students to spend 5 minutes just exploring the application, looking for things they could identify with. For example, immediately after looking at the url they realised it's a game. Joe immediately saw a link for a native mobile version of the game, which could be an interesting avenue to explore. Bhagya went straight into the game, to see if she could get a basic understanding of how it worked. Demetra explored the site's navigation and discovered numerous forms that she thought could be interesting to test. Plus a whole lot more.

After 5 minutes, I asked them to explain to each other what they had discovered about drawastickman.com, and I collated the list on a nearby whiteboard, I loves me a whiteboard. I then recapped with them how quickly we were able to get a better understanding of this application. We went from a url to a 10-15 item list of things we actually knew about the website.

I then suggested they pick something from their list or the main list to explore further; something that was of interest to them. I explained how this approach now sets the focus of their testing, it frames it. I introduced the idea of charters to them, and again repeated my encouragement for them to read Explore It! We could have continued to explore and just shallowly test things as we came across them, however I was keen to see them attempt to test an area deeply. So they all selected an area to explore further. The reason I was keen on this was to relate it to testing in their jobs, where I assumed they would need to test deeply.

I shadowed them for about 5 minutes, watching them all test, and a pattern emerged: no notes, or very little. So after another 5 minutes of testing, I stopped them to discuss what they had learnt after just 15 minutes of testing. They had all learnt a lot, however my earlier observation had come to fruition: they were eagerly telling me about what they had found, but the majority of it was from memory. There were even a few confirmations of this, such as "there was this one thing, but I've forgotten". So we had a brief discussion about the importance of taking notes whilst testing, and how they can help guide future testing, but also aid you in telling the story of the testing you have done thus far.

They continued to test, at which point I decided to take the approach of chatting to them individually, to see how they were finding the approach of using charters. This gave me the opportunity to offer some one-to-one feedback and directly suggest some resources to them based on what they had done or were doing. It also gave them the opportunity to quiz me.

One of the topics that came up was: when do we move on to the next charter? Time was short, so we had a brief discussion about it. We discussed the idea of feeling like you've found enough information, or that you've exhausted all the ideas you had. This allowed us a brief moment to discuss the relationship between charters, test ideas and actual tests. We then very briefly hit on the idea of heuristics and some popular mnemonics, and I suggested some resources for them to explore, including Karen Johnson's card deck and Test Insane's MindMaps. Also, with drawastickman being a public application, I suggested exploring social media for comments on the game, as well as reviews in the app stores. These can be a fantastic source of test ideas for public-facing applications.

The final discussion we had was specifically related to testing drawastickman.com. If you're not familiar with it, you can draw a character with the touchpad or mouse, and the site will bring it to life, depending on what you draw. The discussion was about reproducing bugs: how could we reproduce issues we observed, seeing as re-drawing the exact same stickman would be tricky? So we discussed some ideas, such as recording the screen and using a mouse cursor recorder. Highlighting the use of tools.

That's pretty much that. I feel in this instance that my mentoring/coaching went rather well. I could have perhaps let them test a bit longer than I did, however all the students seemed really engaged in learning more about charters and sessions. I had a lengthy discussion with Joe specifically about using sessions in the workplace and encouraged him to google Session Based Test Management.

I tried my best to be the facilitator of discussions, instead of telling them what to do, allowing them to ask me why I was suggesting X. Such an approach also allows me to gather more information from them, which may highlight a different approach I could take. Without the discussion, though, it isn't really mentoring/coaching, it's telling.

As a sole tester at the moment, it was great to be able to mentor and coach some testers, while they actually tested something. I really enjoyed the event.

I would encourage anyone in the London area to check out a future Testers Clinic, regardless of your testing level, as you can participate as a student or mentor, both full of potential learnings. The link to their site is at the start of the post and details of the next meetup are on their home page.

A Four Week Approach to Creating Abstracts

I'm often asked how I go about creating abstracts, and it's actually the theme of one of my workshops at LetsTest this year with Martin Hynie. So I thought I would share a timeline with you of how I tend to do it.

Most CFPs give you around two months to submit your abstracts, so there is plenty of time to come up with and formulate those awesome ideas of yours. I tend to take a four-week approach.

Week 1
As most of you know, I love my whiteboard. But if you don't have one, there are many other mediums you could use. So what I do in week one is I create a mind map of potential ideas for a talk, workshop or tutorial. I spend no more than 15 minutes on this initially, as I'm looking for things that are on my mind right now. These ideas could be anything, e.g:

  • A blogpost/podcast/video you have seen recently, that you could expand on or argue against.
  • An experience in work, that you feel could make a good story.
  • Something you have been blogging about that could be turned into a talk.
  • Something related to a book you have been reading or read recently.
Then after 15 minutes I stop. For the remainder of that week, new ideas and experiences will come to me, so I add those to the mind map. This is one of the reasons why it's important to always carry something to take notes on, to capture these ideas: a small notebook or your phone.

Week 2
Now at this point I have a mind map containing some ideas. It's time to try and elaborate on some of them. So I take each node one by one and spend no more than 10 minutes on each, elaborating on it and noting key bits of information. For example, if it was an experience at work, I would write down the key people, the problem, quotes, the timeline of events and my learnings.

Once I've done this for each node, I stop and keep adding to it over the next few days when I remember new things. 

Now at this stage, I have a visualisation of my potential talks, and some may stand out more than others. Perhaps the one with more child nodes means you have more ideas about that, it resonates with you more than the others. Perhaps you can spot a nice theme or pattern in one, that you feel would structure a good talk. 

For the remainder of week 2, I take my top three ideas and elaborate on those even further. So to continue the example above, what is it about the key people that is important? What role do they play? Are they a positive or negative part of the story, or both? What is the problem? How did you identify the problem in the first place? What was this problem impacting? How did you know the problem had been solved? I continue to do this as above over the remainder of the week, adding to it when I remember new things, or have new ideas.

So at the end of week two, we have three ideas that we have expanded two levels deep now. 

Week 3
So in week three it's time to try and create some abstracts. Take our expanded ideas and try and create a snippet of your story, to entice reviewers to it. In my opinion this is one of the hardest parts, especially if, like me, the art of writing doesn't come naturally to you.

I tend to create a document in Google Drive, reasons for this later. I take a picture of my mindmap (or a screenshot if you did it electronically) and add that to the top of my document for ease of reference. I start my abstract by spending no more than 5 minutes trying to think of some good titles, and I note them all down, no matter how crazy some are. Then it's time to write that gripping, sock-knocking-off, enticing abstract.

Again my time-boxing theme continues, it's how I tend to work. I spend no more than 60 minutes writing my first draft. I take the parent node and all its children and try to translate them into some words to explain why it was added to my map. So to continue my example, I may write something like: "this story contains many characters; during my story I will introduce them and their importance in this story, expanding on how their actions impacted my approach to solving this problem, and how their characteristics led to me changing my interactions with them". Something like this.

Once all the nodes are done, we should hopefully have a collection of relevant paragraphs and sentences that form the core of our story; it's now time to add some stitching to turn them into one congruent abstract. Repeat the process for all three.

Now we are three weeks in at this point, that's a long time, that's a lot of thinking. You're probably getting a stronger feeling towards one of the abstracts, or maybe two of them. So I tend to spend some extra time on those to make sure I've included all I can think of in my first draft.

Week 4
This is probably the most important week. We have invested a lot of time by this point, we believe we have some fantastic talks to give, and you believing in it is the most important thing. However, so far it's just you, your ideas, your thoughts on what is interesting. So it's time to get some reviews. This is why I tend to use Google Drive, as it's easy to share and track comments.

The testing community is a very friendly space, most of the time at least, but especially when I am around :D. There are lots of people willing to help other people out. But what exactly is it you are looking to have reviewed?

The least important thing, in my opinion, is your story or the theme of your talk. That may surprise some people, but for you to have got this far with it means you care about it, you believe it's interesting. That doesn't mean you shouldn't ask for feedback on it, or change it based on the feedback offered, but for me, it's not the main thing I am after.

The most important thing is the words. Spelling and grammar are of course up there. After that though, it's about its enticement. Is it congruent? Does it pull your reviewer in? Would they attend your talk because of the abstract, not because it's you? Get their feedback on those things, then tweak and amend accordingly.

Also, read it several times yourself, with sufficient time in between, like a day or so. As I mentioned already, I find time in between allows my brain to give me all it has to offer. 

At the end of week four? Well, you pick one of those titles and you get it submitted, of course, then patiently, but excitedly, wait for the decision. Knowing that you gave it all you could, your best efforts.

Additional notes

Now of course you could do this a lot quicker than four weeks, however I find the time in between the weeks really allows my brain to process all the ideas and inform me of all it has to offer.

Also you don't have to wait until week 4 to get some external input, it could be useful to ask some close peers or work colleagues if they have something they believe you could talk about. 

Once you've done this process multiple times, you will also have a backlog of potential talk ideas. I tend to store these in one big central map, which in turn could speed up weeks one and two. However, I try not to go straight to this map, unless I hear about a CFP for something that seems interesting too late to adopt the four-week approach.

Why Was This Check Created?

As I've been thinking more about Checking and Testing, and how to get them working harmoniously, I'm wondering if we are missing something from our checks. This post will focus on automated checks, but I believe the same applies to non-automated checks.

Some teams have become really adept at writing automated checks. They are following good practices. Classes, methods and objects are all well named, and it's obvious what they do. Assertions are clear, and have a well-structured message for when they fail. There are good layers of abstraction and code reuse. They are performant, execute fast and are designed to reduce flakiness. It all sounds rather good.

But why is that well-designed, well-written, easy-to-read check there? Why does it exist? Why was this check written, over all the other possible checks? I can read the check, it's well written as mentioned, I can clearly see what it is checking, but that is all I have. How do I know that the steps and the assertion(s) there match the initial intention for it? What was it about this check, this system behaviour, that was worthy of having an automated check created for it? I don't know that.

Why should we care about the why? I believe the results of automated checks are impacting the way we test. I believe this is especially true in an environment that has adopted continuous integration. Before you test, and by test here I mean testing once the developer believes she is "code complete", all the automated checks are run, and the build is either red or green. A generalisation for now, as I am still giving this more thought, but when the build is red, we tend to immediately focus on that, by chasing green. We will then usually read over the other checks in that area to see what else is covered, and then design and execute some tests to see what else we can learn. Then we return to the new piece of work. When the build is green, we tend to focus our testing efforts on and around the new piece of work. As I said, it's a generalisation for now, I know I/we don't always do this, but hopefully most can relate.

I believe we aren't always aware of how much trust we put in our automated checks, and all that trust without always knowing why a check exists or its importance. We all have a lot of knowledge about our systems, and a lot of that knowledge is interwoven; this is why we create automated checks, because we can't remember everything. We need to make some of this tacit knowledge explicit. It's also why we create mindmaps and checklists, to prompt us to remember things. To consider things.

If the why was also included, I feel it would aid us with test design. It would also aid us when reviewing our automated checks, when deciding to amend some, or delete some. Reviewing your checks and questioning their value is something I encourage teams to do regularly. Just because a check is green doesn't mean it helped you in any way, doesn't mean it added any value to your testing efforts. Going back to test design, let's say a check failed that had the following why message somewhere: "This check was created as we had a major issue in live where the system did X and led to Y downtime". If I saw such a failed check, I believe I would probably do more testing in that area than if that message wasn't there. If I was reviewing my checks and saw such a message, I would be able to assess its value a lot more easily and faster.

Here are multiple ways we could add the why in.

  1. Code comment - No doubt a lot of you have turned your noses up reading that. But I'm not talking about using comments to explain what the code does; as stated, we can read that. I'm talking about a few lines above a check, explaining why it was created (see the sketch after this list).
  2. BDD tool lovers - While I discourage people from using BDD tools to write automated checks, especially in places that aren't practising BDD, I know many of you are using such tools. So you could add the why to the scenario section of the feature file.
  3. Commit message - Perhaps we ensure we write excellent commit messages when new checks are created, clearly indicating the why there. We could then look at the commit history of the file. This has flaws if checks are moved around a lot during refactoring.
  4. External document - Or perhaps we could store the why in a document somewhere, such as a mindmap with IDs for the checks.
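
As a sketch of option 1 in Java (the incident, the check and the little OrderService stand-in are all invented for illustration):

    import static org.junit.Assert.assertEquals;

    import java.util.HashSet;
    import java.util.Set;

    import org.junit.Test;

    public class OrderDeduplicationCheck {

        // WHY: created after a live incident where double-submitted orders charged
        // customers twice and led to several hours of downtime. If this fails,
        // test more widely around order submission before simply chasing green.
        @Test
        public void duplicateOrderSubmissionsAreCollapsed() {
            OrderService service = new OrderService();
            service.submit("order-123");
            service.submit("order-123"); // the same order submitted twice

            assertEquals("Expected duplicate submissions to result in a single order",
                    1, service.orderCount());
        }

        // Minimal stand-in for the real system under test, to keep the sketch self-contained
        static class OrderService {
            private final Set<String> orders = new HashSet<>();

            void submit(String orderId) {
                orders.add(orderId);
            }

            int orderCount() {
                return orders.size();
            }
        }
    }
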
Even though my thoughts are early days, I don't believe adding the why is a huge deal; the fact you are creating the check means you already know why, it's just not there later in the check's life. Or available for new team members to read. Or anyone. But I do believe it could play a significant part in assisting our testing efforts, especially in check reviews and test design.

These are some early thoughts, just had an urge to write something after several conversations at Euro Testing Conference on this subject. Would love to hear some of your thoughts if you have the time to engage. 

Thanks.