Whiteboard Testing Has Arrived

So, a while back, whilst researching a testing topic, I was searching YouTube for relevant videos. I found myself in this circle of thinking I had found something, then realising it was awful; finding something different, then realising it was an hour long. I was appalled at the quality of testing-related videos on YouTube. The only good content I was finding was conference talks, but again, all lengthy.

This nagged me for a while. I started thinking: I am sure a lot of people must turn to YouTube like I did, so they are being faced with the same poor videos I was finding. This isn't good for our craft. Especially if people are newer or less aware about testing than I am, they may take some of these videos as gospel.

So I decided to do something about it. I turned to my trusty whiteboard in the office and started scribbling, mapping out some ideas. Then it clicked: what if I recorded some videos in front of this board, using the board as "living" slides, so to speak. I could draw up models and critique them, I could write out key bullet points and add to them during the talk, I could pretty much do anything.

So there it was, I had created Whiteboard Testing. You can check out the YouTube channel, and I also created a Twitter account for it, @WhiteboardTest. There are currently two videos uploaded: one is about my plans for the channel, and the second is about a model I drew in relation to Regression Testing, the FART model. You can find them on the channel page.

I explain this in the introduction to the Whiteboard Testing video, but I want to repeat it. This channel is not for me, it's not about me. The goal for the channel is to fill it with short and relevant information on testing. I am talking videos that are 5-10 minutes long (yes I know, the regression one was 12 minutes, I will get better :D). But I cannot do this alone, I need the help of the wider community.

So if you have the following, which I'm sure you all do, consider creating a video, send it to me, and we can look at getting it on the Whiteboard Testing channel.

  • A whiteboard, flip chart or that sticky stuff you put on the wall.
  • A video camera or a smartphone.
  • A tripod, or a friend to hold the camera. 
  • Something to talk about.
These videos are not high production, look at the intro video to work that out. I'm not looking for fantastic visual effects, I'm looking for awesome content that is going to help drive our craft forward.

So please do help me spread awareness of Whiteboard Testing, and I look forward to seeing some of your videos on there in the future.

You can watch the intro video here


Tackling Android Deployment Using ADB and Appium

Problem

It's taking me about 10 minutes to update all my Android devices each build. Sometimes I have 3-4 builds a day, so that's 30 minutes I am not testing. Also, installing the app is really boring. Then I have to log in to the app on all the devices.

This was the problem I set myself. I solved it multiple ways in the end, a journey I felt worthy of a write-up.

State of play

We are currently putting new .apk files in Dropbox/Google Drive. Then I either install that apk via the Android Debug Bridge (ADB), or I use Dropbox/Google Drive on the device and install it straight from there. Then I proceed to log in to the app on all the devices. As mentioned above, it takes me about 10-15 minutes to do all the devices, and it's rather boring.

So I thought, there must be an easier way to do this. So I explored, and created multiple ways, so here goes.

Bash Command

My thinking went a little like this: if I can use ADB to install the apk via the terminal, surely I could connect multiple devices, create a loop, and install on many at once? Seems sensible. Turns out you can. So I bought myself a 4 port USB hub and turned to my best friend, Google.

After a few minutes googling, I stumbled across this command
adb devices | tail -n +2 | cut -sf 1 | xargs -I {} adb -s {} install <Path to apk>

In short, and I am no bash expert, this command takes the response from "adb devices", trims it down to just the device IDs, then runs "adb -s <device ID> install <Path to apk>" against each of them.

Fantastic, I could now install the app on as many devices as I could connect to the laptop. 

But what about opening the app and logging me in? Well, I knew I could start the app using ADB; I had seen such a command appear in Appium logs when I was using it in the past. So again, a quick google and I had the appropriate command:

adb shell am start -n com.package.name/com.package.name.ActivityName

Of course replace the package name and activity with ones for your app. If you don't know them, ask your Android developer, they will know for sure. 
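Putting the two together, something along these lines (an untested sketch, using the same placeholders as above) should install and then launch the app on every connected device:

adb devices | tail -n +2 | cut -sf 1 | xargs -I {} sh -c 'adb -s {} install <Path to apk> && adb -s {} shell am start -n com.package.name/com.package.name.ActivityName'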

So we are on a roll now, 15 minutes in, and I can now install and start the app on as many devices as I can connect. Excellent. 

So what about logging in? Well, unfortunately that isn't possible via ADB, unless I was able to write some code to interact with the UI, compile it, and send it to UIAutomator. Hmmmm, sounds very much like what Appium does.

Appium

So, I've used Appium a lot in the past, not so much these days as I don't believe UI checks are worth the investment on mobile; the tools, including Appium, just aren't mature enough yet. That isn't the tools' fault, the platforms themselves are only around 5 years old. Think how long it took for browsers to be stable enough for automation? Some would argue they still aren't.

But I knew that, in theory, what I wanted to do was perfectly feasible in Appium. So I set about making it happen.

Here is an insight into my approach for this. I knew what I wanted to do, so I wrote that out as code comments, like so:
  • Find out how many Android devices are connected.
  • Get their IDs.
  • Start Appium.
  • Run a script that installs the app and starts it on all devices.
  • Get Appium to log me in.

I knew that ADB would tell me how many devices were connected, so I needed a way of executing a bash command from Java. Again, hello Google. I discovered multiple ways of doing it, and I could then read the output of the command. Then, as per the ADB command from earlier, I needed to trim the response so that I was just left with the device IDs. 10 minutes in, first two steps done.
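For anyone curious, a rough sketch of those first two steps in Java might look something like this (the parsing mirrors the earlier bash command; the class and method names are just illustrative):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class ConnectedDevices {

    // Run "adb devices" and return the ID of every connected device
    public static List<String> getDeviceIds() throws Exception {
        List<String> deviceIds = new ArrayList<>();
        Process process = new ProcessBuilder("adb", "devices").start();
        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            // Device lines look like "<id>\tdevice"; this skips the header and blank lines
            if (line.endsWith("\tdevice")) {
                deviceIds.add(line.split("\t")[0]);
            }
        }
        process.waitFor();
        return deviceIds;
    }
}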

So, as Appium is based on the same API design as WebDriver, I knew how to create an AndroidDriver and pass in the device ID and the path to my .apk. I also knew that Appium would attempt to launch the app for me.

I also knew that I needed to tell Appium which activity to wait for; again, if you are not familiar with activities in Android, I suggest reading about them. Then I just needed to automate the login screens and job done.
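Pulling those pieces together, a sketch per device might look like the following (using the Appium Java-Client; the path, package and activity names are placeholders for your own app):

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class AppDeployer {

    // Install and launch the app on a single device via Appium
    public static AndroidDriver startAppOn(String deviceId) throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "Android");
        capabilities.setCapability("deviceName", deviceId);           // the ID from "adb devices"
        capabilities.setCapability("app", "/path/to/your.apk");       // Appium installs and launches this
        capabilities.setCapability("appWaitActivity", "com.package.name.ActivityName"); // activity to wait for
        // Point this at your running Appium server (see the workaround below for multiple devices)
        return new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
    }
}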

I had been going about 30 minutes now on my Appium implementation and all was going really well, then things took a turn. It turns out each Appium server can only have one active session, meaning it can only interact with one Android device. I am not sure why this is, potentially something to do with ADB connections. This was based on me actually trying it, it failing, googling a bit, and seeing others complain of the same problem.

Appium Workaround

So I needed to find a workaround. Some of the responses on Appium's Discuss site suggested a quick solution: start multiple Appium servers, essentially one per device. Well, as above, I knew I could send commands via Java, so surely I could just do the same for starting Appium? Turns out, you can. However, those processes appeared not to have access to all the environment variables set on my machine, specifically ANDROID_HOME, which was causing issues for Appium. Sigh.

We are about 45 minutes in at this point, and I came across an excellent code example by Prithivi on Appium's Discuss site. This appeared to do something different; I am not 100% sure what the difference is. One observation is that I can now see the Appium console within IntelliJ, so I am guessing it runs the command within the context of my application. Not sure, and right now, it's not that important to me.

But it worked! I could now programmatically start Appium servers, one per device. In order to do this, you need to set a different port for each server. So, looking back over what I had so far, I now had two parameters per device: the device ID and a port. So I decided to create a device object incorporating those two values, so I could pass these objects around.

Update: the code I originally shared had a copy and paste error from an old project of mine. I have since updated the solution to Appium Java-Client 3.2, which comes with AppiumServiceBuilder support; the above solution is now superseded and far easier using this. I will update the GitHub repo.
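For anyone wanting to try that route, here is a sketch of starting a server per device with the AppiumServiceBuilder (assuming Appium Java-Client 3.2 or later):

import io.appium.java_client.service.local.AppiumDriverLocalService;
import io.appium.java_client.service.local.AppiumServiceBuilder;

public class AppiumServers {

    // Start a local Appium server on the given port - one of these per device
    public static AppiumDriverLocalService startServer(int port) {
        AppiumDriverLocalService service = new AppiumServiceBuilder()
                .usingPort(port)
                .build();
        service.start();
        return service;
    }
}

Each service exposes getUrl(), which can be passed into the AndroidDriver constructor instead of a hardcoded address.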

Improvements

Right now this tool is a huge leap for me, and already, a week in, it's saving me a substantial amount of time. However, I can see some improvements in the pipeline already.
  • Can I thread/parallelise the Appium calls, to make it run on all the devices in one go? Currently it is sequential.
  • The apk file is currently hardcoded; seeing as we have a naming convention for them, could I pull the latest .apk from Google Drive to make it dynamic?

Conclusion

So all in all, this was about two hours of work (the additional time comes from creating the login code), but I do now have a tool where I can deploy to and log in on multiple devices.

I uploaded the basic code I "stole" and wrote to a GitHub repo. I don't proclaim to be an excellent programmer, so if you like this and can improve it, please do, I would love to see it. 

Insights

I posted this blog for a few reasons. The first being that I think this is an awesome tool. It's a fantastic example of what I (and others) try to tell people about automation: it's not about trying to automate checks all the time; identify a problem and attempt to solve it.

Secondly, it's OK to steal other people's code. Just don't try to pass it off as yours on a public repo/blog; thank the other people. But off the back of that, take advantage of the knowledge that is out there. Google all the things. Look at the issue and discussion pages for the tool you are using, especially with open source tools.

Finally, I wanted to reflect and share how I approach such problems, hopefully some of that came across.

Thanks for reading, I'd be very keen to hear from others who use this, or have other solutions.

Yousaf Nabi - Ermahgerd TestBash New York City Y'all

So, about a month ago, I ran a competition to win one of five tickets to TestBashNY. Part of the deal of entering the competition was that, if you won, you would agree to write a short post/article about your day at TestBashNY.

Here is Yousaf Nabi talking about his TestBashNY experience. You can follow Yousaf's future journey on twitter here

5am Wednesday morning and my alarm rouses me from my blissful slumber. I fumble around for the snooze button and nuzzle my head back into my pillow. 10 minutes later and my eyes open again to an Englishman’s morning vice, a steaming cup of tea graces my bedside cabinet. (Thank you Helga!)

Those who know me will know that 5am has been my bedtime more often than it's been my wake up call, so to arise at this ungodly hour, it must be something awesome. Nope, it's not a car show (for once!), it's TestBash. For those who aren't aware, TestBash is an unsurprisingly awesome testing conference usually held in Brighton, but for the first time it crossed the Atlantic to be held in New York City.

An Uber and train ride later, I find myself on board AA2201 bound for NYC. Now we all know about the time differences when going across the pond, and the key advice from all those I spoke to was: sleep during the flight so you're refreshed when you arrive. Sleep, baaaah humbug. I've got a test suite to build for a new Node-based API that is currently under development. A true tester never switches off, after all it's a mindset not a job! (More on my fumblings through the world of greenfield projects in an upcoming blog post.)

So with my first day free, I spent it wandering the streets of New York in 21 degree heat (there goes the winter clothing!) and made some observations which I feel it pertinent to share, if only to give a little insight into my brain.
  1. People don't queue, or get annoyed when people push to the front. How???
  2. No one cares about the police. People just cross the road in front of cop cars with sirens on, totally nonchalant about it too.
  3. Pedestrians think they have two wheels since they all walk in the cycle lane. Cyclists don't mind - why so polite? I'd be raging.
  4. They don't seem to understand steak sauce - what the feck is A1 sauce?
  5. There are no kettles in hotel rooms. How is an Englishman meant to have his tea?!
  6. I am jealous of every house in the suburbs as they all have mahoosive garages.
  7. Taco Bell didn't live up to my expectations. (I forgot that I ate Taco Bell at the Arndale in Manc and felt the exact same way.)
  8. Generally everywhere is pretty grotty and dirty. NY certainly scrubs up well in the cinema.

Day 2 – Workshop Day

After a brief walk to Baruch College, I ascended in the elevator to the 14th floor to be greeted by some happy faces wearing Ministry of Testing gear. WOOHOO, I'm in the right place, and OMFG they have tea. Morning sorted, now for some "Test Automation & Continuous Integration at Scale" with Dr. Jess Ingrassellino & Noah Sussman.

    The day kicked off with a backstory of how Jess & Noah came to be where they are, and a disclaimer to state that they are not professional speakers, but the well attended room was very receptive. Everything is better when it’s a bit rough around the edges :)

After a brief dialogue about the pain points of automation testing, it became clear (as we were running out of whiteboard space) that there are a lot of issues out there, which affect software testers of all walks from all industries. The pain is universal, which can only mean that where people have succeeded in building successful automation, we can learn from their achievements to help each of us in our respective jobs.

The later talk by Tanya Kravstov from ROKITT at the main conference would be testament to that. I subsequently fed back to Jess that some of Tanya's high-level points would have made good structuring and discussion points for the test automation portion of the talks.

Noah broke up the discussion with his sand-pile game, which was analogous to system building & change management. It was very interesting, throwing up lots of varied conversations about the various ways of implementation and the risks of each, much like the real world. One of the take-home lessons from that was: if you plan ahead and plan well, and accept that there is an allowable level of loss, you can provide far greater coverage (near full coverage) and have a relatively non-complex system. I do have to admit that I found a defect in the game instruction slides and could not stop myself telling someone. I've never been very good at keeping secrets either =/

Noah then went on to discuss his DevOps experience, and although some of us attending have DevOps departments who already implement Graphite/Grafana, I had never considered how it could be used as a testing aid in CI for providing monitoring and quick feedback loops. Coupled with a Raspberry Pi and a monitor, I now have the initial knowledge to allow me to output metrics directly from the code within my test suites, which even for the purposes of debugging will be so useful in my day-to-day job.

I've jotted down some of my notes below, mainly because it'll help me when I come back to implement it in the new API suite I'm testing. If anyone wants any of the points explained further, please ask me, or indeed Noah S., as I'm sure he would be very happy to talk about his experiences.

    Some command line techniques for log trawling
    • Download your log
      • http://ita.ee.lbl.gov/html/contrib/NASA-HTTP.html
    • Unzip your file
      • gunzip NASA_access_log_Jul95.gz
    • Word count on the log
      • cat NASA_access_log_Jul95 | wc -l
      • wc -l NASA_access_log_Jul95
    • Echo to the command line, all the log lines with 200 in them.
      • grep ' 200 ' NASA_access_log_Jul95
    • Echo to the command line, a count of the log lines with 200 in them.
      • grep ' 200 ' NASA_access_log_Jul95 | wc -l
      • grep -c ' 200 ' NASA_access_log_Jul95  
    Example code to publish data to Graphite
    • Example pseudo-code that will execute your log trawl on the command line
      • echo $(wc -l NASA_access_log_Jul95)
      • echo $(cat NASA_access_log_Jul95 | wc -l)
    • Extend the above by adding a timestamp
      • echo $(cat NASA_access_log_Jul95 | wc -l) $(date +%s)
    • Extend the above by sending it to Graphite as my.nasa.metric
      • echo my.nasa.metric $(cat NASA_access_log_Jul95 | wc -l) $(date +%s) | nc graphite 2003
    Some take-home points from Noah's talks:
      • Piping to | nc graphite we can publish metrics to graphite for consumption.
      • You can do any amount of pre-processing to the data before publishing it to graphite.
    • You can put this all over your test code, and you can limit the extra traffic it will generate later.
    • You can put statsd in between your code & Graphite in order to send only a certain amount of requests to Graphite, and additionally aggregate stats from several sources (ideal for load-balanced systems).
        • You might not need every message, or want every message (think DDOS).
    • Use UDP for messages as it is not stateful, unlike TCP.
    • You don't want your code blocked because it's waiting on an ACK.
      • You can use XARGS to build up a list of commands which can be run concurrently
      • The following example will execute a command as 24 concurrent processes (but it could be your PHPUnit tests etc.)
        • cat NASA_access_log_Jul95 | xargs -P 24
    Regex – if you aren't using it somewhere in testing, then you probably should be. Once you get the hang of it, the possibilities are endless. From testing fields in an API response (dates for example) to grepping your logs to pull out all sorts of information to suit your particular needs. It's a skill not many people have really mastered yet. There are some great tools out there that I use to aid me in my regex. I love them both and use them a lot. Glad to see Noah was also a fan of regex.

      Overall, the scope was huge and there was so much to cover, which was probably unfeasible for a single day workshop, but it was thoroughly enjoyable, informative and useful. Massive thanks to Jess & Noah for taking the time to run the workshop. You guys did great :)

      Day 3. TestBashNY

First off, how freaking cool was it to see the TestBashNY text up on the Gramercy Theatre billboard. I bet Anna & Rosie were properly made up.

      Secondly, never seen so many testers in one place before!

      It was amazing to see such a diverse range of people from all over the world and that they had all taken time to either attend, participate or present.

      I could talk for days about the thought-provoking, engaging and funny talks that everyone put on, but I hear they will be available on the Ministry of Testing Dojo as eye-candy for your pleasure. You lucky people, as they were all brilliant, I would definitely recommend checking them out on your lunch break, while you work, while you poop, wherever!

The whole event showed me that the testing community is in great shape, with many proponents who have inspired me to do the same. It came at the right time in my career, where after 10 years of breaking stuff daily, I was feeling a little lost. Knowing that this bubbling cauldron of knowledge is there at my fingertips is a great help, and I know that I can help add value from all the pain I have endured over my time in testing.

Sometimes it feels like you're the glue that holds it all together, if only to be the gatekeeper. I remember the days I used to be reluctant to take holidays for fear of renegade developers with specification allergies running amok. It makes me laugh whilst reminiscing, but it was hectic at the time and my mouth was rather colourful, to put it nicely.

So with that newfound enthusiasm, I am going to get more involved with my local testing community, bringing testing into the focus of the wider technological talks we hold at Sky Betting & Gaming. I'd love to open an invite for any speakers in the testing community to present, and anyone to attend.

It's been one of the visions of our company to be one of the best digital companies in the UK, and we believe that becoming a technological hub is part of that. Testing hasn't really had much focus in our past meetings, so I am definitely seeking to redress the balance, but I can't do it without your help. Who knows, we may be able to hold a TestBash up int' Yorkshire. Pies 'n' peas at the ready.

      Herman Ching - Caveman first painting

So, about a month ago, I ran a competition to win one of five tickets to TestBashNY. Part of the deal of entering the competition was that, if you won, you would agree to write a short post/article about your day at TestBashNY.

      Here is Herman Ching talking about his TestBashNY experience.

TestBash was great in its mature offering and highly accepting of the seemingly young community of testers. After the conference was over and a few drinks, I felt satisfied that the conference was worthy of my time. I will be attending my first few meetups in New York City soon.

It was my first time at a testing conference, my first time writing anything published on a public blog, and my second time writing this. I walked in, waited, and sat in a chair in a dark theatre. With time master Mark on the mic, I was assured a very prompt schedule.

From collaboration to profanity, it was a wide range of offerings. I could make a few recommendations, but I am going to hold onto those. I learned a few things I will be taking back to my work, but I am not going to share those either. I want to eliminate bias, which I did have a lot of the first time I wrote about my experience at TestBash. Let this conversation set the tone, and hopefully that alone will entice you to review the content that was offered; if not, the mystery I seem to have just created should grow some curiosity.

The talks set a tone I immediately could not relate to, until the plethora of 99-second talks. Someone asked me halfway through, during the break, what I thought of the talks. I did not have an answer. Normally my immediate go-to response was "it was good, I am learning a lot". I kept that aside and instead said "I'm not sure yet, still a lot of content to digest". One could easily recommend just viewing the videos/presentations online once it was over. I had to take something away from this conference that static content could not offer. Time to start talking.

Beyond personal stories from the audience or advice from the speakers, I could say my biggest inspiration came from the concern about the lack of exposure the general community had around testing. Being a father of two, I could relate to Anna, who wanted to create more exposure. Additionally, from Helena's seemingly watercoloured presentation and Selena growing hooks into the community, I want to give back. I am inspired to make a children's book on testing and I hope to make it available to the testing community. This whole conference enticed me to do just that. I can only imagine what it's doing for everyone else.

      Melissa Eaden - TestBashNYC == Mind Nova!!

So, about a month ago, I ran a competition to win one of five tickets to TestBashNY. Part of the deal of entering the competition was that, if you won, you would agree to write a short post/article about your day at TestBashNY.

      The first of those is in! Here is Melissa Eaden talking about her TestBashNY experience. She also decided to join twitter whilst at TestBashNY, so you can now follow her testing journey here.

I could have used mind blown, or mind explosion, but really, it was a day of brain-shaking presentations, of foundational changes to my testing habits and philosophy, of epic, but very personal, proportions. It was like an internal nova happened in my head. I could not look away; I could not look back. I cannot continue to test as I have been testing.

      Even the paragraph above doesn’t really describe what exactly happened to me on November 6th. I attended another conference for testing before TBNYC. I spent three days at that conference and walked away with one page of notes and freebies. Lots of freebies from people that were mostly trying to sell me something. I vowed to never go to another conference like that again.

      My essay to Richard Bradshaw was a last ditch effort to get to this conference. When people asked about it I described the essay, in short, as a story about wanting to have Ministry of Testing’s love child and how going to TestBashNYC was absolutely like giving me an all access pass to my biggest professional/testing crush.

      I am absolutely pregnant with ideas! My head was so full from the conference, from the conversations with people, from my own ideas on how to implement things I learned, I’m not even sure where to start with them.

      All the presenters were wonderful! The conference format allowed for conversations with presenters that normally don’t happen at bigger conferences either because of time constraints or just from having too many people lined up to ask questions.

Everyone was there to learn something. I walked away from that conference with ten pages of notes, and no fewer than five business cards. There were so many testers and even developers there, heading in the right direction, doing the right things and encouraging loud mouths like me to speak up more and do more in the community. I was never discouraged from my opinion or view, but more often than not, I was gently led in another direction, or given a viewpoint I hadn't considered before. Or, even more to my surprise, agreed with, often. I didn't have to fight an uphill battle. It was the greatest example of culture fit happening right before my very eyes.

      The biggest takeaways for me were to follow my fears or my dreams and co-create everything! Collaboration was a big theme throughout the day and into the discussions later that night. I’ve been asked to speak, write and even help with a podcast. I don’t think I would have ever been offered those opportunities at another conference.

      For as long as I am testing, I will be forever a fan of Ministry of Testing and their TestBash conferences.

      TestBash NYC Competition - 5 Free Tickets!!!

      So TestBash is coming to New York!!! You can read why here.

      Having been extensively involved with Ministry of Testing for many years now, I can't tell you how awesome this is! Attending a TestBash conference is truly a unique experience, it's personal, it's focused on testing, no sales pitches, it's relevant, it's practical and created by awesome people for your benefit.

Now, here is the brief sad part: sadly I cannot make TestBash NYC, I have commitments that unfortunately I cannot miss. I would have been the first to purchase a ticket if I wasn't already busy. Firstly because New York is awesome, I had the privilege of visiting in 2014 for a different conference, but secondly because it's TestBash!!!

      So I want to give others the chance to attend on my behalf, so I'm giving away 5 tickets to the conference day, the speakers and talks are here, but even before you read that, I can tell you that they are all awesome, some of them I consider friends.

      So what do you have to do, to get one of these free tickets? I am glad you asked!

1. Tell me why you want to attend this awesome conference. Saying it's awesome isn't enough!
2. After the conference, I will be in touch asking you to write a few paragraphs about your experience, that we can share on your blog, my blog, or even on the Ministry of Testing site, so others can hear from someone else how awesome TestBash is.
      If you would like to be in with a chance of winning please complete this form telling me why you want to attend!

To finish this though, why am I doing this? As I said at the start, if I wasn't busy I would have bought a ticket instantly. Secondly, I really believe in the goal of the Ministry of Testing, I have the privilege of being involved in it every day, and if I can help them expand their reach, I know this will result in many many more testers feeling the benefit of the Ministry of Testing. On top of that, if I can help five testers experience such a great conference, which I know will influence their careers, well that just makes me feel awesome, and I like feeling awesome.

If you got this far, please share this with all you can, so as many people as possible get the chance to enter.

      The form to enter is here. Deadline is Midnight (UK Time) on Monday 19th October.
      Note: The competition is just for a free ticket to the conference, you will be responsible for the cost of your travel and accommodation.

      Interviewing - Question at CAST2015

      So I tuned into the CAST live stream again tonight, and I caught the end of Rob Bowyer's talk titled "Why Should I Hire You?", because I am bloody awesome! Sorry I digressed there. From what I heard it sounded like a talk full of experience and insight. Again as per last night, I decided to take advantage of the CAST twitter printer and ask Rob a few questions.


I asked this question having done some interviews recently for a testing role requiring automation skills. Well, on CVs I see the same things all the time. They tend to read something like this:
      Created and maintained an automation architecture for UI testing using C# along with SpecFlow, NUnit and Selenium WebDriver.
      Rewrote their UI automation architecture from Selenium RC to Selenium WebDriver using the PageObject pattern as well as the Data Builder pattern.
Let's look at the first quote. BINGO! That's right, buzzword bingo, and there is at least a full line there. Now, I am glad to hear about the tools you are using, glad to hear they are popular tools, glad you told me what kind of testing you believe your architecture is doing. But why? Why did you create this, for what purpose? Why did you choose those tools over others? How did you maintain it, and why did it need maintaining? For some, listing the tools is enough, but for me, it's not.

Now the second quote, this is better, less focus on tools and languages and more focus on patterns. Patterns are transferable, so I am happy to read this. But it's still missing things. Why did you rewrite it? Why did you choose those patterns, and what do they offer you?

So, those quotes are from my LinkedIn profile. I realised during some prep for a conference talk that I am massively underselling myself on my CV/LinkedIn; note that I still haven't got around to updating them. But clearly, so are many many others. Hence my question to Rob. When interviewing recently, I was reading similar things and found myself going "and what?". Huh? Really? So?

I am somewhat interested in the tools you are using and the programming language you used, but what I am really interested in, what really makes me go "oooooooo", is when you talk about:

1. Why you are using automation.
2. How it fits into your testing strategy.
3. How you decide what to automate.
4. How you designed your architecture, and why that way.
5. Why that language over another.
You see, as I mentioned above, things like models and patterns are transferable, they can be applied to most languages and most tools. If you can demonstrate an understanding of designing a good architecture, an understanding of where to use automation and how you go about selecting tools, then that is of value to me, that draws my attention. Given a few hours I can learn a new tool. Given a week or so, I can get the basics of a new programming language. In both scenarios, I can google for help. But if you don't know what to google for, well then you're a bit stuck.

      If you agree, have a look over your CV/LinkedIn profile, are you selling your real skills, or just listing some tools and patterns? 




      Using BDD Tools To Write Automated Checks != BDD

Using Cucumber, SpecFlow or similar tools to write your automated checks is not doing BDD. Cucumber, SpecFlow and similar tools are not testing tools ¹. I believe they were designed to facilitate the process of BDD. Sure, they could be accompanied by other tools and code to form a testing tool, but as this post will elaborate on, it's important to understand what doing that means.

I am seeing a surge, well it was a surge at first, it's considered the norm now; I gauge this by the blog posts I see, message forums and job adverts. I probably first noticed it in 2013. This surge is people using tools that have come out of the BDD community for their automated checks, with the most popular being Cucumber and SpecFlow.

Is there a problem? Maybe, as always it depends on the context. But what I do have a problem with is people claiming that by doing this, they are indeed doing BDD. Now, I should make it clear from the off, I am no BDD expert. I have a shallow understanding of it, read a few books, a few blog posts, never worked anywhere claiming to do BDD. However, I have worked in companies where collaboration is strongly encouraged, if not mandatory, and my understanding of BDD is that it's rooted in increasing collaboration. Does writing your automated checks in SpecFlow/Cucumber increase collaboration? Minimally, if at all.

My next gripe is people using Cucumber/SpecFlow in their automated checks, well, just because! "Why wouldn't we", I hear them cry, shortly followed by "it makes everything readable". It could make it more readable; however, sadly, the way most implement it (a generalisation here, but also based on what I read about and have experienced), they don't write them in a readable way. OK, I should make this clearer: the "steps" may be very readable, but what the scenario is checking, not so much. But that's the point of writing scenarios in gherkin, right?

There is a great example of that in Fifty Quick Ideas To Improve Your Tests by Gojko, David and Tom, in the chapter titled "Use Given-When-Then in a strict sequence". It reads like this.
      Given the admin page is open
      When the user types John into the 'employee name'
      and the user types 30000 into the 'salary'
and the user clicks 'Add'
      Then the page reloads
      And the user types Mike into the 'employee name'
      And the user types 40000 into the 'salary'
      And the user clicks 'Add'
      When the user selects 'Payslips'
      And the user selects employee number 1
      Then the user clicks on 'View'
      When the user selects 'Info'
      Then the 'salary' shows 29000
      Then the user clicks 'Edit'
      and the user types 40000 into the 'salary'
      When the user clicks on 'View'
      and the 'salary' shows 31000
Have you seen a scenario like this before? What is this scenario actually checking? It's not immediately obvious to me, is it to you? I am not going to dive into the detail of what it could be based on the above, as the author of the chapter has already done a great job of that, so I recommend reading it. But a possible scenario could read as:
      Given an employee has a salary 'X'
      When the tax deduction is 'Y'
      Then the employee gets a payslip
      and the payslip shows 'Z'
Which works better for you? The latter, I would hope. It's clear what the scenario is doing. It's clean. Also, in most cases where BDD may be being practiced, or the acceptance criteria are being written in gherkin, it's directly related to the story, and could subsequently serve as an educational resource for someone wanting an insight into some features/behaviours. The same couldn't be said for the first example.
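For anyone less familiar with how gherkin hooks up to code, the plumbing behind a scenario like that second example might look something like this (a sketch using Cucumber-JVM; the step wording and the bodies are purely illustrative):

import cucumber.api.java.en.Given;
import cucumber.api.java.en.When;
import cucumber.api.java.en.Then;

public class PayslipSteps {

    @Given("^an employee has a salary '(.+)'$")
    public void anEmployeeHasASalary(String salary) {
        // create the employee with that salary, e.g. via an API or directly in the database
    }

    @When("^the tax deduction is '(.+)'$")
    public void theTaxDeductionIs(String deduction) {
        // apply the deduction
    }

    @Then("^the payslip shows '(.+)'$")
    public void thePayslipShows(String expectedAmount) {
        // assert against the generated payslip
    }
}

The scenario stays readable because the intent lives in the feature file, while the how lives down in steps like these.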

      So why are so many people automating like the first example, where we don't have these advantages?

I have started calling the first example TestCase 2.0. By this I mean, if the steps are written like that, someone without any technical skills can come along and make some new scenarios just by plugging some steps together. They never have to look at the plumbing underneath, it remains hidden. It's like feature files have replaced spreadsheets... step-driven automation instead of keyword-driven automation. It takes little thought, and subsequently returns little value.

It's understandable why someone would use it this way if such a framework existed: "what! I can just search for steps and create 100s of new scenarios, awesome!" Sadly it's not. I'm not going to dive into that in this post, but we need automation that supports us; creating scenarios for the sake of it, especially ones where the scenario isn't clear, isn't going to help anyone in the long run. Also, the more you add, the longer you wait for your feedback, you know, the fast feedback that automation could provide.

      I am not saying don't use tools such as Cucumber/SpecFlow in your automated checks, but think about why you would.
      1. What's the advantage?
      2. Where's the value coming from? 
      3. Who is your target audience for reading your scenarios?
      But be clear to yourself and your team with what it is you are doing. In most cases you are using BDD tools as part of your automation framework, not doing BDD, so don't fool yourself or others into thinking you are.

So do use Cucumber/SpecFlow, but use it for the right reasons; use it to make it clear to you and others what the scenario is checking. Not because it's the "cool" tool at the moment.

      If you still think what you are doing is BDD, then I have listed some references at the bottom.

      References

      ¹ https://cucumber.pro/blog/2014/03/03/the-worlds-most-misunderstood-collaboration-tool.html
      https://cucumber.io/blog/2015/06/18/hamish-tedeschi-what-is-bdd
      https://cucumber.io/blog/2015/07/13/anne-marie-cukeup-question-and-answer
      http://prezi.com/hhmqznflya0l/?utm_campaign=share&utm_medium=copy&rc=ex0share

      Look up Dan North & Liz Keogh's work on BDD.


      Testability Question at CAST2015

I was watching the live stream of CAST2015 earlier; in particular, I was listening to Maria Kedemo talking about "Visualising Testability". Having done hers and Ben Kelly's workshop at LetsTest, I was interested to hear Maria talk about this topic again, and also to see if anything from the workshop was in the talk.

      Wanting to get involved more, I posted a question to the twitter printer, which on a side note, is an awesome idea.


If I could rewrite it, I would write: "Should testers with coding skills focus some of their time on increasing the testability of the product/testing, instead of focusing on creating automated checks, which I believe is where the majority spend their time?" Sadly, that doesn't fit in 140 characters.

I believe they should, as I believe automation is just a tool. The most common use is reducing testers' time spent checking, by automating those checks, which I also believe is where most focus their efforts. However, some testers' skills are on par with some developers' now, especially those occupying the role of SDET and such. So surely we could use those skills to increase some aspects of testability. For example, referencing James Bach's model, someone with those skills could spend time improving intrinsic testability, altering the product itself. This could be adding in some logging they require, it could be writing some hooks to make accessing the system easier, and much more.

But for me, I want to see more testers focus on what James titled "Project-Related Testability". I encourage people with coding skills, testers or developers, to create tools that really support the testing efforts. For example, they could write tools for reading log files, creating data, state manipulation, data manipulation and much more.

Of course, with any automation pursuit, it should be clear what you are trying to achieve, and be aware of falling into the automation trap. If something is taking too long, will the value be returned by continuing, or should you just accept defeat?

Anyhow, I encourage all to watch Maria's talk once it's published on the AST YouTube page, and to think about what tools you, or someone in your team, could create to increase an aspect of testability.


      Tail Log File And Play A Sound When Keyword Hit

      So on the testers.io Slack team today, Maik messaged:-
      Does anyone know a way/tool to watch a log file and play an audio sound, if a certain keyword is found (e.g. ERROR)
      there is "log watch" for windows from James Bach, but I am looking for a Macintosh
Having only had a Mac for about 8 months now, I have been using the terminal more and more. I immediately thought upon reading Maik's message that this must be possible. Now, I knew how to tail the log file, but I wasn't sure about playing the sound. Turns out on Linux you can use aplay; however, that doesn't seem to exist on the Mac, but a few googles more and I discovered that the Mac has afplay. I simply googled to find a beep .wav file for use with this.
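What I ended up with was something along these lines (a sketch; it assumes a beep.wav sitting alongside the log file):

tail -f log.txt | grep --line-buffered "ERROR" | while read line; do afplay beep.wav; done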

      So I gave it a go, and success was had!

Now the funny bit, well, it wasn't funny at the time, but it is now. As I was testing this, I created log.txt and was editing it in Sublime, and the sound would play when I added "ERROR" to a new line in the file. Success? Not quite, it would also beep for every "ERROR" already in the file. Unable to work out why this was, I told Maik I was done, but informed him of this "feature". We decided it was a feature and not a bug; he could use it to see how the day had gone. If the final error beep of the day lasted for a few minutes, well, then that would have been a day full of errors!

      But I was left feeling unsatisfied with my work. So I broke the script down and tested it in individual sections, the logic was sound, it was doing what I expected it to, but not working as per my goal.

Turns out, Sublime doesn't append text, instead it overwrites the existing file! So my script was behaving correctly when it played the beep for all the errors thus far, because as far as tail was concerned all the lines were new! So I decided to see how I could append a new line to an existing file without overwriting the whole file. Turns out it's actually quite easy.

      echo "sdasdsa" >> log.txt 
The above command would do this for you. So I tested using this, and lo and behold, it only beeps when a new line is entered containing "ERROR", and it only plays once.

So there you have it, a fun 30 minutes was had by all. I learnt some more about shell and using the terminal, and Maik got a script that he later confirmed works for him.

I never asked Maik what his particular requirement was for this tool, but it's something I have done in the past, and as Maik mentioned in his original post, James Bach built "log watch" for Windows.

Log files are used to capture unexpected errors and to provide information on the product, information that is gold to a tester. So why not monitor them in real time? Having a sound play on certain matches is a nice way to get your attention. It can be very useful when in a testing session, as the beep may alert you to something you hadn't seen, but which caused an error. For instance, the UI may not report any problem at all, but being alerted to an error may mean your actions had actually caused it, which you would never have known if you weren't monitoring the logs. Reading them after the event would alert you to the fact there were errors, but being after the event would make it harder to reproduce, to work out what caused the error. So using such a tool or script to alert you can lead to some interesting discoveries.

This is exactly the kind of thing I want to hear more about with regards to how automation is used within testing. For me this is a fantastic use of automation, the type of automation I talk about, automation that really supports the tester. We need more of these things. It's nothing new, but perhaps it gets less focus in some teams due to pressure to produce automated checks. But this whole thing took me 30 minutes, and from my experience of using "log watch", I can only imagine it will repay itself many times over for Maik and others who decide to use it.

      Happy log monitoring. There was talk of playing the Benny Hill theme tune upon a match, this is of course completely optional :D but highly encouraged.

      p.s. Here is a follow up post from Maik. http://hanseatictester.info/?p=566


      An Introduction To The Data Builder Pattern

      The data builder pattern was first introduced to me by Alan Parkinson, he did a talk about it at the Selenium Conference in Boston back in 2013. You can watch it here.

I want to share with you the current implementation of the pattern I am using, which was created by myself and Lim Sim. It's not 100% aligned to the following links, but it's working well enough for us.

So what is the Data Builder Pattern? Well, we start with the "Builder Pattern"; officially it's "an object creation software design pattern". So in the context of data, the pattern is used to create data in the system under test. In my words, it's a maintainable pattern for creating and managing the automated creation of data in the system under test.

It's a fancy name for something that is surprisingly simple.

In its simplest form, the flow is as follows: Model > Build > Manipulation > Creation. We have a model of a domain object, we create an instance of it with some values, we manipulate some values (not always) and then we create this object in our system.

      Model
      This is a model of the domain object you are wanting to create. So we have to look at our domain and break this down into models. The application database can be a good place to work these out, in some cases, you could simply copy the schema. The code base is also a great place to look, and again, in some cases the models may already exist, allowing you to use them directly in your code or make a copy of them. If you don't have access to any of those things, just take the time to work it out. The application UI could also be a good place to work these out, but be aware that sometimes important values are never displayed in the UI.

So let's look at an example; we will use this throughout this post. I am going to go with a user of a system, it's a made-up system, but you will get the point. I should point out that for this example I haven't made the code fluent, you can read more about that here; it doesn't really alter the pattern, but as I said, this is how we have implemented it.
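Something along these lines (a sketch; the fields are illustrative, pick whatever your domain object needs):

public class User {

    private String firstName;
    private String surname;
    private int age;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getSurname() { return surname; }
    public void setSurname(String surname) { this.surname = surname; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}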

As you can see, we're keeping it very simple. We simply have a model of a domain object with getters and setters, so that we can assign some values once we create an instance of it.

      Build
      I have called this stage build, this is the stage where we create an instance of our model. So in other words, we create an instance of our class (model), we create an object. We build the model.

      We have called these builders, it's where we create an instance with some values. Those values can be generic or you could use a random string/number generator to create values for you. It all depends on your context. Or they could be scenario/persona based.

The builder is a class that consumes a model. In turn, it knows the context of the model and, importantly, what values to assign to its properties. The builder will then return an instance of the model with usable values assigned to its properties.

      You could have a simple generic builder like so:
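Sketching one out, building on the illustrative User model above (the values are whatever counts as "generic" in your context):

public class UserBuilder {

    // A user we would consider "generic" in our context
    public static User buildGenericUser() {
        User user = new User();
        user.setFirstName("Joe");
        user.setSurname("Bloggs");
        user.setAge(30);
        return user;
    }
}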

      So you can see, all we are doing is assigning values. We have decided that in our context, those values fit what we would consider a generic user.
      But of course we could now expand the builders available to us, to fit other scenarios that may exist within our context. For example an OAP user.
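Sketching that out as well, again with illustrative values:

    // Added to the same UserBuilder class - an Old Age Pensioner persona
    public static User buildOapUser() {
        User user = new User();
        user.setFirstName("Albert");
        user.setSurname("Bloggs");
        user.setAge(70);
        return user;
    }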

Notice that I added it to the same class; you should do the same if you are going to try this approach. This class is the builder for a specific model, so this is where you would continue to add your builders. Another common approach to data is to just randomise it, as in some scenarios the data is irrelevant to the test/check you want to do. So you create yourself a builder that creates random values. You can do that inline, or if you are repeating them a lot, create yourself some utilities to create strings of all lengths and so forth.

So create a builder for all the specific personas or scenarios you require. Of course, you should consider whether your builder needs to adhere to the schema of your database; for example, if the firstName field in the database has a character limit of 50 and the builder produced a 60-character firstName, that wouldn't really work. So be aware of the data limitations of your system.

      Manipulation
So at this stage we have our object and it has values assigned to all its properties; it's good enough to be created if we so desired. But sometimes we do want specific values, perhaps our check is going to look for a specific surname or search for users of a specific age. We need to be able to create data that is unique, but we don't want to have to assign values to all the properties, only the ones we are interested in for our check/test.
So what we do is make changes after the builder: we let the builder assign all the values, then we change the ones we need to before we create. So it would look something like this, where I need a user with the surname "Richards" and they need to be 35.
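A sketch of that manipulation step, reusing the illustrative builder from above:

User user = UserBuilder.buildGenericUser();
user.setSurname("Richards");
user.setAge(35);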

      So as you can see, I only change the values I need and the rest of the values remain as they were from the builder.

      Creation
Just as the build stage needed a builder, the creation stage needs a creator. The job of the creator is to actually create our object in the system under test. How your creator creates the data is solely dependent on your context and the testability of the system under test.

      Some ways that you could do it. You could go straight to the database and insert your data there. You could use APIs if they are available. You could even use the UI if no other alternatives are available or if the context leans in that direction. There is no right or best way, it's whatever works, but by following the pattern, you allow yourself to change it in the future if a faster or easier method becomes available.

So what does the creator do? The creator basically takes your object, reads all its properties and uses flows that you create to insert that data into the system under test, resulting in the data being available for your test/check.

I am not going to show an example of a creator as they are hugely context dependent, and I am sure you understand the job of the creator.

      Summary
      So once you have your models, builders and creators, you will end up with something like this, that would result in data being added to your system under test.
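Again a sketch, reusing the illustrative classes from above, with a UserCreator whose create method does the actual inserting:

User user = UserBuilder.buildGenericUser();
user.setSurname("Richards");
user.setAge(35);
UserCreator.create(user);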

As you can see, this is really easy to read; that's because I have used names specific to this domain. Sure, in this example it's just "User", but I am sure you can imagine how your domain may look. Then we have used nouns for our objects so their purpose is clear to us, and verbs for their methods, create and build.

Anyhow, I hope this was a nice introduction to this pattern if you were not familiar with it. I will follow this post with a few more on this topic. We will look at more advanced implementations, as well as how following this pattern allows you to leverage the code for multiple uses throughout your testing approach.





      How Often Do You Really Fix A "Failing" Automated Check

      Do we really fix a failing automated check? Or do we simply defunct one and create a new one.

      I saw a tweet the other day from an old colleague of mine.
It got me thinking, how often do we really fix a failing automated check? By fix, in this instance, my thoughts started with getting it passing again, getting it green. Even though I prefer not to talk about passing and failing, for the context of this post, passing means satisfying the algorithm and failing means it didn't satisfy the algorithm.

Lots of discussion followed on Twitter after I tweeted, "You very rarely 'fix' a 'failing' automated check", https://twitter.com/FriendlyTester/status/592972631705559040, but I am going to try and summarise the thoughts.

So let's run through an example. I have a sign up form that has 10 fields on it, and I also have many checks that are all passing on this form. A new field has been introduced to the form, a mandatory field; this has caused 5 of the checks to now fail. In order to get those checks passing again, they need to be instructed to populate the mandatory field. Does this class as fixing the check?

      I don't believe so, for me this is a new check, it just so happens to reuse 95% of code from a now defunct check. So we haven't fixed it, we have created a new one. Semantic argument some of you may be thinking, but I don't think so. For me, its a very important distinction.
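To make that concrete, here is a sketch of the kind of change being made, assuming a Selenium WebDriver check in Java, where driver is the WebDriver instance and the element IDs are made up:

// The check already filled in the existing fields...
driver.findElement(By.id("firstName")).sendKeys("Richard");
driver.findElement(By.id("email")).sendKeys("richard@example.com");
// ...and the "fix" is one extra line to populate the new mandatory field
driver.findElement(By.id("phoneNumber")).sendKeys("07123456789");
driver.findElement(By.id("signUp")).click();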

      How often have you been in this situation, the build goes red, someone investigates, 5 checks are failing, then you hear "fix those checks!". If you haven't, replace check with test, and try again. It's certainly something I have experienced multiple times.

      Someone then runs the checks locally and immediately sees that the new field isn't being populated, "oh look it's not populating the mandatory field, I will make it do that", they do it, run them, they all pass! Job done......

Here lies the problem: how do they know that mandatory field is even meant to be there? Well, the likelihood is they did know, which then leads to the question, why weren't the checks "fixed" as part of the initial development work? One problem could be the separation of coding the feature and creating automated checks, which happens in a lot of places, especially where 'testing' is a separate activity. It could be that they run them after and then fix where required. But I feel it's because teams don't stop to think about the impact changes will have on the checks upfront. Once the work is done, the checks are run like a safety net, failures investigated and 'fixed'.

So what's my gripe here? Well, I feel more people need to give their use of automated checks more focus. You could write the cleanest automation possible, but if you don't know what they are checking, if anything at all, what use is that to you? It's like building the wrong product right. We should be thinking, at the time of developing new features, what new checks are going to be needed here, and importantly, what checks are not. Are existing checks still checking enough? What else should they be looking at?

Checking can be an important part of an approach to testing, evidently very important at companies where they have created hundreds or thousands of automated checks. If you are going to rely on automated checks so much, then you need to apply a lot of critical thinking to which checks you create and continue to maintain. As repeated many times, your automated checks will only check what they are instructed to; they are completely blind to everything else.

Designing effective checks, and by effective I mean checks that add value to your testing and support you in tackling your testing problem, can be a difficult process and requires a lot of skill; some are more obvious than others. It isn't something someone who doesn't understand testing should be doing. Now, turning those into automated checks, sure, I could see that being done by someone who doesn't understand testing, but it would be far from ideal in my opinion.

      The reason I say it's far from ideal relates to this tweet from the start of the year.
      Now of course this depends on your framework of choice and the architecture you are building on, but the creation of automated checks can be one of much exploration. It can also be the case of adding some keywords to a spreadsheet, but even in that scenario much exploration would have already been done.

The application needs to be explored at a level you may not have yet. In the example of the web, the HTML needs to be explored for testability: is this a site I can easily automate? At the API level we may discover JSON elements that we have no idea what they do, or where they come from; we need to work these things out, we need to test. Also, as we are automating a check we may become aware of other attributes/values that we should be checking, and adjust the check accordingly. Again though, this requires thought, requires skill. There is also the process of testing the automated check itself, something I have previously written about here.

I feel myself going slightly off topic, so let's try to wrap this up. Testing encompasses checking (James Bach & Michael Bolton, http://www.satisfice.com/blog/archives/856); your automated checks should be supporting your testing, helping you tackle your testing problem. Regularly review your automated checks, don't be afraid to delete them, and always be evaluating their effectiveness. If you find yourself 'fixing' lots of automated checks, take the time to stop and think about what you are really doing.
1. How could the situation have been avoided?
2. Could it have been done earlier?
3. What is this check even checking?
4. I have "fixed" this many times already, is this an effective check?
5. What information is this check giving me?
Don't chase green, chase automated checks that support your testing. Don't blindly "fix" automated checks. Also, for another post, something that we discussed in my automation workshop at LetsTest and Nordic Testing Days recently: do your checks clearly indicate their intention? Sure, we can read the code and see what it does, see what it is checking, but what about its original intention, where is that? I'll be writing about this soon; your "fix" may actually break the algorithm, and therefore misdirect testing effort.

      Remember the code isn't the check, the check is the algorithm. The code is the implementation of it. They may not always align, especially over time and with lots of fixes. Focus on both.

P.S. Thanks to Maaret and Toby for their posts, here and here respectively. I had intended to think about this more back at the time of the original tweet; their blogging gave me the nudge needed.

P.P.S. I should add that I believe it's OK to say you are fixing a check if you are changing the implementation of the algorithm, as long as those changes don't alter the original algorithm. Such as changing the data, updating some locators, or even the URLs. Things along those lines.

      Automation In Testing Video From TestBash

In March I had the privilege of talking at TestBash. My talk was titled "Automation In Testing". The talk covers my experience with automation throughout my career and how I feel current terminology, such as Test Automation, restricts people's views and usage of automation.

Anyhow, the talk was recorded and is available on the Ministry of Testing Dojo; you will need to sign up, however it's free to do so.

      Enjoy, and I appreciate all feedback on content, presentation style or anything else you feel I may be interested in.

      https://dojo.ministryoftesting.com/lessons/automation-in-testing-richard-bradshaw

      Blink Testing In A Mobile Context

I was first introduced to Blink Testing by James Bach during Rapid Software Testing, nearly two years ago now; however, I only used it infrequently, until now.

      For the last 9 months I have been testing a native mobile app, and upon recent reflection, it turns out I am using Blink Testing a lot.

      James describes blink testing as an heuristic oracle, and offers the following definition:-
      What you do in blink testing is plunge yourself into an ocean of data– far too much data to comprehend. And then you comprehend it.
       He also offers some examples, one that really relates to my use:-
      • Flip back and forth rapidly between two similar bitmaps. What catches your eye? Astronomers once did this routinely to detect comets.
      So how do I use it, like this...
      Line of mobile devices
When I am testing I will line up a minimum of two devices and execute my tests on them all at the same time. By doing this, I am exposing myself to more data than I can comprehend. The majority of you have probably done something similar with browsers on separate monitors, but due to the size of mobile devices, using this technique is very easy with them.

      Going left to right, I repeat the same actions on the app on all the devices and just as James has described, our brains love to pattern match. I am not always looking for the types of problems I will list, but I regularly come across them working this way.

      Here are just a few of the problems I have found using this heuristic.

      Layout
      Mobile devices come in all shapes and sizes, especially on Android... and by using this technique the differences just jump out. By having similar size screen devices side by side, your eyes are quickly drawn to the differences.

      Images
      It can be common to use the same artwork across devices, but devices don't always have the same screen real estate, meaning that images can appear squashed or stretched, again something easy to notice using this technique.

      Transitions / Animations
Mobile app user interfaces can have lots of animations and transitions, and depending on the speed of the device, API level, memory and screen size, the performance of these animations can vary massively. By aligning devices, it can make differences easier to spot.

      Performance
As mentioned with the animations, performance on mobile devices can vary based on many things. By repeating the same process on many devices, the performance differences become obvious. For example, if an action on the first five devices takes what feels like a second to complete, and then on the sixth it feels like it takes longer, you will notice.

      If you are testing a mobile application and not using this approach or a similar one already, give it a go, and let me know how you get on.

      References:
      http://www.satisfice.com/blog/archives/33 by James Bach