#NottsTest - A Chapter Ends

That chapter is my chapter. I am relocating to Hampshire due to work, for both me and my partner. Therefore I have hosted my last #NottsTest :(

It has been an absolute pleasure hosting this meetup. There have been nine NottsTests I have hosted, running from 2nd May 2013 to 6th November 2014. I want to thank all the following people for speaking at these events:-
  • Andy Glover x2
  • Bill Matthews
  • Tim Munn x3
  • Simon Knight
  • Stephen Blower
  • Julian Higman
  • Jonathan Roe
  • James Salt
  • Me x3
  • Johan Atting
and of course a huge thank you to all the people that attended these events. It's hard to run a meetup without people! Also a huge thank you to all the sponsors of NottsTest, keeping speakers and attendees fed and watered. Well, I say watered... we all love a beer!

Hosting NottsTest really has been all the things I set out wanting it to be. I wanted an event for the testers of Nottingham to network and share their experiences. I would guess 50+ people have attended, so we most definitely achieved this!

I also wanted to create a platform for people to feel comfortable sharing their ideas and experiences, a platform for people to practise talks with an eye to submitting them to conferences, or to do dry runs of conference talks. We certainly achieved this: Tim and Jonathan did their first talks at NottsTest, and as you can see, Tim certainly got a taste for it!

I wanted to form friendships, I wanted others to form friendships, I wanted to strengthen friendships; all of these happened. People have been successful in finding jobs via NottsTest, which I personally think is awesome!

I am sure this won't be the last meetup I host; it's an incredibly rewarding thing to do. The networking can be really beneficial, seeing people do their first talk is a great feeling and, all being said, they are just really, really good fun!

But I am confident this will also not be the end of NottsTest; I am really confident someone will pick up from where I am leaving off and continue this awesome event. If that person is you, then please do get in touch, I am more than happy to help you with your first few meetups.

So once again, thank you so much to everyone who has been involved with NottsTest. It's been a great chapter in my meetup book and the NottsTest book, but neither one of these books is complete.

The Negative One

I am having problems at the moment; I am being labelled the negative one. Why? Well, I am asking a lot of questions, and probably not asking them in the right way, I guess. So why am I asking so many questions?

I want to go deep. People used to tell me their plans and their ideas; they sounded great, and I had even seen a lot of them work in other companies, other contexts. But instead of playing along, now I ask questions.

  • What problem is this solving?
  • Why do you think this is a good idea?
  • Have you considered this?
  • I am not sure that is a great idea because of X & Y, what do you think about X & Y?
  • I disagree because of X & Y, no? (I added this one because I recently said it. Writing it down and re-reading it, it isn't asked how it could be, e.g. "OK, but what about X & Y, could you see them impacting this idea?")
  • How did you get to this idea?
  • Many more…

Why go deep? I am not sure; I am yet to reflect a lot, I just had the urge to write something. I feel it may have something to do with experience: I have a lot of experience now, and I feel like I can add to most people's ideas, I guess. Ask the questions now, before it's implemented; see if they really understand why they think something is a good idea, or force them to go deep to ensure they have really questioned themselves. Questioned their own ideas and understandings: are they biased by their own experiences? I am not questioning them to tell them they are wrong; I am trying to see how much thought they have put into it, how deep they have gone. I am not setting out to be negative, more inquisitive.

Learning. Sometimes I have questioned ideas that I think are great, ideas that fit my understanding, and ideas I believe would work. But I still question. I want to learn from them, I want to understand why they think it will work. Have they picked up on something I haven't? Have they considered the same things as me, the things I think make it a good fit? How did they answer the questions that immediately came to my mind? Did they answer them the same as me, did any questions even arise for them? If they did, what caused them to ask those questions, what experiences or insights made them raise these queries?

Why do they feel I am coming across as negative? I guess in most recent cases it's because they feel like I am "raining on their parade". They feel like their idea is really going to solve something. They feel like this is the best idea ever. They feel like their idea is perfect; they have done their analysis and are content. Many others, I'm sure. But then I come along and ask similar questions to the bullet points above. Perhaps they are thinking: What is this guy's problem? Who is he to question me? Don't question me, I have done all the analysis, this is foolproof, no chance I am listening to him. Because labelling someone else negative is an easy way out of going deep? I don't really know, and it is something I am going to ask the individuals over the coming days. Something I intend to study and get more insight into with your assistance.

If you have got this far, it's probably clear to you that I don't know much about this. I stated on Twitter that I was going to write this blog post. People have responded with various things such as:-

So, if this makes sense to anyone, hopefully it does.
Can you please comment below with suggestions on things to read, watch and study? I would really appreciate it, because I am not purposely being negative; my aim is to understand their thinking, compare it to mine, learn from them, teach them, and find a mutual foundation to build on.

Set Proxy Using WebDriver

I am currently at the London Tester Gathering Workshops, in a session being hosted by Bill Matthews. He is showing the attendees how to use Zap Proxy. During the class he mentioned that you can hook your existing WebDriver checks into a proxy, something I also mentioned in my recent "WebDriver Beyond Checks" post.

So I thought I would create a quick post showing you how you can do this. I am not using Zap; instead I am using Fiddler, but you just need the right proxy URL, so change it accordingly.

Here is how you do it in C# using Firefox and Chrome drivers.
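The code originally embedded here is missing from this post, so below is a minimal sketch of the idea. It assumes Fiddler listening on its default address, localhost:8888, and uses the Selenium .NET bindings' Proxy class; swap the address for your proxy of choice.

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;

public class ProxyExamples
{
    // Fiddler's default address; change for your proxy of choice.
    private const string ProxyAddress = "localhost:8888";

    public static IWebDriver FirefoxWithProxy()
    {
        var proxy = new Proxy
        {
            HttpProxy = ProxyAddress,
            SslProxy = ProxyAddress
        };

        // Apply the proxy settings to a Firefox profile.
        var profile = new FirefoxProfile();
        profile.SetProxyPreferences(proxy);
        return new FirefoxDriver(profile);
    }

    public static IWebDriver ChromeWithProxy()
    {
        // Chrome takes the proxy as a command line argument.
        var options = new ChromeOptions();
        options.AddArgument("--proxy-server=" + ProxyAddress);
        return new ChromeDriver(options);
    }
}
```

With either driver your existing checks run unchanged, but all their traffic now flows through the proxy.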


Here is how you do it in Java.
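Again the embedded code is missing, so this is a sketch using the Java bindings' Proxy and DesiredCapabilities, with the same assumed Fiddler address:

```java
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class ProxyExamples {

    // Fiddler's default address; change for your proxy of choice.
    private static final String PROXY_ADDRESS = "localhost:8888";

    private static Proxy localProxy() {
        Proxy proxy = new Proxy();
        proxy.setHttpProxy(PROXY_ADDRESS).setSslProxy(PROXY_ADDRESS);
        return proxy;
    }

    public static WebDriver firefoxWithProxy() {
        DesiredCapabilities capabilities = DesiredCapabilities.firefox();
        capabilities.setCapability(CapabilityType.PROXY, localProxy());
        return new FirefoxDriver(capabilities);
    }

    public static WebDriver chromeWithProxy() {
        DesiredCapabilities capabilities = DesiredCapabilities.chrome();
        capabilities.setCapability(CapabilityType.PROXY, localProxy());
        return new ChromeDriver(capabilities);
    }
}
```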



There you go, you can now add a proxy to your WebDriver checks and collate lots of information to explore from the various proxies available, such as BrowserMob, Fiddler and ZapProxy.

Happy Proxy-ing.

WebDriver Beyond Checks

So on Tuesday I did a talk at the London Selenium Meetup (#LDNSE), titled "WebDriver Beyond Checks". After the talk several people asked me if I would share my slides. I said of course, but felt they would be of little use to people without the words that accompanied them, so here are those slides with some words.

This talk was intended to show people that we can use WebDriver a lot more in our testing than just doing UI checks with it.

My first slide starts with a small rant. I hear too many people say we use Selenium to do our 'automated #testing'. Well, you don't; you use WebDriver. Selenium is a project, not a tool. I made this point because it's important to realise this: a lot more goes into the Selenium project than WebDriver. The Selenium project team are awesome!

That slide was followed by another small rant. It's obvious, right? If it was so obvious, why do so many people only use it for automated checking? WebDriver is a tool to drive browsers, ooooo the possibilities.

That was my mini rant over, now a look at how I have used WebDriver for more than just my UI checks.

If I was going to start a new automated checking project from scratch, this is the basic approach I would go with. It is also what I believe most people have now, even if you don't use the same names.

Driver Factory – Not going to repeat myself; you can read about the Driver Factory in this post.
Page Objects – Again, you can read about PageObjects here.
Data Builder – This is a pattern for managing the data your checks consume and create. The pattern can be as simple as CRUD, but hides the complexity from your check. Alan Parkinson talked about this at the Selenium Conference last year.
Utilities – This is where more context specific methods go, such as driver extensions, screenshot methods, logging and reporting methods.
CI – Continuous Integration. Most now have their checks triggered by a CI server, perhaps on a check-in or a deployment.

So this is what I consider a basic automation architecture. I didn't, and won't in this post, go into the technical aspects of such an architecture, because we all do it differently, but I will make a few points.

  • Use interfaces wherever possible, IWebDriver for example.
  • When designing your architecture, think of these 4 orange boxes as separate tools that together form your checking framework, but individually could have many uses.

So let’s see how else this architecture can be used in our testing.

If you are familiar with the Data Builder Pattern (if not, watch the video above), a lot of effort goes into it: first to make it manage the data you care about, and secondly to abstract this management away from your checks. This allows the check to call simple methods such as dataBuilder.Offer.Create() and have that return the name of the offer created. Why should only the checks get to take advantage of this? We all create and manage data when testing, some more than others of course.

So why not create a tool that the whole team can use to manage data during their testing? All you need to do is write an interface that the testers can use and have that call the methods you already have. It could be a command line tool, or a tool with a simple UI. This saved me A LOT of time, especially in contexts where the system was very data heavy.

Note: I said WebDriver beyond checks; well, truth is, in a few contexts I have had to resort to using WebDriver in my Data Builder, as no APIs or straight-to-DB access were available, so the quickest way to get some progress was to use the UI. But using the pattern meant that as soon as the API became available I was able to switch it out without affecting the checks or the tool I had created.
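To make the pattern concrete, here is a minimal sketch in Java. The Offer entity and the in-memory store are hypothetical stand-ins for your real domain object and the API/DB/WebDriver plumbing; the point is that checks and team tools only ever see simple create/read/delete calls.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hides how offers actually get created (API, DB, or even the UI via
// WebDriver) behind simple CRUD-style methods.
class OfferBuilder {
    // Stand-in for the real API client or DB connection.
    private final Map<String, String> store = new HashMap<>();

    // Creates an offer and returns its generated name, so callers never
    // deal with the underlying plumbing.
    public String create() {
        String name = "offer-" + UUID.randomUUID();
        store.put(name, "ACTIVE");
        return name;
    }

    public String read(String name) {
        return store.get(name);
    }

    public void delete(String name) {
        store.remove(name);
    }
}

public class DataBuilder {
    public final OfferBuilder offer = new OfferBuilder();
}
```

A check (or a tester using a team tool) then simply calls dataBuilder.offer.create(); swapping a UI-driven implementation for an API one later doesn't change a single caller.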

Visual testing, something that is very difficult to automate. However, we can take advantage of the checks we have already written. Most people probably already take a screenshot when a check fails; if you don't, you should, it's easy and can really help when debugging failures. But very few people take a screenshot when a check passes, a missed opportunity I feel. Now I'm not telling you to do this for every check, but why not consider doing it for a few key screens in your application? Have the automation put them in a dated folder, which you or a team member can then click through looking for anomalies. This is far quicker than messing around with image comparison, which in my experience has been very flaky.

In a previous role we used to do this in our daily stand-up; each day it was a different person's responsibility to click through the screenshots from the overnight run. Alternating the person doing this was great, as people focus on different areas of the image.
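The dated-folder plumbing is trivial; here is a sketch of the folder-naming part (the screenshot call itself is just your usual WebDriver screenshot method in the utilities layer, and the class name is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;

public class ScreenshotFolders {

    // Returns (creating it if needed) a per-day folder such as
    // screenshots/2014-11-06, so each overnight run's images land
    // together, ready for a quick click-through at stand-up.
    public static Path forToday(Path root) throws IOException {
        Path folder = root.resolve(LocalDate.now().toString());
        Files.createDirectories(folder);
        return folder;
    }
}
```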

We have all probably had to try and automate a table or a graph; it can be a real nightmare and very time consuming, even more so if said table or graph has a lot of data in it that regularly changes. So why bother? Why not let WebDriver do all the legwork for us? It was Matt Archer who first introduced me to this approach. Use WebDriver to navigate to the page in question and have it get the graph (screenshot) or table, and write some code to write this out to an HTML file. You could then also write some code which adds the expected outcomes to this HTML file, allowing you to quickly do the comparison. Far quicker in my experience than trying to code/maintain this.

The same applies to graphs; graphs are very tricky to interact with, even harder if it's a canvas graph. So again, why not use your existing checks / PageObjects to navigate you to the page, screenshot the graph and produce an output containing the expected numbers along with the graph, allowing you to quickly check the points/labels on the graph.

I am not talking about performance testing here, as most probably thought when reading the slide; rather, I am talking about measuring machine performance by using your existing checks. This is by no means performance testing, however it is a low-cost early indicator. Try monitoring your machines' CPU/memory usage while running your checks to simulate some load. You aren't going to be able to say your machines are performing as expected, but you might find that the memory or CPU gets unacceptably high. As I said, an early indicator, but not performance testing.

If you are a small team it can be hard to do concurrency testing, so why not take advantage of your existing checks to act as fellow team members? Do your other testing while the checks are running, perhaps even sharing the same user(s).

Be careful though; you really need to put some logging in place so you can sync your testing with what the automation was doing when you come across issues. Any problems found could be down to the checking or to what you were doing, which means they could be very hard to replicate without any solid logging.

WebDriver comes with proxy support, meaning that you can put a proxy between the browser and the system being tested. So if you consider this ability when creating your Driver Factory, you can easily switch in a proxy.

Two specific proxies I mentioned were BrowserMob and Zap. Both these proxies produce A LOT of information, so adding assertions to this information, for me, simply isn't worth it. Plus, in the context of Zap, new vulnerabilities are being added all the time, so keeping it up to date would be exhausting. So instead, take advantage of your checks, hook in a proxy, and explore all the information produced.

*** Update - Here is how you add a proxy to Chrome/Firefox in both C# and Java. http://www.thefriendlytester.co.uk/2014/10/set-proxy-using-webdriver.html

Monitoring is more in demand, especially with the DevOps movement. So why not consider using WebDriver to monitor your production sites? You can take advantage of existing checks and CI configuration to achieve this. I removed the DataBuilder, as the data here would be different and likely set, instead of being created on the fly. I would strongly recommend that if you are going to try this, you use a machine outside of your organisation and production cloud, because if that cloud goes down you won't be informed; we used an AWS instance.

Simply set up a CI job to run regularly and have it do business-critical user journeys. If the build fails, have the CI server email you. We later added texting to our CI using an SMS API provider such as Esendex.
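Stripped back, the CI job is nothing more than a wrapper around the journey checks. A minimal sketch (the script name and alert mechanism are hypothetical; swap in your CI server's email step or an SMS API call):

```shell
#!/bin/sh
# Hypothetical monitoring wrapper: run the business-critical journey
# checks; if they fail, raise an alert (email, or a text via an SMS API).

run_checks() {
  ./run-journey-checks.sh  # hypothetical: your existing WebDriver suite
}

alert() {
  # Stand-in for the CI server's email step or an SMS API call.
  echo "ALERT: production journey checks failed" | tee -a monitor.log
}

run_checks || alert
```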

Also note that you may have to consider versioning your Page Objects if they are external to your production code, as production will be out of sync with dev the majority of the time.

The main reason I explored this was that the hosting provider at the time said it would cost £80 per user journey, per month! This cost a few hours of my time, and successfully alerted us to issues on several occasions. Especially useful if you have third parties.

Ever had to test a process that has many steps? Or an application where you always have to log in to test? Then why not consider taking advantage of your existing architecture and produce a simple tool that spins up several browser instances and completes the login for you, or navigates you to the right step in a process?

I have done this on several occasions by simply writing a few new checks to get me to the step required, adding a section in my driver factory to not close the browser, then creating a basic UI to choose the test from, along with how many instances to open.

Note that if you regularly test with extensions/plugins, ensure you select that profile if using Firefox, or add them to the driver instance in your factory.

This really can save you a lot of time, but don't always use it, as using it all the time will reduce your opportunities to make observations on the earlier steps.

Accessibility Testing – I forgot to create a slide for this one, but did remember to mention it before the talk. Alistair Scott blogged about this a while back now, here. He explains how you can add WAVE to your Firefox instance and activate it by sending key commands to the driver. WAVE then updates the HTML of the page where it finds errors, which you can then read with WebDriver, writing out potential issues which you can go back and investigate.

Design an automation architecture, not frameworks. Frameworks, or tools as I prefer to call them, can then grow from your architecture.

Think about "How are we going to test this?" not "How am I going to automate this?". Use automation to support your testing, not just by using it for checks, but also for creating tools to support you.

Think about what your architecture is doing, split it up like a jigsaw puzzle and piece it back together to make new tools. You may need to create a few extra pieces, but that just gives you more to play with in the long term. Also don’t be afraid to produce extra pieces that will be thrown away, if they help you test at the time, fantastic!

If you got this far, you're awesome! Hopefully you have lots of ideas on how you can use your architecture more.

TDD - That Doodle Defined

So a few weeks ago Duncan Nisbet published an article via the Ministry of Testing titled TDD for Testers. It's a great article; read it if you haven't already. After this, there was a tweet from Chris Simms, which Twitter won't return to me when I search, but it contained the words "TDD isn't Testing".

I don't know why, but it initially didn't sit well with me, so I started doodling while on the train, below is the result I tweeted.

There were then several exchanges with Andy McDonagh, which resulted in some tweaks and additions to the original doodle. The final doodle looked like this:

There was then a tweet from George Dinwiddle.

So here goes.

In my experience of working in teams where TDD is practised, it's very rare that a developer just jumps in and writes a failing test. There will have been discussions beforehand with other developers; they'll have looked at previous tests and just generally given it some thought. Some great information to be had for a tester here.

1) Lots of discussion. Is a specific characteristic of the design causing them trouble?
2) "Yes, but changing that, will effect this and make all these fail" great for thinking about coverage. Areas outside of the story focus.

The failing test, the coded oracle. This can help us understand how the developer has interpreted the story/requirement. Of course, there is likely to be a trail you will have to follow to get a picture of their understanding, but it's achievable. This can be done once the work is complete, with questions asked like: why did you write this test and not that test? But it can also be done during; pairing is a wider topic, but you could ask, why did you decide on that test? Again, lots of information to gain here to help with test idea creation.

I am not going to go into too much detail on the Code – Refactor steps, but again there is lots of information to be gained. Why did that test need refactoring? What does this refactor do?

I was pairing once, during a coding dojo, and we decided to test-drive the challenge. We wrote a test, it failed. We implemented some code, it went green. Awesome, I thought, what functionality is next? He replied by saying we haven't finished yet: that library/method is slow, I know of a new one. So we proceeded to make the change, knowing that the test was indeed checking our intended purpose for it. So what did I learn from that brief exchange? I learnt a specific library/method was slow; do we use it anywhere else? Also that this developer really cared about what he was delivering.

So we arrive at the Done box, which is carrying two bags. Let's look at the Info one.

As mentioned throughout the post, there is lots of valuable information to be gained from a developer practising TDD. This information can really assist when creating test ideas. Should I focus on this aspect of the story, as the developer struggled creating the tests for this area? Implementing new feature X caused tests D, E and F to fail; how do these areas relate? What common domain object are they sharing? You can also look at the pattern of failure during the development period. If there were a lot of tests failing and passing, perhaps the developer has a good story to tell you about what caused this.

So, the checks. I refer to them as checks not tests; you can read more about that here. I labelled them as a bonus. They're a bonus because what they continue to do is validate the information that you have collated, information you will continue to use throughout the project. They also continue to validate the design, our concern from the off.

So talking about design, the final thing to mention is testability: making something easier to test. Following TDD, in my experience, can lead to difficulties; I have heard many a time, "How are we going to write a test for that?" If you ever hear that, ensure you listen in on the discussion that follows. But they always solve these problems. The outcome of this is that the developers will have already solved most problems, for example creating objects to use in new tests, mocking data, making values configurable, amongst others.

So when we come to what I refer to as Automation in Testing, perhaps we want to create some UI checks, API checks or data creation. The likelihood in a team practising TDD is that someone will have solved the problems you come across, and will be able to get you going a lot faster. This also means that you could share some of this code, so when refactoring takes place, your automation tools are updated too. For example, mocking is great for automated UI checking; if your focus is UI values and behaviour, then control the data with mocks. It also tends to be faster, but that can also be a disadvantage. As always, it depends, but it's another tool in the box.

Read Duncan's article if you haven't already.
A team practising TDD yields a lot of information to support testing. So if you are in a team practising TDD, get amongst them, ask some questions see what you can learn about the code and therefore the application. See what exists that could help you with automation, perhaps some tools could quickly be created using code initially created for the tests, that could really speed up your testing.

"TDD isn't testing, but provides some great artefacts to support Testing"
Michael Bolton tweeted questioning the "checks continue to validate this info" from the diagram. "Instead of validating, I would say that the checks detect changes. Only a human can really validate." He has a valid point. The checks will alert us to a change, which we can then validate taking the context into consideration.


My CAST2014 Talk

So I had the privilege of talking at CAST this year, and along with that came the option of talking on the live stream. I wasn't 100% sold at first, but then thought, you know what, let's go for it.

It was great being on the live stream and having the interaction with viewers via Twitter during the open season and post talk. It also allowed me to host a #NottsTest where the attendees watched me live at my own meetup; that's cool!

But the benefit for you reading this is that talks which were on the live stream were also recorded. So here it is; catchy title, but worth a watch even if I do say so myself.

I will do a detailed write up about my talk, as an article for a publication soon...

Automation in Testing Podcast With Software Test Pro Radio

A few months back Mark Tomlinson approached me, asking if I was interested in recording a podcast. At first I was very much leaning towards no; the thought of impromptu discussion did scare me.

However, even further back than this request, I had a Skype conversation with Michael Bolton regarding my fear of upcoming talks. It was a lengthy conversation, but the outcome was simple: these things are a great learning curve; of course you will be nervous, and not everyone will agree with what you have to say. Just do your best, put the work in, and be prepared for questions, but most important; enjoy it and learn from it.

So here is the result of the approach from Mark, a podcast where we discuss lots of things, which I titled "Automation in Testing".

Enjoy, appreciate all feedback.

Let's Test 2014

Everyone who has been highly recommends LetsTest, so I decided this year I would attend. I had no real goals or things I wanted to get out of the conference; I guess I was somewhat biased by all the awesome people I knew were attending, thinking that it would just be awesome.

It was.

But as I find with the more conferences I attend, it wasn't the talks/workshops that made it awesome; actually some of them were really disappointing, but we will get to that. It's the people. LetsTest attracts the best. By this I don't mean all the reputable testers, the "experts", though most are there; I mean the people who really care about their craft, people eager to learn and, importantly, people eager to share.

Here's where LetsTest has its edge: the venue. The setting is simply stunning. There was an immediate release upon arriving; everything else on my mind left me, testing was all my mind needed to consider. I didn't have to think about finding my way to the conference venue, where I should eat, where peers' hotels are and, very importantly, where the pubs/bars are! I was already at all those places.

The sun was shining, which always helps. I checked in. I didn't know what to expect; the room was included in the ticket, so I hadn't had to spend hours (probably just me that makes it take this long) searching for the right hotel: a good distance from the conference, is anyone else staying here, and of course how much is it! I had none of this. "Your room is 311, sir, WiFi is available all over the site, enjoy", and off I went. The room was basic but fully equipped, and did have one unique feature, this view!!!

The view from 311
I decided to go for a run to get the travel out of my system (never really understood that saying, but I felt great and energised after it, so meh). Time to explore the rest of the venue, and also stick to my point about why the venue gives LetsTest the edge.

Everyone is in the same place for the duration of the conference. Everyone eats at the same place, drinks at the same place, parties at the same place and of course talks testing at the same place. There are no expensive phone calls or internet usage trying to find which bar people have gone to, or that awkward moment when you hear people planning a trip to a restaurant: "excuse me, do you mind if I tag along?" Or trying to find the restaurant once you have asked if you can attend. Nope, none of this; the mind is free, everyone is at the same place.

On the subject of food, oh wow, did we get some good food. OK, now we are on the subject of beer: oh wow, did we get some beer. Lots of beers to choose from, but remember this is Sweden, so it's vital you find a Swede to buy your beer for you, so much cheaper that way....

In summary, the venue is simply fantastic. Everyone staying, eating and partying in the same place for the whole duration meant, for me, that I was so much more relaxed and knew exactly where to go if I wanted to find some people to talk testing with.

DAMN! Of course this was a conference; what about the conference, Richard? Let's be frank, I had highs and lows with my selections, but what conference doesn't? I have no more to say on talks, tutorials or workshops. Grrrrrrrr, I wanted to know more about the conference, I hear you cry. Well, I have been talking about the conference. LetsTest is so much more than "here is a bunch of speakers doing some workshops/tutorials and talks"; it's an environment tailored to allow people to confer. When you have the kind of people attending as mentioned above, this is exactly what you want to be doing with your time.

"Let's Test Conferences on Context-Driven Testing - For Testers, By Testers"

This is exactly what you get. All the noise is removed, leaving you to do what drew you to attending LetsTest in the first place: to collate information about testing, to share your information about testing and to have a damn good time doing it.

I will be back for sure in 2015.

PageObject Pattern - Why, How and More

This post is based on the talk I did at Belgium Testing Days with the same title. I had 8 attendees I believe, so thought I would share my ideas with you all on here.

The trigger for this talk was reading lots of complaints about how complicated code is and how difficult patterns are. I want to show how the PageObject pattern is simple to follow and explore the advantages of using it.

Image by Ben Pacificar || http://www.redbubble.com/people/bvphoto/works/902622-maintenance-nightmare
Automation can be a maintenance nightmare; abstraction can really help with this, and the PageObject pattern is one way to achieve it. Patterns and a good naming convention can make this a lot easier.
"Automation may not be production code but it is used to check potential production code."
My intention with this remark was to highlight the fact that your automation code is really important. Give it some love; you could argue it's more important than your production code.

What is the PageObject Pattern
There is a description of PageObjects available on the Selenium wiki here; the summary is the main part. For me, when people ask, I describe it as "a class representation of a page/part of a page, exposing services that the page has to offer".

There are lots of ways to write automation, lots of patterns to follow; think of it like sewing. So many stitches to choose from, but you choose the one that fits the context. The more stitches you know, the easier it will be for you to create maintainable automation.

The PageObject pattern really comes into its own when the application you are checking consists of many pages with different styles or rendering engines. By this I mean you could take an approach where you have custom methods for locating elements, perhaps by label, but as soon as a page breaks this pattern you start introducing unnecessary logic into your finders, and over time these can become a nightmare to debug or maintain. Interpreting the pages you want to automate as PageObjects means those methods are already in the context of the page.

An Example
Example of a C# PageObject with notations
So this is where we define the elements we want to interact with, along with how we want WebDriver to find these elements. I am utilising the FindsBy attribute, which is available in the OpenQA.Selenium.Support.PageObjects namespace. This means we no longer have to write driver.findElement(By.ID("X")); this supporting class will take care of all that. Another benefit of the library is that WebDriver will now automatically get the element every time we want to use it, meaning that we always have the latest element, avoiding StaleElementReferenceException.

Now let's look at the naming convention I have adopted here. It's Hungarian notation. Yes! This does appear old school, but it makes my code and methods really easy to read, meaning that debugging is a lot easier. It's also easier for new members to follow the code. Of course I could just write textboxPassword, but I find shortening them is my preference. The benefit of following a naming convention in this format is that when you are creating your methods, it's very easy to find the element you want to interact with.

The PageFactory requires your instance of WebDriver to initialise the PageObject. I tend to pass the driver into the PageObject from the check and then assign it to a local variable. But of course you could just have each PageObject read the driver instance from a test base or from a static class.

We then need to initialise the PageObject with the PageFactory; this allows the PageFactory to do its magic in terms of locating our elements from the FindsBy attributes. I have seen myself and others initialise the PageObject in the check or step binding, however thanks to the this keyword in C# we can do it in the constructor, removing unnecessary code from the check.
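As a sketch, again assuming a hypothetical LoginPage, the constructor can take the driver and initialise the factory in one place:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Support.PageObjects;

public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
        // PageFactory wires up the FindsBy proxies for this instance,
        // so the check only has to write: var page = new LoginPage(driver);
        PageFactory.InitElements(driver, this);
    }
}
```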

Are we loaded? Probably. That doesn't really work with automation, so we can harness the constructor and add a wait in. It's important to wait for something specific to the page, and of course this won't take JavaScript into consideration. But whatever approach you take, the constructor can be a good place to do this. You could also look at LoadableComponent, however I am yet to need that level of abstraction.
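For example, the constructor could wait on an element specific to the page; in this sketch, WebDriverWait (from OpenQA.Selenium.Support.UI) and the "login" locator are illustrative assumptions:

```csharp
public LoginPage(IWebDriver driver)
{
    this.driver = driver;
    PageFactory.InitElements(driver, this);

    // Wait for something specific to this page before carrying on.
    // Note this won't account for JavaScript still running on the page.
    new WebDriverWait(driver, TimeSpan.FromSeconds(10))
        .Until(d => d.FindElements(By.Id("login")).Count > 0);
}
```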

What do you want to do? What does the page offer?
I break this down into the following:
Get - Read something from the page. This could be text, input values, images and lots more. I tend to use the keyword "Read" to name methods related to a get action. These methods simply return what they find.

Set - Populate, select or tick something on the page. Again, considering the context of the element I want to set something on, I tend to use the keywords "Populate, Select, Tick, Check". I make these methods voids. I could add logic in to check the action was done, but my goal is not to test WebDriver, plus a well defined check would fail sharpish if the set failed.

Check - This is where you want to check something on the page. Different from a read in that this method will have logic and tends to just return true or false. I start these method names with the keyword "Is"; an example would be IsErrorMessageDisplayed, which would just check for an element and return the result as a bool.

Do - This is where you are going to do something that causes you to move away from the page, click a link or trigger a popup perhaps. Click is the most common keyword for me here, e.g. ClickLoginButton. These methods tend to return the PageObject of the page or part of a page triggered by this action.
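Sticking with the hypothetical login page, the four kinds of method might look like this (all names are illustrative):

```csharp
// Get - read something from the page and return it
public string ReadErrorMessage()
{
    return lblError.Text;
}

// Set - populate something on the page, returning nothing
public void PopulateUsername(string username)
{
    txtUsername.SendKeys(username);
}

// Check - contains logic and returns a bool
public bool IsErrorMessageDisplayed()
{
    return driver.FindElements(By.Id("error-message")).Count > 0;
}

// Do - an action that moves us away from the page,
// returning the PageObject that the action triggers
public HomePage ClickLoginButton()
{
    btnLogin.Click();
    return new HomePage(driver);
}
```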

Using PageObjects
Aside from the abstraction and maintenance benefits I believe come from using the PageObject pattern they make your checks really easy to read. 

Lets look at an example.
A check using a PageObject
We can immediately tell which page the check is interacting with, and because we have taken care in naming our methods it is clear to everyone what the check is trying to do. Imagine if this was just a long script consisting of driver.FindElement(By.Id("X")).SendKeys or .Click. We have something that is very readable. Note the 4th line of the check: we are using the same variable, but the method being called will return us a new instance of the PageObject, as the page will have reloaded and is therefore a new page. If this method returned a different page we would create that using the method, e.g. var homePage = loginPage.ClickLogin().
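A sketch of such a check, using the hypothetical LoginPage from earlier (the method names are assumed):

```csharp
[Test]
public void InvalidPasswordShowsAnError()
{
    var loginPage = new LoginPage(driver);
    loginPage.PopulateUsername("richard");
    loginPage.PopulatePassword("not-the-password");
    // Same variable, but a new PageObject instance: the page reloaded
    loginPage = loginPage.ClickLoginExpectingError();
    Assert.IsTrue(loginPage.IsErrorMessageDisplayed());
}
```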

Do assertions belong in a PageObject? For me, no. The PageObject is a dumb messenger; we use it to interact with the page, and all it cares about is how it goes about doing that. You design checks to check something specific; if you design them well enough and the oracle is broken, the check will fail. There is no need to check the page title or the URL every time, it's unnecessary overhead. "But I get it for free", I've heard this argument, but how is it free? You are making additional calls to the driver, and that takes time for something your check doesn't require. It can cause a large increase in execution time if all your PageObjects are doing this, even more so if you switch to Grid or utilise a cloud provider.

But my main gripe is: don't break your abstraction. Your PageObject doesn't need to be tied to a test framework. It doesn't know it's going to be used for testing, it doesn't know what's right or wrong; it knows how to interact with a web page, so leave it at that.

Base PageObjects
Using inheritance we can allow one PageObject to utilise methods from another. For example, if your application has a navigation panel, you can create a PageObject for interacting with that panel and have all the other PageObjects inherit from it, allowing you to interact with the panel from any PageObject within your check. This removes the need to create the navigation panel PageObject every time you want to navigate around the application.
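A sketch of the idea, with a hypothetical navigation panel (names and locators assumed):

```csharp
// Base PageObject owning the navigation panel elements and methods
public abstract class BasePage
{
    protected readonly IWebDriver driver;

    protected BasePage(IWebDriver driver)
    {
        this.driver = driver;
        PageFactory.InitElements(driver, this);
    }

    [FindsBy(How = How.Id, Using = "nav-account")]
    private IWebElement lnkAccount;

    public AccountPage ClickAccountLink()
    {
        lnkAccount.Click();
        return new AccountPage(driver);
    }
}

// Any page inheriting BasePage can use the panel methods directly
public class HomePage : BasePage
{
    public HomePage(IWebDriver driver) : base(driver) { }
}
```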

Helper Methods
After creating numerous checks within your suite, you will likely notice that you find yourself repeating a sequence of PageObject creation and method calls, for example logging into an application. I like to collate these and create what I call a helper.
Example of a helper
This allows me to still utilise my PageObjects and the benefits that come with them, while removing duplication from my checks. For example, if there was an additional requirement on the login page to enter the answer to my secret question, without a helper I would have to update all my checks to add in this action. This way I can just update the helper and therefore update all the checks.
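A sketch of such a helper, assuming the hypothetical LoginPage and HomePage from earlier:

```csharp
public static class LoginHelper
{
    // Collates the repeated login sequence into one place; if the login
    // flow changes, only this method needs updating, not every check.
    public static HomePage LogInAs(IWebDriver driver,
                                   string username, string password)
    {
        var loginPage = new LoginPage(driver);
        loginPage.PopulateUsername(username);
        loginPage.PopulatePassword(password);
        return loginPage.ClickLoginButton();
    }
}
```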

Ways to use your PageObject library
Once you have invested time in creating a PageObject library, you can harness it for more than just checks:

  • Monitoring (Create a custom monitor but use your PageObject library, so when the application updates so does the monitor).
  • Additional check projects. If you have different projects for different kinds of checks, e.g. smoke, acceptance, then you can reference the PageObject library from all of them. Be careful here though: you could have versioning issues if the smoke test project is updated before the code reaches that environment. This can be avoided if everything is in the same solution.
  • Tools to assist testing
    • A tool to fetch data from pages
    • Take screen shots to look for layout issues.

Take a screenshot of a page and, with four different coloured pens, draw squares around the elements you would want to "Get, Set, Do or Check" on. Done? Awesome, you have just designed your PageObject, now go and implement it.

I have been able to overcome a lot of maintenance issues by using the PageObject pattern. It's a simple level of abstraction to understand and can lead to further abstraction. I feel it also leads to more readable checks when following a naming convention as described. Debugging is faster too: establishing why a check is failing is simpler because I know which page is being interacted with, what the method is trying to achieve and which element it is using.

It is an investment, so I wouldn't recommend this for a throwaway project. It can feel like you are writing more code, but in my experience all that time will be recouped through faster maintenance.

Also, team members with less WebDriver experience could probably create some new checks without digging into the PageObject code.

As with everything, give it a go; hopefully this post will give you some guidance in doing that. Then make your own evaluation.

Who Tests The Checks

A common phrase to hear in the testing community, be it on Twitter or on forums, is "Who tests the tests?" Or in my case that would be "Who tests the checks?" or even "Who checks the checks?".

I am referring to automated checks: if you have written a check for the system, what's checking the check? And what's checking the check that's checking the original check? We could continue, but I hope you get the theme.

Well the answer is simple, you do and your team does. Thanks for reading...... OK I shall elaborate.

A common approach to creating checks is to create one locally against a local or deployed instance, work on it until it passes, then check it in. Job done! Well, it went green, so it's clearly working. But then later that day or week it starts failing, you have made the build red, and the pressure is on to fix it.

The way I view this is that automation is a tool, it's a piece of software that you are creating to assist with testing the system. So if you are testing the system that the developers are creating, which is a tool for your customers, why not test a system you are creating as a tool for your team?

If your check is passing, explore how you can make it fail, then make it fail. And importantly, is the check giving you sufficient information about why it failed? (I'm going to write more about this soon.)

So here are a few examples of how I test my checks.

If your check produces some prerequisite data, what happens if the exact same data is already there? Should it handle this scenario? Does it give you good feedback? What happens if the method for creating this data fails? Perhaps it's direct to the DB and you alter the connection string, or it's via an API and you alter your credentials; does the error direct you there or just tell you the check failed?

What if tearing down the database after a test run fails, what happens to your check then? Should it create a new record with the same data, or perhaps it should error?

Alter Assertions
You have written a check because you want to check something about the system, so your check will have 1-N assertions in it. Alter them, ensuring that the check now fails.

For example if you are checking some text on the screen is displayed, perhaps it's an error message, change the message you are expecting by 1-N characters and check that it now fails. Reverse that scenario, if you have access to the source code, change the text on the screen which should yield the same result.

Run the tests at least three times
I have seen and written checks that have this unique ability to pass for a while, then fail once, then pass again. Some people refer to them as "flaky"; I remember the guys from Salesforce at the Selenium Conference calling them "flappers". Either way, you will write them, and the majority of the time you won't discover them until they are on the CI. I have seen several reasons why a test can be flaky, the majority of them down to timing issues. So I have found that running them at least three times locally increases my confidence that I have written a stable check.

Alter the environment
Always creating your checks locally? If so, you may come across situations where your check is only passing because a locally deployed site on an awesome machine is fast! But as soon as you run it on CI it starts to fail. To mitigate this risk, I sometimes use Fiddler to slow down my connection to see how the check then performs. I have in the past also logged on to the CI machine or a VM and run my check in isolation to ensure it passes.

To get the most out of automated checks you want to be running them on CI. This comes with a potential concurrency issue: depending on your set-up, the same slave could be running several tests in parallel, so could your test be impacted by another test? For example, one test deleting shared data, or one test clearing the DB while another is still running. I sometimes call this test bleed.

More Automation
So what about more automation to test the automation? I try to avoid this, however I do feel there are certain scenarios where it could be an acceptable approach.
If you are using a third party API/library and you decide to write some extension methods for it, then it could add some value to write some checks for it.

However, if you have gone to the effort of creating a suite of automated checks, you should be running them all the time, so you should find out very quickly when something in your architecture has broken. You could therefore take the view that there is little value in spending time creating automated checks for the checks.

Code/Peer review
As mentioned earlier in this post, automation is software, where the customer is your team. So if you have a practice of doing peer/code reviews for your application code do the same for your automation code. You will also then take advantage of the "alter the environment" approach as the reviewer will execute the test on their machine.

In summary, I take the view that your automation is software, software you or your team are producing for the team, so test it. It will save you time in the long run; in my experience, many hours have been spent investigating failing checks only to find it's something obvious.

Update: Feb 16th 2016. Bas Dijkstra just wrote something similar, also worth reading: http://www.ontestautomation.com/do-you-check-your-automated-checks/

GUI Automation Tweet

Tweet from Twitter

Unfortunately, in my experience, when you mention automation to people they immediately think about GUI automation; perhaps it's because that's how they perceive their applications, or it's the only entry point they use when testing.
This is bad in my opinion, because while I think GUI automation is OK, it does have drawbacks compared with automation further down the system stack: it can be slower, harder to maintain, and requires third-party libraries. Seeing automation as only GUI automation means teams aren't exploring other possibilities: unit tests, API, DTO, or even going just lower than the UI and automating the JavaScript.

Only having GUI automation could be bad, because you are attempting to check the application as a whole therefore increasing the possibility of your checks failing due to something other than the layer you are actually trying to check, in this instance the GUI. You could of course mock everything below the UI but that could take a long time for little value.

If a GUI check fails, the majority of people will repeat it themselves, observing the UI for issues; if nothing obvious appears, they will then probably look at the logs to see if anything is reported there. It's likely that the error is actually being thrown further down the stack, let's say at the API level, but that investigation will have taken valuable time away from you and could have been picked up faster if there was automation at the API level.

In conclusion, I think GUI automation has a lot to offer any project, however it needs to be used in moderation, and teams need to be looking at automating further down the stack. Yes, you can have just GUI automation; I wouldn't recommend it, but I have seen it work. It depends!

For another post perhaps: some of you may be reading this and saying that automating further down the stack can get very technical. I agree, but it's the team's job to create automation. If you don't have the technical skills, get help from a developer and learn from them. Produce your checks as a pair, harnessing each other's skill sets.

WebDriver Factory

One of the many benefits of WebDriver is that the major browsers are all supported with a version of a driver, some within the browsers and others via a service. This means that you should be able to run your suites of tests against all these browsers. This is true; unfortunately, though, some browsers behave differently, so you're likely to face some locator discrepancies, but that's for another post!

So how can you write a framework allowing you to take advantage of these drivers? There are many approaches, it's code at the end of the day, but I use the approach I call the DriverFactory, as do a few others I have seen, such as this example by Jim Evans.

The concept is very simple. I have attached a diagram below, but here is the gist:

  • App.config file where we define our browser requirements
  • TestConfiguration object, which is static and therefore reads the App.config at runtime, creating an instance for us to use throughout the tests. In the example project below I only have driver-specific values, however in actual projects I also include any test config in there, e.g. usernames, passwords, DB connections etc.
  • DriverSetup this is specific to NUnit, but I use the SetUpFixture to deal with the requesting of a driver instead of doing this in each test. Because the namespace on this class is the highest, each test will call this code first before its own SetUp.
  • TestDriverFactory, this is where we interpret the TestConfiguration object to determine which DriverFactory configuration we require. This is decided based on the "Remote" value: if true we create a RemoteWebDriverConfig, if false a LocalWebDriverConfig.
    The TestDriverFactory will then pass those configs to the DriverFactory and expect back a driver that meets the requirements.
  • DriverFactory, this is where the required driver is created. It's a simple switch statement on the browser name.
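The factory itself can be as simple as this sketch (the browser names and driver classes are assumed to be the ones your project references):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;

public static class DriverFactory
{
    // A simple switch on the browser name taken from the config
    public static IWebDriver Create(string browser)
    {
        switch (browser.ToLowerInvariant())
        {
            case "chrome":
                return new ChromeDriver();
            case "firefox":
                return new FirefoxDriver();
            case "ie":
                return new InternetExplorerDriver();
            default:
                throw new ArgumentException("Unsupported browser: " + browser);
        }
    }
}
```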

An example project can be viewed on my GitHub page; click WebDriverFactoryExample. I have added extensive code comments so hopefully everyone can follow. There is also an example PageObject and test so you can see it all working together. Changing the browser name to any of those in the switch statement will work fine; if you are going to try it with RemoteWebDriver, remember to have it running with the correct browser versions registered on the nodes.

NOTE: To really take advantage of all the browsers, ensure you use the IWebDriver interface throughout your tests and PageObjects. As all drivers are contracted to this interface, you won't have any issues with different browsers. For instance, not all drivers (I haven't checked recently) have driver.FindElementBySomething, so if you've written your tests against a specific driver rather than IWebDriver, your tests could fail when using drivers that don't have such a method. So stick to the FindsBy attribute, and if you have to use the driver directly, ensure you use driver.FindElement(By.Something) or FindElements.

So what you have now is the ability to control the driver used to run the test suite via config. This allows you to take advantage of CI tools such as Jenkins to run your tests, potentially in parallel, against multiple browsers. Via CI you could have a main job, "Run Test Suite", which triggers downstream jobs of "Run Test Suite - IE", "Run Test Suite - Chrome" and so forth, by having those downstream jobs overwrite the app.config before the test run. Or you could have multiple config files in your solution, such as chrome.config, ie.config etc., and have your CI tool delete app.config and rename the required driver.config to app.config (this is required for NUnit), thus allowing you to keep the configs in source control and not in build jobs.

In my example I have the WebDriverFactory as a separate project, but to take this one step further, you could turn the separate project into a NuGet package (you would obviously need an internal NuGet server to do this), this will then allow you to add this package to any project where you require WebDriver, for example if you had multiple products, or multiple test suites for a product.

In doing this, if you execute your tests locally, you can store the additional driver services in this project and have them pulled into projects via NuGet. These would be InternetExplorerDriver, ChromeDriver and PhantomJS for example. It would then be possible to have your CI download the latest version of this package for you, making it very easy to update all your projects.

Thanks to Neil Kilbride and James Barker from Esendex for showing me the NuGet approach and the enhancements for dealing with local and remote config objects.

What is QA?

This is something that is discussed a lot on Twitter, sometimes defended strongly and sometimes turned into humour, yet the situation still doesn't appear to be changing. In my opinion it's very straightforward: Quality Assurance (QA) is not testing, nor is it the stage or step that takes place during a project to determine the quality of a product. I believe the current use of such terms is damaging to both QA and testing. One can exist without the other; in saying that though, if testing was happening, then one could easily say that the company's approach to QA is to have a testing stage. However, an approach to QA could have no testing phase at all. QA, as mentioned, is widely used throughout the software industry as the acronym for Quality Assurance: the stage in developing software where we assure the quality, where an individual or a team lead/manager stamps the product with a seal of quality.

“I QA Spokesperson, hereby state that the quality of the product has been assured, I have signed it so”.
That's some impressive work these individuals or teams take on. They must have spent weeks, even months, painfully trawling over all the available data; they must have done the following and more:
  • Gone back and interrogated the BA/POs: "You, is this what you really wanted, IS IT! You better not be lying to me, John! Bring in Sarah, she will make them talk!" 
  • Cross-referenced the written requirements to ensure they were indeed what the BA/PO stated they wanted, ensuring they are all present and correct, and of course stored in the correct format and in the correct location. Oh, how could I forget, then made sure that they aligned with what they wanted the system to do. 
  • Reviewed every line of code, checked every DB table, debugged every build job to ensure it's not doing any secret trickery. 
  • Ran each unit test manually and checked that there were enough unit tests. Counted them to double check that the CI server or the IDE wasn't lying to them. 
  • Interviewed all the developers: "Did you fully understand the feature you were writing?" Dev: "Yes I did". "Oh yeah, I bet you did. Bet you wrote more code than needed though, didn't you? Changed other functionality that wasn't necessary too, I bet, you £$%£". Dev: "I didn't, I swear". 
  • Checked every single piece of existing functionality to ensure nothing has changed. 
  • Tested all the new functionality of the product following the release.
I could go on, but you get my point and the train journey home is only an hour. What's more important here is that all this has to take place in the stage that's called QA, because none of the other stages have QA in their title, so obviously nothing can be done in those stages, right? QA is not a stage or a step, and it certainly isn't a team or someone's role. It's not testing or checking. So what is QA? For me, I use QA to label all the things individuals, teams and companies do to create an environment in which people can work to the best of their abilities and in turn produce quality products. In my view QA can take many forms; I would class the following as QA:
  • Employing talented people in the correct roles for them. 
  • Free tea and coffee, perhaps even biscuits and fruit. 
  • Two monitors, perhaps even three. 
  • A comfy chair. 
  • Having coding standards and reviews. 
  • Doing TDD or having unit tests. 
  • Some form of automated checking. 
  • Flexible working hours. 
  • Fair salaries. 
  • A test team. 
  • Regular team meetings, be it in the form of standups, retros even 1-2-1s. 
  • Continuous integration. 
  • Training and conferences.
All these things, plus hundreds more I am sure you could come up with, in my opinion come under QA: allowing employees to work to the best of their abilities in a comfortable, friendly environment where they are encouraged to raise their concerns and have them heard. QA as I have described it can take care of itself. I've had the privilege of working at places where QA was in the culture but they didn't know it; they referred to it as testing, in this context a stage where testing and checking was done as QA. As soon as I made them aware of the difference, there was a realisation that QA ran throughout the whole process. This caused many to realise that some of the things they did were in fact QA, and subsequently this led to them studying the impact of such tasks and improving them because of their relation to quality.

I can't help but see this post as a stepping stone, because whilst QA is definitely not testing, I am not sure what I have described is even QA. This requires more thought, but I am not sure QA even needs to be a "thing" any more. What I have described as QA isn't assuring quality at all; it's trying to embed quality into employees, processes and subsequently the product. So there is certainly work to be done to stop testing stages and test teams being referred to as QA, but for the future of QA?

“QA is Dead!” or at least “QA is severely wounded”

Checking If An Element Is Present/Displayed With WebDriver

On several sites I have worked on in recent years there has always been a check scenario whereby I have wanted to verify that something isn't on the page, such as an error message or a field not required in a given context.

WebDriver by design, as it's intended to show you what the user can see, will throw a NoSuchElementException if given a locator for an element that isn't on the page.

So what most people write is a method containing a try-catch that returns a bool indicating whether the element is on the page or not. Something like this:
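A sketch of that method (the Driver field is assumed to be your IWebDriver instance):

```csharp
public bool IsElementPresent(By locator)
{
    try
    {
        Driver.FindElement(locator);
        return true;
    }
    catch (NoSuchElementException)
    {
        // The element isn't on the page
        return false;
    }
}
```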

In my opinion this is a nice way to do it. You could of course return the exception and assert against that, but I find a bool a nicer approach. What is often overlooked with this approach, though, is the default timeout for the driver. If you haven't altered this it will be 0, so you won't have this problem, but I know a lot of people do alter it to reduce flakiness, so let's say it's 20 seconds. What happens when you run the above code is that WebDriver will try to find the element for that whole duration, making it look like your test has hung, before declaring it not present. This can add up to a lengthy amount of time depending on how many times you check that something isn't present during your suite.

One way to avoid this is to reduce the driver timeout before the try-catch and then set it back to the appropriate value afterwards. This could be done in a helper class, or if you have created a custom driver it can be added as a method on that. Something like this:
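A sketch, assuming an implicit wait of 20 seconds is your suite's normal value:

```csharp
public bool IsElementPresent(By locator)
{
    // Drop the implicit wait so a missing element fails fast
    Driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.Zero);
    try
    {
        Driver.FindElement(locator);
        return true;
    }
    catch (NoSuchElementException)
    {
        return false;
    }
    finally
    {
        // Restore the suite's normal timeout afterwards
        Driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(20));
    }
}
```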

Another way to do this is with the IDisposable interface, as introduced to me by a chap called James Barker: use a using statement and make your call inside it, and the timeout will automatically be set back after the call, during disposal. Something like this:
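A sketch of that idea; because the older WebDriver API doesn't let you read the current implicit wait, the original value is passed in:

```csharp
public class TimeoutOverride : IDisposable
{
    private readonly IWebDriver driver;
    private readonly TimeSpan original;

    public TimeoutOverride(IWebDriver driver, TimeSpan temporary, TimeSpan original)
    {
        this.driver = driver;
        this.original = original;
        driver.Manage().Timeouts().ImplicitlyWait(temporary);
    }

    public void Dispose()
    {
        // Restore the previous timeout when the using block ends
        driver.Manage().Timeouts().ImplicitlyWait(original);
    }
}

// Usage:
// using (new TimeoutOverride(Driver, TimeSpan.Zero, TimeSpan.FromSeconds(20)))
// {
//     present = IsElementPresent(By.Id("error"));
// }
```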

However, if you are following the PageObject approach and using the PageFactory, then you would want to pass the appropriate IWebElement into the method. You might initially think this isn't possible because the element would be null, but it actually isn't, because at the point of initialising the PageObject the factory creates it as a proxy (something like an IWebElementProxy, some black magic :) ), so you can actually pass the IWebElement to a method and check its Displayed property inside a try-catch like above.

I achieve this with the following code. Note that this code isn't foolproof: if the element is present but not displayed you will get false, and if the element isn't present at all you will also get false. So if your intention is to check that the element isn't in the code at all, you're probably better off following one of the patterns above. The issue there is that you will likely have to duplicate your locator, do some nasty reflection to get it, or stick it in a const string. However, I haven't had a need to do that; for me, as long as a user cannot see the element, I am happy.
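A sketch of that method; remember it returns false both when the element is hidden and when it isn't in the DOM at all:

```csharp
public bool IsElementDisplayed(IWebElement element)
{
    try
    {
        // Works with PageFactory proxies: the element is only
        // resolved when Displayed is read
        return element.Displayed;
    }
    catch (NoSuchElementException)
    {
        return false;
    }
}
```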

So there you have it, several approaches to dealing with checking if an element is present/displayed.
Hope this is of use to some of you and happy coding!

Here is another approach for you. Jim Holmes asked me and Jim Evans how to check if an element is not present; Jim replied with this very neat approach, which could also be used to check if an element is present. The reason this works is that FindElements won't throw an exception if none are present; the collection is just empty.
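A sketch of that approach:

```csharp
public bool IsElementPresent(By locator)
{
    // FindElements never throws; an absent element
    // just means an empty collection
    return Driver.FindElements(locator).Count > 0;
}
```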