Charles Proxy To The Rescue of Adobe Bloodhound

The client I'm currently working at has decided to switch from Google Analytics to Adobe Analytics for our native mobile application. This created a new testing problem for me: how do I test this?

Our existing solution for Google Analytics (GA) was to use a development key, combined with the Real-Time feature in GA and checking back the next day to see the final propagated results.

Adobe Analytics, however, offers a tool to help test it, called Bloodhound. Bloodhound is simply a proxy: you configure your device to route all its traffic via Bloodhound running on your machine, just as you would with any proxy tool such as Charles or Fiddler. So I gave it a go.

Initially I got no results in Bloodhound. It turns out that's due to the SSL restrictions on iOS, so no problem, Bloodhound comes with a cert you just need to install. I installed it and, just like magic, analytics started appearing in Bloodhound! The magic didn't last long though, because while Bloodhound seemed happy enough to pick up all the analytics calls, it seemed to kill all other calls, rendering my app pretty much useless! I could interact with our navigation and see the app making tap and page calls to analytics, but I couldn't get to 50% of the screens as they required server calls!

I discovered two ways to fix my issue.

The first one is very specific to our context. The iOS developers of our app had built an offline version of the API calls, essentially a mock. So I was able to configure our app to use the offline mode, and I could now navigate to all the pages and see whether the analytics calls were correct. Sadly, though, I ran into an issue with our offline mode: it didn't quite have all the scenarios I needed to be happy with my testing, so I was back to the problem of Bloodhound eating my app's requests.

I did some googling and found some forums, but most of them were Adobe forums telling you to contact support, who would solve your problem with you. Nice support, but not something I was interested in at this stage; I kind of expected it to take a while to get a response from such a big company, probably a bias that could do with being tested again.

Anyhow, I pondered for a few more minutes and decided, well, if Bloodhound can intercept these calls, surely any proxy could. Sure enough, Charles could see all the requests to the Adobe servers, and I could see the specific analytics calls being made. However, it wasn't as easy as Bloodhound.

Bloodhound was designed to show these analytics; Charles wasn't. So in Charles I got calls with lots of analytics in them, in their raw form. Bloodhound was designed to strip the individual analytics out of the calls, meaning I could easily find the specific analytic caused by the action I'd just taken, such as tapping a button or landing on a specific page.

So while I could now use the app against real servers, see all the analytics and test them, it just wasn't as easy as I wanted. As said, Bloodhound made it really easy to test individual analytics. The process of tapping/navigating in the app, then checking Bloodhound, was quick and efficient. It was a bit more cumbersome in Charles, as I had to read over the raw call to find the exact analytic.

So again I pondered for a few more minutes, and remembered a feature for request forwarding/mapping in Charles. Something I hadn't actually used for a while, so it required a few googles to refresh my memory. But there it was, Map Remote, the missing piece. Charles allows you to map requests to a remote server. A feature I'd actually used in the past to test against different versions of an API, tricking the app into using a version of the API it hadn't yet been coded to use, a great way to test early.

However, as Bloodhound was running on my local machine, instead of mapping remotely I wanted to map locally. Not a problem: instead of a remote IP you just enter localhost or your machine's IP. So I enabled 'Map Remote' in Charles and added a new rule. I should add here that you can configure the port Bloodhound runs on; in this instance it was 50000.

I configured my device to point to Charles and instructed Charles to map any calls for our Adobe server (you can find this out from the Adobe SDK config, or just from looking at the recorded traffic in Charles) to my IP address on port 50000, which was where Bloodhound was running. I saved my config and gave it a go.
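For illustration, the Map Remote rule looked roughly like this; the Adobe host is a placeholder you'd replace with your own tracking server, and the port is whatever you've configured Bloodhound to listen on:

    Map From:  Host: <your Adobe tracking server>
    Map To:    Host: <your machine's IP>   Port: 50000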

Voila! It worked. I was now able to see all my analytics in Bloodhound and my app was also able to hit the server, allowing me to hit every page in the app, and verify the analytics in Bloodhound. Win.

So there you have it, a nice combination of tools. I really do love proxies! Such a powerful tool.

Give Your Automated Checks a Voice

We have robots doing a lot for us now, well when I say robots, I mean automation, but robots sounded well cooler. We have them running automated checks for us, we have them deploying builds into production, we have them creating test data, spinning up machines and environments, plus much much more. We love tools, and rightly so, they're awesome, most of them.

However, the context for this post is automated checks. They offer us much more than pass and fail, but only if you ask them to tell you. I've listed four things that I've used in the past, and how they've helped me.

Execution time

A lot of the projects I used to work on several years ago were for digital agencies. Short to mid-term projects, three to six months or so. I used to write automated checks as frequently as I could, as in such an environment the fast feedback was invaluable. That environment meant regular last-minute changes, a hot fix here, some over there. Something I'm not against, but you need to be able to deal with it. The downside of all these changes was that some focused testing would always slip; in this case it was always performance testing. It wasn't a skill of mine, and it still really isn't, but I know enough to get by now.

So I changed my approach. I started storing the execution time of all my automated checks. Sadly, there was no CI on this project, not that that's an excuse, but this was about six years ago. So the automation was being run from my machine whenever a new version was deployed, and after each run I would add the time to a spreadsheet. I was probably running them twice a day, so I soon collated a decent data set.

My thinking was that this metric might inform me of spikes in the execution time, which could potentially be a performance issue, or of a performance increase that someone might want to know about.
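If you fancy doing something similar, here's a rough sketch in Java (JUnit 4) of a rule that appends each check's duration to a CSV you could pull into a spreadsheet; the file name and row format are just examples:

    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.time.LocalDateTime;

    // Appends "timestamp, check name, duration in ms" to a CSV after every check,
    // so the numbers can be charted over time.
    public class ExecutionTimeRule extends TestWatcher {

        private long startNanos;

        @Override
        protected void starting(Description description) {
            startNanos = System.nanoTime();
        }

        @Override
        protected void finished(Description description) {
            long durationMs = (System.nanoTime() - startNanos) / 1_000_000;
            String row = String.format("%s,%s,%d%n",
                    LocalDateTime.now(), description.getDisplayName(), durationMs);
            try {
                Files.write(Paths.get("execution-times.csv"), row.getBytes(),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                System.err.println("Could not record execution time: " + e.getMessage());
            }
        }
    }

Attach it to a check class with @Rule public ExecutionTimeRule timer = new ExecutionTimeRule(); and every run adds another row to the data set.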

Did it work? It did, in that it found two issues. The first issue was actually related to the product. Some of the SQL statements had been refactored, with the aim of improvement, but sadly the opposite was seen. Now sure, on reflection we should have had other ways to find this issue, but we didn't, and reusing an existing artefact allowed me to find it.

The second issue was caused by me! :D

I refactored some of my Selenium code and updated the version, and well, it didn't go well: the execution time increased by 50%. It turned out I'd written some sloppy code, but the release also had a bug in it, which I was able to find thanks to my spreadsheet informing me.

So I'm not telling you that you should do all your performance testing using the build execution time, that would be ridiculous, especially in 2016, but keep your eye on your build time, and specifically each component of the build; it may be waving a big sign at you, requesting you take a closer look. Plus, most CI tools will visually show you this data these days.

Assertions

"Expected true, but was false"
"Expected 6, but was 5"

I'm sure, just like me, you've all seen some similar failed assertions. Stop writing them. All the test frameworks I've used now allow you to pass in a message, use it!

So in the above examples, what was expected to be true? What was expected to be 6? A simple contextual message in the assertion can really speed up the debugging.

"I was expecting the number of users to be 6, but it was 5". A simple String.Format can achieve this.

Now some would argue that the name of the check should provide you with some information on what the assertion relates to. Sure, I've seen that, but at the same time I don't think it does any harm to add a contextual message to the assertion. I know it's personally saved me a lot of time when debugging failed checks.

Tell me all you know

A common practice I see is getting your Selenium checks to take a screenshot on failure. A nice pattern; the screenshot can be really useful in understanding the problem. But most applications have a lot more to offer you.

Take advantage of the code that the Selenium project offers you. Hook into the event listeners to write things out to the console. Give the robot a voice. "I clicked button <locator>", "I typed 'name' into element <locator>", "I waited for X seconds for <element>". These tell the story of the check, and again, all this speeds up the debugging process when they fail.
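As a rough sketch of the idea, assuming Java and Selenium 3's EventFiringWebDriver (Selenium 4 offers an equivalent listener mechanism):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
    import org.openqa.selenium.support.events.EventFiringWebDriver;

    // A listener that narrates what the check is doing as it does it.
    public class NarratingListener extends AbstractWebDriverEventListener {

        @Override
        public void beforeNavigateTo(String url, WebDriver driver) {
            System.out.println("I am navigating to " + url);
        }

        @Override
        public void beforeFindBy(By by, WebElement element, WebDriver driver) {
            System.out.println("I am looking for " + by);
        }

        @Override
        public void afterClickOn(WebElement element, WebDriver driver) {
            System.out.println("I clicked " + element);
        }
        // ...similar hooks exist for typing, navigating back/forward, script execution, etc.
    }

    // Wiring it up around whatever driver you already use:
    // WebDriver driver = new EventFiringWebDriver(new ChromeDriver()).register(new NarratingListener());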

Application specific: does your application have log files? If so, get the failed check to pull those down to a central location, so you can quickly refer to them when debugging, instead of having to go and get them manually. I did such a thing; a nice trick I added was to only grab a specific window of the log, using the time noted at the start of the check to determine it. Saved a lot of time traversing log files.
Other application-specific things may also be useful, such as the user you were logged in as, the version of the application being checked, and the environment it was being executed against.
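A rough sketch of that log-grabbing trick, again in Java with JUnit 4; the log path and the assumption that each line starts with an ISO-8601 timestamp are both illustrative, adjust them to wherever and however your application logs:

    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.Instant;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class LogGrabber extends TestWatcher {

        private Instant testStart;

        @Override
        protected void starting(Description description) {
            testStart = Instant.now(); // remember when the check started
        }

        @Override
        protected void failed(Throwable e, Description description) {
            // Hypothetical path; point this at wherever your application writes its logs.
            Path appLog = Paths.get("/var/log/myapp/app.log");
            try (Stream<String> lines = Files.lines(appLog)) {
                List<String> window = lines
                        .filter(this::loggedAfterTestStart) // keep only the window for this check
                        .collect(Collectors.toList());
                Path out = Paths.get("failures", description.getMethodName() + ".log");
                Files.createDirectories(out.getParent());
                Files.write(out, window);
            } catch (IOException io) {
                System.err.println("Could not pull application logs: " + io.getMessage());
            }
        }

        // Assumes each log line starts with an ISO-8601 timestamp, e.g. "2016-05-01T10:15:30Z ..."
        private boolean loggedAfterTestStart(String line) {
            try {
                return Instant.parse(line.substring(0, 20)).isAfter(testStart);
            } catch (Exception ignored) {
                return false; // lines without a parsable timestamp are skipped
            }
        }
    }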

Why does nobody love me?

"I never get an attention, I'm lonely". Alright, I agree that's a weird thing for your automation to tell you, but what I'm getting at is, when was the last time this check was changed or had its value reviewed.

I've done it, I've had checks that lasted the whole duration of my employment; I never looked at them, they were green, all gravy. Michael Bolton wrote a nice piece on green. However, this doesn't mean the check was returning me any value. Having not read it for so long, I probably couldn't even have told you what it was checking.
So get the automation to tell you. You could put a date stamp on each check, then write a simple script to read over them and flag any that are more than X days/weeks old. Or you could use your version control tool to see when the last commit was.
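If you fancied the date-stamp route, a rough Java sketch of the idea could look like this; the @LastReviewed annotation is entirely made up for illustration:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;
    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;

    // A hypothetical date stamp you could put on each check...
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface LastReviewed {
        String value(); // ISO date, e.g. "2016-03-01"
    }

    // ...and a simple audit that flags any check not reviewed for a while.
    class ReviewAudit {

        static void flagStaleChecks(Class<?> checkClass, long maxAgeDays) {
            for (Method check : checkClass.getDeclaredMethods()) {
                LastReviewed reviewed = check.getAnnotation(LastReviewed.class);
                if (reviewed == null) {
                    continue;
                }
                long ageDays = ChronoUnit.DAYS.between(LocalDate.parse(reviewed.value()), LocalDate.now());
                if (ageDays > maxAgeDays) {
                    System.out.printf("%s.%s was last reviewed %d days ago - is it still earning its keep?%n",
                            checkClass.getSimpleName(), check.getName(), ageDays);
                }
            }
        }
    }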

The point being we should only have automated checks that are returning us value, so in my opinion for that to be the case, we should be regularly reviewing them, so we understand their value. This could just be a nice way of letting them help us with this process.

In the context I used this in, I opted for the source control approach in the end. I never ended up deleting any checks; however, I did extend some to check more than they originally did. The best thing I got out of doing this, though, was the regular review. When discussing risks on the project, all the checks were fresh in my mind, so I was able to mitigate some risks because I knew we had some coverage from the checks, allowing me to plan my additional testing accordingly.

Conclusion

Think about what else your automated checks could be telling you, think about the data they produce that could be really useful in guiding your testing.

So there we have it, sorry it was a bit long. I hope it was an interesting read.

If you've extended your automation to tell you more than just pass and fail, I would be keen to hear about it. I may write some more examples up in the future, but these were the main four that initially came to mind.

Webinar Follow-up: New Testing Battlefields

I recently had the privilege of taking part in a webinar on the topic of ‘New Testing Battlefields’, which in the context of this webinar and post are Mobile and IoT. The webinar was arranged by Telerik. I was joined by three other testing minds:

  • Jim Holmes – Jim was our host, but also active in the discussions, being a tester himself and currently doing some interesting work in the automotive industry.
  • Daniel Knott – Daniel is a tester mostly working on Android over at Xing. He is also the author of ‘Hands-On Mobile App Testing’.
  • Iliyan Panchev – An ex-tester, now Program Manager for Test Studio at Progress.

I took some notes during the webinar, which I’m simply going to expand on during this post. If you want to watch the webinar before reading the rest, a recording is available over on YouTube.

Interfaces

I spoke a lot during the webinar about how, in my opinion, most mobile applications are just interfaces to the main system, that system being the backend behind all the APIs. That, again IMO, is the product, not the mobile app. In some cases the APIs themselves could be viewed as the product. But the point I was trying to make is that the best apps I’ve used and worked on are those where the front end is as dumb as possible, keeping the majority, if not all, of the business logic in the backend. As we’ll discuss later in the post, this also makes testing significantly easier, especially when looking to add some automated checks to your testing approach.
In such a fast-moving industry this also allows you to try to stay ahead of the competition and keep up to speed with all the latest trends in UX, as you can redesign the app without having to focus as much on the business logic.

All companies are software companies

The theme at Davos 2016 was “The Fourth Industrial Revolution”, referring to the advances of ‘economy-changing’ technologies. Unfortunately, I cannot find the post, but I recall hearing an interview with a CEO saying that all companies are now software companies; it appears it’s software that is giving companies their edge these days. With this in mind, I think we’re at the beginning of this boom, and the interfaces and applications of this technology we are going to be testing are mind-boggling. I personally embrace technology, so I can’t wait!

Internet of Shit

I was thinking about this during the webinar, and Daniel had the courage to bring it up, so I’m just adding a link here. If you haven’t seen this Twitter account, it’s brilliant, it’s hilarious, it’s also terrifying!!!

Mobile, it’s personal

We mentioned many user aspects during the webinar, mostly focusing on how a mobile device is personal. Firstly, users configure their devices any which way they like, which is not such a huge problem on iOS, but can be very problematic on Android. Some system settings, such as fonts, can actually change your app, amongst many other things.

We also mentioned speed a fair amount, and it’s important. People are normally doing 10 things on their phone: reading the news, whilst sending a tweet, whilst WhatsApping Dave and checking social media. So when they switch to your app, it needs to work, immediately, or they will leave. In the context of an app where there are multiple alternatives they can download, that is exactly what they will do: delete your app and download another one!

Interruptions: we also explored how other apps, and the fact it’s a mobile phone, can impact your testing. A user will receive a call whilst using your app, what does your app do? They get a notification from another app and tap it; when your app resumes, what does it do? When the user’s connection drops and recovers, how does your app behave?

We had some really interesting conversations around the above, but they were all in the context of testing mobile and the shortcomings of most automation tools. However, this is OK IMO. As mentioned during the webinar, I love the quote from Dhanasekar ~ “It’s a sin to test a mobile app at your desk”, which I’ve adapted to “It’s a sin to only test a mobile app at your desk”. We have to get out there when testing our apps, out into a real environment. We need to test them on a real phone, with real other apps and a SIM card so calls can be received. This is how a user is going to use your app. Simulating all this is difficult, but more importantly very time consuming, time which most teams simply do not have.

Tools

We touched on a few tools during the webinar, but my conclusion on the majority of mobile tools aimed at testing/checking is that they are very immature. This isn’t surprising though; the platforms themselves are immature. For example, iOS was only released in 2007, making it only nine years old. It’s been in a constant state of change ever since, with big architecture changes in most major releases, due to the very context of mobile and mobile hardware. It’s evolving at such a rate that change is inevitable. So this means the tool vendors are always playing catch-up. There is hope though: tools from the platform vendors themselves (XCUITest, Espresso) have become significantly better in the last year, and I hope this trend continues.

Proxies

I mentioned the importance of testing with proxies several times during the webinar, making a point of repeating it. It’s important. I find it virtually impossible to test a mobile application without using a proxy. The reason being, I need to see what data the app is getting, what data the app is sending, and what APIs the app is calling, and when. A proxy allows me to see this information. It’s the most important tool when testing mobile/IoT, IMO. Also, from a testing context, it allows you to test multiple scenarios with ease, such as status codes and different lengths of data, as you can alter the requests and responses leaving and arriving on your device.
If you haven’t tested using a proxy before, please try one! I use Charles Proxy on my Mac, but Fiddler is also a great option for Windows.

Twitter Driven Testing

I first heard about Twitter Driven Testing from the Panda, Pradeep Soundararajan. He spoke about how he was testing a public-facing website and turned to Twitter to read what users were saying about his product. Of course, as with most things, there were positive and negative comments, but all this information turned out to be a great source of test ideas for Pradeep. This is something I’ve continued to do, however I’ve expanded beyond Twitter now and also look on forums and Facebook. But as we are talking mobile, I also take advantage of the app stores; the reviews people go to the effort of leaving there are gold for a tester, or anyone looking to do some testing.

Doing some basic searches on social media for your product, to see what people are saying about it, could lead to some very interesting testing.

Data Builder Pattern

Someone asked a question during the webinar about how to manage test data. I mentioned this is a place where I heavily rely on automation. I use a pattern called the Test Data Builder Pattern, which I’ve blogged about already; you can read that post here.
In addition to that post, I suggested adding a common interface to your data creation code, such as an HTTP API. This allows you to take advantage of that code from many interfaces: your automated checks could call it, and you could use a tool like Postman to call it whilst testing, meaning you don’t have to keep repeating code that creates data.
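As a rough sketch of what consuming such an interface could look like, assuming a hypothetical /test-data/users endpoint exposed by your own data creation code (the URL and payload are illustrative only), in Java:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class TestDataClient {

        // POSTs a user definition to the hypothetical test-data endpoint and returns the status code.
        public static int createUser(String json) throws Exception {
            URL url = new URL("http://localhost:8080/test-data/users"); // hypothetical endpoint
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Content-Type", "application/json");
            connection.setDoOutput(true);
            try (OutputStream body = connection.getOutputStream()) {
                body.write(json.getBytes(StandardCharsets.UTF_8));
            }
            return connection.getResponseCode();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(createUser("{\"name\":\"Dave\",\"admin\":true}"));
        }
    }

The same endpoint can then be hit from your automated checks, from Postman while testing, or from anywhere else you need data.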

Modelling

The final thing I made a note of was my advice to model your system. Initially a high-level model: just a box for the interfaces, a box for the APIs and a box for the databases. This allows you to see the system boundaries with ease. These boundaries can help identify where mocking could be introduced to assist with testing, but also show where data is moving in the system. I encourage everyone to have such a model; they are a fantastic tool to assist conversations about the product.

So if you got this far, congratulations, I hope you enjoyed it.