Category Archives: Technical Testing

“Mobile” Performance Testing

What is mobile performance testing?

The function of performance testing is not really understood by most people beyond the requirement to "make sure it works fast".  So how do we go about defining mobile performance testing?  While the majority of test planning and execution is not that much different from other types of performance testing, there are a number of aspects that exacerbate the risks, for example the technology involved and the increased number of variations and moving parts in a system's ecosystem.

When we think of traditional performance testing, we think of load or stress testing; however, these do not really apply to "on-device" mobile testing.  Instead, we need to be concerned about things like battery life, user-experience performance, app loading, screen/data rendering and screen interaction.  For the purposes of this post I am going to focus on just the "mobile" aspect of performance testing, mostly "on-device" testing.


Types of Testing

(Figure: high-level types of testing)


As with any good quality assurance approach to a project, testing begins at the requirements stage.  The different technology involved (a large number of variations of hardware and software, with each version of software being specific to the hardware in question) changes the approach to how your application is designed.

The question of Native (smoother user experience, but harder to deploy changes) vs Web (clunkier user experience, but easier to respond to change and to control the content served to users), or some kind of hybrid, needs to be asked and answered.  The decision will depend on your target audience and on the focus and complexity of your application.

What is your intended audience?  Is it a focused set of users for a distinct set of features, or are you aiming for a more generic, widely used set of features?  A close relationship with your users may allow you to set up some sort of Pilot or Early Access program, which will let you use monitoring tools to capture real-life data on the performance of your application before releasing it to the masses.

These are just some of the questions to be asked and answered in the requirements phase that will play a big part in determining how much of an investment to put into technical performance testing.


The guiding rule of all performance testing (to the lay person at least) is "Make the app FASTER!!".  While this may sound facetious, there is at least an element of truth to it that we can develop and test for.  It does raise the question, however, of who is responsible for testing.  The short answer, especially in this scenario, is that testing begins with the developers themselves.

Regardless of the platform you are developing for, each of them has a set of tools aimed at helping debug the performance of an application at the developer level, monitoring things like CPU, memory and data usage.

iOS

  • Instruments performance profiling has long been used for memory/CPU usage, etc.
  • Xcode 6 has also introduced a lot of new features aimed at helping test your app's performance, e.g. unit-test-level performance testing of code, where you are able to set baseline performance criteria and measure against them

More information on what Apple is working on can be found in the WWDC 2014 Session Videos, in particular the sessions:

  • Improving your app's performance with Instruments
  • What's new in Xcode 6

Android

  • Turning on developer options on your device gives you access to a lot of tools, such as:
    • CPU/GPU Usage,
    • Hardware Layer Updates
  • Android is also good at logging pretty much everything going on (which can be pretty verbose).  It always helps to monitor the logs as you are testing.
    • E.g. adb logcat | grep Skipped results in:
      Skipped 147 frames! The application may be doing too much work on 
      its main thread
  • The Android Device Monitor tools also have a lot of performance debugging tools embedded in them:
    • Hierarchy Viewer
      • How complex are your layouts?
    • Thread monitoring
      • How much time is spent in the Main thread and doing what?
      • The biggest culprit of poor performance is too much happening in the Main thread.
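Choreographer warnings like the one above are easy to miss in a verbose log stream.  As a rough illustration (this script and its threshold are my own, not an official Android tool), a few lines of Python can filter logcat output and flag big frame drops:

```python
import re

# Pattern used by Android's Choreographer warning, e.g.:
# "Skipped 147 frames!  The application may be doing too much work on its main thread."
FRAME_DROP = re.compile(r"Skipped (\d+) frames!")

def dropped_frames(logcat_lines, threshold=30):
    """Return (line, frame_count) pairs where more than `threshold` frames were skipped."""
    hits = []
    for line in logcat_lines:
        m = FRAME_DROP.search(line)
        if m and int(m.group(1)) > threshold:
            hits.append((line.strip(), int(m.group(1))))
    return hits

if __name__ == "__main__":
    sample = [
        "I/Choreographer(  613): Skipped 147 frames!  The application may be doing too much work on its main thread.",
        "D/SomeTag: a normal log line",
    ]
    for line, count in dropped_frames(sample):
        print(f"{count} frames dropped -> {line}")
```

Swap the sample list for `sys.stdin` and you can pipe `adb logcat` straight into it during a manual test session.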


Windows Phone

One of the many things Microsoft does well is build an integrated development environment.  While I haven't used these tools myself for mobile development, a quick Bing search turns up the Windows Phone Application Analysis tool.

  • You get this automatically as part of the Windows Phone SDK
  • It’s fully integrated into Visual Studio IDE
  • Runs against emulator or phone
  • Enables monitoring and profiling
  • App Analysis – performance and quality
  • Execution and Memory profiling
  • "…make quality assurance a part of the development cycle…"



When thinking about testing at the server level, the first question is "What is unique to mobile performance testing?".  There are a lot of similarities: you are still testing at the service/API level.  The main difference is that testing is exacerbated by the mobile platform.  There are a lot of different device/OS/network/bandwidth combinations, and more moving parts that can impact user experience.  So, at the end of the day, there are a number of things that need even more focus than in other types of performance testing, some being:

  • Data/packet size – you cannot guarantee that everyone is on a high-speed or reliable connection.  Keep data as small as possible.
  • Battery life – while there are no 'standards' for developing to preserve battery life, there are some guidelines to help in this area:
    • When transmitting data, it is better to transmit in larger batches rather than many smaller transactions.  This cuts down on the number of interactions and the amount of actual work the phone needs to do.
  • Latency – it is a given that response times on a mobile device will be impacted.  This will feed into your app's design more than anything.
  • Real devices vs simulators – always test on real devices.  When capturing traffic for your favourite load tool, always capture real device traffic going through a proxy.
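The batching guideline above can be sketched in a few lines.  This is a hypothetical client-side helper, not a real SDK API: the `send` callback and the batch size are placeholders for whatever your app actually uses.

```python
import json

class EventBatcher:
    """Buffer analytics/log events and send them in one request instead of many.

    Batching reduces radio wake-ups and per-request overhead, which is why
    fewer, larger transmissions tend to be kinder to the battery.
    """

    def __init__(self, send, batch_size=20):
        self.send = send          # caller-supplied network call (placeholder)
        self.batch_size = batch_size
        self.buffer = []

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(json.dumps(self.buffer))  # one payload, many events
            self.buffer = []
```

The same idea applies whatever language or logging SDK you use; the point is that twenty events should not mean twenty network round trips.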


So as we can see, "mobile" performance testing is not too different from other types of performance testing; we are simply dealing with a different technology platform.

I’ve mentioned some basic differences in the types of "performance" testing, across all levels.  As always, performance testing is not something to be left until the end of the development lifecycle.  Requirements definition and the type/design of your app both greatly influence how much performance testing you do.

Most importantly, though, performance testing is owned by everyone.  While the mobile platform is still relatively immature, advancements in toolsets are allowing developers to begin technical testing at a debug/unit-test level.  These tools are free, and there is no excuse not to be using them.




Using Charles for Mobile testing and debugging

I have now been testing mobile devices (both iOS and Android) for roughly 9 months, and boy has it been an experience, with a very steep learning curve for me.  One of the most useful techniques I have found for testing and debugging is monitoring the traffic going over the network, not just to verify what is being sent but also to monitor things like request and response headers and contents, HTTP response codes and the like.

In web development (or when using mobile simulators or emulators) you would use an HTTP proxy program, something like Fiddler, Firebug (for Firefox), Google Dev Tools, Charles, or some other tool, but how do you do this from a mobile device?  The short answer is: exactly the same way, you just need to make sure all traffic from the mobile device goes through your HTTP proxy tool of choice.

This post will explain how to set this process up using Charles in OSX on a MacBook Pro, but technically the same should work regardless of your toolset (not sure how different tools would handle SSL traffic).

Note: This is not intended to be a “How to use Charles” guide (there is a lot more to Charles than what I will be covering here!)

Install and Set-up Charles

Charles can be installed from the Charles Proxy website.

As tools go, it’s pretty simple to set-up and install.  The only things really worth mentioning are:

  • Make sure you switch on Enable transparent HTTP proxying under Proxy -> Proxy Settings.
  • Take note of the Port Number Charles is using on the above Proxy Settings screen.  You’ll be using it soon.
  • Under Proxy -> Recording Settings you may want to define a list of traffic to either Include or Exclude.  This will help reduce the amount of noise you have to wade through to see the traffic you really want (particularly if you are on a corporate network).

Apart from that, everything else falls under the umbrella of ‘standard Charles functionality’.

Configure Mobile device

Now we need to get the traffic from the mobile device going through Charles.  This is simple if your mobile device is connected to the same wi-fi network as the computer running Charles.  If it is not, you will need to find a way for the computer in question to become visible to the mobile device, for example by sharing your Mac's internet via wi-fi and connecting the phone directly to it.

Disclaimer: By sharing your computer's wi-fi you are essentially allowing another connection to your internal network.  Make sure you set it up securely, and definitely get permission from your network administrator, particularly if on a corporate network, before attempting this.  They may be able to provide another solution.

Directing traffic through Charles is as simple as setting up the proxy on your mobile's wi-fi connection.

  • Get the local IP Address of the computer running Charles
    • You can use either the IP address under System Preferences -> Network or the local IP address returned by the option under Charles’ Help menu.  Both seem to work.
  • Configure your phone to use the wi-fi network that has access to the computer running Charles
    • Set the proxy of the wi-fi network to Manual
    • Enter the IP address of the local computer from the step above
    • Enter the port number that is set in Charles

All traffic should now be visible in Charles.
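If you also drive any scripted checks from your desktop, you can point them at the same proxy so they show up in Charles next to the device traffic.  A minimal sketch using Python's standard library follows; the IP address and port are examples, so substitute the values from your own Charles Proxy Settings screen.

```python
import urllib.request

# Route Python's HTTP(S) requests through Charles so scripted calls appear
# in the same session as the device traffic. Example address only.
CHARLES = "192.168.1.10:8888"

proxied = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": CHARLES, "https": CHARLES})
)

# proxied.open("https://example.com/api/health")  # would be recorded by Charles
```

This is handy for replaying a request you captured from the device with slightly different parameters, without touching the phone at all.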

Configure for SSL usage

So far all traffic should be visible, but the contents of any SSL traffic (and let's face it, that's the stuff we want to be looking at) will still be hidden.  Getting access to it essentially means installing and trusting the Charles root certificate on the mobile device, then enabling SSL proxying for the hosts you are interested in (the Charles documentation covers the exact steps for each platform).

Note: While this works for most SSL traffic, some server certificates are set up in such a way as to prevent man-in-the-middle attacks.  Because of this they will fail the SSL handshake, and therefore the request will fail.  There is a way to code around this (documented here), but while you still get the data-encryption features of SSL, you lose the host identity validation features.
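The same trade-off shows up when scripting HTTPS calls through Charles from your desktop: Python, for instance, will reject the Charles certificate unless you either trust it or, for local debugging only, relax verification.  A sketch of the latter, which deliberately gives up the host-identity checks described in the note above (never ship anything like this):

```python
import ssl

# For local debugging through an intercepting proxy ONLY:
# accept the proxy's certificate without verification.
ctx = ssl.create_default_context()
ctx.check_hostname = False          # must be disabled before verify_mode
ctx.verify_mode = ssl.CERT_NONE     # skips certificate validation entirely

# Pass `ctx` to urllib/http.client calls, e.g.:
# urllib.request.urlopen("https://example.com/", context=ctx)
```

The safer alternative is to add the Charles root certificate to your system or Python trust store, which keeps verification intact.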

And you’re done!  You should now be able to use the power of Charles to monitor and control the traffic being sent to and from the mobile device.

Useful Scenarios

“But why is this useful in testing?” I hear you ask.  What does this give you that you can't get from running a build in a local development environment using a simulator and monitoring the console?  On top of running the build locally and monitoring consoles, etc., I use this for four main reasons or scenarios:

  1. Testing of logging functionality.  When testing things like Google Analytics, New Relic, Splunk, Crashlytics or your favourite form of monitoring or logging, monitoring the network traffic for these calls before entering a full end-to-end testing scenario is useful for checking the content of each call, the response from the logging server, and how many calls are being made.  I've lost count of the number of times I've noticed a double hit on a specific page due to bugs in the app's navigation stack.
  2. Testing of the app's post-build pipeline.  Depending on your environment and deployment setup, or how many build targets you have, your build pipeline may apply numerous configuration changes to each build you are testing.  By testing against network traffic you have full visibility of how the app you are going to distribute (or have already distributed) is actually operating.
  3. Testing of specific scenarios dependent on message content.  In some cases I find it quicker to test specific scenarios by modifying a specific call to the server to force a specific response.  An example I worked on recently was testing a feature where we force a user to upgrade depending on which version of the app they have.  Ordinarily you would need to install a specific version in order to test this; however, since in this instance the version number is sent through in the request header, it is a lot quicker and easier to put a block on the request, modify the build version in the header, then execute it and verify the response.  Doing it this way means you can not only test the specific scenario, but also execute some exploratory scenarios, e.g.: What happens if the version number is in a different format (say, if you want to change it to cater for Beta/Pilot builds)?
  4. Cross-device support.  Particularly if you are testing multiple development platforms (e.g. iOS, Android, Windows Phone), you may not have all the development environments set up locally.  By using this network monitoring method of testing you are completely independent of any local set-up.  It has also come in extremely useful in scenarios where I get an "It doesn't work on MY phone" complaint for a scenario that works everywhere else: I find it a lot quicker and easier to debug whether issues are phone related, data related, or app related.
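The forced-upgrade check in scenario 3 boils down to a small piece of server-side logic.  Here is a hypothetical sketch; the version format and minimum version are made up for illustration, not taken from any real service:

```python
def needs_upgrade(app_version, minimum="2.3.0"):
    """Sketch of a server-side forced-upgrade check.

    Parses dotted version strings (as might arrive in a request header)
    and compares them numerically. Unexpected formats raise ValueError,
    which is exactly the edge case worth probing by editing the header
    in a proxy like Charles.
    """
    def parse(v):
        return tuple(int(part) for part in v.split("."))
    return parse(app_version) < parse(minimum)
```

Pausing the request in the proxy and rewriting the version header lets you exercise both branches, and a malformed value such as "2.3.0-beta" surfaces the error path, all without installing a single old build.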

I hope that helps others.  It’s certainly been useful to me.