
Defining a mobile device support strategy

In the world of mobile development and testing one of the most common causes of issues is the sheer number of device and OS version combinations.  In this situation how do you define what devices you should/want to be supporting?  The short answer is: it depends.  The slightly longer answer is that it depends completely on the nature and complexity of the app you are developing.  Is it a simple, fully contained app or are there external dependencies on servers or hardware devices?  Is it a single page app or are there numerous navigation paths through the app several pages deep?

Initial definition of support policy

There are a number of categories that you can analyse before and during development to help focus initial development:

  • User Experience – One common mistake is to design for one platform and try to make the others look and act the same, which can lead to lots of headaches, not to mention limiting the look and feel of your app.  It is important to approach each platform separately, considering the differences in each operating system.
  • Minimum OS Version – By defining a minimum OS version (e.g.: Android Version 4+, iOS 7+, etc) it will help define the look and feel of the app from both a User Experience perspective (what OS standards do you need to conform to [see iOS7 standards], how much of the native OS can/will you use, etc) as well as a technical perspective (what native controls are available, what custom controls are supported, etc).
  • Intended audience – It will make a large difference if you are developing an app for a limited audience (internal corporate, subscription based, etc) or general release (e.g.: games, utility apps, etc).  How much control can you have over your intended audience and what devices your app is installed on?  Are you in a position to have a Beta/Pilot program?
  • Handset usage/sales figures – Before you release, see if you can get access to any generic device usage statistics in your region.  A quick Google search for “mobile device sales stats <your region>” should return some useful high-level information about what devices are in use in your region, but it will still require some reading and liberal interpretation in order to generate anything useful.  Another possibility: if you have a website, you may be able to get some usage statistics from it to see what devices your existing customers are using.
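To make the minimum-OS-version point concrete, a policy like this can be codified so that analytics data or support calls can be checked against it automatically.  A minimal sketch, assuming illustrative thresholds (Android 4+, iOS 7+ as per the example above; these are not recommendations):

```python
# Sketch: checking a reported device against a minimum OS version policy.
# The platform names and thresholds below are illustrative only.
MINIMUM_OS_VERSIONS = {
    "android": (4, 0),
    "ios": (7, 0),
}

def meets_minimum_os(platform, version_string):
    """Return True if the device's OS version meets the policy minimum."""
    minimum = MINIMUM_OS_VERSIONS.get(platform.lower())
    if minimum is None:
        return False  # platform not covered by the policy at all
    version = tuple(int(part) for part in version_string.split("."))
    # Pad the shorter tuple so "4" compares like "4.0"
    length = max(len(version), len(minimum))
    version += (0,) * (length - len(version))
    minimum += (0,) * (length - len(minimum))
    return version >= minimum
```

A check like this is trivially small, but keeping it in code (rather than only in a document) means your analytics pipeline and your published policy cannot silently drift apart.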

All of these points will be extremely useful in defining an initial support policy in order to help focus your development process, but it should only be looked at as a first version or even a draft.

Keeping your Support Policy current

“The best laid plans of mice and men often go awry” – Robert Burns

No matter how much effort you put into the initial version of any support policy it is necessary to respond to any feedback in order to keep it up to date and applicable to your current environment.  This feedback can come in many forms, the main two (and the most immediate) are:

  • Usage statistics and logs – Something like Google Analytics or Adobe Analytics (formerly Omniture) should be used to get a realtime view not only on how your app is being used but also of what devices are in use for your app.  Similarly, any logging available (e.g.: Crash Monitoring) should be monitored for device specific issues.  This can then be fed directly back into your support policy.
  • Support calls/reviews – If you have a support phone number/webpage/email address you can gauge people’s experience based on their feedback or complaints.  One of the first questions to ask would have to be: what device/OS version are you using?  The same kind of input may be gleaned from reviews left for your app (e.g.: the common “app sucks.  doesn’t work on my <device name>“).
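The feedback loop above amounts to a simple aggregation over analytics or crash-log records.  A minimal sketch, assuming a made-up record format (a real export from Google Analytics or a crash-monitoring tool would look different):

```python
from collections import Counter

def devices_to_review(log_records, crash_rate_threshold=0.05):
    """Flag devices whose crash rate exceeds a threshold.

    Each record is a dict like {"device": "...", "crashed": bool}.
    This schema is illustrative, not any real analytics format.
    """
    sessions = Counter()
    crashes = Counter()
    for record in log_records:
        sessions[record["device"]] += 1
        if record.get("crashed"):
            crashes[record["device"]] += 1
    # Devices over the threshold are candidates for a policy review
    return sorted(
        device
        for device, total in sessions.items()
        if crashes[device] / total > crash_rate_threshold
    )
```

The output of something like this is exactly what gets "fed directly back into your support policy": a short list of devices to either add to physical testing or formally deprecate.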

Both of these can provide immediate, reactive input into your support policy.  There are also ways you can be proactive about refining your policy, the most useful of which is marketing, i.e.: what handsets are the main telcos pushing?  These are the devices that people are going to buy, so surely it makes sense to proactively ensure that your app will work on them.

Essentially, a support policy is a living document.  Anything you define today will definitely change as time goes by and technology advances.  With this in mind it would be wise to put an allowance in your yearly budget for new devices, for example $5000 per year covering all platforms should be enough for ~8 mobile and tablet devices.  This should be more than enough to keep you up to date.

Developing/Testing to the support policy

From a development and testing point of view it is both impossible and unreasonable to expect full testing of every device on the market, or even every device that is using your app, regardless of what support policy you have defined.  Because of this it is important to define support levels.  These can be as granular as you wish, but there are three main levels.

  1. Full Support – This is a set of devices (recommend to set a limit across all platforms based on available time and resources) that you will buy, develop and test on.  This is essentially where it is 100% guaranteed to work because you have seen it working.
  2. Responsive Support – This is a set of devices, usually much larger, that your app should work on, but that you do not physically test on.  If there are any reports of issues in production you can triage them and determine an action accordingly.
  3. No Support – Unsupported/Deprecated devices or OS versions.  We can’t develop for everything and we can’t test everything.  These devices fall below where we draw the line.
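The three tiers above boil down to a lookup that the rest of your process (triage, test planning, purchasing) can share.  A minimal sketch, with placeholder device names standing in for a real policy list:

```python
# Sketch: resolving a device to a support tier. The device lists are
# placeholders; a real policy would maintain these alongside the document.
FULL_SUPPORT = {"Device-A", "Device-B"}
RESPONSIVE_SUPPORT = {"Device-C", "Device-D", "Device-E"}

def support_level(device):
    if device in FULL_SUPPORT:
        return "full"        # bought, developed on, and physically tested
    if device in RESPONSIVE_SUPPORT:
        return "responsive"  # triage production issues as they arrive
    return "none"            # below the line: unsupported/deprecated
```

Having one canonical lookup like this also makes the triage conversation short: a bug report from a "responsive" device gets investigated; one from a "none" device gets a documented, polite decline.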

Now that we have defined the list of devices we provide Full Support for we can look at the testing coverage we apply to each device.  This is another area that is completely dependent on the nature of your app, environment architecture, available resources and the functionality under test.

As far as automation is concerned the main focus should be on automating as much of the app as possible to minimise the amount of manual testing you need to do for ongoing regression.  There are many options for how you approach this.  You can utilise the platform’s native automation tools for local acceptance testing.  You can also look at tools that allow for cross-platform automation in the language of your choosing.  Another option is using services that offer remote testing across various device/OS combinations.
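Whatever tooling you choose, the common pattern is running one suite across the Full Support device matrix and collecting failures for triage.  A tool-agnostic sketch, where `run_suite` stands in for whatever framework you adopt and the matrix entries are hypothetical:

```python
# Sketch: driving one test suite across a device matrix. `run_suite` is
# assumed to take a device configuration and return True on success.
DEVICE_MATRIX = [
    {"platform": "android", "os": "4.4", "device": "Device-A"},
    {"platform": "ios", "os": "7.1", "device": "Device-B"},
]

def run_across_matrix(run_suite, matrix=DEVICE_MATRIX):
    """Run the suite per device and return the devices that failed."""
    failures = []
    for config in matrix:
        if not run_suite(config):
            failures.append(config["device"])
    return failures
```

Keeping the matrix as data, separate from the tests themselves, means updating the support policy is a one-line change rather than an edit to every test.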

For manual testing, your automation coverage should allow you to focus manual testing on the areas of change.  It will also depend on the functionality under test, i.e.: if you are testing a feature that is:

  • Client focused – UI heavy, rendering differently based on resolution, or interacting with the phone itself (saving credentials, making phone calls, etc): you would spend more test effort across each device.
  • Server focused – reading from or saving to a server, etc: you can spend most test effort on a single device and potentially apply some time-boxed exploratory testing across the rest of your device library.

It would also be a good idea to look at the type of testing you are doing: whether it is a more traditional scripted manual testing approach, with test plans and pages of accompanying documentation, or a more guided exploratory testing approach.

Publicise what devices you support

Regardless of where you end up landing with your supported device list, make sure you publish it (if possible or applicable).  When publishing to the app stores you should also take care to identify the regions and minimum OS versions supported.  Depending on your app, it may head off some headaches for you and your potential customers if they know before downloading or subscribing whether or not it is going to work on their phone.



Posted by on July 15, 2014 in Quality Assurance