“Mobile” Performance Testing

What is mobile performance testing?

The function of performance testing is not really understood by most people beyond the requirement to “make sure it works fast”.  So how do we go about defining mobile performance testing?  While the majority of test planning and execution is not that much different from other types of performance testing, there are a number of aspects that exacerbate risk, for example the technology involved and the increased number of variations and moving parts in a system’s ecosystem.

When we think of traditional performance testing, we think of load or stress testing; however, these do not really apply to “on-device” mobile testing.  Instead, we need to be concerned with things like battery life, user-experience performance, app loading, screen/data rendering and screen interaction.  For the purposes of this post I am going to focus on just the “mobile” aspect of performance testing, mostly “on-device” testing.


Types of Testing

High level types of testing


As with any good quality assurance approach to a project, testing begins at the requirements stage.  The different technology involved (a large number of variations of hardware and software, with each version of software specific to the hardware in question) changes the approach to how your application is designed.

The question of native (smoother user experience but harder to deploy changes) vs web (clunkier user experience but easier to respond to change and to control the content served to users), or some kind of hybrid, needs to be asked and answered.  The decision will depend on your target audience and on the focus and complexity of your application.

What is your intended audience?  Is it a focused set of users for a distinct set of features, or are you aiming for a more generic, widely used set of features?  A closer relationship with your users may allow you to set up some sort of Pilot or Early Access program, which will let you use monitoring tools to capture real-life data on the performance of your application before release to the masses.

These are just some of the questions to be asked and answered in the requirements phase that will play a big part in determining how much to invest in technical performance testing.


The guiding rule of all performance testing (to the lay person at least) is “Make the app FASTER!!”.  While this may sound facetious, there is an element of truth in it that we can develop and test for.  It does beg the question, however: who is responsible for testing?  The short answer, especially in this scenario, is that testing begins with the developers themselves.

Regardless of the platform you are developing for, each has a set of tools aimed at helping debug the performance of an application at the developer level, monitoring things like CPU, memory and data usage.

iOS

  • Instruments performance profiling has long been used for memory/CPU usage, etc.
  • Xcode 6 has also introduced a lot of new features aimed at helping test your app’s performance, e.g. unit-test-level performance testing of code, where you can set and measure against baseline performance criteria

More information on what Apple is working on can be found in the WWDC 2014 Session Videos, in particular the sessions:

  • Improving your app’s performance with Instruments
  • What’s new in Xcode 6

Android

  • Turning on Developer Options on your device gives you access to a lot of tools, such as:
    • CPU/GPU Usage,
    • Hardware Layer Updates
  • Android is also good at logging pretty much everything going on (which can be pretty verbose).  It always helps to monitor the logs as you are testing.
    • E.g. adb logcat | grep Skipped results in:
      Skipped 147 frames! The application may be doing too much work on its main thread
  • The Android Device Monitor also has a lot of performance debugging tools embedded in it:
    • Hierarchy Viewer
      • How complex are your layouts?
    • Thread monitoring
      • How much time is spent in the Main thread and doing what?
      • The biggest culprit of poor performance is too much happening in the Main thread.
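Those logcat warnings are easy to summarise programmatically.  The sketch below (in Python, with hypothetical sample log lines standing in for real adb logcat output) pulls out the Choreographer “Skipped N frames” counts so you can spot jank across a whole test session:

```python
import re

# Hypothetical logcat capture; in practice, feed in the output of
# `adb logcat` recorded while you exercise the app.
SAMPLE_LOG = """\
I/Choreographer(  912): Skipped 147 frames!  The application may be doing too much work on its main thread.
D/dalvikvm(  912): GC_CONCURRENT freed 1024K, 12% free
I/Choreographer(  912): Skipped 32 frames!  The application may be doing too much work on its main thread.
"""

def skipped_frames(log_text):
    """Extract the frame counts from Choreographer 'Skipped N frames' warnings."""
    return [int(n) for n in re.findall(r"Skipped (\d+) frames", log_text)]

counts = skipped_frames(SAMPLE_LOG)
print(len(counts), max(counts))  # number of warnings and the worst case
```

Even a crude summary like this makes it obvious whether a change has made main-thread work better or worse between runs.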


Windows Phone

One of the many things Microsoft does well is build an integrated development environment.  While I haven’t used these tools myself for mobile development, a quick Bing search quickly turns up the Windows Phone Application Analysis tool.

  • You get this automatically as part of the Windows Phone SDK
  • It’s fully integrated into Visual Studio IDE
  • Runs against emulator or phone
  • Enables monitoring and profiling
  • App Analysis – performance and quality
  • Execution and Memory profiling
  • ..make quality assurance a part of the development cycle..



When thinking about testing at the server level, the first question is “What is unique to mobile performance testing?”.  There are a lot of similarities: you are still testing at the service/API level.  The main difference is that testing is exacerbated by the mobile platform.  There are a lot of different device/OS/network/bandwidth combinations, and more moving parts that can impact user experience.  So at the end of the day there are a number of things that need to be focused on even more than in other types of performance testing, some being:

  • Data/Packet size – you cannot guarantee that everyone is on a high-speed or reliable connection.  Keep data as small as possible
  • Battery life – while there are no ‘standards’ on developing to maintain battery life, there are some guidelines to help in this area:
    • When transmitting data, it is better to transmit in larger batches rather than many smaller transactions.  This cuts down on the number of interactions and the amount of actual work the phone needs to do.
  • Latency – It is a given that response times on a mobile device will be impacted.  This will feed into your app’s design more than anything
  • Real Device vs Simulators – Always test on real devices.  When capturing traffic for your favourite load tool, always capture it using real device traffic going through a proxy.
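The batching guideline can be sketched in a few lines.  This is an illustrative Python example, not a prescribed implementation; the point is simply that the radio wakes once per batch instead of once per queued event:

```python
def batches(items, size):
    """Group queued payloads so the radio wakes once per batch rather than once per item."""
    return [items[i:i + size] for i in range(0, len(items), size)]

queued = [f"event-{n}" for n in range(10)]   # ten queued analytics events
print(len(queued))              # unbatched: 10 separate transmissions
print(len(batches(queued, 5)))  # batched: only 2 transmissions
```

The saving is not just fewer HTTP round trips; each transmission keeps the radio powered up for a while afterwards, so fewer, larger transfers directly translate into battery life.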


So, as we can see, “mobile” performance testing is not too different from other types of performance testing; we are simply dealing with a different technology platform.

I’ve mentioned some basic differences in the types of “performance” testing across all levels.  As always, performance testing is not something to be left until the end of the development lifecycle.  Requirements definition and the type/design of your app both greatly influence how much performance testing you do.

Most importantly, though, performance testing is owned by everyone.  While the mobile platform is still relatively immature, advancements in toolsets are allowing developers to begin technical testing at a debug/unit-testing level.  These tools are free, and there is no excuse not to be using them.




Defining a mobile device support strategy

In the world of mobile development and testing, one of the most common causes of issues is the sheer number of device and OS version combinations.  In this situation, how do you define what devices you should (or want to) be supporting?  The short answer is: it depends.  The slightly longer answer is that it depends completely on the nature and complexity of the app you are developing.  Is it a simple, fully contained app, or are there external dependencies on servers or hardware devices?  Is it a single-page app, or are there numerous navigation paths through the app several pages deep?

Initial definition of support policy

There are a number of categories that you can analyse before and during development to help focus initial development:

  • User experience – One common mistake is to design for one platform and try to make the others look and act the same, which can lead to lots of headaches, not to mention limiting the look and feel of your app.  It is important to approach each platform separately, considering the differences in each operating system.
  • Minimum OS Version – By defining a minimum OS version (e.g.: Android Version 4+, iOS 7+, etc) it will help define the look and feel of the app from both a User Experience perspective (what OS standards do you need to conform to [see iOS7 standards], how much of the native OS can/will you use, etc) as well as a technical perspective (what native controls are available, what custom controls are supported, etc).
  • Intended audience – It will make a large difference if you are developing an app for a limited audience (internal corporate, subscription based, etc) or general release (e.g.: games, utility apps, etc).  How much control can you have over your intended audience and what devices your app is installed on?  Are you in a position to have a Beta/Pilot program?
  • Handset usage/sales figures – Before you release, see if you can get access to any generic device usage statistics for your region.  A quick Google search on mobile device sales stats <your region> should return some useful high-level information on what devices are in use in your region, but it will still require some reading and liberal interpretation in order to generate anything useful.  Another possibility: if you have a website, you may be able to get usage statistics from it to see what devices your existing customers are using.
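Once you have raw device strings from analytics or logs, turning them into a usage breakdown is straightforward.  A minimal Python sketch, using made-up per-session data (real analytics exports would be far messier):

```python
from collections import Counter

# Hypothetical device strings, one per session, as you might pull from
# web analytics or your own application logs.
sessions = [
    "iPhone 5s", "Samsung Galaxy S4", "iPhone 5s", "Nexus 5",
    "iPhone 5s", "Samsung Galaxy S4", "iPhone 4", "Nexus 5", "iPhone 5s",
]

usage = Counter(sessions)
total = sum(usage.values())
for device, count in usage.most_common():
    print(f"{device}: {count}/{total} sessions ({count / total:.0%})")
```

A table like this, refreshed regularly, is exactly the kind of evidence that keeps a support policy grounded in what your users actually carry.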

All of these points will be extremely useful in defining an initial support policy to help focus your development process, but that policy should only be looked at as a first version, or even a draft.

Keeping your Support Policy current

“The best laid plans of mice and men often go awry” – Robert Burns

No matter how much effort you put into the initial version of any support policy, it is necessary to respond to feedback in order to keep it up to date and applicable to your current environment.  This feedback can come in many forms; the main two (and the most immediate) are:

  • Usage statistics and logs – Something like Google Analytics or Adobe Analytics (formerly Omniture) should be used to get a real-time view not only of how your app is being used but also of what devices it is being used on.  Similarly, any logging available (e.g. crash monitoring) should be watched for device-specific issues.  This can then be fed directly back into your support policy.
  • Support calls/reviews – If you have a support phone number/webpage/email address, you can gauge people’s experience based on their feedback or complaints.  One of the first questions to ask would have to be what device/OS version they are using.  The same kind of input may be gleaned from reviews left on your app (e.g. the common “app sucks.  doesn’t work on my <device name>“).

Both of these can provide immediate, reactive input into your support policy.  There are also ways you can be proactive about refining your policy, the most useful of which is marketing, i.e. what handsets are the main telcos pushing?  These are the devices that people are going to buy, so surely it makes sense to proactively ensure that your app will work on them.

Essentially, a support policy is a living document.  Anything you define today will definitely change as time goes by and technology advances.  With this in mind it would be wise to put an allowance in your yearly budget for new devices; for example, $5000 per year across all platforms should cover roughly eight mobile and tablet devices, which should be more than enough to keep you up to date.

Developing/Testing to the support policy

From a development and testing point of view, it is impossible and unreasonable to expect full testing of every device on the market, or even every device that is using your app, regardless of what support policy you have defined.  Because of this it is important to define support levels.  These can be as granular as you wish, but there are three main levels.

  1. Full Support – A set of devices (it is recommended to set a limit across all platforms based on available time and resources) that you will buy, develop on and test on.  This is essentially where the app is 100% guaranteed to work because you have seen it working.
  2. Responsive Support – A set of devices, usually much larger, that your app should work on but that you do not physically test on.  If there are any reports of issues in production, you can triage those issues and determine an action accordingly.
  3. No Support – Unsupported/Deprecated devices or OS versions.  We can’t develop for everything and we can’t test everything.  These devices fall below where we draw the line.
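These tiers are easy to encode so that incoming issue reports can be triaged consistently.  A minimal sketch, assuming hypothetical device lists and minimum OS versions; a real policy would be driven by your own usage data:

```python
# Hypothetical policy data for illustration only.
FULL_SUPPORT = {"iPhone 5s", "Samsung Galaxy S4", "Nexus 5"}
MIN_OS = {"iOS": (7, 0), "Android": (4, 0)}

def support_level(device, platform, os_version):
    """Classify an incoming issue report against the three support tiers."""
    if os_version < MIN_OS.get(platform, (0, 0)):
        return "No Support"
    if device in FULL_SUPPORT:
        return "Full Support"
    return "Responsive Support"

print(support_level("iPhone 5s", "iOS", (7, 1)))    # Full Support
print(support_level("HTC One", "Android", (4, 4)))  # Responsive Support
print(support_level("iPhone 4", "iOS", (6, 1)))     # No Support
```

Keeping the tier definitions in one place like this also makes it trivial to update the policy as the feedback described above comes in.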

Now that we have defined the list of devices we provide Full Support for, we can look at the testing coverage we apply to each device.  This is another area that is completely dependent on the nature of your app, your environment architecture, available resources and the functionality under test.

As far as automation is concerned, the main focus would be on automating as much of the app as possible to minimise the amount of manual testing you need to do for ongoing regression.  There are many options for how you approach this.  You can utilise the platforms’ native automation tools for local acceptance testing.  You can also look at tools that allow cross-platform automation in the language of your choosing.  Another option is using services that offer remote testing functionality across various device/OS combinations.

For manual testing, your automation coverage should allow you to focus manual testing on the areas of change.  It will also depend on the functionality under test, i.e. whether the feature you are testing is:

  • Client focused – UI heavy, may render differently based on resolution, or interacts with the phone (saving credentials, making phone calls, etc).  Here you would spend more test effort across each device.
  • Server focused – reading from or saving to a server, etc.  You can spend most test effort on a single device and potentially apply some time-boxed exploratory testing across the rest of your device library.

It is also a good idea to look at the type of testing you are doing: whether it is a more traditional scripted manual testing approach, with test plans and pages of accompanying documentation, or a more guided exploratory testing approach.

Publicise what devices you support

Regardless of where you end up landing with your supported device list, make sure you publish it (if possible or applicable).  When publishing to the app stores you should also take care to identify the supported region and minimum OS.  Depending on your app, it may head off some headaches for you and your potential customers if they are aware, before downloading or subscribing, whether or not it is going to work on their phone.



Posted by on July 15, 2014 in Quality Assurance


Using Charles for Mobile testing and debugging

I have now been testing mobile devices (both iOS and Android) for roughly nine months, and boy has it been an experience.  A very steep learning curve for me.  One of the most useful techniques I have found in testing and debugging is the ability to monitor the traffic going through the network, not just to verify what is being sent but also to monitor things like request and response headers and contents, HTTP response codes and the like.

In web development (or when using mobile simulators or emulators) you would use an HTTP proxy program, something like Fiddler, Firebug (for Firefox), Google Dev Tools, Charles or some other tool.  But how do you do this from a mobile device?  The short answer is: exactly the same way, you just need to make sure all traffic from the mobile device is going through your HTTP proxy tool of choice.

This post will explain how to set this process up using Charles in OSX on a MacBook Pro, but technically the same should work regardless of your toolset (not sure how different tools would handle SSL traffic).

Note: This is not intended to be a “How to use Charles” guide (there is a lot more to Charles than what I will be covering here!)

Install and Set-up Charles

Charles can be installed from

As tools go, it’s pretty simple to set-up and install.  The only things really worth mentioning are:

  • Make sure you switch on Enable transparent HTTP proxying under Proxy -> Proxy Settings.
  • Take note of the Port Number Charles is using on the above Proxy Settings screen.  You’ll be using it soon.
  • Under Proxy -> Recording Settings you may want to define a list of traffic to either Include or Exclude.  This will help reduce the amount of noise you have to wade through to see the traffic you really want (particularly if you are on a corporate network).
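Conceptually, those recording settings are just host/path pattern matching.  A rough Python sketch of the idea, with hypothetical include/exclude patterns (Charles’ actual matching rules may differ):

```python
from fnmatch import fnmatch

# Hypothetical recording settings: only record traffic to our own API,
# and skip the analytics endpoints that generate most of the noise.
INCLUDE = ["api.myapp.example/*"]
EXCLUDE = ["api.myapp.example/analytics/*"]

def should_record(url):
    """A URL is recorded if it matches an include pattern and no exclude pattern."""
    included = any(fnmatch(url, p) for p in INCLUDE)
    excluded = any(fnmatch(url, p) for p in EXCLUDE)
    return included and not excluded

print(should_record("api.myapp.example/v1/jobs"))            # True
print(should_record("api.myapp.example/analytics/pageview")) # False
print(should_record("ads.example/banner"))                   # False
```

On a corporate network the exclude list tends to grow quickly; it is worth pruning it as you discover which hosts are pure noise for your app.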

Apart from that, everything else falls under the umbrella of ‘standard Charles functionality’.

Configure Mobile device

Now we need to get the traffic from the mobile device going through Charles.  This is simple if your mobile device is connected to the same wifi network as the computer running Charles.  If it is not, you will need to find a way for the computer in question to become visible to the mobile device, for example by sharing your Mac’s internet via wifi and connecting the phone directly to it.

Disclaimer: By sharing your computer’s wifi you are essentially allowing another connection to your internal network.  Make sure you set it up with security and definitely get permission from your network administrator, particularly if on a corporate network, before attempting this.  They may be able to provide another solution.

Directing traffic through Charles is as simple as setting up the proxy on your mobile’s wifi connection.

  • Get the local IP Address of the computer running Charles
    • You can use either the IP address under System Preferences -> Network or the local IP address returned by the option under Charles’ Help menu.  Both seem to work.
  • Configure your iPhone to use the wi-fi network that has access to the computer running Charles
    • Set the proxy of the wi-fi network to Manual
    • Enter the IP address of the local computer from the step above
    • Enter the Port number that is set in Charles

All traffic should now be visible in Charles.

Configure for SSL usage

So far all traffic should be visible, but the contents of any SSL traffic (and let’s face it, that’s the stuff we want to be looking at) will still be hidden.  Here’s how to get access to it:

Note: While this works for most SSL traffic, some server certificates are set up in such a way as to prevent man-in-the-middle attacks.  Because of this they will fail the SSL handshake, and therefore fail the request.  There is a way to code around this (documented here), but while you still get the data encryption features of SSL, you lose the host identity validation features.

And you’re done!  You should now be able to use the power of Charles to monitor and control the traffic being sent to and from the mobile device.

Useful Scenarios

“But why is this useful in testing?” I hear you ask.  What does this give you that you can’t get from running a build in a local development environment, using a simulator and monitoring the console?  On top of running the build locally and monitoring consoles, I use this for four main reasons or scenarios:

  1. Testing of logging functionality.  When testing things like Google Analytics, New Relic, Splunk, Crashlytics or your favourite form of monitoring or logging, monitoring the network traffic for these calls before entering a full end-to-end testing scenario is useful for checking the content of each call, the response from the logging server and how many calls are being made.  I’ve lost count of the number of times I’ve noticed a double hit on a specific page due to bugs in the app’s navigation stack.
  2. Testing of app ‘post’ build pipeline.  Depending on your environment and deployment setup, or how many build targets you have, your build pipeline may enforce numerous configuration changes depending on what build you are testing.  By testing using network traffic you have full visibility of how the app you are going to distribute (or even have already distributed) is operating.
  3. Testing of specific scenarios dependent on message content.  In some cases I find it quicker to test specific scenarios by modifying a specific call to the server to force a specific response.  A recent example: testing a feature where we want to force a user to upgrade depending on what version of the app they have.  Ordinarily you would need to install a specific version in order to test this; however, since the version number is sent through in the request header, it is a lot quicker and easier to put a block on the request, modify the build version in the header, then execute it and verify the response.  Doing it this way means you can not only test the specific scenario but also execute some exploratory scenarios, e.g. what happens if the version number is in a different format (say, if you want to change it to cater for Beta/Pilot builds)?
  4. Cross-device support.  Particularly if you are testing multiple development platforms (e.g. iOS, Android, Windows Phone), you may not have all development environments set up locally.  By using this network-monitoring method of testing you are completely independent of any local set-up.  It has also come in extremely useful in scenarios where I get an “It doesn’t work on MY phone” complaint for a scenario that works everywhere else.  I find it a lot quicker and easier to debug whether issues are phone related, data related or app related.
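The forced-upgrade scenario in point 3 boils down to the server comparing a version header against a minimum.  A Python sketch of that decision, with a hypothetical header name and minimum version; rewriting the header in Charles lets you exercise each branch without installing old builds:

```python
# Hypothetical server-side logic for the forced-upgrade check.
# "X-App-Version" and the minimum version are illustrative assumptions.
MINIMUM_VERSION = (2, 3, 0)

def parse_version(header_value):
    """Parse a dotted version string, ignoring any '-beta'-style suffix."""
    core = header_value.split("-")[0]
    return tuple(int(part) for part in core.split("."))

def must_upgrade(headers):
    """True if the reported app version is below the minimum (missing header counts as 0.0.0)."""
    return parse_version(headers.get("X-App-Version", "0.0.0")) < MINIMUM_VERSION

print(must_upgrade({"X-App-Version": "2.2.9"}))       # True: below minimum
print(must_upgrade({"X-App-Version": "2.4.0-beta"}))  # False: suffix stripped, above minimum
```

Note how the suffix case mirrors the exploratory question above: a Beta/Pilot format is exactly the kind of input you can probe by editing the header in-flight.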

I hope that helps others.  It’s certainly been useful to me.


Fusion 2012 and New friends

So I recently had the pleasure of attending SoftEd’s newest conference, Fusion 2012.  Some of the noteworthy presentations were on “Systems Thinking” by Dr Emma Langhorn from the UK, “Customer Focused Testing” by Alan Page from Microsoft in the US, “Acceptance Test Driven Development” by Elisabeth Hendrickson from the US, and “Testing with Oracles” by Anne-Marie Charrett.  In short, there were a lot of brilliant ideas floating around from a lot of brilliant people.

It was also my honour to give a presentation myself on the practical experience of a tester in an agile world, or “Seeking ‘a’gile Testing”.  This was my first time presenting at an “international” conference, and while it didn’t go as smoothly as I would have liked (e.g. the microphone cut out halfway through), I can’t say enough to express how fantastic everyone in the room was, not just because they actually listened to me, but also because of the questions they asked, not just in my presentation but over the next day and a half.  It was made harder by the fact that there is so much to talk about; having to limit it to only 45 minutes (including Q&A) meant there was a lot I didn’t get a chance to cover.  If you’re interested, you can have a look at the slide deck here.

People obviously got something out of what I was saying, because not only was I pretty much constantly talking with people for the rest of the conference, with them coming up to me to ask questions, I was also asked to chair a round-table session the next morning on the transition to agile.  Again, some fantastic Q&A going on, not just from myself :).

But it wasn’t all about me.  Here are a couple of the key learnings/thoughts/quotes I took from the conference (in no particular order):

  1. “Systems Thinking begins the moment you see the world through the eyes of another”
  2. “‘Business Analyst’ is actually the wrong job title.  It should be ‘Business Analyst and Synthesist’.  They’re not just stenographers, they don’t just detail things.  They understand the system and build up a product from the ground up.”
  3. “Business people don’t need to understand designs.  They need to be designers.”
  4. “Test Design = Test Ideas.  Form your own opinion of testing.”
  5. “BDD = ATDD + TDD”
  6. “Parallel is the wrong word.  Parallel means they never meet up.  ‘With’ is the correct word.”
  7. “The software is fine, it’s just the people that use it”

There were two really strange things about the conference, though:

  1. It is really strange seeing your own face & name up on the screen :S, and
  2. It was surreal sitting at a table and talking with people who you really admire, people who you have read a lot about and, to a certain extent, only know as a name on the cover of a book.  Not only talking with them as a fan, but also as a …. peer?  Maybe I won’t go that far 😉

Anyway, now that the adrenalin has faded from my system, it’s time to get back to work.


Posted by on September 17, 2012 in Quality Assurance


OSX and Windows 7 Playing nicely together … At Last!!


I’m a Windows person.  I’ve grown up with Windows, I work with Windows, I play with Windows.  From Windows 3.11 for Workgroups, through Windows 98, ME, XP, Vista and now Windows 7, I’ve used them all.  I intensely dislike the iPhone and I LOVE my Windows Phone 7.  I develop in C# and SQL using MS TFS 2010.  That’s just who I am, don’t judge me.

But I really like my 27” iMac.  It really is an impressive piece of hardware.  I also recently acquired an Apple TV, perfect for streaming media directly to the TV.  Up until now I have been using Boot Camp and booting into Windows 7 for the majority of my day-to-day computer usage, but with the acquisition of the Apple TV, and for various other curiosity reasons, I’ve started having a bit of a play with OSX Lion.

It really is very different, and my first dealings with it left me frustrated and vowing to return to Windows and never come back to OSX.  But I have persisted and gotten used to the niggling differences in the way you navigate around the OS.  There were still many things missing.  One feature of Windows 7 I make frequent use of, and missed in OSX, is some of the shortcuts, or hotkeys, that make life easier, like Win+Left or Win+Right to make the current window take up the left/right half of the screen, allowing you to work on multiple things at once.  And the deal-breaker for me when playing with OSX is the lack of MS TFS for development and of Zune.  As I said, I love my WP7; I need to sync it somehow.  This meant I had to spend more time switching between operating systems than anyone should have to.

Along comes the perfect solution to my problem: Parallels Desktop® 7 for Mac.


Installation First Screen

When installing Parallels, the first thing it asks you to do is to create a new virtual PC.  What impressed me is that alongside the options of “Install from DVD” and “Migrate Windows from a PC” there was the option to “Use Windows from Boot Camp”.  First sign of brilliance and first sigh of relief.  I wouldn’t have to re-install everything!!  A single click is enough for Parallels to install everything needed on both sides of the relationship.



Installation VM Start-up

The default settings of the new VM name it “My Boot Camp” and set it to use 1GB of memory.  At this point, before starting it up, it is possible to re-configure the VM, e.g. to assign more memory or to rename it to something more useful.  I renamed it to “Win7 Boot Camp” and set it to use 2GB of memory, then simply clicked Start and, before I knew it, I was up and running.

Using Parallels Desktop

So far, Parallels is pretty easy to use, as it seamlessly integrates all of your favorite Windows functionality into OSX.  There are three modes the virtual machine can run in, each with its own benefits:

Mode 1 – Full-Screen

Full Screen Mode

Similar to any other OSX app, running in full-screen mode allows you to “swipe” across to it.  In the case of the VM we are playing with, it allows you to interact with it as if you were not in a VM, away from any OSX-specific apps, while at the same time allowing you to “swipe” back to the OSX desktop or to any other apps you are running in full-screen mode.

I’ll be in this mode when involved in more serious Windows App development or Service/Server configuration.

Mode 2 – Window mode

Just as the name implies, Windows 7 running in a Window.  Useful for keeping an eye on the progress of things, like installations, etc, but I’m not sure I’d use this mode all that much.

Mode 3 – Coherence Mode

Coherence Full Screen

Coherence Taskbar

By far one of the more impressive feats I’ve seen.  Coherence mode essentially merges Windows 7, both the virtual machine and the functionality of Windows, into one.  Say hello to my favorite shortcuts: the Cmd+Left and Cmd+Right keys to shift a window to the left or right side of the screen.  Note: This only works on applications running within the virtual machine.  Bummer.

Coherence Windows Start Menu

Coherence Windows Applications Menu

All your Windows applications are available either by right-clicking (or Ctrl+clicking, for those who don’t know what right-click is) the Parallels icon in the taskbar to bring up the Windows Start menu, or by clicking the Windows Applications folder on the right-hand side of the Dock.  Also, any Windows applications you have open are visible in the OSX Dock with a Parallels icon over-laid.

I’ll be in this mode for most of my casual computer use, browsing, etc.


So while I’m only beginning to scratch the surface and there is a lot more to be said, I’ll finish by saying that not only was Parallels insanely easy to set-up and install, it means I now have access to the best of both worlds with no problems … except one.  Am I still a Windows person?  A friend of mine commented, “Don’t tell me OSX is now your default OS!!?!?”.  How do I answer that without sounding like I’m in denial? If I’m in OSX for some functionality that is only available in OSX, but I’m using >80% Windows applications, am I really in OSX?


Posted by on January 16, 2012 in Uncategorized


Just-In-Time Test Planning

One of the biggest changes in thinking necessary for a Tester in an Agile world is the concept of Just-In-Time (JIT) Test Planning.

One deliverable that was expected of us (I feel) in the waterfall model is the gigantic test plan and a HUUGE test suite with ’000s of test cases.  It is, after all, one way to guarantee confidence in a product.

But ’000s of test cases take a long time to plan and execute, which simply does not fit in the agile world, or indeed in any RAD world.  Not only that, it doesn’t really make sense.  As Anne-Marie Charrett said at STANZ 2011, “Why spend all your thinking time planning how not to think?”.  It makes us inflexible to change, it reduces the actual testing time (i.e. testing, not just following a script) and it reduces the efficiency of the testing when the tester actually gets a chance to test something.

Here’s where you may expect me to go on to talk about exploratory testing and how we are using it at Seek.  While it is definitely a big part of the way we do things, there is still a higher-level need to plan our testing and to decide where exploratory testing sits alongside automation and manual (scripted) testing.  After all, sometimes we need to script a test for something that cannot or should not be automated (e.g. de-coupled systems, timing dependencies, DB-hacking dependencies).  Hence JIT Test Planning.

The concept of JIT as it applies to manufacturing and development is nothing new.  As anyone who knows anything about the history of agile development would know, it has been around since the 60s/70s as one of Toyota’s techniques for meeting fast-changing consumer demands with minimum delays.[1]

JIT Test Planning is a strategy of spending more time working with the BA to define the requirements before taking those requirements and defining the plan for how you will test them.

One underlying goal of the way we are doing things is to avoid as much duplication of documentation as possible.  ie: if the User Story says Button A needs to do X, do we really need to write a separate document telling the tester to press Button A and to make sure X happens?  The manual test case works in collaboration with the User Story.  It allows the Tester to define the most efficient way to verify the Acceptance Criteria without repeating the Acceptance Criteria word for word.  The test case now focuses on providing instructions on how to test something, not just what to test.
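As a hypothetical illustration of that split (the story ID, criterion and helper below are all invented, not from any real project), a test can point back at the story for the “what” and keep only the “how” for itself:

```python
# Sketch only: the test references the User Story's acceptance
# criterion by ID instead of repeating it word for word, and adds
# the "how" (steps, data, edge cases) the story doesn't spell out.

def press_button_a(times, basket):
    """Stand-in for the real app: pressing Button A does X exactly
    once, however many rapid presses arrive (assumed debounced)."""
    return ["X"] if times >= 1 else []

def test_button_a_triggers_x_once():
    """Covers US-123 / AC-1 ('Button A does X') -- see the User Story.

    How (not restated in the story):
      1. Start with an empty basket (edge case).
      2. Press Button A twice in quick succession.
      3. Verify X happens exactly once.
    """
    results = press_button_a(times=2, basket="empty")
    assert results.count("X") == 1

test_button_a_triggers_x_once()
```

The same idea applies to a purely manual script: reference the criterion, document only the steps and data the story leaves out.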

Probably the best way of proving the beneficial effects of JIT Planning is to show an example graph from one of the projects it has been used on.  Note: The following graph covers manual testing only.

The New Way

The main thing to notice (besides the high quality) is the lack of grey area, where grey represents planned but unexecuted testing.  No grey means no wasted time.  The lack of grey is the result of two things:

  1. We do not begin planning until all Acceptance Criteria have been detailed and the User Story has been accepted by Dev and QA.
  2. Because Dev and QA begin at the same time, by the time QA is ready to test, there is already a product to be tested.

In summary, JIT Planning gives us clarity, relevance and accuracy of requirements, generated in collaboration without months or years between Business Analysis and Development, as well as confidence in Test Coverage.  It allows much greater flexibility and acceptance of change (the Agile way), and it all works in collaboration with Exploratory and Automated testing.  I’ll talk more about the end-to-end process we follow in future posts.

More Reading:


Posted by on September 30, 2011 in Quality Assurance


What is a Bug?

It’s becoming clearer and clearer that there is a lot of confusion around what a bug actually is.  How many times have you heard “That’s not a bug”?  Although we could go into an extended discussion around the difference between a Defect, a Bug and an Error, for the sake of expediency let’s assume we are referring to the same thing.

It seems like in today’s world, when a tester raises a bug it is assumed to be an issue with the developer’s code, but the fact is a bug is simply the product of an unexpected result.  It seems like such a simple thing until one ponders all the possible reasons for an unexpected result.

  1. Developer/Code
    1. It is a development/coding error
    2. It is an unpredicted scenario/combination
  2. Environment/Configuration
    1. It is a deployment issue
    2. It is an environment/data issue
  3. Requirement/Analysis
    1. It is an incorrect requirement
    2. It is a missed requirement
    3. It is an incorrectly managed change request
    4. It is a non-functional requirement issue
  4. Test
    1. It is an incorrect test
    2. It is a misunderstanding

(the list really does go on)
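The taxonomy above can be captured as a simple enumeration so that every unexpected result gets recorded with a cause rather than dismissed with “that’s not a bug” (a minimal sketch; the category names here are mine, not any standard):

```python
from enum import Enum

class RootCause(Enum):
    """Where an unexpected result can come from, per the list above."""
    # Developer/Code
    CODING_ERROR = "development/coding error"
    UNPREDICTED_SCENARIO = "unpredicted scenario/combination"
    # Environment/Configuration
    DEPLOYMENT = "deployment issue"
    ENVIRONMENT_DATA = "environment/data issue"
    # Requirement/Analysis
    INCORRECT_REQUIREMENT = "incorrect requirement"
    MISSED_REQUIREMENT = "missed requirement"
    MISMANAGED_CHANGE = "incorrectly managed change request"
    NON_FUNCTIONAL = "non-functional requirement issue"
    # Test
    INCORRECT_TEST = "incorrect test"
    MISUNDERSTANDING = "misunderstanding"

# None of these negates the bug's relevance -- the point is that
# every unexpected result gets a cause, whatever bucket it lands in.
assert RootCause.MISSED_REQUIREMENT.value == "missed requirement"
```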

None of these examples negate the relevance of the defect at hand.  I can only assume that anyone reading this knows the story of the first bug ever found, erroneously attributed to Grace Hopper:

In 1946, when Hopper was released from active duty, she joined the Harvard Faculty at the Computation Laboratory where she continued her work on the Mark II and Mark III. Operators traced an error in the Mark II to a moth trapped in a relay, coining the term bug. This bug was carefully removed and taped to the log book. Stemming from the first bug, today we call errors or glitch’s [sic] in a program a bug.

Pasted from <>

This is a perfect example of the fact that a bug is not necessarily the fault of anything or anyone in particular but may be the result of a particular set of circumstances that couldn’t have been predicted or managed.

Regardless, each “issue” needs to be discussed, validated and prioritised, and an appropriate plan for resolution defined, including assigning it to the correct owner to ensure it is resolved.  This is the purpose of Triage.
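A hypothetical triage record might look like this (the field names are invented for illustration, not any particular tracker’s schema); the point is simply that every issue leaves triage validated, prioritised, planned and owned:

```python
from dataclasses import dataclass

@dataclass
class TriagedIssue:
    """Outcome of a triage discussion: validated, prioritised,
    planned and assigned -- regardless of the bug's root cause."""
    summary: str
    valid: bool           # confirmed as a real unexpected result?
    priority: int         # 1 = fix now; larger numbers = later
    resolution_plan: str
    owner: str            # who ensures it actually gets resolved

# The famous Mark II moth, triaged in this hypothetical schema:
issue = TriagedIssue(
    summary="Moth trapped in relay of Mark II",
    valid=True,
    priority=1,
    resolution_plan="Remove moth, tape it to the log book",
    owner="Operators",
)
assert issue.valid and issue.owner == "Operators"
```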

After all, it’s all about a product/project being released with Quality and Quality is owned by everyone, right?

More Reading:


Posted by on September 14, 2011 in Quality Assurance