
Just-In-Time Test Planning

One of the biggest changes in thinking necessary for a Tester in an Agile world is the concept of Just-In-Time (JIT) Test Planning.

One deliverable that (I feel) was expected of us in the Waterfall model is the Gigantic Test Plan and a HUUGE Test Suite with thousands of Test Cases.  It is, after all, one way to guarantee confidence in a product.

But thousands of test cases take a long time to plan and execute, which simply does not fit in the Agile world, or indeed in any RAD world.  Not only that, but it doesn’t really make sense.  As Anne-Marie Charrett said at STANZ 2011, “Why spend all your thinking time planning how not to think?”  It makes us inflexible to change, it reduces the actual testing time (i.e. testing, not just following a script) and it reduces the efficiency of the testing when the tester finally gets a chance to test something.

Here’s where you may expect me to go on to talk about Exploratory Testing and how we are using it at Seek.  While it is definitely a big part of the way we do things, there is still a higher-level need to plan our testing and to decide where Exploratory Testing sits alongside Automated and Manual (Scripted) Testing.  After all, sometimes we need to script a test for something that cannot or should not be automated (e.g. de-coupled systems, timing dependencies, DB hacking dependencies, etc.).  Hence JIT Test Planning.

The concept of JIT as it applies to Manufacturing and Development is nothing new.  As anyone who knows anything about the history of Agile development would know, it’s been around since the 60s/70s as one of Toyota’s techniques to meet fast-changing consumer demands with minimum delays.

JIT Test Planning is a strategy of spending more time working with the BA to define the requirements before taking those requirements and building the Plan for how you will test them.

One underlying goal of the way we are doing things is to avoid as much duplication of documentation as possible.  i.e. if the User Story says Button A needs to do X, do we really need to write a separate document telling the tester to press Button A and make sure X happens?  The manual test case works in collaboration with the User Story.  It allows the Tester to define the most efficient way to verify the Acceptance Criteria without repeating the Acceptance Criteria word for word.  The test case now focuses on providing instructions on how to test something, not just what to test.
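As a rough illustration of that idea (the story ID, button and flag here are entirely hypothetical, not a description of any real Seek test), a test can point back to the User Story’s Acceptance Criterion by ID and keep only the “how” in its own body:

```python
# Hypothetical sketch: the test cites the Acceptance Criterion by ID
# instead of duplicating its wording; only the setup and steps (the
# "how") live in the test itself.

def press_button_a(state):
    """Stand-in for the system under test: pressing Button A causes X."""
    state["x_happened"] = True
    return state

def test_story_123_ac1_button_a_triggers_x():
    """Covers STORY-123 / AC1 -- see the User Story for the expected behaviour."""
    state = {"x_happened": False}   # how: start from a known clean state
    state = press_button_a(state)   # how: exercise Button A directly
    assert state["x_happened"]      # what: defined by AC1, not repeated here
```

The point is not the tooling but the split of responsibility: if the Acceptance Criterion changes, only the User Story is edited; the test keeps its reference and its steps.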

Probably the best way of proving the beneficial effects of JIT Planning is to show an example graph from one of the projects it has been used on.  Note: The following graph covers manual testing only.

[Graph: The New Way]

The main thing to notice (besides the high quality) is the lack of ‘Grey’ area, where Grey represents planned but un-executed testing.  No Grey means no wasted time.  The lack of Grey is the result of two things:

  1. We do not begin planning until all Acceptance Criteria have been detailed and the User Story has been accepted by Dev and QA.
  2. Because Dev and QA begin at the same time, by the time QA is ready to test, there is already a product to be tested.

In summary, JIT Planning gives us clarity, relevance and accuracy of requirements, as they are generated in collaboration without months or years between Business Analysis and Development, as well as confidence in Test Coverage.  It allows much greater flexibility and acceptance of change (the Agile way), and it all works in collaboration with Exploratory and Automated testing.  I’ll talk more about the end-to-end process we follow in future posts.


Posted by on September 30, 2011 in Quality Assurance


What is a Bug?

It’s becoming clearer and clearer that there is a lot of confusion around what a bug actually is.  How many times have you heard “That’s not a bug”?  Although we could go into an extended discussion around the difference between a Defect, a Bug and an Error, for the sake of expediency let’s assume we are referring to the same thing.

It seems that in today’s world, when a tester raises a bug it is assumed to be an issue with the developer’s code, but the fact is a bug is simply the product of an unexpected result.  It seems like such a simple thing until one ponders all the possible reasons for an unexpected result.

  1. Developer/Code
    1. It is a development/coding error
    2. It is an unpredicted scenario/combination
  2. Environment/Configuration
    1. It is a deployment issue
    2. It is an environment/data issue
  3. Requirement/Analysis
    1. It is an incorrect requirement
    2. It is a missed requirement
    3. It is an incorrectly managed change request
    4. Non-functional requirements
  4. Test
    1. It is an incorrect test
    2. It is a misunderstanding

(the list really does go on)

None of these examples negate the relevance of the defect at hand.  I can only assume that anyone reading this knows the story of the first bug ever found, erroneously attributed to Grace Hopper:

In 1946, when Hopper was released from active duty, she joined the Harvard Faculty at the Computation Laboratory where she continued her work on the Mark II and Mark III. Operators traced an error in the Mark II to a moth trapped in a relay, coining the term bug. This bug was carefully removed and taped to the log book. Stemming from the first bug, today we call errors or glitch’s [sic] in a program a bug.


This is a perfect example of the fact that a bug is not necessarily the fault of anything or anyone in particular but may be the result of a particular set of circumstances that couldn’t have been predicted or managed.

Regardless, each “issue” needs to be discussed, validated and prioritised, and an appropriate plan for resolution needs to be defined, including assigning it to the correct owner to ensure it is resolved.  This is the purpose of Triage.
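To make the triage idea concrete (a minimal sketch only, not a real triage tool; the record fields and category names simply mirror the list of root causes above), a triage record might capture the root cause, priority and owner explicitly:

```python
from dataclasses import dataclass
from enum import Enum

class RootCause(Enum):
    # The top-level categories from the list above: an unexpected
    # result can come from any of these, not just the code.
    DEVELOPER_CODE = "Developer/Code"
    ENVIRONMENT_CONFIG = "Environment/Configuration"
    REQUIREMENT_ANALYSIS = "Requirement/Analysis"
    TEST = "Test"

@dataclass
class TriageRecord:
    summary: str
    root_cause: RootCause
    priority: int          # e.g. 1 = fix now, 3 = backlog
    owner: str             # whoever is best placed to resolve it

    def is_dev_fault(self) -> bool:
        """Only one of the four categories actually points at the code."""
        return self.root_cause is RootCause.DEVELOPER_CODE

# A bug caused by a deployment problem is still a valid bug --
# triage just routes it to a different owner.
bug = TriageRecord("Login fails on staging",
                   RootCause.ENVIRONMENT_CONFIG, priority=1, owner="ops-team")
```

The design point is that the record separates “is this a real issue?” (it is) from “whose queue does it land in?”, which is exactly the distinction the taxonomy above is making.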

After all, it’s all about a product/project being released with Quality and Quality is owned by everyone, right?


Posted by on September 14, 2011 in Quality Assurance


To Blog or Not to Blog…

So I spent two days last week at STANZ in Melbourne.  It was fantastic to get out of my testing cave and see and speak to many people across the testing industry as it stands today, and I have to say it was quite an inspiring and encouraging event.

It did remind me of one thing, though.  It’s been quite a while since I blogged anything.  Some of the things we’ve been working on here at Seek (Tools, Processes, People, etc.) have, I think, been really fantastic.  Talking with some people at STANZ about what we’ve been doing and seeing their responses (mostly good ;)) kinda points out that maybe it’s time to begin putting my thoughts down on paper (on-screen), partially for my own benefit (you know, organising my thoughts) but also in case there is anyone out there who may find the way we are doing things interesting or, more importantly, anyone who has input into how we can do things better.  We don’t have all the answers, after all.

Let’s see how I go in maintaining regular posts 🙂


Posted by on September 5, 2011 in Quality Assurance