To Blog or Not to Blog…

So I spent two days last week at STANZ in Melbourne.  It was fantastic to get out of my testing cave and speak to many people from across the testing industry as it stands today, and I have to say it was quite an inspiring and encouraging event.

It did remind me of one thing, though: it’s been quite a while since I blogged anything.  Some of the things we’ve been working on here at Seek (tools, processes, people, etc) have been really fantastic.  Talking with people at STANZ about what we’ve been doing, and seeing their responses (mostly good ;)), made me realise it’s time to start putting things down on paper (well, on screen), partly for my own benefit (you know, organising my thoughts), but also in case there is anyone out there who finds the way we are doing things interesting or, more importantly, who has input into how we can do things better.  We don’t have all the answers, after all.

Let’s see how I go in maintaining regular posts 🙂


Posted by on September 5, 2011 in Quality Assurance


Exploratory Testing – More Thoughts

So, we have been giving ET a lot more thought over the last couple of months and there has been much reading and discussion, some of which has been quite spirited (the discussion, not the reading).  While there is still a long way to go, here are some of the points coming out of the discussions.

Should ET be Session-based?

There is a big push towards session-based testing, in both small teams and large, extended teams, i.e. getting people from outside the team in to test in a single 90-minute session at the end of each iteration.  There is no doubt in my mind that this form of testing is extremely valuable, but the question that keeps coming to mind is: is this ET, and is it always valuable, or even necessary?  Sure, there needs to be some sort of exploration involved for people to find any issues remaining in the system, but does it match the dictionary definition of ET?

Another question that comes to mind, as mentioned in an earlier post, is: when can we begin this type of ET?  Correct me if I’m wrong, but before we can enter this form of session-based testing the SUT needs to be at a certain level of quality.  How do we get there without scripted testing, if indeed we are aiming to replace scripted testing with ET?

Where are we headed?

While re-reading one of James Bach’s blog posts (yet again) I had a bit of an epiphany. Well, a couple, really.

  1. What is the difference between a scripted test and an ET test?  How do we write a scripted test?  We script a test by interrogating the requirements in order to “explore” the imagined SUT.  We rely on the tester to accurately translate the requirements into a test that predicts what the finished product will be.  Therefore the only real difference between scripted testing and ET is the timing of when the script is written.  As mentioned in just about every web page and blog post on ET, one of its drawbacks is that we cannot review the proposed tests before testing.  But this risk is offset by ensuring that the testing effort is now more focused on the most valuable parts of the site.
  2. “Exploratory testing is especially useful in complex testing situations….” as James Bach says.  This leads me to the thought that scripted testing and ET can, and should, live quite nicely together.  A lot of our time is wasted trying to predict and script all the different combinations and paths possible through our SUT, but at the same time we need to prove that the system works as required.  It is in this environment that scripted testing and ET can live together.  Given we are operating off User Stories and Scenarios, we can easily define our Acceptance Criteria and therefore our critical set of scripted tests.  These tests usually tie directly into automation.  The Tester’s role could now look something like:
    1. analyse the defined Stories/Scenarios/Acceptance Criteria,
    2. add any extra scenarios that have been missed (the Testers know the system best, after all ;)),
    3. write the scripted tests directly off the scenarios/acceptance criteria,
    4. determine the areas of functionality that will be ET’d, based on a Risk and Coverage analysis (eg: website Ajax transitions, combinatorial cases, etc),
    5. define the Tours and Charters to be used in the ET sessions, and
    6. lead the execution of these ET sessions in each iteration.
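To make step 5 a little more concrete, here is a rough sketch of how a charter for one ET session could be recorded alongside the scripted tests it complements. This is just an illustration, not our actual tooling; all the names, missions and areas below are hypothetical examples.

```python
# A hypothetical way to record an ET session charter as a simple record,
# so session notes and test ideas aren't lost between iterations.
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A mission statement for one time-boxed ET session."""
    mission: str                               # what to explore
    areas: list                                # parts of the SUT in scope
    duration_mins: int = 90                    # typical session length
    notes: list = field(default_factory=list)  # test ideas / observations

    def log(self, note):
        """Jot down an observation or test idea during the session."""
        self.notes.append(note)

# Example: a charter driven by the Risk/Coverage analysis in step 4
charter = Charter(
    mission="Explore Ajax page transitions for stale data",
    areas=["search results", "saved searches"],
)
charter.log("Back button after Ajax refine shows cached results")
print(len(charter.notes))  # 1
```

The point of keeping charters as lightweight records like this is that the notes can feed straight back into the next iteration’s scripted tests.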
[Figure: Simple depiction of how User Stories can lead to both Scripted tests and ET]

By doing it this way we can both “prove that it works” (via scripted testing) and “prove that it’s broken” (via ET).  We can improve our efficiency by focusing both forms of testing on the parts of the SUT that need them most (based on the Risk/Coverage analysis).  We still get the majority, if not all, of the metrics we need out of the system in order to make informed decisions about the quality of the SUT.

So this is where we are at for now.  Sorry, it’s been a bit of a ramble.  There are still lots of challenges ahead and everything may change tomorrow, but with a common goal in mind we are on the right track.  I’ll let you know how we progress 🙂


Posted by on October 11, 2010 in Quality Assurance


Windows 7 shortcut keys – Something small, simple but ooooh so useful

Apologies if you already knew this (damn you for not telling me) or if you simply don’t care.

Are you getting sick of constantly switching applications from one monitor to another manually: minimising, dragging and dropping, then maximising again (eg: browser windows, PVCS, etc)?  I know I am, and I am sick of it!!!

So I went looking for one of those multi-monitor apps they had on Win XP (most of which you had to pay for) and found that Windows 7 has it built in (who woulda thunk?).  Win+Shift+Left Arrow or Win+Shift+Right Arrow moves the current window from one monitor to the other.  <sigh> So good.

Note: this is for Win7 and Win2K8 only (as far as I know)

Here’s a complete list of shortcut keys that may make your life easier:

Enjoy 🙂


Posted by on September 10, 2010 in Uncategorized


Exploratory Testing – Is it for me?

In the never-ending quest to improve speed to market, in a world with as many different testing methodologies as there are development methodologies (when they aren’t the same thing), it can be a challenge to maintain the level of quality that we, as professional testers, pride ourselves on.

One concept that has been talked about in recent years is ‘Exploratory Testing’ (ET): the concept of parallel learning, test planning and test execution. In my travels ET is often mixed up with ad hoc testing (or bug bashing). This is an easy misunderstanding to fall into, especially when attempting to compare it to the more traditional ‘Scripted Testing’. So what is the difference? James Bach has a pretty succinct briefing on what ET is, but here is a brief comparison in my own words.

Key Points – Scripted vs Ad Hoc vs Exploratory

  • Scripted
    • Requires a large effort during the Analysis/Requirements gathering phase of the project. There can be little uncertainty in the requirements.
    • New tests and changes to requirements need to be planned for. As we all know, changes can affect the plan greatly. This often leads to a large amount of rigidity in the development/testing process, particularly in time-sensitive projects. We need to be prepared to pay the cost of documenting and maintaining tests, and without adequate Change Management, things can be missed or misunderstood.
    • Depending on the AUT, there may be a large amount of duplication across Dev NUnit tests, QA Test Scripts and User/Business Acceptance tests.
    • Any bug/scenario investigation, or any other deviation from the plan, is deemed “wasted time” or, more often than not, is simply not tracked against the plan. This can result in a test phase that appears to be running behind estimates and out of time.
  • Ad Hoc
    • Definition: “for the special purpose or end presently under consideration”.
    • Too often synonymous with sloppy, careless work, or improvised, impromptu “Bug Bashing”. While it can be quite valuable, it really doesn’t have a good reputation.
    • Lack of framework/management means results are unreliable and unclear. How can we determine whether something was tested or not?
  • Exploratory
    • Definition: Simultaneous Learning, Test Planning and Test Execution.
    • Operates in Sessions at a scenario level, fits closely with Use Case/User stories techniques.
    • The term “exploratory” emphasizes the dominant thought process. The skill of the tester becomes a factor, in terms of technical skills, Business/Domain knowledge and general curiosity.
    • The key is the cognitive engagement of the tester, and the tester’s responsibility for managing his or her time.
    • A good exploratory tester will write down test ideas and use them in later cycles. These notes often look like test scripts, although they aren’t.

Concerns with Exploratory Testing

So while ET really does seem like the way to go, there are a couple of open questions in my mind that trouble me:

  1. At what stage can the testers get involved?
    I am a big fan of working closely with the Devs; maybe not quite paired testing, but with a focus on test early, test often. To me, ET seems to depend on an area of functionality being complete (or at least close to it), or it would seem like a waste of time.
  2. How can we be sure we tested everything?
    There appears to be a large dependency on the requirements being at a certain level of detail. What is the link between the defined Acceptance Criteria and Test Coverage? Having worked on some projects with a high level of complexity and a large number of input combinations, how can we be sure that we have tested all of those combinations?
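The combinations worry above is easy to demonstrate. As a toy illustration (the parameters below are invented, not a real SUT), even a single small search feature with a handful of options produces far more combinations than anyone would want to script exhaustively:

```python
# Hypothetical parameters for one small job-search feature.
# The point is just the arithmetic: exhaustive scripting explodes fast.
from itertools import product

browsers = ["IE8", "Firefox", "Chrome"]
locations = ["Melbourne", "Sydney", "Brisbane", "Perth"]
work_types = ["Full Time", "Part Time", "Contract", "Casual"]
salary_bands = ["0-40k", "40-70k", "70-100k", "100k+"]

# Every combination of every parameter value
combos = list(product(browsers, locations, work_types, salary_bands))
print(len(combos))  # 3 * 4 * 4 * 4 = 192 scripted cases for one feature
```

192 cases for four parameters on one feature; add a fifth parameter and it multiplies again. This is exactly the territory where risk-based ET seems more honest than pretending a script suite covers everything.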

Which one to use?

I like to think of scripted testing as “Prove that it works” as opposed to exploratory testing as “Prove that it’s broken”. With this in mind, as the kid in the taco commercial says, ¿por qué no los dos? (Why don’t we have both?).

The more I think about implementing ET, the more it seems the logical flow to follow is a blend of both: scripted testing allows us to test early on specific features/rules and to verify specific Acceptance Criteria, while ET allows us to increase test coverage by focusing on higher-level areas of functionality (an attempt to visualise this can be seen in Fig. 1).

[Fig. 1: Scripted AND Exploratory]
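To make the “prove it works” half of the blend concrete, here is a sketch of a scripted test written straight off an acceptance criterion. The criterion, the `search_jobs` function and the job data are all invented for illustration; the real thing would drive the actual SUT.

```python
# Hypothetical acceptance criterion:
#   "Searching with a keyword only returns jobs whose title matches."
def search_jobs(jobs, keyword):
    """Toy stand-in for the SUT's search feature (not real code)."""
    return [j for j in jobs if keyword.lower() in j["title"].lower()]

def test_keyword_search_returns_only_matching_jobs():
    jobs = [
        {"title": "Senior Tester"},
        {"title": "Java Developer"},
        {"title": "Test Analyst"},
    ]
    results = search_jobs(jobs, "test")
    # The scripted test verifies the criterion exactly, nothing more
    assert len(results) == 2
    assert all("test" in j["title"].lower() for j in results)

test_keyword_search_returns_only_matching_jobs()
print("acceptance criterion verified")
```

The scripted test pins down the agreed behaviour and can run in automation on every build; everything the criterion doesn’t pin down (odd keywords, Ajax transitions, weird combinations) is where the ET sessions earn their keep.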

I will post more on this subject, and the progress of implementation, as I proceed.  Wish me luck :).

More Reading

Here’s some further reading for your pleasure. This is just a tiny selection of all the information out there.


Posted by on September 8, 2010 in Quality Assurance


First Post…

So I guess it’s time I jumped on the bandwagon and started up a blog of my own, since all the cool kids are doing it.  I guess the main reason for starting it up, after delaying for so long, is to start getting some of my thoughts on various topics out of my head and onto “paper”.

Firstly, who am I?  My name is Rob Manger and I am currently the QA Team Lead at Seek.  I am passionate about process and toolset improvement when it comes to Quality Assurance.  I also love my gadgets and am just starting to get into photography.  So you can expect the topics covered in this blog to be near and dear to my heart.


Posted by on July 26, 2010 in Uncategorized