Monthly Archives: September 2010

Windows 7 shortcut keys – Something small, simple but ooooh so useful

Apologies if you already knew this (damn you for not telling me) or if you simply don’t care.

Are you getting sick of constantly switching applications from one monitor to another manually, constantly minimising, dragging and dropping, then maximising again (e.g. browser windows, PVCS, etc.)?  I know I am, and I am sick of it!!!

So I went looking for one of those multi-monitor apps they had on Win XP (most of which you had to pay for) and found that Windows 7 has it built in (who woulda thunk?).  Win+Shift+Left or Right will move the current window from one monitor to the other.  <sigh> So good.

Note: this is for Win7 and Win2K8 only (as far as I know)

Here’s a list of shortcut keys that may make your life easier. A few of my favourites (far from the complete list):

  • Win+Left / Win+Right: snap the current window to the left/right half of the screen
  • Win+Up / Win+Down: maximise / restore (then minimise) the current window
  • Win+Shift+Left / Win+Shift+Right: move the current window to the other monitor
  • Win+Home: minimise every window except the current one
  • Win+D: show the desktop
  • Win+P: switch display/projector modes
  • Win+(number): launch or switch to the app pinned at that position on the taskbar

Enjoy 🙂


Posted on September 10, 2010 in Uncategorized


Exploratory Testing – Is it for me?

In the never-ending quest to improve speed to market, in a world with as many different testing methodologies as there are development methodologies (when they aren’t the same thing), it can be a challenge to maintain the level of quality that we, as professional testers, pride ourselves on.

One concept that has been talked about in recent years is ‘Exploratory Testing’ (ET): the concept of parallel learning, test planning and test execution. In my travels ET is often mixed up with Ad Hoc testing (or Bug Bashing). This is an easy misunderstanding to fall into, especially when attempting to compare it to the more traditional ‘Scripted Testing’. So what is the difference? James Bach has a pretty succinct briefing on what ET is, but here is a brief comparison in my own words.

Key Points – Scripted vs Ad Hoc vs Exploratory

  • Scripted
    • Requires large effort during the Analysis/Requirements gathering phase of the project. There can be little uncertainty in requirements.
    • New tests/changes to requirements need to be planned for. As we all know, changes can affect the plan greatly. This often leads to a large amount of rigidity in the development/testing process, particularly in time-sensitive projects. We need to be prepared to pay the cost of documenting and maintaining tests. Without adequate Change Management, things can be missed/misunderstood.
    • Depending on the AUT (application under test) there may be a large amount of duplication across Dev NUnit tests, QA test scripts and User/Business Acceptance tests.
    • Any bug/scenario investigation, or any other deviation from the plan, is deemed ‘wasted time’, or (more often than not) simply isn’t tracked against the plan. This can result in a test phase that appears to be running behind estimates and running out of time.
  • Ad Hoc
    • Definition: “for the special purpose or end presently under consideration”.
    • Too often synonymous with sloppy careless work or improvised, impromptu “Bug Bashing”. While it can be quite valuable, it really doesn’t have a good reputation.
    • Lack of framework/management means results are unreliable and unclear. How can we determine whether something was tested or not?
  • Exploratory
    • Definition: Simultaneous Learning, Test Planning and Test Execution.
    • Operates in sessions at a scenario level; fits closely with Use Case/User Story techniques.
    • The term “exploratory” emphasises the dominant thought process. The skill of the tester becomes a factor in terms of technical skills, business/domain knowledge and general curiosity.
    • The key is the cognitive engagement of the tester, and the tester’s responsibility for managing his or her time.
    • A good exploratory tester will write down test ideas and use them in later cycles. These notes often look like test scripts, although they aren’t.
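As a toy sketch of what those session notes might look like in a structured form (the field names here are my own invention, not any standard), something like this captures a charter plus the test ideas and bugs that fall out of one time-boxed session:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One time-boxed exploratory testing session (SBTM-style)."""
    charter: str                # the mission: what to explore, looking for what
    tester: str
    duration_mins: int          # typical time-box: 60-120 minutes
    test_ideas: list = field(default_factory=list)  # reusable in later cycles
    bugs: list = field(default_factory=list)

session = Session(
    charter="Explore the checkout flow with invalid card numbers, "
            "looking for unhandled validation errors",
    tester="me",
    duration_mins=90,
)
session.test_ideas.append("Try a 15-digit card number")
session.bugs.append("500 error on empty expiry date")

print(len(session.test_ideas), len(session.bugs))
```

The point isn’t the class itself; it’s that the charter and the ideas list are artefacts you can review and reuse, which is exactly what separates ET from undocumented Ad Hoc bashing.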

Concerns with Exploratory Testing

So while ET really does seem like the way to go, there are a couple of open questions in my mind that trouble me:

  1. At what stage can the testers get involved?
    I am a big fan of working closely with the Devs, maybe not quite paired testing, but with a focus on test early and test often. To me, ET seems to depend on an area of functionality being complete (or at least close to it), or it would seem like a waste of time.
  2. How can we be sure we tested everything?
    There appears to be a large dependency on the requirements being at a certain level of detail. What is the link between the defined Acceptance Criteria and test coverage? Having worked on projects with a high level of complexity and a large number of input combinations, how can we be sure that we have tested all of those combinations?
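To make the combinations worry concrete: even a modest set of test parameters (hypothetical ones below) explodes quickly, and simply enumerating the space tells us what exhaustive scripted coverage would actually cost:

```python
from itertools import product

# Hypothetical parameters for a single screen under test
browsers   = ["IE8", "Firefox", "Chrome"]
user_types = ["guest", "member", "admin"]
locales    = ["en-AU", "en-US", "fr-FR", "de-DE"]
payments   = ["credit", "paypal", "voucher"]

# Every full combination of the four parameters
all_combos = list(product(browsers, user_types, locales, payments))
print(len(all_combos))  # 3 * 3 * 4 * 3 = 108 scripted cases for ONE screen
```

Techniques such as pairwise (all-pairs) testing exist precisely to cut this down while still covering every two-way interaction, which is one answer to the coverage question above.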

Which one to use?

I like to think of scripted testing as “Prove that it works”, as opposed to exploratory being “Prove that it’s broken”. With this in mind, as the kid in the taco commercial says, ¿por qué no los dos? (Why not both?).

The more I think about implementing ET, the more the logical flow seems to be a blend of both: scripted testing allows us to test early on specific features/rules and to verify specific Acceptance Criteria, while ET allows us to increase test coverage by focusing on higher-level areas of functionality (an attempt to visualise this can be seen in Fig. 1).

Fig. 1: Scripted AND Exploratory
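The “prove it works” vs “prove it’s broken” split can be sketched in code. Assuming a hypothetical apply_discount function as the thing under test, a scripted check pins down a specific Acceptance Criterion, while an exploratory-style probe throws randomised input at the same code and checks invariants instead of exact values:

```python
import random

def apply_discount(price, pct):
    """Hypothetical function under test: apply a pct% discount to price."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return price * ((100 - pct) / 100)

# Scripted: "prove that it works" - verifies a stated Acceptance Criterion
assert apply_discount(100.0, 25) == 75.0

# Exploratory-style probe: "prove that it's broken" - random inputs,
# checking an invariant (result stays between 0 and the original price)
random.seed(0)
for _ in range(1000):
    price = random.uniform(0, 10_000)
    pct = random.uniform(0, 100)
    result = apply_discount(price, pct)
    assert 0 <= result <= price, (price, pct, result)

print("both styles passed")
```

Neither style replaces the other: the scripted assertion is cheap regression evidence for a known rule, and the probe is the cognitive-engagement part, hunting for the inputs nobody wrote a script for.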

I will post more on this subject, and the progress of implementation, as I proceed.  Wish me luck :).

More Reading

Here’s some further reading for your pleasure. This is just a tiny selection of all the information out there.


Posted on September 8, 2010 in Quality Assurance