In the never-ending quest to improve speed to market, in a world with as many different testing methodologies as there are development methodologies (when they aren’t the same thing), it can be a challenge to maintain the level of quality that we, as professional testers, pride ourselves on.
One concept that has been talked about in recent years is ‘Exploratory Testing’ (ET): the concept of parallel learning, test planning and test execution. In my travels, ET is often mixed up with Ad Hoc testing (or Bug Bashing). This is an easy misunderstanding to fall into, especially when attempting to compare it to the more traditional ‘Scripted Testing’. So what is the difference? James Bach has a pretty succinct briefing on what ET is, but here is a brief comparison in my own words.
Key Points – Scripted vs Ad Hoc vs Exploratory
- Scripted
  - Requires a large effort during the Analysis/Requirements gathering phase of the project; it leaves little room for uncertainty in requirements.
  - New tests and changes to requirements need to be planned for. As we all know, changes can affect the plan greatly. This often leads to a large amount of rigidity in the development/testing process, particularly in time-sensitive projects. We need to be prepared to pay the cost of documenting and maintaining tests, and without adequate Change Management, things can be missed or misunderstood.
  - Depending on the AUT there may be a large amount of duplication across Dev NUnit tests, QA Test Scripts and User/Business Acceptance tests.
  - Any bug/scenario investigation, or any other deviation from the plan, is deemed “wasted time”, or more often than not is simply not tracked against the plan. This can result in a test phase that appears to be running behind estimates and out of time.
- Ad Hoc
  - Definition: “for the special purpose or end presently under consideration”.
  - Too often synonymous with sloppy, careless work or improvised, impromptu “Bug Bashing”. While it can be quite valuable, it really doesn’t have a good reputation.
  - The lack of a framework or management means results are unreliable and unclear. How can we determine whether something was tested or not?
- Exploratory
  - Definition: simultaneous learning, test planning and test execution.
  - Operates in sessions at a scenario level, and fits closely with Use Case/User Story techniques.
  - The term “exploratory” emphasizes the dominant thought process. The skill of the tester becomes a factor in terms of technical skills, business/domain knowledge and general curiosity.
  - The key is the cognitive engagement of the tester, and the tester’s responsibility for managing his or her time.
  - A good exploratory tester will write down test ideas and use them in later cycles. These notes often look like test scripts, although they aren’t.
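To make the contrast concrete, here is a minimal sketch (all names and the toy `apply_discount` logic are hypothetical, not from any real project): a scripted test fixes its steps and expected results up front, while an exploratory session is a time-boxed mission guided by a charter, with test ideas and bugs captured as the tester learns.

```python
from dataclasses import dataclass, field

def apply_discount(price, percent):
    """Toy application logic under test (purely illustrative)."""
    return round(price * (1 - percent / 100), 2)

# --- Scripted: steps and expected results decided during planning ---
def scripted_test_discount():
    # Step 1: apply a 10% discount to $200.00, expect $180.00
    assert apply_discount(200.00, 10) == 180.00
    # Step 2: apply a 0% discount, expect the price unchanged
    assert apply_discount(200.00, 0) == 200.00
    return "pass"

# --- Exploratory: a charter guides the session; notes emerge as we learn ---
@dataclass
class ExploratorySession:
    charter: str                                # a mission, not a script
    time_box_minutes: int = 60
    notes: list = field(default_factory=list)   # test ideas, reusable next cycle
    bugs: list = field(default_factory=list)

    def note(self, text):
        self.notes.append(text)

    def log_bug(self, text):
        self.bugs.append(text)

session = ExploratorySession(charter="Explore discount edge cases")
session.note("What happens with a negative percentage?")
if apply_discount(100.00, -10) > 100.00:
    session.log_bug("Negative discount inflates the price")

print(scripted_test_discount())
print(len(session.notes), len(session.bugs))
```

Note how the session’s `notes` list is exactly the kind of artifact described above: it can look like a test script in later cycles, but it was produced by exploration, not planned in advance.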
Concerns with Exploratory Testing
So while ET really does seem like the way to go, a couple of open questions still trouble me:
- At what stage can the testers get involved?
I am a big fan of working closely with the devs; maybe not quite paired testing, but with a focus on testing early and testing often. To me, ET seems to depend on an area of functionality being complete (or at least close to it); otherwise it would seem like a waste of time.
- How can we be sure we tested everything?
There appears to be a large dependency on the requirements being at a certain level of detail. What is the link between the defined Acceptance Criteria and test coverage? Having worked on projects with a high level of complexity and a large number of input combinations, how can we be sure that we have tested them all?
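The combinations concern is easy to underestimate, so here is a toy illustration (the configuration axes are hypothetical, not from any real AUT): even a handful of independent options multiplies into a test matrix far too large to script exhaustively, which is part of why risk-based exploration becomes attractive.

```python
from itertools import product

# Hypothetical configuration axes for an application under test
browsers   = ["Chrome", "Firefox", "IE"]
locales    = ["en-AU", "en-US", "fr-FR", "de-DE"]
user_roles = ["guest", "member", "admin"]
payments   = ["credit", "paypal", "voucher", "invoice", "none"]

all_combinations = list(product(browsers, locales, user_roles, payments))
print(len(all_combinations))  # 3 * 4 * 3 * 5 = 180 cases for a single feature
```

Techniques such as pairwise (all-pairs) selection can cut a matrix like this down dramatically, but the underlying question remains: whichever cases we drop, we are relying on judgement rather than exhaustive proof of coverage.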
Which one to use?
I like to think of scripted testing as “Prove that it works”, as opposed to exploratory testing as “Prove that it’s broken”. With this in mind, as the kid in the tacos commercial says: ¿por qué no los dos? (Why don’t we have both?).
The more I think about implementing ET, the more a blend of both seems the logical flow to follow: scripted testing allows us to test early on specific features/rules and to verify specific Acceptance Criteria, while ET allows us to increase test coverage by focusing on higher-level areas of functionality (an attempt to visualise this can be seen in Fig. 1).
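As a rough sketch of what that blend might look like on paper (every name here is hypothetical, just to show the shape): scripted checks pinned to individual Acceptance Criteria and run on every build, alongside time-boxed exploratory sessions chartered against broader functional areas.

```python
# A blended test plan sketch: scripted checks per Acceptance Criterion,
# exploratory sessions per functional area (all entries are made up).
test_plan = {
    "scripted": {
        "AC-1: discount applies to sale items": "automated, runs every build",
        "AC-2: order total includes tax": "automated, runs every build",
    },
    "exploratory": [
        {"charter": "Explore checkout with unusual cart contents", "time_box": 60},
        {"charter": "Explore discount/voucher interactions", "time_box": 90},
    ],
}

for criterion in test_plan["scripted"]:
    print("script:", criterion)
for session in test_plan["exploratory"]:
    print("explore:", session["charter"])
```

The point of the split is the same as in Fig. 1: the scripted half gives early, repeatable proof against the requirements, while the exploratory half spends tester skill where the requirements run out.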
I will post more on this subject, and the progress of implementation, as I proceed. Wish me luck :).
Here’s some further reading for your pleasure. This is just a tiny selection of all the information out there.
- Wikipedia – http://en.wikipedia.org/wiki/Exploratory_testing
- Exploratory Testing Explained – http://www.satisfice.com/articles/et-article.pdf
- What is Exploratory Testing – http://www.satisfice.com/articles/what_is_et.shtml
- General Functionality and Stability Test Procedure for Certified for Microsoft Windows Logo – http://www.satisfice.com/tools/procedure.pdf
- Explaining Exploratory Testing – http://blogs.msdn.com/james_whittaker/archive/2009/01/08/explaining-exploratory-testing.aspx
- Exploratory Testing in Pairs – http://www.kaner.com/pdfs/exptest.pdf
- Exploratory Testing – Good and Bad sides – http://www.testandtry.com/2010/01/26/exploratory-testing-good-and-bad-sides/
- Example Exploratory Test – http://wiki.laptop.org/go/1_hour_smoke_test
- Context Driven School – http://en.wikipedia.org/wiki/Context-Driven_School