So, we have been giving ET a lot more thought over the last couple of months and there has been much reading and discussion, some of which has been quite spirited (the discussion, not the reading). While there is still a long way to go, here are some of the points coming out of the discussions.
Should ET be Session-based?
There is a big push towards Session-based testing, for both small teams and large, extended teams, i.e. getting people from outside the team in to test in a single 90-minute session at the end of each iteration. There is no doubt in my mind that this form of testing is extremely valuable, but the questions that keep coming to mind are: is this ET, and is it always valuable, or even necessary? Sure, there needs to be some sort of exploration involved for people to find any issues remaining in the system, but does it match the dictionary definition of ET?
Another question that comes to mind, as mentioned in an earlier post, is: when can we begin this type of ET? Correct me if I’m wrong, but before we can enter into this form of session-based testing the SUT needs to be at a certain level of quality. How do we get there without scripted testing, if indeed we are aiming to replace Scripted testing with ET?
Where are we headed?
While re-reading one of James Bach’s blog posts (yet again) I had a bit of an epiphany. Well, I had a couple really.
- What is the difference between a Scripted test and an ET test? How do we write a scripted test? We script a test by interrogating the requirements in order to “explore” the imagined SUT. We rely on the Tester to accurately translate the requirements into a test that predicts what the finished product will be. Therefore the only real difference between Scripted and ET is the timing of when the script is written. As mentioned in just about every web page/blog post on the internet regarding ET, one of the drawbacks of ET is that we cannot review the proposed tests before testing. But this risk is offset by ensuring that the testing effort is now more focused on the most valuable parts of the site.
- “Exploratory testing is especially useful in complex testing situations….” as James Bach says. This leads me to the thought that Scripted and ET can and should live quite nicely together. A lot of our time is wasted trying to predict and script all the different combinations and paths possible through our SUT, but at the same time we need to prove that the system works as required. It is in this environment that Scripted and ET can live together. Given we are operating off User Stories and Scenarios, we can easily define our Acceptance Criteria and therefore our critical area of Scripted Tests. These tests usually tie directly into Automation. The Tester’s role could now look something like:
- analyse the defined Stories/Scenarios/Acceptance Criteria,
- add any extra scenarios if they have been missed (the Testers know the system best, after all ;)),
- write the scripted tests directly off the scenarios/acceptance criteria,
- determine the areas of functionality that will be ET’d based on a Risk and Coverage analysis (eg: Website Ajax transitions, Combinatorial, etc),
- define the Tours and Charters to be used in the ET sessions,
- lead the execution of these ET sessions on each iteration.
Simple depiction of how User Stories can lead to both Scripted tests and ET
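To make the “scripted tests directly off the acceptance criteria” step a little more concrete, here’s a minimal sketch in Python. The `Cart` class, the story, and the criterion are all invented for illustration; they stand in for a real SUT, which in practice the scripted test would drive through its automation layer.

```python
# Hypothetical example: a scripted test written straight off an
# acceptance criterion. Cart is a toy stand-in for the SUT.
#
# Story: "As a shopper, I can add an item to my cart."
# Acceptance criterion: adding an item increases the cart total
# by exactly that item's price.

class Cart:
    """Minimal stand-in for the system under test (SUT)."""

    def __init__(self):
        self.items = []

    def add(self, name, price_cents):
        # Prices kept in integer cents to avoid float rounding noise.
        self.items.append((name, price_cents))

    def total(self):
        return sum(price for _, price in self.items)


def test_adding_item_increases_total():
    # Arrange / Act / Assert, mapped one-to-one from the criterion.
    cart = Cart()
    before = cart.total()
    cart.add("widget", 999)
    assert cart.total() == before + 999


test_adding_item_increases_total()
print("acceptance test passed")
```

Because the test is a direct translation of one criterion, it proves that slice of the system works; anything the criterion didn’t predict is exactly what the ET sessions are left to hunt for.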
By doing it this way we can both “Prove that it works” (via Scripted) and “Prove that it’s Broken” (via ET). We can improve our efficiency by focusing both forms of testing on the parts of the SUT that need it (based on Risk/Coverage analysis). We are still getting the majority, if not all, of the metrics we need out of the system in order to make informed decisions about the Quality of the SUT.
So this is where we are at for now. Sorry, it’s been a bit of a ramble. There are still lots of challenges ahead and everything may change tomorrow, but with a common goal in mind we are on the right track. I’ll let you know how we progress 🙂