When doing exploratory testing, it is often hard to decide where to start, because the opportunities abound. To explore effectively, we suggest testers concentrate on three starting points and keep them in mind all the way through.
3 Things to remember:
Don’t follow the documentation.
Exploratory testing can be performed without any specification or scripted test cases. However, when such handy guides are at hand, it is always tempting to fall back on their instructions. This is the mindset you have to avoid. When you check in with the documentation, you tend to adopt its point of view and slip into confirming that everything works as expected. Your role here is the opposite: to find the points where the system fails. It is better to cross-reference against the documentation afterwards. Without leaning on it you think more broadly, and you may even unintentionally discover undocumented implicit requirements, or errors and inconsistencies in the documentation itself.
Question rather than verify.
When we need to test a feature, we are naturally inclined to prove that it works properly. This inclination tends to restrict our interaction with the feature and keep only its positive responses in focus. Our mission should start with breaking that traditional frame of mind: try to contradict the expected rather than verify it. While testing, ask yourself negative questions such as “In which case wouldn’t it work?” or “In which case couldn’t it work?” to challenge the expected order of things and test it properly.
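To make the contrast concrete, here is a minimal sketch of the two mindsets side by side. The `validate_email` function is a hypothetical stand-in for whatever feature you are exploring; the point is the shape of the negative questions, not the validator itself.

```python
# Sketch of "question rather than verify". validate_email is a naive,
# hypothetical feature under test; swap in the real feature.
import re

def validate_email(address: str) -> bool:
    """Naive example validator, used only as a stand-in feature."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address))

# Verification mindset: one positive check, and we stop.
assert validate_email("user@example.com")

# Questioning mindset: "In which case wouldn't it work?"
negative_cases = [
    "",                   # empty input
    "user@@example.com",  # doubled separator
    "user@example",       # missing top-level domain
    "us er@example.com",  # embedded whitespace
    "@example.com",       # missing local part
]
for case in negative_cases:
    assert not validate_email(case), f"unexpectedly accepted: {case!r}"
```

Each rejected case here is one answer to a negative question; against a real feature, any case that is *not* rejected is a lead worth chasing.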
Since the temptation to believe that everything works is strong, we often give up the hunt for bugs after a few tests that find none. To test more thoroughly, here are some additional ideas for digging up more bugs:
- Explore the type, length, format and other properties of complex input data.
- Test the product’s performance on machines with different capacities and in environments with different data volumes.
- See if malicious users could exploit the feature, or parts of it, in unintended ways.
- Test it against different browsers, platforms, databases etc.
- Check whether actions performed by several users at once conflict with each other.
- Test how the feature reacts to missing external components.
- Try out how different users handle the feature and what could be improved for better usability.
- Check whether non-English users understand everything and whether anything needs localization.
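The first few ideas above can be sketched as a small table of exploratory probes. The `save_username` function and its limits are hypothetical stand-ins for the real feature; the probes vary type, length and format, including non-English input.

```python
# Sketch of probing input type, length and format against a feature.
# save_username is a hypothetical stand-in with a deliberate length limit.
def save_username(name):
    if not isinstance(name, str):
        raise TypeError("username must be a string")
    if not 1 <= len(name) <= 32:
        raise ValueError("username length out of range")
    return name.strip()

# Probe beyond the happy path: edge lengths, odd formats, wrong types.
probes = {
    "typical": "alice",
    "empty": "",
    "max_length": "a" * 32,
    "over_length": "a" * 33,
    "unicode": "Ålice☃",       # non-English input (localization idea above)
    "whitespace_only": "   ",  # accepted, then stripped to "" — a finding
    "wrong_type": 12345,
}

for label, value in probes.items():
    try:
        result = save_username(value)
        print(f"{label}: accepted -> {result!r}")
    except (TypeError, ValueError) as exc:
        print(f"{label}: rejected -> {exc}")
```

Note how the `whitespace_only` probe slips past the length check and is saved as an empty name: exactly the kind of inconsistency these probes are meant to surface.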
The ideas for effective exploratory software testing are endless once you adopt these mindsets.