Have you ever considered what questions testers ask themselves?
Well, here are a few of them:
- What is the best test I can use right now?
- What kind of test approach am I going to use?
- Is that a bug?
- Have I finished already?
Yet there are other questions that do not usually come to mind:
- Does this item ever need to be checked?
- Should I be the one to test it?
- Is it a big deal that it doesn't work?
But does everyone ask questions like the three above? Perhaps not, as a consequence of being taught to test everything. Some testers even follow a process that requires every feature to be proven and stamped "tested" by someone from the QA team. Testing becomes a routine procedure, and sometimes you can even hear testers say:
“I am the tester. Therefore, everything must be tested…by me… even if a non-tester has already tested it… even if I already know it will pass… even if a programmer needs to tell me how to test it…I must test it, no exceptions!”
This way of thinking can earn testers a bad reputation. It frames testing as a thoughtless procedure rather than a service that delivers the most important information to someone.
So, is it really necessary to test each and every thing? As James Bach once noted:
“If it exists, I want to test it! (The only exception is if I have something more important to do.)”
Quite often, another test is more important. Yet importance is not always obvious. Let's not try to measure it right now (that is a deeply personal question), but instead revisit the three questions above and look for things that may not be worth your testing time. Here are some examples of what I mean:
- Features invisible to production users. For instance, adding error-logging tables or audit reports to monitor production activity. These items come up often, and they are usually covered by developer user stories. Since they don't directly touch the user-facing product, by their nature they cannot harm users.
- Fixes for critical production problems that can't get any worse. If you have a critical problem that must be fixed right away, hand the detected issue to the programmers for an urgent fix. Don't stand in the way of the fix: ship it to production immediately, and test afterwards if there is still a need to.
- Trivial bug fixes with elaborate test setup. For example, a spelling mistake in a user error message (reported via screenshot) needs to be fixed. The programmer can make the fix with confidence in less than 30 minutes, fast and easy. But is building the setup needed to reproduce that error message really worth your time?
- Simple configuration changes. Imagine your product started choking on exceptionally large production jobs it could not handle. A developer fixed the problem with an obvious configuration change, but the job was far too large to reproduce in the QA environment. So the change was applied to production, and the users themselves (without knowing it) did the testing.
- Issues too technical for a non-programmer to test. Some functional issues demand actions such as setting breakpoints in the code to reproduce a race condition. A tester may not have those skills or the developer's knowledge of the product code. The best way out is to talk through the tests together and let the developer run them.
- Use the available non-tester. If a non-tester on your team offers to help test a certain feature, don't hesitate to take them up on it. Discuss test ideas and ask for whatever test reports you need. If you feel confident about the work they did, don't re-test it.
- No repro steps. From time to time a developer will take a stab at fixing a bug that was reported without reproduction steps anyone can work out. Testers will probably want to regression-test the affected area, but they shouldn't block the attempted fix from shipping just because they can't be sure whether it works.
- Insufficient test data or hardware. Let's face it: most testers don't have as many load-balanced servers as the production environment does. Sometimes a valid test requires production-scale resources that exist only in the production environment. As a result, testers may simply not be able to verify the issue.
Many testers are probably now imagining cases where the items above could cause problems if left unchecked. But remember: these are items that may waste our testing time. First, weigh your priorities carefully, or ask your stakeholders when something is unclear.
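One lightweight way to make those priority trade-offs explicit is to score each candidate by risk versus effort. Here is a minimal sketch of that idea; the 1–5 ratings, the item names, and the `priority` function are all hypothetical illustrations, not part of any real tool or formal method:

```python
# Hypothetical sketch: rank test candidates by risk per unit of effort.
# All ratings (1-5) and item names below are made up for illustration.

def priority(impact: int, likelihood: int, effort: int) -> float:
    """Higher score = more worth testing: risk (impact * likelihood) per unit effort."""
    return (impact * likelihood) / effort

candidates = {
    "payment flow regression": priority(impact=5, likelihood=3, effort=2),
    "error-log table growth":  priority(impact=1, likelihood=2, effort=3),
    "typo fix on error dialog": priority(impact=1, likelihood=1, effort=1),
}

# Spend your time from the top of the list down; items near the bottom
# are the ones you might reasonably choose not to test.
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

The exact formula matters far less than the conversation it forces: if you and your stakeholders can't agree on the impact or likelihood numbers, that disagreement itself tells you the item needs discussion before you decide to skip it.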
However, if you do choose not to test something, be sure about it. If you and your team all agree the item isn't worth testing, fine; but if anyone disagrees, the item should be considered carefully and, if need be, tested. And when you decide not to test something, note explicitly that it was not tested before passing it through to production. Sometimes this saves real time and energy that can be spent elsewhere.
So the next time you catch yourself testing something of lower priority than what you could be spending your time on, consider not testing it. In time, your team will come to respect your judgment, and everyone will benefit from fewer bottlenecks and greater test coverage where it is really needed.