Many testers focus on reaching a high test-coverage number (90% or more). Let’s figure out whether it is really worth spending so much effort on increasing coverage.
The first thing to consider is that test coverage aims to detect untested parts of the codebase. Setting a certain coverage level as a goal makes testers focus on the number itself. The point is that a high coverage number doesn’t mean high-quality testing. Good testing usually yields 80–90% coverage; a number close to 100% may suggest that the tester is chasing a high number rather than high quality.
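To illustrate, here is a minimal Python sketch (the function and test names are hypothetical): the test below executes every line of `apply_discount`, so a line-coverage tool would report 100% for it, yet it asserts nothing and silently misses the bug.

```python
def apply_discount(price, percent):
    """Intended behaviour: reduce `price` by `percent` percent."""
    # Bug: subtracts an absolute amount instead of a percentage.
    return price - percent

def test_discount_runs():
    # Executes every line of apply_discount -> 100% line coverage,
    # but asserts nothing, so the bug goes unnoticed and the test "passes".
    apply_discount(200, 10)

test_discount_runs()
print(apply_discount(200, 10))  # prints 190, although the intended result is 180.0
```

A behaviour-checking assertion such as `assert apply_discount(200, 10) == 180.0` would expose the bug immediately, without changing the coverage number at all.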
Why do testers try to reach a high level of test coverage?
The answer is obvious: people want to know whether they are testing enough. Of course, coverage below 50% is a bad sign that points to trouble. But coverage alone doesn’t determine whether testing is sufficient; that question is more complicated.
How do you know if you are testing enough?
You can say you are testing enough in the following cases:
- Errors rarely escape into production;
- You are rarely afraid to change code because the change might cause production defects.
Is it possible to test too much?
Experienced testers see several signs that it is time to stop adding tests:
- You can remove some tests and still have enough;
- Tests are starting to slow you down;
- There is duplication in your tests: a simple change to the code forces lengthy changes to the tests, and so on.
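One common way to fight that last kind of duplication is a table-driven test. The sketch below (with hypothetical names) keeps all the near-identical checks in one data table, so a change to the code under test means one change to the table rather than edits scattered across many copy-pasted test functions.

```python
def normalize_email(addr):
    """Hypothetical function under test: trim whitespace and lowercase."""
    return addr.strip().lower()

# One table of (input, expected) pairs instead of one test function per case.
CASES = [
    ("  User@Example.com ", "user@example.com"),
    ("ADMIN@HOST.ORG", "admin@host.org"),
    ("plain@ok.net", "plain@ok.net"),
]

def test_normalize_email():
    for raw, expected in CASES:
        assert normalize_email(raw) == expected, (raw, expected)

test_normalize_email()
print("all cases passed")
```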
So what is the value of test coverage? Analyzing your coverage lets you detect which parts of your code haven’t been tested yet. But if your test suite has weaknesses that coverage analysis cannot detect, the suite stays weak even when the coverage number looks good.
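To make concrete what "detecting untested parts" means, here is a minimal sketch of how a line-coverage tool works under the hood, using Python's `sys.settrace` (the helper and the function under test are hypothetical). The test exercises only one branch of `classify`, and the recorded line numbers reveal exactly which line never ran.

```python
import sys

def traced_lines(func, *args):
    """Record which body lines of `func` execute, relative to its def line.

    This is a toy version of what coverage tools do with trace hooks.
    """
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):            # relative line 0
    if n < 0:               # relative line 1
        return "negative"   # relative line 2
    return "non-negative"   # relative line 3

# Calling classify(5) exercises only the non-negative branch, so
# relative line 2 never executes -- the gap a coverage report surfaces.
lines = traced_lines(classify, 5)
print(2 in lines)  # prints False: the "negative" branch was never tested
```

A real tool such as coverage.py does the same bookkeeping across your whole suite and then reports the missed lines per file.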