Software testing is all about finding defects in applications, right? Finding defects is indeed a primary goal of any QA process. However, defects differ from one another: some are clearly more important than others, and it is impossible to locate, let alone fix, all of them.
According to Murphy’s Law, if a line of code can contain a defect, it will; even an app consisting of a single line of code will therefore include at least one issue. And considering that you usually test large, complex solutions, there will be lots of defects. Yet deadlines and budget limitations force testers to classify and prioritize the bugs they locate, so that show-stoppers are dealt with ASAP while minor issues can be left untouched.
Beyond these obvious reasons, there are several others that encourage testers to classify the software bugs they locate. For example, the defect classification process helps with:
- Determining the efficiency and effectiveness of the test process;
- Making bug-tracking efforts more effective;
- Improving development through direct and precise evaluation of potentially harmful defects.
We must clarify what defects are in order to identify their severity. A defect is a piece of functionality that behaves anomalously, making the software act differently from the way it is meant to. Appropriate behavior is predetermined by business and technical requirements, and anything that falls outside them should be classified as a defect. That noted, it should also be mentioned that there is a difference between defects and bugs.
Bug is a term that has matured over the years of software development and denotes something that has a negative impact on the system under test or development. Defects, however, might not have any harmful effects. This means that not every defect can be considered a bug, while every bug is a defect. Since severity has different levels, it is used to evaluate and classify all defects, which include, but are not limited to, bugs.
Speaking of which: defect severity is the precise classification of a particular defect based on its overall impact on the product’s functionality.
How to determine severity?
Every method of categorization requires categories. Levels of defect severity are much the same throughout the industry, though they may differ slightly from company to company. We, for example, use the following classification:
- Show-stoppers. These are critical defects that result in complete failure of the product under test. A show-stopper usually crashes the entire system, one of its sub-systems, or a particular unit; after such a defect occurs, the system ceases to operate.
- High-priority defects. Defects of this level are much the same as the ones described above, with one difference: the system may continue to operate after they occur.
- Medium-priority flaws. These defects do not crash the app, but they cause incorrect behavior, inconsistent results, or poor usability.
- Low-priority defects. These are usually slight cosmetic flaws, typos, etc.
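To make the classification concrete, the four levels above could be modeled as a simple ordered enumeration. This is only an illustrative sketch — the names and numeric values are ours, not part of any standard bug tracker:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Defect severity levels, ordered from least to most severe."""
    LOW = 1           # slight cosmetic flaws, typos
    MEDIUM = 2        # incorrect behavior, inconsistent results, poor usability
    HIGH = 3          # critical failure, but the system keeps operating
    SHOW_STOPPER = 4  # complete failure; the system ceases to operate

# Because IntEnum members compare as integers, a tracker can
# sort a backlog so the most severe defects come first:
backlog = [Severity.MEDIUM, Severity.SHOW_STOPPER, Severity.LOW]
backlog.sort(reverse=True)
print(backlog[0].name)
```

Using an ordered type like this means "deal with show-stoppers first" becomes a one-line sort rather than a manual triage pass.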
Now that this is covered, we may discuss how to classify defects accurately. First of all, a tester needs to measure the impact. Keep in mind that even a slight cosmetic defect may be repeated all over the product, and it will be a big deal to end users because of the constant irritation it causes. In that case a tiny typo can no longer be considered a low-priority issue and must be categorized differently due to the impact it causes.
The depth of impact can be determined by isolating the defect. Isolation lets you establish both the frequency of the defect and the sequence of events (operations) that leads to it, as well as the classes of input that trigger the isolated defect.
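The "a widespread typo is no longer low-priority" rule can be sketched as a tiny escalation function. The thresholds below are purely illustrative assumptions, not an industry standard:

```python
def effective_severity(base: str, occurrences: int, users_affected_pct: float) -> str:
    """Bump a defect's nominal severity one level when it is widespread.

    base               -- the severity assigned on first sight
    occurrences        -- how many places in the product the defect appears
    users_affected_pct -- fraction of users who hit it (0.0 - 1.0)
    Thresholds are illustrative only.
    """
    levels = ["low", "medium", "high", "show-stopper"]
    idx = levels.index(base)
    if occurrences > 50 or users_affected_pct > 0.5:
        idx = min(idx + 1, len(levels) - 1)  # escalate, but cap at the top
    return levels[idx]

# A typo repeated across hundreds of screens stops being "low":
print(effective_severity("low", occurrences=120, users_affected_pct=0.8))  # medium
```

Real trackers encode rules like this in triage policy rather than code, but the idea is the same: severity is a function of impact, not just of the defect's surface appearance.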
And to make things even easier, we have prepared a small checklist. Answer all of the questions listed below and you will have no trouble determining a defect’s actual severity.
- Does the defect you have encountered cause the app under test to crash?
- Can the system under test recover from the crash?
- Can the system recover by itself, without third-party interactions or additional external effort?
- Is the defect present, or reflected, in other sections of the system?
- If yes: are those sections related?
- Can you reproduce the defect with the same configuration on a different system?
- Is the defect repeatable despite changes in configurations?
- Are all users affected, or only certain categories of users?
- How frequently does the defect reappear?
- What has caused it? Which inputs lead to the defect taking place?
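The checklist above maps naturally onto a small decision function. This is a hypothetical sketch of one possible mapping — the rules are our assumption, and a real team would tune them to its own process:

```python
def classify(crashes: bool, self_recovers: bool,
             widespread: bool, frequent: bool) -> str:
    """Map checklist answers to a severity level (illustrative rules only)."""
    if crashes and not self_recovers:
        # Complete failure with no recovery: stop the presses.
        return "show-stopper"
    if crashes:
        # The system crashed but came back on its own.
        return "high"
    if widespread or frequent:
        # No crash, but the defect affects many users or recurs often.
        return "medium"
    return "low"

print(classify(crashes=True, self_recovers=False, widespread=False, frequent=False))  # show-stopper
print(classify(crashes=False, self_recovers=True, widespread=True, frequent=False))   # medium
```

Even if the exact rules differ from team to team, writing them down like this forces the triage criteria to be explicit and consistently applied.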