Does anyone have a good source that classifies typical bugs found, hopefully with examples? For instance, one could classify bugs as:
- User interface errors
- Error handling bugs
- Boundary-related errors
- Calculation errors
- Control flow errors
- Errors in handling or interpreting data
and so on.
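To make a few of those classes concrete, here is a hedged sketch of tiny, deliberately buggy functions (the function names and scenarios are hypothetical, invented purely for illustration):

```python
def last_page(item_count, page_size):
    # Boundary-related error: integer division drops the final partial page.
    # last_page(10, 3) returns 3, but 10 items need 4 pages of size 3.
    return item_count // page_size

def average(values):
    # Calculation error: no guard for empty input, so average([]) raises
    # ZeroDivisionError, which callers may never expect.
    return sum(values) / len(values)

def parse_port(text):
    # Error in handling or interpreting data: silently coercing bad input
    # hides the failure instead of reporting it.
    try:
        return int(text)
    except ValueError:
        return 0  # swallows the error; 0 looks like a valid port value
```

Each of these would land in a different branch of a taxonomy, which is exactly why such classifications are useful for brainstorming tests: "what's the boundary here?", "what happens on empty input?", "what if the data is garbage?".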
You may find it useful to search for "bug taxonomy" or "failure mode catalog".
The paper "Bug taxonomies: Use them to generate better tests" provides a great overview of taxonomies, discusses how you can use them to brainstorm better test ideas, and offers practical tips on using existing bug taxonomies or creating your own. (The worked example is the development of an e-commerce bug taxonomy.) There are other papers available from the Centre for Software Testing Education and Research; I don't see anything particularly recent on that page, but it is a great source of useful papers (and references to other literature!) on software testing.
I've found it useful to consider the types of bug that are most common in the different projects and groups I've worked in; this helps me target early tests at areas where I expect to find more issues. I've never had time to compile a serious bug taxonomy, though. If that sounds interesting, you might find this blog post by Adam Knight on compiling context-specific heuristic cheat sheets useful.
Classifications will never be finite, and they will be specific to what you're testing and how. Think of the lists of 'tags' on the various SO sites.
If you're trying to deal with case/issue/bug management, the best classification is the priority of the issues. Severity is also interesting, but it can confuse a developer about what they need to do next.
10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.
- Joel Spolsky, "Painless Bug Tracking"
In my opinion, there is no "best" classification. I'd follow testerab's links and do some more research - the context in which you're working will give you a good idea of what will work best for you and your situation.
For instance, where I work, bugs are classified by a combination of the following: which module of the system under test they occur in; whether they were found by a tester or by a customer; and how severe they are. High-severity bugs found by customers in any of the financial modules (sales journalization, tax calculations, etc.) or regular operation modules (making sales, taking orders, and so forth) are corrected first.
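That ordering rule can be sketched as a sort key. This is only a toy illustration of the idea; the module names, field names, and values below are hypothetical, not from any real tracker:

```python
# Modules whose high-severity, customer-found bugs jump the queue
# (hypothetical names, for illustration only).
CORE_MODULES = {"sales_journalization", "tax_calculation",  # financial
                "sales", "orders"}                          # regular operation

def triage_key(bug):
    # Sort key: False sorts before True, so high-severity, customer-found
    # bugs in core modules come first, then each criterion relaxes in turn.
    return (bug["severity"] != "high",
            bug["found_by"] != "customer",
            bug["module"] not in CORE_MODULES)

bugs = [
    {"id": 1, "module": "reports", "found_by": "tester",   "severity": "low"},
    {"id": 2, "module": "orders",  "found_by": "customer", "severity": "high"},
    {"id": 3, "module": "sales",   "found_by": "tester",   "severity": "high"},
]
print([b["id"] for b in sorted(bugs, key=triage_key)])  # [2, 3, 1]
```

The point is less the code than the shape: the "classification" here exists only to feed a fix-first ordering, which is the triage drift described below.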
I don't think there's a way around classification schemes turning into triage schemes - my experience is that this seems to be intrinsic to software testing.