I measure test suites against six main criteria. The criteria are fairly hard and fast, and each has key indicators by which to measure it. There is also an AND relationship between them: if you can tick five of the six, the remaining one should still be addressed.
- Trust. Tests pass when the component is ok.
- Comprehensiveness. The majority of the ways the component is used are covered by the tests.
- Correct level of abstraction. Tests should be written to a stable, well-defined interface. Unit tests facilitate refactoring.
- Language. Tests should match the language of the problem.
- Reliability. Tests fail only when the code is not ok.
- Independent. Tests should be independent of other tests, methods, and classes, in a pragmatic way; i.e. each test should only use methods that are "well used" in the public domain. This does not include data-driven approaches.
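As a sketch of what these criteria look like in practice, here is a hypothetical pytest-style example (the `ShoppingCart` class and its methods are invented for illustration): the test names use the language of the problem, and the assertions go only through the component's public interface.

```python
# Hypothetical domain class, defined inline so the example is self-contained.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


# The tests speak the language of the problem and use only the public
# interface, so internal changes to ShoppingCart don't disturb them.
def test_empty_cart_has_zero_total():
    assert ShoppingCart().total() == 0


def test_total_is_the_sum_of_item_prices():
    cart = ShoppingCart()
    cart.add_item("book", 10)
    cart.add_item("pen", 2)
    assert cart.total() == 12
```

Note that nothing in the tests mentions the internal `_items` list; they read like statements about shopping carts, not about the code.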
What defines a useful test suite is:
- A developer can be pretty sure, once the unit test suite passes, that no other functional issues will be found. We are happy to release the product after the automated suite passes.
- The majority of problems are found at the unit-test level. Our Fault Slip-Through Analysis of our bugs indicates that most bugs are found at the right level of test.
- The unit tests are a vital tool to help refactoring. I can do multiple run-tests, refactor, run-tests cycles without making a change to the tests. The unit tests are written towards the "thing" wrapped in an interface, not just any method or any class.
- They reflect the language of the problem definition, using the terminology the customer used. Ideally, customers should be able to understand the tests.
- When a test case fails, it points to an actual problem in the component.
- Test cases shouldn't change when we change or extend the system; that is why I can trust them. Test cases are the guarantee that what worked yesterday still works today. If our tests reference common methods and utility classes that change as the system grows, then we cannot depend on our tests. In other words: if I change my test code, who will test my tests?
The "smells" that indicate a unit test suite is useless:
- Developers don't trust the unit tests to verify the component. This means, more or less, that a developer isn't confident enough to release the component on the strength of the unit tests alone; we require a manual test before we are confident to release the product.
- The majority of problems are found in later stages of testing. Our Fault Slip-Through Analysis shows large numbers of bugs appearing in later test phases that could have been found in earlier phases.
- The unit tests are written at too low a level and now hinder refactoring. I change the internals of a component and several unit tests no longer compile, never mind that they don't run. Every method of every class has at least one unit test associated with it. Worse still, methods that should be private are made public to enable testing!
- They reflect the terminology of the code: we see the language of the solution in the tests. For example, factories or other design patterns start appearing in the tests.
- Test cases regularly fail at random times during various runs. Failures are "false" because they were caused by some environmental or platform problem. For example a database service we needed wasn't started or the disk was full.
- All my tests depend on a test utility method I wrote a good while back and this utility method needs to be regularly changed when we add new features. Most times I add new tests, I have to change the utility method, causing a subtle change in all my tests.
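The last smell can be sketched in code (all names are invented). A shared builder that every test calls becomes a hidden dependency of the whole suite; stating the data inline in each test removes it:

```python
# Smell: every test routes through this utility, so changing it for a
# new feature silently changes what all existing tests exercise.
def make_order(customer="alice", items=None, discount=0):
    items = items or [("book", 10)]
    return {"customer": customer, "items": items, "discount": discount}


def test_order_total_with_shared_builder():
    order = make_order()  # this test's meaning depends on distant defaults
    total = sum(price for _, price in order["items"]) - order["discount"]
    assert total == 10


# Better: the test states its own data, so no change made elsewhere in
# the suite can subtly alter what it verifies.
def test_order_total_with_inline_data():
    order = {"customer": "alice", "items": [("book", 10)], "discount": 0}
    total = sum(price for _, price in order["items"]) - order["discount"]
    assert total == 10
```

When the next feature forces a new field into `make_order`, the first test's behaviour shifts without its text changing; the second test stays exactly what it says it is.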
Updated 26th May, 2016
Updated 3rd June, 2016
Updated 23rd August, 2016
Updated 4th September, 2017
Updated 26th October, 2017