Wednesday, 22 January 2014
A short discussion on code coverage
What is code coverage?
Code coverage is a measurement of the lines of code executed during a test. Usually this is aggregated over a number of tests to give a suite-wide coverage metric. It is a measurement that is easy to obtain automatically, as there are plenty of tools out there to measure it for you, and it is usually expressed as a percentage of the lines of source code executed. It can be used as an indication of the quality of a test suite; however, like all statistics, it needs to be interpreted in the correct context.
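As a rough illustration of what a coverage tool does under the hood, the sketch below uses Python's sys.settrace hook to record which lines of a hypothetical triage function run during a call. It is a minimal sketch, not a real tool (coverage.py does this far more completely), and the function names are invented for the example:

```python
import sys

def triage(score):
    """Hypothetical function under test."""
    if score >= 80:
        return "pass"
    return "fail"

def lines_executed(func, *args):
    """Record which source lines of `func` run during one call."""
    code = func.__code__
    executed = set()

    def tracer(frame, event, arg):
        # 'line' events fire each time execution reaches a new source line;
        # we only record lines belonging to the function under test.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# A passing input exercises only the "pass" branch; adding a failing
# input covers the remaining return line as well.
pass_only = lines_executed(triage, 90)
both = pass_only | lines_executed(triage, 50)
print(sorted(pass_only), sorted(both))  # the second set is strictly larger
```

The "percentage" a coverage tool reports is just the size of that executed set divided by the number of executable lines in the source.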
What does it indicate?
A low level of code coverage indicates that our automated test harness does not do a very good job of verifying the component's source code. We cannot be confident that the changes we made in this release will not affect some of our users out there in the world.
Hmmm so a higher number is better?
Yes. Higher numbers can mean we have a very good test suite, but they do not guarantee it. When code coverage is used as a targeted Key Performance Indicator (KPI) for developers writing tests, a small subset of the required tests can give an artificially high coverage metric while leaving little or no confidence in our test framework. It is amazing how many times managers put a coverage KPI on developers; the usual result is an artificially high coverage figure, typically 95+%.
So we should have 100% then?
Absolutely. Why not? If the code is well structured AND you have thought of all the tests required, it should be pretty near 100%. You will always have some code, like logging, tracing and certain "aspect" or cross-cutting work, that you may not test because it is not business logic and does not affect business logic. If the code is legacy code, the cost of introducing a test framework to reach high levels of code coverage may not be worth the effort. You should instead aim to write the most common/important test cases and use code coverage to help identify future refactorings or code that can be removed.
100% code coverage means 100% confidence?
Absolutely not. Not even close to 100% confidence. 100% code coverage does not mean we have written 100% of the tests that ensure our code is working. Only writing 100% of the tests required for a component gives us confidence when future changes are made.
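A small hypothetical example of the gap: the test below executes 100% of the function's lines and passes, yet the implementation is wrong, because the one test case that would have failed was never written:

```python
# Hypothetical: double() was meant to return x * 2, but was mistyped.
def double(x):
    return x * x  # BUG: should be x * 2

# This test drives line coverage to 100% and passes, because 2 happens
# to be an input where x * x and x * 2 agree.
def test_double():
    assert double(2) == 4

test_double()  # passes; a coverage tool would report 100%

# The missing test was the one that mattered: double(3) returns 9, not 6.
```

Coverage only measures which lines ran; it says nothing about whether the assertions you made about them were the right ones, or were made at all.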
So I need 100% of tests?
Exactly. That's all that matters, and that's what we should really be striving to write, not hitting a coverage target of 80%, 90% and so on.
Can I automatically measure % tests I have written?
No... not without a clever human involved. This is where a good automated tester earns their bread, and they are worth their weight in gold. Literally.