Code Coverage, also known as Test Coverage, is a metric that measures the percentage of source code executed by tests.
The formula looks like this:

Code Coverage (%) = (lines of code executed by tests / total lines of code) × 100
The higher the percentage, the better. Ideally, every team would have 100% code coverage. It does sound impossible when you think about it, but Uncle Bob breaks it down quite simply.
I will get back to this at the end of the post. For now, let's see what can go wrong with Code Coverage.
Beware the trap
As a developer, it makes sense to keep code coverage as high as possible, but only with honest results; this is key. Code coverage alone does not indicate quality at all. The percentage you get when you measure coverage is based purely on executed code. You could achieve 100% coverage even if your tests have no assertions.
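To illustrate the point above, here is a minimal sketch in Python (the function and test names are hypothetical). Running the first test under a coverage tool such as coverage.py would report 100% line coverage for `discount`, even though nothing is verified:

```python
def discount(price, percentage):
    """Apply a percentage discount to a price."""
    return price - (price * percentage / 100)

def test_discount():
    # Executes every line of discount(), so coverage reports 100%,
    # but there is no assertion: this test passes even if the math is wrong.
    discount(100, 10)

def test_discount_meaningful():
    # A meaningful test actually checks the behavior.
    assert discount(100, 10) == 90
```

Both tests produce the same coverage number; only the second one would ever catch a bug.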
I hope it is just me, but I’ve been noticing a significant increase of interest in code coverage from management. When management sets the coverage goal for a team of developers, the developers are essentially trapped. It’s almost like being back in the old days of rushing and pressure: what if the end of the iteration is approaching and you haven’t achieved the coverage goal yet? How will your tests look then?
Overall, I believe they treat coverage as a synonym for success. If my assumption is correct, I like how Jim Rohn defines the path to success:
“Success is something you attract, not something you pursue.”
Therefore, for coverage to have any meaning, it has to be measured against good, meaningful tests; blindly pursuing it will only slow the team down. I plan to explain in more depth what makes a good, meaningful test in another post.
You might be wondering whether coverage through poorly written tests is better than no coverage at all. To answer that, it’s important to keep in mind that it costs time and money to write tests. If they’re not meaningful, that time is wasted. This is, to me, one of the biggest problems with automated test adoption. People usually believe tests are a good thing to do, but unless they know what makes a good test, they’ll just slow themselves down.
Coverage should be kept within the development team so that they can track which parts of the code are untested. This is especially true for legacy systems. However, if they do TDD, they’ll hardly care about coverage, as it comes as a bonus from following the discipline. This is the attract part of Jim’s quote.
A friend recently pointed out to me that there is indeed a way to verify, to some extent, whether your coverage is flaky. For that, check out Mutation Testing.
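The idea behind mutation testing can be sketched in a few lines. Real tools (mutmut for Python, PIT for Java, among others) automate this by generating mutants for you; the functions below are hypothetical and only illustrate the concept:

```python
# Mutation testing makes small changes to the code (mutants) and re-runs
# the tests. A good test "kills" the mutant by failing against it; a test
# that still passes never actually checked that behavior.

def add(a, b):
    return a + b          # original code

def add_mutant(a, b):
    return a - b          # mutant: '+' flipped to '-'

def weak_test(fn):
    fn(2, 3)              # executes the code but asserts nothing
    return True           # "passes" either way, so the mutant survives

def strong_test(fn):
    return fn(2, 3) == 5  # fails for the mutant, so the mutant is killed
```

A surviving mutant is a strong hint that the coverage on that line is hollow: the line runs during the tests, but no test depends on what it does.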