Developing software ain’t easy.
How do you know how you are doing?
You could start collecting metrics about your code. These can give you some indication of how maintainable and reliable it is.
The metric that comes to mind for most people is code coverage. Some say it must be near 100%, others say 80% is a good number. In the end it can't tell you whether you are doing well or poorly. The only thing you can read from it is that a low number indicates a potential problem.
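To see why coverage alone can mislead, here is a small sketch (the method is hypothetical, not from any real project): two tests execute every line, yet say nothing about the edge cases that were never asserted on.

```java
// Illustrative sketch: line coverage only tells you what ran, not what was checked.
public class CoverageDemo {
    static int safeDivide(int a, int b) {
        if (b == 0) {
            return 0; // treat division by zero as zero
        }
        return a / b;
    }

    public static void main(String[] args) {
        // These two calls already execute every line (100% line coverage)...
        System.out.println(safeDivide(10, 2)); // prints 5
        System.out.println(safeDivide(10, 0)); // prints 0
        // ...yet never probe the overflow edge case: in Java,
        // Integer.MIN_VALUE / -1 silently overflows back to Integer.MIN_VALUE.
        System.out.println(safeDivide(Integer.MIN_VALUE, -1) == Integer.MIN_VALUE); // prints true
    }
}
```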
Duplication is another one to look out for. A high amount of duplicated code raises the risk that a bug was fixed in one place but lives on in others. If you are doing code generation (e.g. from WSDLs) you might have a lot of duplicated code reported to you. A closer inspection is needed to make sure the duplication is in code you have written and not auto-generated.
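A tiny, made-up example of why this matters: the same loop pasted in two places, with a null-handling fix applied to only one copy.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: duplicated logic means a fix in one copy can miss the other.
public class DuplicationDemo {
    // Copy 1: fixed to skip null entries.
    static int sumOrders(List<Integer> orders) {
        int total = 0;
        for (Integer o : orders) {
            if (o != null) total += o; // the bug was fixed here...
        }
        return total;
    }

    // Copy 2: the same loop, pasted elsewhere; the null fix never arrived.
    static int sumInvoices(List<Integer> invoices) {
        int total = 0;
        for (Integer i : invoices) {
            total += i; // ...but lives on here: throws NullPointerException on a null entry
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(1, null, 2);
        System.out.println(sumOrders(data)); // prints 3
        // sumInvoices(data) would throw a NullPointerException.
    }
}
```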
Test success is not a metric you should worry too much about. It just needs to be 100% at every check-in to a shared repository. No discussion.
Cyclomatic complexity lets you measure the number of linearly independent paths through your code. A high number means more complex code. A complex code base is hard to maintain and takes a lot of work to adapt to new requirements.
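A common rule of thumb for counting it is one plus the number of decision points: every `if`, loop, or `case` adds a path that tests need to exercise. The method below is a hypothetical example with complexity 4.

```java
// Hypothetical example: cyclomatic complexity = decision points + 1.
public class ComplexityDemo {
    // Three decision points (one loop, two ifs) -> complexity 4.
    static int classify(int[] values) {
        int positives = 0;
        for (int v : values) {            // +1
            if (v > 0) {                  // +1
                positives++;
            }
        }
        if (positives == values.length) { // +1
            return 1; // all positive
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(classify(new int[]{1, 2, 3})); // prints 1
        System.out.println(classify(new int[]{1, -2}));   // prints 0
    }
}
```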
Dependencies between packages are another important thing to check (see Uncle Bob's paper on the Acyclic Dependencies Principle). Basically it says that packages should not have cyclic dependencies between them, since cycles have an impact on releasing and testing (and therefore coding). A change in one package could trigger a whole-system build to make sure everything still works together.
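Detecting such cycles boils down to a depth-first search over the package dependency graph. A minimal sketch, with made-up package names:

```java
import java.util.*;

// Minimal sketch: detect a cycle in a package dependency graph via DFS.
public class CycleCheck {
    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> visiting = new HashSet<>(); // packages on the current DFS path
        Set<String> done = new HashSet<>();     // packages fully explored
        for (String pkg : deps.keySet()) {
            if (dfs(pkg, deps, visiting, done)) return true;
        }
        return false;
    }

    static boolean dfs(String pkg, Map<String, List<String>> deps,
                       Set<String> visiting, Set<String> done) {
        if (done.contains(pkg)) return false;
        if (!visiting.add(pkg)) return true; // already on the current path -> cycle
        for (String dep : deps.getOrDefault(pkg, List.of())) {
            if (dfs(dep, deps, visiting, done)) return true;
        }
        visiting.remove(pkg);
        done.add(pkg);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "app", List.of("service"),
            "service", List.of("model"),
            "model", List.of("service")); // cycle: service <-> model
        System.out.println(hasCycle(deps)); // prints true
    }
}
```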
Tools like FindBugs let you define rules your code should comply with. This is great because the rules are tailored to your code and company, and you control and fine-tune the measures. But it requires some time to define these rules. They need to be refined over time, and you have to understand them to make use of them. Of course you can leave some of them out to reduce complexity. Do not underestimate the effort needed to implement them.
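As a concrete illustration, one classic rule flags comparing Strings with `==` instead of `equals()` (FindBugs reports this as ES_COMPARING_STRINGS_WITH_EQ). The snippet below is a made-up example of the pattern such a rule catches:

```java
// Illustrative snippet: the kind of bug a static-analysis rule catches.
public class RuleDemo {
    static boolean sameName(String a, String b) {
        // BAD (flagged by the rule): a == b compares references, not contents.
        // GOOD: equals() compares contents.
        return a.equals(b);
    }

    public static void main(String[] args) {
        String a = new String("ci");
        String b = new String("ci");
        System.out.println(a == b);         // prints false: different objects
        System.out.println(sameName(a, b)); // prints true: same contents
    }
}
```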
This rounds up the metrics you can get from your code base.
But wait, there is more.
A valuable metric is the mean time between reporting a feature or bug and resolving it. This gives your team (and management, of course) a rough estimate of how fast you can react to market changes.
To estimate the mean time between failures you can measure the time between two reported bugs. The higher the number the better. Currently reported bugs per lines of code can also be measured. These metrics will give you some confidence in your code base.
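A back-of-the-envelope way to compute this from bug-report dates (the dates below are made up) is total elapsed time divided by the number of intervals between reports:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

// Minimal sketch: mean days between failures from sorted bug-report dates.
public class Mtbf {
    static double meanDaysBetween(List<LocalDate> bugDates) {
        // Total span from first to last report, divided by the number of gaps.
        long totalDays = ChronoUnit.DAYS.between(bugDates.get(0),
                                                 bugDates.get(bugDates.size() - 1));
        return (double) totalDays / (bugDates.size() - 1);
    }

    public static void main(String[] args) {
        List<LocalDate> bugs = List.of(
            LocalDate.of(2023, 1, 1),
            LocalDate.of(2023, 1, 11),
            LocalDate.of(2023, 1, 31));
        System.out.println(meanDaysBetween(bugs)); // prints 15.0
    }
}
```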
At the end of the day the ultimate measure is the money on the company’s bank account. If this is not working out you are in trouble.
How do you get all of these?
There are some static code analysis tools to get you started. I can only speak to Java-related tools, but there are others as well.
To get more information about your bugs, query your bug tracking system, such as Jira.
There are many more numbers out there, but I think these are the ones you should always keep an eye on.
Look out for:
- Code Coverage
- Cyclomatic complexity
- Cyclic dependencies
- Test success
- Time to market
- Mean time between failure
What metrics do you look out for, and why? Let me know in the comments.