Over the years, countless attempts have been made to define and measure software productivity, and any such attempt means that output must first be defined. The “big three” measures that invariably come up are:
- Lines of code
- Function points
- Defect counts
While striving to measure productivity is certainly admirable, there are real problems with using any of the above in isolation. Even worse, if just one of these “productivity” metrics ends up as the measure of a programmer’s job performance, you have started down a very slippery slope: you have reduced the judgment of complex work to a single element. The result is that you will get what you ask for, but it won’t be what you wanted!
Let’s examine using lines of code as a metric. This sounds like a great measure – after all, a developer needs to write code to tell the computer to do something, right? Well, if lines of code are what you base productivity on, you’ll get lines of code, in spades.
Here’s a quick example that jumps to mind from my own experience. In updating some date calculation routines, I came across a previously-written, 100-line routine that needed to be modified. Yes, the code worked, but it needed to be extended.
One thing struck me almost immediately when I started looking at this routine: whoever developed it did not make use of the date routines available within the language we were using. The result was a routine much longer and more complex than it needed to be. By leveraging the language’s built-in date features, I was able to shrink it to fewer than 20 lines of code.
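The original routine isn’t shown here, and the source never names the language involved, but a minimal Python sketch can illustrate the kind of shrinkage I mean: a hand-rolled day-counting function that reimplements leap years and month lengths, next to the one-liner the language’s built-in date support makes possible. The function names are hypothetical, purely for illustration.

```python
from datetime import date


def days_between_manual(y1, m1, d1, y2, m2, d2):
    """Hand-rolled day difference: the style of the 100-line routine,
    manually tracking leap years and month lengths."""

    def is_leap(y):
        return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

    def days_in_month(y, m):
        lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        return 29 if m == 2 and is_leap(y) else lengths[m - 1]

    def day_number(y, m, d):
        # Days elapsed since a fixed epoch (year 1).
        total = d
        for year in range(1, y):
            total += 366 if is_leap(year) else 365
        for month in range(1, m):
            total += days_in_month(y, month)
        return total

    return day_number(y2, m2, d2) - day_number(y1, m1, d1)


def days_between(start: date, end: date) -> int:
    """Same result using the language's date support: one line of logic."""
    return (end - start).days


# Both agree, but only one is worth maintaining.
assert days_between_manual(2020, 1, 1, 2020, 3, 1) == 60
assert days_between(date(2020, 1, 1), date(2020, 3, 1)) == 60
```

Both functions produce identical results; the difference is that the first one carries its own calendar logic (and its own potential bugs), while the second delegates that to the standard library.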
Was the programmer who wrote an unnecessarily long routine truly “productive”? Maybe this individual didn’t know about these features, but should have taken the time to learn them; the result would have been more maintainable code, even if writing the routine took a little longer. If you count lines of code without putting in conditions that make this type of coding unacceptable – conditions that introduce more overhead and organizational gymnastics – this is what you will get.
And what about my lines of code count in this scenario? Shouldn’t I receive some type of credit for shrinking lines of code, since reducing the complexity made this routine more understandable and maintainable?
Let’s assume that I received credit for any new code I wrote, including full credit for the lines that resulted from refactoring someone else’s code. Did that mean the other individual was rated as productive as well, even though he/she wrote something that was begging to be reworked later? (In this case, much later; the code in question was a few years old by the time I encountered it.)
The lines of code metric is the simplest to explain (so I won’t bother with the others), but it surfaces the one key tenet:
Using a single metric does not measure true productivity.
And if you insist on using a simple metric to gauge complex work, you will need to add rules around what is and is not acceptable, to avoid incenting individuals to “game” you – working toward an end result that ultimately does you no favors. And that adds overhead to your organization without adding any real benefit.
Don't get me wrong: metrics can be good. But if you manage complex work, you need to examine a series of metrics to gauge how well you are performing.