Essential Metrics

August 8, 2009

What are the most important metrics for monitoring software development? What is required to accurately gauge the progress and readiness of the software being developed? And how can these metrics be implemented with as little administrative overhead as possible, so that project teams can focus their time on delivering working software?

Essential metrics deal with maximizing the value proposition of the software along with understanding how well key software development activities are being performed.

Metric #1: Determine the ROI and the break-even point.

A software project is all about defining and delivering something new, requiring an investment of time and effort from the people who design, build, and test the actual software. There should be a return on this investment – either revenue generated from sales or savings created as a result of using the software. Understanding the point where the investment is no longer worth the effort is critical for all concerned. Why start or continue a project that will cost more than it will return?
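
To make this concrete, here’s a rough sketch of a break-even calculation. The figures and the simple linear model are purely illustrative – real projects have messier cost and return curves:

    def break_even_month(upfront_cost, monthly_run_cost, monthly_return):
        """First month where cumulative return covers cumulative cost,
        or None if the project never breaks even within the horizon."""
        cumulative_cost = upfront_cost
        cumulative_return = 0.0
        for month in range(1, 121):  # 10-year horizon
            cumulative_cost += monthly_run_cost
            cumulative_return += monthly_return
            if cumulative_return >= cumulative_cost:
                return month
        return None

    # Hypothetical figures: $250k to build, $5k/month to run, $30k/month saved.
    print(break_even_month(250_000, 5_000, 30_000))  # -> 10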

Metric #2: Prioritized features.

What are the key features that should be delivered with the project? Those deemed to provide the greatest business value should be prioritized over other features, making this metric closely related to Metric #1.

I’ve seen software projects get into trouble because “everything is a priority.” Translation: A boatload of features are being targeted, without due consideration for what is truly valuable from a business standpoint. Everything changes over time, and what you think is important today might not be all that important 9-12 months down the road, particularly in today’s uncertain business climate.

I’m also a believer that software can be delivered incrementally, where a release containing higher-priority items is provided to the business early. This allows the benefits of the software to be realized as quickly as possible. Users may also change their minds and decide that the lower-priority features aren’t really necessary – saving time and resources that can be applied to other projects.
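
One simple way to make prioritization explicit – a sketch with invented figures, not a formal method – is to rank candidate features by estimated business value per unit of effort:

    # Rank features by value delivered per day of effort; the numbers
    # are made up for illustration.
    features = [
        {"name": "Invoice export",  "value": 80, "effort_days": 10},
        {"name": "Custom themes",   "value": 15, "effort_days": 12},
        {"name": "Bulk data entry", "value": 60, "effort_days": 5},
    ]

    for f in sorted(features, key=lambda f: f["value"] / f["effort_days"],
                    reverse=True):
        print(f"{f['name']}: {f['value'] / f['effort_days']:.1f} value/day")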

I am in full agreement with the agile community on the next metric:

Metric #3: Working software is the primary measure of progress.

Why use anything else to gauge your progress? And there is no overhead involved! Prioritize the features that provide the highest value, and work towards delivering those as soon as possible. If you can’t see it working, don’t believe it.

Metric #4: Tracking the amount of time wasted due to re-work.

Efficiency and effectiveness – how quickly quality, working software is delivered – are a function of the people involved: their skills, knowledge, experience, personality mix, and interactions with other team members. Wasting time means losing momentum, losing money, and sacrificing forward progress. Software activities – Definition, Design, Development, and Verification – are components of the DNA of Software Development model.

This doesn’t mean a perfect world, though: most software projects will experience some level of refinement as they progress. For example, the Definition activity is about understanding what the software is expected to do – the requirements – and it provides input to the rest of the software development process. During Design, some refinement of those requirements is likely, as the user interface and structure of the system are defined.

Preventing wasted time and keeping costs down means that these refinement loops should be as short and tight as possible. Once you start reaching back across multiple activities in the process, you have problems!

Wasted time can occur at any point in a project, and the intent is not to point blame in any direction, but rather to keep the project on track, along with informing and educating everyone involved on the need to exercise due diligence in every aspect of software development. If wasted time starts creeping into a project (like re-work because requirements were not understood), an intervention can be staged to help prevent wasting any more time.
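
A lightweight way to track this – sketched below with hypothetical phases and entries – is to log each piece of re-work with where the problem was introduced and where it was caught, flagging the long “reach-back” loops:

    PHASES = ["Definition", "Design", "Development", "Verification"]

    # Each entry: where the defect was introduced, where it was caught,
    # and the hours of re-work it cost. Entries are illustrative.
    rework_log = [
        {"introduced": "Definition",  "caught": "Design",       "hours": 4},
        {"introduced": "Definition",  "caught": "Verification", "hours": 40},
        {"introduced": "Development", "caught": "Verification", "hours": 6},
    ]

    for entry in rework_log:
        distance = (PHASES.index(entry["caught"])
                    - PHASES.index(entry["introduced"]))
        flag = "  <-- long reach-back" if distance > 1 else ""
        print(f"{entry['hours']:3d}h  {entry['introduced']} -> "
              f"{entry['caught']}{flag}")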

A grading exercise can help teams – particularly less-experienced teams – understand what is important and keep the essential hand-offs between activities in check. Key questions that teams can use to evaluate where they are during each activity are covered below; they represent the remaining metrics that I consider essential.

Does the Project Charter (or a Vision and Scope document) clearly define the goals and objectives of the project?

Are the business requirements clearly understood by all project team members?

If the team can’t answer these questions, then there is ambiguity present that will cause delays and problems – like costly re-work later on. Because there are people involved, there will likely be differences in how much detail is required for complete understanding; what works for one team may not work for another.

There is a point where requirements are distilled into product features, and this is not something that should be short-changed! As I noted in my post Six Keys to Successful and Productive Software Delivery, Karl Wiegers and Steve McConnell, two industry gurus, have pointed out the following about requirements, defects, and wasted time:
  • Requirements defects in software projects account for approximately 50% of product defects.

  • The percentage of re-work on software projects due to requirements defects is greater than 50%.

An extreme example that I’ve seen in years past is when work had to be thrown out during the Verification (testing) phase because the Quality Assurance group had a different interpretation of the requirements than the Development organization, and it was determined that Quality Assurance was correct.

Before actual coding begins, there should be some type of design work performed. If a feature requires a user interface, has this been mocked-up in some way to validate that the design will work for the users?

My personal preference here is to use “paper prototypes” – screens drawn on paper. These are the quickest, lowest-cost prototypes available and can start out life as white-board drawings. Ideally, user interface designers and user experience (interaction) designers are available to create a truly fantastic user experience.

Other key questions related to design:

Are there system and component diagrams that designate the layers, roles, and responsibilities of the various software systems and components?

Are there well-defined interfaces between components?

If Web Service interfaces are to be used, are these well-defined, business-level interfaces?

Has a design review been conducted with peers to ensure that the specific design will satisfy the business need?

Is the use of design patterns, or the leveraging of existing routines, planned?

Good design and a review of that design are all about preventing problems from getting into the code in the first place. All too often, design is overlooked due to aggressive project deadlines; other times, what should be exploratory work to inform the design ends up being the design and the development. Skipping or short-changing these steps will invariably come back to bite you!

My key questions around estimating are as follows:

Is the person doing the work providing input to the estimate? There is a variance in programmer productivity, and any estimate is highly dependent upon who will be doing the work. One programmer may have greater familiarity in one area than another – contributing to how quickly a given task can be completed. The person doing the work should have the final say, but should consider any guidance provided as a part of team input. This leads me to my next question.

Did the team estimate the tasks? When it comes to producing reliable estimates, I’ve found that nothing beats a team effort. The general approach is that the team meets and reviews the requirements one by one, listing the development tasks required to meet each requirement. Everyone gives their estimate, followed by a discussion if someone is low or high – so that the team can understand the deviation.

In the end, there should be a good understanding of the tasks involved and the time required to perform them. The key is that the person responsible for performing the tasks has the final say in how long they will take.
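
Here is a sketch of how the spread in a team’s estimates can surface those discussions (task names, hours, and the 2x spread threshold are invented for illustration):

    task_estimates = {
        "Build login screen": [4, 5, 6],    # hours, one per estimator
        "Import legacy data": [8, 24, 40],  # wide spread -> talk it through
    }

    for task, estimates in task_estimates.items():
        low, high = min(estimates), max(estimates)
        if high > 2 * low:
            print(f"DISCUSS: {task} ({low}h to {high}h)")
        else:
            print(f"OK:      {task} (~{sum(estimates) / len(estimates):.0f}h)")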

When it comes to actual coding, I allow for changes in process and work preferences. Provided that the work gets done, and done as efficiently and effectively as possible, I’m good. We don’t have strict coding guidelines in our shop around formatting code. I really don’t care where the curly brace goes! I want code that can be understood and maintained over time, and I trust my staff to organize their code, comment it, and name variables in ways that will help others understand what the code is doing.

My guidance to individuals and teams is that I expect code reviews to take place. If you pair-program, the review is done by virtue of the fact that another pair of eyes is already on the code. If you don’t pair-program, I expect a code review to take place. I do, however, differentiate between the types of code review performed, depending upon the criticality of the code being written.

Peer reviews are fine for what we classify as low-risk code. For example, if you are correcting a defect, a peer review is fine. If a new feature is being developed, but this new feature is more of an ancillary, supporting function, peer reviews are also just fine.

I expect greater scrutiny for new features that are expected to be used frequently and that impact business data. Since such a feature (or routine) plays a major role in our software, a more detailed team inspection should be conducted to confirm the correctness of the implementation. The questions thus become:

If this is low-risk code, has a peer review been performed?

If this is high-risk code, has a team inspection been performed?

As the code is reviewed, the following questions come into play:

Have the software re-use and incorporation of design patterns planned during the design phase been implemented?

Is the code clearly commented and deemed maintainable?

Has error handling been incorporated?

Has a cyclomatic code complexity check been run?

In terms of producing code that is reliable and can be maintained over time, I like to see a cyclomatic code complexity check on any code produced. Overly complex code is not only a likely source of bugs, but it will take more time and effort for someone to modify in the future, increasing maintenance costs.
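
For illustration, here is a rough, self-contained approximation of such a check using Python’s standard ast module. Real complexity tools are more precise; this just shows the idea of counting decision points per function and flagging routines over a threshold (the file name and threshold are hypothetical):

    import ast

    # Node types that add a decision point (a rough approximation).
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)

    def complexity(func_node):
        # 1 for the function itself, plus 1 per decision point within it.
        return 1 + sum(isinstance(n, DECISION_NODES)
                       for n in ast.walk(func_node))

    tree = ast.parse(open("my_module.py").read())  # hypothetical file
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and complexity(node) > 10:
            print(f"Review {node.name}: complexity {complexity(node)}")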

And there should be a final check before code is accepted: Have the recommendations from a peer/team review been implemented in the code?

Testing is the last line of defense. Testing your way to quality is not optimal! The Cost of Defects model accurately reflects this, as does Steve McConnell’s observation that “Every hour spent on defect reduction will reduce repair time by 3 to 10 hours.”

Have the high-priority, high-severity bugs been addressed? Track the number of defects found during system and acceptance testing, comparing these against the lines of code produced. This can help you (roughly) compare a team against industry norms, and understanding both the number and severity of the outstanding defects will help you assess whether the software is ready for release.
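
A back-of-the-envelope defect-density calculation – the figures here are invented – might look like this:

    defects_by_severity = {"critical": 2, "major": 9, "minor": 31}
    lines_of_code = 48_000

    total_defects = sum(defects_by_severity.values())
    per_kloc = total_defects / (lines_of_code / 1000)
    print(f"{total_defects} defects, {per_kloc:.1f} per KLOC")
    print(f"Open critical/major: "
          f"{defects_by_severity['critical'] + defects_by_severity['major']}")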

How much of the product is being tested through regularly executed, automated testing? Did manual testing adequately cover the remaining bases, or are there gaps? The answers will inform you about the readiness of the software, providing a confidence factor when examining the outstanding bugs. A high percentage of test coverage should leave you feeling confident that you have surfaced most of the critical problems, whereas a low percentage should make you feel uneasy, because the product hasn’t been tested thoroughly enough.
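
To illustrate that gut-check – this is a framing device with invented thresholds, not a formal readiness model:

    coverage_pct = 82        # from your coverage tooling
    open_high_severity = 3   # from your bug tracker

    if coverage_pct >= 80 and open_high_severity == 0:
        print("Reasonable confidence in release readiness.")
    elif coverage_pct < 50:
        print("Low coverage: the open bug list understates the risk.")
    else:
        print("Review the remaining coverage gaps and open bugs before release.")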

The metrics presented here are geared towards providing as much flexibility as possible for software teams, keeping bureaucracy to a minimum. My objective is to manage high-performing teams who understand what needs to be done, when, and why. This allows teams to define and manage their own work without the need for rigid processes, which in turn is motivating for all concerned and allows our organization to adapt to changing circumstances.

This does require hands-on attention and active management. Expectations must be clear and understood, and the performance of individuals and teams must be continually assessed and acted upon – which includes making time to address performance issues and to reward great work (something that gets overlooked too often, because as managers we tend to focus on problems to a fault).

I’m very curious about the reactions that any readers may have!