Let’s examine the classic definition of success: on time, on budget, of high quality, with the expected features. Is it possible for a project to meet this traditional measure of success and still be regarded as a failure? I’ll answer that question with a question: What if the project ultimately fails to provide the anticipated business value?
Conversely, what if a project fails on the traditional criteria, yet delivers greater business value than anticipated? It happens; in fact, CIO Magazine reported on this very phenomenon in an article by R. Ryan Nelson called “Applied Insight – Tracks in the Snow.”
In the article, Nelson cites the example of a financial services company that developed a new system to improve collections performance. The project was six months late and cost twice the original estimate – failing in the classic sense. Once in production, however, the system delivered a 50% increase in the number of concurrent collection strategy tests. On that basis, the project was judged to be a great success!
Relate this back to the results from Scott Ambler’s 2008 survey that I discussed in my previous post, The Elusive Definition of Success with Software Projects, where 70% of respondents believed that providing the best ROI is more important than delivering under budget. This scenario bears out their opinion perfectly.
There are other realities in the business world, and one is that business priorities shift over time. Can you state with certainty, conviction, and honesty that today’s priorities will remain firm for an entire year? Unfortunately, a number of software projects carry this implicit expectation because they have delivery dates set a year or more into the future!
Achieving Software Project Success
The situation above, where a project was six months late and cost more than twice the original estimate yet was still regarded as successful, points to the first key to achieving success: understand your expected return.
Understanding the benefit that you expect to get out of the software gives you valuable insight into both the worth of the software and your exit criteria. One thing is certain: software projects are not as predictable as everyone would like them to be (I won’t go into the reasons here). If project timelines continually slip into the future, you need to start asking yourself whether the additional investment is still worth the payoff.
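To make that weighing concrete, here is a minimal sketch of the arithmetic. The benefit, burn rate, and schedule figures below are purely hypothetical assumptions for illustration, not data from any study:

```python
# Hypothetical figures (assumptions, not real project data):
expected_benefit = 500_000   # anticipated business value of the software
monthly_burn = 50_000        # fully loaded team cost per month
planned_months = 6           # original schedule

def months_of_slip_until_breakeven(benefit, burn, planned):
    """How many months of schedule slip erase the expected payoff."""
    planned_cost = burn * planned
    remaining_margin = benefit - planned_cost
    return remaining_margin / burn

slip_budget = months_of_slip_until_breakeven(
    expected_benefit, monthly_burn, planned_months)
print(f"Slip tolerated before the payoff disappears: {slip_budget:.1f} months")
```

With these assumed numbers, four months of slip consume the entire expected return; every month beyond that, the project costs more than it is expected to deliver. That threshold is a natural exit criterion.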
The 2006 Chaos Report from The Standish Group found an 18.8% improvement over a 12-year period in software projects meeting the on time, on budget, and expected features criteria. Even so, only 35% of all software projects succeed by this classic measure, but it is still worth examining what drove the improvement to understand what it takes for projects to be successful.
When asked for the reason that project success rates improved, Standish Chairman Jim Johnson replied, “The primary reason is the projects have gotten a lot smaller.” And this means both smaller teams and a smaller number of features.
Smaller teams are more productive than larger teams. This claim is supported by studies from Quantitative Software Management (QSM), a company that maintains a comprehensive database of software development project metrics, drawing on its own projects and on a repository that is part of its SLIM estimation product.
Here’s a pop quiz for you, based on QSM data: Two software teams must each deliver a project estimated at about 40,000 lines of code. One team, Team Large, has 29 people; the other, Team Small, has 3. How much faster do you think Team Large will be?
According to the QSM data, the difference would be a mere 12 calendar days! The other significant difference is that Team Large would consume 151 more person-months than Team Small. This raises the question: why did Team Large experience such poor productivity?
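Those two figures are actually enough to back out the implied schedules and effort. A small sketch, assuming the 12-day gap and the 151 person-month gap are exact and taking an average month to be about 30.4 days:

```python
# From the QSM comparison quoted above:
#   Team Small: 3 people; Team Large: 29 people
#   Team Large finishes 12 calendar days sooner
#   Team Large consumes 151 more person-months
# Let d = Team Small's duration in months. Then:
#   29 * (d - 12/30.4) - 3 * d = 151
small_team, large_team = 3, 29
gap_months = 12 / 30.4          # ~0.39 months (assumed month length)
extra_effort = 151              # person-months

d_small = (extra_effort + large_team * gap_months) / (large_team - small_team)
d_large = d_small - gap_months
effort_small = small_team * d_small   # ~19 person-months
effort_large = large_team * d_large   # ~170 person-months

print(f"Team Small: {d_small:.1f} months, {effort_small:.0f} person-months")
print(f"Team Large: {d_large:.1f} months, {effort_large:.0f} person-months")
```

In other words, roughly nine times the effort buys less than two weeks of schedule.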
There are a number of reasons why large teams suffer productivity drops. Teams start out as collections of individuals who must spend time getting to know each other’s strengths, weaknesses, preferences, and idiosyncrasies. Some mixes of people work better than others, but no matter what, it takes time for people to work effectively as a team, and it takes much longer with a larger team.
Small teams can also coordinate their work far more easily and quickly than larger teams. Larger teams carry greater overhead, including more lines of communication, and the more people and communication lines involved, the greater the opportunity for miscommunication.
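The overhead grows quickly. By the commonly used rule of thumb, a team of n people has n(n-1)/2 pairwise communication channels; a quick sketch using the two team sizes from the quiz above:

```python
def communication_channels(n: int) -> int:
    """Pairwise lines of communication in a team of n people."""
    return n * (n - 1) // 2

for size in (3, 29):
    print(f"Team of {size}: {communication_channels(size)} channels")
# A team of 3 has 3 channels; a team of 29 has 406.
```

Going from 3 people to 29 multiplies headcount by about ten, but multiplies the channels where miscommunication can occur by more than a hundred.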
The result of these coordination and communication issues can be just what another aspect of the QSM data reveals: larger teams produce over six times as many defects as smaller ones. As you can imagine, this imposes a significant burden on productivity, since defects must be logged, reviewed, fixed, and retested. Overall throughput – getting things squarely into the “done” column – suffers significantly.
A smaller number of features is easier to contend with. A large feature count is harder for project teams to wrap their heads around. My experience is that, all too often, projects with large feature counts devote less time to each feature than smaller projects do.
If the time spent per feature becomes inadequate to thoroughly define each and every feature, ambiguities creep in, and they remain undetected until later in the software development process. But they will surface and cause problems, moving a seemingly well-defined project into the challenged category, particularly if those ambiguities create expensive rework.
The other problem with a large feature count is that more planning, design, and testing is required to ensure that all of the features work in harmony. The greater the feature count, the greater the challenge of maintaining internal consistency; features cannot conflict with one another or otherwise interfere with related processing. Quite simply, a large feature count increases the likelihood of defects.
Focusing on a small set of features is also important because business priorities shift over time. Projects with long time frames are likely to have features that are deemed as important now, but won’t be as important when the software is delivered. And let’s not forget that there is uncertainty about the anticipated benefits in the first place; the realized benefit could be less or greater than what was anticipated at the outset. A smaller feature set will have a faster delivery schedule and enable the business to assess the reality of the benefit(s) much earlier.
There is also the problem of stakeholders having a high-level goal that they want to achieve, only to see that goal get lost when the definition of the features is delegated to others in the organization. While it is certainly appropriate to involve those who work in the trenches, as they will be the ultimate users of the software, there is a tendency to add a variety of “must have” features to the list along the way, some of which will not necessarily relate to the high-level goals of the project. A stakeholder review of the features can catch this situation before expensive software design and development begins.
Notice that I did not advocate using any one software development process over another, but instead sought to focus on the key ingredients to achieve higher productivity and greater success:
- Understand your expected return and weigh this against the investment that you are making over time.
- Focus attention on the smallest set of features that deliver the greatest business value.
- Use the smallest team possible to achieve maximum productivity.
Software needs to provide the greatest possible business value through the smallest number of features possible, implemented by the smallest team possible, and delivered in the shortest time frame that ensures quality.
- Scott Ambler’s December 2008 Software Development Project Success Survey
- Standish Group Report: There’s Less Development Chaos Today, by David Rubinstein, March 1, 2007
- Standish Group: Project Success Rates Improved Over 10 Years
- CIO Magazine: Applied Insight – Tracks in the Snow, by R. Ryan Nelson
- ComputerWorld: Development Study: Haste Makes Waste, by Linda Hayes, September 23, 2005