Our goal with planning software projects is predictability. But as we’ve proven with traditional approaches, our age-old enemy – variability – inevitably creeps into the equation to wreak havoc with our plans. This isn’t a planning failure; it’s the failure of assuming that we can plan with certainty up front.
The Cone of Uncertainty (McConnell, 1998) is an excellent visual model that captures this dynamic. It plots the range of variability encountered with software projects as they progress through time. Figure 3-1 depicts the general shape of the Cone of Uncertainty, minus the actual data points used to create the cone.
The Cone of Uncertainty shows that variability is greatest early on, and this variability brings both uncertainty and risk. Variability begins as a wide range; at project inception, for example, estimates can be off by as much as a factor of four in either direction. As you can see, the cone narrows as the project progresses through its phases of inception, requirements, design, development, and testing.
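To make the "factor of four" concrete, here is a small illustrative sketch. The phase names and multipliers below are approximations of the values McConnell popularized (0.25x–4x at initial concept, narrowing in later phases); treat them as ballpark figures for illustration, not exact data.

```python
# Illustrative sketch of how the Cone of Uncertainty narrows.
# Multipliers are approximate values popularized by McConnell,
# not exact data; they are here only to show the shape of the cone.

CONE = [
    ("Initial concept",       0.25, 4.0),
    ("Approved definition",   0.50, 2.0),
    ("Requirements complete", 0.67, 1.5),
    ("Design complete",       0.80, 1.25),
]

def estimate_range(point_estimate_weeks, phase):
    """Return the (low, high) range the cone implies at a given phase."""
    for name, low, high in CONE:
        if name == phase:
            return (point_estimate_weeks * low, point_estimate_weeks * high)
    raise ValueError(f"unknown phase: {phase}")

low, high = estimate_range(20, "Initial concept")
print(f"A 20-week estimate at inception could really mean {low:.0f} to {high:.0f} weeks")
```

A nominal 20-week estimate made at inception could plausibly land anywhere from 5 to 80 weeks, which is exactly why early point estimates are so treacherous.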
The Cone of Uncertainty vividly illustrates that the early stages of a traditional project are actually the worst possible time to rely on estimates of when we will deliver. It also helps explain why asking a team to spend extra time up front on estimation does nothing to improve the reliability of our estimates.
Improving our estimates means addressing the variability that drives our uncertainty. Notice that I said we need to address variability. We can’t eliminate it. Just how much variability we encounter, however, depends upon the circumstances we are facing.
For example, I once worked for a company that provided software to insurance companies (think home or car insurance). We provided tools that allowed our customers and/or our internal staff to define a line of business that could then be processed using our software. From our customers’ perspective, our tools provided a steady-state platform where the technical variability was low.
By the time an insurance company began implementing a line of business in our software, they had already acquired a great deal of clarity about what needed to be implemented based on the regulatory approval process. In other words, by the time we – or our software – became involved, the business problem was already well-defined by people who possessed a great deal of domain knowledge. This kept the variability of the business implementation just as low as the technical variability.
In this scenario, a consistent, repeatable series of steps using our toolset was really all about building a variant of the same application over and over again, based on product decisions that had already occurred. The implementation was more about translating that product definition into our software.
Conversely, when we needed to add new, unique features or even completely new applications to our software, we were dealing with greater variability and uncertainty. For a start, we were designing and building software with ever-changing technology. For us, the software development tools, operating systems, database engines, and so on were always advancing, and we needed to keep pace.
In addition, new features needed to support the needs of multiple customers. We were dealing with a great deal of design work concerning things like workflows, user interfaces, database schemas, software architecture and code design. We needed feedback on what worked well and what didn’t work well from a cross-section of customers. This type of work reflects the variability found in the Cone of Uncertainty.
As you can imagine, there is an even greater extreme: developing a new product using new technology where your own domain knowledge and expertise is low. Variability and uncertainty will be a part of your experience day in and day out.
One key insight from the Cone of Uncertainty is that in order to reduce variability we need to perform actual work. As software teams work they are learning about what the customer truly values and how to best deliver that value. Equally important, customers are learning about what is possible along with how the software will look and feel as requirements are translated into working software.
The expectation of learning is a critical one because there is always something new involved with software projects. We aren’t simply copying what is already available someplace else. There is always an angle being pursued to provide something that is uniquely tailored to meet a specific set of needs.
Software development is thus more accurately described as a large learning exercise that requires a variety of specialists and domain experts working in collaboration to produce a valuable, high-quality product. Our challenge is to address variability, uncertainty and risk in a way that allows us to forecast delivery dates with a good degree of reliability while enabling all of that learning to take place among the experts involved.
Simply stated, this is product development, a scenario that agile development is well-suited for. In most software development scenarios we cannot eliminate all of the variability up front before we perform our work. The name of the game is to manage variability and institute regular feedback loops so that learning occurs early and often. Agile development allows us to do this with the least effort and cost.
We’ll see how this is accomplished in upcoming posts.
This post is a draft of content intended for an upcoming ebook: Agile Expectations: What to Expect from Agile Development, and Why. Blog posts will be organized under the “Agile Expectations” content on the left-hand side for easy reference. I welcome your comments and feedback! – Dave Moran
McConnell, S. (1998). Software Project Survival Guide. Microsoft Press.