Even if we believe that we are in a low-variability scenario, it is more likely that we are: a) underestimating the amount of variability involved, and/or b) overlooking the fact that we are part of a larger process within the entire customer value stream.
Let’s consider the latter point first, using an example from my previous post, where I talked about the implementation of home or auto insurance in our software (for a company where I previously worked). I stated that this was all about transforming a previously defined and approved insurance product definition into our software using tools that we provided. From this perspective, there was low technical and business variability.
But this is also a limited view of the entire value stream from a customer’s perspective. If we expand our thinking a little, it becomes clear that there is an opportunity to combine the actual product definition for regulatory approval purposes and its implementation in our software. In other words, if we actively participated in the upstream product definition activities, we could make things more efficient and effective for the customer.
Moving back to the first point about underestimating variability, the historical track record of software projects delivering on time, on budget, with expected features speaks for itself. As covered in Chapter One, it doesn’t happen often, and odds are that we will contend with enough variability to throw us off our pre-planned track. This doesn’t make variability the enemy, just something we need to acknowledge and manage.
As we’ve seen by examining the Cone of Uncertainty, learning actually spans the boundary of defining and delivering. You can’t get all of your learning done up front, no matter how hard you try. What we need to do is improve our learning, allowing it to happen as soon as possible and for the least amount of effort and cost.
This tells us that we need to incorporate a principle of lean development into the mix: We need to amplify learning about what we are building and how we are building it, allowing for some overlap between defining and delivering to enable and leverage our learning, as Figure 3-2 illustrates:
I’m going to fill in the shaded overlap point a little later on. Before I do, I want to set the stage by talking about a few other principles of lean development that help us to amplify learning for the least amount of effort and cost. These principles are:
- Eliminate waste
- Deliver as fast as possible
- Decide as late as possible
Requirements are the inventory of software development, and with many software projects we invest a considerable amount of time and effort in defining and shaping this inventory well in advance of when we will actually use it. This not only generates a huge amount of waste; it is also not as beneficial as we would like to believe.
If you are defining annual releases, for example, what are the odds that your business priorities will remain stable over that entire year? Will all product features remain fixed and will you remain committed to implementing those features as defined in advance throughout the entire year?
If priorities shift, some work will never see the light of day, swapped out in favor of other items. It is clearly a waste to define features in detail that will never be used. That is why we decide as late as possible about our priorities and deliver in short cycles that enable us to continually re-prioritize. Deciding as late as possible also applies to deciding about the details of the features and the plans to implement those features.
By waiting until the point in time when we are just about to implement a feature, we give ourselves the opportunity to incorporate everything that we have learned about what the customer values and what it takes to implement that feature. We are operating with the latest, most up-to-date information at all times.
This is why agile development makes use of short, time-boxed cycles that are measured in weeks to deliver working business features. These short cycles are feedback loops designed to capture learning from one cycle and incorporate that learning into the very next cycle. Note that another key difference is that agile development focuses on delivering complete, tested business features in each cycle as opposed to delivering horizontal, architectural layers. Each cycle is a bundled activity of planning, designing, building, testing, reviewing and reflecting on how to improve, as depicted in Figure 3-3.
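The just-in-time, feedback-driven cycle described above can be sketched as a toy simulation. This is purely illustrative — the feature names, priority scores, and the `run_cycle` helper are my own assumptions, not anything prescribed by a particular agile method:

```python
import heapq

# Toy model of just-in-time prioritization: the backlog holds coarse
# feature ideas; only the item chosen for the current cycle gets detailed.
# Priorities are negated because heapq is a min-heap.
backlog = [(-priority, name) for priority, name in [
    (8, "online quoting"), (5, "agent portal"), (3, "audit reports"),
]]
heapq.heapify(backlog)

def run_cycle(backlog, feedback):
    """Pull the top item, detail and deliver it, then let review
    feedback re-score what remains before the next cycle begins."""
    _neg_priority, name = heapq.heappop(backlog)
    delivered = f"{name} (detailed just-in-time)"
    # Feedback from the cycle review may promote or demote remaining items;
    # items without feedback keep their original priority.
    rescored = [(-feedback.get(n, -p), n) for p, n in backlog]
    heapq.heapify(rescored)
    return delivered, rescored

# Suppose reviewers discovered that reporting is now urgent.
feedback = {"audit reports": 9}
delivered, backlog = run_cycle(backlog, feedback)
print(delivered)       # -> online quoting (detailed just-in-time)
print(backlog[0][1])   # -> audit reports (re-prioritized to the top)
```

The point of the sketch is that nothing below the top of the backlog is elaborated until its cycle arrives, so re-prioritizing after each review discards no detailed work.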
Another form of waste concerns the features that are actually implemented. The Standish Group has an often-cited study reporting that forty-five percent of the features in a typical system weren’t even used (Fowler, 2002). Another nineteen percent were rarely used. Talk about wasted effort to define, design, build and test all of these features!
There are many who argue that the figures from the Standish Group are suspect based on the lack of information on how the numbers were gathered. I don’t have numbers from my own experience, but here are a few scenarios that I’ve encountered that can lead to defining and implementing features that aren’t necessary:
- “This feature is critical to our business.” I’ve seen executives insist that certain features were required – vital to their operation – only to discover later, after the software was delivered, that these “critical” features weren’t actually needed. The operation had changed, and the only ones who didn’t know this were the executives.
- “Let’s nail the project so that we don’t have to come back to this later.” You can eat up a lot of development capacity adding unnecessary features because people are striving to meet a standard of excellence by anticipating every possible scenario – no matter how uncommon some scenarios may be – and including them in what must be delivered.
- “The project team might not be in place next year; we need to ask for everything now.” This is a concern that once the team moves on to something else, it won’t be available to add any new features until a much later date. Once the team is in the grips of one department or division, everything that can be thought of is added to the project. Of course, everyone else is doing the same thing, lengthening projects and creating delays in beginning other work.
Given this insight, we not only have a problem with cost, schedule and feature predictability, we have a major challenge with defining and delivering a valuable system. What if we “succeed” by delivering on time, on budget and with the expected features, but accomplish this by allowing very little change? Will the software be used as delivered?
At project inception, value is speculative. Actual results may vary. If the software isn’t as usable as it could be or isn’t modified based on learning during product development, it can deliver less value than anticipated. On the other hand, if we allow ourselves to adapt the requirements based on what we learn, it is possible that the end system can generate greater value than originally anticipated, making cost and schedule overruns worth the expenditure.
CIO Magazine reported on this very phenomenon in an article, Applied Insight – Tracks in the Snow, by R. Ryan Nelson (Nelson, 2006). In the article, Nelson cites an example of a financial services company that developed a new system to improve collections performance that was six months late and cost twice the original estimate – failing in the classic sense. However, once in production, the system provided a 50 percent increase in the number of concurrent collection strategy tests. On that basis, the project was judged to be a great success!
This leads to a simple conclusion: Signing off on a specification and “building to the spec” might meet a project measurement, but it won’t necessarily satisfy the customer or the needs of the business. And with agile development, our mission is to improve customer and business outcomes.
Let’s wrap up the discussion of waste with another form of significant waste that we’ve already touched upon: generating detailed project plans up front based on detailed requirements. We invest valuable time in tasking out and estimating work that is speculative at best, creating what appears to be a well-thought-out, clear and crisp plan that can be followed without deviation. But things rarely work out according to our initial plan. (People do love crisp status and percent-complete indicators, though; they give us a comfortable, but inaccurate, feeling of control.)
In my thirty-plus years of experience I’ve never delivered a software project that went exactly according to plan. Business priorities shifted, we encountered one or more technical challenges, or we encountered the need for rework based on feedback from the users. Realities always drive change. The variability expressed in the Cone of Uncertainty always surfaced!
What I have done is spend an inordinate amount of time reworking plans to deal with all the changes, dependencies, resource allocations, dates and associated logistics multiple times per project. And even though I understood the need for change, I’ve always felt that it was a tremendous waste to be discarding all of that detailed work as I slowly and painfully adapted to the current realities.
This is one way that a certain amount of inertia gets built up around change, and in some cases it can become downright adversarial. Between the schedule impact and the cost and effort required to deal with change, a natural outcome is to create a change control process that lives up to its name: it is really seeking to control change by negating it as much as possible, or making it painful, so that we can conform to plan rather than accommodating change.
When you hear about agile development being adaptive and responsive, this is what we are talking about! With agile development we can adapt and respond much more quickly and for less effort and cost by deferring on detailing features and plans until we are actually going to use that information, operating on a just-in-time basis.
And because we defer on defining the details of features in advance along with the detailed planning to implement those features with agile development, we not only keep the cost of change low, we increase the value of the software being delivered. But wait, there’s more!
If we defer on all of those details, that means… We can begin work much sooner. Our planning horizons are shorter and easier, allowing us to quickly start a project, assuming that we have the capacity to do so.
Beginning sooner is one way agile development supports the principle of delivering as fast as possible. Another way agile development supports this principle is to define the smallest amount of functionality that is necessary, deliver those features using very short cycles – measured in weeks, not months – and review that work at the end of each short cycle with the customer(s) and stakeholders.
Reviewing working software allows agile teams to engage in rapid, validated learning with the customer(s) and stakeholders throughout development. Add in something like the Sprint Retrospective used in Scrum, where teams reflect on how to deliver more effectively, and we have addressed the key learning objectives associated with amplifying learning as fast as possible and at the lowest possible cost.
In summary, we can amplify learning by learning faster. Agile development accomplishes this in the following ways:
- By getting to the actual work sooner than non-agile projects.
- By using very short delivery cycles to obtain rapid feedback on what is delivered.
- By reflecting in short cycles on how to deliver more effectively.
- By incorporating learning about what the customer truly needs and values along with how to be more effective in delivering that value into future cycles.
This post is a draft of content intended for an upcoming ebook: Agile Expectations: What to Expect from Agile Development, and Why. Blog posts will be organized under the “Agile Expectations” content on the left-hand side for easy reference. I welcome your comments and feedback! – Dave Moran
Fowler, M. (2002). The XP 2002 Conference. Retrieved from Martin Fowler.com: http://martinfowler.com/articles/xp2002.html#BuildOnlyTheFeaturesYouNeed
Nelson, R. R. (2006, September 1). Applied Insight – Tracks in the Snow. CIO Magazine.
The Standish Group. (n.d.). CHAOS University. Retrieved from http://blog.standishgroup.com/pmresearch
Leishman, T. R., & Cook, D. A. (2002, April). Requirements Risks Can Drown Software Projects. CrossTalk: The Journal of Defense Software Engineering.
Wiegers, K. E. (2003). Software Requirements (2nd ed.). Redmond, WA: Microsoft Press.