My previous post walked through a common software development scenario: everything appeared to be well-planned at the beginning of the project, but as work progressed, the project came off the rails. This scenario is common because software projects invariably fail to go as planned. Figure 1-2 depicts how we envisioned the process:
Our expectation was that we could define our requirements fully up front and send them through our product development pipeline where those requirements were augmented with designs, transformed into working software, validated and ultimately delivered using a crisp, well-defined sequence of steps – all executed according to plan. However, our experience proved that things don’t work this neatly in actual practice.
One problem was that as work entered that pipeline, it grew bigger than originally planned, clogging the pipe in some cases. Our initial estimates were off because we lacked critical information about the users’ true needs – needs we uncovered only as we implemented features. This meant that our initial certainty about the requirements, and the confidence in our planning based on those requirements, was actually misleading.
Even answering questions during the implementation didn’t clarify all of the needs. We learned about some of the users’ needs when we heard the old, “Now that I see it…” phrase during milestone reviews that led to change requests.
This, too, is common, because users find software difficult to conceptualize from a document. They need to interact with the software in the context of the business objectives and tasks they are striving to accomplish before they can give a final, definitive verdict on whether it meets their needs.
The combination of these factors meant that we weren’t as complete as we thought we were early in the project, despite what our detailed project tracking and reporting were telling us. This is what I call the peril of predictive planning: believe initial estimates and plans – and even early progress updates – at your own peril, because virtually every software project encounters variability that will impact those plans.
If any of this sounds familiar, take heart. A majority of worldwide software projects fail to deliver on time, on budget, with the expected features per original estimates and planning. Defining success this way almost guarantees that you will be disappointed.
Let’s take a quick look at the numbers provided by The Standish Group from their Chaos Surveys (The Standish Group) in Figure 1-3:
Success, as defined by the Standish Group, equates to projects being delivered on time, on budget, with the expected features. Challenged projects were delivered late, went over budget, and/or shipped with fewer than the requested features. Failed projects are those that were cancelled prior to delivery or were never used.
As you can see, many try, but few succeed. And in case you are wondering, a primary reason that success rates improved over time is that software projects got smaller. So one way to improve your odds is to define smaller projects; as you’ll see, agile development takes small to a whole new level.
We Can’t Estimate
Actually, we aren’t very good at estimating task-completion times. This is another contributor to our inability to deliver on time.
We all tend to underestimate task completion times, a bias known as the Planning Fallacy (Wikipedia, 2013). Studies demonstrate that even when we have evidence from similar projects completed in the past to guide us, we insist that our current predictions are accurate. (And that is what we’re trying to do: predict the future.)
In a nutshell, with a non-agile approach to software projects we’re relying on our poor estimation abilities to make time-based predictions about work that inherently contains variability and uncertainty. And then we become disappointed in our ability to deliver on time, on budget, with the expected features, despite decades of evidence from around the globe that this is not a likely outcome.
Can leadership make a difference and effect a different outcome? Not if we cling to time-based estimates. Dr. Mario Weick studied the impact of leaders on the estimation process and found that people in charge – those who set policy and decide on courses of action – make overly optimistic time predictions as well.
“The more people focus on what they want to achieve,” Dr. Weick explains, “the more they tend to neglect impediments, previous experiences and task subcomponents that are not readily apparent. Power tends to increase people's focus on intended outcomes. Although this can be beneficial, in the context of time planning we reasoned that power would lead to greater error in forecasts.” (University of Kent, 2010)
How We Make Things Worse
There was a common mistake in our Common Scenario, one driven by leadership: the VP of Development confused estimates with commitments, holding “…those who provided the estimates at fault for not meeting their commitments.”
An estimate is supposed to be an approximation and not a precise figure. Due to the uncertainty and variability involved, committing to a date early in a project cycle is really a guess – wishful thinking – nothing more.
If we want a committed date early on, the best anyone can do in non-agile projects is to estimate and then add serious padding to the schedule to contend with the uncertainty and unknowns, arriving at a date that can be committed to. The problem I’ve had with padded schedules in my pre-agile development days is that human beings tend to feel comfortable with the time allotted, so the work expands to fill the time – putting us right back at square one.
Another potential problem is that someone (the customer, management, or both) could look at the schedule and wonder why the project will take so long given the work that is slated to be performed. This can lead to scrutinizing the tasks and negotiating the team down to a “reasonable” – but unrealistic – schedule, because no one has any evidence to support why the estimates can’t be reduced. That evidence always comes later, when actual work is performed and reviewed.
Time estimates combined with uncertainty and variability plant the seeds of a schedule nightmare and a lot of unhappiness later. And if a team has been negotiated down to what they feel is an unrealistic schedule right out of the gate, you have a situation where people feel that they have been set up to fail, which does nothing to build commitment and engagement. Quite the opposite, in fact.
But let’s say that we ignore this and press on. Sooner or later, it will become obvious that our plan is in jeopardy. How we react to the situation becomes the next concern. The Common Scenario incorporated a typical reaction: steps were taken to get back on track.
When we do this, we send a not-so-uplifting message that we are failing, and that conforming to the plan is of primary importance. Variability becomes the enemy, and we take steps to remove as much of it as we can – even though this variability represents an opportunity to maximize value.
Consider some of the corrective measures taken towards the end of the Common Scenario. The original vision was “cut down to size” (functionality reduced), even after you, as the business representative, had already made compromises before the plan imploded. Additional change requests became something to avoid (by “scrutinizing” them), and the development team was pressured into overtime.
Overtime is commonly leveraged to get back on track because of the perceived increase in productivity that it provides. And it is often used as an indicator of commitment. This is both an overuse and abuse of overtime.
Figure 1-4 from a Rules of Productivity Presentation by Dan Cook (Cook, 2008) shows how we obtain a temporary boost in productivity for a few weeks or so, but it steadily declines and then goes negative after week 4:
This dynamic plays out at an individual and team level. Software teams will experience the same downward trend of their collective productivity after a period of weeks. The message is clear: overtime can get you over a small hump, but you can’t leverage overtime indefinitely. And people need to recover from it in order to return to their normal level of productivity.
Excess overtime will also cause other unintended consequences with software projects. For a start, you will drive defect rates up. A study reported in an article, Impact of Overtime and Stress on Software Quality by Balaji Akula & James Cusick, demonstrates that there is a dramatic reduction in defects when no overtime is involved. (Cusick, 2008)
Balaji and James focused on the impact of project teams working on an aggressive schedule, studying four projects over a two-year period. The results are displayed in Figure 1-5:
Notice how Projects 2 and 3 had the exact same estimate in person-hours, yet the defect rates for Project 2 – where overtime was applied – were an order of magnitude higher. There are a couple of issues that drive this.
The obvious issue with overtime is that programmers will eventually become fatigued. This will lead to mistakes because they will stop thinking as deeply and carefully about their work. Naturally, higher defect rates will cause a project schedule to be pushed out even further because more time and effort are required to address an increasing number of defects, taking time away from adding new features.
But these defect rates are also an indicator of a deeper problem beneath the surface. By way of analogy, think of software development like writing: your first draft may contain the main points, but the expression needs to be refined.
In a software context, this is reflected in the overall design of the code – the craftsmanship aspect of software development. With intense schedule pressure combined with overtime, software craftsmanship and the practices that support it will most likely take a back seat. This pressure shifts the focus to the output of features delivered at the expense of well-designed code.
Unit testing is not likely to be as thorough, which can let bugs slip through. What about design and code reviews? Tired, overworked programmers are likely to start cutting corners – omitting design and code reviews, and skipping the refactoring that makes code more understandable and maintainable – even though these practices have been proven to be highly effective at reducing errors.
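To make the craftsmanship point concrete, here is a deliberately small, hypothetical sketch (in Python; the function names, discount rule, and data shapes are all invented for illustration) of a “first draft” function written under schedule pressure alongside a refactored equivalent, with a quick check that the cleanup preserves behavior:

```python
def total_rushed(items):
    # "First draft" under deadline pressure: duplicated, tangled
    # logic that works today but is painful to change tomorrow.
    t = 0
    for i in items:
        if i["type"] == "book":
            t = t + i["price"] * i["qty"] - (i["price"] * i["qty"] * 0.1)
        else:
            t = t + i["price"] * i["qty"]
    return t


def total_refactored(items):
    # Same behavior, expressed with clear names and no duplication.
    DISCOUNTS = {"book": 0.10}  # hypothetical rule: 10% off books

    def line_total(item):
        subtotal = item["price"] * item["qty"]
        return subtotal * (1 - DISCOUNTS.get(item["type"], 0))

    return sum(line_total(i) for i in items)


# A small test like this is what makes refactoring safe: it pins down
# behavior so the cleanup can't silently change results.
order = [{"type": "book", "price": 20.0, "qty": 2},
         {"type": "other", "price": 5.0, "qty": 3}]
assert abs(total_rushed(order) - total_refactored(order)) < 1e-9
```

The point isn’t that the rushed version is wrong – it computes the same totals – but that the refactored version is far easier to extend safely, and the closing assertion is exactly the kind of small test that gets skipped when schedule pressure squeezes out craftsmanship.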
Put yourself in the following situation and as you do, ask yourself what call you would make.
You’re working on Project X, and Project X is significantly behind schedule. Senior management has decreed that, “This is a critical project! All vacation time is cancelled until we get this product out the door!” (Been there, had that done to me.)
It is now early October, and it’s beginning to look like you won’t be spending any time off with your family at Thanksgiving or Christmas, based on current schedule projections. You’re an experienced developer, and this is the same well-worn scenario that you’ve lived through many times before. You’re thinking about packing your bags and taking your skills elsewhere where maybe this time, things will be different.
It is Sunday evening. You’ve been working for fifteen hours straight, and you’re faced with a choice: check in working code that should be refactored to maintain a solid design, or call it good and move on to the next feature tomorrow. You’re tired, and you really want to be done with this project because your health is being affected, along with your family life. In fact, just yesterday your spouse practically had to pry your fingers off of the keyboard to get you to go and watch your daughter’s soccer game. (This actually happened to me.)
Realistically, the code isn’t going to be re-factored, is it? In the face of situations like this, developers will still keep plugging away, hopeful that they can come back some day and clean the code up. But when the focus is on feature delivery and dates, short-term thinking will take precedence.
As I pointed out, the cumulative effect of all of this is that features take progressively longer to add, and the team is saddled with the additional work of fixing more and more defects. This in turn diverts time away from adding new features, stressing the schedule even more. But believe it or not, it can get worse from there!
Let’s say that schedule pressure continues even longer, across multiple releases of the product. There will come a day when you ask the developers to just add one more feature and they are going say, “Stop! The code base is a mess, and we can’t add any more features until we do a complete re-write.”
The scary part is, they will be right. Cramming features into a software product without regard for the quality of the design and the maintainability of the code base gives a false sense of progress because “development is done” – but it has not been done well.
As I’m sure you can appreciate, the likelihood of anyone feeling good about what was produced at the end of the Common Scenario was very low. Everyone might feel relieved that the project is over, but the rewarding, satisfying feeling everyone had at the outset was long gone by the end.
Next post: Why do we approach software projects this way?
This post is a draft of content intended for an upcoming ebook: Agile Expectations: What to Expect from Agile Development, and Why. Blog posts will be organized under the “Agile Expectations” content on the left-hand side for easy reference. I welcome your comments and feedback! – Dave Moran
Cook, D. (2008, September 28). Rules of Productivity Presentation. Retrieved from LostGarden: http://www.lostgarden.com/2008/09/rules-of-productivity-presentation.html
Akula, B., &amp; Cusick, J. (2008). Impact of Overtime and Stress on Software Quality. Retrieved from Mendeley: http://www.mendeley.com/profiles/james-cusick/publications/conference_proceedings/
The Standish Group. (n.d.). Chaos University. Retrieved from CHAOS University: http://blog.standishgroup.com/pmresearch
University of Kent. (2010, March 26). Feeling powerful leads to more optimistic and less accurate time predictions. ScienceDaily.
Wikipedia. (2013, October 15). Planning Fallacy. Retrieved from Wikipedia: http://en.wikipedia.org/wiki/Planning_fallacy