Software can be Useful Before it is Complete

August 28, 2009

“Complete” is an interesting term in the software business. A book is complete when there is a beginning, a middle, and an end. Who would want an incomplete book?

When software products are released, there is invariably a laundry-list of features that did not make the cut. That does not mean that the software is unusable, it only means that some features that you might like to have simply will not be available – at least not until another release of the software.

The major trick for those of us writing software is to wisely choose what features will be included, particularly with the first version of a product. The company that I work for sells software, and our livelihood depends upon fulfilling a need in the marketplace with products that compel a prospect to open his or her wallet and exchange money for our goods.

It is an easy trap to fall into, but trying to build a product with depth and breadth simply because something like it already exists – and you “must have everything that the other application has” – is an exercise in futility. Instead of spending time and effort duplicating something that already exists, focus on how your product will differentiate itself. You will save yourself a lot of aggravation.

I've had the experience of leading a team that was chartered with building a large, enterprise product that our Product Management team insisted be very broad. Since there were other products already in the marketplace, it was felt that we needed to offer what they did as a baseline. At the time, I guessed that the plan was to figure out how we were different later. (I was wrong on this point, as I will get to.)

I tried to fight this. I pulled a meeting together to talk about focusing on one area, keeping the scope down so that we would have something to enter the market with at an earlier date.

I was told no. We didn’t want to sell to a subset of the market; the goal was to have a product that could be sold to the entire market, and as a consequence our product needed to match competing products feature-for-feature. The keep-it-small-and-focused door was slammed in my face, but I felt that it was important to try before we went down a long and difficult road.

We went ahead and built the product, and it was a struggle. As we approached the first release, I became involved with the marketing efforts. Guess what question our Product Management team had for me, as the Manager of Software Development?

You guessed it! It was: “So, what differentiates our product from other products in the marketplace?”

While I stifled my disbelief, I was at least as prepared as I could be. We didn’t have specific business features that differentiated our product, but we had – fortunately – put a lot of thinking into the design of the product. We had a great user interface, and we had other technological options that enabled this product to easily connect with other systems. We had something – not as much as we could have had – but some things were out of my control.

Flash forward a few years. We now have a product from which we’ve stripped out some code and greatly simplified other areas, because we found that we didn’t really need what we thought we did early on. We’ve also got a much better idea of what should be part of the product in order to differentiate ourselves.

Part of this understanding became possible because we actively demonstrated the product to prospects and at conferences. If we had focused our efforts on delivering a smaller product to the market earlier, we would have been able to perform this activity much sooner, and would have been much further ahead of the game. And we would have generated revenue a lot sooner!

Need another example?

Google acquired a privately-held company named Upstartle, which had a Web-based word processing application called Writely. What was the appeal to Google?

Upstartle didn’t waste time and money building a product that had the same capability that already existed with Microsoft Word. Can you imagine all of the time and effort that would have been spent? And to what gain? Instead, Upstartle emphasized collaborative features – designing new, differentiating, and compelling benefits as part of a basic, no-frills word processing application.

The net result was that Upstartle was able to build and market a solution in a very short period of time. Upstartle felt that collaboration features were more important in today’s world than duplicating a variety of word processing features that only a subset of customers would find useful. Google agreed and bought the company, and the Writely product became Google Docs.

Overcoming the Business and Software Mismatch

August 22, 2009

My father worked as an electronics technician, and at an early age I became acquainted with electronic terminology. One term that can be applied to software development projects is known as an “impedance mismatch.”

Impedance describes a system’s opposition to the flow of electrical current. Electrical engineers match the impedance of sources and loads to maximize power transfer. When a mismatch occurs, energy is wasted.

Where is the mismatch in software development projects? More often than not, when business people and developers are involved, the “impedance mismatch” is really a difference in understanding the needs of a software development project and the expectations associated with those needs.

The real challenge with software projects is establishing the right level of dialog between the business and the software developers up front, and providing the ability to have a continual dialog over the course of the project. And this is where problems usually creep in.

What I’ve observed over the years is that business people get frustrated by what they perceive as constant questions from developers about things that they believe they have already answered. I can appreciate the limited time that business users have available; it’s just that it is also difficult from a development standpoint to operate any differently.

Consider this: Without software, computers have no intelligence whatsoever. In fact, they are very literal, and they will only do exactly what someone tells them to do, how they were told to do it, and when. This means that every step in a business process – down to how the data is displayed, how it is stored, and which operations and calculations must be performed – must be given to the computer in the form of specific instructions.

It is difficult to anticipate all of the questions in advance. Since developers do not have the business background and experience, they aren't qualified to anticipate everything on their own. What usually happens on software projects is that the business problem starts out being expressed at a higher level than what will be sufficient later on, once actual, detailed development is under way.

Starting at a high level is good, as it provides a project team with an overall idea of what the end solution will be. Developers, on the other hand, are dealing with an insanely stupid, very literal machine and will need the help of those on the business side in telling that machine what you want it to do.

This is why developers are “detail-oriented” – they have to be. It’s also why business people sometimes get frustrated with continual questions from development teams. I’ve seen projects go astray because business input was limited, and development teams had to make guesses based on the information they had available to them – resulting in expensive and time-consuming re-work later because the software application wasn’t what the business thought that they were getting.

A software application is a computerized representation of the business; the application of your business on the computer.

The bottom line: Business people understand their business very well and know what they want to get out of the software that is under development. Software developers understand computers and software development, but they do not understand the business as well as the business users. And despite appearances to the contrary, the continual dialog is NOT the same set of questions being asked repeatedly.

If you are on the development side, explain the software development process to the business so that they understand the demands that will be made of their time, in terms that they can understand. Too often, the focus is on the process and what the business must use to interact with development, but never why. Give them an understanding!

If you are a business user, keep in mind that the software is for you. And bear in mind that the developers writing the software are spending their time keeping up with technological changes, working on tight deadlines while dealing with an insanely literal device, all the while trying to ensure that they are building what you need and expect.

Software Development is a Learning Process

August 15, 2009

Have you noticed that software projects struggle with predictability? There are a number of reasons why predictability is elusive, but one reason is that there is a great deal of learning going on, particularly in the early stages of a software project.

Those on the business side do not understand how to design and build software, which places them in the position of being able to articulate what they want to get out of the software, but unable to provide a design. Conversely, programmers can design and build software, but lack the business knowledge required to put business value into the software.

In order to build truly valuable software, business and software professionals must collaborate to solve difficult problems in the most effective way possible. This collaboration occurs most heavily at the beginning of a software project, but it does not end when the requirements have been gathered.

Unfortunately, all too often software projects demand solid estimates immediately after the requirements have been articulated. The assumption is that since the business has stated their problem and goals to the software professionals, it is now up to those on the software side to determine just what it will take to design and build the software.

With many software projects, these estimates become the very real Date by which the team will be judged. If the team misses the Date, the project will be regarded as a failure. Is this fair?


The most effective software projects continue to learn and explore throughout the life of the project. Just because the requirements have been stated doesn’t mean that the software professionals have everything that they need. The design and development process will raise questions, and this is where the business can and should support the programming staff.

Collaboration will yield better results than having programmers go off on their own. Programmers are smart people, but they aren’t qualified to make decisions about the business. Programmers spend a good portion of their time keeping up with technology and methods for translating business needs into working software, but don't let them make guesses about the business! This is a recipe for frustration and re-work – and you will likely discover the need for that re-work just when you think something is "done."

Most projects significantly reduce this learning – also known as variability – by the time they are 20-30% of the way in, as Steve McConnell reports with the Cone of Uncertainty.
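To make the cone concrete, here is a minimal sketch in Python. The milestone names and multipliers are approximations of the figures often cited from McConnell’s work; exact values vary by source, so treat them as illustrative rather than definitive.

```python
# Approximate Cone of Uncertainty multipliers (after Steve McConnell).
# These figures are illustrative assumptions, not authoritative values.
CONE = {
    "initial concept":       (0.25, 4.0),
    "approved definition":   (0.50, 2.0),
    "requirements complete": (0.67, 1.5),
    "design complete":       (0.80, 1.25),
}

def estimate_range(nominal_months, milestone):
    """Scale a nominal estimate by the low/high multipliers for a milestone."""
    low, high = CONE[milestone]
    return nominal_months * low, nominal_months * high

# A "100-month" nominal estimate at the earliest stage could plausibly
# land anywhere from 25 to 400 months of actual effort.
print(estimate_range(100, "initial concept"))  # -> (25.0, 400.0)
```

The point of the sketch: the same nominal number means wildly different things depending on how far into the project you are, which is exactly why early estimates deserve wide error bars.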

The exploratory, learning nature of software projects is advantageous to the business. I’ve seen projects where programmers – once they’ve reached a solid understanding of the business problem at hand – engage in a productive dialog about how to apply technology towards the problem, yielding greater results than what could have been otherwise achieved.

The main takeaway is that software projects should begin with a rough estimate, but then seek to refine the estimates at the 30% mark to gain a more accurate picture of what the project entails and determine what the real Date is.

I believe that this is true for any project, regardless of the methodology that you are using. Teams will gain greater insight as they step deeper into a project. They will learn more about what really needs to be built, and they will have a gauge on how fast they can build out the feature set as well as having greater insight to any potential risks.

This is not unprecedented, and has been used in other technological projects outside of the software industry. In an interview this summer about the moon landing and Northrop Grumman Corporation’s role in designing the lunar module, Dick Dunne, a former public affairs director, was quoted as saying, "We were learning as we were building. We were pushing the technology envelope. Windshields were cracking and engines weren't working."

The process used for designing the lunar module was strikingly similar to a good software project today. Gerry Sandler, who eventually became president of Grumman Data Systems, cited a strong team approach for the success of the project.

"You could talk to anybody at Grumman about anything," Sandler said. "You could see any boss, any specialist, anybody, and say I'm interested in this, let's talk about it."

Dick Dunne also noted that they held daily “stand-up” meetings – just like an agile/scrum team today. "Everyone went to that meeting and if you had a problem, you better tell it the way it is and not pass the buck," said Dunne.

Hmmm. Regular communication, confronting the facts, and learning as you are building. If we got to the moon 40 years ago using this process, it seems reasonable that we can build software today following the same guidelines! And like a lot of things, nothing is as certain as it appears on paper.

Essential Metrics

August 8, 2009

What are the most important metrics required to monitor software development? What is required to accurately gauge the progress and readiness of the software being developed? And how can these metrics be implemented with as little administrative overhead as possible, so that project teams can focus as much of their time as possible on delivering working software?

Essential metrics deal with maximizing the value proposition of the software along with understanding how well key software development activities are being performed.

Metric #1: Determine the ROI and the break-even point.

A software project is all about defining and delivering something new, requiring an investment in time and effort of people to design, build, and test the actual software. There should be a return on this investment – either revenue generated from sales or a savings created as a result of using the software. Understanding the point where the investment is no longer worth the effort is critical for all concerned. Why start or continue with a project that will cost more than it will return?
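As a rough illustration of the break-even idea, consider the sketch below. All figures are invented for the example – the function name and numbers are mine, not from any real project.

```python
# Break-even sketch with hypothetical figures.
def break_even_month(build_cost, monthly_savings):
    """Months of use before cumulative savings cover the build cost."""
    months = 0
    recovered = 0.0
    while recovered < build_cost:
        months += 1
        recovered += monthly_savings
    return months

# A hypothetical $120,000 build, recovered at $10,000/month in savings:
print(break_even_month(120_000, 10_000))  # -> 12
```

If the projected recovery horizon stretches past the useful life of the software, that is a strong signal the project costs more than it will return.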

Metric #2: Prioritized features.

What are the key features that should be delivered with the project? Those that are deemed to provide the greatest business value should be prioritized over other features, making this metric related to Metric #1.

I’ve seen software projects get into trouble because “everything is a priority.” Translation: A boatload of features are being targeted, without due consideration for what is truly valuable from a business standpoint. Everything changes over time, and what you think is important today might not be all that important 9-12 months down the road, particularly in today’s uncertain business climate.

I’m also a believer that software can be delivered incrementally, where a release containing higher-priority items can be provided to the business early. This allows the benefits of the software to be realized as quickly as possible. Users may also change their minds and decide that the lower-priority features aren’t really necessary – saving time and resources that can be applied to other projects.

I am in full agreement with the agile community on the next metric:

Metric #3: Working software is the primary measure of progress.

Why use anything else to gauge your progress? And there is no overhead involved! Prioritize the features that provide the highest value, and work towards delivering those as soon as possible. If you can’t see it working, don’t believe it.

Metric #4: Tracking the amount of time wasted due to re-work.

Efficiency and effectiveness – how fast quality, working software is delivered – are a function of the people involved: their skills, knowledge, experience, personality mix, and interactions with other team members. Wasting time means losing momentum, losing money, and sacrificing forward progress. Software activities are a component of the DNA of Software Development model.

This doesn't mean that there will be a perfect world, as most software projects will experience some level of refinement as they progress. For example, the Definition activity is about understanding what the software is expected to do – the requirements – and provides input to the rest of the software development process. During Design, it is likely that some refinement of the requirements is possible, as the user interface and structure of the system are defined.

Preventing wasted time and keeping costs down means that the refinement loops should be as short and tight as possible. Once you start reaching back across multiple activities in the process, you have problems!

Wasted time can occur at any point in a project, and the intent is not to point blame in any direction, but rather to keep the project on track, along with informing and educating everyone involved on the need to exercise due diligence in every aspect of software development. If wasted time starts creeping into a project (like re-work because requirements were not understood), an intervention can be staged to help prevent wasting any more time.

A grading exercise can help teams – particularly less-experienced teams – understand what is important and to keep the essential hand-offs between each phase in check. Key questions (in bold) that can be used by teams to evaluate where they are at during each activity are covered below, and represent the remaining metrics that I consider essential.

Does the Project Charter (or a Vision and Scope document) clearly define the goals and objectives of the project?

Are the business requirements clearly understood by all project team members?

If the team can’t answer these questions, then there is ambiguity present that will cause delays and problems – like costly re-work later on. Because there are people involved, there will likely be differences in how much detail is required for complete understanding; what works for one team may not work for another.

There is a point where requirements are distilled into product features, and this is not something that should be short-changed! As I noted in my post Six Keys to Successful and Productive Software Delivery, Karl Wiegers and Steve McConnell, two industry gurus, have pointed out the following in particular around requirements, defects, and wasted time:
  • Requirements defects in software projects account for approximately 50% of product defects.

  • The percentage of re-work on software projects due to requirements defects is greater than 50%.

An extreme example that I’ve seen in years past is when work had to be thrown out during the Verification (testing) phase because the Quality Assurance group had a different interpretation of the requirements than the Development organization, and it was determined that Quality Assurance was correct.

Before actual coding begins, there should be some type of design work performed. If a feature requires a user interface, has this been mocked-up in some way to validate that the design will work for the users?

My personal preference here is to use “paper prototypes” – screens drawn on paper. These are the quickest, lowest-cost prototypes available and can start out life as white-board drawings. Ideally, user interface designers and user experience (interaction) designers are available to create a truly fantastic user experience.

Other key questions related to design:

Are there system and component diagrams that designate the layers, roles, and responsibilities of the various software systems and components?

Are there well-defined interfaces between components?

If Web Service interfaces are to be used, are these well-defined, business-level interfaces?

Has a design review been conducted with peers to ensure that the specific design will satisfy the business need?

Is the use of design patterns planned, or is leveraging of existing routines planned?

Good design – and a review of that design – is all about preventing problems from getting into the code in the first place. All too often design is overlooked due to aggressive project deadlines; other times, what should be exploratory work to inform the design ends up being the design and development. This skipping or short-changing of steps will invariably come back to bite you!

My key questions around estimating are as follows:

Is the person doing the work providing input to the estimate? There is a variance in programmer productivity, and any estimate is highly dependent upon who will be doing the work. One programmer may have greater familiarity in one area than another – contributing to how quickly a given task can be completed. The person doing the work should have the final say, but should consider any guidance provided as a part of team input. This leads me to my next question.

Did the team estimate the tasks? When it comes to producing reliable estimates, I’ve found that nothing beats a team effort. The general approach is that the team meets and reviews the requirements one by one, listing the development tasks required to meet each requirement. Everyone gives their estimate, followed by a discussion if someone is low or high – so that the team can understand the deviation.

In the end, there should be a good understanding of the tasks involved and the time required to perform those tasks. The key being that the person responsible for performing the tasks has the final say in how long it will take.
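The estimation round described above can be sketched in a few lines. The names, numbers, and the 2x “spread” threshold below are hypothetical – the point is simply to surface tasks where estimates diverge enough to warrant discussion.

```python
# Minimal sketch of a team-estimation review round.
# Names, hours, and the spread threshold are all hypothetical.
def review_estimates(task, estimates, spread_threshold=2.0):
    """Flag a task for discussion when the high estimate exceeds
    the low estimate by more than spread_threshold times."""
    low = min(estimates.values())
    high = max(estimates.values())
    return {
        "task": task,
        "low": low,
        "high": high,
        "discuss": high > low * spread_threshold,
    }

result = review_estimates("import customer data",
                          {"Ann": 3, "Raj": 5, "Lee": 12})
print(result["discuss"])  # -> True: 12 hours vs. 3 hours warrants a discussion
```

The discussion that follows a flagged task is the valuable part – someone estimating 12 hours may know about a complication the 3-hour estimator hasn’t considered, or vice versa.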

When it comes to actual coding, I allow for changes in process and work preferences. Provided that the work gets done, and is done so as efficiently and effectively as possible, I’m good. We don’t have strict coding guidelines in our shop around formatting code. I really don’t care where the curly-brace is! I want code that can be understood and maintained over time, and I trust my staff to organize their code, comment it, and name variables in ways that will help others to understand what the code is doing.

My guidance to individuals and teams is that I do expect code reviews to take place. If you pair-program, the review is done by virtue of the fact that another pair of eyes is already on the code. If you don’t pair-program, I do expect a code review to take place. I do, however, differentiate between the type of code review being performed, depending upon the criticality of the code being written.

Peer reviews are fine for what we classify as low-risk code. For example, if you are correcting a defect, a peer review is fine. If a new feature is being developed, but this new feature is more of an ancillary, supporting function, peer reviews are also just fine.

I expect greater scrutiny for new features that are expected to be used frequently and impact business data. Since this feature (or routine) plays a major role in our software, a more detailed team inspection should be conducted to confirm the correctness and implementation. The questions thus become:

If this is low-risk code, has a peer review been performed?

If this is high-risk code, has a team inspection been performed?

As the code is reviewed, the following questions come into play:

Has the planned (from the design phase) software re-use and incorporation of design patterns been implemented?

Is the code clearly commented and deemed maintainable?

Has error-handling been incorporated?

Has a cyclomatic code complexity check been run?

In terms of producing code that is reliable and can be maintained over time, I like to see a cyclomatic code complexity check on any code produced. Overly complex code is not only a likely source of bugs, but it will take more time and effort for someone to modify in the future, increasing maintenance costs.
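For readers unfamiliar with the measure: cyclomatic complexity is, roughly, one plus the number of branch points in a routine. Dedicated tools do this properly; the sketch below is only a homegrown approximation using Python’s standard `ast` module, to show the idea.

```python
import ast

# Rough cyclomatic-complexity estimate: 1 plus the number of branch points.
# Real analysis tools are more thorough; this is only a sketch of the idea.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Parse Python source and count decision points, plus one."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # -> 3 (two branches, plus one)
```

A routine scoring in the single digits is usually fine; once the number climbs well beyond that, it is a hint the routine should be broken up before it becomes a maintenance burden.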

And there should be a final check before code is accepted: Have the recommendations from a peer/team review been implemented in the code?

Testing is the last line of defense. Testing your way to quality is not optimal! The Cost of Defects model accurately reflects this, as does Steve McConnell’s observation that “Every hour spent on defect reduction will reduce repair time by 3 to 10 hours.”

Have the high-priority, high-severity bugs been addressed? Track the number of defects found during system and acceptance testing, comparing these against the lines of code produced. This can help you to (roughly) compare a team against industry norms, and understanding both the number and severity of the outstanding defects will help to assess whether software is ready for release.
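The defect-density comparison mentioned above is simple arithmetic; a sketch follows. The figures are invented, and published “industry norm” numbers vary widely by source, so comparisons should be made with care.

```python
# Defect density sketch: defects found per 1,000 lines of code (KLOC).
# The figures below are hypothetical examples, not real project data.
def defects_per_kloc(defects_found, lines_of_code):
    """Defects normalized per thousand lines of code."""
    return defects_found / (lines_of_code / 1000)

# E.g., 45 defects found in testing a 30,000-line product:
print(defects_per_kloc(45, 30_000))  # -> 1.5
```

The absolute number matters less than the trend and the severity mix – a product with 1.5 low-severity defects per KLOC is in very different shape than one with 1.5 data-corrupting defects per KLOC.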

How much of the product is being tested through regularly-executed, automated testing? Did manual testing adequately cover the remaining bases, or are there gaps? This will help inform you about the readiness of the software, providing a confidence factor when examining the outstanding bugs. A high percentage of test coverage will leave you feeling confident that you have surfaced most of the critical problems, whereas a low percentage of test coverage should make you feel uneasy because the product hasn’t been tested thoroughly enough.

The metrics presented here are geared towards providing as much flexibility in work as possible for software teams, keeping bureaucracy reduced to a minimum. My objective is to manage high-performing teams who understand what needs to be done, when, and why. This allows the teams to define and manage their own work without the need for rigid processes, which in turn is motivational for all concerned and allows our organization to be adaptable to changing circumstances.

This does require hands-on attention and active management. Expectations must be clear and understood, and the performance of individuals and teams must be constantly assessed and dealt with – which includes making time to address performance issues and to reward great work (something that gets overlooked too often because as managers we tend to focus on problems to a fault).

I’m very curious about the reactions that any readers may have!

The Challenge of Metrics

August 1, 2009

Now that I’ve articulated my model of software development (the DNA of Software Development), I feel that it is important to discuss the subject of metrics, examining what is essential to measure in order to understand how well any software development organization is performing.

There are a number of challenges when it comes to metrics and software development. It is all too easy to define a large number of metrics to “get the full picture,” which in turn drives a lot of administrative overhead, increasing costs and adding time and frustration to those involved. Another danger is to use metrics inappropriately, such as gauging a programmer’s job performance on a single metric such as the number of lines of code “produced.”

The holy grail is to measure the output of teams and individuals using completely objective and quantifiable criteria. Unfortunately, a generic software widget that can be readily defined and used for comparison doesn’t exist. The type of project, the relative priority, and the people involved introduce a high degree of variability.

As a manager, I strive to maintain consistent expectations and evaluate job performance using the same yardstick for everyone, but the problem of objectively and quantifiably comparing software “output” between different people or project teams remains elusive.

There are metrics that are common in software, such as lines of code (LOC) and Function Points. These are good for gauging the relative size of a project, or examining how well you and your team compare against others in the industry – such as comparing your defect rate per 1K LOC against the industry norm. What they aren’t good for is measuring – and helping everyone understand – how well teams are doing right now.

To drive the point home about the variability that people alone introduce, does this phrase sound familiar? “I put my best people on my most difficult problems.” When those challenging projects or problems come around – when you absolutely need to succeed, and need to do so under tight deadlines – who do you select?

Given a choice, would you staff a critical project with junior programmers and hope that they pull a rabbit out of the hat, or, given the option, would you assign at least a couple of senior programmers along with a few junior programmers to the project – the senior programmers who have demonstrated that they are motivated, talented, and capable of delivering results? Do you even need to consider this for more than a couple of seconds?

Expectations – and pay – are very different for the senior and junior programmers as well, aren’t they? A typical software organization is comprised of different people who are evaluated and paid at different levels. Not only that, there will be differences in strengths, weaknesses, individual preferences, communication styles, and experiences that everyone brings to the table.

Variability is also introduced with the type of project assigned to one project team in contrast to another. For example, adding well-defined features to an existing product is far simpler than a project where some exploration is being undertaken to create a new product to meet what is essentially a broader, looser, unmet demand in the marketplace. Can there be accurate, objective comparisons between these very different projects in terms of output?

The challenge of defining essential metrics is that I want them to be as simple as possible, to require little administrative overhead, to be tied to the critical aspects of producing software, and to accurately reflect the state of progress and readiness of the software being developed.

What are these metrics? Next post, I’ll delve into the topic!