A Manager’s-Eye View of TDD

November 23, 2010

Although we haven’t tried Test-Driven Development (TDD) at my company yet, I expect that sooner or later one of our teams will want to. This post is about sanity-checking my current understanding of TDD with you, and asking those of you experienced with TDD some questions along the way.

Just to be clear: My goal is not to understand Test-Driven Development so that I can mandate it. I want to be prepared so that I can have intelligent conversations with my staff about it as well as have reasonable expectations about what it will take to adopt it.
What is TDD?
Test-Driven Development starts with a test. (Imagine that!) The developer writes a test before implementing any actual code, then writes the simplest code possible to make the test pass. Once the test passes, the developer refactors as needed, writes the next test – which will fail – and then implements the code to make that one pass, and so on. (A minimal sketch of one pass through this cycle follows the list below.) Test-Driven Development provides the following key benefits:
  • Cleaner, more understandable interfaces, because writing the test first requires the developer to consider how the caller will use the code.
  • Good design, because testability is considered up front, not after the fact. Reports from others indicate that this helps keep code loosely coupled.
  • Greater test coverage, which reduces the number of bugs.
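To make the cycle concrete, here is a minimal sketch of a red/green pass. This is my own illustration, not anyone’s production code: the ShoppingCart class and its methods are made up for the example, and I’m assuming JUnit 4.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (red): write a failing test first – it describes how the caller
// wants to use the code before that code even exists.
public class ShoppingCartTest {
    @Test
    public void totalOfEmptyCartIsZero() {
        ShoppingCart cart = new ShoppingCart();
        assertEquals(0, cart.total());
    }

    @Test
    public void totalSumsItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("apple", 150);  // prices in cents
        cart.addItem("bread", 250);
        assertEquals(400, cart.total());
    }
}

// Step 2 (green): the simplest implementation that makes the tests pass.
// The next failing test would drive the next increment of behavior.
class ShoppingCart {
    private int total = 0;

    void addItem(String name, int priceInCents) {
        total += priceInCents;
    }

    int total() {
        return total;
    }
}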
TDD versus Unit Testing
There seems to be some modest confusion about the differences between TDD and unit testing. (Try Googling “tdd unit testing” and you’ll see what I mean.) As I understand it, unit tests are a type of test, and TDD is a practice. TDD is thus an approach to developing unit tests. I’m curious: Does anyone use TDD for more than unit testing?

Unlike unit testing – which is performed after the fact – TDD informs and guides the development of the code that is about to be implemented. This does not, however, make TDD an application design practice. (In my opinion.) TDD provides a perspective on the call-ability and testability of the code that needs to be considered as a part of the design process.

Jeff Langr has an interesting commentary on unit tests that are written after the fact (of coding). He calls them Test-After Development, or TAD – as in a TAD too late. Jeff has some excellent points on why TAD can be a problem:
  • It doesn’t promote a testable design.
  • It appears unnecessary because it occurs after the fact – the code can be tested in context with the rest of the application.
  • There are undisciplined managers who will direct something along the lines of, “Skip the unit tests, we need to ship the software.”
  • And finally: “…the industry has been screaming for a more disciplined approach to software development that can consistently produce higher code quality. Perhaps TDD isn't it, but TAD most certainly ain't it.”
While I’ll agree that these things can be a problem, I think organizations can overcome them and still derive benefit from automated tests, even when the tests are developed after the fact.

A testable design can be implemented and enforced through pair programming (or code inspections, if pair programming is not used). This is particularly true if the automated test is constructed immediately after the code is written, with modest refactoring to support testing if the design wasn’t quite right.
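To sketch the kind of modest refactoring I have in mind – hypothetical names, Java and JUnit 4 assumed – suppose the code under test originally constructed its own mail-sending dependency, making it impossible to verify in isolation. Passing the dependency in through the constructor is often all it takes to make a test-after possible:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The dependency, extracted into an interface so a test can substitute a fake.
interface MailSender {
    void send(String to, String body);
}

// Before the refactoring, ReportService created its sender internally;
// after it, the sender is injected, and nothing else about the logic changed.
class ReportService {
    private final MailSender sender;

    ReportService(MailSender sender) {
        this.sender = sender;
    }

    void emailReport(String recipient, String report) {
        sender.send(recipient, "Report: " + report);
    }
}

// A test written after the fact, using a hand-rolled fake.
public class ReportServiceTest {
    static class FakeMailSender implements MailSender {
        String lastTo;
        String lastBody;
        public void send(String to, String body) {
            lastTo = to;
            lastBody = body;
        }
    }

    @Test
    public void emailsTheReportToTheRecipient() {
        FakeMailSender fake = new FakeMailSender();
        new ReportService(fake).emailReport("boss@example.com", "Q3 numbers");
        assertEquals("boss@example.com", fake.lastTo);
        assertEquals("Report: Q3 numbers", fake.lastBody);
    }
}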

Both developers and managers must be disciplined about software development. Developers shouldn’t use phrases like, “I already proved that the code works, I loaded it up in the app server and tried it myself” (to use one of Jeff’s examples), and managers shouldn’t short-circuit good practices by shoving software out the door before automated tests are developed. Developers and managers must agree with each other that they are going to improve – and then stick to that agreement.

And though Jeff was clear in his opinion that automated tests created “after the fact” are not a way to improve our craft, compared to doing nothing at all they are definitely an improvement. Interestingly, regular re-execution of automated tests is less about finding defects than it is about creating confidence in refactoring efforts, and it definitely helps accelerate the final delivery of quality software. (Tests do play a role in uncovering defects, which I cover in a moment.)

For example, we had one area of code, covered by 30-plus automated tests, where a small enhancement was being added. Our developers added the enhancement and reported that the automated tests found one defect. Small potatoes, but a nice catch. More importantly, those tests gave our developers a high degree of confidence in their refactoring effort, and that confidence, combined with the tests being automated, accelerated the delivery of the software. Manually testing code that the developers were uncertain of would have taken much longer.

This brings me back to the point about how automated tests help to uncover defects. It’s a question of when defects are detected, and our experience echoes what is reported in an article, Observations and Lessons Learned from Automated Testing by Stefan Berner, Roland Weber, and Rudolf K. Keller: a majority of the defects uncovered by automated tests are found during the development of those tests. Re-executing automated tests infrequently reveals new defects; instead, it proves that the quality of the code has been maintained.

Instead of being a TAD too late, my stance is that creating automated tests after development is a case of being better late than never, and far superior to doing nothing at all.

I can’t argue with Jeff’s final point, though. Jeff is right: we do need to discover ways of advancing our craft, and I agree that TDD is worth a try. TDD strikes me as the superior practice in that it facilitates the development of automated tests, taking much of the internal discipline required to create tests after the fact out of the equation. And it forces you to create testable software as it is being developed, rather than making adjustments after the fact – and fighting a mental uphill battle because the software is “done” but not capable of being tested in an automated fashion.

TDD and Pair Programming
I went through a short training exercise in which TDD and pair programming were combined. (This was not a TDD class; we were actually in Product Owner training.) One developer created the test, and the other was responsible for implementing the code to make the test pass. Reflecting on this got me thinking: What does this do to the driver/navigator approach of “classic” pair programming? Is the test-writer role combined with the navigator’s? Does anyone have any experience with this?

My take is that more often than not, developers will use the alternative technique of pair programming that I discussed in my post, Results through Pair Programming, where developers tend to work on the same aspects of a problem, switching between tactical and strategic issues together.

My Takeaways…
As a manager whose organization hasn’t adopted TDD yet, my takeaways are as follows:
  • TDD is a practice that facilitates and ensures the development of automated unit tests.
  • TDD can help overcome the organizational inertia and lack of discipline that undermine creating automated tests “after the fact.”
  • Adopting TDD is a change, and it will need to be supported: formal training, coaching, and time to experiment will be essential. I’m certain that TDD on real-world code will be more difficult than the training exercises.
  • Patience and time will be required to provide the necessary practice and development of skills that will allow TDD to take hold.
  • We’ll definitely need to share experiences between developers and teams.

Reading That I’m Targeting
I’ve gleaned everything that I’ve written about from various blog posts and articles that I’ve read over time. I haven’t read books about TDD, but I’m planning on reading the following:

Test Driven Development: By Example by Kent Beck

And since we have a fair amount of legacy code to contend with:

Working Effectively with Legacy Code by Michael Feathers

Does anyone have other suggestions or thoughts about these books?

In Conclusion...
I’ve covered what I believe TDD to be, what I think I need to consider as a manager, and what I think I need to read to obtain a deeper understanding of TDD. I’ve asked a few questions along the way for those who have experience in TDD, and I’ll ask a final question: Is my understanding of TDD and automated testing off the mark?