I've written of my love for Test-Driven Development and for ikiwiki. Love is awfully abstract, though. Here's a concrete example.

Some context

Ikiwiki is a wiki compiler. It transforms easy-to-write input files into easy-to-browse output files.

Since a wiki is open for editing, it's important to be able to see what changed recently. Typical wikis provide their own built-in way to view and compare a limited number of revisions; ikiwiki instead integrates with many popular revision control systems.

The integration is deep. Write in the browser, and saving an edit means making a commit. Write in the terminal, and committing to the repository means regenerating output files.

I developed and contributed ikiwiki's CVS integration a few years ago. It's been in production use ever since.

The bug

On cvs commit, the commit itself succeeded and the affected output was regenerated, but with a warning:

Use of chdir('') or chdir(undef) as chdir() is deprecated at /usr/pkg/lib/perl5/vendor_perl/5.16.0/File/chdir.pm line 45.

How it got through

The warning was introduced in Perl 5.12. I had written the code under Perl 5.10. (The warning appeared harmless, and I was busy, so Perl got all the way to 5.16 before I could deal with it.)
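If you want to see the deprecation for yourself, here's a minimal sketch. None of this is ikiwiki's actual code; it just pokes Perl's chdir builtin directly, the same builtin File::chdir was calling on ikiwiki's behalf. On perls in the deprecation window, chdir('') and chdir(undef) still meant "change to the home directory" and warned about it; newer perls (5.24 onward, if I recall correctly) removed that fallback, so the call simply fails instead.

```shell
# Sketch only; not ikiwiki code. Exercises Perl's chdir builtin directly.
# On perls in the deprecation window, each call warns on stderr.
# On newer perls (where the old "chdir to $HOME" fallback is gone),
# the call fails and our own warn fires instead.
perl -we 'chdir("")    or warn "chdir(q{}) failed: $!\n"'
perl -we 'chdir(undef) or warn "chdir(undef) failed: $!\n"'
```

Either way, something lands on stderr, which is exactly how the bug announced itself in production.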

How I came to understand it

  1. Reproduced the problem by hand.
  2. Extended my tests to automatically reproduce the problem.
  3. Tried to reach understanding through thinking.
  4. Succeeded at reaching understanding through debug printf()s.

How I fixed it

  1. Tried some things that made the tests look good.
  2. Kept the simplest one.

How it'll stay fixed

If this were a canonical example of bugfixing with TDD, I would have written a test that fails if the warning appears and succeeds otherwise. Once the bug was fixed, that test would have magically become a regression test.

This isn't a canonical example of bugfixing with TDD. The bug manifested with ikiwiki as a post-commit hook, which my tests hadn't been covering. It took me a while to figure out how to wire up the needed integration in my tests. So when I started seeing the warning as a side effect of other tests, that was good enough for me at the time.

Writing these words spurred me to revalidate my previous judgment. I just tried reverting my fix and running the tests. As long as I don't run them verbosely — and I usually don't — the warning that reappears is very hard to miss. Still, I didn't write, and now don't have handy, a purpose-specific test that unambiguously fails if stderr matches the warning. That would be an obvious boon to future developers who don't know what I currently know (a set that includes me) or who don't always run the tests the way I mostly do (also a set that includes me), as well as a boon to the didactic value of this real-world example.


You can see that I spent some time:

  • Improving automated tests
  • Fixing a bug

Not pictured

No code artifact can show you how much time I saved by:

  • Making the computer tell me instantly whether I've solved the problem yet
  • Not needing to settle for a less elegant fix
  • Almost certainly never seeing this bug again


As I've learned to expect, the up-front cost of making tests help me was nowhere near the benefit, especially over the long lifetime of production code. I'm pretty sure I got done faster (and better). If you write code for a living and you don't believe me, you owe it to yourself to question the basis of your belief.