There’s a link in the sidebar where you can buy me a fancy coffee, but I’d much rather treat you, dear reader, to the beverage of your choice. Here are some upcoming chances for us to do that:

Boston, April 19-21: Agile Alliance Technical Conference

New York City, April 28-30: Agile Coach Camp

New York City, May 1: Big Apple Scrum Day

Ann Arbor, May 4-5: Agile and Beyond

Hope to see you. If not one of these, then another time.

Posted Thu Mar 16 09:58:48 2017

When solving a problem, we often take advantage of known solutions. For sufficiently small and repeatable problems, we can buy solutions at the store. Usually they don’t solve the entire problem all by themselves. Bleach doesn’t clean the toilet; I do, with the help of bleach and other tools.

In software problem-solving parlance, bleach is a “dependency”. It doesn’t have to be. I’m free to try to solve my problem using some other product. But for as long as I believe bleach is what I need — maybe because it gets me there sooner and more reliably than anything else I can think of — then I’m depending on it.

I’m also free to try to solve my problem using no products whatsoever. It might sound like an unqualified bad decision to get in there with my fingernails. But what if bleach and scrub brushes haven’t been invented yet? What if they have, but the store can’t reliably stock them? What if they can, but I can’t reliably get to the store or pay for it? Or what if cleaning toilets is my core business, and my main differentiator is artisanal hand-maintained porcelain?

Whether to add a new dependency, and if so which, are two of the many, many decisions we make every day in software development. We can have reasons for generally preferring to keep the total number small. But however small that is, it will never reach zero. For me to clean the toilet depends, at minimum, on whatever it takes to keep me alive, able, and present. And on the privilege of having a toilet.

Minimize the number

One Agile strategy for reducing dependency risk is to do the simplest thing that could possibly work. Sometimes that can mean copy-pasting a function from Stack Overflow instead of depending on a third-party library for it.

Another Agile strategy for reducing dependency risk is to maximize the amount of work not done. Sometimes we can deliberately decide that a sub-problem isn’t worth solving.

The only way to entirely avoid dependencies is to entirely avoid solving problems.

Then we’d need new jobs, which would be a problem. Sort of a circular dependency.

We need techniques for living with dependencies more effectively.

A dependency isn’t just one more line in a file. It’s

  • An expected interface (adapter pattern)
  • Noticing when it changes (contract tests)

And also all of its transitive dependencies — dependencies can have dependencies too — as Amber Conville reminded me.

(Amber is the force behind Self.conference, which I’ve spoken at and attended and which just posted its list of sessions and speakers. If you’re a human involved in the making of software, I highly recommend it, and I can give you a code good for 10% off your ticket.)

In the moment where we decide to add one more dependency, it could look like just one more line in a file specifying names and minimum versions. In the moment where we finish the feature and get it out the door, that could feel like enough.

Dependency now, dependency later

We may think we’ve chosen to depend on something as it is today. That’s true.

We may think we can cheaply remove or replace it tomorrow if we have a better idea. That might be true too.

We may even think we can cheaply remove or replace it in a few months if need be. That’s a lot less likely to be true.

Why? If we’re not careful, our expectations of it will be dispersed throughout our code. If it then changes in a way that fails to meet our expectations, we’ll have to be expensively careful to adjust to it. If our own expectations need to change, we’ll have to be expensively careful about how we change them. And if we hadn’t been careful, the way we’ll find out we suddenly need to be expensively careful might well come at an expensive time: in production.

Maximize control

In software, when we are careful, we can arrange to reduce our dependency risk:

Adapter pattern: Define the calls we want to be able to make, then implement them by backing our interface with the dependency. Lets us adjust faster when something changes. (more on Wikipedia)

Contract tests: For each call our adapter makes to the dependency, write automated tests for the behavior we’re relying on. Tells us sooner when adjustments must be made. (more at Martin Fowler’s Bliki)

I follow similar reasoning when I update all the dependencies on my server every week. The tool I use for this recently underwent a major upgrade. So last week I did my build on Friday, a little early. Over the weekend, I made the fewest possible changes to bring forward my existing configuration, did a build with the new tool, found a regression, did another build, went to production, and reported the fix. If it hadn’t worked, I’d still have had another week to figure it out or revert without interrupting my cadence. But since it did work, my dependency on the build tool has been safely managed through some breaking changes. Next week, if I feel like it, I can see about taking advantage of some of the new features.

Given these techniques, another Agile risk-mitigation strategy clicks into place: If it hurts, do it more often. Figure out how to get notified when any of your dependencies have been updated. When you get a notification, before you do anything else, update your code to use the new version. If tests break, before you do anything else, fix them. If it’s not clear how to do that, before you do anything else, raise the risk with your team. By uncovering the dependency problem as early as possible, you’ve maximized your options for handling it well.

Once your tests are green, ship it as soon as you can. If a dependency problem somehow slips past your tests into production, it’ll be relatively easy to find, because you’ve narrowed the search space considerably. Roll back first, if you have to. Then test-drive a bugfix and ship it again.

The goal here, given that surprises are inevitable, is to control the influx of entropy into your system. Track updates and put out releases less frequently, and the surprises get larger, take longer to track down, and offer fewer, more expensive options for resolution. Or reduce batch size to get the opposite effects.

Sorry, I only understand toilets

No problem. To keep your delivery schedule from getting clogged, flush regularly.

Here’s how I documented my reasoning for a product where I was both product manager and tech lead:

Why release every month?

  1. Each release contains less total change. Why this matters:
    • Code change is risky. Smaller increments of change help manage the risk.
    • If a bug survives into production, finding it is easier, so fixing it is faster.
    • If a feature doesn’t meet expected requirements, customers will complain earlier, so fixing it can happen earlier.
  2. The next release is always soon. Why this matters:
    • Small features (or bugfixes) don’t have to wait long to get into customers’ hands.
    • Big features can only be delivered via composable solutions, implemented one tractable piece at a time.
    • Each change can be tested well because it’s small. Each change must be tested well because it’s about to ship.
    • The master branch is always production-ready. We can always ship what we have right now.
  3. The previous release is always recent. Why this matters:
    • Release deployment is risky. More frequent practice — and being able to remember what went wrong last time — helps manage the risk.

Why skip a month?

  1. If Operations is fully booked on other product releases and doesn’t have someone available.
  2. If up- or downstream systems are changing and it’s too risky to change ours at the same time.
  3. If a particular big feature simply can’t be decomposed into month-sized chunks of work. (This is very rare.)

When do you notify Operations of new dependencies to package?

By Thursday afternoon, the day before release day, we have a complete or near-complete list. In general, we try to declare each codebase’s dependencies in one place so that we can simply diff it against the previous release to see everything that’s new (updated counts as new). Then we list those dependencies on the release’s wiki page and notify our local Operations representative.

Occasionally a last-minute code change will add another dependency or two. Asking Operations to build one or two more last-minute packages isn’t terrible if we don’t do it often. More packages than that means the change probably isn’t a smart last-minute choice.

Why are you always upgrading to the latest available dependencies?

Because we depend on them.

Less elliptical answer: because they’re code we rely on but don’t control. Therefore we’re especially susceptible to changes in them. Therefore we minimize our exposure by staying up to date whenever possible, giving us the easiest possible rollback option when (inevitably) an unexpected problem occurs.

See also “Why release every month?”

Another dependency inversion principle

GeePaw Hill likes to say The code works for me, I don’t work for the code. If you’d like to put dependencies fully at your service — and not the other way around — I invite you to join me next month for my hands-on workshop at the Agile Alliance Technical Conference.

Posted Wed Mar 8 13:11:42 2017

I’m spending time on…

Converting intentions about health back to habits.
Making myself more useful to my teammates.
Incorporating feedback from recent workshops and talks.
Starting to think about upcoming ones.

I’m looking forward to…

Reaping health benefits from health habits.
Deepening relationships.

I’m traveling to…

Late March: Des Moines.

Otherwise: see my speaker page.

What’s this?

It’s a /now page. There’s a directory of people with /now pages, and I’m listed there.

Posted Sun Mar 5 20:08:17 2017

At the second annual PillarCon, I facilitated a workshop called “Fundamentals of C and Embedded using Mob Programming”. On a Mac, we test-drove toggling a Raspberry Pi’s onboard LED.

Before and after

Before: ACT LED off
After: ACT LED on


Noteboard: takeaways from Fundamentals of C and Embedded

Here are the takeaways we wrote down:

  • Could test return type of main()
  • Why wasn’t num_calls 0 to begin with?
  • Maybe provide the mocks in advance (maybe use CMock)
  • Fun idea: fake GPIO device
  • Vim tricks! Cool
  • But maybe use an easier editor for target audience
  • Appropriate amount of effort; need bigger payoff
  • Mob programming supported the learning process/objective

My own thoughts for next time I do this material:

  • Keep: providing multi-target Makefile and prebuilt cross compiler
  • Try: providing the mocks in the starting state
  • Keep: being prepared with a test list
  • Try: using a more discoverable (e.g., non-modal) text editor
  • Keep: being prepared with corners to cut if time gets short
  • Try: providing already-written test cases to uncomment one at a time (one of the aspects of James Grenning’s training course I especially loved)
  • Keep: mobbing
  • Try: knowing more of the mistakes we might make when cutting corners
  • Keep: delivering a visible result
  • Try: turning off the PWR light (if possible) so we can more easily see when the ACT light turns on and off

Participants who already knew some of this stuff liked the mob programming (new to some of them) and appreciated how I structured the material to unfold. Participants who were new to C and/or embedded (my target audience) came away feeling that they needn’t be intimidated by it, and that programming in this context can be as fun and feedbacky as they’re accustomed to.

Play along at home

  1. Install NetBSD 7 on Raspberry Pi
  2. Fetch the NetBSD source tree (or let the Makefile do it for you)
  3. Build the cross compiler and a complete NetBSD for the target system (or let the Makefile do it for you)

Then follow the steps outlined in the README.

Further learning

You’re welcome to use the workshop materials for any purpose, including your own workshop. If you do, I’d love to hear about it. Or if you’d like me to come facilitate it for your company, meetup group, etc., let’s talk.

Posted Sat Feb 18 19:07:16 2017


Give assignments

Since July, when I wrote Part 1, I got married, joined a C-based project, rolled off, joined a new project, and moved house.

One of the recommendations in Part 1 was “take assignments.” The reason I’m finally back for Part 2 is that I’m running a workshop on Saturday. If you’re having trouble making time to learn something, try committing yourself to teach it.

Trust your doubts

I’m supposed to deliver two hours of hands-on “Fundamentals of C and Embedded.” Having worked in C professionally for some weeks — and embedded systems for zero — I certainly don’t know any more than the fundamentals. It could be the case that I know less.

I don’t have to be an expert at C or embedded to deliver what I’ve promised. I do have to be an expert at noticing, of all the things I don’t know, what’s bothering me the most right now. Over and over again.

One question at a time

Even though a Raspberry Pi is a general-purpose computer, in the workshop we’re going to treat it as an embedded system. By that, I mean that even though we could easily develop software directly on the Pi, we’re going to pretend we can’t. Or another way: even though it’s actually reasonably quick and cheap, we’re going to pretend it’s slow and expensive to deploy and test on the “target” system.

As I was preparing the workshop, here’s what I didn’t know that bothered me the most, in order:

  1. Can I write C to drive some visible feature of the Pi? See whether I can get the onboard LED to toggle. Result: it works.
  2. Can I cross-compile some C code? Try adapting Roman Numeral Calculator to optionally target the Pi. Result: it works.
  3. Can I link with one implementation of an interface for the host and another for the target? Yes: the native Mac build prints “gonna toggle the LED” and the Pi build really toggles it.
  4. Can I instruct the Pi build (and not the native Mac build) to depend on that cross compiler, and to rebuild it if it’s missing? Yes (and I’m glad I kept my notes and got it right on the first try).
  5. Can I link with real system calls for the target and custom fakes for the host? Yes, and this supersedes what I learned in (3): with fakes for open(2)/ioctl(2)/close(2), I can get rid of the fake implementation of led_toggle().
  6. Can I now write microtests for led_toggle()? Yes, and in so doing I found and fixed some behavior that didn’t meet my expectations.
  7. Am I now qualified to run a workshop that claims we’ll “test-drive a Raspberry Pi from a Mac”? I believe so.


Am I ready to run the workshop? Not quite. I want to make sure that in our two hours as a mob, test-driving C in Vim, we get the satisfaction of toggling that LED.

How fast we get there depends on how well we test-drive (our company prides itself on this), how well we know C (depends on who shows up), and how well we can deal with Vim (same). It’s possible that our pace won’t be fast enough; I’ll want to have thought about how to speed us up. It’s also possible that we’ll blow right through it; I’ll want to have thought about what more we could do with our time.

I’m still no expert in C or embedded systems, but the state of my knowledge is no longer the bottleneck to delivering a valuable version of this workshop. Wish me luck!

Posted Thu Feb 16 18:56:35 2017