Last week someone asked some Agile coaches for their thoughts on metrics. It occurred to me that I’ve got some feelings too.
I distrust metrics. More accurately, I distrust what people — myself included — tend to do with metrics not only by default, but also in spite of any better intentions we might set out with. It doesn’t matter what we say a number is supposed to be for: people will decide (and re-decide) for themselves. Regardless of what we wish, expect, or can even observe, a metric is “for” whatever behaviors it produces. In other words:
The purpose of a system is what it does.
Metrics are tools. Tools have affordances. Affordances influence behavior. Most metrics have untidy affordances.
Not on my watch!
When I’m asked for metrics, I find myself feeling oppositional. I may ask questions like “Which decisions will this help you make?”, but I’m not asking to clarify intent and explore the problem. I’m asking to be passively aggressive, to be obstructive, to make myself a hurdle to be cleared before one more development team gets measured in one more useless, counterproductive, or spirit-crushing way. As I like to say:
Careful what you measure, because you’ll get it.
(I’m smart enough not to track how many times I quote myself.)
There are a few problems with my strategy.
Who needs my approval?
Nobody. I don’t have to agree that a given metric is a good idea for it to get measured and observed.
Who’s measuring whom?
My reaction is maybe a bit parental, triggered by the prospect that people with more power might use it to do harm to people with less. But sometimes teams have their own reasons to measure themselves, as an input to their own satisfaction with their own work. When that’s what’s happening, and we can make sure nobody else will ever get to see our numbers, I’m happy to join in clarifying intent and exploring the problem.
Who’s got a better idea?
Just because I think my idea is better doesn’t mean anyone else does.
For instance, maybe I get on my soapbox and claim that we’ve already got some metrics. What?!? Sure, we’ve already got some idea of how much it’s costing us to do stuff, how much risk we’re taking, how much value we’re delivering, and how much people feel like what we’re doing is worth continuing to do. Aren’t these the questions we’re motivated to answer better? And if so, how about we look for ways to get better answers before we pose new questions?
Maybe that’s convincing, maybe it isn’t. If we’re gonna take on new metrics, I try to improve the likelihood that we’ll pay attention to them and act on what we notice.
Idea: metrics come with expiration dates
Just because I think adding a metric constitutes an experiment to see whether it produces net-desirable behaviors doesn’t mean anyone else does. Because I believe they’ll come around to my way of thinking once they see for themselves, I suggest a behavioral hack. It’s a rule:
Every new metric we add must be accompanied by an expiration date.
Could be two retrospectives from now, if we’re doing iterations. Could be three months from now, if we’re not comfortable with running experiments.
The expiration date is a means to an end. The ends are affordances to:
- Remember to observe, and
- Stop if we want.
By the time the expiration date rolls around, we might not remember why we started tracking the metric. That would itself be an observation worthy of reflection, but it’s also an affordance to retain the metric out of inertial uncertainty. The expiration date provides a counter-affordance: because we made that aspect of our original intent clear, it’s safe to drop the metric.
If we remember what else we originally intended, and/or it seems to be serving us well, we can choose a new date and renew our subscription. Or try a new variation.
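If it helps to make the rule concrete, here’s a minimal sketch in Python. Everything in it is my own hypothetical illustration, not anything any team actually built; a column of sticky notes with dates on them serves the same ends.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical names throughout; the point is only that a metric
# carries its intent and its expiration date with it.
@dataclass
class Metric:
    name: str
    why: str       # the original intent, written down while we still remember it
    expires: date  # no expiration date, no metric

def due_for_review(metrics: list[Metric], today: date) -> list[Metric]:
    """Metrics whose expiration has arrived: observe, then renew or drop."""
    return [m for m in metrics if today >= m.expires]

# Example: a metric set to expire two retrospectives (say, four weeks) out.
cycle_time = Metric(
    name="cycle time",
    why="see whether smaller stories actually ship sooner",
    expires=date(2024, 1, 1) + timedelta(weeks=4),
)

print([m.name for m in due_for_review([cycle_time], date(2024, 2, 1))])
# → ['cycle time']
```

Renewing the subscription is then just assigning a new `expires` date, with the `why` field there to remind us what we meant to observe.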
In short, I find most grasping for metrics to be a reliable metric for lack of understanding of human behavior, not only that of those who would be measured but that of those who would do the measuring.
If a higher-up wants a metric about a team, say, as an input to their judgment about whether the team’s work is satisfactory, oughtn’t there be some other way to tell?
And if I choose nearly any metric on someone else’s behalf, doesn’t that reveal my assumption that I know something about how they do their good work better than they do?
Or worse, that I prefer they nail the metric than do something as loose and floppy as “good work”?
Let’s try that again
New metric (expiration = next subhead, privacy = public): I’m 0 for 1 on satisfying conclusions to this post.
I’m hardly an expert on human behavior. If I were one, rather than being passive-aggressive and obstructive, I’d have a ready step to suggest to metrics-wanters, one that they’d likely find more desirable than metrics.
Instead I have to talk myself down from passo-aggro-obstructo, by which time they’ve chosen what they’ll observe and the ready step I can offer is limited to encouraging them to observe the effects of their observation.
Can you give me some better ideas?
I’m much more comfortable with the latest conclusion, especially if you’ll give me some ideas. I’ll call it 1 for 2 and let the metric expire here.
Chris Freeman writes in:
You might add Deming’s Out of the Crisis as a resource to your post, since he writes about how the folks on the floor are in control of their own improvements, how metrics are hard, and how surprising “normal” can be.