This is the sixth in a series of “Nifty and Minimally Invasive qmail Tricks”, following
- qmail + SMTP AUTH + SSL + TLS - patches
- qmail + badrcptto - patches
- qmail + NetBSD nightly maintenance
- qmail + IMAP-before-SMTP
- qmail + spam filtering
Losing services (and eventually restoring them)
When my Mac mini’s hard drive died in the Great Crash of Fall 2008, taking this website and my email offline with it, I was already going through a rough time, and my mental bandwidth was extremely limited. I expended some of it explaining to friends what they could do about their hosted domains until such time as my brain became available again (as I assumed and/or hoped it eventually would). I expended a bit more asking a friend to do a small thing to keep my email flowing somewhere I could get it. And then I was spent.
The years when I used Gmail and had no website felt like years in the wilderness. That feeling could mostly have been about how I missed the habit of reflecting about my life now and again, writing about it, and sharing. But when the website returned four years ago (in order to remember Aaron Swartz), the feeling didn’t go away. All I got was a small sense of relief that my writings and recordings were available and that I could safely revive my old habit. After a year and a half of reflecting, writing, and sharing, the feels-needle hadn’t rebounded much further.
It was only after painstakingly restoring all my old email from Mail.app’s cache, moving it up to my IMAP server, carefully merging six years’ worth of Gmail into that, accepting SMTP deliveries for my own domains again, and not needing Gmail at all for several weeks that I noticed my long, strange sojourn had ended.
If it so happened that I’d instead fixed email first, I’d also have felt a tiny bit weird till my website was back. But only a tiny bit. When my web server’s down, you might not hear from me; when my mail server’s down, I can’t hear from you — or, as happened in 2008, from my professors during finals week. So while web hosting can be interesting, mail hosting keeps me attached to what it feels like to be responsible for a production service.
Keeping it real
I value this firsthand understanding very, very highly. I started as a sysadmin, I’m often still a developer, and that’s part of why I’m sometimes helpful to others. But since I’m always in danger of forgetting lessons I learned by doing it, I’m always in danger of being harmful when I try to help others do it.
As a coach, one of my meta-jobs is to remind myself what it takes to know the risks, decide to ship it, live with the consequences, tighten the shipping-it loop until it’s tight enough, and notice when that stops being true.
And that’s why I run my own mail server.
What’s new this week
My 2014 mail server was configured just about identically to my 2008 one, so it was handy to consult the earlier articles in this series.
Then, recently, my weekly build broke on the software I’ve been using to send mail. It was a trivial breakage, easy to fix, but it reminded me about a non-trivial future risk that I didn’t want hanging over my head anymore. (For more details, see my previous post.)
Now I’m sending mail another way. Clients are unchanged, the server no longer needs TMDA or its dependencies, and I no longer have a specific expectation for how this aspect of my mail service will certainly break in the future. (Just some vague guesses, like a newly discovered compromise in the TLS protocol or OpenSSL’s implementation thereof, or STARTTLS or Stunnel’s implementation thereof.)
A couple iterations
First, I tried the smallest change that might work: replacing tmda-ofmipd with the original ofmipd (by the author of qmail, the software around which my mail service is built), wrapped in SMTP AUTH by spamdyke (a new use of an existing tool), wrapped in STARTTLS by stunnel (as before). No more TMDA.
Next, I tried a change that might shorten the chain of executables: an update to the pkgsrc package providing ofmipd added a build-time option to apply John R. Levine’s SMTP AUTH patch. I did a build with the option enabled, and observed that I no longer needed the second instance of spamdyke.
The new settings in /etc/rc.conf to start a mail submission service on localhost port 26:

    qmailofmipd=YES
    qmailofmipd_datalimit="160000000"
    qmailofmipd_postofmipd="'' `cat /etc/qmail/control/me` /usr/pkg/bin/checkpassword true"
And the corresponding /etc/stunnel/stunnel.conf to make the service available on the network:

    [submission]
    accept = submission
    connect = localhost:26
    protocol = smtp
I’m still relying on spamdyke for other purposes, but I’m comfortable with that. I’m still relying on stunnel for STARTTLS, but I’m relatively comfortable keeping OpenSSL contained in its own address space. I think this configuration is good enough for the time being.
Want to learn to see the consequences of your choices and/or help other people do the same? Consider productionizing something important to you.
Last night, mere moments from letting me commit a new package of Test::Continuous (continuous testing for Perl), my computer acted as though it knew its replacement was on the way and didn’t care to meet it. This tiny mid-2013 11” MacBook Air made it relatively ergonomic to work from planes, buses, and anywhere else when I lived in New York and flew regularly to see someone important in Indiana, and continued to serve me well when that changed and changed again.
The next thing I was planning to do with it was write this post. Instead I rebooted into DiskWarrior and crossed my fingers.
Things get in your way, or threaten to. That’s life. But when you have slack time, you can
- Cope better when stuff happens,
- Invest in reducing obstacles, and
- Feel more prepared for the next time stuff happens.
Having enough slack is as virtuous a cycle as insufficient slack is a vicious one.
Paying down non-tech debts…
Last year I decided to spend more time and energy improving my health. Having recently spent a few weeks deliberately not paying attention to any of that, I’m quite sure that I prefer paying attention to it, and am once again doing so.
Learning to make my health a priority required that I make other things non-priorities, notably Agile in 3 Minutes. It no longer requires that. I’ve recently invested in making the site easier for me to publish, and you may notice that it’s easier for you to browse. I didn’t have enough slack to do these things when I was writing and recording a new episode every week. Now that enough of them have been taken care of, I feel prepared to take new steps with the podcast.
…And tech debts
Before that, I inspected and minimized the differences between “dev” (my laptop) and “prod” (my server, where you’re reading this), updated prod with the latest ikiwiki settings, and (because it’s all in Git) updated dev from prod.
In so doing, I observed that more config differences could be easily harmonized by adjusting some server paths to match those on my laptop. (When Apple introduced System Integrity Protection, pkgsrc on Mac OS X could no longer install under /usr, and moved to /opt/pkg. Since I have an automated NetBSD package build, I can easily build the next batch for /opt/pkg as well, retaining /usr/pkg as a symlink for a while. So I have.)
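The retained compatibility path is just a symlink pointing at the new location. In miniature, with scratch directories standing in for /opt/pkg and /usr/pkg (all names here are illustrative, not my real layout):

```shell
# keep an old path name alive as a symlink to a new prefix
set -eu
cd "$(mktemp -d)"
mkdir -p opt-pkg/bin
printf 'hello\n' > opt-pkg/bin/tool
ln -sfn opt-pkg usr-pkg    # old path name, new contents
cat usr-pkg/bin/tool       # the old path still resolves
```

Anything still hardcoded to the old prefix keeps working while I migrate at leisure.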
I’ve been running lots of these builds in the past week anyway, because a family of packages I maintain in pkgsrc had been outdated for quite a while and I finally got around to catching them up to upstream. Once they built on OS X, I committed the updates to the cross-platform package system, only to notice that at least one of them didn’t build on NetBSD. So I fixed it, ran another build, saw what else I broke, and repeated until green.
…And taking on patience debt telling you about more of this crud
Due to another update that temporarily broke the build of TMDA, I was freshly reminded that it’s a fairly big liability in my server setup. I use TMDA to send mail, which is not mainly what it’s for; I never got around to using it for what it is for (protecting against spam with automated challenge-response); it hasn’t been maintained for years; and it’s stuck needing an old version of Python.
On the plus side, running a weekly build means that when TMDA breaks more permanently, I’ll notice pretty quickly. On the minus side, when that happens, I’ll feel pressure to fix or replace it so I can (1) continue to send email like a normal person and (2) restart the weekly build like a me-person. If I can reduce the liability now, maybe I can avoid feeling that pressure later.
Investigating alternatives, I remembered spamdyke, which I already use for delaying the SMTP greeting, blacklisting senders from a DNS blacklist as well as To: addresses that only get spam anymore, and greylisting mail from unknown senders. It can also provide SMTP AUTH. So I’ll try replacing tmda-ofmipd with a second instance of spamdyke. If that’s good, I’ll remove TMDA from the list of packages I build every week, build spamdyke with OpenSSL support, and try letting it handle the TLS encryption directly. If that’s good, I’ll remove security/stunnel from the list of packages too, leaving me at the mercy of fewer pieces of software breaking.
Leaning more heavily on Spamdyke isn’t a clear net reduction of risk.
When a bad bug is found, it’ll impact several aspects of my mail service.
And if and when NetBSD moves from GCC to Clang, I’ll have to add
lang/gcc to my list of packages and instruct pkgsrc to use it when
building Spamdyke, or else come up with a patch to remove Spamdyke’s use
of anonymous inner functions in C.
(That could be fun. I recently
started learning C.)
I could go on, but I’m a nice person who cares about you.
That’s enough of that. So what?
All these builds pushing my soon-to-be-replaced laptop through its final paces as a development machine might have had something to do with triggering its misbehavior last night. And all this work seems like, well, a lot of work. Is there some way I could do less of it?
Yes, of course.
But given my interests and goals, it might not be a clear net improvement.
For instance, when someone drew my attention to that Test::Continuous Perl module, being a pkgsrc developer gave me an easy way to uninstall it if I wound up not liking it, which meant it was easy to try, which meant I tried it. I’ve arranged conditions in my life to favor trying things. So I’m invested in preserving and extending those conditions. In one formulation, I’m aiming to maximize the area under the curve.
No new resolutions, yes new resolvings
I’m not looking to add new goals for myself for 2017. I’m not even trying to make existing things “good enough” — there are too many things, and as a recovering perfectionist I have trouble setting a reasonable bar — I’m just trying to make them “good enough enough” that I can expect small slices of time and attention to permit small improvements.
- When conditions are expected to change, smaller batch size helps us adjust.
- Reducing batch size takes time and effort.
Paying down my self-debts (technical and otherwise) feels like resolving. I have, at times, felt quite out of position at managing myself. Lately I’m feeling much more in position, and much more like I can expect to continue to make small improvements to my positioning.
When you want the option to change your body’s direction, you take smaller steps, lower your center, concentrate on balance. That’s #Agile.
My current best understanding is that a balanced life is a small-batch-size life. If that’s the case, I’m getting there.
This coming Monday, I’ll be switching to one of these weird new MacBook Pros with the row of non-clicky touchscreen “keys”. If my current computer survives till then, that’ll be one smooth step in a series of transitions. (In other news, Bekki defends her dissertation that day.)
The following Monday, I’ll be starting my next project, a mostly-remote gig pairing in Python to deliver software for a client while encouraging and supporting growth in my Pillar teammates. I’ll be in Des Moines every so often; if you’re there and/or have recommendations for me, I’d love to hear from you.
The Monday after that, we’ll pack up a few things the movers haven’t already taken away, and our time in Indiana will come to an end. We’re headed back to the New York area to live near family and friends.
No resolutions, yes intentions
For 2017, I declare my intentions to:
- Continue to improve my health and otherwise attend to my own needs
- Help more people understand what software development work is like
- Help more people feel heard
I hope to see and hear you along the way.
Last week someone asked some Agile coaches for their thoughts on metrics. It occurred to me that I’ve got some feelings too.
I distrust metrics. More accurately, I distrust what people — myself included — tend to do with metrics not only by default, but also in spite of any better intentions we might set out with. It doesn’t matter what we say a number is supposed to be for: people will decide (and re-decide) for themselves. Regardless of what we wish, expect, or can even observe, a metric is “for” whatever behaviors it produces. In other words:
The purpose of a system is what it does.
Metrics are tools. Tools have affordances. Affordances influence behavior. Most metrics have untidy affordances.
Not on my watch!
When I’m asked for metrics, I find myself feeling oppositional. I may ask questions like “Which decisions will this help you make?”, but I’m not asking to clarify intent and explore the problem. I’m asking to be passively aggressive, to be obstructive, to make myself a hurdle to be cleared before one more development team gets measured in one more useless, counterproductive, or spirit-crushing way. As I like to say:
Careful what you measure, because you’ll get it.
(I’m smart enough not to track how many times I quote myself.)
There are a few problems with my strategy.
Who needs my approval?
Nobody. I don’t have to agree that a given metric is a good idea for it to get measured and observed.
Who’s measuring whom?
My reaction is maybe a bit parental, triggered by the prospect that people with more power might use it to do harm to people with less. But sometimes teams have their own reasons to measure themselves, as an input to their own satisfaction with their own work. When that’s what’s happening, and we can make sure nobody else will ever get to see our numbers, I’m happy to join in clarifying intent and exploring the problem.
Who’s got a better idea?
Just because I think my idea is better doesn’t mean anyone else does.
For instance, maybe I get on my soapbox and claim that we’ve already got some metrics. What?!? Sure: we’ve already got some idea of how much it’s costing us to do stuff, how much risk we’re taking, how much value we’re delivering, and how much people feel like what we’re doing is worth continuing to do. Aren’t these the questions we’re motivated to answer better? If so, before we pose new questions, let’s look for ways to get better answers.
Maybe that’s convincing, maybe it isn’t. If we’re gonna take on new metrics, I try to improve the likelihood that we’ll pay attention to them and act on what we notice.
Idea: metrics come with expiration dates
Just because I think adding a metric constitutes an experiment to see whether it produces net-desirable behaviors doesn’t mean anyone else does. Because I believe they’ll come around to my way of thinking once they see for themselves, I suggest a behavioral hack. It’s a rule:
Every new metric we add must be accompanied by an expiration date.
Could be two retrospectives from now, if we’re doing iterations. Could be three months from now, if we’re not comfortable with running experiments.
The expiration date is a means to an end. The ends are affordances to:
- Remember to observe, and
- Stop if we want.
By the time the expiration date rolls around, we might not remember why we started tracking the metric. That would be an observation worthy of reflection, but also an invitation to retain the metric out of inertial uncertainty. The expiration date provides a counter-affordance: it’s safe to drop the metric, because we’ve made that aspect of our original intent clear.
If we remember what else we originally intended, and/or it seems to be serving us well, we can choose a new date and renew our subscription. Or try a new variation.
In short, I find most grasping for metrics to be a reliable metric for lack of understanding of human behavior, not only that of those who would be measured but that of those who would do the measuring.
If a higher-up wants a metric about a team, say, as an input to their judgment about whether the team’s work is satisfactory, oughtn’t there be some other way to tell?
And if I choose nearly any metric on someone else’s behalf, doesn’t that reveal my assumption that I know something about how they do their good work better than they do?
Or worse, that I prefer they nail the metric than do something as loose and floppy as “good work”?
Let’s try that again
New metric (expiration = next subhead, privacy = public): I’m 0 for 1 on satisfying conclusions to this post.
I’m hardly an expert on human behavior. If I were one, rather than being passive-aggressive and obstructive, I’d have a ready step to suggest to metrics-wanters, one that they’d likely find more desirable than metrics.
Instead I have to talk myself down from passo-aggro-obstructo, by which time they’ve chosen what they’ll observe and the ready step I can offer is limited to encouraging them to observe the effects of their observation.
Can you give me some better ideas?
I’m much more comfortable with the latest conclusion, especially if you’ll give me some ideas. I’ll call it 1 for 2 and let the metric expire here.
Chris Freeman writes in:
You might add Out Of The Crisis as a resource to your post, since Deming writes about how the folks on the floor are in control of their own improvements, how metrics are hard, and how surprising “normal” can be.
At Agile Testing Days, I facilitated a workshop called “DevOps Dojo”. We role-played Dev and Ops developing and operating a production system, then figured out how to do it better together.
We wrote down our takeaways on the final “Retrospective” slide.
You’re welcome to use the workshop materials for any purpose, including your own workshop. If you do, I’d love to hear about it.
I’ve spoken at several instances of pkgsrcCon (including twice in nearby Berlin), but that’s more like a hackathon with some talks. Agile Testing Days was a proper conference, with hundreds of people and plenty of conferring. If someone asks whether I’m an “international speaker”, or claims I am one, I now won’t feel terribly uncomfortable going along with it.
I met a fellow WeDoTDD practitioner, Carlos Blé of Codesai. (Here’s their WeDoTDD interview and Pillar’s.) Carlos and I have both relied on Twitter to build our careers. Who knows, maybe we’ll give a talk together about it.
At the Tuesday morning Lean Coffee, I found a bug in myself (not a first).
What I expected from many previous Lean Coffees: I’d have to control myself to not say all the ideas and suggestions that come to mind.
What happened at this Lean Coffee: It was very easy to listen, because I didn’t have many ideas or suggestions, because the topics came from people who were mostly testers.
Conclusions I immediately drew:
- Come to think of it, I have not played every role on a team. I don’t know what it’s like to be a tester. Maybe my guesses about what it’s like are less wrong than some others, but they’re still gonna be wrong.
- This is evidently my first conference that’s more testing than Agile. Cool! I bet I can learn a lot here.
Thanks to Troy Magennis, Markus Gärtner, and Cat Swetel, I decided to try a new idea and spend a few slides drawing attention to the existence and purpose of Agile Testing Days’ Code of Conduct. I can’t tell yet how much good this did, but it took so little time that I’ll keep trying it in future conference presentations and workshops.
My next gig will be remote coaching, centered around what we notice as we’re pair programming and delivering working software. I’ve done plenty of coaching and plenty of remote work, but not usually at the same time. Thanks to Lean Coffee with folks like Janet and Alex Schladebeck, I got some good advice on being a more effective influencer when it takes more intention and effort to have face-to-face interactions.
- Alex: For a personal connection, start meetings by unloading your “baggage” — whatever’s on your mind today that might be dividing your attention — and inviting others to unload theirs. (Ideally, establish this practice in person first.)
- Janet: Ask questions that help people recognize their own situation. (Helping people orient themselves in their problem spaces is one of my go-to strengths. I’m ready to be leaning harder on it.)
As I learn about remote coaching, I expect to write things down at Shape My Work, a wiki about distributed Agile that Alex Harms and I created. You’ll notice it has a Code of Conduct. If it makes good sense to you, we’d love to learn what you’ve learned as a remote Agilist.
I found Agile Testing Days to be a lovingly organized and carefully tuned mix of coffee breaks, efficiency, flexibility, and whimsy. The love and whimsy shone through. I’m honored to have been part of it, and I sure as heck hope to be back next year.
We’d be back next year anyway; we visit family in Germany every December. Someday we might choose to live near them for a while. It occurs to me that having participated in Agile Testing Days might well have been an early investment in that option, and the thought pleases me. (As does the thought of hopping on a train to participate again.)
I’m in Europe through Christmas. I consult, coach, and train. Do you know of anyone who could use a day or three of my services?
One aspect of being a tester I do identify with: being frequently challenged to explain your discipline or justify your decisions to people who don’t know what the work is like (and might not recognize the impact of their not knowing). In that regard, I wonder how helpful Agile in 3 Minutes is for testers.
Let’s say I could be so lucky as to have a few guest episodes about testing. Who would be the first few people you’d want to hear from? Who has a way with words and ideas, knows the work, and can speak to it — in their unique voice — to help the rest of us understand a bit better?
My first job was in Operations. When I got to be a Developer, I promised myself I’d remember how to be good to Ops. I’ve sometimes succeeded. And when I’ve been effective, it’s been in part due to my firsthand knowledge of both roles.
DevOps is two things (hint: they’re not “Dev” and “Ops”)
Part of what people mean when they say DevOps is automation. Once a system or service is in operation, it becomes more important to engineer its tendencies toward staying in operation. Applying disciplines from software development can help.
These words are brought to you by a Unix server I operate. I rely on it to serve this website, those of a few friends, and a tiny podcast of some repute. Oh yeah, and my email. It has become rather important to me that these services tend to stay operational. One way I improve my chances is to simplify what’s already there.
If it hurts, do it more often…
Another way is to update my installed third-party software once a week. This introduces two pleasant tendencies: it’s much…
- Less likely, at any given time, that I’m running something dangerously outdated
- More likely, when an urgent fix is needed, that I’ll have my wits about me to do it right
Updating software every week also makes two strong assumptions about safety (see Modern Agile’s “Make Safety a Prerequisite”): that I can quickly and easily…
- Roll back to the previous versions
- Build and install new versions
Since I’ve been leaning hard on these assumptions, I’ve invested in making them more true.
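The weekly cadence itself is nothing fancy; it could be as small as a cron entry (day, time, and script name here are illustrative, not my actual setup):

```
# crontab(5): kick off the weekly package build early Sunday morning
0 3 * * 0	$HOME/bin/build-packages
```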
The initial investment was to figure out how to configure pkgsrc to build a complete set of binary packages that could be installed at the same time as another complete set. My hypothesis was that then, with predictable and few side effects, I could select the “active” software set by moving a symbolic link.
It worked. On my PowerPC Mac mini, the best-case upgrade scenario went from half an hour’s downtime (bring down services, uninstall old packages, install new packages, bring up services) to less than a minute (install new packages, bring down services, move symlink, bring up services, delete old packages after a while). The worst case went from over an hour to maybe a couple of minutes.
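The switch can be sketched in a few lines of shell. This is a toy version of the idea, with scratch directories standing in for the real package prefixes; the names are mine for illustration, not the actual pkgsrc layout:

```shell
# two complete, independently installed package sets; one "active" symlink
set -eu
cd "$(mktemp -d)"
mkdir -p pkg-old/bin pkg-new/bin
printf 'old\n' > pkg-old/bin/version
printf 'new\n' > pkg-new/bin/version
ln -sfn pkg-old pkg        # services resolve everything via ./pkg/...
# ...later: the new set is fully installed alongside the old one...
ln -sfn pkg-new pkg        # the "upgrade": one quick symlink move
cat pkg/bin/version
```

Rolling back is the same move in reverse, and the old set stays on disk until I’m confident enough to delete it.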
…Until it hurts enough less
I liked the payoff on that investment a lot. I’ve been adding incremental enhancements ever since. I used to do builds directly on the server: in place for low-risk leaf packages, as a separate full batch otherwise. It was straightforward to do, and I was happy to accept an occasional reduction in responsiveness in exchange for the results.
After the Mac mini died, I moved to a hosted Virtual Private Server that was much easier to mimic, so I took the job offline to a local VirtualBox running the same release and architecture of NetBSD as the server.
The local job ran faster by some hours (I forget how many), during which the server continued devoting all its I/O and CPU bandwidth to its full-time responsibilities.
The last time I improved something here, it was to fully automate the building and uploading, leaving myself a documented sequence of manual installation steps. Yesterday I extended that shell script to generate another shell script that’s uploaded along with the packages. When the upload’s done, there’s one manual step: run the install script.
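A sketch of that generate-a-script step (every name here is invented for illustration; the real version knows my package paths and versions):

```shell
# build script's final step: emit an install script to upload with the packages
set -eu
cd "$(mktemp -d)"
batch="upload"
mkdir -p "$batch"
cat > "$batch/install.sh" <<'EOF'
#!/bin/sh
# run me on the server after the upload finishes
set -eu
echo "installing new package set..."
# pkg_add of the new set, the symlink switch, and service restarts go here
EOF
chmod +x "$batch/install.sh"
"./$batch/install.sh"
```

Generating the script at build time means it can bake in exactly the filenames that were just built, so the server-side step has nothing to guess about.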
If you can read these words, it works.
DevOps is still two things
Applying Dev concepts to the Ops domain is one aspect. When I’m acting alone as both Dev and Ops, as in the above example, I’ve demonstrated only that one aspect.
The other, bigger half is collaboration across disciplines and roles. I find it takes some not-tremendously-useful effort to distinguish this aspect of DevOps from BDD — or from anything else that looks like healthy cross-functional teamwork. It’s the healthy cross-functional teamwork I’m after. There are lots of places to start having more of that.
If your team’s context suggests to you that DevOps would be a fine place to start, go after it! Find ways for Dev and Ops to be learning together and delivering together. That’s the whole deal.
Here’s another deal
Two weeks from today, at Agile Testing Days in Potsdam, Germany, I’m running a hands-on DevOps collaboration workshop. Can you join us? It’s not too late, and you can save 10% off the price of the conference ticket. Just provide my discount code when you register. I’d love to see you there.