Week of 2021-08-16
Effectiveness and certainty
I’ve been thinking about the limits to seeing, and this diagram popped into my head. It feels like there’s more here, but I am not there yet. I thought I’d write down what I have so far.
There’s something about the relationship between the degree of certainty we have in knowing something and our effectiveness in applying that knowledge. When we’ve just encountered an unknown phenomenon -- be that a situation or a person -- we tend to be rather ineffective in engaging with it. Thinking of it in terms of constructed reality, our model of this phenomenon generates predictions that are mostly errors: we think it will go this way, but it goes completely differently. “I don’t get it. I thought he’d be glad I pointed out the problem with the design.” Each new error contributes to improving the model, and through continued interaction with the phenomenon, we improve our prediction rate and, with it, our certainty about our knowledge of the phenomenon. “Oh, turns out there’s another tech lead who actually makes all the calls on this team.” Here, our effectiveness in applying the knowledge seems to climb along with the improvements. At some point, our model gets pretty good, and we reach our peak effectiveness, wielding our knowledge skillfully to achieve our aims. This is where many competence frameworks stop and celebrate success, but I’ve noticed that often, the story continues.
As my prediction error rate drops below some threshold, the model becomes valuable: the hard-earned experiential knowledge begins to act as its own protector. The errors that help refine the model are rapidly incorporated, while the errors that undermine the model begin to bounce off. Because of this, my certainty in the model continues to grow, but my effectiveness slides. I begin to confuse my model with my experiences, preferring the former to the latter. “She only cares about these two metrics!” -- “What about this other time...” -- “Why are we still talking about this?!” Turns out, my competence has a shadow. And this shadow will lull me into the sleep of perfect certainty… until the next prediction error cracks the now-archaic model apart, and spurs me to climb that curve all over again.
For some reason, our brains really prefer fixed models -- constructs that, once understood, don’t change or adapt over time. A gloomy way to describe our lifelong process of learning would be as the relentless push to shove the entire world into some model whose prediction rate we are highly certain of. And that might be okay. This might be the path we need to walk to teach ourselves to let go of that particular way of learning.
🔗 https://glazkov.com/2021/08/20/effectiveness-and-certainty/
The miracle count
While working on a technical design this week, my colleagues and I dusted off an old framing: the miracle count. I found it useful when considering the feasibility of projects.
Any ambitious project will likely have dependencies: a set of unsolved problems that are strung together into a causal chain. “To ship <A>, we need <B> to work and <C> to have <metric>.” More often than not, the task of shipping such a project will be more about managing the flow of these dependencies than about their individual effectiveness or operational excellence.
Very early in the project’s life, some of these dependencies might pop out as clearly outsized: ambitious projects in themselves. If I want to make an affordable flying saucer, the antigravity drive is a dependency, and it still needs to be invented. I call these kinds of dependencies “miracles” and their total number in the project the “miracle count.” Put simply, the miracle count is the number of unlikely events that we need to happen for our project to succeed.
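To see why the miracle count matters, it helps to notice how unlikely events compound. A minimal sketch, with made-up probabilities (the function name and numbers are mine, purely for illustration): if each miracle must independently materialize, the project’s overall odds are the product of the individual odds, and they fall fast as the count grows.

```python
def project_odds(miracle_probabilities):
    """Chance that every miracle in the chain materializes,
    assuming (hypothetically) that they are independent."""
    odds = 1.0
    for p in miracle_probabilities:
        odds *= p
    return odds

# One miracle with a generous 50% chance: a coin flip.
print(project_odds([0.5]))              # 0.5
# Three such miracles: the project is down to 12.5%.
print(project_odds([0.5, 0.5, 0.5]))    # 0.125
```

The independence assumption is of course a simplification -- real dependencies interact -- but even this crude arithmetic shows why a miracle count above one or two is a prompt to start de-risking.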
I don’t know about other environments, but in an engineering organization, having a miracle count conversation is useful. The miracle might be technological in nature (like that antigrav drive), it might be a matter of shifting priorities (“this team is already overloaded and we’re asking them to add one more thing to their list”), or any number of other things. Recognizing that a dependency is a “miracle” is typically a prompt to de-risk, to have a contingency plan. “Yeah, I wonder if that generalized cache project is a miracle. What will we do if it doesn’t go through?”
Miracle-spotting can be surprisingly hard in large organizations. Each team is aiming high, yet wants to appear successful, and thus manages the appearance of its progress as a confident stride to victory. I have at least a couple of war stories where a team didn’t recognize a miracle dependency and parked their own “hacky workaround” projects -- only to grasp for them suddenly (“Whew! So glad this code is still around! Who was the TL?”) when the miracle failed to materialize. I’ve also seen tech leads go to the other extreme and treat every dependency as a miracle. This inclination tends to create self-contained mega-projects, increasing their own miracle-ness. In my experience, finding that balance is hard, and usually depends on the specific constraints of a dependency -- something that a robust miracle count conversation can help you determine.