Week of 2022-04-11
A Solution
If, as we learned earlier, our understanding of a problem is a model that includes us, our intention, and the phenomenon that is its subject, then a solution is a prediction, based on that understanding, that resolves the problem’s intention by aligning the state of the phenomenon with it.
Because the problem’s model includes us, the solution often manifests as a set of actions we take. For example, when I was trying to repel that mischievous bunny from the previous piece, one solution might look like this list: a) grab a tennis ball, b) aim at the tree nearby, c) throw the ball at the tree with all the force I can muster. However, solutions can also be devoid of our actions, as in that old adage: “if you ignore a problem long enough, it will go away on its own”.
Note that according to the definition above, a solution relies on the model, but is distinct from it. The same model might have multiple solutions. Additionally, a solution is distinct from the outcome. Since I defined it as a prediction, a solution is a peek into the future. And as such, it may or may not pan out. These distinctions give us just enough material to construct a simple framework for reasoning about solutions.
Let’s see… we have a model, a solution (aka prediction), and the outcome. All three are separate pieces, interlinked. Yay, time for another triangle! Let’s look at each edge of this triangle.
When we study the relationship between solution and outcome, we arrive at the concept of solution effectiveness, a sort of hit/miss scale for the solution. Solutions that result in our intended outcomes are effective. Solutions that don’t are less so. (As an aside, notice how the problem's intention manifests in the word “intended”). Solution effectiveness appears to be fairly easy to measure. Just track the rate of prediction errors over time. The lower the rate, the more effective the solution is. We are blessed to be surrounded by a multitude of effective solutions. However, there are also solutions that fail, and to glimpse possible reasons why that might be happening, we need to look at the other sides of our triangle.
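To make the “track the rate of prediction errors over time” idea concrete, here is a minimal sketch in Python. The class name, the rolling-window approach, and the window size are my own illustrative assumptions, not something from the piece:

```python
from collections import deque

class SolutionEffectiveness:
    """Hypothetical sketch: track a solution's hit/miss record as a
    rolling prediction-error rate. Lower error rate = more effective."""

    def __init__(self, window: int = 20):
        # Keep only the most recent outcomes, so effectiveness can drift.
        self.outcomes = deque(maxlen=window)  # True = intended outcome, False = miss

    def record(self, intended_outcome: bool) -> None:
        self.outcomes.append(intended_outcome)

    @property
    def error_rate(self) -> float:
        """Fraction of recent attempts that missed the intended outcome."""
        if not self.outcomes:
            return 0.0
        misses = sum(1 for hit in self.outcomes if not hit)
        return misses / len(self.outcomes)

tracker = SolutionEffectiveness(window=5)
for hit in [True, True, False, True, True]:
    tracker.record(hit)
print(tracker.error_rate)  # 0.2
```

A rolling window (rather than an all-time average) matches the intuition that effectiveness is a property we keep reassessing as the phenomenon changes.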
The edge that connects solution and model signifies the possibility that our mental model of the problem contains an effective solution, but we may not have found it yet. Some models are simple, producing very few possible solutions. Many are complicated labyrinths, requiring skill and patience to traverse. When we face a problem that does not yet have an effective solution, we tend to examine the full variety of possible solutions within the model: “What if I do this? What if we try that?” When we talk about “finding a solution,” we usually describe this process. To firm this notion up a bit, a model of the problem is diverse when it contains many possible solutions. Solution diversity tends to be interesting only when we are still looking for a solution more effective than what we currently have. Situations where the solution is elusive, yet the model’s solution diversity is low, can be rather unfortunate – I need to find more options, yet the model doesn’t give me much to work with. In such cases, we tend to look for ways to enrich the model.
This is where the final side of the triangle comes in. This edge highlights the relationship between the model and the outcome. With highly effective solutions, this edge is pretty thin, maybe even appearing non-existent. Lack of prediction errors means that our model represents the phenomenon accurately enough. However, when the solution fails to produce the intended outcome, this edge comes to life: prediction errors flood in as input for updating the model. If we treat every failure to attain the intended outcome as an opportunity to learn more about the phenomenon, our model becomes more nuanced, and consequently, its solution diversity increases – which in turn lets us find an effective solution, completing the cycle. This edge of the triangle represents the state of flux within the model: how often and how drastically is the model being updated in response to the stream of solutions that failed? By calling it “flux”, I wanted to emphasize the updates that lead to “interesting” changes in the model: lack of prediction error is also a model update, but it’s not going to increase the model’s diversity. However, outcomes that leave us stunned and unsure of what the heck is going on are far more interesting.
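The full cycle – traverse the model’s current solutions, treat a miss as a prediction error, and feed that error back to enrich the model – can be sketched as a toy loop. This is my framing of the idea, not the author’s; the function names (`solve`, `observe`, `enrich`) and the number-guessing example are illustrative assumptions:

```python
def solve(model, intended, observe, enrich, max_rounds=50):
    """Toy sketch of the model/solution/outcome cycle: try each candidate
    solution the model offers; if every one misses, that prediction error
    updates the model, growing its solution diversity for the next round."""
    for _ in range(max_rounds):
        for solution in list(model):           # traverse the model's current diversity
            if observe(solution) == intended:  # no prediction error: effective solution
                return solution
        # Every candidate missed: "flux" -- enrich the model with new candidates.
        model.extend(enrich(model))
    return None  # the model never grew diverse enough to contain a solution

# Toy usage: guess a number; each failed round roughly doubles the candidates.
target = 12
found = solve(
    model=[1, 2, 3],
    intended=target,
    observe=lambda s: s,  # the "phenomenon" simply echoes our guess back
    enrich=lambda m: [max(m) + 1 + i for i in range(len(m))],
)
print(found)  # 12
```

The point of the sketch is the shape of the loop, not the search strategy: low-diversity models stall early, and only the enrichment step – driven by failed outcomes – gets them unstuck.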
Wait. Did I just reinvent the OODA loop? Kind of, but not exactly. Don’t get me wrong, I love the Mad Colonel’s lens, but this one feels a bit different. Instead of enumerating the phases of the familiar circular solution-finding process, our framework highlights its components, the relationships between them, and their attributes. And my hope is that this shift will bring new insights about problems, solutions, and us in their midst.
🔗 https://glazkov.com/2022/04/11/a-solution/
Rubber duck meetings
When I am looking for new insights, a generative conversation with colleagues is hard to beat in terms of quality of output. When I look back at what I do, a large chunk of my total effort is invested into cultivating spaces for generative conversations. It seems deceptively easy (“Let’s invite people and have them talk!”), but ends up being rather tricky – an art more than a technique. My various chat spaces are littered with tombstones of failed generative spaces, with only a precious few attempts actually bearing fruit. Let’s just say I am learning a lot.
One failed outcome of trying to construct a generative space is what I call the “rubber duck meeting”. The key dynamic that contributes to this outcome is the gravity well of perceived power. For example, a manager invites their reports to partake in a freeform ideation session. At this session, the manager shares their ideas and walks the team through them, or reviews someone else’s idea and brainstorms around it. There is some participation from the others, but if we stand back, it’s pretty clear that most of the generative ideation – and talking – is done by the manager.
Now, a one-person ideation session is not a bad thing. For programmers, it’s a very common technique for finding our way out of a bug. It even has a name: rubber duck debugging. The idea is simple: pretend you’re explaining the problem to someone (use a rubber ducky as an approximation if you must) and hope that some new insights will shake loose from your network of mental models in the process.
The problem with the rubber duck meeting is that everyone else is bored out of their minds and often frustrated. The power dynamic in the room raises the stakes of participation for everyone but the manager. No matter how earnestly we try to participate, even a subtle gravity well inexorably shifts the meeting toward monologue (or a dialog between two senior peers). The worst part? Unless these leaders make a conscious effort to reduce the size of their gravity well, they don’t notice what’s happening. They might even be saying to themselves: “This is going so well!” and “Look at all these ideas being generated!” and “I am so glad we’re doing this!” – without realizing that these are all their ideas and no new insights are coming in. They might as well be talking to a rubber duck. I know this because I’ve led such meetings. And only much later wondered: wait, was it just me thinking out loud all this time?
Now, about that “consciously reducing the size of the gravity well”? I don’t even know if it’s possible. I try. My techniques are currently somewhere around “just sit back and let the conversation happen” and “direct attention to other folks’ ideas”. The easiest way to reduce the rank-based power dynamics in a meeting seems to be inviting peers, though this particular tactic isn’t great either: the vantage points are roughly similar, and so the depth of insights is more limited.
I kept looking for ways to finish this bit on a more uplifting note. So here’s one: when you do find that generative space where ideas are tossed around with care, hang onto it and celebrate your good fortune. For you have struck gold.