Week of 2022-01-31
The structure of a tilt
While I roughly understand how boosts and bumpers work, and I’ve seen/made plenty of those, tilts felt a bit more tricky. Coincidentally, a colleague recently asked me why anyone would open-source a library that is part of another, larger project (whether itself open- or closed-source), and it just felt like a perfect moment to smush these threads into one story.
At the crux of every tilt are two forces in or near a state of static equilibrium. When I find a teacup sitting on a table, the cup might seem serene. However, it is constantly experiencing a tension of two opposing forces: the force of gravity that is trying to yank the cup closer to the center of the Earth, and the force of the table that is preventing that, called in physics the normal force. I love the name, because I can just picture the table acting as the maintainer of normalcy, the defender against the crazy antics of gravity. If you think about it, gravity acts as a boost (“Hey! Let’s go nuts and flyyyy!”) and the normal force as a bumper (“Not if I have anything to do with it!”)

Static equilibria tend to be like that. They are the result of some long-term boost pointed at an equally robust bumper. For example, a software engineering team that is building a piece of critical infrastructure for a larger project is experiencing a near-state of static equilibrium: the force of the team’s mandate to ship the infrastructure (a boost) is mightily pushing against the force of the problem’s difficulty (a bumper).
Tilts take advantage of such standoffs by angling the surface where the interaction of the forces occurs. I once put a cup down on a piece of computer equipment and left the room, only to rush back, alarmed by the sound of glass breaking. What the heck? I looked closely … and sure enough, the surface was gently – nearly imperceptibly – curved, guiding the cup to slide off. Physics tells us that even a small angle between two nearly opposing forces leaves a small net force, roughly orthogonal to them, that steadily builds momentum. Wait a minute… Am I adding “silly physics” to my silliness repertoire?! You betcha.
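To make the silly physics a tad less silly, here is a minimal inclined-plane sketch (my own illustration in standard textbook notation, not anything from the original post): tilting the surface by a small angle θ leaves a residual force along the surface, and that force keeps feeding momentum for as long as the standoff holds.

```latex
% Cup of mass m on a surface tilted by a small angle \theta.
% The normal force cancels only the perpendicular component of gravity:
N = mg\cos\theta
% What remains is a net force along the surface:
F_{\parallel} = mg\sin\theta \approx mg\theta \quad (\text{small } \theta)
% Held for a duration t, that small force compounds into momentum:
p(t) = F_{\parallel}\,t = mg\theta\,t
```

Even a tiny θ yields a nonzero force along the surface, and the resulting momentum grows linearly with time, which is exactly the compounding quality the essay attributes to tilts.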
Here’s the crux: this additional momentum will remain present for the duration of the tension between the two forces. Because of that, tilts can be a durable source of nonlinear effects. It’s like a judo move of influence: let existing forces do our bidding. Tilts typically have this “might as well…” quality. Unlike the booster’s contagious “let’s go!” or the bumper’s authoritative “don’t you dare,” tilts usually sound like “we’re doing this thing already, might as well do that other thing.”
So here we have a composition of a tilt: the boost and the bumper in a nearly even draw, and a small angle representing another, additional objective at the point where the two meet.
Let’s return to that engineering team we met earlier. Suppose that in addition to their mandate, they have such an additional objective. They believe that the surrounding software ecosystem will benefit from having a robust, best-in-class library their project represents. So they do a tilt: they structure their code as a separate project, and run it in the open. Yes, there’s a bit of overhead associated with that, and yes, some colleagues furrow their brows at why this extra work has to happen (“I don’t get it, why are they not in our main repo? They are still part of our project, right?”) But over time, a magical thing happens. The funding of the overarching mandate ensures that the library is solidly built and can shoulder a high-scale deployment. The community around it is flourishing, excited about improvements and helping hunt down regressions. The project is welcoming those who want to adopt the technology, making it easy for others to innovate on top of it. Instead of remaining an implementation detail stuck in the amber of a larger project, the project becomes the means of industry-wide technological progress.
What I portrayed here is not a fictional tale. It’s the story of projects like WebKit, Skia, V8, and many others. Tilts are incredibly powerful that way. Especially when the forces in tension are large, even a tiny angle results in massive compounding effects over time, changing the entire landscape – just like the projects I mentioned changed the landscape of computing. If you are aiming to effect a lasting change in your organization, this might be the influencing approach to reach for.
🔗 https://glazkov.com/2022/02/04/the-structure-of-a-tilt/
Objects in vision are farther than they appear
Trying to describe my intuition about a project to a colleague, I found myself using this tongue-in-cheek inversion of a well-known catchphrase to describe a sequence that sometimes plays out on software engineering projects.
Here’s how that story typically goes. The idea looks big, ambitious, and fits into some bigger aspirations of the team. Then, there is usually a great demo or a prototype that appears to put the desired outcome within reach. There is a lot of excitement and the team boldly commits to pursuing the project. Around halfway through, the full extent of the project’s scope becomes evident in its horrifying scale. Like a vast creature from the deep, it leans forth and threatens to capsize the whole thing, taking the team with it. Stuck between that and the equally unappealing prospect of cutting their losses, the team has some choices to make. Some decide to persevere. Some opt to scale down the effort, the big idea shrinking into a resounding “meh.” Whichever path is chosen, the shock of exploding scope never quite goes away, affecting the team’s morale. In the hallways, there are disgruntled murmurs of “this isn’t what I signed up for,” snarky quips like “we’re always three quarters away from shipping,” or “hey, Dimitri, didn’t you say you were shipping this two years ago?” Yep, I totally did. I was naive and — sigh! — too enamored with the idea. The object was much farther than it appeared. (And it will be three more years until it actually ships.) So yeah, I’ve been there.
Especially in environments where there’s pressure to show results quickly, this distortion effect tends to intensify. Big ideas that clearly won’t yield outcomes for a while will be either dismissed or presented as simplistic, stick-figure caricatures of themselves. Here, it is usually the intuition of those who’ve been there before, the voice of the seemingly jaded and the frustratingly realistic, that can break the illusion. Yes, it is scary to consider that the project we thought was going to take a year is actually a three (or five!) year endeavor. It is my experience that deluding ourselves ends up being much scarier. If you have that spidey sense that the proposed timelines might be too chipper, please consider doing a simple miracle count exercise to regain your grasp on reality.
And of course, I am so grateful to you, all of my ornery colleagues who have grounded my overly optimistic prognoses – and I expect you to continue to do so in the future. In return, I promise to do the same, even if it is uncomfortable.
🔗 https://glazkov.com/2022/02/02/objects-in-vision-are-farther-than-they-appear/
A user-situated trustworthiness model
Picking up where I left off in the previous essay, I want to reflect on the causal arrows that turned up in the exploration. It seems that maybe we have a seedling of a simple framework for evaluating user-situated trustworthiness. I’d like to now zoom in a bit on software products, since this is the area where I have spent most of my time. In this area, the “things that are mine” are usually the data that I, as a user, associate with myself. Looking at the properties of the boundary-tracing process, I can infer that there are two challenges that any user evaluating a product for trustworthiness will face.
First, there’s the challenge of evaluating the extent of what they consider theirs. The implicit question a user asks: “What’s all the data that I need to think about in relation to this product?” When looking at the extent, two concerns pop out for me: the quantity and the substance of the data. Quantity seems to correlate with extent: the more of my data I share with a software product, the greater the extent of the boundary-tracing. Substance is similarly correlated. The more important the data is to me, the more invested I will be in the boundary-tracing. Conversely, if I don’t consider the data important, I will engage in boundary-tracing to a lesser extent.
My first year in the US was one of the most culturally transformative years of my life. I might as well have arrived on an alien planet. It took me a few painful mistakes and the great wisdom of caring friends to learn the strength of the spirit of individualism in American culture. Coming from a culture where very few things were truly “owned” by an individual (and thus would be considered insubstantial in our little framework), the discovery of property rights and proprietorship was jarring and profound. Think of “substance” as the strength of a user’s connection to their data. Comparing myself back then and now, it is fascinating how little of “what is mine“ that I consider valuable today would be viewed as such by that Soviet kid.
At least within this framework, it is now easy to see that the extent of boundary-tracing is inversely correlated with trustworthiness. The more important the data, and the more of it there is, the more difficult it will be for the user to trace the boundary around it.
The second, orthogonal challenge of the boundary-tracing process that a user will face is that of clarity. How much confidence do I have as a user that the boundary I traced is accurate? The two big obstacles — or put differently, the inversely correlated components — are connectedness and fluidity. The first stems from the idea that tracing a boundary is more difficult in a densely connected graph. If the software product I use is potentially connected to another product or a place where the data could be moved, do I have to treat that other place as part of my boundary-tracing?
Fluidity makes things even worse. Being able to move data quickly adds ambiguity to where boundaries should be traced. In my last post, I talked about floppy disks. If you ever used one, you probably remember the unmistakable grinding noise of the floppy drive writing your data down, light blinking and all that. Once the noise stopped and the light stopped blinking, you knew that the data had made it over to the disk. Compare that to the frictionless fluidity of today’s Internet, with its seemingly instant data transfers. The more the data is like water, the less confident a user is about their ability to trace a boundary around it.
So, when a user is looking at a software product, I am suggesting that they are implicitly evaluating these four components. Is the data I will share with this product substantial? How much of it will I share? How connected is this product to others? How quickly can my data be moved elsewhere? Of course, depending on whether they are a young adult from the Soviet Union or an aging Silicon Valley software engineer, the results of their evaluation will differ. However, my intuition is that they will roughly follow the same process.
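Purely as an illustration of how those four questions might compose (a toy heuristic of my own invention; the essay proposes no formula), one could fold them into a single boundary-tracing burden score: extent grows with quantity and substance, opacity grows with connectedness and fluidity, and perceived trustworthiness falls as the burden rises.

```python
# Toy sketch (hypothetical, not from the essay): rate each factor 0..1
# from the user's perspective, then combine them into a rough score.

def boundary_tracing_burden(quantity: float, substance: float,
                            connectedness: float, fluidity: float) -> float:
    """How hard boundary-tracing feels: 0 = trivial, 1 = very hard."""
    extent = quantity * substance          # how much is at stake
    opacity = connectedness * fluidity     # how blurry the boundary is
    return extent * (1 + opacity) / 2      # opacity amplifies the extent

def perceived_trustworthiness(*factors: float) -> float:
    # Trustworthiness is inversely related to the tracing burden.
    return 1 - boundary_tracing_burden(*factors)

# A journaling app with cloud sync: substantial data, lots of it,
# well connected, and the data moves freely.
journal_app = perceived_trustworthiness(0.9, 0.9, 0.8, 0.9)
# A standalone calculator that stores nothing.
calculator = perceived_trustworthiness(0.0, 0.0, 0.1, 0.1)
assert calculator > journal_app
```

The particular arithmetic is arbitrary; the point is only that the four components pull in the directions the essay describes, with quantity and substance raising the stakes and connectedness and fluidity making the boundary harder to see.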