Week of 2025-01-13: Recipes for Thought
Where I highlight a different approach to using LLMs and expand it into the whole repeatable thinking recipes bit.
I’d like to present to you a distinction: two different approaches I take when using large language models (LLMs). I’ll call these two approaches “chat” and “recipe”.
In the first approach, I treat my interaction with Gemini, ChatGPT, et al. as a conversation: I type something, the LLM replies, I type again, etc. Very familiar, right? It’s how we talk to other humans. This is the “chat” approach and it seems to be quite dominant in the modern AI landscape, so I am not going to spend much time studying it.
Now, let’s step out of the familiar and change the perspective a little bit. Instead of seeing it as an unstructured back-and-forth, let’s treat the turns of the conversation as going through steps in a recipe. Each step contains a prompt from me and a generated reply from the LLM.
The shift is subtle, but it’s there. I am no longer chatting. I am guiding the LLM through the steps of a recipe for thought. “First think this way, then think that way, and now think like this”. With each step, the LLM’s replies get closer to the final product of the recipe.
If you observe me use this approach with an LLM, you’ll notice a difference right away in how I treat the conversation turns.
Suppose I type: “Write me a story” – and an LLM writes a story about … the Last Custodians of Dreams. It’s nice, but as I read it, I am realizing that I actually want a story about dolphins.
When using the “chat” approach, I simply ask the LLM to fix the problem in the next conversation turn. “No, I want a story about dolphins”.
With the “recipe” approach, I click the little “Edit” icon next to my first turn’s prompt and edit it to refine it: “Write me a story about dolphins”.
Okay, the response is much closer, but now I see that this story is too short. Hmm… how do I get it to stretch the response a bit? Perhaps I need to first let the LLM consider the full story arc – and then fill in the details?
So I edit the first turn prompt again: “Write an outline of a story about dolphins. Anthropomorphize dolphins to tell a story about being alone, but not lonely.” Alright! This outline feels much closer to the story I want to see.
All this time, I was still in the first conversation turn! Now, I am ready to move to the next turn: presumably, asking an LLM to start adding details to the outline.
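The walkthrough above can be sketched as a tiny program: a recipe is just an ordered list of prompts, run in sequence, with each turn's reply carried forward as context for the next. This is a minimal illustrative sketch – `generate` is a hypothetical stand-in for whatever real LLM API you use, not an actual library call.

```python
# A minimal sketch of the "recipe" approach: each step's prompt is
# appended to the running context, and the reply feeds the next step.
# `generate` is a placeholder for a real LLM call.

def generate(context: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"<reply to: {context.splitlines()[-1]}>"

def run_recipe(steps: list[str]) -> list[str]:
    """Run each prompt in order, carrying the conversation forward."""
    context_lines: list[str] = []
    replies: list[str] = []
    for prompt in steps:
        context_lines.append(prompt)
        reply = generate("\n".join(context_lines))
        context_lines.append(reply)
        replies.append(reply)
    return replies

recipe = [
    "Write an outline of a story about dolphins. "
    "Anthropomorphize dolphins to tell a story about being alone, but not lonely.",
    "Now expand each outline beat into a full scene.",
]
replies = run_recipe(recipe)
```

Note that editing the first prompt in place, as described above, amounts to changing `recipe[0]` and re-running the whole list – the recipe, not any single reply, is the artifact being refined.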
The end result might look like a very brief conversation, but the outcome is typically much better: continuously refining the prompt at each turn and carefully shaping the structure of the recipe yields the output I actually want.
The reason lies in the nature of the process of finding the right recipe. When building one, we try to better understand how an LLM thinks – and, more importantly, how we think about the problem. I find the recipe approach very similar to mentoring: in the process of teaching the LLM to follow my recipe, I learn just as much about my own cognitive processes. How do I typically think about writing a story? What are the steps that I myself take to ensure that the story is novel, coherent, and interesting?
This process of thinking about our own thinking is called metacognition. When using the “recipe” approach, we engage in metacognition for both the LLM and ourselves. Using our prompts as probes, we explore what an LLM is capable of and which prompts yield better results. We are also challenged to uncover our own tacit knowledge and turn it into a sequence (or a graph!) of prompts that an LLM can easily follow.
Metacognition comes easier to some than to others. I know many folks who are experts at their craft, but suffer from the “Centipede’s Dilemma”: unable to explain their own thought process, their expertise entirely submerged in the subconscious.
However, if metacognition is something that we’re used to, we can now, through the “recipe” approach, transfer our thought processes onto recipes. We can let LLMs do our thinking for us, because – plot twist! – we can make these recipes repeatable.
Observe: once I have a complete recipe for writing a story about dolphins, all I need is a way to swap the word “dolphins” in the first prompt for another creature – and to re-run all the steps in the recipe! Now I can generate great stories about monkeys, doves, cats, and turtles. By parametrizing our recipes, we can make them generic and applicable to a variety of inputs.
Stories about animals are great, but let’s step back and engage our meta-metacognition. Hoo-boy, we must go deeper. What kind of metacognition are we seeing here? Described generally, the pattern above is a process of transferring cognitive know-how – some expertise in thinking about a problem – into a repeatable recipe.
We all have cognitive know-how, even if we don’t realize it. More importantly, we all have potential for drawing value from this know-how beyond our individual use.
There’s a saying that goes “if you want something done right, do it yourself”. The thinking recipes allow us to amend the last part to “make a repeatable thinking recipe for it, and let the LLM do it”.
An expert in organizational strategy will undoubtedly have a wealth of cognitive know-how on the topic, from listening and interviewing, to running the sessions to generate insights, to coalescing disparate ideas into crystal clear definitions of the problem, and so on. Whenever this expert engages with a client, they have a playbook that they run, and this playbook is well-scripted in their mind.
I once was so impressed with a training session on team communication that I just had to reach out to the speaker and ask them for the script. With this script in hand, I would have a way to run these sessions for my whole team. I was quite shaken when the speaker revealed that what she had was more like a hundred fragments of the session puzzle that she puts together more or less on the fly, using the audience as a guide to which fragment to pick next. What to me looked like a simple linear flow was actually a meandering journey through a giant lattice of cognitive know-how.
In both cases, the cognitive know-how is trapped inside the experts’ heads. If they wish to scale – go bigger or wider – they immediately run into the limitation of being just one person traversing their particular cognitive know-how lattice.
However, if they could transfer their lattices into repeatable reasoning recipes, the horizons expand. At the very least, an LLM armed with such a recipe can produce a decent first draft of the work – or ten! When I apply repeatable reasoning recipes, my job shifts from following my own know-how to reviewing and selecting the work produced by a small army of artificial apprentices.
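The “draft ten, then review and select” workflow can be sketched similarly. Everything here is illustrative: `generate` fakes sampling variability with a seed, and the review step is a toy scoring function standing in for a human pass (or another prompt in the recipe).

```python
# Sketch of "review and select": run the same recipe step several
# times and pick the draft that best matches a review criterion.
# `generate` is a placeholder for a real, non-deterministic LLM call.
import random

def generate(prompt: str, seed: int) -> str:
    random.seed(seed)  # the seed stands in for sampling variability
    length = random.randint(50, 500)
    return f"<draft of {length} words for: {prompt}>"

def draft_many(prompt: str, n: int = 10) -> list[str]:
    """Run the same step n times to get n candidate drafts."""
    return [generate(prompt, seed) for seed in range(n)]

def review(draft: str) -> int:
    # Toy review criterion: prefer longer drafts. A real review step
    # would be a human pass or another prompt in the recipe.
    return int(draft.split(" words")[0].rsplit(" ", 1)[-1])

candidate_drafts = draft_many("Write a story about dolphins, following the outline.")
best = max(candidate_drafts, key=review)
```

The shape of the work changes here: the human effort moves out of the drafting loop and into defining the recipe and the review criterion.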
Repeatable thinking recipes allow us to bust through the ceiling of generic thinking that the current LLMs seem to be stuck under – not by making them omniscient and somehow intuiting exactly what we’re asking for, but by investing a bit of time into turning our own cognitive know-how into recipes to help us think at scale.
This is not just a matter of scaling. With scale come new possibilities. When the overall cost of running through a recipe goes way down, I can start iterating on the recipe itself, improving it, adding new ingredients, and playing with new ideas – ideas that I wouldn’t have had the opportunity to explore without having my artificial apprentices.