Charlie Munger said collect mental models. I think the deeper question is how those models get created.
Someone on a factory floor once noticed that the slowest process determines overall speed. They didn't store the event — "our line 3 is slow." They stored the structure — "the slowest point determines the whole." That structure left the factory. It works in organizations, products, daily life. Stripping the specifics from an experience and keeping a frame that travels — I'd call that portable abstraction.
Munger had a version of this idea. Pull core principles from many disciplines. Weave them into a latticework. Use that lattice to see problems through multiple lenses. Shane Parrish built Farnam Street around it. Cognitive scientists have been studying how people reason across domains — the same mental representations showing up in completely different fields. They all point to the same thing: thinking across domains isn't a gift. It's a learnable structure.
But the latticework is about collecting models that already exist. I think the more interesting question is how models get made in the first place.
You've seen it happen in meetings. Someone says, "Our product onboarding is weak." Most people start thinking about UI fixes. One person says something different: "Isn't this the same structure as when we redesigned new hire orientation last year?"
Product onboarding and new hire orientation — what do those have in common? Once you hear it, it clicks. Both are the same problem: a newcomer drops off before experiencing the core value. The fix points in the same direction — create a small win early.
We call this "insight" or "big-picture thinking." But it's not a talent. It's a habit — seeing a problem once and walking away with a frame you can use again. You're not plotting a coordinate. You're laying down a coordinate system.
Bring this idea to AI and you hit a contradiction.
If you've worked in any professional setting, you know the rules. "What's the point?" "Lead with the conclusion." Between humans, brevity is a courtesy. The person who gets to the point fast is the one who communicates well.
AI doesn't reward brevity. It rewards context.
One person asks AI: "Why is our churn rate high?" They get a generic checklist. Onboarding complexity, slow time-to-value, feature gaps. Nothing wrong with it. Nothing new, either. The conversation leaves no residue. The next question starts from scratch.
Another person asks: "Map out the common patterns behind 30-day post-onboarding churn in B2B collaboration tools. Scope it to teams under 50 people." This isn't a question about their product. It's a question about the category.
AI lays out the structure. Three decision points drive 30-day churn — time to first collaborative action, whether the admin completes setup, and switching cost from the previous tool. Then comes the follow-up: "Where does our product fall within these patterns?"
Now they see angles they'd missed. "Admin setup completion" as a churn driver means the root cause might not be the end user at all. They'd been staring at user experience. Admin experience becomes a different axis entirely. And the result travels. "Early churn decision points in B2B SaaS" is a frame that works for the next product, the next team's churn problem. They just added a new piece to the lattice — inside a single conversation.
But it doesn't always work that way.
Three engineers quit in the same quarter. Instead of asking why directly, you try the coordinate-system approach: "Map the common patterns behind engineering attrition in mid-stage B2B startups." AI gives you a framework — compensation benchmarking, career path visibility, manager quality. It looks structural. It looks like it travels.
But all three engineers left because they hated the office relocation policy. The frame was real. The answer was local. You built a map when the problem had an address.
Portable abstraction has a failure mode: you can climb so high above a problem that you can't see it anymore. The habit of extracting structure is powerful, but it needs a counterweight — knowing when the useful answer lives in the details, not above them.
So when do you abstract and when do you stay concrete? If it's a simple lookup, search works fine. If the answer is local — "why did these three people quit" — just ask the people. What makes AI different is its ability to carry context forward and reason on top of it. That ability only activates when you give it context first.
Between humans, less context is more efficient. You're saving the other person's time. AI has no concept of time. More context makes it work better. The virtue of human conversation becomes a constraint in AI conversation.
Most people bring the same protocol to both. Be concise, get to the point. It works between humans. With AI, it's the reason every answer sounds generic. AI isn't generic. The question was.
When Munger told people to build a lattice, the premise was that you'd do the work yourself over decades. With AI, that process can happen in real time. Ask the kind of question that lays down a coordinate system, and AI pulls lenses from multiple domains at once — a compressed version of what Munger spent a lifetime building, playing out in a single session.
AI gives back what you put in. Ask narrowly, get narrow answers. Lay down context and open it up, and you get connections you didn't expect.
Portable abstraction isn't a technique for using AI well. It's a way of thinking. Munger said build the lattice. Parrish said cross the boundaries. Cognitive science is studying cross-domain reasoning. They're all pointing to the same thing: people who extract structure from one experience and carry it to another move faster in a complex world.
AI is a mirror that shows, in real time, whether that thinking is working. The person on the factory floor had to wait years to find out if "the slowest point determines the whole" held up somewhere else. You can find out in a single conversation.