Stop Using AI as an Assistant
There's a difference between having AI do your tasks and developing AI as a leader. Most organizations are doing the first and calling it the second.
When I meet executives and CTOs, I always ask the same question: "Is your organization using AI well?" The answer is usually confident. "Yes — our developers use it, our designers, our marketers. Everyone's using it." Push one layer deeper. "What specifically are you using it for?" Most people answer: writing first drafts, code review support, polishing email copy.
All of it is assistant work. The things they're asking AI to do are roughly what they used to ask an intern to do. Anthropic's March 2026 Economic Index showed the same pattern: experienced users — six months or more — skew heavily toward work tasks and tackle higher-level problems, while newer users lean toward personal use. Same tool — but what people ask of it is already diverging.
Using AI as an assistant isn't wrong. Drafts get faster. Translations get easier. Code snippets appear immediately. But it stops there. When AI stays an assistant, the existing work process stays intact. Humans plan, humans judge, humans execute — AI handles small tasks in between. The process itself doesn't move.
It's like bolting an engine onto a carriage. Faster, sure, but still a carriage. Upgrading AI to "team member" puts the engine in a proper frame, but the driver is still human — where to go, which road to take, all human decisions. AI's potential is still capped at what the human thinks to ask for.
There's a third level. Political dramas have a recurring figure: the character who makes presidents. They don't exercise power directly; they decide who gets placed where, and in what direction to develop them. The kingmaker.
That's what using AI well actually looks like. Not putting AI beneath you. Developing it as a leader. And becoming the kingmaker who builds that leader.
The difference shows in practice. Say repeated complaints keep landing in the support channel. The assistant approach: a human reads the logs and tells AI "summarize this feedback." Done. The team member approach goes a step further — "categorize these logs" — but the human still defines the categories and figures out the next steps.
The kingmaker approach is different. Give AI the entire support log archive and say: "Find recurring patterns, categorize by type, and build an improvement proposal for each category." AI analyzes, judges, and proposes. The kingmaker looks at the result and says "why did you frame it this way?" and "this doesn't fit because of X." Next round, AI incorporates that feedback and produces better judgment. Not directing — delegating whole problems, then sharpening AI's judgment through feedback.
And something important happens in the process. The kingmaker changes too. Delegating a whole problem requires first clarifying what the essence of that work actually is — a question that never came up when "just summarize this" was enough. Raising AI's level requires sharpening your own standards first. And once you've developed an AI leader well enough, it starts catching patterns you missed and proposing directions you hadn't considered. You build the leader, but the leader you've built starts teaching you back.
Once one domain works, apply the same approach to others. Build an AI leader for marketing that analyzes campaign data, evaluates performance, and proposes the next campaign's direction. Early on, some proposals will miss. But after repeated feedback — "that's not our target," "this channel doesn't produce ROI" — it starts spotting trends before you do. Do the same with customer analysis, product planning, operations. One kingmaker developing multiple leaders across multiple areas.
Back to the original question. "Is your organization using AI well?" The organizations confidently saying yes and the ones that can't quite say it look different on the surface, but share the same problem: no clear view of what AI is to them.
A tool operates at the level of the person using it. Without a perspective on what kind of entity AI is, the way you use it won't change no matter how much better the tool gets. The potential of AI developed as a leader grows with every model improvement — no ceiling. But an AI used as an assistant may already be good enough for the tasks it's given, which means it has nowhere left to grow.