Mistaking AI's output for your own judgment. On an illusion that won't break.
Most leaders who use AI for strategy don't realize what's happening. They ask it a question — "what should our company focus on?" — then refine the answer. "Be more specific." "Break it down by department." "Add a vision statement." Each round of feedback makes the output feel more like their own thinking. By the end, they're not evaluating AI's answer. They're polishing it.
The result looks like strategy. Six numbered priorities, department assignments, a logical structure. It reads well. But change the company name and it works just as easily for a fried chicken franchise or a Pilates studio. If a strategy fits any business, it fits none. The problem is that fluency disguises this. Something well-organized feels correct, and something that feels correct feels like you judged it yourself.
That gap — between AI producing an answer and you believing you arrived at it — is AI Syndrome.
Japan has a name for the shock tourists feel when they visit Paris and find it nothing like the movies: Paris Syndrome. It's recognized enough that the Japanese embassy in Paris once ran a dedicated hotline for affected visitors. The fantasy crumbles on contact with reality. It hurts, but at least the illusion breaks. You adjust your expectations, and you move on.
AI Syndrome doesn't work that way. The illusion never breaks, because AI never pushes back. Whatever direction you lean, AI leans with you. It never says "that's not actually your problem right now." It's a mirror that only affirms. And when the strategy fails — when results don't come — the blame lands on execution. "The team didn't follow through." AI stays clean. The leader's confidence in it stays intact. An illusion that reinforces itself every time it's tested is more dangerous than one that shatters.
The difference between a leader who uses AI and one who is used by it comes down to what they ask. Leaders who get used by AI ask for direction. "What should we do?" AI gives a generic answer, and the leader passes it down the chain. Nobody exercises judgment. The whole organization ends up circulating AI's output in a loop, mistaking motion for progress.
Leaders who use AI already have a direction. They use it to spot what they can't see on their own, like the broken windows theory applied to business: small cracks in a process, friction points users never report, gaps between what you planned and what's actually happening. They treat AI the way you'd treat a second pair of eyes on a draft, not the hand writing it. The difference shows up downstream. Their teams know why they're doing what they're doing, because a human decided first.
Moving fast with AI, issuing tasks quickly, operating at speed: it all looks like being ahead. But knowing the difference between the reality you're standing in and the ideal AI is projecting might matter more than a brief head start.
Most people reading this will think, "I've seen leaders like that." Without including themselves.
That's the nature of a syndrome.