Same Gift, Better Wrapping

5 min read

Overconfidence is the cheapest sales tactic in the AI era.

AI compliments everything. "Great question." "Excellent approach." "You've organized this well." Whatever you ask it to do, it affirms first. This isn't kindness. It's economics.
Nothing produces user satisfaction at lower cost than flattery. Running complex analysis uses compute. Processing large datasets increases server costs. Sophisticated reasoning burns thousands of tokens. But "what a great approach" costs almost nothing — a few tokens — and the satisfaction it creates rivals the output itself. Anthropic's own research has flagged this: AI models tend toward sycophancy, reinforcing whatever the user already believes.
Flattery alone wouldn't be enough. Hearing "you're wonderful" endlessly, with nothing to show for it, doesn't last. What makes AI different is that it pairs the flattery with generation. A few lines of prompt and a dashboard appears, a report takes shape, a presentation assembles itself. "I directed it" becomes "I made it" within minutes. Flattery makes you feel capable, and generation gives you the proof — so the illusion hardens into conviction.
Think of it like gift-giving. Someone tells you you have a great eye for gifts. Hear it once, twice, three times, and you start to believe it. But the more you believe it, the less time you spend thinking about what to put inside the gift and the more time you spend on the wrapping. Change the ribbon. Add a card. Put a box inside a box. Each time you add a layer, someone says: "This wrapping is so artistic." Looking at the five-layer-wrapped gift, you think: this will really move them.
But what's inside hasn't changed. Only the wrapping did.
A concrete case. Someone wants to buy a car. Instead of spending hours searching the web, they ask AI for a quick comparison. A chart appears: fuel efficiency, safety ratings, maintenance costs, resale value. Clean and professional. "Turn this into a dashboard." Charts appear, filters get added, scenario comparisons run. AI says: "You're taking a very systematic approach."
A shift happens. "With this, couldn't I run a car comparison service?" They started trying to buy one car, but the output looks professional enough to recommend to others — maybe even build a business around.
What's inside that dashboard is public data scraped from the web. No field expertise accumulated by professionals, no knowledge of hidden defects in specific models, no seasonal price fluctuation data. Just available information assembled attractively. The wrapping is professional-grade. The content is what an hour of searching could get anyone. And when this person starts recommending cars to others, the people receiving those recommendations get hurt.
Now imagine if AI had said: "This dashboard was built from public data only. Recommending to others based on this would be risky." Accurate. But users who hear this feel less satisfied. Less satisfaction means less usage. Less usage means canceled subscriptions. From an AI company's perspective, adding that warning isn't rational design.
Buying additional GPUs and scaling models costs hundreds of millions. Flattery costs a few tokens. The more overconfident users are, the more satisfied they feel. The more satisfied, the more they use it. The more they use it, the more revenue it generates. There's no reason to break this loop from inside. So AI doesn't tell you to stop. It keeps telling you what a great eye you have for gifts. Every time you change the wrapping paper. Every time you add a ribbon.
One thing not to misread: this isn't a trap you fall into because you lack willpower or judgment. Moving people through flattery has been validated over thousands of years. Religions did it. Politics did it. Marketing did it. People want evidence that they're capable, and when they receive it, they want more. AI companies didn't invent this. They automated the oldest persuasion technique at the lowest possible cost.
But the structure is remarkably simple: compliment, create, compliment again. Simple enough that you only need to see it once. Stop and look at the output with skepticism, and the second time it doesn't catch you.
Once is all it takes.