The Illusion of Data-Driven Decisions

4 min read

The data-driven era saw 1%. AI promises 100%. Neither is what a decision actually needs.

AI isn't a data tool. It's a decision partner — something that holds all four at once: numbers, situation, emotion, context. We use it like a calculator anyway, because that's what we did with data for a decade. We saw 1% of the data, called that "data-driven," and built an industry around the claim. Now we're doing the same thing with AI — smoother, faster, at scale.
Take any table with twenty columns. Pair them two at a time and you get 190 combinations. Three at a time, 1,140. Add time windows, segments, filters, and the number runs into the tens of thousands. The average company looks at maybe a hundred of these in a quarter. That's not even 1%.
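The arithmetic above is easy to check. A minimal sketch — the time-window and segment counts here are illustrative assumptions, not figures from the text:

```python
from math import comb

columns = 20
pairs = comb(columns, 2)    # two-column combinations: 190
triples = comb(columns, 3)  # three-column combinations: 1,140

# Layer on a couple of slicing dimensions (hypothetical counts):
time_windows = 4  # e.g. week, month, quarter, year
segments = 5      # e.g. region or customer tier
views = (pairs + triples) * time_windows * segments

print(pairs, triples, views)  # 190 1140 26600
```

Even with these modest assumptions the view count lands in the tens of thousands, which is why a hundred dashboards a quarter rounds to under 1%.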
The catch is that 1% became the bar. Cross it and you got to say you "decided with data."
And the 1% wasn't even random. It was whatever fit a hypothesis someone had already formed. If you can phrase the hypothesis, you already half-know the answer — so most analysis was just confirming, in numbers, what was already in someone's head. Data wasn't producing answers. It was dressing intuition up in numbers.
So what was the actual difference between intuition and data? The intuition crowd wasn't guessing blind. They ran on maybe 0.1%. We ran on 1%. That's the gap we built an industry on. Two people in a dark room — one shines a flashlight for a second, the other for ten. Neither sees the room. The ten-second one laughs at the other for "deciding without looking."
We built BI tools, hired data teams, preached governance. All of it to see 1%. The result was 1%. The cost was a hundred times that. The data-driven era was beautiful and expensive — and ended with almost nothing to show for it.
The AI pitch is that this all gets fixed at scale. Every combination, run in real time. All data labeled automatically. The era of looking at slices is over.
Except the 100% isn't 100%, and you weren't going to look at it anyway.
What AI sees is data someone already processed. Labels get automated, then another model trains on those labels, then another model trains on top of that. It's not 100% of the world. It's 100% of data layered on data layered on data.
And what users actually ask for breaks the rest. Almost nobody asks AI, "What's in this data?" They ask, "Confirm this hypothesis," or "Build me data that supports this point." AI obeys. The tool that could in theory show 100% returns 0.001% — shaped to the answer the user walked in with.
The old data era was pretty and expensive and ended in nothing. The new one gets there cheaper.
A decision was never just about data. Think of any meeting where something real got decided. You looked at numbers. You also weighed which quarter they came from, how the team was feeling that morning, what a competitor had announced the week before. In the data-driven era, only the numbers made it into the spreadsheet. The rest got worked out in the hallway after the meeting. No tool could hold them together.
AI is the first tool that can do all four. It shows you numbers while asking about the situation, reading the emotional weather, picking up the context. You ask about Q3 revenue. It pulls the numbers and asks if this is the right week to push — your team flagged burnout on Friday. The strange part is how rarely we let it. "Confirm this." "Back this up." "Analyze that." We hand a partner a calculator's job and complain that the answer feels thin.
The question was never 100% versus 0.001%. The question is what dimension you're using AI in.
We never decided with data. We decided with 1% and called it data-driven. Now we're holding the same ruler up to AI — either believing we're finally seeing 100%, or complaining we're only getting 0.001%. Both miss the point.
The tool is ready. The way we ask isn't.