Unpacking Anthropic's Learning Curves Report — The Numbers Say Settlement, Not Revolution


When the froth recedes, what's underneath is the foundation.

The Report

On March 24, 2026, Anthropic published the fifth installment of its Economic Index: "Learning Curves." The report analyzed one million Claude usage data points from a single week in February 2026 (Feb 5–12) — three months after Opus 4.5 launched, overlapping with the Opus 4.6 release.
The official framing is the learning curve itself: longer-tenure users use AI better. That's the headline. But before you reach the headline, the numbers along the way tell a different story. Almost every indicator is trending down.


The Declining Numbers

Updated figures from Chapter 1:




- Average implied wage: down
- Prompt education level: down
- Top-task concentration: down
- Academic use: falling
- Everyday queries (sports scores, product comparisons): rising
- New task types: essentially none
One or two indicators trending down would be noise. Almost every metric pointing the same direction is signal. The average level of AI use is declining.

This Is Froth Coming Off

The narrative that AI is transforming work has dominated for two years. Its evidence was high-value use: coding, data analysis, document creation. Knowledge workers boosting productivity through AI.
Now those high-value segments are shrinking. What's growing is everyday queries — "what were the scores last night," "recommend a washing machine." Implied wage down, education level down, no new task types appearing. AI use is drifting shallower, not deeper.
This is the classic froth pattern. Early enthusiasm concentrates at the high end. As time passes and actual value becomes clearer, the average comes down. AI hasn't become useless. The expectation that it would change everything is being calibrated against reality.


The Counterargument: This Is Just Mass Adoption

The report explains the phenomenon as an "adoption curve." Early users concentrated on high-value tasks like coding. Less technical users entered with more general purposes. The average came down. That's just what happens when something gets adopted broadly.
This isn't wrong. When smartphones went mainstream, early-adopter app exploration gave way to messaging and YouTube dominating usage time. Average use declining as the base widens is structurally normal.
But the adoption story requires one check: if the base is merely widening, existing users' usage level should hold or improve even as new users enter with everyday queries. That would make it "the base widened" rather than "the whole thing got shallower."
The report does show high-tenure users with higher task success rates and higher work-use share. Up to that point, the adoption curve holds.
The issue is the rest of the picture. Almost no new task types emerged. The share of occupations engaging with Claude via task work hasn't budged from previous reports. An adoption curve should show new use cases being discovered as it widens. What's happening is widening without anything new being found.
Whether it's adoption or froth, the practical pattern is the same: "everyone's productivity rises" becomes "a few people use it right, most use it like a search engine."


So Where Did the High-Value Use Go?

Maybe high-value work didn't disappear. Maybe it moved somewhere less visible.
Coding migration offers a hint:




Coding that used to happen in the chat interface — people pasting code and asking "fix this" — has moved to API-based agent tools like Claude Code. It joined automated workflows rather than chat sessions.
The report calls this "a signal of imminent labor market change." Stepping back, it looks more like people choosing the right tool for the job. A dedicated coding agent beats a chat window for coding — that's not a transformative behavioral shift. The "model selection" data the report highlights is similar: using Opus for complex work, Sonnet for simple queries is just using expensive tools for expensive work. Calling this evidence of a learning curve is overselling it.
What matters is what the migration left behind. Once high-value work moved to the API, what remains on Claude.ai is personal use and everyday queries. The declining averages partly reflect high-value work going invisible. Underwater, not extinct.

What's Happening Underwater

Two patterns emerged distinctly in the API-based workflows.
Sales automation — cold email generation, B2B lead verification, customer data enrichment. And automated trading — market monitoring, investment proposals, trader alerts. Both doubled in share over the prior three months.
Overall automation share is declining, yet these two areas are surging: automation has moved from "spreading broadly" to "concentrating in specific domains." As the froth recedes, the contour of what AI can actually automate becomes visible. Narrower scope, deeper penetration.


Long-Tenure Users Work Differently

Chapter 2 of the report is "Learning Curves" proper — comparing users with 6+ months of tenure (high-tenure) against those with less (low-tenure):




High-tenure users tackle harder tasks, use AI more for work, engage more collaboratively, and succeed more often. Their work variety is also wider.
Half of this is obvious. Someone who's used AI for six months at their job stuck with it because it was useful there. Higher education level among high-tenure users? Early adopters skew toward credentialed professionals. This could be selection bias as much as learning-by-doing.
The genuinely interesting finding is the success rate gap. Controlling for task type, country, and model, high-tenure users show 3–5 percentage points higher success rates. Same task, same model, different outcomes. The report treats this as potential evidence of learning-by-doing.
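That "controlling for task type, country, and model" step is doing real work. A minimal simulation (hypothetical numbers and task mix, not Anthropic's data or method) shows why: if high-tenure users skew toward harder tasks, a raw success-rate comparison can hide a genuine tenure advantage, while a regression with task controls recovers it.

```python
import numpy as np

# Hypothetical simulation, not Anthropic's data or method. It illustrates
# why controlling for task mix matters: if high-tenure users skew toward
# harder tasks, a raw success-rate gap can hide a real tenure advantage.
rng = np.random.default_rng(0)
n = 20_000

# Baseline success rates for easy / medium / hard tasks (assumed numbers).
base = np.array([0.75, 0.65, 0.55])

# High-tenure users pick harder tasks more often (per Chapter 2).
high = rng.random(n) < 0.5
task = np.where(
    high,
    rng.choice(3, size=n, p=[0.2, 0.3, 0.5]),  # tilted toward hard
    rng.choice(3, size=n, p=[0.5, 0.3, 0.2]),  # tilted toward easy
)

TRUE_TENURE_EFFECT = 0.04  # assumed +4pp, inside the report's 3-5pp range
success = rng.random(n) < base[task] + TRUE_TENURE_EFFECT * high

# Naive comparison: raw gap in success rates, ignoring task mix.
naive_gap = success[high].mean() - success[~high].mean()

# Adjusted comparison: linear probability model with task dummies,
# fit by ordinary least squares.
X = np.column_stack([
    np.ones(n),                  # intercept
    (task == 1).astype(float),   # medium-task dummy
    (task == 2).astype(float),   # hard-task dummy
    high.astype(float),          # tenure indicator (coefficient of interest)
])
coef, *_ = np.linalg.lstsq(X, success.astype(float), rcond=None)
adjusted_gap = coef[3]

print(f"naive gap:    {naive_gap:+.3f}")
print(f"adjusted gap: {adjusted_gap:+.3f}")
```

With the assumed mix above, the naive gap comes out near zero or negative even though a +4-point tenure effect is built in; the task-dummy regression recovers roughly the true value. The report's controlled comparison is what makes the 3–5 point gap credible.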
The other finding worth noting: high-tenure users automate less and collaborate more. This runs counter to intuition — you'd expect power users to hand off more. The data says the opposite. People who have used AI longest are less likely to fully delegate tasks and more likely to iterate with feedback. The report acknowledges this directly — it contradicts a hypothesis from the previous year that experienced users would lean toward automation.


What the Data Does and Doesn't Say

What the data says:
The average level of AI use is declining: easier tasks, lower education levels, more everyday queries. This is an undeniable trend.
At the same time, high-value work is migrating to API-based environments, less visible but not gone, and longer-tenure users are handling harder tasks with higher success rates. The average is declining, but the density and performance of the top tier is rising: a polarizing structure.
And the people using AI best are choosing collaboration over automation. "AI will do it all" is not what the data shows.
What the data doesn't say:
This report, like the others, is built from Claude user data. People who don't use Claude, people using other AI tools, and industries that haven't adopted AI at all are absent. Whether high-tenure users' success advantage reflects learning-by-doing or selection bias — capable people who found AI useful and stayed — isn't yet separable. The report acknowledges this, noting cohort effects and learning effects can only be disentangled as more time accumulates.
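The caveat about needing more time has a concrete linear-algebra reading. In a toy setup (hypothetical cohorts and dates, not the report's data): with a single observation wave, every user's tenure is an exact linear function of their signup cohort, so a regression containing both cohort dummies and tenure is rank-deficient; a second wave breaks the collinearity.

```python
import numpy as np

# Toy identification check with hypothetical cohorts and dates (not the
# report's data). In one observation wave, tenure = obs_date - cohort,
# an exact linear function of the cohort dummies and intercept, so the
# design matrix is rank-deficient. A second wave makes it full rank.
def design_rank(obs_dates):
    cohorts = np.array([0, 3, 6])  # signup months of three cohorts
    rows = []
    for t in obs_dates:
        for c in cohorts:
            tenure = t - c
            # columns: intercept, cohort dummies, tenure
            rows.append([1.0, float(c == 3), float(c == 6), float(tenure)])
    return np.linalg.matrix_rank(np.array(rows))

print(design_rank([12]))      # one wave: rank 3 < 4 columns
print(design_rank([12, 15]))  # two waves: rank 4, effects separable
```

With one wave the matrix has rank 3 against 4 columns, so cohort and learning effects cannot both be estimated; with two waves it reaches full rank. That is the precise sense in which the question "can only be disentangled as more time accumulates."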


What's Left After the Froth

The froth is coming off. What that exposes is the real thing.
Once "AI will change everything" fades, the outline of what AI is actually changing becomes visible. Productivity gains in some professional segments. Higher success rates among a subset of skilled users. Automated workflows running quietly behind API layers. A narrower scope, but a real one.
These aren't revolutionary numbers. They're settlement numbers. And settlement only begins where the froth used to be.