
When AI Productivity Becomes Layoffs: The Workday Pattern Every CTO Should Read

When AI productivity becomes layoffs: the Workday pattern, the Klarna reversal, and the playbook for honest restructuring without losing the engineers you need.

“AI made us more productive” and “we are cutting headcount” are two different sentences. The interesting question is which one is doing the work in any given announcement, and whether the people staying behind believe the answer.

In February 2025, Workday cut roughly one thousand seven hundred and fifty jobs — about eight and a half percent of its workforce — with CEO Carl Eschenbach attributing the decision in a memo to prioritizing investment in AI. In the same earnings cycle, Marc Benioff told investors Salesforce was “seriously debating” hiring no software engineers at all during fiscal 2026. By December 2025, on Salesforce’s Q3 FY26 earnings call, Benioff confirmed the engineering hiring freeze had held and that internal developer productivity was up roughly twenty percent. Block announced a reorganization around AI in February 2026 affecting approximately four thousand employees. Meta, Cisco, Intel, Microsoft, Pinterest, Dow, and dozens of others have made announcements of the same shape.

In May 2025, Klarna’s CEO Sebastian Siemiatkowski admitted publicly that the company’s 2023-2024 decision to replace roughly seven hundred customer service agents with AI had been a mistake. “We went too far,” he told Bloomberg, and announced a return to human-led customer support, with AI relegated to a triage layer.

And in February 2026, the National Bureau of Economic Research published a working paper surveying nearly six thousand CEOs, CFOs, and senior executives across the US, UK, Germany, and Australia. The headline finding: ninety percent of firms reported no impact from AI on either employment or productivity over the prior three years. Apollo’s chief economist Torsten Slok summarized the moment: AI is everywhere except in employment, productivity, or inflation data.

So which is it? The companies announcing AI-attributed restructurings cannot both be right and be part of the ninety percent who see no impact. Something is being said that is not quite what is being meant.

What happened: the receipts

The pattern is best seen by looking at the specific announcements side by side, with the gap between the headline and the substance laid out plainly.

| Company | Action | Scale | Stated AI attribution | Substance |
| --- | --- | --- | --- | --- |
| Workday | Layoff (Feb 2025) | ~1,750 / 8.5% of workforce | “Prioritizing innovation investments like AI” | Real cost reduction in an HR-software market that over-hired post-pandemic; the AI framing made the cut look strategic rather than corrective |
| Salesforce | Engineering hiring freeze (FY26) | 0 net new SWEs for a year | “20-30% AI productivity gain” | Real productivity improvement plus a slowing growth phase; the freeze served both narratives |
| Klarna | Layoff, then reversal (2023-2025) | 700 cut, then rehired | Originally “AI replaced 700 agents”; later “We went too far” | The original cut was real and AI-enabled; the reversal showed the AI could not handle the long tail, and customer satisfaction recovered the headcount |
| Block | Reorganization (Feb 2026) | ~4,000 / ~40% | “Reorganization around AI” | Large restructuring with multiple drivers; AI was one of them, but the scale exceeds plausible AI attribution alone |
| Meta | Multiple rounds (2023-2025) | Tens of thousands | “Year of efficiency” plus AI | Overhiring correction first, AI narrative layered on later |

The honest read across the table is that AI is a real factor in some of these decisions, a partial factor in others, and a convenient explanation for cuts that would have happened regardless in several. The mistake engineering leaders make is treating the announcements as a single phenomenon. They are not. Workday’s cut and Klarna’s reversal and Salesforce’s freeze are three different things wearing the same costume.

The NBER paradox

The NBER paper is the most important data point for understanding the gap between announcement and reality. Six thousand executives across four developed economies. Sixty-nine percent of firms actively use AI. But ninety percent report no impact on productivity or employment over the past three years. Executives’ own AI usage averages roughly ninety minutes per week. The gap between deployment and effect is enormous.

This is not new in the history of technology adoption. Robert Solow’s 1987 productivity paradox — “you can see the computer age everywhere but in the productivity statistics” — describes the same pattern. The IT boom did eventually show up in the data, but only after companies redesigned workflows around the technology rather than layering it on top of existing ones. The gap between “computers are deployed” and “productivity has measurably increased” was roughly a decade. AI is currently inside that gap.

What this means for an engineering leader is sharp: if a peer CEO is publicly claiming AI-driven productivity gains that justify headcount reductions, the prior probability that the claim is fully accurate is genuinely low. They may be right. They may be selectively measuring. They may be doing what they would have done anyway and using AI as the cover story. All three are common; only the first is honest.

The forecast piece of the NBER paper is also worth knowing. The same executives who reported no past impact predicted that AI would boost productivity by roughly one and a half percent and reduce employment by under one percent over the next three years. These are modest numbers. They are not the numbers that would justify the most aggressive AI-attributed restructurings now being announced. The executives are simultaneously betting big and forecasting small.
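The size of that mismatch is easy to quantify. A minimal sketch, using the forecast figure above against two of the announced cuts from the table; the arithmetic is illustrative, not a claim about any company's internal math:

```python
# "Under one percent" employment reduction forecast over three years,
# compared against announced single-event cuts (illustrative only).
forecast_employment_reduction = 0.01
announced_cuts = {"Workday": 0.085, "Block": 0.40}

for company, cut in announced_cuts.items():
    ratio = cut / forecast_employment_reduction
    print(f"{company}: announced cut is roughly {ratio:.1f}x "
          f"the three-year forecast reduction")
```

Even granting the forecast its full value, the most aggressive announced cuts run an order of magnitude beyond what the same executives say they expect AI to deliver.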

When AI-attribution is honest and when it’s not

The useful test is mechanism. AI-attribution is honest when there is a specific, traceable mechanism by which AI changed the work that the laid-off employees were doing, such that the work is now genuinely being done by AI. It is dishonest, or at best motivated, when the work is being absorbed by other humans, deferred indefinitely, or simply not happening anymore.

The honest version looks like this. The team had a measurable workload — customer support tickets, code review queue, document processing, claims adjudication. A specific AI system was deployed against that workload. The workload was measured before and after. The post-deployment metric showed a sustained, durable reduction in the human work needed without quality degradation. The headcount reduction matched the measured reduction in human-required work. The remaining humans have their workload preserved or rebalanced toward higher-value tasks. The team can describe, in concrete terms, what the AI is doing that the people used to do.

The dishonest version looks different. The team had no measurable workload to begin with, or the workload was loosely defined. The “AI deployment” is more aspirational than operational — copilots used episodically, not workflows fundamentally restructured. The headcount cut was decided before the AI mechanism was specified; AI is the narrative wrapper, not the cause. The remaining team is doing more work to absorb the gap. Quality is degrading in ways that are not yet showing up in the metrics the executives watch. The team cannot articulate what AI is specifically doing that previously required a human.
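The contrast between the two versions reduces to a simple check: does a measured, quality-preserving reduction in human-required work actually cover the proposed cut? A minimal sketch with hypothetical numbers; the metric names, thresholds, and tolerance are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class WorkloadMeasurement:
    """Before/after measurement of a team's human-required workload."""
    baseline_units_per_week: float   # e.g. tickets, reviews, claims handled by humans
    post_ai_units_per_week: float    # human-required units after AI deployment
    quality_baseline: float          # e.g. CSAT or defect rate proxy, before
    quality_post_ai: float           # same metric, after

def justified_cut_fraction(m: WorkloadMeasurement,
                           max_quality_drop: float = 0.02) -> float:
    """Return the headcount-reduction fraction the measurement supports.

    Returns 0.0 if quality degraded beyond tolerance, because a cut
    'paid for' by quality loss fails the mechanism test.
    """
    if m.quality_baseline - m.quality_post_ai > max_quality_drop:
        return 0.0
    reduction = 1.0 - m.post_ai_units_per_week / m.baseline_units_per_week
    return max(0.0, reduction)

# Hypothetical: AI absorbs 30% of weekly volume, quality holds steady.
m = WorkloadMeasurement(1000, 700, 0.90, 0.89)
supported = justified_cut_fraction(m)
proposed = 0.20
print(f"measurement supports a cut of {supported:.0%}; proposed {proposed:.0%}")
print("mechanism test:", "passes" if proposed <= supported else "fails")
```

The point of the sketch is the shape of the check, not the numbers: the honest version can fill in every field from real measurement; the dishonest version cannot.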

The Klarna case is instructive precisely because it crossed from the first category to the second. The original cut had a real mechanism — an AI handling a measurable fraction of support tickets. The reversal happened because the metric being measured (tickets resolved by AI) masked the metric that mattered (customer satisfaction on complex interactions). The AI was doing the easy work; the humans had been doing the hard work; cutting the humans broke the system. The lesson is not that the AI was bad. It is that the measurement was incomplete, and the cut was made on the incomplete measurement.
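The measurement failure is easy to reproduce in miniature. With hypothetical ticket volumes (not Klarna's actual numbers), an aggregate deflection rate can look like a success while the segment that matters collapses:

```python
# Hypothetical ticket mix illustrating the incomplete-measurement trap:
# the AI resolves most easy tickets, so aggregate deflection looks great,
# while satisfaction on the complex tickets humans used to own craters.
tickets = [
    # (segment, weekly_volume, ai_resolved_fraction, csat_after_ai)
    ("easy",    8000, 0.85, 0.92),
    ("complex", 2000, 0.10, 0.55),   # the long tail the AI can't handle
]

total = sum(v for _, v, _, _ in tickets)
ai_deflection = sum(v * frac for _, v, frac, _ in tickets) / total
blended_csat = sum(v * csat for _, v, _, csat in tickets) / total

print(f"aggregate AI deflection: {ai_deflection:.0%}")  # the slide-deck number
print(f"blended CSAT: {blended_csat:.2f}")              # still looks tolerable
for seg, _, _, csat in tickets:
    print(f"  {seg} segment CSAT: {csat:.2f}")          # the hidden failure
```

A seventy percent deflection rate and a passable blended satisfaction score coexist with a complex-ticket experience bad enough to force a reversal, which is exactly the gap between the metric measured and the metric that mattered.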

Second-order effects every CTO should plan for

Even when the AI-attribution is honest, the cut has second-order effects that most announcement decks underweight. Four worth planning for.

Knowledge concentration. When you cut twenty percent of an engineering team and claim AI productivity covers the gap, what you actually have is the same body of institutional knowledge in fewer heads. The on-call rotation gets shorter. The bus factor gets worse. The senior engineer who was the only person who understood the legacy payments service is now still the only person and has no one to teach. AI is not a substitute for institutional knowledge transfer; it is a productivity multiplier for the people who already have the knowledge.

On-call burden. AI helps write code. It does not help carry a pager. The remaining engineers absorb the on-call rotation of the people who left, often with the same or increased system complexity. Burnout shows up six to twelve months after the cut, in the form of senior engineers quietly leaving for companies that did not cut as aggressively. The replacement cost substantially exceeds the savings from the original cut.

Succession risk. A team that stops hiring junior engineers because “AI productivity covers the gap” is a team that has stopped building its own future senior engineers. The pipeline from junior to mid to senior is a three-to-seven-year capability investment. Pausing it for a year is recoverable. Pausing it for three or four years is the kind of structural mistake companies look back on as the moment they lost their engineering depth.

Cultural signal. Every employee in an organization with an AI-attributed cut is now doing math on whether their role is next. The signal is loudest among the most senior, most marketable engineers — the ones who have other options. The retention problem after an AI-attributed cut is not the people you wanted to keep all along; it is the people you assumed would stay because they always have, who now have reason to update.

A communication and restructuring playbook

For engineering leaders facing a genuine need to restructure with AI as part of the story, a few practical principles separate the honest version from the disastrous one.

First, name the mechanism specifically. Do not say “AI productivity gains” without saying what work AI is now doing, what work used to require humans, and how the gap was measured. Vague claims breed disbelief among the engineers you most need to retain. Specific claims, even small ones, build credibility.

Second, do not announce the AI productivity narrative before the measurement is in. The single largest cause of the Klarna outcome is announcing the cut before the AI deployment has been operationally validated. The right sequence is: deploy, measure for at least two quarters, validate that the metric you measured is the metric that matters, then decide on headcount. The wrong sequence is to decide on headcount first and search for a mechanism that justifies it.
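That sequence can be expressed as a literal gate. A sketch with assumed thresholds (two quarters, a ten percent sustained reduction; both are placeholders, not recommendations):

```python
def headcount_decision_gate(quarterly_reductions: list[float],
                            outcome_metric_validated: bool,
                            min_quarters: int = 2,
                            min_sustained_reduction: float = 0.10) -> bool:
    """Gate a headcount decision on validated, sustained measurement.

    quarterly_reductions: measured fraction of human-required work absorbed
    by AI in each full quarter since deployment (most recent last).
    outcome_metric_validated: True only if the measured metric has been
    checked against the outcome that actually matters (e.g. customer
    satisfaction), not just the activity metric (e.g. tickets deflected).
    """
    if not outcome_metric_validated:
        return False        # the Klarna failure mode
    if len(quarterly_reductions) < min_quarters:
        return False        # not enough measurement history yet
    recent = quarterly_reductions[-min_quarters:]
    return all(r >= min_sustained_reduction for r in recent)

# Hypothetical readings:
print(headcount_decision_gate([0.22, 0.27], outcome_metric_validated=True))   # passes
print(headcount_decision_gate([0.30], outcome_metric_validated=True))         # one quarter only
print(headcount_decision_gate([0.22, 0.27], outcome_metric_validated=False))  # metric unvalidated
```

Deciding on headcount first and searching for a mechanism afterward is the equivalent of skipping every branch of this function and returning True unconditionally.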

Third, separate the cost-reduction narrative from the AI narrative when they are actually separate. If a team is being cut because the business unit is shrinking, say so. Layering “AI productivity” on top of “shrinking business unit” produces a story that no engineer believes and that erodes trust for the next round.

Fourth, invest in the remaining team’s career path. If the official story is that AI is making the team more productive, the remaining engineers should be doing more interesting work, not just more work. If their day-to-day looks the same as before the cut except with more on-call shifts, the story is not true and they know it.

Fifth, respect the gap between announcement and reality. If you believe your AI deployment is genuinely going to deliver Salesforce-level productivity gains, build the org around that belief by hiring different roles — fewer junior engineers, more senior architects, more deployment-oriented engineers, more product partners — rather than just reducing the existing org. The companies that have made AI restructuring work have used it to change the shape of the team, not just the size.

The hard truth underneath the announcements is that the AI productivity gains that justify the most aggressive headcount cuts are not yet showing up in the broad data, and the leaders making those cuts know it. Some are doing real, mechanism-grounded restructuring. Some are using AI as the polite framing for cuts they would have made anyway. The Klarna reversal is the proof that the latter strategy has a back-end cost most executives have not modeled. The Workday pattern works when the AI mechanism is real and measurable; the Salesforce pattern works when productivity gains are durable and the remaining team genuinely has slack. The mistakes happen when companies adopt the framing without the mechanism. The engineers in the room can tell the difference, and the cost of being caught short is paid in the months and years after the announcement, when the people you most need to keep update their LinkedIn and start taking calls.