Why "Productivity" Is No Longer Enough to Justify Your AI Investment

Enterprise CFOs just changed the AI ROI standard. Productivity is out; P&L impact is in. Are your employees trained for the new bar?

A major new survey of 830 enterprise IT decision-makers, released just weeks ago by The Futurum Group, landed a quiet bombshell that most of the AI commentariat completely ignored: productivity gains — the argument that has justified nearly every enterprise GenAI purchase since 2023 — just fell from the #1 AI ROI metric to #2. In its place, CFOs are now demanding direct P&L impact: revenue growth and margin improvement nearly doubled as the primary measure of AI success. Everyone is busy writing about what AI can do. Almost nobody is writing about what just changed in how your executive team is going to evaluate whether it worked.

This shift matters far more than it sounds. For the past two years, the "productivity" frame was enormously useful — it gave functional managers a low-stakes way to introduce AI, gave employees a forgiving benchmark ("did you save a few hours?"), and gave vendors an easy win to claim. But that era is closing fast. The 2026 enterprise buyer, as Futurum's research director put it directly, is "significantly more sophisticated than their 2025 counterpart," and sales conversations that lead with "save 4 hours per week" are now opening with a losing argument. The measurement goalposts have moved — not slightly, but structurally.

What makes this particularly interesting is that the tools themselves haven't changed dramatically. Microsoft Copilot, Google Gemini, and ChatGPT Enterprise are all broadly the same products your organization deployed one or two years ago. What changed is that the patience window for "we're building adoption" as a sufficient answer to the CFO has expired. Recently cited Wharton research puts an exclamation point on this: AI deployment surged 400% across enterprises in 2024 and 2025, while only 12–18% of companies captured meaningful ROI. You can have massive adoption numbers and still be in that bottom 82%.

The gap isn't a technology problem. It's a deployment and skills problem — specifically, a problem of whether functional employees know how to connect AI assistance to outcomes that move a number on a spreadsheet, not just outcomes that feel easier. Feeling faster is not the same as being more profitable. Organizations that built their AI case on the former are going to have a very uncomfortable budget conversation this fall.

Here's the pattern I see repeatedly: an org rolls out Copilot or Gemini, employees start using it, usage dashboards look great, and leadership declares the deployment a success. But when someone asks, "What business result did we actually move?" the room goes quiet. This is the Activity Trap in its most expensive form — high engagement, low connection to outcomes that the CFO actually cares about. The tool is running. The ROI isn't.

And for organizations that haven't crossed into broad adoption yet? The pressure just got worse, not better. If you can't show productivity gains and you can't show P&L impact, you're defending a line item that leadership increasingly sees as a sunk cost. The slide from "we're being thoughtful about rollout" to Shelfware Stall is shorter than most managers think, and the new ROI standard makes it even harder to buy more time with vague adoption metrics.

Here is my honest opinion: most organizations were never going to get P&L-level AI ROI by training employees to use AI faster. They were going to get it by training employees to use AI differently — to do work that previously required a more expensive resource, a longer timeline, or a specialized skill they didn't have. That is a fundamentally different capability than "here's how to summarize a meeting in Copilot," and it's the capability almost no enterprise training program has actually built. The productivity framing wasn't just a measurement choice — it quietly shaped what employees were taught, what they practiced, and what they thought AI was for. Shifting the ROI standard without shifting the skills model is going to leave a lot of organizations holding an expensive, underperforming tool and no clear path forward.

If you're not sure whether your organization is building toward real business value or just logging activity, the BetterWork AI Adoption Audit takes four questions and gives you an honest read on where you actually stand — and what to do about it. It's free, it's specific, and it won't tell you everything is fine if it isn't.
