
The compounding costs your AI budget didn’t calculate


In most organizations, the CFO and the CIO are looking at the same AI program and seeing completely different problems.

The CFO sees a budget line that keeps growing: millions invested, sometimes more, and P&L still hasn’t moved. Every quarterly review produces the same conversation: the investment is real, the return isn’t, and yet the ask is always for more time and more budget.

The CIO sees something different. The demos have impressed, and some tools are genuinely being used. The problem isn’t that nothing is working — it’s that nothing is graduating. Pilots that prove the concept in controlled conditions fail to survive contact with production. The AI portfolio keeps growing, but the production deployment count stays stubbornly low.

Both perspectives are accurate. Neither one gets to the root cause.

What’s happening underneath is a cost structure problem — and it compounds quietly with every pilot that doesn’t make it to production and every tool that technically ships but doesn’t actually get used.

The costs that don’t show up in the project budget

When organizations account for AI spend, they typically count what’s visible: the cost to build or procure a tool, the infrastructure to run it, the subscriptions that come with it. What doesn’t show up in that accounting is everything that accumulates next:

Maintenance on tools built to prove a point. A pilot is scoped to demonstrate feasibility, not to run indefinitely. The integrations are purpose-built and fragile. When upstream systems change — and they always do — someone has to fix the connection. When the model behaves unexpectedly in a new context — and it will — someone has to investigate. When the governance rules need updating — and they do, constantly — someone has to find where each rule lives in each tool and update them one by one. In project-mode AI, none of this work was budgeted, because the goal of the project was to build it, not to run it.

Multiply that by a portfolio of fifty pilots and the maintenance burden isn’t a line item — it’s a tax on the entire engineering organization that grows with every deployment and never decreases.

Human-in-the-loop costs that were supposed to go away. The ROI case for most AI pilots rests on an assumption of reduced human labor: when the AI handles the work, the humans are freed for higher-value tasks and costs compress. What happens in practice is that the AI gets deployed with humans still in the loop, checking outputs, approving actions, and handling the exceptions the system can't. In most cases that lingering oversight is a symptom of an AI that couldn't be trusted to operate autonomously but was deployed anyway, because the pilot needed to ship.

The result is a cost structure the CFO's model never anticipated: the organization is now paying for the human, the AI subscription, the infrastructure, and the oversight process that reconciles all three. The AI didn't reduce cost; it added a layer on top of the existing cost.

Parallel systems nobody talks about. The most invisible cost in the portfolio is the one teams build for themselves. A pilot gets board-level excitement, gets shipped, gets announced. And then quietly, the team that was supposed to use it keeps running the spreadsheet, the manual process, or the legacy tool they had before. Not out of resistance — out of necessity. The official AI tool doesn’t work in production the way it worked in the demo: the data is wrong, the integrations are unreliable, the edge cases it can’t handle are the ones that matter most in real workflows.

No one reports this openly because it’s uncomfortable. The CIO doesn’t know the tool isn’t being used. The CFO doesn’t know the system they approved budget for is running in parallel with the system it was supposed to replace. The organization is paying for both, getting the value of neither, and the gap between the official story and the operational reality widens with every quarter.

Why this compounds

Each of these costs is manageable in isolation. Maintain one tool, keep one human in the loop, run one parallel system — none of that is catastrophic. The problem is the trajectory.

In project-mode AI, every new deployment adds its own maintenance tail, its own human oversight layer, its own probability of becoming a parallel system. The costs don’t compound downward as the portfolio grows — they compound upward. The organization is running faster and faster to stand still, and the gap between AI investment and P&L impact keeps widening even as genuine technical progress is being made.

This is what the CFO is seeing from the top: an investment that produces activity without return, where the ask for more budget is structurally guaranteed because the cost model underneath is broken. And this is what the CIO is seeing from inside: a portfolio that looks active but isn’t producing, where the engineering team is consumed by maintenance rather than new capabilities and every new use case costs as much as the first.

They’re describing the same building from different floors.

The architecture underneath the accounting

These aren’t problems with AI. They’re problems with AI deployed without the infrastructure that would make it sustainable.

Maintenance compounds because every tool was built with its own integrations rather than drawing on a shared layer. When a system changes, the fix happens fifty times instead of once.

Human-in-the-loop costs persist because governance was bolted on after the fact rather than encoded into the platform.

Parallel systems proliferate because the official tools were built on demo data and controlled conditions rather than on a unified context layer that reflects the actual state of the business.

The CFO’s P&L problem and the CIO’s production problem have the same root cause: a cost structure built for one-off projects that was never designed to compound.

The organizations that have crossed from pilot purgatory to production-scale AI didn't find a way to manage that cost structure better. They replaced it with one that works differently: each deployment draws on shared infrastructure rather than rebuilding from scratch, and the marginal cost of each new use case falls instead of repeating the cost of the first.

That’s an architecture decision — and it’s the decision that determines whether the next round of AI investment compounds value, or compounds the problem. The Third Transformation goes deeper on what that architecture looks like in practice — and what it takes to build AI infrastructure that gets cheaper, not more expensive, as it scales.
