By: Yossi Abraham
Customer experience is universally praised, and quietly questioned.
In boardrooms and budget reviews, few leaders will argue that Customer Experience (CX) does not matter. The language is familiar: “customers expect more,” “digital-first is the new normal,” “loyalty is fragile.” All of that is true. The tension shows up later, when the conversation turns from aspiration to allocation. When budgets tighten and priorities collide, CX often finds itself on the defensive. Not because executives have suddenly become anti-customer, but because belief alone does not survive scrutiny. In today’s environment, CX is no longer judged by how good it sounds or how well it scores. It is judged by whether it produces measurable business value.
That shift is uncomfortable for some teams, but it is also healthy. It forces clarity. It pushes us to connect what customers feel to what the business funds. It rewards programs that deliver both better experiences and better outcomes. It also exposes a hard truth: many CX initiatives struggle to prove ROI, even when customers genuinely benefit.
The gap is rarely about intent. It is about design.
For years, CX has been treated as a strategic pillar, often sitting alongside growth, innovation, and transformation. Strategy, however, is not a budget line item. Finance leaders and operating leaders need to understand how experience improvements translate into tangible results: lower cost to serve, fewer avoidable contacts, faster resolution, higher digital adoption, and better retention. They also need to see those results within a timeframe that matches how businesses actually run. In an era of quarterly pressure and continuous reprioritization, “eventually” is a risky promise.
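The translation from experience improvement to financial outcome can be made concrete with simple unit economics. The sketch below is illustrative only: every input (contact volume, deflection rate, cost per contact, handle-time reduction) is a hypothetical assumption chosen for demonstration, not data from any real program.

```python
# Illustrative unit-economics sketch: how a CX improvement, such as a
# better self-service path, might translate into annualized cost savings.
# All inputs are hypothetical assumptions.

def cx_cost_savings(
    annual_contacts: int,      # baseline assisted contacts per year
    deflection_rate: float,    # share of contacts the new experience avoids
    cost_per_contact: float,   # fully loaded cost of one assisted contact
    aht_reduction_min: float,  # minutes shaved off average handle time
    cost_per_minute: float,    # agent cost per handled minute
) -> float:
    """Annualized savings from deflected contacts plus faster handling."""
    deflected = annual_contacts * deflection_rate
    deflection_savings = deflected * cost_per_contact
    # Contacts that still reach an agent benefit from the shorter handle time.
    remaining = annual_contacts - deflected
    aht_savings = remaining * aht_reduction_min * cost_per_minute
    return deflection_savings + aht_savings

# Hypothetical example: 1M annual contacts, 10% deflection at $5 per
# contact, plus 0.5 minutes faster handling at $1/min for the rest.
savings = cx_cost_savings(1_000_000, 0.10, 5.0, 0.5, 1.0)
print(f"${savings:,.0f}")  # $500,000 deflection + $450,000 AHT = $950,000
```

A model this simple is not the point; the point is that each term maps to an operational metric a finance leader already recognizes, which is exactly the translation the paragraph above calls for.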
Traditional CX measures such as Net Promoter Score and customer satisfaction still matter. They capture sentiment. They can signal emerging risk before it becomes obvious in churn or complaints. But on their own, they are often too indirect to carry a funding conversation. A rising score does not automatically explain what changed operationally, what it saved, or what it earned. Meanwhile, a flat score does not necessarily mean the investment failed. Customers can be happier and still score you the same if expectations moved, competitors improved, or the scoring question did not reflect the journey you fixed.
This is where skepticism grows. If CX cannot translate improvement into outcomes the organization recognizes, experience becomes easier to trim than to defend. Teams end up arguing for the moral value of customer-centricity rather than the economic value of reducing customer effort. That is a losing posture, not because it is wrong, but because it is incomplete.
When CX initiatives fail to demonstrate ROI, it is usually not because the experience did not improve. More often, ROI was never designed into the program from the beginning.
I have seen this pattern repeat across industries. A journey is redesigned. A new digital path is launched. A channel is modernized. The organization celebrates a release date, then realizes it cannot answer basic questions: Did contact volume change? Did handle time drop? Did customers complete tasks faster? Did repeat contacts decline? Did retention improve in the segment that used the new experience? The team scrambles to instrument measurement after launch, when baselines have already shifted and data gaps are expensive to close.
Three structural issues tend to drive the problem.
First, measurement is treated as a phase instead of a requirement. Teams will spend months designing experiences and weeks debating microcopy, then allocate a fraction of that time to defining what “success” means and how it will be tracked.
Second, time-to-value is underestimated. Many experience programs rely on broad, multi-quarter rollouts that delay observable impact. During that gap, leadership changes, priorities shift, and budgets tighten. Even good programs can die from lack of early evidence.