What the AI:ROI Spring Conference revealed about enterprise AI transformation
A year ago, the primary AI question inside most companies was, “How do we get people to use this?” Champions programs, rollout plans, lunch-and-learns – the goal was adoption.
But at our AI:ROI Spring Conference last week, the leaders onstage had moved well past that. They were talking about AI pilots, governance frameworks, agents in production, portfolio management, outcome measurement, and code. Enterprise AI transformation has entered a new phase – and it’s moving fast enough that last quarter’s playbook may already be stale. Four shifts stood out.
From activity to outcomes
The first generation of AI measurement was about usage – how many people logged in, how many prompts they ran, how often they opened the tools. That made sense when the goal was adoption. But the companies getting real results have learned the hard way that activity metrics can be misleading.
UKG CIO PK Kota shared a telling example. His team started by measuring how much code was being generated using AI – and people gamed it. The metric showed adoption but revealed nothing about value.
So they shifted. For code assist, they now track what actually gets pushed into production. For internal GPTs, they stopped counting how many were created and started asking which ones are embedded in business workflows that run automatically. “Let’s not measure activities or tasks,” PK said. “Let’s measure outcomes.”
IBM CIO Matt Lyteson described a similar evolution. IBM moved from brainstorming possible AI applications toward building cross-functional “fusion teams” – seven to nine people with data, AI, platform, and business expertise, all aligned on specific outcomes. They define a measurable target – review 100% of contracts at a given accuracy level – then run one- to two-week sprints against it. The question isn’t “where should we try AI?” anymore. It’s “what result are we designing for, and can we hit it this sprint?”
The shift matters because it changes what gets funded and what gets killed. When you measure activity, everything that gets used looks like a win. When you measure outcomes, you find out fast which use cases are worth running. That’s where AI transformation ROI actually shows up – not in adoption dashboards, but in the work that changes because of it.
From experiments to operations
Getting everyone to experiment was the right first move. But it has a ceiling – and the best companies at the conference described hitting it.
Zapier CEO Wade Foster put it plainly. After getting the whole company building with AI, a new problem emerged – they had far more high-impact ideas than capacity to execute. And many required cross-functional AI change management that no single person could drive alone.
Wade also named something most leaders haven’t caught up to yet – the economics of building have flipped.
“Code used to be this very expensive thing,” he said. “And so there’s all these rituals inside of a company that recognize the fact that code is expensive.”
Now that code is cheap, the rituals built around expensive code become actively wasteful. The bottleneck has moved from creation to editing, testing, and judgment.
That’s a profound operational shift. The companies recognizing it are moving to rapid iteration with tight feedback loops. UKG described a maximum 90-day cycle – pilot goes in, subset of users, measure, iterate. Not a six-month planning process followed by a launch.
Meanwhile, companies that didn’t build operational structure are drowning in their own experiments. One conference attendee captured it in the chat: “Having hundreds of GPTs or agents actually made things harder for employees to find what they needed.” Experimentation without coordination creates noise, not scaled impact.
The ground keeps moving
The shift from experimentation to operations would be hard enough if the technology held still. It doesn’t.
DJ Sampath, SVP of AI at Cisco, described the pace as “almost a vertical slope” – and said it’s still accelerating. His most striking example cuts to the core of enterprise IT – the assumption that deployed code stays static. “You’ve always assumed that when you deploy code or software in production, the code stays static… But right now that very core assumption is being challenged, because you have agents that are able to generate code to perform a specific type of task.”
The thing you deployed isn’t the thing that’s running. That breaks most of what enterprise security and governance were built on – and it creates real organizational tension between speed and safety. Organizational AI readiness, it turns out, isn’t a moment you reach. It’s a posture you maintain. The companies managing it well are the ones who’ve built systems designed to adapt as fast as the technology does.
Your moat moved too
The pace of change is also changing competitive advantage, fast.
DJ described enterprises building smaller, purpose-built models trained on proprietary data and tuned to their specific network topology, security vulnerabilities, and ticket workflows. Agents draw on these specialized models alongside the frontier models in what he called an ensemble approach, designed for “very specific hyper-specialized needs” that no off-the-shelf product can touch.
“We’re leaving the SaaS era, where a bunch of companies built generic software packages that we then use,” he said. “Now we’ve got these AIs that can really be purpose-built. They can build purpose-built software on demand, on very specialized needs. We’re starting to build our own systems.”
Section CEO Greg Shove and Scott Galloway of Pivot said that when building gets easy, distribution and brand will be more important than ever. When supply explodes, the buyer’s attention and trust are at a premium. The companies that know how to go to market – with capital, brand, and trust – will win.
For companies building internally, the implication is clear. The moat isn’t the software. It’s the proprietary workflows, data, and organizational knowledge that no competitor can replicate – and the infrastructure to keep training against it as the ground shifts.
What to do about it
The companies pulling ahead at the conference weren’t doing anything exotic. They’d just stopped running AI like a side project. A coherent AI strategy for enterprises right now has four moves.
Move your metrics from activity to outcomes. Stop only counting logins and prompt volume. Start tracking what reaches production, what changes a business process, and what you can kill because the new way is better. If you’re still only measuring adoption, you’re measuring the wrong thing.
Build operational structure around your experiments. The experimentation phase is necessary, but it’s not a strategy. Stand up cross-functional teams with clear outcome targets and short sprint cycles. If a pilot can’t show measurable results in 90 days, reshape it or cut it.
Treat your proprietary data and workflows as your competitive advantage. The models are commoditizing. What won’t commoditize is your organization’s specific knowledge, context, and processes. Invest in the infrastructure to train and run models against that data – and in the teams who can keep doing it as the technology evolves.
And accept that the ground is going to keep moving. The leaders who looked most confident weren’t the ones who had it all figured out. They were the ones who’d built systems designed to adapt – measurement that catches what’s working, operations that can pivot fast, and a team that treats change as the operating environment, not the exception.