Takeaways from Section’s AI:ROI Spring Conference
Across our AI:ROI Spring Conference this week, the leaders with real impact all delivered the same message - tooling matters, but culture and leadership do most of the work.
If you’re a Head of AI, your job is less “pick the right model” and more “change how this company works.” Three themes came up again and again - your best people are already carrying AI adoption, everyone else needs to catch up, and governance is the unlock that lets it all scale.
1. Your best people are already 10xing themselves - back them
Stop treating your best AI users as accidental enthusiasts. Identify the top AI-native ICs in each function, give them better tools and direct support, and ask them explicitly to help redesign high-value workflows. Then turn their successful experiments into supported patterns so others can adopt them without reinventing everything.
That was a consistent message across speakers - a subset of individual contributors is getting far more done because they treat AI as a core part of the job.
Zapier CEO Wade Foster calls them "super ICs." And they're not limited to engineering. They show up in marketing, in product, in ops. They're the ones using these tools every day and quietly rebuilding their workflows.
He shared one example - a woman in marketing went deep on Codex over a weekend and untangled a problem that entire teams had failed to solve. She came back with the root cause, five things to fix, and the first two already done.
Wade's other observation - the super ICs aren't new high performers. "Some of our most productive people right now are the same people that have always been the most productive. They were already a super IC, and they have maximal curiosity." But the senior engineer who's resistant is falling behind the maximally curious 20-year-old intern. As Wade put it: "Hire for slope, not for y-axis. I think that is triply true in the age of AI."
UKG CIO PK Kota shared concrete examples - engineers using code assistants to speed delivery and reduce production defects, Sales and Customer Success reps using AI for "next best action" and seeing their metrics move.
Salesforce and Cisco both rely on super ICs too. Diane Igoche at Salesforce uses Agentforce "champions" in each org to co-design agents and drive adoption. DJ Sampath's team at Cisco builds the AI plumbing, but the first places it lands are picked by strong ICs who know which workflows are worth touching.
2. AI has to be everyone’s job, but the system needs an owner
Super ICs prove what's possible. The next step is making AI part of how the whole organization works - not just the people who are naturally curious. But spreading AI everywhere without clear ownership is how you get chaos.
At Zapier, Wade described the release of GPT-4 in 2023 as a lightbulb moment.
He called a "code red" and paused normal work for a week. It wasn't an engineering offsite - Marketing, Sales, Finance, Support, HR, everyone was in. Leadership shared short Looms with "interesting things to try," and the brief was simple: spend the week finding ways AI could improve your work.
Weekly AI use jumped from about 10% of the company to more than half. More importantly, no one could plausibly claim anymore that AI was just something the devs do.
Wade was deliberate about who owned the transformation. He chose to wait before appointing a Head of AI.
"We didn't want people to think, 'Great, that's their problem now.'" Only after AI was clearly part of everyone's job did he name an executive to lead - his Chief People Officer. His reasoning: if you want to change how work happens, the people who run performance, training, and job design should be in charge - and the HR team was already a heavy AI user.
At Cisco, DJ Sampath described a cross-functional responsible-AI group spanning product, engineering, legal, security, and business - a defined owner for standards and guardrails so teams can move without creating risk.
At Salesforce, Diane Igoche plays a similar role, sitting between customer-facing leaders, the agent-building teams, and legal. Her team owns the intake process that keeps thousands of internal agents from becoming unmanaged sprawl.
The pattern is clear - AI must become part of every function's job, but the system still needs an owner. That means an AI lead in each major function who's accountable for use cases and results, and a cross-functional council that owns intake, standards, and risk. It avoids both abdication ("that's the AI team's thing") and uncontrolled DIY.
3. Governance isn't the blocker. It's the unlock
Every serious operator at the conference who is shipping AI at scale has invested in governance - not to slow themselves down, but because it was the only way their executives would say yes. None of them framed it as a reason to stay away from AI.
Cisco's "discover, detect, protect" model is one example. Salesforce's intake and council for Agentforce is another. At UKG, PK Kota introduced structure when he saw multiple teams running overlapping pilots with the same vendors.
His response was to run AI like a VC portfolio with three tiers:
- Scale - functional users have prioritized use cases with clear ROI. The team asks two questions - not just "is it faster?" but "is it effective?" They also look at whether the business process itself should change, not just insert an AI step into an existing workflow.
- Growth - new capabilities that don't exist today but are now feasible because of AI, focused on putting the customer at the center.
- Exploration - bounded room for anyone in the company to experiment. "Almost like citizen development," PK said - people build custom GPTs to make their own work better, not for the broader org.
At Northwell Health, Chief Digital Officer Kristin Myers described a formal AI risk and ethics process involving legal, security, finance, and regional executives - but one designed to ship, not stall. Northwell has 104,000 employees across 28 hospitals, and their CFO sits on the AI executive committee.
Finance is deeply embedded in governance, but the team recognizes you can't just look at hard ROI. Investments like ambient clinical documentation are evaluated on physician burnout, satisfaction, and retention - what Myers called "foundational capabilities." The intake process starts with "what is the problem we're trying to solve?" - not "what tool should we buy?"
The signal governance sends matters as much as the mechanics. If the message people hear is "AI is dangerous, talk to Legal," they will avoid it. If the message is "we know the rules and we'll help you stay inside them," they will experiment.
The bottom line
Across the stories from Zapier, UKG, Cisco, Salesforce, and Northwell, the messages were consistent.
The real gap right now isn’t between one frontier model and another. It’s between leaders who are willing to change how their organization works and leaders who hope better tools alone will do it for them.
The most credible Heads of AI are:
- backing the people who are already using AI well and letting them influence workflows,
- making AI part of every function's job, with clear ownership from the top, and
- putting just enough governance in place to be trusted and fast.
The technology will keep improving either way. Whether your company keeps up is now a leadership question, not a tooling one.