Many of your employees know how to use AI. They've attended trainings and can write a prompt. They might even know not to paste PII into a public LLM. By the metrics most companies were tracking in 2025, they're basically proficient.
But when we analyzed 4,500 work-related AI use cases across 5,000 knowledge workers for our AI Proficiency Report, only 15% were likely to generate ROI for the business.
The problem isn't that your employees don't use AI. Most of them use it every week. It's that no one has told them what to use it for - and the data shows it, even in your strongest functions. (For a broader look at the proficiency gap, start here.)
AI use cases are basic across nearly all functions
Here's what we found when we analyzed 4,500 work-related AI use cases:
- 26% of workers have no work-related AI use case at all
- 59% of reported use cases are basic task assistance - one-off help with a single task, disconnected from any larger workflow
- Only 2% of use cases are advanced, meaning they involve automations that benefit the organization
- Only 15% of reported use cases are likely to generate ROI for the business
If you're leading AI transformation, you might assume this is primarily a problem in less technical or less language-intensive parts of the business. Our data says otherwise.
Consider engineers. Of all the functions we surveyed, they lead in AI proficiency and are among the most frequent users of AI. And yet 54% of engineers don't use AI for writing or debugging code, scripts, or formulas - the single most obvious, high-value AI application for their entire job.
The pattern holds for product managers too - 87% don't use AI for creating prototypes. And 56% of marketers don't use AI to create first drafts of content.
These aren't lagging departments. These are the people you'd expect to be ahead. And they're skipping the most valuable applications of AI in their roles, defaulting to basic applications like everyone else.
This isn't a function-specific or an industry-specific problem. It's a structural one.
Why "just encourage experimentation" isn't a strategy
There's a gap between the freedom leadership thinks it has extended and the guidance employees feel they've actually received.
When we surveyed C-suite executives, the majority said their company encourages employees to experiment and build their own AI solutions. But when we asked individual contributors the same question, only 10% agreed.
The instinct to create a culture of experimentation is well-intentioned, but it assumes employees have enough context about their own workflows, and enough familiarity with AI's capabilities, to independently identify where AI can create leverage. Most don't - not yet, anyway.
Use case development cannot solely be a personal responsibility for employees. It needs to be a core competency - something your organization actively builds, curates, and teaches.
Three ways to close the gap
1. Build function-specific use case libraries
The highest-leverage step most organizations can take right now is to create curated, role-relevant AI use cases. Not a generic "here are 50 things you can do with AI" document, but a focused set of applications that are specific to what an engineer, a product manager, a marketer, or a financial analyst actually does every day.
These libraries should be built in tandem with the people doing the work, validated against what actually saves time or improves output quality, and updated regularly as AI capabilities evolve. They give employees a map instead of a mandate, and they dramatically reduce the "I don't know what to use this for" problem.
2. Make use case development a measured responsibility for managers
Right now, use case development - to the extent it happens at all - is something individuals do on their own time.
Make use case development an explicit priority for managers and team leads. Require every manager to identify and track at least three meaningful AI use cases for each direct report, and tie it to performance expectations. When something becomes a measured responsibility, it gets done. When it's left to organic curiosity, it gets skipped.
This is especially important for individual contributors, who our data shows are the least likely to have access to AI tools, training, or manager support, despite doing the most repetitive, automatable work in most organizations.
3. Create feedback loops so good use cases spread
One of the most underrated problems in enterprise AI adoption is that when someone discovers a valuable use case, it tends to stay with them. There's no mechanism for it to surface, get validated, and spread across a team.
Build a lightweight process - it doesn't need to be complicated - for employees to share what's working. A monthly use case spotlight in team meetings. A shared channel where people post their AI wins. A quarterly review where team leads share the highest-impact use cases from their function.
The goal is to make it easy for a great use case discovered by one person to become a default tool for their entire team. Right now, most organizations have no such mechanism, which means the same discoveries are being made over and over in isolation - or not at all.
The mandate for AI leaders right now
The goal isn't just to get more people to log into an AI tool. Adoption, measured by access and frequency, is no longer a meaningful signal of progress.
The goal is to get every employee to have at least one workflow where AI is doing meaningful, time-saving, quality-improving work. That requires giving people direction - not just access.
The data makes the mandate clear. Stop measuring AI success by who has access to a tool, and start measuring it by who has a use case worth using. That's the gap most organizations haven't closed yet - and it's the one that matters most.
This post is based on findings from Section's AI Proficiency Report, a survey of 5,000 knowledge workers from 1,000+-person companies in the U.S., U.K., and Canada. [Download the full report here.]