March 19, 2026

Your employees are losing their minds to AI

We’ve spent two years telling organizations to adopt AI faster and implement it everywhere. But the latest data suggests we need to tell them to do something harder – use it more intentionally, or lose the cognitive capabilities needed to make AI impactful in the first place.

At SXSW 2026, cognitive neuroscientist Dr. Sahar Yousef (UC Berkeley) and Section CEO Greg Shove shared research on AI’s cognitive effects on Berkeley students – as well as real operational evidence from inside Section.

Together they made the case that cognitive health is the missing piece of AI workforce transformation – and needs to be managed as deliberately as tool deployment, data security, or AI ROI.

Loneliness makes people more AI-dependent

Sahar shared data from her study of 450 UC Berkeley students, on AI’s relationship to cognitive dependence and social connection. Three findings stood out:

Phone addiction predicts AI dependence. Students who are addicted to their phones are 30% more likely to become cognitively dependent on AI. The shortcut-seeking behavior transfers directly from one technology to the next.

Loneliness and lack of purpose drive AI companionship. She described how one student put it bluntly – “Relationships seem messy. Why ask a girl out if I can just talk to AI? She gets me, she laughs at all my jokes, and she’s always available.”

Mindfulness is the only cognitive protector. Of all the traits measured, only one showed a protective effect against both cognitive dependence and AI companionship reliance: the ability to be fully engaged in the present.

When Sahar asked students anonymously what percentage of their peers are “no longer actively participating in their education,” the estimate was 50% – at the number one public university in the world.

Gen Z will make up 30% of the knowledge workforce within four years. The cognitive habits they’re forming now will walk through your door as your next wave of hires. AI dependence may seem inevitable, but the one factor that protects against it – mindfulness – is something organizations can build into how they train and manage their workforce.

What AI removes matters more than what it adds

We can all appreciate that everything good in life requires a lot of effort – all-nighters, hard conversations, going outside your comfort zone, and sweating.

Sahar asked the audience to imagine explaining a gym to a great-great-great-grandparent. You put on special shoes and run in circles. “They’d say, ‘Why would you do that?’” We invented gyms because the Industrial Revolution removed the physical labor that kept our bodies functional. Now AI is removing the cognitive labor that keeps our minds sharp.

Every time AI handles a draft, an analysis, or a decision that a knowledge worker would have struggled through, it removes the friction – a rep that builds cognitive muscle. The brain atrophies just like any other muscle when it stops being used.

“We don’t have brain gyms yet,” Sahar said. “You’re going to have to do this on your own for a little bit if you care.”

The failure modes are already here

Greg has observed similar dynamics inside Section, which has explicit operating principles about AI use. Even with constant conversation, there are still challenges:

  • “I vibe-coded a prototype – take a look.” Initiative is great, but the feature was nowhere on the roadmap. This is wasted effort dressed up as productivity.
  • “Here’s my business case – let me know if you have any questions.” They shared a three-page document written by GPT for a $500 decision. The employee transferred their friction to leadership instead of owning the call.
  • “We have a lot of options – let’s meet to discuss.” Seven options are provided, all generated by AI, none filtered by judgment. Now there’s a meeting on the calendar to discuss options no human has actually thought through.
  • “I’ve got about 8 agents working on this, so it should be good.” Maybe. Or maybe not.

“If you’re just outsourcing to AI and dressing that output up, the client’s not going to pay for it – because they can get that from their own AI,” Greg said. “You need to stop and do some thinking and introduce some value that the client will actually recognize as human value.”

That’s the AI transformation ROI question most organizations are avoiding: not “are we using AI?” but “is the AI-assisted work genuinely better?”

How to tell if you’re behind the wheel

Greg and Sahar converged on a simple framework: Every knowledge worker is going to become either an AI driver or a passenger.

Drivers manage their AI – they prompt effectively, verify output, and own decisions. When AI generates something, they pause, judge, and decide before moving forward.

Passengers defer – they cut and paste, prioritize speed, and let AI make the call. The distinction comes down to a single decision – when the AI produces an output, do you stop and think, or do you ship it?

The warning signs that your team is drifting into passenger mode are subtle. Watch for people who always say “yes” when AI offers to do more. Every “yes” is another cognitive rep your employee didn’t take. People who are suddenly calm and ahead of schedule when they used to scramble may also be outsourcing the thinking, not just the busywork.

Keep your workforce cognitively sharp with AI

Building an AI strategy for enterprises right now means addressing two capabilities at once: AI fluency and cognitive independence. Here’s what that looks like operationally.

Struggle before you prompt

Greg used to advise teams to use AI to get to V1, but he’s reversed that. The new guidance at Section is to “struggle for a few minutes on your own first – even if it’s just two or three bullet points.”

The reasoning is neurological and practical. Struggling doesn’t just build cognitive muscle – it also produces dramatically better prompts, because you arrive at the AI conversation with a point of view, however rough. That changes the dynamic from “tell me what to think” to “help me work through this.”

The ability to evaluate whether AI output is actually good – judgment – only comes from doing the cognitive work yourself, at least some of the time. And as Sahar’s data shows, the next generation of hires isn’t building that muscle at school. They’re turning everything over to AI.

This is the paradox leaders need to sit with – you’re hiring for AI fluency, and then you’re going to have to train for independent thinking. Both are critical requirements. The “struggle first” principle is how you build the second one without sacrificing the first.

Design intentional friction into the AI workflow

Heads of AI have spent months getting adoption up and removing friction. Now you need to add some back – not to slow people down, but to keep their brains in the game.

Greg was direct about where the friction needs to go. “You’ll notice GPT, all the AIs – ‘Can I help you with the next thing? Can I make the PDF? Can I do the spreadsheet for you?’ Yes, yes, yes, yes. Do the whole thing for me. Do my job for me.” These prompts are designed to deepen engagement and dependency and reduce thinking.

So organizations need to deliberately design human thinking moments into their workflows. Some specific tactics:

  • 15 minutes of quiet uncertainty to start each workday. Block your calendar – but don’t turn to AI, Slack, or email. Instead, look at your to-do list and calendar and let your mind engage with the day’s problems before reaching for any tool. Greg described this as “work mindfulness” – the practice of being present with the work ahead of you before you start delegating it.
  • AI-free decisions. Designate specific decisions each week that get made without AI input – not because AI couldn’t help. Think of it as deliberate cognitive exercise, the same way you’d do bodyweight exercises even if you have access to a gym.

Codify how your team works with AI

Section has explicit operating principles for AI use that every new employee learns in onboarding and that managers enforce – “We don’t copy and paste AI’s results into our work. We push back, critique, and improve.”

This is what enterprise AI training programs should look like – not a one-day workshop, but operating principles that live in onboarding and get enforced daily.

To keep employees from defaulting to shortcuts during a busy workday, managers at Section also ask two questions:

  1. Does this represent your best work?
  2. What did you add or change after AI did the V1?

“If you don’t train your workforce to use it properly, people will be cranking out a lot of shit, and the shit is going to ruin their reputation,” Sahar said.

What’s at stake

The good news, from Sahar – the brain remains plastic at every age. Cognitive fitness is trainable. No one has to lose their mind to AI.

But that requires deliberate organizational action. Left to individual willpower and the default incentives of AI tools designed to maximize engagement, the drift toward cognitive offloading is almost certain.

Organizational AI readiness isn’t an adoption dashboard metric. It’s whether your people can still do the thinking that makes AI output worth having.

The organizations that get this right won’t just have more productive employees. They’ll have employees who can still think – which, increasingly, will be the differentiator.

Greg Shove
Section Staff