Section is currently hiring for 9 roles – and we need every single new hire to be an AI power user. Here’s why:
- We’re a small team serving massive enterprise businesses, so every single person on our team must know AI inside and out
- We want our revenue per employee (Silicon Valley’s new favorite metric) to stay high – which means using AI to ensure we don’t solve every problem with “new headcount”
But AI fluency isn’t that easy to measure. We don’t just need people with a ChatGPT account – we need people who use AI every single day to solve a variety of problems.
So we’ve incorporated a live AI prompting test into our interview process.
If you’re serious about hiring AI power users and turning them into champions, steal this test – or tell us about your own process so we can learn from you.
What we’re evaluating
AI fluency is a way of working and thinking, not just the use of a specific tool. So our prompting test is designed to understand how someone thinks about AI and its role in their work.
Our test looks at:
- Prompt quality: Prompting is a table-stakes skill at Section. We want to see that they can craft a clear, precise prompt that includes context, constraints, and requirements.
- Choice of LLM & model: Whether they reach for Claude or ChatGPT, or for Deep Research vs. Gemini 2.5, tells us how literate they are in the strengths of different tools and features.
- Thought process: We’re looking for candidates who critique AI’s outputs, ask for sources, refine their own prompts, and can identify hallucinations and work around them.
- Domain expertise: Candidates are told that they’ll be doing this test, just not what the specific scenario will be. So their ability to frame a problem to an LLM, on the spot, helps us gauge their understanding of the problems they’ll be solving.
The structure of our prompting test
These live prompting tests take about 10-20 minutes during the first interview with the role’s hiring manager.
Here’s an example, based on a Customer Success Manager (CSM) role we just filled:
Scenario: You are preparing for your first QBR presentation to a wide range of existing and new stakeholders at a multinational software company. To prepare, you have several months of call recordings from the sales process, plus program performance data, to extract insights from.
Prompt Task: Use these resources to extract insights from the transcripts and data and build an outline for your QBR.
- What tool(s) would you use?
- How would you quickly get started?
- What are your real-time thoughts as you navigate this?
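For illustration only, here's a minimal sketch of what "a clear, precise prompt with context, constraints, and requirements" can look like when assembled programmatically. The helper function and field names are hypothetical examples of ours, not part of the test or any candidate's answer:

```python
# Hypothetical sketch: assembling a QBR-prep prompt from explicit
# context, constraints, and requirements (all names are illustrative).

def build_prompt(context: str, constraints: list[str],
                 requirements: list[str], task: str) -> str:
    """Combine the pieces of a clear, precise prompt into one message."""
    parts = [
        f"Context: {context}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Requirements:\n" + "\n".join(f"- {r}" for r in requirements),
        f"Task: {task}",
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    context="I'm a CSM preparing my first QBR for a multinational software client.",
    constraints=[
        "Use only the attached call transcripts and performance data",
        "Flag any claim you cannot trace back to a transcript",
    ],
    requirements=["A one-page outline", "Top 3 wins and top 3 risks"],
    task="Extract the key insights and draft a QBR outline.",
)
print(prompt)
```

A candidate typing the same structure straight into a chat window is just as good; the point is that the context, constraints, and requirements are all stated up front rather than left for the model to guess.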
Here’s how we run it:
- Setup (2 minutes): Explain the exercise and have the candidate share their screen
- Execution (5-10 minutes): Allow the candidate to work through the prompting process
- Discussion (5-10 minutes): Review the output and discuss their approach
There is no “right answer” or one output that we’re hoping a candidate will get to. We’re looking to see if they:
🟢 Narrate their reasoning and demonstrate iterative thinking. We want to see how they assess a response and work with AI to get to a better one.
🟢 Use AI as a thought partner to reach the goal. We want to see that the candidate can leverage AI to amplify their own reasoning and get to the most strategic outcome.
🟢 Spot a hallucination or weak output and adjust. This shows the candidate is a seasoned user, understands AI’s weaknesses, and can mitigate them.
🔴 Give vague or sloppy prompts. If they talk to AI like a search engine or don’t provide strong context, that’s a red flag.
🔴 Don’t adjust for AI’s weaknesses. We want to hear things like, “I know AI will give me XYZ, so I’m going to tell it XYZ to get around that.”
🔴 Copy-paste outputs. If they take AI’s first answer with no further refining, we’re concerned that they’ll do the same in their strategy work at Section.
Bonus points: Having a paid LLM account. This shows us that you’ve made a personal commitment to using AI. A free account is also fine, but if you show up to the test without an account at all, that is a ding.
How it’s going and how to copy us
How did we pick the CSM we ultimately hired in the above example?
- They knew the common pitfalls to avoid when working with AI
- They knew how to format information to have a better conversation
- They knew how to refine their prompts to get to better outcomes
You don’t have to create showstopping automations – you just need to show us that working with AI on routine tasks is already in your repertoire.
If you want to copy our live prompting test and hire more AI-first employees, here’s our advice:
- Run the test early in the process – right after the screening call, ideally. If this is a must-have skill for you, make sure no one in the process is moving forward without it.
- Have a subject matter expert design the test – someone who knows the kinds of AI use cases that will be common in the role. That way you know you’re testing the most relevant possible skills.
- Standardize the scenario per role – all candidates for a given role should have the same prompting assignment, so you have a baseline to grade them against.
And lastly, think of this like you would a cognitive aptitude test or interview assignment. The purpose is not to add friction to the hiring process. It’s to ensure you’re bringing on people who can thrive in an AI-augmented company.