“How do you give engineers the freedom to experiment with AI without compromising on product delivery?”
This is the question a customer recently put to us at Section. In mapping out an answer, I realized our strategy isn’t just about new tools - it's about a philosophy inherited from my previous CTO that has proven surprisingly resilient in the AI era.
Here is our framework for leveraging AI’s true value, while keeping engineers focused on what matters most.
The pillars of AI-native engineering
Two core ideas orient how we work. Neither is new, but they have been essential operating principles for Section in the AI era:
Domain-driven teams: Instead of organizing by features or technical components, we use small pods of 6-8 people organized around customer personas.
Each team owns a complete capability area and deeply understands their customer. By matching our team structure to our customers' structure, we ensure AI remains a solution, not just a feature of our workflow.
The 2/2/2 framework: AI moves fast, but quality requires a cadence. We run a 6-week milestone cycle broken into three 2-week sprints:
- Scope and prototype
- Build and iterate
- Ship and prep
This framework protects the two things that usually get squeezed in the rush to ship - upfront planning and rigorous review.
Diving deeper: The domain-driven organization
Each domain team is a 6-8 person pod:
- 1 Product Manager
- 1 QA Lead
- 4-6 Engineers (anchored by a senior or staff engineer)
This size is the sweet spot - large enough to support two swimlanes per team and peer review, but small enough to keep everyone aligned. It also creates natural mentorship opportunities: junior engineers grow by watching how senior engineers approach problems.
We don't divide these teams arbitrarily, but by who they serve. For example, our Employee Enablement team owns the end-user experience, while Intelligence focuses on the buyer's need to demonstrate ROI. This context is what makes AI-assisted development effective: the team (and Claude) understands the customer's voice and knows what "good" looks like before they start building.
This model is built to scale. I’ve seen it support over 100 engineers by allowing domains to grow horizontally and vertically. A single team like Intelligence can eventually split into subdomains like Data, Agents, and Connectors, each maintaining its own clear KPIs. While this approach favors the autonomy and accountability necessary for AI-native work, it does require more effort to keep teams in sync. At scale, you have to be intentional about cross-team communication to ensure that speed doesn't come at the cost of alignment.
Leadership without hierarchy
I don't appoint permanent "team leads" or "architects." In my experience, fixed titles remove accountability from the team - decisions get pushed to one person, and the group loses the muscle of problem-solving together.
Instead, we use rotating project leads. For each project, one engineer handles the ceremonies, owns the tech spec, engages with the PM, and tracks milestone progress. The role rotates. This grows the whole team, not just individuals, and shares the operational burden that comes with structure.
Diving deeper: The 2/2/2 framework
We run a 6-week milestone cadence to ensure we aren't just moving fast, but moving in the right direction. While every engineer at Section has access to Claude Code Max, the tools are only as effective as the discipline behind them.
Weeks 1-2: Scope + prototype
This is where AI has fundamentally changed the game. The discovery, spikes, and POCs that used to consume the first sprint are now dramatically compressed. We anchor this phase with two specific artifacts:
- The Product Canvas: defines the what, why, and success criteria.
- The Tech Spec Canvas: maps out architecture, implementation details, and edge cases.
These templates aren't bureaucratic checkboxes - they're what make AI useful. AI excels at implementation but fails at ambiguity. A high-quality Product Canvas gives the engineer context, and the Tech Spec gives the AI direction.
The chain matters: Product Canvas → Tech Spec → AI Agent → Engineer. Each handoff builds on the last. When everyone (including the AI) works from the same structured inputs, you get predictable outputs.
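As a rough illustration of that chain (not our actual tooling - the file layout and helper below are hypothetical), the completed canvases can live as plain markdown files next to the code, and a small helper can assemble them into the context an AI coding agent reads before it writes anything:

```python
from pathlib import Path

# Hypothetical layout - each milestone keeps its canvases next to the code:
#   docs/milestones/m-12/product-canvas.md    (the "what", "why", success criteria)
#   docs/milestones/m-12/tech-spec-canvas.md  (architecture, implementation, edge cases)

def build_agent_context(milestone_dir: str) -> str:
    """Assemble the structured inputs an AI agent reads before implementation.

    The order mirrors the handoff chain: Product Canvas first (problem and
    success criteria), then Tech Spec (architecture and edge cases). The
    engineer reviews the same bundle, so human and AI share one context.
    """
    root = Path(milestone_dir)
    product_canvas = (root / "product-canvas.md").read_text()
    tech_spec = (root / "tech-spec-canvas.md").read_text()
    return (
        "## Product Canvas\n\n" + product_canvas + "\n\n"
        "## Tech Spec Canvas\n\n" + tech_spec + "\n\n"
        "Implement only what the spec calls for. Flag ambiguity instead of guessing."
    )

if __name__ == "__main__":
    print(build_agent_context("docs/milestones/m-12"))
```

Whether that bundle ends up in a CLAUDE.md file, a system prompt, or a pasted message matters less than the fact that the engineer and the model are reading from the same artifacts.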
Weeks 3-4: Build + iterate
Smaller teams + AI-assisted coding changes what's possible. We can build multiple versions, compare approaches, and ship the best one - not just whatever we could get done in time. The key is tight feedback loops within the sprint. Engineers aren't just writing code - they're orchestrating AI to generate, review, and refine implementations while they focus on architecture and edge cases.
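Here is a deliberately simplified sketch of what that orchestration can look like, assuming the Anthropic Python SDK. In practice it happens conversationally inside Claude Code rather than in a bespoke script, and the prompts, helper, and model name below are placeholders:

```python
import anthropic

client = anthropic.Anthropic()   # expects ANTHROPIC_API_KEY in the environment
MODEL = "claude-sonnet-4-5"      # placeholder - use whatever model your team runs

def ask(prompt: str) -> str:
    """One round trip to the model; returns the text of the first content block."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def generate_review_refine(tech_spec: str) -> str:
    # 1. Generate: the Tech Spec Canvas is the context the model builds from.
    draft = ask(f"Implement the following spec as a single module:\n\n{tech_spec}")

    # 2. Review: have the model check its own output against the spec's edge cases.
    review = ask(
        f"Spec:\n{tech_spec}\n\nImplementation:\n{draft}\n\n"
        "List any edge cases from the spec this implementation misses."
    )

    # 3. Refine: the engineer reads the review, decides which gaps matter,
    #    and only then asks for a revision - judgment stays with the human.
    return ask(
        f"Revise the implementation to address these gaps:\n{review}\n\n"
        f"Original implementation:\n{draft}"
    )
```

The point of the loop is the division of labor: the model produces and critiques implementations quickly, while the engineer decides which gaps actually matter.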
Weeks 5-6: Ship + prep
This sprint is the one engineers historically never get. In traditional models, you ship and immediately pivot to the next thing. Technical debt accumulates. Documentation gets skipped. Planning for the next milestone happens in stolen moments.
The 2/2/2 framework intentionally protects this time. Engineers have space to clean up, document decisions while they're fresh, and think ahead. By the time the next milestone kicks off, the team isn't scrambling to figure out what they're building. They already know.
The goal is to ship feature-complete value every six weeks - actual capabilities that move the needle for customers. If a project runs over, we carry it over. We prioritize delivering value over hitting arbitrary deadlines. This six-week rhythm creates a healthy pressure to perform without forcing the team to ship incomplete work just to check a box.
The bet we're making
The real bottleneck in AI-assisted development isn't coding speed - it's context, clarity, and review.
In the old world, roadmap pressure often pushed teams to start coding before they fully understood the problem. Planning was minimal, and engineers figured it out as they went. The result was scope creep mid-build and users getting whatever we could get done in time, instead of the best version of the product.
AI inverts this. Coding is increasingly the fastest part of the process. This means a framework that front-loads planning and protects time for review will always outperform one that doesn't.
AI is remarkably capable at implementation, but it’s terrible at understanding why you are building something. Our investment in structured specifications solves this. By the time code is being written, the AI already has clear problem statements, success criteria, and identified edge cases.
This is why small, senior-minded teams are the future. Success in this era isn't about years of experience - it's about how someone thinks. I’ve seen engineers from bootcamps outperform those with decade-long resumes because they were more inquisitive and intentional. The people who struggle with AI aren't "junior" in the traditional sense - they are simply the ones who can't evaluate whether the output is right, regardless of their tenure.
Practical implementation
If you're considering a similar shift, here’s how to start:
- Start with one domain. Pick a single customer-facing capability area and build a complete pod around it. Let them run autonomously for 2-3 milestones before scaling the model.
- Invest in great product managers. In an AI-native workflow, upstream clarity is your biggest force multiplier. You need PMs who live 6-12 weeks ahead of the team, with Product Canvases ready for the next three initiatives before the current one even ships. A great PM doesn't just manage tickets; they arm the team. When the “what” is crystal clear, engineers can focus entirely on the “how.”
- Invest in templates. Create standardized formats for documenting requirements. Product Canvas and Tech Spec Canvas templates pay dividends when engineers can hand well-structured context to AI tools - one possible skeleton is sketched after this list.
- Measure cycle time, not headcount. The goal isn't to reduce team size - it's to increase what each team can ship per cycle. Headcount efficiency may be a byproduct, but it's not the objective.
- Expect the first milestone to be rough. The new rhythm takes adjustment. By milestone 2-3, teams hit their stride. Don't judge the framework by the first awkward iteration.
- Hire for judgment. In an AI-native organization, the most valuable engineers are those who can think like product owners. That's not always correlated with years of experience. I look for curiosity and the ability to evaluate whether an AI's output is actually right. Fit matters, too: put the engineers who love polish in user-facing domains and those who love data pipelines in the infrastructure domains.
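Here is the template sketch promised above: one possible shape for a standardized Product Canvas kept in the repo, with a trivial completeness check. The section names mirror the exercise in the next section; the exact format is an example, not a prescription.

```python
# A hypothetical standardized Product Canvas skeleton, versioned in the repo so
# every team fills in the same sections before a milestone kicks off.
PRODUCT_CANVAS_TEMPLATE = """\
# Product Canvas: <feature name>

## Headline
<What would the marketing announcement say?>

## Strategic Framing
<How does this fit the product architecture? What insights does it unlock?>

## Problem Statement
<Specific pain points, in the customer's words.>

## Target Audience
<Who are we solving this for, and what are their unique needs?>

## Success Criteria
<How will we know it worked?>
"""

REQUIRED_SECTIONS = [
    "## Headline",
    "## Strategic Framing",
    "## Problem Statement",
    "## Target Audience",
    "## Success Criteria",
]

def missing_sections(canvas_text: str) -> list[str]:
    """Return the required sections a draft canvas is still missing."""
    return [s for s in REQUIRED_SECTIONS if s not in canvas_text]

if __name__ == "__main__":
    print(missing_sections(PRODUCT_CANVAS_TEMPLATE))  # [] - the skeleton is complete
```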
Try this today
Want to see how this works in practice? Here's an exercise you can run right now with any LLM.
Step 1: Create a product canvas template
Ask the AI to take on the role of a senior product manager and design a product canvas template grounded in the "why" of the product lifecycle. Have it include:
- Headline: What would a marketing announcement say?
- Strategic Framing: How does this feature fit into the product architecture? What key insights does it unlock?
- Problem Statement: What specific pain points are users facing?
- Target Audience: Who are we solving this for, and what are their unique needs?
Take that output and refine it with the metrics or competitive analysis your organization requires.
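For example, that Step 1 prompt might look like the script below, which uses the Anthropic Python SDK - the wording is ours, and pasting the same prompt into any chat interface works just as well:

```python
import anthropic

# Illustrative wording - adapt the sections and phrasing to your organization.
STEP_1_PROMPT = """\
Take on the role of a senior product manager. Design a reusable Product Canvas
template grounded in the "why" of the product lifecycle. Include these sections,
each with a one-line description of what belongs in it:

1. Headline - what would a marketing announcement say?
2. Strategic Framing - how does this feature fit into the product architecture,
   and what key insights does it unlock?
3. Problem Statement - what specific pain points are users facing?
4. Target Audience - who are we solving this for, and what are their unique needs?

Return the template as plain markdown so it can live in a repo.
"""

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
reply = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder - any capable model works for this exercise
    max_tokens=2000,
    messages=[{"role": "user", "content": STEP_1_PROMPT}],
)
print(reply.content[0].text)
```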
Step 2: Create a tech spec canvas template
Ask the AI to take on the role of a principal or staff engineer and design a tech spec canvas grounded in the "how". Have it include:
- What are we building? A one-paragraph overview in plain language that a new hire could understand.
- Rationale: Why this approach over the alternatives?
- Technical constraints: Architecture overview, component diagram, data flow, and prioritized requirements.
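Step 2 follows the same pattern - only the role and the sections change. The wording is again illustrative; run it through the same client code as Step 1, or paste it into a chat:

```python
STEP_2_PROMPT = """\
Take on the role of a principal or staff engineer. Design a reusable Tech Spec
Canvas template grounded in the "how". Include these sections, each with a
one-line description of what belongs in it:

1. What are we building? - a one-paragraph overview in plain language that a
   new hire could understand.
2. Rationale - why this approach over the alternatives?
3. Technical constraints - architecture overview, component diagram, data flow,
   and prioritized requirements.
4. Edge cases - what could break, and how will we handle it?

Return the template as plain markdown so it can live in a repo.
"""
```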
Most organizations already do some version of this exercise. The difference is making these artifacts the input layer for AI. When your Product Canvas and Tech Spec are structured and complete, they become the context that allows AI to actually generate useful milestones, stories, and code.