Hey everyone, Neil here. You're reading High-Signal Hiring: hiring systems from 20+ years of global recruitment experience and 500+ technical hires. Zero noise, instantly actionable.
Last issue, we covered how to scope a role using the judgment test, separating what a human should own from what an AI agent can handle. If you ran that exercise, you've got a sharper job description. Good.
Now you need to find someone who can actually do the job you just described. And here's the problem. Your interview process is still testing for the old one.
You'll learn why the most productive engineers have stopped writing code by hand, what "orchestration ability" actually looks like in practice, and three changes to your interview process that will surface the engineers who work this way.
| The engineer who ships 30 PRs a day
Boris Cherny leads Claude Code at Anthropic. He runs 5 local AI agent sessions and 5-10 remote sessions simultaneously. Each one working on a different task, in its own git checkout, shipping real production code.
The output? Roughly 30 pull requests a day. Not from a team. From one person.
He doesn't type most of the code. He plans what needs to happen, spins up parallel agents, feeds them context, verifies their output against test suites and browser testing, and course-corrects when they go off track. He describes this as a plan-first approach, going back and forth with the AI until the plan is right, then switching to execution mode and letting agents run.
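Anthropic hasn't published the exact tooling, but the "each task in its own git checkout" pattern is easy to sketch with `git worktree`: every agent session gets an isolated working directory on its own branch, so parallel sessions never clobber each other. The task and branch names below are illustrative, not Cherny's actual setup.

```shell
# Minimal sketch: one isolated checkout per agent task via git worktree.
# Task names are illustrative stand-ins.
set -e
base=$(mktemp -d)            # scratch area for the demo
mkdir "$base/main" && cd "$base/main"
git init -q
git -c user.email=agent@example.com -c user.name=agent \
    commit -q --allow-empty -m "init"

# Spin up one working tree + branch per agent task.
for task in auth-flow rate-limiter docs-refresh; do
  git worktree add -q "../agent-$task" -b "agent/$task"
done

git worktree list            # main tree plus three agent trees
```

Each agent then runs inside its own `agent-*` directory, and the human merges the branches back once the work is verified.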
This isn't a demo. This is how one of the best AI labs in the world actually builds software. And it tells you something important about the engineer you're about to hire.
| The job changed. The interview didn't
Most engineering interviews still test three things: can you solve algorithm puzzles, can you write clean code under pressure, and can you talk through system design on a whiteboard.
Those skills still matter. But they're no longer the differentiator.
The engineer who will 10X your startup's output in 2026 is not the one who writes the best code. It's the one who knows what to build, breaks it into parallelisable tasks, spins up multiple agents to execute them simultaneously, and verifies that the output is correct. The job has shifted from execution to orchestration.
That's a fundamentally different skill set. And your current interview process doesn't test for any of it.
Think about what Cherny actually does all day. He decides what needs to happen (judgment). He breaks problems into independent work streams (decomposition). He manages multiple agents running in parallel (orchestration). He verifies results through test suites, browser testing, and code review (quality control). He maintains a living knowledge base so agents don't repeat mistakes (systems thinking).
Not one of those skills shows up in a standard technical interview.
| What orchestration ability actually looks like
Before you can test for it, you need to know what you're looking for. An engineer with strong orchestration ability does three things consistently:
1️⃣ They think in parallel, not sequentially
When given a problem, their first instinct is to break it into independent streams that can run simultaneously. They naturally ask "what can happen at the same time?" rather than "what's the next step?" This is not how most engineers are trained to think. Sequential reasoning is deeply embedded in computer science education. The engineers who've broken out of it are rare and valuable.
2️⃣ They scope ruthlessly
Orchestration only works when each task is well-defined. An engineer who's good at this will instinctively decompose a vague requirement into tight, specific tasks with clear inputs and outputs. If they can't define the task crisply, they know the agent will fail, so they invest the time upfront. This maps directly to the judgment test from last issue. The same skill that makes someone good at scoping a role makes them good at scoping tasks for agents.
3️⃣ They verify obsessively
Cherny's team has found that providing feedback loops (test suites, browser testing, bash command verification) improves output quality by a factor of 2-3. Engineers who are good at orchestration don't trust agent output. They build verification systems. They treat every agent output as a draft that needs to be validated, not a finished product. The ones who blindly accept what an AI produces are the ones who ship bugs to production.
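That verify-before-accept discipline is just a loop: run the gate, feed failures back, and escalate to a human after a few attempts. The sketch below simulates it with stand-in commands; in practice the gate is your real test suite and the agent call re-prompts with the failure output.

```shell
# Sketch of the verify-before-accept loop. "gate" and "agent_pass" are
# simulated stand-ins, not a real test suite or agent CLI.
gate()       { grep -q "auth tests pass" result.txt; }   # stand-in test suite
agent_pass() { echo "auth tests pass" >> result.txt; }   # stand-in agent fix

echo "first draft" > result.txt   # agent output starts as a draft
attempt=0
until gate; do
  attempt=$((attempt + 1))
  if [ "$attempt" -gt 3 ]; then
    echo "gate still failing, escalate to a human" && exit 1
  fi
  agent_pass   # real version: feed the gate's failure output back to the agent
done
echo "gate passed after $attempt agent pass(es)" > verdict.txt
```

The important part is the bounded retry: the agent never gets unlimited attempts, and a persistent failure becomes a human review, not a merged PR.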
| Three changes to your interview
Here's where this gets practical. You don't need to redesign your entire process. You need to add one exercise and adjust two existing ones.
1. Add an orchestration task
Give the candidate a moderately complex feature to plan, not build. Something like: "We need to add user authentication with social login, email/password, and role-based access to our Next.js app. You have access to AI coding agents. Walk me through how you'd break this down and execute it."
You're listening for: Do they decompose into parallel streams? Do they identify which parts need human judgment vs. which parts an agent handles? Do they think about verification? Do they know which tasks have dependencies and which can run simultaneously?
A strong candidate will naturally separate the auth strategy decision (human judgment) from the implementation of each provider (parallelisable agent work), and they'll mention how they'd verify each piece works before integrating. A weak candidate will describe a sequential plan they'd execute themselves.
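The strong answer above has a simple shape: one sequential judgment call, then independent streams that run in parallel, then an integration step that waits on all of them. A hedged sketch, with stand-in commands simulating the human decision and the agent runs:

```shell
# Illustrative decomposition of the auth feature into agent streams.
# "decide" and "implement" are simulated stand-ins; `&` and `wait`
# encode which tasks are independent and which must block.
decide()    { echo "$1" >> plan.log; }                    # human judgment, sequential
implement() { sleep 0.1; echo "done: $1" >> build.log; }  # simulated agent task

: > plan.log; : > build.log
decide "auth strategy: session model, RBAC in middleware"  # everything depends on this

implement "google-login"    &   # no inter-dependencies:
implement "email-password"  &   # these three run as
implement "rbac-middleware" &   # parallel agent streams
wait                            # integration blocks on all three

echo "integrate + verify" >> build.log
```

A sequential thinker describes the four `implement`-style tasks as steps one through four; an orchestrator spots that three of them share no dependencies and only the strategy decision and the integration are ordered.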
2. Shift your system design interview
Stop asking "design a URL shortener." Start asking "you're the only engineer at a startup that needs to ship a URL shortener in two weeks, and you have access to AI coding tools. How do you approach this?"
Same problem. Completely different answer. You're now testing whether they can architect for speed using AI, whether they know where to invest their own time vs. delegate, and whether they have a realistic mental model of what AI tools can and can't do today.
The candidate who says "I'd build a clean architecture document first, then have agents implement each service while I focus on the data model decisions and deployment strategy" is showing you exactly the skill you need. The one who walks through how they'd personally write each component is showing you they haven't adapted.
3. Replace the take-home with a live orchestration session
Traditional take-homes are dead. Candidates use AI to complete them anyway, so you're not testing what you think you're testing. Instead, give the candidate 60 minutes with a real AI coding tool and a real problem. Watch how they work.
You'll learn more in that hour than in any whiteboard session. Do they plan before they prompt? Do they run multiple tasks in parallel? Do they verify output or accept it blindly? Do they course-correct when the agent goes in a wrong direction? How do they handle the agent producing something unexpected?
This is the closest thing you'll get to seeing how someone actually works day-to-day. Which is the whole point of an interview.
| What this means for who you hire
If you run these adjusted interviews, you'll notice something. The candidates who excel are not always the ones with the most impressive CVs or the deepest algorithmic knowledge. They're often mid-level engineers with strong product instincts and a natural curiosity about tools.
That tracks with what we covered in Issue 13. When you scope the role properly and interview for orchestration, you might discover you don't need the expensive senior engineer you were targeting. You need someone with good judgment, a bias toward shipping, and the ability to leverage AI tools aggressively. That's a different candidate, a different salary, and a different sourcing strategy.
Cherny deliberately under-staffs his teams at Anthropic, often putting a single engineer on a project, because the constraint forces them to use AI. They ship faster, not slower. The same principle applies to your startup.
| How to use this today
Pick your next interview. Add the orchestration task. Watch what happens. The gap between candidates who think in parallel and candidates who think sequentially will be immediately obvious.
Your best engineer doesn't write the most code. They ship the most outcomes.
Cheers
Neil
