Hey everyone, Neil here. You're reading High-Signal Hiring. Hiring systems from 20+ years of global recruitment experience and 500+ technical hires. Zero noise and instantly actionable.
Over the last few issues, we built out the full sourcing playbook. Where to find engineers, how to reach them, which ones will say yes. You should have a pipeline forming.
Now we're shifting. Before you interview anyone, there's a question most founders aren't asking. And it's changing the economics of every engineering hire.
You'll learn why the scope of an engineering role has fundamentally changed, how to separate what a human should own from what an AI agent can handle, and why getting this wrong means you're either overhiring or writing a job description for a role that shouldn't exist.
| The role you're hiring for has already changed
Six months ago, a first engineering hire owned everything. Architecture, frontend, backend, infrastructure, deployments, bug fixes, documentation.
That's not the job anymore. AI agents now handle real chunks of engineering work. Not toy demos. Production-level code generation, automated testing, CI/CD configuration, and debugging.
Boris Cherny, who leads Claude Code at Anthropic, recently said that coding is "virtually solved." His team roughly 4X'd in size, but productivity per engineer increased 200%. One of his engineers built an entire Go service over a month and still doesn't really know Go. That's how fast the line is moving.
This means the role you're about to hire for probably includes work that doesn't need a human. If you don't scope that out before writing the job description, you'll either hire someone with a bloated mandate (expensive and slow) or hire for tasks that an agent handles for a fraction of the cost.
| This is not a "should I hire?" question
Let me be clear. If you passed the four readiness tests from Issue 9, you should be hiring. The 90-day mission still stands. The human is still essential.
This is about precision. What exactly does the human own?
You're not replacing the hire. You're sharpening what they spend their time on so every hour goes toward work that only a human can do. Think of it as scoping, not gatekeeping.
| What agents handle vs what humans own
Agents are good at work with clear inputs and verifiable outputs. Code generation from defined specs. Writing and running tests. Refactoring. Documentation. Boilerplate.
Humans own the judgment calls. What to build and why. How the product should feel. Which trade-offs to make when there's no right answer. Talking to users and translating messy feedback into engineering priorities. Figuring out ambiguous problems with incomplete information.
The next frontier isn't AI writing code. It's AI coming up with ideas, looking at feedback and telemetry, then proposing what to build. We're not there yet. Which means the value of your next hire isn't their ability to write code. It's their judgment about what to build and why.
Most early-stage roles have both. Judgment and execution, mixed together. The scoping exercise separates them so you hire for the judgment and automate the rest.
| How to scope the role
Take your 90-day mission and break it into the actual work involved. Not job description language. The real tasks, week by week, that delivering on that mission requires.
Run each task through one question:
Does this require human judgment, or can it be defined clearly enough for an agent to handle?
Be honest. Most founders overestimate how much of the work requires a human. "Hard" and "requires a human" are no longer the same thing.
Here's what this looks like in practice: Say your 90-day mission is…
"Ship the first customer-ready version of our mobile app."
Before scoping, the task list might look like: design the architecture, build the core features, write tests, set up CI/CD, write API documentation, handle deployment config, talk to early users, decide what to cut when timelines slip, and figure out which features actually matter for launch.
After running the judgment test: architecture decisions, feature prioritisation, user conversations, and tough trade-offs on scope all land on the human side. Writing tests, generating boilerplate, CI/CD setup, documentation, and deployment config land on the agent side. The role just went from "build everything" to "own every decision about what gets built and why." That's a sharper hire.
You'll end up with two lists. The human list becomes the real job description. The agent list becomes the work you either automate now or delegate later.
| What if you get the split wrong?
Two things can happen:
If you over-scope to agents and critical work falls through the cracks, your engineer picks it up. That's fine, you adjust.
If you under-scope and your engineer is doing work an agent could handle, you're burning salary on tasks that don't need judgment. One of those is recoverable. The other is just expensive.
This isn't a one-time decision. Revisit the split at 30 and 60 days. As you learn what the engineer is good at and where agents fall short, adjust. The point is starting with intention, not getting it perfect.
| What this changes
When you scope properly, the role becomes more attractive to strong engineers. You're asking them to make decisions and ship outcomes, not write boilerplate. Cherny deliberately understaffs his teams, putting one engineer on a project, because the constraint forces them to use AI aggressively. They ship faster, not slower. The best candidates already think this way. If your role description is still a laundry list of tasks an agent could handle, those candidates will self-select out.
You also might discover you need a different hire than you thought. Not a senior engineer, but a strong mid-level with good product instincts who knows how to leverage AI tools. That's a very different candidate profile, a very different salary, and a very different sourcing strategy.
| How to use this today
Run the judgment test on your 90-day mission before you write a single line of the job description. The role that comes out the other side will look nothing like the one you started with.
Cheers
Neil
