Hey everyone, Neil here. You're reading High-Signal Hiring: hiring systems from 20+ years of global recruitment experience and 500+ technical hires. Zero noise, instantly actionable.
Big week this week. I had the honour of doing a guest piece for Gregor Ojstersek's "Engineering Leadership" newsletter on why startups shouldn't be mirroring Big Tech hiring processes (check it out here). We've had quite a few new members join the community off the back of it. To those people, welcome. Very grateful to have you here.
Last issue, we covered how to interview for orchestration ability, the skill that separates engineers who ship 30 PRs a day from engineers who write one file at a time. If you've started testing for parallel thinking and verification habits, you're ahead of most.
Now here's the uncomfortable follow-up. Everyone's chasing "AI-native" engineers. Founders want the person most excited about AI tools, the one who ships fastest. That instinct is wrong.
You'll learn why the engineer who's sceptical of AI output is more valuable than the one who trusts it, what "verification ability" looks like in practice, and how to test for it before you make an offer.
Not a subscriber yet? Sign up here
| The data nobody's reading
CodeRabbit analysed 470 GitHub pull requests. 320 AI-co-authored, 150 human-only. The results were, shall we say, rough.
AI-generated code produces 1.7x more issues than human-written code. Not minor issues. 1.4x more critical bugs. 1.7x more major bugs. 2.74x more cross-site scripting vulnerabilities. Performance inefficiencies appeared nearly 8x more often (not a typo).
This isn't a dig at AI tools. I've been writing about how AI is reshaping engineering roles for the last three issues. I believe it. But the speed AI gives you is worthless if nobody's checking the output. And right now, most people aren't checking.
| The verification gap
Sonar surveyed developers and found 96% of engineers say they don't fully trust AI-generated code. But only 48% actually verify it before committing. Half the engineers who know the output might be wrong are shipping it anyway.
Why? Because reviewing AI code takes longer than reviewing a colleague's code. You didn't write it. You don't know why certain decisions were made. You're reverse-engineering intent from output, which is harder than reviewing code where you understand the reasoning.
Werner Vogels (AWS CTO) called this "verification debt." It accumulates faster than traditional technical debt because the code is being generated faster. And it's invisible until something breaks in production at 2am.
| What a good AI sceptic looks like
This isn't about hiring someone who refuses to use AI. That's not scepticism; that's denial. The engineer you want uses AI tools aggressively but treats every output as a draft. Never a finished product.
They do three things consistently:
1️⃣ They verify before they commit
Every AI-generated function gets tested. Not "does it look right" but "does it work under edge cases the AI didn't think about." They write more tests than the average engineer, not fewer, because they know AI is confident and often wrong. (AI doesn't tell you when it's guessing. That's the whole problem.)
2️⃣ They read the code they didn't write
Most engineers skim AI output and ship it if it passes the linter. A good sceptic reads it like they're reviewing a junior developer's first PR, looking for logic errors, security holes, and assumptions that don't match the requirements. This is hard, slow work. It's also the most valuable work an engineer does in 2026.
3️⃣ They know when to override
AI tools are getting better fast. But they still hallucinate. They still make architectural decisions that look clean in isolation but fall apart at scale. The sceptic knows when to accept the output, when to tweak it, and when to bin it and write the code themselves. That judgment only comes from understanding systems deeply enough to spot what the AI missed.
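The verification habit in point one can be sketched in a few lines. This is a hypothetical example, not code from any real incident: `safe_average` and its original flaw are invented, but they show the kind of edge case AI drafts routinely miss and a sceptic's tests routinely catch.

```python
def safe_average(nums):
    """Hypothetical AI-drafted helper, hardened after review.

    The original draft returned sum(nums) / len(nums) with no
    guard, so an empty list crashed with ZeroDivisionError.
    """
    if not nums:
        return 0.0
    return sum(nums) / len(nums)

# The edge-case tests a sceptic writes before committing,
# not just the happy path the AI was prompted with:
assert safe_average([2, 4]) == 3.0   # happy path
assert safe_average([]) == 0.0       # empty input (the draft missed this)
assert safe_average([-1, 1]) == 0.0  # mixed signs
```

The habit, not the helper, is the point: every AI-generated function gets an input the prompt never mentioned.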
| Why this matters more than "AI-native"
Here's where most founders get this backwards. They see two candidates. One is excited about AI, talks about all the tools they use, and ships demos at incredible speed. The other is more measured, asks probing questions about code quality, and mentions building verification systems into their workflow.
Founders pick the first one almost every time. Speed is seductive.
But the first candidate is the one who ships 2.74x more security vulnerabilities. The second candidate is the one who catches them.
Boris Cherny at Anthropic found that feedback loops, test suites, and verification systems improve AI output quality by 2-3x. His team doesn't trust AI output. They've built systems to catch its mistakes. That's not anti-AI. That's how you get the most out of AI.
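One way to picture that kind of feedback loop. Every name here is invented for illustration; this is a sketch of the general pattern (generate, run the tests, feed failures back) rather than anyone's actual tooling.

```python
def generate_with_verification(prompt, generate, run_tests, max_attempts=3):
    """Hypothetical verification loop: treat model output as a draft
    and only accept it once the test suite passes.

    generate(prompt) -> candidate code (a callable you supply)
    run_tests(code)  -> list of failure messages, empty on success
    """
    for _ in range(max_attempts):
        code = generate(prompt)
        failures = run_tests(code)
        if not failures:
            return code  # verified: the suite, not the model, says it's done
        # Feed the failures back instead of shipping the first draft.
        prompt = f"{prompt}\n\nPrevious attempt failed: {failures}"
    raise RuntimeError("no candidate passed verification")
```

The design choice worth copying is that acceptance is decided by the test suite, never by how confident the output looks.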
The best AI-era engineers aren't the ones who trust AI the most. They're the ones who question it systematically.
| How to test for this in an interview
Add one question to your process:
Give the candidate a piece of AI-generated code with three subtle bugs. Not syntax errors. Logic errors, a security vulnerability, and a performance issue that only shows up under load.
Ask them to review it.
You're not testing whether they find all three. You're testing how they approach the review. Do they read carefully or skim? Do they ask about the requirements before evaluating? Do they check edge cases? (Most won't. That's the signal.)
The engineer who says "this looks fine" in two minutes is the one who'll ship bugs to your users. The one who takes fifteen minutes, asks three clarifying questions, and finds at least one issue is the one you want.
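If you want a starting point for building that exercise, here's a hypothetical snippet in the spirit of the test: three planted bugs, none of which a linter will flag. The schema and function names are invented; the comments marking each bug are for you, not the candidate.

```python
import sqlite3

def get_user(db, name):
    # Bug 1 (security): string interpolation builds the SQL, so a
    # crafted name like "x' OR '1'='1" matches every row.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return db.execute(query).fetchone()

def active_users(db, names):
    # Bug 2 (performance): one query per name, a classic N+1 pattern.
    # Invisible in a demo, collapses under load.
    return [row for row in (get_user(db, n) for n in names) if row]

def apply_discount(total, rate):
    # Bug 3 (logic): subtracts the rate itself, not the percentage.
    # apply_discount(100.0, 0.10) gives 99.9, not 90.0.
    return total - rate
```

Strip the comments before handing it over. The point is whether they find the injection, the N+1 pattern, and the discount arithmetic on their own, and in what order.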
This connects directly to the orchestration interview from Issue 14. Same core skill: knowing when to trust the machine and when to override it. Issue 14 tested planning. This tests quality judgment.
| How to use this today
Stop filtering for "AI enthusiasm" in your interviews. Start filtering for verification rigour. The engineer who's cautious with AI output will protect your codebase, your users, and your reputation.
Your best AI-era hire doesn't trust AI. They verify it.
Cheers
Neil
