Hey everyone, Neil here. You're reading High-Signal Hiring: hiring systems from 20+ years of global recruitment experience and 500+ technical hires. Zero noise, instantly actionable.
Last issue we looked at why outsourcing your first-round interview to AI filters your candidate pool in reverse. The pitch is efficiency. The cost is the engineers you actually want most.
This week, the mirror image. The candidate using AI on you.
AI interview fraud is here. Cluely sells a hidden overlay that feeds answers in real time. Stealth earpieces do the same through audio. Deepfake personas are being built in under 90 minutes. Gartner projects 1 in 4 candidate profiles will be fake globally by 2028. The standard advice is to buy detection software. Trial it. Some of the tooling is genuinely impressive and it has a place in your stack. Just don't treat it as the whole defence. It's one layer, not the system.
You'll learn why detection alone won't save you, the three behavioural signals still worth watching, and how to adjust the loop from Issue #19 so AI assistance becomes irrelevant or self-exposing.
| The arms race is already over
Cluely raised $5.3M and hit 70,000 signups in its first week. Its founder openly pitched it as "a cheating tool for literally everything." Karat ran 700 engineering applicants through a timed coding test with explicit instructions not to use AI. Over half did anyway.
That's more than half of engineers ignoring written rules on a low-stakes test. Now imagine the stakes of a real offer.
Deepfakes are the second front. Palo Alto's Unit 42 showed a novice building a convincing synthetic interview identity in 70 minutes. The Pragmatic Engineer documented a Polish AI company that caught two deepfake candidates in sequence, suspected to be the same operator wearing different synthetic personas. The FBI has linked over 300 US companies to unknowingly hiring North Korean operatives using AI-generated faces and stolen identities. A crypto startup lost more than $900K to a hire that was a nation-state actor in an AI mask. I've had a startup client personally targeted by this. They caught it in time. Not everyone does.
The people building the cheating tools ship faster, raise faster, and have more users feeding their training sets than the people building the detectors. Any detection tool sharp enough to catch today's fraud risks being obsolete in a quarter. Useful as a layer. Not something to anchor your whole defence on.
Detection alone won't win this. Design is where you win.
| Three signals still worth watching
Behavioural detection still works as a cheap heuristic. Not as a filter, as a confidence check. Three things to watch for during any live technical round.
1️⃣ Instant, perfectly-formed answers.
Real engineers hedge. They say "it depends" and ask two questions before committing. A candidate who opens with a fully structured 4-part answer every single time is being fed.
2️⃣ Explanations that don't match the code.
Ask them to walk through what they just typed. If the vocabulary of the explanation sits a level above or below the code they just produced, something's off. AI-assisted candidates can't consistently paraphrase their own work because the work isn't theirs.
3️⃣ Tracking a second screen.
Watch where they look when they pause. AI overlays and earpiece prompts produce a specific pattern. Eyes darting slightly off-camera during any non-trivial question. It's the modern tell.
Use these as canaries, not as a process. If you spot two of the three, end the interview.
| Redesigning the loop so AI assistance is self-exposing
This is where the real work happens. If you are using the interview structure I described in Issue #19, two adjustments make AI-assisted cheating either useless or obvious.
Adjustment 1: Interview for the judgment call, not the answer.
In the Mission and Depth Interviews, stop posing problems that have a canonical solution. Pose problems that require picking a trade-off. "Here's the architecture we're considering. What breaks at 10x scale?" A good engineer will give you three options, rank them, and tell you which one they'd actually pick and why. An AI-fed candidate will give you the most statistically probable answer. The difference is obvious to anyone who's hired engineers before.
Adjustment 2: Anchor the “Depth” (aka the main technical session) interview in their own past work.
Spend at least 15 minutes on specifics. What the system did, where it broke, what decisions they had to make, how they got team buy-in, what they'd rebuild today. A real engineer will talk for an hour. A proxy candidate or deepfake starts hedging within minutes, because they didn't build any of it.
This is your implicit verification layer. You don't need to check a passport. You ask them to tell you about Tuesday, three weeks ago, when the deploy went sideways. Fraudsters can't fake Tuesday.
| Stage 4 is your verification layer
You need a verified moment before the offer lands. The good news is you don't need to bolt on an extra stage. You already have one.
Stage 4 in Issue #19 was "Reference and close". Most founders treat this as paperwork at the end. In the AI era, this stage is critical.
Two backchannel references, on the phone, not by email form.
Ask to speak to a former manager and a former peer. Make the calls yourself (don't outsource them). Ask specific questions about the work the candidate described in the Depth interview. If the person on the other end doesn't recognise the story, your candidate wasn't there. Deepfakes and proxies don't have real ex-managers willing to get on a call and answer specifics.
The close conversation is face time, not an email thread.
Ten minutes in person or on a verified video call before contracts go out. Not a test, a sanity check. You're confirming the person you're hiring is the person you interviewed.
Neither of these adds real time to the loop. Both are things you should already be doing. AI interview fraud just raises the cost of skipping them.
| What to do this week
Open your current interview structures. Find every question that has a single correct answer. Rewrite them as trade-off questions. If every question in your loop has one right answer, you're running a quiz AI can win.
Then make Stage 4 do real work. Call the references yourself and ask about the specific things the candidate told you in the earlier sessions. Make the close conversation face time, not an email thread. That's your verification layer.
The fraudsters are selling speed and automation. Your defence is the one thing AI can't fake in real time. A conversation about the specific thing the candidate did last Tuesday, with a human in the room who can ask a follow-up.
Cheers
Neil
