There’s a moment every recruiter knows well. The candidate joins the video call, camera on, background neat. They answer questions fluently. Their resume checks out. The conversation flows. You think: this one’s good.
What if that person wasn’t real?
This isn’t a hypothetical pulled from a sci-fi script. In March 2026, Bengaluru-based interview platform InCruiter flagged a live interview where the participant — who appeared entirely natural on camera, answering technical questions with ease — turned out to be a deepfake. An AI-manipulated face overlaid on a real person, or in some cases, a fully synthetic identity. The incident barely made national headlines, but inside HR circles, it set off a quiet alarm.
India’s hiring ecosystem is at an inflection point. Remote hiring, which exploded post-pandemic, hasn’t fully retreated. Thousands of interviews happen every day over Zoom and Google Meet — in IT, BFSI, edtech, logistics, and beyond. And deepfake candidates are learning to walk right through that door.
How did we get here?
The technology behind deepfakes has been democratising at a pace most people haven’t kept up with. Two years ago, generating a convincing face-swap required GPU resources and technical expertise. Today, there are consumer-grade apps — some available for a few hundred rupees a month — that can overlay a synthetic face onto a live video feed in near real time. Voice cloning tools can replicate tone and accent from just a few minutes of sample audio.
Put those two things together and you have a candidate who looks like one person, sounds like another, and is possibly being coached through an earpiece by a third. The technology to do this costs almost nothing. The damage it can cause costs a great deal.
India isn’t immune to this. If anything, the scale of India’s talent market — millions of applications, overworked HR teams, and the normalisation of fully remote hiring — makes it fertile ground for exactly this kind of fraud.
What deepfake candidates actually want
It’s worth understanding the motivation, because it shapes what kind of companies are most at risk.
The most common scenario isn’t a sophisticated spy operation. It’s simpler and more mundane: a candidate who lacks the skills for a role, but knows someone who has them. They sit in front of the camera; the skilled person controls what appears on screen or feeds answers remotely. The fraudster clears the interview. A real but underqualified person shows up on Day 1 — or worse, never shows up at all and the job is subcontracted to someone entirely unknown to the hiring company.
In higher-stakes cases, particularly in IT and fintech, deepfake candidates have been hired at significant salaries, only for investigators to later conclude that code submissions were AI-generated and the video presence was manufactured. Once inside, such employees can access sensitive systems, client data, and internal infrastructure. This is where a hiring fraud problem becomes a cybersecurity problem.
For Indian companies handling customer financial data, health records, or proprietary tech, the stakes couldn’t be higher.
The KYC blind spot
Here’s what makes this particularly uncomfortable for Indian HR and compliance teams: KYC, as it has traditionally been practised, was never designed for this threat.
Standard KYC checks an Aadhaar number. It verifies that a PAN card exists. It confirms an address. What it doesn’t do — and was never built to do — is confirm that the face on the video call matches the face on the document in real time, in a way that accounts for AI manipulation.
This is the gap. Organisations are checking whether documents are valid, but not whether the person presenting those documents is genuinely the person they claim to be — especially across the distance of a video call. And most existing KYC frameworks weren’t built with that question in mind.
What HR teams are noticing on the ground
Experienced recruiters are picking up on tells, even if they can’t always name them. Slight delays between lip movement and sound. Blurring around the hairline when the candidate moves their head quickly. Eyes that don’t quite track naturally. A strange smoothness to the skin under bright lighting. These are artefacts of deepfake rendering — imperfections that improve with every generation of the technology, but haven’t disappeared yet.
What used to be a filtering problem has become a verification problem. Recruiters who once worried about inflated CVs and rehearsed answers now have to ask whether the person they’re talking to is real at all.
Some signals are worth watching for. If a candidate struggles with unexpected requests — “can you turn sideways for a moment?” or “hold your hand in front of your face” — that matters, because deepfake overlays often fail under these conditions. Sudden pixelation around the face during fast movement, audio that sounds slightly processed, or a suspiciously perfect response delivery (possibly scripted and fed via earpiece) are all worth noting.
But relying on recruiter intuition alone is not a system. It’s a best guess.
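One way to move from intuition toward a system is to log these tells as structured flags and score them consistently across interviews. The sketch below is purely illustrative: the signal names, weights, and thresholds are assumptions for demonstration, not a validated detection model.

```python
# Hypothetical sketch: turning interview red flags into a structured risk score.
# Signal names, weights, and thresholds are illustrative assumptions only.

INTERVIEW_SIGNALS = {
    "lip_sync_delay": 3,         # audio lags behind lip movement
    "hairline_blur": 2,          # blurring when the head moves quickly
    "unnatural_eye_tracking": 2,
    "overlay_fails_profile": 4,  # face breaks up when candidate turns sideways
    "processed_audio": 1,
    "scripted_delivery": 1,      # suspiciously perfect, possibly fed answers
}

def risk_score(observed_flags):
    """Sum the weights of the signals an interviewer logged; unknown flags are ignored."""
    return sum(INTERVIEW_SIGNALS.get(flag, 0) for flag in observed_flags)

def risk_band(score, review_at=3, escalate_at=6):
    """Map a score to a follow-up action (band cut-offs are assumptions)."""
    if score >= escalate_at:
        return "escalate"  # pause the process; require in-person verification
    if score >= review_at:
        return "review"    # have a second reviewer watch the recording
    return "proceed"
```

The point of a scheme like this is not accuracy of any single weight, but consistency: two recruiters who observe the same tells reach the same escalation decision, and the logged flags become auditable evidence later.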
Where background verification fits in — and where it needs to evolve
Traditional background verification — employment checks, education verification, criminal record checks — remains essential. A deepfake candidate still has to submit documents, and those documents can be cross-verified. That layer hasn’t changed and shouldn’t be underestimated.
But the frontier of BGV is now moving toward something more dynamic: identity continuity. This means verifying not just that a document is valid, but that the person who submitted it is the same person who appeared in the interview, and the same person who shows up on Day 1. Liveness detection, AI-assisted anomaly flagging during video, and biometric cross-checks against submitted ID photos are becoming part of serious onboarding stacks.
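At its core, an identity-continuity check compares face embeddings (numeric vectors produced by a face-recognition model) across each stage of hiring. The sketch below assumes such embeddings already exist; the vectors, the cosine-similarity comparison, and the 0.8 threshold are illustrative assumptions, not a reference to any specific vendor’s pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def identity_continuous(doc_vec, interview_vec, day1_vec, threshold=0.8):
    """
    Hypothetical identity-continuity check: the embedding from the ID-document
    photo, the interview video, and the Day-1 check-in must all match their
    neighbouring stage. The 0.8 threshold is an assumption for illustration.
    """
    stages = [doc_vec, interview_vec, day1_vec]
    return all(
        cosine_similarity(prev, curr) >= threshold
        for prev, curr in zip(stages, stages[1:])
    )
```

A break at any stage — say, the interview embedding diverging from the document photo — is exactly the gap a deepfake candidate exploits, and is what this kind of chained comparison is designed to surface.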
Maintaining the integrity of the hiring process protects not just the organisation from fraud and data breaches, but also ensures a fair playing field for honest candidates — a point worth remembering when the temptation is to treat this as a fringe concern.
India’s DPDP Act also adds a layer of complexity: any enhanced identity verification must be consent-driven and purpose-limited. That means the solution isn’t just technical; it requires a compliance framework around it.
What responsible hiring looks like now
No single tool eliminates this risk, but a layered approach narrows it significantly. For companies doing high-volume remote hiring, this means building pre-interview screening that cross-checks submitted photos with live video before the conversation even begins. It means using platforms that flag rendering anomalies in real time. It means keeping at least one in-person or in-office touchpoint before final onboarding — not because video interviews don’t work, but because physical presence is still the hardest thing to fake.
It also means investing in background verification partners who are thinking ahead of this curve, not just processing documents in a queue.
Gartner projects that by 2028, one in four candidate profiles globally will be fake. That number is striking, but it also points to where urgency needs to land: with the systems organisations put in place now, before the problem scales further.
India’s talent market is too large, too competitive, and too consequential to leave this to instinct. Deepfake candidates aren’t the future of hiring fraud. They’re already here, sitting across the screen, waiting for the next interview to start.
OnGrid helps organisations build layered, compliant verification frameworks — from document checks to identity validation — for a hiring landscape that’s changing faster than most people realise. Talk to our team to learn how.