Customer experience leaders often focus on how trust with customers is earned, how it is lost, and how fragile it has become with the rise of AI. That trust challenge, however, is now emerging much earlier in the value chain: within the hiring process itself.
Survey data from the Institute for Corporate Productivity (i4cp) reveals that deepfakes and synthetic identities are infiltrating hiring processes, with direct consequences for customer-facing operations. A recent incident at an AI security startup, in which even a seasoned security executive hesitated to call out a deepfake candidate, underscores that CX leaders can no longer leave this issue solely to HR or security teams.
Hiring risk has become customer risk.
A Cautionary Tale from a Security Startup
In a survey of talent acquisition executives, i4cp found that 59 percent are concerned about identity fraud or impersonation via deepfake video or audio. Lorrie Lykins, VP of research at i4cp, stated:
“Deepfaking and identity fraud are chief among the concerns of organizations when it comes to the use of AI in the hiring process.”
That concern is well-founded.
If any organization should be able to spot a deepfake quickly, it is an AI security firm specializing in threat modeling. That expectation is exactly what makes the recent experience of Expel co-founder and CEO Jason Rebholz so compelling.
After Rebholz posted a security researcher vacancy on LinkedIn, a new connection introduced a potential candidate they claimed to have worked with previously and sent a link to a resume hosted on Vercel, an app-building platform that integrates with AI tools. The hosting choice suggested the resume had been generated with Claude Code, though Rebholz noted that alone wouldn't be unusual for a developer.
The connection's urgency to schedule an interview raised a red flag. When the candidate joined the video call with their camera off before switching it on with a virtual background, warning signs became harder to ignore. The candidate's face appeared "blurry and plastic," with visual artifacts flickering in and out. Even so, Rebholz hesitated.
“What if I’m wrong? Even though I’m 95 percent sure I’m right here, what if I’m wrong and I’m impacting another human’s ability to get a job? That was literally the dialog that was going on in my head, even though I knew it was a deepfake the whole time.”
Rebholz continued the interview and sent video clips to fraud detection firm Moveris for analysis using its deepfake detection technology, which confirmed the deception. As Rebholz said:
“It’s one of the most common discussion points that pops up in the CISO groups I’m in. I did not think it was going to happen to me, but here we are.”
The incident highlights that deepfakes exploit more than technical blind spots—they target social norms, empathy, and the fear of making the wrong call.
Deepfake Job Candidates Put Customer Experience at Risk
Customer experience teams face this risk acutely because they operate at a unique intersection of trust, access, and scale.
Customer-facing roles, such as contact center agents, onboarding specialists, support engineers, and trust and safety teams, often rely on remote hiring, global talent pools, and rapid scaling. This makes these functions especially attractive targets for fraudsters seeking access to customer data or internal processes.
A single compromised hire can have an outsized impact, exposing customer data, manipulating service interactions, or damaging a brand, potentially with regulatory fallout.
Over half (54 percent) of respondents to the i4cp survey reported encountering candidates in video interviews they suspected of using AI tools to assist with answering questions or completing technical challenges. Despite this, only 17 percent reported increasing the use of in-person interviews in response to concerns about AI-related fraud.
Survey respondents noted that industries handling sensitive data, like financial services, healthcare, defense, and infrastructure, are more cautious in AI adoption due to these risks. Customer-facing functions in less regulated sectors may lack the same guardrails, but the customer impact of a breach can be just as severe.
The survey revealed ambiguity in organizations' policies around candidates using AI. Forty-one percent have no official stance, 29 percent encourage ethical use but worry about misuse, and only 26 percent clearly welcome AI use with guidelines.
There's also a human dimension. AI-assisted hiring can erode qualities CX leaders value, such as empathy, judgment, adaptability, and real-time problem-solving. When candidates rely on AI-generated responses during interviews, hiring managers risk optimizing for polish over presence.
Customer Trust Starts at the Hiring Stage
For customer experience leaders facing the growing threat of AI-generated voice and video fraud, close collaboration with HR is key to keeping deepfakes out of the hiring process.
Clear guardrails and intentional friction can help flush out fake candidates. Hiring for customer-facing functions should have firm non-negotiables around live, unaided interaction, especially in roles where real-time judgment and empathy are essential.
Interviews should include visible verification steps, such as mandatory cameras, no virtual backgrounds, and spontaneous questions that test the candidate's presence and comprehension in real time, rather than polished responses.
Because resumes are no longer reliable signals of authenticity, i4cp recommends cross-checking identity and work history and exploring platforms that verify user presence and conduct live identification checks. Verification can stop fraud before it reaches customers. Recruiters and hiring managers also need support to overcome the social discomfort of challenging suspicious candidates.
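To make that kind of verification concrete, here is a minimal sketch, assuming a team records each check in its applicant-tracking system: a simple gate that holds interview scheduling until every step passes. The VerificationRecord structure and check names are hypothetical illustrations, not features of i4cp's recommendations or any specific vendor's platform.

    from dataclasses import dataclass, field

    @dataclass
    class VerificationRecord:
        # Hypothetical fields; adapt to whatever your applicant system actually records.
        candidate_id: str
        id_document_verified: bool = False    # government ID matched to a live selfie
        liveness_check_passed: bool = False   # real-time presence, not a replayed video
        work_history_confirmed: bool = False  # at least one prior employer independently confirmed
        camera_policy_accepted: bool = False  # candidate agreed to camera on, no virtual background
        failed_checks: list = field(default_factory=list)

    def ready_for_interview(record):
        """Return True only if every verification step passed; record what failed."""
        checks = {
            "id_document": record.id_document_verified,
            "liveness": record.liveness_check_passed,
            "work_history": record.work_history_confirmed,
            "camera_policy": record.camera_policy_accepted,
        }
        record.failed_checks = [name for name, ok in checks.items() if not ok]
        return not record.failed_checks

    # A candidate who skipped the liveness check is held back, with the
    # failed step logged for the recruiter to follow up on.
    candidate = VerificationRecord(
        candidate_id="cand-001",
        id_document_verified=True,
        work_history_confirmed=True,
        camera_policy_accepted=True,
    )
    if not ready_for_interview(candidate):
        print(f"Hold scheduling for {candidate.candidate_id}; failed: {candidate.failed_checks}")

The value of a gate like this is less the code than the process it encodes: failed checks are written down, so a recruiter who feels the social discomfort Rebholz described has a documented reason to pause rather than relying on gut feel.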
Deepfakes in hiring are no longer a future problem or a niche security issue. They are a present-day operational risk with direct implications for customer experience and brand credibility. At a time when seeing and hearing are no longer believing, hiring has become one of the most vulnerable moments in the CX lifecycle.