
Make talent quality your leading analytic with a skills-based hiring solution.

Hiring is undergoing a transformation that few HR teams are prepared for. What once felt like a talent challenge has rapidly evolved into a cybersecurity challenge unfolding inside the hiring process itself. That shift came into focus during a recent episode of The Talented Podcast, where Joseph Cole, VP of Marketing at Glider AI, interviewed Robert Richardson, HR Tech Strategist at SAP SuccessFactors.
Their discussion revealed a growing reality: hiring systems have become targets, and AI is enabling both legitimate candidates and malicious actors in ways that traditional HR processes were never designed to detect or defend against.
Candidate fraud has accelerated into a full-scale enterprise risk due to generative AI, deepfakes, and real-time coaching tools. The interview between Joseph Cole and Robert Richardson shows why hiring workflows must now operate more like security checkpoints and what organizations must do to rebuild trust in their talent processes.
Summary of Key Insights
Robert’s concern didn’t emerge recently. His alarm bells rang in 2019, when early deepfakes began blurring the line between real and synthetic. The technology was still experimental, but the implications were already clear.
“It suddenly dawned on me that as AI advanced, the trust economy was about to erode. You would have to question every video you ever saw and every audio clip you ever heard.”
A joint study from University College London, Oxford, Cambridge, and OpenAI later confirmed this trajectory, identifying deepfake impersonation as one of the most severe AI-enabled cyber threats.
Today, that warning plays out inside hiring funnels. Gartner predicts that by 2028, one in four candidate profiles worldwide will be fake. Hiring signals that once felt reliable now demand skepticism.
Another source, Checkr, reports that 59% of hiring managers surveyed suspect candidates sometimes use AI tools or deceptive tactics during hiring, underscoring growing concern about fraud in applicant pools.
HR teams rarely think of themselves as guardians of sensitive data, yet they hold some of the most valuable information an attacker could want: payroll records, Social Security numbers, compensation history, and performance documentation. It’s enough to build complete identity profiles and access corporate systems from the inside.
“HR is a gold mine of data. There is a long history of social engineers trying to access HR systems. The only difference today is that fraud is now on steroids thanks to democratized AI.”
And attackers are already exploiting the gap. According to the South China Morning Post (SCMP), an employee in Hong Kong transferred HK$200 million (≈ US$25.6 million) after being tricked by a deepfake of the company’s CFO in a video conference. The U.S. Department of Justice uncovered cases where North Korean operatives secured remote tech jobs using synthetic identities, later deploying malware across corporate environments.
These incidents didn’t start with breached servers. They started with breached hiring processes.
One of Richardson’s sharpest observations is that AI is helping everyone, but not everyone is using it for the same purpose.
Many candidates use AI to refine their resumes or communicate more clearly. Fraudsters use similar tools to impersonate entire identities, fabricate credentials, or glide through interviews with skills they don’t possess.
“As the blue team, HR has to be right 100 percent of the time. As the red team, they only need to succeed once.”
The modern applicant toolkit now includes resume generators that tailor applications to each job description at scale, real-time interview copilots that feed answers on screen, and deepfake video and voice tools that can mask identity entirely.
Cole described testing one such interview assistant with Satish Kumar, CEO of Glider AI. Although Cole is not an engineer, he answered advanced technical questions in real time with surprising accuracy. The experience underscored how dramatically AI has blurred the boundary between actual competence and AI-manufactured performance.
Modern fraud isn’t just a technical evolution. It’s a psychological one.
“Social engineers leverage something called amygdala jacking: when you trigger urgency or fear, the brain shifts to emotional processing, and logical reasoning drops.”
Recruiters often work under pressure, juggling deadlines, candidate volume, and demands from hiring managers. That emotional environment creates ideal conditions for attackers to exploit.
Zero-trust security models assume a breach has already occurred and limit access accordingly. It’s a cybersecurity staple and increasingly a hiring necessity.
“Zero trust builds trust by eliminating it. You assume you have already been breached and limit access accordingly.”
In hiring, that shift includes verifying identity at multiple stages of the application process, matching voice and face forensics across steps, validating skills directly rather than trusting resumes, and monitoring devices post-hire for malicious software.
Not every role requires the same measures, but high-access and remote roles increasingly do.
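One way to make that tiering concrete is a policy map from a role’s threat level to its required checks. This is a minimal sketch only: the tier names, check identifiers, and the mapping itself are illustrative assumptions, not an SAP SuccessFactors or Glider AI feature.

```python
# Illustrative sketch of tiered, zero-trust-style hiring controls.
# Tier names and check lists are hypothetical, not a product spec.

VERIFICATION_POLICY = {
    # Low-risk roles: lightweight checks to keep candidate friction low.
    "standard": ["id_check_at_offer"],
    # Roles touching sensitive data: verify identity at more stages.
    "elevated": ["id_check_at_offer", "liveness_check_in_interview",
                 "skills_validation"],
    # High-access or remote roles: verify at every step, including post-hire.
    "high": ["id_check_every_stage", "liveness_check_in_interview",
             "voice_face_forensics", "skills_validation",
             "post_hire_device_monitoring"],
}

def required_checks(role_tier: str) -> list[str]:
    """Return the verification steps required for a role's threat tier."""
    if role_tier not in VERIFICATION_POLICY:
        raise ValueError(f"unknown tier: {role_tier}")
    return VERIFICATION_POLICY[role_tier]
```

The design choice mirrors Richardson’s point: the policy escalates with the role’s threat profile instead of applying maximum friction to every candidate.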
New security steps can feel intrusive unless framed properly. Richardson stressed that transparency is what keeps these processes humane.
“Explain why you are making the request, and most candidates will understand.”
Simple checks, such as asking a candidate to shift slightly on camera or to place a hand in front of the lens, can reveal deepfake anomalies. These are signals, not verdicts, but they help teams spot when something deserves a closer look.
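Because each check yields a signal rather than a verdict, teams typically combine several weak indicators into a single escalation decision rather than acting on any one cue. A minimal sketch of that idea, with hypothetical signal names, weights, and threshold:

```python
# Hypothetical weighted scoring of interview-fraud signals.
# Signal names, weights, and the review threshold are illustrative.

SIGNAL_WEIGHTS = {
    "refused_hand_in_front_of_face": 0.5,
    "haloing_around_fingers": 0.3,
    "audio_video_lip_sync_drift": 0.3,
    "answer_latency_spikes": 0.2,  # long pauses suggesting live coaching
}

REVIEW_THRESHOLD = 0.5  # at or above this, escalate for human review

def needs_closer_look(observed_signals: set[str]) -> bool:
    """Flag an interview for review when weak signals accumulate."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)
    return score >= REVIEW_THRESHOLD
```

With these example weights, one weak signal alone (`answer_latency_spikes`, 0.2) stays below the threshold, while three together (0.3 + 0.3 + 0.2 = 0.8) trigger a review, which matches the “signals, not verdicts” framing above.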
Richardson describes a new archetype in the hiring landscape: the professional applicant, a synthetic persona that can look legitimate, sound legitimate, and even behave legitimately long enough to clear screening.
A fraudulent candidate today can look like anyone on camera, sound like anyone on a call, fabricate credentials at scale, and pass technical screens with real-time AI assistance.
“You can look like anyone, sound like anyone, and still pass technical screens.”
Remote hiring makes this even easier. Geography no longer constrains attackers.
Not every use of AI is harmful. Richardson outlined an integrity continuum that helps teams distinguish between enhancement and deception:
Honesty → Distortion → Cheating → Fraud
Distortion has existed as long as resumes have; it is embellishment, and recruiters are used to it. Cheating goes further: AI does work the candidate could not do alone, such as answering interview questions in real time. Fraud occurs when identity, credentials, or intent are deliberately falsified.
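The continuum lends itself to an ordered scale, where the policy response escalates with severity. The four levels come from the discussion; the response mapping below is an assumed illustration, not a prescribed process.

```python
from enum import IntEnum

class Integrity(IntEnum):
    # Ordered per Richardson's continuum: severity increases with value.
    HONESTY = 0
    DISTORTION = 1  # embellishment, as old as resumes themselves
    CHEATING = 2    # AI doing work the candidate can't do alone
    FRAUD = 3       # falsified identity, credentials, or intent

def response(level: Integrity) -> str:
    """Map a continuum level to an illustrative policy response."""
    if level <= Integrity.DISTORTION:
        return "proceed; probe claims in interview"
    if level == Integrity.CHEATING:
        return "re-verify skills under proctored conditions"
    return "reject and report"
```

Using an ordered enum keeps the key property of the continuum explicit: distortion is tolerated and probed, while anything at or beyond cheating demands verification rather than trust.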
Cole noted that AI can help strong candidates who struggle with writing present their skills more effectively. The goal is not to eliminate AI but to verify the person behind it.
Richardson is optimistic about recruiters’ ability to detect fraud once trained properly.
“Recruiters are incredibly savvy. Give them the right training, and they will quickly adapt.”
Patterns they can learn to spot include haloing around hands or fingers on camera, candidates who refuse simple movement requests, answers that arrive after unnatural pauses, and resumes that match a job description a little too perfectly.
These cues, combined with automated forensics, can significantly reduce risk.
After one SAP customer introduced authenticity checks, recruiters immediately encountered candidates unwilling to comply.
“They would not put their hand in front of their face.”
It was a small act, and a revealing one. A moment that quietly raised questions about authenticity, intent, and what trust really looks like under scrutiny.
What’s Coming Next: The Integrity Layer
Richardson expects enterprises to move through three phases of adoption, culminating in continuous verification that extends well past the point of hire.
“Just because someone passed your hiring scrutiny does not mean their credentials cannot be stolen later.”
By 2026, he believes this will dominate HR, IT, and security discussions.
Full Transcript
Joseph Cole (00:02)
Hello, Robert. Thanks for joining us today. Why don’t you give a quick introduction about yourself: who you work for and what you do there.
Robert Richardson (00:02)
Sure. Well, hi, Joseph. Thank you first and foremost for having me today. I appreciate it. I’m Robert Richardson, an HR tech strategy advisor for SAP SuccessFactors. I like to tease that the title means everything and nothing at the same time; people have heard me say that a lot before. What I actually do on a day-to-day basis is help organizations, our internal teams, our external teams, our product management, and our partners all think through the landscape of HR technology, how it’s evolving, and what you should do about it, whether that’s implementing something new to solve challenges or developing a product internally.
Joseph Cole (00:51)
Yeah. You’ve had quite an extensive career at SAP, and obviously before that. How long have you been at SAP?
Robert Richardson (01:00)
Depends on how you count it, because I was acquired. I started my career in HR tech going all the way back to 2006 with Jobs2Web, a recruiting marketing startup that we put together to solve organizations’ recruiting marketing challenges. And so I was acquired by SAP in 2011.
Joseph Cole (01:11)
Wow.
Yeah.
Wow, so you’ve seen the evolution of HR tech, including some of the things we’re going to talk about today, like candidate fraud. So maybe share something a little personal, something people won’t be able to, I guess, Google about you, that you wouldn’t mind sharing.
Robert Richardson (01:41)
Oh my gosh. Well, let’s see. I have four kids and I’m most recently now a cub master of their scouting organization. So maybe that’s something that people probably don’t know about me. I’m a huge nerd. I tend to get really deep on topics, especially HR tech and psychology and
some of these disciplines that this topic overlaps as well. So maybe that gives you two items you wouldn’t have guessed about me. Although you would have guessed I’m a nerd; that one, everybody’s gonna know.
Joseph Cole (02:16)
I don’t know, I think being a nerd is pretty cool. I might consider myself a nerd, but thanks, Robert, for that. Yeah, exactly, exactly. Well, unfortunately, you may have exposed some items that might impact candidate fraud or, you know, fraud schemes now.
Robert Richardson (02:28)
The Geeks shall inherit the earth.
Joseph Cole (02:41)
But anyway, today’s theme is candidate fraud, enterprise risk, and the future of trust in hiring. So the first question for you: you’ve been calling out candidate fraud for a long time, before it became mainstream. What was the early signal, the “wow, this is different” moment, that told you this wasn’t just a hiring inconvenience but an enterprise risk issue?
Robert Richardson (03:04)
Yeah, I wouldn’t even say that it’s just relegated to candidate fraud. Part of what captured my attention was deepfakes, going all the way back to 2019. I mean, you often get the question, what keeps you up at night? And there was an actual night that I was kept up, right? I could not sleep, because it suddenly dawned on me that as AI was advancing, what I started to think of as the trust economy was about to erode; at some point in time, you would have to question every video you ever saw, every audio clip you ever heard. And that goes all the way back to 2019. And so today, with the democratization of AI, these models are cheap and easy, right? Anybody can use them, and here we are.
Whether it’s candidate fraud, employee fraud, all kinds of fraud are much easier to conduct now with the advent of democratized AI.
Joseph Cole (04:14)
What was it, just for clarity purposes, back in 2019 that made you see the uptick in these fraud schemes, whether it was for candidates or hiring itself?
Robert Richardson (04:26)
Yeah, well, many people may have seen Barack Obama’s voice and likeness used in those years. You see presidents get used in these ways a lot; I believe the reason is that presidents’ images and likenesses are considered public, whereas using a private person may not be as acceptable. People were testing this and seeing what they could accomplish with machine learning at the time. And they found that with a sample of voice, a sample of video, and a little bit of background noise to ease the audio, because you could still sort of hear it in those years, you could make a really good vocal and video likeness of an individual.
And it was very clear, I feel, in those years that that technology was going to do nothing but advance.
Joseph Cole (05:23)
Yeah, that’s crazy. I forgot completely about that, but it was a big news story. And you’re right, today’s technology and AI have just exploded, right? So next question for you: we talk a lot about AI disrupting hiring, but it’s also disrupting fraud. Where do you see AI accelerating fraud faster than HR tech can actually keep up with?
Robert Richardson (05:42)
That’s good.
Sure.
It probably is in the ubiquity, you know what I mean? It’s also in the fact that people are not prepared yet, right? I think we’ve assumed HR is secure against these things, and I say it that way, but the problem is people haven’t even thought about it. And so it’s advancing fast.
Joseph Cole (05:59)
Mmm.
Robert Richardson (06:24)
But it’s not as though in other industries people aren’t paying attention to this. It simply is that in HR, and I suppose other industries as well, it’s true, but particularly in HR, we’re not often security focused in that way. Nobody expected recruiting to become the front line of defense in corporate espionage. That’s wild. That’s wild. Recruiters never thought that was going to happen. It didn’t even occur to us.
And yet your jobs, suddenly turns out, are a primary vector of attack for organizations to infiltrate and hack your company from the inside.
Joseph Cole (07:08)
Yeah, it’s the new Trojan horse, I guess, right? So yeah, next set here: the enterprise risk angle. And I think this is kind of a perfect segue from the last question. Many leaders still think of candidate fraud as an HR issue. How do you explain to these executives that hiring fraud is actually a security, data, and compliance problem that is much bigger than just recruiting?
Robert Richardson (07:11)
Yeah. Yeah, absolutely. It’s a good way to put it.
Yeah, you know, there are impacts on both sides; it’s almost as though people forget one side or the other. If you’re thinking about this issue at all, right, which is a big challenge today, either you’re thinking it’s going to impact massive things like your data privacy, the security of your systems, your IP, right, which can be leaked.
Or you’re thinking, you know, it lowers the candidate quality, right? Because so many people are submitting fallacious applications, you know? And so both sides of those things are true. And I think both warrant some real attention because the cost of recruiting is going up as we see so many fake candidates make it into the pool. It’s a lot more effort for recruiters and hiring managers to find the right people, but also simultaneously, those that make it through the process and come to the end, if they’re not the people that they’re representing, it’s a really dangerous prospect for organizations.
Joseph Cole (08:53)
Yeah, I guess to expand on that: as you kind of alluded to, it’s the front line of defense or security in many ways. But then what do you say to some of these execs when they’re like, no, that’s an HR issue, I don’t care about it? I feel like that’s a short-sighted perspective, because now these new candidates in your system potentially have access to customer data, et cetera.
Robert Richardson (09:25)
Yeah, well, HR is a gold mine of data, right? I mean, we have all kinds of data from salary information, payroll, social security numbers, performance improvement plans, right? That would be embarrassing to individuals and organizations. We are a gold mine of data to begin with. And there is a long history, in fact, of social engineers working to garner access to HR systems.
Joseph Cole (09:47)
Yeah.
Robert Richardson (09:55)
The only difference today, because it’s not as though fraud wasn’t happening in the application process in the past, and it’s not as though people weren’t working to social-engineer their way into HR systems data, is that it is now on steroids, right? With the technology, the democratization is the key here: it is now wildly inexpensive and easy. You can download these deepfake apps right online, and with minimal configuration set them up so I can look like Joseph, I can sound like Joseph, and I don’t have to be technical to do it anymore.
Joseph Cole (10:41)
Right, that’s a really interesting and pertinent example, I think, right? HR is a treasure trove of data. And the fact that these people can even use employee data, especially at the executive level, to blackmail companies, is scary, you know?
Robert Richardson (11:02)
Sure. Yeah, yeah.
Yeah, and just for the sake of protecting the innocent, I won’t say any names here in terms of which companies this happened to. People can look it up; most of this stuff is public. But just as an example: at one organization, fraudsters impersonated senior executives. They used deepfaked faces and voices on video calls, and HR and finance employees transferred $25 million to those scammers.
Joseph Cole (11:35)
Wow.
Robert Richardson (11:36)
Right? So those AI tools, you know, they made lifelike replicas of executives’ faces and voices. And so there are real monetary losses occurring right now as a result of this. There was a social engineering attack that got HR to release W-2 data for thousands of employees to fraudsters. A lot of this stuff, again, has been happening for a long time. These attack vectors aren’t new; they’re just more effective with the use of AI technologies today.
Joseph Cole (12:23)
Yeah, it’s fascinating. It’s frustrating, right? And it’s also very scary. I was just thinking about probably one of the most common examples, where you get these random texts from your boss or someone important at the company, or an email from them, saying, hey, give me these gift cards. Now, with AI and bad candidates, they could actually get in, get whatever data, make it look a lot more real, and do it at scale, which again is a big, you know...
Robert Richardson (12:39)
Yeah.
Hmm.
Joseph Cole (12:51)
financial impact potentially.
Robert Richardson (12:53)
Yeah, absolutely. You know, vishing, phishing, these attacks have been with us for a long time. In many ways they’re not new. I keep saying of AI that we’re not solving new problems with it oftentimes; it’s just the acceleration of them. It’s the nature of the attack vector that’s more sophisticated for the average person now.
Joseph Cole (13:00)
Yeah.
Yeah, and you know, I’ve always been one of these people like, ah, it’s not gonna happen to me, I see through all this stuff. But it did happen to me, and it was really embarrassing, and it got my credit card. Pendleton’s going out of business? Oh my God, right? It’s just silly stuff like that when you’re not in the mindset that this can happen to you, and it can happen to you very easily. So it was a little embarrassing.
Robert Richardson (13:25)
Well, do tell. What happened?
Sure.
Yeah, I’ll give you and your listeners a tip today. Social engineers leverage what is referred to as amygdala jacking. I told you we were going to get nerdy today, right? This is where I create an emotional reaction in you. Amygdala jacking refers to the concept that when you become emotional, the blood and the calories burned in your brain move to the amygdala, which has the effect of reducing the impact of your prefrontal cortex. I tell you some of the science because it has an impact on what happens: when that happens, you are less capable of thinking logically than when you’re less emotional. So if I can create a sense of urgency in you, if I can use your family or a financial event to trigger an emotional reaction, you’re thinking less, because you’ve been amygdala jacked, and you’re more likely to react in a way that you shouldn’t. You might provide your credit card information, you might provide a password. You’re going to get compliance out of a person better when they’re emotional. So if you get those text messages, if you get those emails, recognize that urgency is often a sign that it’s actually a social engineer working to get you to make a mistake because you’re emotional right now. So take a step back, right? Give it a minute, breathe, think about it, and then decide if it warrants any action at all.
Joseph Cole (15:15)
Yeah.
That’s really insightful. I feel like we need some kind of checklist that goes into the nerdy details of it. It kind of reminds me of that Adam Sandler Waterboy movie, where he’s like, there’s something wrong with his medulla oblongata. But you know, what is it? Oblongata?
Robert Richardson (15:28)
Yeah.
Sure, yep.
Joseph Cole (15:45)
So it wasn’t that with me, it was something else. But thank you for the insight there. So companies protect their networks with zero trust frameworks. Should hiring be moving towards a zero trust model as well? And what does that look like, for example, inside SAP Success Factors? If you can share.
Robert Richardson (16:02)
Sure, yeah, it’s a good question. Maybe just for your listeners, in case you’re not familiar with the term zero trust: it is an approach to security that builds trust by eliminating it, essentially, if that kind of makes sense. You require strong authentication from all users. You only provide the lowest level of system access required for an employee to complete whatever it is that they need to do. In a lot of ways, the assumption is that you have already been hacked, right? That you’ve already been breached, and zero trust then works to limit the amount of access that anyone could have, in order to ensure minimal harm, if that makes sense. So you build trust by removing it. That’s the zero trust framework. For candidates, this is a really interesting question.
I think the answer is yes, but with the concept of knowing your threat profile in mind, because there is a trade-off between security and privacy, if that makes sense. There’s also a trade-off between security and user experience, you know what I mean? So you have to take your threat model into consideration and decide how far you’re going to go with these approaches. As an example, if you were going to engage in a zero trust approach, you might require identification at every step of the application process. You might require a login at every single stage. You might take forensics on everything from a person’s ID to their voice to their face, and match it across every single step of the application process. You can engage in behavioral analysis, skills validation, continuous monitoring and audits. Post-hire, and you always should do this, you should do device monitoring to ensure malicious software hasn’t been installed. So I would say at the highest threat levels, if you are a defense company, yes, you probably should take zero trust as far as is reasonable given the threat level of each role. If the consequences of a fraudulent hire are lower for you, or your threat level isn’t high because you’re not that type of organization, then maybe you want to think differently about this and escalate, but not bring it to the absolute top tier, where you are monitoring the screen or the system of a person you’re just interviewing in a first interview. So there are levels of this.
Joseph Cole (19:03)
Yeah, I really like that: having different tiers based on security access, right? Or access to confidential information or company secrets. There was a question I had about that, but it just slipped my mind.
Okay, we’ll just move on to the next one. We’re seeing cases where fake candidates gained access to sensitive systems. What forms of risk do most organizations still underestimate?
Robert Richardson (19:36)
Everything? You know, I mean, I hate to say it, but in part, that’s the answer. Especially for HR, I just do not think HR is accustomed to thinking with a security mindset, you know what I mean? HR will set up your security training and then fail to think that a candidate might be working to hack into their applicant tracking system. It’s kind of wild, but that is the world we’re living in right now, where HR is not thinking about this. So the vulnerability of the application process, unfortunately, is extraordinarily high right now, especially, again, with the advent of some of these AI systems, and recruiters and hiring managers and people in HR never expected to be on the front lines of this.
Joseph Cole (20:34)
Yeah, no, that’s true. And actually, the question I had in my mind was more around making candidates feel comfortable going through this process, because it seems like an extra step. I feel like it is, but it’s important, right, that you have that communication with them. Have you? Yeah.
Robert Richardson (20:46)
Yeah. Yeah.
I loathe this trade-off. Yeah, I hear you. I hear you, because we are approaching a stage where proctored interviews might become necessary. Again, keeping in mind your threat profile, right? And maybe what stage you are at in the application process. But what a pain, you know what I mean? Candidates already hated taking our assessments.
They’ve been jumping through hoops forever. As a matter of fact, in many ways assessment has waned over the years, unfortunately, even though there are fabulous tools out there, because candidates simply hate them. We see drop-off when you put assessment techniques in place. But in an age when everyone can leverage AI to make their resume and their cover letter look perfect, when they can match it absolutely perfectly to the skills required in your application, and your system starts to rate candidates who are not a fit as incredible fits, skills validation starts to become necessary. And so, to your question, I think the answer is transparency. And we’ll get better and better at this, right? To be clear, we’re at the front end of this, and these processes will improve and become less painful, I assume. But I think transparency is the key.
There’s this test, the hand-in-front-of-the-face test. You can tell I’m not using a fake background; you can tell I’m real, because there’s no haloing around my fingers as I put my hand in front of my face, and things like that. I recommend recruiters do that with every single video interview. But it’s kind of weird to ask if you don’t provide a little context, you know? So I think transparency is the key: hey, we’re seeing a rise in fraudulent candidate activity.
Sorry to ask you this, I know it’s a little bit weird, but can you put your hand in front of your face? Would you mind standing up for a second, right? Move around and just demonstrate you’re real. And I think in that same way, our systems will have to be transparent and alert people to fraud and some of the reasons why we have to go through extra steps, especially in high-threat roles.
Joseph Cole (23:15)
Right, and I think this is a perfect segue to the next question, about what’s happening on the ground. You mentioned the rise of synthetic candidates through the use of deepfakes, right? So what’s the most surprising or most sophisticated fraud pattern or example that you’ve seen recently, whether it’s at SAP or, you know, other places?
Robert Richardson (23:39)
Sure. I like that you use that term, because if you’re talking about a pattern, it’s almost applications at scale. There are tools out there that will allow you to customize a resume to the job. They will review the job, determine what skills are most critical for that position, create a custom-tailored resume for that position, and do that at scale if you want, across hundreds and hundreds of jobs. That is all to say that, legitimately or illegitimately, because most of these tools can be used by perfectly legitimate candidates and by nefarious actors, the nefarious actors can do this at scale.
They can rifle out thousands of applications per day, right? And so the hard part for HR organizations and companies in general is that it’s red team versus blue team. It’s this classic security challenge that we face: as the blue team, I have to be right 100% of the time. As the red team, I only have to succeed once to hack your system, you know what I mean? So if I can rifle these out at scale, thousands and thousands of applications, all I have to do is get through once, while HR has to protect itself from every single one of those attacks.
Joseph Cole (25:21)
Right.
Robert Richardson (25:23)
That’s one, right? It’s robotic process automation applied with the use of generative AI at scale. And then the other thing I’d say is that some of these technologies are allowing for what I’ve started to think of as the rise of the professional applicant. So now you can look like anyone, you can sound like anyone, and, we haven’t talked about this, but there are tools that I can put right here in front of my screen. Sorry, for those who are listening, you can’t see I’m pointing right at the camera. And based on your questions, it will feed me the answers in real time. So if I’m a candidate, you can even ask me technical questions. You can ask me to provide code, as an example. It will listen to your question and provide answers to mundane questions, but it will also do things like provide coding responses. Now I copy and paste that, and it will even go as far as to give me the reasoning behind my coding methodology, so that now I can look like anyone, I can sound like anyone, and I can pass your interview questions.
Joseph Cole (26:38)
Yeah, speaking of which, this leads us nicely to the next question. There are so many tools like that out there. One of the more recent ones that has gotten a lot of news is Cluely, right? It provides that real-time coaching; it gives these unqualified candidates superpowers. So what is the hardest part of detecting this assisted fraud versus just outright impersonation? These tools are coming from everywhere, right? And they’re so sophisticated. In fact, Satish and I, he’s the Glider AI CEO, tested it. He’s technical, I’m non-technical, right? He’s an engineer by practice. And he was asking me really intense questions, and I was able to answer them, as a result of Cluely, and in context too.
Robert Richardson (27:07)
Yeah, yeah.
Yeah.
Sure. Yeah, I mean, they’re hard to detect to begin with, because people are unaware, you know what I mean? And so the very first step is what you’re doing right now, right? It’s podcasts, it’s articles, it’s really getting the word out there to recruiting organizations to say: this is real, this is happening today. It’s no longer a thought, or fictitious, or people theorizing about what the future is. It is here right now.
A Gartner study recently said that 6% of their respondents admitted to interview fraud. That was wild to me. 6% admitted to interview fraud. That’s crazy. So this is here right now. It is a rising problem. And sometimes we’re seeing even 40% of applications to some organizations being fake. So we’ve got to get the word out to let people know that this is happening.
And then I think, to Glider’s credit, you guys have some of these forensic detection technologies available to detect signals that indicate some sort of fraud or cheating might be afoot. And organizations do need to take seriously the implementation of both behavioral techniques and software-related techniques to identify fraudsters.
Joseph Cole (28:54)
Yeah. What is the balance, then, between, I guess, the use of AI to do a better job versus saying, you know, this is outright fraud?
Robert Richardson (29:04)
So I think of this as existing on a spectrum, right? Let me know if I’m getting at your question. I think of this as the integrity continuum, and it’s not a Robert original; that term has been out there for a while. But I think this spans a gamut, right? From honesty to distortion,
Joseph Cole (29:17)
I like that.
Robert Richardson (29:34)
cheating, and then fraud. So honesty is easy, right? We all kind of know what that means. Distortion would be leveraging embellishment, right? You know, I like to tease: how come every person has “senior” before their role? You can’t possibly have that many senior developers, you know what I mean? Or senior salespeople, or senior whatever. But that’s a pretty innocent one. Resumes have been fraught with embellishment and distortion forever. But it doesn’t cause the same level of harm as outright cheating does. You know what I mean? Recruiters are very accustomed to dealing with distortion. That’s why they’re skeptical on a regular basis. Cheating ups the game, you know? Co-pilots might help you add a lot of data to your resume that helps you pass a score in an applicant tracking system but may not be fully real. They may help you mass-apply to hundreds and hundreds of jobs, just to get a volume approach out there. So there are all kinds of ways that candidates can cheat now. They might use the tool we just talked about, right, which allows you to answer interview questions in real time, whether or not you really knew the answer. And then fraud takes that a level deeper. If distortion and cheating were harmful enough as it was, in the case of fraud there’s almost a level of maliciousness. They’re leveraging deepfake videos, stolen identities, falsified credentials, malware to up the ante. And there is presumably a level of malicious intent involved there.
I don’t know if that gets exactly at what you were asking, but that is how I think of it: it’s on a continuum.
Joseph Cole (31:32)
No
I like the continuum example because I feel like you need to provide a baseline for what is acceptable and what’s not, but also understand that some of these tools do help candidates who truly have the skill do a better job and be more effective.
Robert Richardson (31:52)
Thank you.
Yes, yes. Because let’s go back to the truth end of that scale. What wasn’t happening before is individuals who don’t need to be writing experts being well represented by their resume and cover letter. Right? If you’re a driver, if you’re a forklift operator, if you’re in one of the many types of roles that don’t require deep proficiency in writing, why are we requiring them to have perfectly tailored resumes? That’s been a misrepresentation, an accidental bias that we’ve applied to many types of roles for many, many years. So AI can really help those individuals to adequately represent their skills, the specific skills that matter for those roles, regardless of their writing technique or the template they found, right? Their ability to put together and craft a resume. I mean, many of us have spent 50 hours on a resume, right? Anybody who’s hunted for a job knows how big of a pain that is even if you’re a good writer. Now imagine not being a good writer and having to create that. Well, now it’s easy for everybody, or easier, you know? And so those individuals can represent themselves appropriately as well.
Joseph Cole (33:17)
Yeah, that’s great. So have you noticed whether there are certain roles or regions that are more prone to fraud? Are you seeing a disproportionate spike in different areas, different role types?
Robert Richardson (33:30)
Sure.
Yeah, yeah, I think there are. I think any role that is remote, any role that gives access to systems or confidential data, especially if it’s remote and provides access to sensitive information like your IP or your personnel data, as an example, those are going to be the highest targets today.
So we see IT developers, roles like that, being the biggest targets right now, especially if there’s an expectation that you might never meet that person in person. You know what I mean? It allows you to bring these tools to bear for those types of positions. But that said, they are not alone. I mentioned the rise of the professional applicant as a real risk with deepfakes, voice and video. And in that case, you can see a future, because it is happening today already, just maybe not at the scale we’ll soon see, where people are regularly applying on somebody else’s behalf, using their likeness and their voice, even with permission, to help them get the job. And then the real person does show up for the job, but whether they have the same skills that were represented in the application cycle, you know, is anybody’s guess.
Joseph Cole (35:08)
Right. So you work for SAP SuccessFactors, and it’s an ATS. To be fair, ATSs do some things great, and that’s what they’re made for. But an ATS was never really built to detect things like fraud. So what, I guess unofficially, do you believe to be the next generation of HR tech? What does HR tech need to do differently to establish trust in the hiring process again?
Robert Richardson (35:39)
Sure. I think it is assessing legitimacy at every stage of the application process. That’s really what needs to be done. So from the application to the initial screening to interviews to assessments to your background check, we need to put tools and behavioral techniques in place that provide signal. And we can take a lesson, for instance, from credit fraud. In those domains, organizations built really neat AI to detect signals. In financial transactions, oftentimes there’s not a singular red flag. There can be, right? But not one single flag all the time that just says, hey, this is fraud. Instead you gather all these signals regularly, and together they can raise a potential fraud score, so to speak. And that’s what we need to start paying attention to: data along every step of the application process that tells you it is more or less likely that the person applying is or is not who they’re representing themselves to be. And if it looks like they’re not, then there needs to be an alert.
And a human needs to be able to review it and potentially take action if there’s a problem.
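Robert’s credit-fraud analogy, many weak signals accumulating into a single score that triggers human review, can be sketched in a few lines of Python. All the signal names, weights, and the threshold below are hypothetical illustrations, not any vendor’s actual detection logic:

```python
# Sketch of signal-based fraud scoring: no single signal proves fraud,
# but several weak signals together can cross an alert threshold.
# Names, weights, and the threshold are illustrative assumptions only.
SIGNAL_WEIGHTS = {
    "resume_keyword_stuffing": 0.10,
    "mass_application_pattern": 0.15,
    "gaze_always_on_camera": 0.20,
    "lip_sync_mismatch": 0.30,
    "refused_liveness_check": 0.25,
}

ALERT_THRESHOLD = 0.5  # arbitrary cutoff for routing to a human reviewer


def fraud_score(observed_signals: set[str]) -> float:
    """Sum the weights of every signal observed for one candidate."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)


def needs_human_review(observed_signals: set[str]) -> bool:
    """Alert when accumulated signal crosses the threshold.

    As in credit fraud, the score raises an alert; a human still
    reviews it and decides whether to take action.
    """
    return fraud_score(observed_signals) >= ALERT_THRESHOLD
```

One weak signal, say a candidate who never looks away from the camera, stays below the threshold on its own; that same signal combined with a lip-sync mismatch and a refused liveness check raises an alert for a recruiter to investigate.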
Joseph Cole (37:11)
Yeah, and to expand upon that, the next question I have is this: AI is making it easier to build more and more tools, just as it is making it easier for candidates, bad candidates I should say, to cheat, or for others to commit fraud. And I don’t have the data on this. I don’t know how many tools HR or TA has now, but I’ve seen different technology landscapes. One of the issues, and I think you kind of alluded to this, is that there are just so many things out there that don’t connect the dots. They’re not talking to each other. They’re not communicating the right signals. So…
Robert Richardson (37:29)
Mm-hmm.
Joseph Cole (37:51)
How crucial, then, is identity and skill verification inside that workflow? It just can’t happen outside of it anymore.
Robert Richardson (37:59)
My gosh, yes. Yeah, what a great question. I’ll give you an example. It just reminds me of a little story. I was at HR Tech this last year, and an assessment vendor was interested in my opinion and potential partnership with SAP and that kind of thing.
I think they were a little surprised that I had a lukewarm response, and I didn’t really mean to, but what was really running through my mind, and what I eventually explained to her, was this: I’m not as interested in additional assessment techniques. Or I should say, I’m not interested in additional skills techniques. What I’m interested in is skills validation. Because we have all kinds of applicant tracking systems in the marketplace that already do skills. You know what I mean? We can pull skills out of your resume. That process is sort of complete and mundane at this point. What it doesn’t do is give you a proficiency level. And what it certainly doesn’t do is ensure that the skill is real. Right? And so there are these two camps. There’s that automated skills approach using AI, and then there’s skills assessment and validation. And assessment and validation have been waning for several years at this point because of what we talked about earlier. They are a pain, right? They do require candidates to go through extra effort. And so those AI-assisted techniques, where you can just identify what skills people have based on what they’ve written, really took off. And now, for better or worse,
all the techniques that you’ve come up with in HR are only as valid as your ability to defend them. So skills validation is now absolutely critical. If I identify these skills in the resume, then, again, depending on your threat profile, always keep that in mind, you may need an assessment technique to validate that you really have those skills.
Joseph Cole (40:19)
Right. Yeah, great. Not only because I work for Glider AI, but I do think, with what’s happening in the market, this needs to be integrated into the hiring process, and even post-hiring, right? You know, validating that someone has a skill and keeping them engaged as an employee so they can do a better job. So I agree, obviously. So how do you rethink, then, because we’re talking about recruiting, we’re talking about TA,
Robert Richardson (40:35)
Mm-hmm.
Hahaha.
Joseph Cole (40:49)
recruiter enablement, so they’re not the weakest link in this process?
Robert Richardson (40:55)
Yeah, I think recruiters could become one of the strongest links. Recruiters are incredibly savvy individuals, it turns out. Many of these folks have actually been trained in OSINT techniques, open-source intelligence gathering. That industry is really close to, and often akin to, social engineering as well. So many have a background in that area. They find people for a living.
You know what I mean? And again, they’ve been dealing with candidate fraud for a long time. So I think we just need to train recruiters, and they will very quickly become one of the strongest links in these chains. You know, give them the information, show them the new attack vectors, help them think through behavioral markers, implement technology that identifies markers on an automated basis, and they are a smart population that will do the rest.
Joseph Cole (41:59)
I agree. And we’ve all seen the data about the cost of a bad hire, and obviously the cost of a bad hire plus fraud amplifies that to a different degree. You’ve mentioned some of the psychology behind how fraud happens, but are there any anomalies that leaders should be trying to notice? And it’s not just recruiters; maybe it’s also education for hiring managers as well, so that they actually know that this is something big and something that is an issue.
Robert Richardson (42:32)
Yeah, yeah, there are. There’s a whole bunch of them. I love these; they’re really kind of fun. Anyway, if you nerd out on psychology and behavior, you’re going to love these. Some of them include simple things like eye scanning. For anybody who’s watching this, you can tell if I’m reading sometimes, because people’s eyes will just tick back and forth and back and forth.
They’re not looking around and thinking and engaging in cognitive processes; instead, their eyes are ticking back and forth. So you can watch for that reading pattern. Interestingly, though, there’s technology out there to solve that. NVIDIA, for example, came out with a driver that will always make it look like I’m looking at the camera. And so you have both sides of a sort of bell curve. You should watch for people consistently reading, you know, their eyes ticking back and forth. But you should also watch for cases where people never seem to look away from the camera, because there’s technology that always makes it look like they’re looking at the camera. When somebody thinks, they’ll often look up, or look to the side, and things like that; that’s an indication that they’re thinking. Other areas are things like overly scripted or perfect answers.
Joseph Cole (43:51)
Yeah.
Robert Richardson (43:56)
AI will put a linear answer out there. If you’re a nerd for this stuff, go look up the term statement analysis. What you’ll find is that it is a way of assessing the validity, or the veracity, of what somebody is saying. And what you find is that when people make mistakes while telling a story,
they get this far in the story and then have to go back and correct themselves. They’re like, wait, actually, sorry, what really happened was… That is an indication of truthfulness, in fact, even though it sounds like maybe it would be the reverse. They don’t have a perfectly scripted, A-to-Z set of sequences that AI came up with. Their thinking, and our thinking, works in that fashion. So statement analysis can be used. If there are no ums or uhs or, you know, mistakes in their speaking, that’s an indication that it might be AI. Bad lip syncing is another, right? If it looks like a badly lip-synced movie, that’s a pretty good indication. And I’m sorry, I know you’re going to cut in, but I’ll give you just one last one, just for fun, because we are seeing it all the time. These trainings we see nowadays with AI avatars sometimes drive me a little crazy, because I have a background in behavioral analysis. When I see them and their eyebrows move up, but they didn’t say anything interesting, you know what I mean? There’s a mismatch. If you watch these avatars and how they emote, the emotion does not follow what they’re saying. So if it’s a full AI avatar, rather than a deepfake, in your interview as an example, it will emote and gesture in ways that don’t have a strict correlation to the words it’s actually saying. Those are just some behavioral things you can look for, but it’s a fascinating area that you can absolutely train recruiters on.
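The disfluency cue Robert describes, human speech full of ums and self-corrections while scripted AI answers read perfectly linearly, can be sketched as a crude heuristic. The marker list and threshold below are assumptions for illustration, not a validated statement-analysis method, and, as Robert stresses, the output is a signal to investigate, never proof:

```python
import re

# Crude disfluency heuristic: human answers contain fillers and
# self-corrections; a suspiciously "perfect" transcript lacks them.
# Marker list and threshold are illustrative assumptions only.
DISFLUENCY_MARKERS = [
    r"\bum\b", r"\buh\b", r"\byou know\b", r"\bi mean\b",
    r"\bwait\b", r"\bactually\b", r"\bsorry\b",
]


def disfluency_rate(transcript: str) -> float:
    """Disfluency markers per 100 words of answer transcript."""
    words = transcript.split()
    if not words:
        return 0.0
    hits = sum(len(re.findall(p, transcript, re.IGNORECASE))
               for p in DISFLUENCY_MARKERS)
    return 100.0 * hits / len(words)


def suspiciously_scripted(transcript: str, threshold: float = 0.5) -> bool:
    """Flag (not prove) near-zero-disfluency answers for follow-up."""
    return disfluency_rate(transcript) < threshold
```

A self-correcting answer like "Um, wait, actually, sorry, what really happened was…" scores high on disfluency and passes; a flawless, linear answer scores near zero and gets flagged for a recruiter to probe with follow-up questions.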
Joseph Cole (46:08)
Yeah, fascinating. And obviously SAP has taken a very advanced look at this, probably more than a lot of other organizations. I think part of that is you understand the severity, and you’ve got massive, very complex customers, right, that are dealing with very, very sensitive data. So without naming names, do you have any customer examples where you’ve seen fraud, or have been told about it?
Robert Richardson (46:36)
Yeah, I can actually give you a really interesting example. Just given my travels in this domain, in the presentations that I’m delivering on this front, it starts out really surprising. And people are just shocked that this is occurring. And then they take the information to their own teams. And so I had one individual as an example who attended one of these presentations.
And same thing: he was shocked, couldn’t believe it was happening, took the same presentation that I had delivered, showed it to his own recruiters, and stopped me at a conference months later to say, you would not believe it. I showed your presentation to my recruiting team, and they are now regularly bumping into people who will not do the hand-in-front-of-the-face technique, or test. They’re not willing to put their hand in front of their face to demonstrate that they’re not a deepfake.
And so, you know, we talked about how some things are, and some things are not, proof that you’re absolutely engaged in fraud; this one is a signal. I would ask follow-up questions, but it’s a good signal. You know what I mean? If that person is not willing to demonstrate that they’re a real individual, there’s risk there, and there’s something to look into. I’ll tell you a counter to that, though. At the conference we just had in Chicago,
Joseph Cole (47:57)
Yeah
Robert Richardson (48:02)
I mentioned this as well, and somebody said, well, you could have them get up as well to prove that they’re human. And that’s absolutely true. And somebody else said, yeah, but what if they’re wearing pajama pants? And so, like I was saying before, you have to be careful. Look at everything as signal and not necessarily proof. Come up with a theory, right? Test these things. It just means you should look into it deeper, because that person may have a legitimate reason why they’re not willing to stand up.
Joseph Cole (48:32)
Yeah, and basically, what new compliance or HR considerations have to be thought of during interviews now? Like, yeah, maybe you can’t ask someone to stand up because of that, or they can’t stand up, you know. So…
Robert Richardson (48:32)
And wearing your pajama pants doesn’t disqualify you.
This is why you have to explain things, to keep them from being too awkward. You know what I mean? Hey, look, I’m sorry, we’re seeing a real rise in the use of AI, so I apologize: can you just spin around in your chair? Here we go, I’m spinning in my chair to prove I’m real. Can you stand up? Can you put your hand in front of your face? So explain it. Explain it to your candidates.
Joseph Cole (48:52)
Yeah.
Yeah
Yeah,
Exactly, exactly. I think that’s very true. Now, thinking ahead, it’s crazy to think that 2030 is only five, well, going to be four years away at this point. So what do you think will be the ultimate answer, right? What becomes the credit score of hiring integrity?
Robert Richardson (49:27)
right?
That feels like a loaded question, given that you’re, you know, an AI security tech company. I don’t know, maybe… can I answer this question a different way? You know, I think, if I put on my HR tech futures hat, I would say that
Joseph Cole (49:41)
and AI, and if you’re a flash, you get fraud detection.
You totally can.
Robert Richardson (50:05)
For many years now, super geeks like myself have been warning about this kind of thing, since 2019, like we started by talking about. In the last two years, democratized AI has made this huge already, right? It’s a big deal. Companies are absolutely seeing it. They are being defrauded by this process. We have examples of organizations that have published warnings, to their credit, saying
that their own organizations actually hired North Korean operatives who used deepfakes to get through their process and, in fact, were installing malicious software on their corporate laptops; that’s how they caught them. This stuff is absolutely happening today. So we are here. We are here right now. And now we’re in that second phase, right? The reality phase. And what you’re going to see right now
is a wave of emergency implementation of defenses, so to speak, because I know that right now, while we’re publishing this, for many people it will be the first time they have ever heard about this topic. Right? I can almost guarantee you, by the end of 2026, we can come back here and you can judge whether or not my prediction was true. In 2026, you won’t be able to get away from this topic.
You will see post after post on LinkedIn and social media, right? Warning you that this is happening. So we’re in the reality phase. We’re seeing it happen. That means there’s going to be a large number of startups that seize on the opportunity, right? Every time red team makes an advance, blue team has to rise to the challenge. And so you’re going to see new software out there, like Glider, that can help protect you from these nefarious actors. And by the way, we’ve spent a lot of time
talking about fraudulent behavior. There is the lower end of that spectrum where it’s cheating. You know what I mean? Maybe it’s not going to cause deep reputational harm, but it’s still causing harm. And tools like Glider, and other tools that are out there, will protect you from a gamut of attacks, so to speak. That’s the phase we’re in right now. You’ll also see big tech build and partner
Joseph Cole (52:22)
Yeah.
Robert Richardson (52:29)
to protect these applicant tracking systems, and ultimately, of course, acquire technologies as well. And then, thinking forward, like you’re asking, the next phase is going to be that organizations will recognize, and some of us already are, that this certainly does not stop outside of your organization. This will absolutely become a problem inside your organization too.
So just because I was hired and passed all of this scrutiny does not mean that someone didn’t hack my system, or steal my login credentials from an email, log into your meeting, and leverage a deepfake to steal information, right? To steal your IP or your PII or whatever it might be. We saw this happen all the time in the past, and it still happens, that
organizations or individuals steal conference IDs, right? Back when conferences were bigger than Teams meetings, people would gain access to a conference call, just record it, and have all the information. Well, imagine now that person looks like you and sounds like you, but isn’t you. That’s going to happen as well. And so the bounds of this will not be limited to your applicant tracking system, but will extend to your corporate architecture, which will have to do continuous tracking and continuous analysis to ensure that you’re a real person. And hopefully that zero-trust approach you mentioned earlier, Joseph, will broaden to the internet as a whole. Zero-trust strategies will be implemented across the board so that we can have deeper trust in all the content we’re seeing online: content that is transparent, where we know whether it’s real or not. Zero trust leads to trust.
Joseph Cole (54:24)
Right. Yeah, well, Robert, I learned a lot. Fascinating conversation. Thank you for sharing your insights today. If people want to get in touch with you, I’m assuming they can just type Robert Richardson, SAP, and they should be able to connect with you on LinkedIn.
Robert Richardson (54:40)
Yeah, certainly. LinkedIn
is a great place for it. I am, what is it, slash in slash Robert Richardson, whatever their URL is; I have the Robert Richardson. So that’s me. You can also feel free to email me as well, robert.richardson at SAP. And I’m happy to continue the dialogue on this topic if you’re interested.
Joseph Cole (55:01)
Awesome. And I’m sure you’re writing articles left and right and posting them on your LinkedIn, so people should probably follow you there too. Thank you, Robert. Before I close this out, is there anything else that you want to say, or another question you would like me to ask?
Robert Richardson (55:07)
Yeah, it’d be great. Yeah, thank you. I appreciate the plug.
Let me think about that for a second. I mean… no, just, to begin with, thank you. You know what I mean? I’ll exit by saying thank you very much, Joseph, because I really appreciate that your organization has taken this so seriously. You are early in this phase. We’ve seen this dramatic rise in cheating and fraud, and there aren’t an enormous number of tech companies
that can help defend against it. So, to your credit, Glider is one of those organizations that is out on the front of this. So thank you guys as well for paying attention to the issue.
Joseph Cole (56:01)
Yeah, truly appreciate that Robert. So thank you again.
Robert Richardson (56:05)
Yeah, yeah, you’re very welcome.
