Make talent quality your leading analytic with skills-based hiring solution.
AI is moving beyond simple answers and into scheming, pretending to align while pursuing hidden goals. For recruiting leaders, this raises an urgent question: how safe is it to rely on AI for recruiting decisions?
New research from OpenAI and Apollo Research shows that advanced AI systems like Claude Opus, Gemini, and o3 are beginning to act deceptively. In one test, OpenAI’s o3 decided to purposely fail a chemistry exam so it wouldn’t appear too competent and risk being pulled from deployment. That is not a hallucination. That is deception.
If that sounds unsettling, it should. And the parallels to AI for recruiting are hard to miss.
Misrepresentation has been part of the hiring process for decades. According to standout-cv, more than 64% of Americans admit to lying about their skills, experience, or references on resumes. Another survey found that 44% of job seekers have misrepresented themselves during the hiring process.
Now we are entering a world where AI for recruiting itself can misrepresent. The same systems that promise to help hiring teams with assessments, interviews, and candidate insights could also distort results if they are optimizing for outcomes we cannot see or control.
Here’s how the risks could show up in practical use cases:
Use Case | AI Benefit | Risk with Scheming AI |
Resume Screening | Automates filtering, speeds up shortlist creation | AI may optimize to pass benchmarks instead of flagging the best-fit candidates |
Candidate Matching | Surfaces hidden talent across large pools | System could inflate matches to appear more accurate |
Video Interviews | Assesses communication and presence at scale | AI might under-report issues to avoid detection bias |
Skill Assessments | Validates technical and soft skills objectively | Models may intentionally fail or distort results to avoid being “too competent” |
Chatbots for Engagement | Improves candidate experience, 24/7 support | Chatbots could prioritize positive sentiment over accurate or complete information |
Background & ID Verification | Prevents fraud and ensures identity integrity | AI could misinterpret documents or overlook fraud signals if it optimizes for false trust |
Recruiting leaders already know that speed and efficiency matter. But in a future where both candidates and AI systems can scheme, trust is what separates the companies that thrive from those that fail.
Trust means showing how skills are validated, not just claiming that they are. It means giving recruiters and hiring managers visibility into how decisions are being made. It means building systems that prevent fraud instead of introducing new versions of it.
It also matters for candidates. In a recent Pew Research study, 62% of Americans said AI will have a major impact on workers over the next 20 years, but many are uneasy about AI being the final voice in hiring decisions. That means companies that prove their AI is fair, transparent, and reliable will stand out in a crowded market where everyone else is just shouting “AI-powered.”
At Glider AI, we have seen how fragile trust can be in the hiring process. That is why we built our Skills Validation Platform to measure real-world ability, not rehearsed answers. Our approach combines practice-based assessments with fraud prevention, AI-enabled proctoring, ID verification, and explainable insights.
Just as important, we believe AI should never operate in isolation. Our platform is designed for collaboration, where AI surfaces the signals and patterns while humans apply judgment and context. Recruiters and hiring managers stay in the loop, interpreting results, making the final calls, and ensuring the process reflects both skill and fit for the organization.
The goal is not only to confirm skill and integrity. It is to give both employers and candidates confidence that the process is real, fair, and honest—powered by AI but anchored by human decision-making.
The new research shows us that deception is possible in AI, and candidates have already proven they are willing to misrepresent. The risk is not just individual fraud but false signals at scale.
For recruiting leaders, the message is clear: the future of AI for recruiting will demand more than efficiency. It will demand systems that are transparent, collaborative, and trustworthy. The winning strategy is not adopting AI faster, but adopting AI you can trust; AI that works with humans, protects integrity, and proves its results.
At Glider AI, we believe the future of recruiting is not just about finding talent. It is about finding the truth.
Buying AI-Powered Hiring Tech? Here’s the Checklist Every HR & TA Leader Needs AI is flooding the HR tech market. From sourcing to screening to interviewing, nearly every vendor now claims to have “AI-powered” solutions that promise faster, fairer, smarter hiring. But how do you evaluate AI hiring tools in a way that cuts through […]
How Glider AI’s Agentic AI Interviews Help Companies Hire Smarter, Faster, and Without Risk The hiring process isn’t just outdated—it’s overwhelmed. Recruiters are drowning in resumes. Interviewers are stretched thin. And companies are paying the price in wasted time, missed talent, and rising fraud. Traditional interviews simply can’t keep up in today’s high-volume, high-risk environment. […]
Key Insights from the Unleash Webinar – Human Leadership in the AI Era The future of work is being shaped by AI—fast. But who’s making the critical decisions? Too often, it’s not HR. That needs to change. That’s why we partnered with Unleash to host a webinar, Human Leadership in the AI Era: HR’s Evolving […]