Fake IT Workers: A Story of Employee Impersonation and Inherited Risk


This year, expo floors are louder than ever. But the real signal is quieter, traded in hallway whispers and private meetings among security leaders grappling with the same unsettling truth:

“We hired someone who wasn’t who they said they were.”

Not what if. Not someday. It’s already happened. And it’s happened to almost everyone.

The lucky ones discovered it during audits. Most uncovered their impostor after an incident. 

One CISO told me how their latest hire “didn’t seem quite right.” For months, it was just a hunch. Then a routine helpdesk ticket unraveled a months-long impersonation, complete with falsified identity documents, deepfake-masked video interviews, and someone else entirely doing the employee’s work.

And that’s just one story. Across company sizes and sectors, I heard two confessions echo over and over, variations on a sinister theme: 

  1. “We thought we hired one person. We onboarded someone else.”
  2. “We thought we hired a superstar. They were actually from North Korea.”

Employee impersonation isn’t a future threat. It’s a current reality.

Workforce identity is a new primary attack surface. Why? Because today’s Identity & Access Management (IAM) systems were designed to authenticate users based on little more than knowledge of a secret or possession of an enrolled device.

Authentication factors, from passwords to passkeys, verify knowledge of a secret or possession of an enrolled device, not the human using them. And in a post-COVID world of remote hiring and global workforces, it’s harder than ever to know who’s really behind the screen.

The background check mirage: HR owns the process, IT inherits the risk.

Hiring checks amount to little more than blind trust. We hand out credentials to people we’ve only seen through a webcam. We assume that background checks and I-9 validation are both rigorous and effective. They’re not.

I watch companies invest millions in robust endpoint protection, phishing-resistant multi-factor authentication, and highly sophisticated monitoring systems. And I watch them leave their front door—specifically their hiring and onboarding process—wide open. 

Here’s the uncomfortable truth that CIOs, CISOs, and HR leaders are now grappling with:

Most companies have no idea whether the person they hired is the same person being onboarded, or whether that person is legitimate at all.

Today’s hiring checks verify criminal records, employment history, and work authorization. Not actual people.

Ask HR and they’ll tell you the new hire was cleared. Background verified. No red flags. And in a way, they’re not wrong; the background check was clear. But here’s the catch: many U.S. background checks do little more than look up a Social Security Number against public criminal records databases. When a North Korean operative uses a stolen SSN that belongs to a real American with a clean record, the background check comes back green.

Maybe your company does “enhanced” background checks that go into education and employment histories. But we’re dealing with nation-state-sponsored fraud here; North Korea has demonstrated, repeatedly, that they can craft a compelling synthetic identity that passes these checks, too.

Remember: HR isn’t tasked with verifying the actual identity of an applicant. This means IT inherits all the risk, but has no role in verifying who someone actually is before issuing credentials.

And that’s how attackers win: they don’t bypass security controls; they become trusted users by convincing you to hire them. And once inside your networks, those secure credentials you spent so much time and money deploying can suddenly, ironically, turn against you.

The new insider threat wears a (deepfake) mask.

We’re well into the era of synthetic insiders. It’s common practice for even legitimate candidates to use AI-optimized resumes and cover letters, and some lean on live genAI assistants during their interviews. But some candidates are also using deepfake video filters, fake or altered identity documents, VPNs, remote access tools (RATs), and willing collaborators.

Sometimes it’s basic employment fraud. But more and more often, it’s a coordinated attack. Search “North Korean IT workers” on Google News, sort by Recent, and you’ll see what I mean. Fake IT workers affiliated with the DPRK are confirmed to have infiltrated “hundreds of the Fortune 500” and sent billions of dollars in revenue back to the North Korean regime. 

Threat intelligence researchers at Mandiant, DTEX Systems, and Palo Alto Networks have published detailed reports and dire warnings, and the American, British, and South Korean governments have issued advisories and even sanctions targeting North Korean IT workers.

This is no longer an edge case. It’s a global epidemic of misplaced trust.

IT security starts with identity security. Identity security starts with onboarding security.

Whether your insider threat is North Korean or a garden-variety fraudster, the core problem is the same: unvetted identities gaining legitimate access to your most sensitive systems.

It’s time for companies to stop hoping that their hires are who they say they are and start proving it instead. Increasingly, I’m seeing that this means embedding identity verification directly into employee hiring and onboarding flows. 

Identity verification (IDV) asks a person to scan a government-issued photo ID document and then take a selfie to prove that they’re not impersonating someone else. The system checks that the ID is authentic, that the selfie comes from a live human rather than a replay or deepfake, and that the selfie matches the photo on the ID.
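
To make that flow concrete, here is a minimal sketch of the decision logic such a system might run. The data fields, function name, and threshold are illustrative assumptions for this example, not any particular vendor’s API.

    from dataclasses import dataclass

    @dataclass
    class VerificationResult:
        id_authentic: bool       # ID passed document authenticity checks (security features, barcode/MRZ consistency)
        selfie_live: bool        # selfie came from a live person, not a replay, photo, or deepfake
        face_match_score: float  # similarity between the selfie and the ID portrait, 0.0 to 1.0

    # Illustrative threshold; real systems tune this against false accept/reject rates.
    FACE_MATCH_THRESHOLD = 0.90

    def verify_identity(result: VerificationResult) -> bool:
        """Pass only if all three checks succeed: authentic ID, live selfie, and a face match."""
        return (
            result.id_authentic
            and result.selfie_live
            and result.face_match_score >= FACE_MATCH_THRESHOLD
        )

    # A spoofed selfie fails verification even when the face match looks strong.
    print(verify_identity(VerificationResult(True, False, 0.97)))  # False

The point of the sketch is the conjunction: a strong face match means nothing if the document is forged or the selfie is injected, which is why all three checks have to pass together.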

There are dozens of companies selling IDV products. And although the core user flow is largely the same, there are important differences in how they implement and protect this flow, and how they manage data privacy. 

The upshot is that some IDV systems are consumer-grade and some are workforce-grade. And it’s important to understand the difference.

Consumer-grade IDV performs Know Your Customer (KYC) checks with the goal of converting users into customers. These products are built for compliance, not for security. Some companies are trying to retrofit their KYC products into workforce use cases, with mixed results. 

Workforce-grade IDV is built from the ground up for workforce use cases like employee onboarding and MFA resets. These products typically don’t check the boxes that KYC and other regulations require. But the tradeoff is that they are much more secure against injection attacks, deepfakes, and other advanced threats that beat consumer-grade IDV systems. 

Four factors to consider when implementing workforce IDV.

I’ve had hundreds, maybe thousands, of conversations with CIOs, CISOs, and their teams over the past few years. I’m glad to be able to say that IT and security departments are increasingly aware of IDV as a solution to these problems. But some confusion remains.

So, here’s what I encourage CIOs to consider when evaluating IDV solutions for the workforce.

Capture channels: Allowing users to capture their ID and selfie via a webcam is undoubtedly convenient, but it comes with a major security tradeoff. Webcams and web browsers, including mobile browsers, are highly susceptible to injection attacks that insert deepfake media and false data. I also strongly recommend that you completely avoid any IDV system that allows people to upload documents or selfie photos.

Liveness detection: How do you prove that it’s a real, live human being actually in front of the camera? IDV companies typically speak in terms of “active” or “passive” liveness detection. Active models make a user dance for the camera, turning their head side to side or in a circle. Passive methods range from flashing lights to basic data analysis to more sophisticated “spatial” techniques leveraging three-dimensional depth maps and other advanced sensors.

Integrations: One might argue that any security technology is only as good as its ease of deployment. Increasingly, IDV providers are building plug-and-play integrations with workforce identity providers and enterprise apps. Look for a company that slots into your existing tech stack (IAM, HRIS, ITSM, SIEM, ATS, etc.), with little to no dev work. Consider if you need a turnkey solution you can deploy ASAP, building blocks you’ll need to piece together yourself, or something in between.

Privacy: I’d be remiss if I didn’t talk about data privacy and security here. Using IDV in a workforce context brings very different privacy considerations than in consumer use cases. For example, if you want to use IDV to verify candidates at the interview stage, be sure that you research any applicable laws or guidance. And make sure that your IDV system can be configured to adhere to those regulations, for example by surfacing only the result of the verification rather than the underlying documents; the sketch after these four factors shows one way that pattern might look in practice.
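
Here is a minimal sketch of that kind of workforce IDV integration, assuming a hypothetical provider that delivers results via signed webhooks. The endpoint path, header name, payload fields, and the update_onboarding_ticket helper are assumptions for the example, not any vendor’s actual API.

    # Receives a verification outcome from a hypothetical IDV provider and updates
    # an onboarding ticket. Only a pass/fail result and a correlation ID cross this
    # boundary; no ID images, document numbers, or selfies are ever received or stored.
    import hashlib
    import hmac
    import os

    from flask import Flask, abort, request

    app = Flask(__name__)
    WEBHOOK_SECRET = os.environ["IDV_WEBHOOK_SECRET"]  # shared secret from the IDV provider

    def update_onboarding_ticket(ticket_id: str, verified: bool) -> None:
        """Placeholder for an ITSM/HRIS call, e.g., resolving or escalating a helpdesk ticket."""
        status = "identity_verified" if verified else "identity_review_required"
        print(f"ticket {ticket_id} -> {status}")

    @app.post("/webhooks/idv")
    def idv_webhook():
        # Check the HMAC signature so nobody can forge a "verified" result.
        signature = request.headers.get("X-IDV-Signature", "")
        expected = hmac.new(WEBHOOK_SECRET.encode(), request.data, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            abort(401)

        event = request.get_json()
        update_onboarding_ticket(
            ticket_id=event["ticket_id"],
            verified=event["result"] == "verified",
        )
        return {"ok": True}

Two design choices matter here: the signature check keeps an attacker from injecting a fake “verified” event, and the payload is deliberately limited to an outcome plus a correlation ID, which keeps sensitive identity documents inside the IDV provider and out of your ticketing and HR systems.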

The conversation we’re not having loudly enough: Secure credentials are the frame, not the foundation.

Security teams shouldn’t be the last to find out there’s an impersonator inside the network. But today, they often are. And that’s because true identity verification was never part of your hiring process.

Whether it’s for new hires, contractors, offshore teams, or step-up verifications later in the employee lifecycle, identity verification is the critical missing element for ensuring that the human to whom you’re issuing credentials is exactly who they claim to be.

If you don’t know who exactly is enrolling or resetting credentials, every other layer of your security stack is built on sand.

It’s time to shift left, not just in software development, but in trust. Because if your enterprise is still issuing credentials without verifying who you’re issuing them to, you may already have hired an impostor.

And by the time you realize it, they’re already inside.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Aaron Painter is the CEO of Nametag, the world’s first identity verification system purpose-built to combat deepfakes and other AI-powered threats. Driven by his personal experiences with online fraud and identity theft, Aaron assembled a team of security experts to create the next generation of account protection. Aaron is a best-selling author, former VP and General Manager at Microsoft, Fellow at the Royal Society of Arts, Founder Fellow at OnDeck, a member of the Forbes Business Council, and a senior External Advisor to Bain & Company.
