Addressing the Threat of Fake Job Candidates
5.5.2026

With the rise of artificial intelligence, remote hiring has entered a new era. Legitimate job candidates use AI tools to polish their resumes, cover letters and job applications, while deceptive candidates exploit AI tools to misrepresent their credentials, skills and experience. Threat actors, including those directed by state-sponsored regimes, leverage AI tools – including deepfakes and synthetic identities – to scale hiring fraud and evade traditional screening. In this new era, onboarding an AI-generated impostor into a remote role is not just a bad hire; it is a high-consequence business threat with potentially far-reaching and overlapping employment, cybersecurity and sanctions consequences.
Experian’s 2026 Future of Fraud Forecast highlights the top five fraud threats expected to have the biggest impact on businesses and consumers this year. Experian predicts that deepfakes will outsmart human resources, ranking that threat second highest. Employment fraud is set to escalate in the remote workforce as generative AI produces hyper-tailored resumes and deepfake candidates capable of passing interviews in real time. Experian believes that employers will unknowingly onboard individuals who aren’t who they say they are on a much larger scale, giving bad actors access to sensitive systems. This emerging threat is expected to reshape how organizations verify identity and intent in the hiring process.[1]
Defining the AI Impostor: Deepfake and Synthetic Identities
Deepfakes and synthetic identities are separate concepts that, when used together, form a single fraud: a deepfake impostor using a synthetic identity.
- Deepfakes are visual or audio impersonations. A combination of “deep learning” and “fake,” a deepfake refers to visual or audio content manipulated by advanced AI tools to change how a person, object, or environment appears. Deepfakes include: face swapping (digitally replacing one person’s face with another); face re‑enactment (altering a real person’s facial features); face generation (creating an entirely fake person); speech synthesis (converting text to an AI‑generated voice); and voice cloning (mimicking a real person’s voice and speech patterns).[2] In 2025, advances in generative AI made it much easier “for just about anyone” to create higher-quality deepfakes that are more realistic and harder to detect.[3]
- A synthetic identity is broader: a fabricated or heavily altered identity that combines real and fake data. The Federal Reserve defines synthetic identity fraud in the financial context as “[t]he use of a combination of personally identifiable information (PII) to fabricate a person or entity in order to commit a dishonest act for personal or financial gain.”[4] Primary elements of PII are identity elements that are, in combination, typically unique to an individual or profile – for example, name, date of birth, Social Security number and other government-issued identifiers. Supplemental elements of PII can help substantiate or enhance the validity of an identity but cannot establish an identity by themselves – for example, mailing or billing address, phone number, email address, or digital footprint.[5]
A deepfake impostor using a synthetic identity (AI impostor) uses a stolen, purchased, or fully synthetic identity with matching AI-generated fake information, such as a fake LinkedIn profile and engagement history, a fabricated resume, and an authentic-sounding cover letter. For a virtual interview, virtual onboarding and remote employment, the AI impostor animates the face of the synthetic identity with deepfake video and audio.[6]
“Generative AI has blurred the line between what it is to be a human and what it means to be a machine,” said Vijay Balasubramaniyan, CEO and co-founder of Pindrop, an information security company, in an April 2025 CNBC article.[7]
Remote jobs are a gateway for AI impostors. According to the research and advisory firm Gartner, “The rise of AI-generated job seeker profiles means that by 2028 globally 1 in 4 applicants will be fake.”[8]
“Deepfake candidates are infiltrating the job market at a crazy, unprecedented rate,” said Balasubramaniyan in a July 2025 CNBC article.[9] He said his company caught a deepfake candidate, internally dubbed “Ivan X,” who “represented himself as a Russian coder when he applied to Pindrop for a posted backend senior engineering role.”[10] He seemed qualified on paper, but things fell apart during his virtual interview when the recruiter saw red flags: his facial expressions were out of sync with his words; his voice did not align perfectly with his lip movements; and he couldn’t immediately respond when the recruiter asked an unexpected technical question, pausing unnaturally before answering.[11] Eight days later, “Ivan X” reapplied to the same posting through a different recruiter, using the same identity and credentials. Having already identified him as an AI impostor, human resources gave him a second video interview to collect more information. They were startled that “Ivan X” looked like a different person. When he abruptly dropped the call and then rejoined, his deepfake image had improved, “highlighting how quickly these bad actors can improve their use of technology,” said Pindrop’s Chief People Officer Christine Kaszubski Aldrich.[12]
North Korean Remote IT Worker Scheme Impacts National Security and Sanctions
The Democratic People’s Republic of North Korea’s remote IT‑worker scheme is a years‑long, regime‑directed campaign using skilled tech workers posing as remote employees who use fraudulent identities from other countries to infiltrate companies in the U.S. and elsewhere.[13] Once hired, these workers route salaries and contract payments back to the North Korean regime, steal sensitive data, and, in some cases, threaten to leak or lock systems to extort additional payments – conduct U.S. authorities describe as directly funding prohibited weapons programs, including ballistic missiles.[14] U.N. officials estimate that North Korean IT‑worker schemes have generated global proceeds of $250 million to $600 million annually since 2018, with some part of that coming from U.S. companies.[15]
Federal Agency Warnings: 2022-24
Beginning in May 2022, U.S. authorities issued a series of advisories and public service announcements to U.S. and foreign businesses, describing how IT workers acting on behalf of the North Korean state were obtaining remote positions using fraudulent identities with foreign and U.S. companies and routing their salaries to the regime in violation of U.S. and U.N. sanctions. The May 16, 2022 joint guidance from the State Department, Treasury, and the FBI emphasized the reputational and legal consequences for companies and individuals who enable or process transactions for North Korea, including possible sanctions designation under U.S. and U.N. authorities. That advisory identified “red flag indicators” for employers hiring remote workers, such as incorrect or frequently changing contact information; requests to ship company‑issued devices to addresses not shown on identification documents; and stolen, altered, or otherwise falsified identity documents, including those created with image‑editing software.[16]
On Oct. 18, 2023, an FBI public service announcement provided follow‑up guidance for companies that had inadvertently hired, or suspected they had hired, synthetic North Korean IT workers. The announcement reiterated that the wages – totaling millions of dollars – were funding North Korea’s weapons programs, including ballistic missiles, and warned that the same schemes can enable data theft and money laundering. The FBI urged employers to use live video interviews and cross‑check resumes against online profiles, independently verify contact and address details and scrutinize mismatches with shipping addresses for company equipment, and restrict remote‑access and collaboration tools on corporate devices while reporting suspicious activity to the Internet Crime Complaint Center and other federal contacts.[17]
A May 16, 2024 FBI public service announcement shifted focus to “U.S.-based facilitators” – sometimes called “laptop mules” – who are paid to help North Korean IT workers secure fraudulent employment with U.S. companies by hosting them in domestic “laptop farms” and installing remote‑access tools so overseas workers can appear to log in from the United States. That public service announcement urged employers to implement stronger identity verification processes during hiring and onboarding; educate HR staff, hiring managers and technical teams about foreign remote‑worker schemes; and monitor applicants and employees for suspicious address changes, particularly just before company laptops are shipped to a purported U.S. location.
New York State Department of Financial Services Cybersecurity Advisory
On Nov. 1, 2024, the New York State Department of Financial Services issued an industry letter to the financial institutions it regulates, urging caution when hiring for all-remote technology positions. The department advised covered entities to:
- Raise awareness by briefing executives, HR, hiring managers, security teams and key vendors on North Korean remote worker risks and red flags.
- Strengthen hiring controls through tighter identity and background checks, live video interviews, multifactor ID verification, IP address/location checks and careful reference checks.
- Harden and monitor remote access by enforcing least‑privilege access (especially for technical roles), restricting remote‑access tools, tracking corporate device use and location and promptly investigating and reporting suspected North Korean IT‑worker incidents to law enforcement and regulators (including NYDFS).[18]
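One of the hiring controls above – the IP address/location check – can be sketched in a few lines. This is an illustrative sketch only: the geolocate() lookup, region codes and sample IPs (drawn from documentation-only ranges) are assumptions standing in for a real geo-IP service or commercial IP-intelligence database, not any product recommended by NYDFS.

```python
def geolocate(ip: str) -> str:
    """Placeholder lookup: map an IP address to a region code.

    A real system would query a geo-IP database or API; this demo table
    uses IANA documentation-range addresses purely for illustration.
    """
    demo_db = {"203.0.113.7": "US-AZ", "198.51.100.9": "KP"}
    return demo_db.get(ip, "UNKNOWN")


def location_mismatch(claimed_region: str, login_ip: str) -> bool:
    """Return True when a login's geolocated region contradicts the
    applicant's claimed work location; unknown IPs are not flagged."""
    observed = geolocate(login_ip)
    return observed != "UNKNOWN" and observed != claimed_region


print(location_mismatch("US-AZ", "203.0.113.7"))   # False: consistent with claim
print(location_mismatch("US-AZ", "198.51.100.9"))  # True: contradicts claim
```

In practice this check would run at every login, not just at onboarding, since facilitators often relay traffic through U.S.-based laptop farms precisely to defeat a one-time check.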
KnowBe4 and Amazon Publicly Disclose Inadvertent North Korean Hires
On July 15, 2024, KnowBe4, a cybersecurity company that provides security awareness and simulated phishing training, discovered during the onboarding process for a remote software engineer that he was a North Korean IT worker. The company investigated, shared the collected data with the FBI and Mandiant, its cybersecurity expert, and confirmed that no illegal access had occurred and no data was compromised or exfiltrated. KnowBe4 chose to publicly disclose the incident – through a blog post, online FAQ and press release/white paper – as a warning to other companies.[19]
In a July 23, 2024 incident report blog post, KnowBe4’s executive chair and founder, Stu Sjouwerman, explained that the company’s internal investigation found that HR had conducted four separate videoconference interviews with the candidate, confirming that he matched the photo provided on his application. However, the candidate had altered the photo to create a profile picture that matched his face during the video interviews. The background check and other standard pre‑hiring checks were performed and came back clear based on the candidate’s stolen U.S. identity. The post‑incident review found the background check inadequate: the names used were inconsistent, and references were not properly vetted, relying only on email references. He was discovered when unusual computer activity was flagged as soon as he logged in to his company laptop and began loading malware, triggering alerts to the company’s Security Operations Center. When the security team reached out, the impostor claimed he was troubleshooting a speed issue, then became unavailable for a call and stopped responding. Security quarantined the workstation and later determined that during the 25 minutes the impostor was online, he had unsuccessfully attempted to manipulate session history files, transfer potentially harmful files and execute unauthorized software.[20]
Sjouwerman reported that a key change in KnowBe4’s practices is that the company no longer ships laptops directly to new employees’ home addresses. Post‑incident, KnowBe4 ships equipment to UPS‑Store‑style pickup locations near the address on the job application and requires a photo ID check after other vetting is complete. This single step would have prevented the incident discussed in the previous paragraph.
On Dec. 20, 2025, Amazon disclosed that earlier in the year, the company’s security monitoring had flagged unusual behavior on a company laptop assigned to a systems administrator. The device was physically located in Arizona, but the keystrokes reaching Amazon’s Seattle infrastructure showed latency slightly above the normal threshold, “a subtle but telling indicator of overseas access.”[21] Amazon Chief Security Officer Stephen Schmidt later noted that the company had prevented more than 1,800 suspected North Korean operatives from being hired using fake or stolen identities, a 27% quarter‑over‑quarter increase in 2025. Schmidt and subsequent reporting described how Amazon combines AI‑driven anomaly detection with manual review to spot suspected North Korean applicants, including looking for inconsistent education histories, hijacked or suspicious online profiles and geographic discrepancies between claimed location and technical signals. Schmidt added that Amazon’s visibility into large‑scale cyber threats “gives us a responsibility to share what we’re learning.” He warned that “small details give them away,” such as incorrectly formatted phone numbers and inconsistent education histories, and that tech and AI companies are “prime targets” for North Korean IT‑worker schemes because they combine valuable data with remote roles that are easy to abuse.[22]
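The latency signal Amazon describes can be illustrated with a minimal anomaly-detection sketch: flag a session whose keystroke round-trip times deviate sharply from the user’s historical baseline. The function name, the z-score threshold and all the numbers below are assumptions for illustration, not Amazon’s actual implementation.

```python
from statistics import mean, stdev


def flag_latency_anomaly(samples_ms, baseline_ms, z_threshold=3.0):
    """Flag a session whose mean keystroke round-trip latency sits far
    above the user's historical baseline.

    samples_ms:  recent latency measurements for this session (ms)
    baseline_ms: historical measurements for this user/location (ms)
    Returns True when the session mean exceeds the baseline mean by more
    than z_threshold standard deviations, suggesting the keystrokes may
    be relayed from farther away than the device's claimed location.
    """
    mu = mean(baseline_ms)
    sigma = stdev(baseline_ms) or 1e-9  # guard against a zero-variance baseline
    z = (mean(samples_ms) - mu) / sigma
    return z > z_threshold


# Illustrative numbers only: an in-region session hovers around 40 ms;
# traffic relayed overseas through a "laptop farm" adds tens of milliseconds.
baseline = [38, 41, 40, 39, 42, 40, 41, 39]
print(flag_latency_anomaly([72, 75, 70, 74], baseline))  # True: flagged
print(flag_latency_anomaly([40, 41, 39, 40], baseline))  # False: normal
```

A production system would of course combine a signal like this with the other indicators mentioned above (profile inconsistencies, geographic discrepancies) before escalating to manual review, since latency alone varies with network conditions.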
DOJ’s Concerted Enforcement Response
In March 2025, the U.S. Department of Justice announced coordinated nationwide enforcement actions targeting North Korean remote-worker schemes, including indictments of multiple U.S.‑based facilitators, searches of 29 laptop farms in 16 states, and evidence that more than 130 U.S. companies had been tricked into hiring North Korean IT workers.[23] In July 2025, the FBI issued a warning to U.S. businesses about North Korean IT workers and outlined steps they should take, including:
- Scrutinize identity and employment documents for inconsistencies and independently verify prior employment and education.
- Cross‑check photos and contact details against social media or other online profiles.
- Use unobscured live video interviews and ask simple movement or location‑specific questions to help detect AI‑generated or manipulated video.
- Ship laptops and other equipment only to the address on the worker’s identification and delay system access until background checks are complete.
- Train and brief HR, hiring managers and third‑party IT vendors about these schemes as contract IT work is a common way North Korean workers get in.[24]
In July 2025, in a case that received national attention, Christina Chapman, a 50-year-old Arizona resident, was sentenced in the U.S. District Court for the District of Columbia to 8½ years in prison, plus restitution of $284,566 in salary not yet paid to the North Koreans by their 309 inadvertent U.S. employers and a money judgment of $176,850 – the fees she received from the North Koreans for hosting a “laptop farm” at her home from 2020 to 2023, which generated more than $17 million in salaries funneled to the North Korean regime. Based on press releases and reporting, the unnamed victimized companies included a Fortune 500 car maker, a major television network, a Silicon Valley technology company, an aerospace manufacturer, a luxury retail company and a major media and entertainment company. According to reporting, Nike filed a crime‑victim impact statement with the court, disclosing it had unwittingly hired a remote North Korean worker to whom it had paid about $70,000.[25]
In February 2026, 29-year-old Ukrainian national Oleksandr Didenko was sentenced to five years in prison for running an identity-rental service and paying U.S. hosts to operate laptops in several states; at least one was sent to Chapman’s laptop farm. His operation helped North Korean IT workers get jobs at about 40 unnamed U.S. companies. He also agreed to forfeit more than $1.4 million and to pay restitution tied to the identity‑theft conduct.[26]
The corporate victims of the North Korean IT worker scheme reflected in the DOJ prosecutions total 479 to date. There are hundreds more, reported Mandiant Consulting Chief Technology Officer Charles Carmakal at a May 2025 security conference media briefing: “Literally every Fortune 500 company has at least dozens, if not hundreds of applications from North Korean IT workers,” he said, adding that nearly every chief information officer he has spoken to “[h]as admitted they’ve hired at least one North Korean worker, if not a dozen or few dozen.”[27]
Conclusion
When KnowBe4 publicly disclosed its discovery of an AI impostor hire during onboarding, Sjouwerman had this advice: “If it can happen to us, it can happen to almost anyone. Don’t let it happen to you.”[28]
The first step in preventing an AI impostor from getting a foot in the door is to harden those doors, including those for staffing agencies and other third-party vendors. The FBI and the New York State Department of Financial Services recommend identifying and closing gaps in the end-to-end hiring process. Employer-facing alerts and articles on this subject provide valuable guidance.[29] Some large employers, including Google, have reintroduced mandatory in-person interview rounds – even for hybrid or remote roles – to address AI-assisted and deepfake-enabled interview fraud.[30]
Initiating and implementing changes to the hiring process require compliance not only with applicable federal laws and regulations, but also with the “rapidly expanding patchwork” of state regulation of AI use in hiring. For example, New York City employers and employment agencies must follow the requirements of Local Law 144 (effective 2023) for the use of Automated Employment Decision Tools – tools that substantially assist or replace discretionary decision-making in hiring or promotion decisions for jobs located in New York City.[31] Technology and human resource managers will also need to ensure compliance with state laws governing biometric verification privacy and monitor evolving legislation.[32]
Being deceived into hiring an AI impostor is now a foreseeable risk in remote hiring; like other foreseeable high-impact threats, an organization may not be able to prevent every incident but is expected to plan for it.[33] It is also foreseeable that the AI impostor is a state-sponsored actor. Sanctions add a layer of strict liability exposure: victims may be liable even without actual knowledge of where salary payments are going. Companies must therefore factor this risk into their compliance programs and voluntarily report incidents to law enforcement in a timely manner.[34]
By analogy to ransomware, employers should plan for deepfake and synthetic‑identity hiring risks with the same kind of risk‑based, scaled‑to‑the‑organization controls and incident‑response planning, rather than waiting for a catastrophic incident to force change.[35]
Priscilla Lundin is a consultant with Employment Practices Solutions, Inc., a human resource-focused consulting firm. She is a member of the New York State Bar Association’s Labor and Employment Law Section’s Workplace Rights and Responsibilities Committee.
Endnotes:
[1] Experian, 2026 Future of Fraud Forecast (2026), https://www.experian.com/thought-leadership/business/2026-future-of-fraud-forecast-infographic.
[2] See, e.g., Nat’l Sec. Agency et al., Contextualizing Deepfake Threats to Organizations 1 (2023),
https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF.
[3] Siwei Lyu, Deepfakes Leveled Up in 2025 – Here’s What’s Coming Next, The Conversation (Jan. 11, 2026), https://theconversation.com/deepfakes-leveled-up-in-2025-heres-whats-coming-next-271391.
[4] Synthetic Identity Fraud Defined, FedPayments Improvement (Fed. Reserve Sys.), https://fedpaymentsimprovement.org/strategic-initiatives/payments-security/synthetic-identity-payments-fraud/synthetic-identity-fraud-defined/.
[5] Id.
[6] See, e.g., Christine Kaszubski Aldrich, Think You Won’t Be Targeted by Deepfake Candidates? Think Again, Pindrop (Apr. 8, 2025, updated Sept. 15, 2025), https://www.pindrop.com/article/targeted-by-deepfake-candidates/; Paul Barsness & Matthew C. Lonergan, AI, Deepfakes, and the Rise of the Fake Applicant – What Employers Need to Know, Bradley (Apr. 10, 2025), https://www.bradley.com/insights/publications/2025/06/ai-deepfakes-and-the-rise-of-the-fake-applicant-what-employers-need-to-know; Ng S.T. Chong, The New Arms Race: How to Protect Your Hiring Process from AI-Assisted Interview Fraud, UNU Campus Computing Centre (Nov. 26, 2025), https://c3.unu.edu/blog/the-new-arms-race-how-to-protect-your-hiring-process-from-ai-assisted-interview-fraud; Tom Sullivan, Synthetic Identity Fraud: How to Detect and Prevent It, Plaid (Jan. 15, 2026), https://plaid.com/resources/fraud/synthetic-identity-fraud; Clive Bourke, Recruitment Fraud: How AI and Deepfakes Are Hijacking the Hiring Process, Daon (June 12, 2025), https://www.daon.com/resource/recruitment-fraud-how-ai-and-deepfakes-are-hijacking-the-hiring-process/.
[7] Hugh Son, Fake Job Seekers Use AI to Interview for Remote Jobs, Tech CEOs Say, CNBC (Apr. 8, 2025), https://www.cnbc.com/2025/04/08/fake-job-seekers-use-ai-to-interview-for-remote-jobs-tech-ceos-say.html.
[8] By 2028, 1 in 4 Candidate Profiles Will Be Fake, Gartner Predicts, HR Dive (Aug. 8, 2025), https://finance.yahoo.com/news/2028-1-4-candidate-profiles-111800899.html.
[9] Anuz Thapa and Hugh Son, How Deepfake AI Job Applicants Are Stealing Remote Work, CNBC (July 11, 2025), https://www.cnbc.com/2025/07/11/how-deepfake-ai-job-applicants-are-stealing-remote-work.html.
[10] Christine Kaszubski Aldrich, Think You Won’t Be Targeted by Deepfake Candidates? Think Again, Pindrop (Apr. 8, 2025, updated Sept. 15, 2025), https://www.pindrop.com/article/targeted-by-deepfake-candidates.
[11] Id.
[12] Id.
[13] See Guidance on the Democratic People’s Republic of Korea Information Technology Workers, U.S. Dep’t of State, U.S. Dep’t of the Treasury and Fed. Bureau of Investigation (May 16, 2022), https://ofac.treasury.gov/media/923126/download; Publication of North Korea Information Technology Workers Advisory, U.S. Dep’t of the Treasury (May 16, 2022), https://ofac.treasury.gov/recent-actions/20220516; Treasury Sanctions Clandestine IT Worker Network Funding the DPRK Regime, U.S. Dep’t of the Treasury (July 23, 2025), https://home.treasury.gov/news/press-releases/sb0205; Kelly Phillips Erb, North Korean Tech Workers Infiltrating Companies Around World, N.Y. Times (July 2, 2025), https://www.nytimes.com/2025/07/02/world/asia/north-korea-tech-workers.html.
[14] See Guidance on the Democratic People’s Republic of Korea Information Technology Workers, U.S. Dep’t of State, U.S. Dep’t of the Treasury and Fed. Bureau of Investigation (May 16, 2022), https://ofac.treasury.gov/media/923126/download; North Korean Tactics, Techniques, and Procedures for Revenue Generation, Fed. Bureau of Investigation (Oct. 18, 2023), https://www.ic3.gov/PSA/2023/PSA231018.pdf; Matt Stankiewicz, Navigating OFAC Sanctions Risks From North Korean Remote IT Workers, JDSupra (Oct. 30, 2025), https://www.jdsupra.com/legalnews/navigating-ofac-sanctions-risks-from-2049989/.
[15] See Chloe Taylor, Thousands of North Korean IT Workers Have Infiltrated the Fortune 500 – And They’re Using AI To Get Hired, Fortune (Apr. 6, 2025), https://fortune.com/2025/04/07/north-korean-it-workers-infiltrating-fortune-500-companies/ (reporting that U.N. estimates put the IT‑worker scheme at “$250 million to $600 million every year since 2018”); Amanda Gerut, North Korean IT Worker Infiltrations Exploded 220% Over the Past 12 Months, Fortune (Aug. 4, 2025), https://fortune.com/2025/08/04/north-korean-it-worker-infiltrations-exploded.
[16] See U.S. Dep’t of State, U.S. Dep’t of the Treasury & Fed. Bureau of Investigation, Guidance on the Democratic People’s Republic of Korea Information Technology Workers (May 16, 2022), https://ofac.treasury.gov/media/923126/download; Publication of North Korea Information Technology Workers Advisory, U.S. Dep’t of the Treasury (May 16, 2022), https://ofac.treasury.gov/recent-actions/20220516.
[17] See Fed. Bureau of Investigation, Public Service Announcement, Additional Guidance on the Democratic People’s Republic of Korea Information Technology Workers (Oct. 18, 2023), https://www.ic3.gov/PSA/2023/PSA231018.
[18] See N.Y. Dep’t of Fin. Servs., Cybersecurity Advisory (Nov. 1, 2024); NYDFS Warns Against the Threat of Accidentally Hiring North Korean Remote IT Workers, Byteback Blog (Nov. 19, 2024), https://www.bytebacklaw.com/2024/11/nydfs-warns-against-the-threat-of-accidentally-hiring-north-korean-remote-it-workers/.
[19] See, e.g., Joseph Cox, How a North Korean Remote Worker Got Hired by a U.S. Cybersecurity Company, Vice (July 25, 2024); KnowBe4, How a North Korean Fake IT Worker Tried to Infiltrate Us (July 22, 2024); Field Effect Security Intelligence Team, Cybersecurity Company Accidentally Hires North Korean Threat Actor (July 24, 2024).
[20] See KnowBe4, How a North Korean Fake IT Worker Tried to Infiltrate Us (July 22, 2024), https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us; North Korean Fake IT Worker FAQ (July 24, 2024, updated July 27, 2024), https://blog.knowbe4.com/north-korean-fake-it-worker-faq; North Korean Fake Employees Are Everywhere! How to Protect Your Organization (Sept. 17, 2024), https://www.knowbe4.com/press/knowbe4-issues-warning-to-organizations-after-hiring-fake-north-korean-employee. See also, Tonya Riley, Cyber Firm KnowBe4 Hired a Fake IT Worker from North Korea, Cyberscoop (July 23, 2024), https://cyberscoop.com/cyber-firm-knowbe4-hired-a-fake-it-worker-from-north-korea/; North Korean Fake IT Worker Dupes Security Firm: A Wake‑Up Call for Employers, Forbes: Hum. Res. Council (July 25, 2024), https://www.forbes.com/sites/alonzomartinez/2024/07/25/north-korean-fake-it-worker-dupes-security-firm-a-wake-up-call-for-employers/.
[21] Ken Underhill, Amazon Detects North Korean IT Infiltrator via Latency Clues, ESecurity Planet (Dec. 18, 2025), https://www.esecurityplanet.com/threats/amazon-detects-north-korean-it-infiltrator-via-latency-clues/.
[22] Stephen Schmidt, LinkedIn (Dec. 17, 2025, 9:14 AM), https://www.linkedin.com/posts/stephenschmidt1_over-the-past-few-years-north-korean-dprk-activity-7407485036142276610-dot7; Viktor Eriksson, Amazon Has Stopped 1,800 Job Applications from North Korean Agents, CSO Online (Dec. 23, 2025), https://www.csoonline.com/article/4111148/amazon-has-stopped-1800-job-applications-from-north-korean-agents.html.
[23] See, e.g., Press Release, U.S. Dep’t of Justice, Charges and Seizures Brought in Fraud Scheme Aimed at Denying Revenue to Workers Associated with North Korea (Mar. 26, 2025).
[24] Fed. Bureau of Investigation, North Korean IT Worker Threats to U.S. Businesses (Pub. Serv. Announcement, July 23, 2025), https://www.ic3.gov/PSA/2025/PSA250723-4.
[25] See Press Release, U.S. Dep’t of Justice, Arizona Woman Sentenced for $17M Information Technology Worker Fraud Scheme That Generated Revenue for North Korea (July 23, 2025); IRS Crim. Investigation, IRS‑CI Reveals Top 10 Cases of 2025 (Dec. 21, 2025); Amanda Gerut, An American Who Helped North Korean IT Workers Rake In Millions Gets 8½ Years, Fortune (July 18, 2025).
[26] Press Release, U.S. Att’y’s Off. for the D.C., Ukrainian National Sentenced in ‘Laptop Farm’ Scheme That Generated Income for North Korean IT Workers (Feb. 19, 2026), https://www.justice.gov/usao-dc/pr/ukrainian-national-sentenced-laptop-farm-scheme-generated-income-north-korean-it-workers.
[27] Hundreds of Fortune 500 Companies Have Hired North Korean Operatives, KnowBe4 (Apr. 30, 2025), https://blog.knowbe4.com/hundreds-of-fortune-500-companies-have-hired-north-korean-operatives.
[28] Stu Sjouwerman, How a North Korean Fake IT Worker Tried to Infiltrate Us, KnowBe4 Blog (July 22, 2024) (updated Oct. 19, 2024), https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us.
[29] Paul R. Barsness & Matthew C. Lonergan, AI, Deepfakes, and the Rise of the Fake Applicant – What Employers Need to Know, Bradley.com (Apr. 10, 2025), https://www.bradley.com/insights/publications/2025/06/ai-deepfakes-and-the-rise-of-the-fake-applicant-what-employers-need-to-know (discussing the use of deepfakes and generative AI by job applicants and outlining steps employers can take in screening and onboarding); Miller Nash LLP, How Employers Can Protect Themselves from Deepfake Employees (Dec. 2, 2025), https://www.millernash.com/industry-news/deepfake-employees-are-here-heres-how-employers-can-protect-themselves (identifying “warning signs of potential fake employees during onboarding” and recommending follow‑up checks when fraud is suspected); HYPR, HR’s 2026 Guide to Identity Verification in Onboarding (June 12, 2025), https://www.hypr.com/blog/hr-interview-onboarding-fraud-guide (advising HR to use biometric and liveness verification, geolocation checks and lifecycle identity controls to combat deepfake and AI‑powered applicant fraud during interviewing and onboarding); AMS Inform, Deepfakes, Synthetic Identities, and the New Frontier of Hiring Fraud (Feb. 22, 2026), https://www.amsinform.com/fake-degree/the-new-face-of-hiring-fraud.
[30] See Dexter Tilo, Google Opts for In-Person Interviews amid Surge in AI-Aided Candidates, HRD Connect (Aug. 12, 2025), https://www.hcamag.com/us/specialization/hr-technology/google-opts-for-in-person-interviews-amid-surge-in-ai-aided-candidates/545926.
[31] See, e.g., K&L Gates, Navigating the AI Employment Landscape in 2026: Considerations and Best Practices for Employers (Feb. 2, 2026), https://www.klgates.com/Navigating-the-AI-Employment-Landscape-in-2026-Considerations-and-Best-Practices-for-Employers-2-2-2026 (discussing AI laws affecting employment decisions); McDermott Will & Emery, State Laws on AI Hiring Tools Persist After One Big Beautiful Bill Act (Sept. 16, 2025), https://www.mwe.com/insights/state-laws-on-ai-hiring-tools-persist-after-obbba/ (noting a “rapidly expanding patchwork of state legislation” regulating AI in hiring); N.Y.C., N.Y., Local Law No. 144 of 2021 (codified at N.Y.C. Admin. Code §§ 20‑870–20‑874).
[32] Bryan Cave Leighton Paisner LLP, U.S. Biometric Laws & Pending Legislation Tracker – January 2026 (Jan. 15, 2026).
[33] See, e.g., Industry Letter on Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks, N.Y. State Dep’t of Fin. Servs. (Oct. 16, 2024), https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks.
[34] See OFAC and FinCEN Release Advisories on Risks of Ransomware Payments, Crowell & Moring LLP (Oct. 2, 2020), https://www.crowell.com/en/insights/client-alerts/ofac-and-fincen-release-advisories-on-risks-of-ransomware-payments; U.S. Dep’t of the Treasury, Office of Foreign Assets Control, Updated Advisory on Potential Sanctions Risks for Facilitating Ransomware Payments (Sept. 21, 2021), https://ofac.treasury.gov (discussing sanctions exposure and compliance expectations related to ransomware payments); From Deepfakes to Sanctions Violations: The Rise of North Korean Remote IT Worker Schemes, Crowell & Moring LLP (Sept. 18, 2025), https://www.crowell.com/en/insights/client-alerts/from-deepfakes-to-sanctions-violations-the-rise-of-north-korean-remote-it-worker-schemes; Justice Department Announces Coordinated, Nationwide Actions to Combat North Korean Remote IT Worker Scheme, U.S. Dep’t of Just. (July 2, 2025), https://www.justice.gov/opa/pr/justice-department-announces-coordinated-nationwide-actions-combat-north-korean-remote; Cybersecurity Advisory – Threats Posed by Remote Technology Workers with Ties to Democratic People’s Republic of Korea, N.Y. State Dep’t of Fin. Servs. (Nov. 1, 2024), https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241101-cyber-advisory-remote-workers-nk (describing DPRK remote IT worker schemes using synthetic identities and highlighting related sanctions and cybersecurity risks).
[35] Jennifer Andrus, How To Prepare and Respond to Ransomware Attacks, New York State Bar Association (Jan. 16, 2026), https://nysba.org/how-to-prepare-and-respond-to-ransomware-attacks/.





