North Korean APTs Use AI to Enhance IT Worker Scams
DPRK worker scams are old hat, but they're still working, thanks to AI tools that help with everything from face swapping to daily emails.
While threat actors across the board struggle to meaningfully upgrade their cyberattacks with artificial intelligence (AI), North Korean threat actors are making more practical use of the same technology to perpetuate their classic IT worker scams.
In a new report, Microsoft's threat intelligence team described how two clusters of malicious actors tied to the Democratic People's Republic of Korea (DPRK) — "Jasper Sleet" and "Coral Sleet" — use AI in a variety of ways to improve the scale and precision of their fraudulent campaigns, enabling "sustained, large-scale misuse of legitimate access" to organizations that don't know better. They're using it to more effectively fabricate their identities, maintain those identities, and socially engineer their prospective employers in all kinds of small but meaningful ways.
None of the tactics, techniques, and procedures (TTPs) described in the report are novel. Still, they're useful for organizations to know about, as the same old shtick has continued to bring the bad guys success for years now, despite more widespread awareness and law enforcement counteraction.
How AI Helps Fake IT Workers Apply for Jobs
There's no stage of an IT worker scam that isn't touched by — if not entirely enabled by — AI technology.
Long before a company receives a fake resume and cover letter in its inbox, threat actors use AI tools to research the jobs they want to target on platforms like Upwork, and how to most effectively apply for them. They use these tools to extract useful terminology from job postings, and to identify the requirements that might make a fake application look good, such as certifications, skills, or tools applicants are expected to possess.
Working with a linguistic and cultural gap, the threat actors within a group like Jasper Sleet will then prompt large language model (LLM) chatbots for fake names, email addresses, and social media handles that might appear convincing to their intended victims. It goes without saying that they also use chatbots to write their resumes and cover letters.
Finally, threat actors bring all of this information together to create convincing digital personas mimicking IT talent. These personas can be used repeatedly to apply for various jobs across different employers.
Sometimes these personas are AI-generated, from the details of a resume through the polished headshot used at the top. In other cases, Jasper Sleet has used a commercial face-swapping app called Faceswap to insert their own chosen faces into real individuals' stolen identity documents. And in interviews with prospective employers, the group supplements fake visuals with voice-changing software.
How AI Helps Fake IT Workers Do Their Jobs
Securing a gig is just phase one of the attack. AI remains essential when fake IT workers actually have to do their jobs.
Part of it is about keeping up the ruse. Having initially presented as a certain kind of person, with certain qualifications and a certain character of speech, the threat actors then have to perform the actions and maintain the tone of voice their employer expects. That could mean successfully fulfilling tasks handed to them by their employer, or presenting consistently across email and chat platforms used for daily communication.
In a lot of ways, though their intent is different, DPRK threat actors use AI just like your average business user. They ask it to help them respond to emails, generate snippets of code, and carry out any number of other little tasks. And, much like those users, the threat actors have also been observed by Microsoft experimenting with agentic AI.
"Although not yet observed at scale and limited by reliability and operational risk, these efforts point to a potential shift toward more adaptive threat actor tradecraft that could complicate detection and response," the researchers wrote.
Though revenue generation for Kim Jong Un's regime is the first goal of any IT worker scam, exploiting insider access to Western organizations is always a nice bonus. Beyond doing their jobs, actors like Coral Sleet use AI — and sometimes jailbreak it — to quickly develop Web infrastructure, generate and refine malware, and, of course, assist with social engineering. Coral Sleet also uses agentic AI to string together a fully automated cyberattack workflow: to create fake company websites, remotely provision infrastructure, test and deploy malicious payloads, and more.
Brian Hussey, senior vice president of Cyber Fusion at Cyderes, argues that attackers will have to continue upgrading their IT worker scams with AI because organizations are catching on to their longstanding tricks.
"Increased awareness among hiring teams is clearly making a difference. Many organizations are now incorporating verification questions during remote interviews, such as asking applicants about local landmarks or activities in the city they claim to live in. Some even ask cultural or political questions that a covert North Korean operator would be hesitant to answer candidly. These approaches are not foolproof, but they demonstrate that organizations are becoming more vigilant," he says.
Anecdotally, he adds, "We have seen fewer investigations related to this activity over the past six months. That may not fully reflect the broader threat landscape, but it could suggest a temporary slowdown or a shift in tactics."