A recent cyber-espionage incident has exposed how a North Korea-linked hacking group is misusing AI tools like ChatGPT to produce deepfake military IDs. The group, known as Kimsuky, used these forged documents in a spear-phishing campaign targeting South Korean defense-related institutions as well as journalists, human rights activists, and researchers.
What Happened: The Attack Details
- In July 2025, the attack began with spear-phishing emails sent to selected individuals in military, defense, or human rights roles in South Korea.
- The emails posed as official military correspondence about ID issuance (“ID issuance for military-affiliated officials”) and carried attachments whose malicious code was disguised as drafts or sample design files.
- Inside a ZIP archive named something like Government_ID_Draft.zip was a shortcut file (.lnk) that, when opened, ran hidden commands (PowerShell and batch scripts) to download malware; a minimal detection sketch for this delivery pattern follows below. At the same time, the campaign included a deepfake image of a South Korean military ID card, intended to lend credibility to the email and trick recipients into trusting it.
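The Genians report does not publish detection code, but the delivery pattern described above (a .lnk shortcut hidden inside a ZIP attachment) is straightforward to screen for. The Python sketch below is a minimal illustration of such a check; the archive name and the extension list are illustrative assumptions, not details taken from the report.

```python
import zipfile

# Extensions commonly abused as droppers in campaigns like this one
# (shortcut and script files that launch PowerShell/batch commands).
SUSPICIOUS_EXTENSIONS = {".lnk", ".bat", ".cmd", ".ps1", ".vbs", ".js", ".hta"}

def flag_suspicious_members(path: str) -> list[str]:
    """Return archive members whose file types are commonly used as droppers."""
    flagged = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if any(name.lower().endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
                flagged.append(name)
    return flagged

if __name__ == "__main__":
    # Hypothetical attachment name modeled on the one described in the article.
    for member in flag_suspicious_members("Government_ID_Draft.zip"):
        print(f"Warning: potentially malicious member in archive: {member}")
```

A real mail gateway would combine a check like this with sandbox detonation and signature scanning rather than rely on file extensions alone.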
How AI & ChatGPT Were Exploited
- Although ChatGPT (and similar AI models) has built-in restrictions against generating realistic government or official IDs, the hackers bypassed these safeguards by rephrasing their prompts: for example, they asked for sample designs or mock-ups instead of a direct reproduction of a real ID.
- The fake ID images appear to have been generated with ChatGPT’s image-generation tools (or at least aided by them) and were then embedded in the phishing emails to make them more believable.
Why This Matters: Risks & Implications
- Erosion of trust in institutions: If military IDs and government documents can be convincingly faked, it undermines trust in verification systems.
- Increased risk to targeted individuals: Journalists, activists, and human rights researchers are frequent targets of such operations, and tools like these raise the sophistication of the threats they face.
- AI misuse is evolving: This case shows that AI is no longer just a tool for content generation or chat; it is becoming part of the threat actor’s toolbox for social engineering, identity spoofing, and even malware delivery.
- Challenges to security and detection: Because the attackers combined visual deepfake elements, obfuscated malware, and email impersonation (such as lookalikes of “.mil.kr” addresses), it becomes harder for ordinary users, and sometimes even security teams, to distinguish real from fake; a simple lookalike-domain check is sketched below.
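The article does not say exactly which lookalike address was used beyond the “.mil.kr” pattern, so the sketch below is purely illustrative: the trusted domains, the similarity threshold, and the example senders are assumptions chosen for demonstration. It flags senders whose domain is close to, but not exactly, a domain the organization trusts.

```python
from difflib import SequenceMatcher

# Domains the organization actually expects official mail from (illustrative).
TRUSTED_DOMAINS = {"mil.kr", "mnd.go.kr"}

def domain_of(address: str) -> str:
    """Extract the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_like_spoof(address: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain resembles, but does not match, a trusted domain."""
    domain = domain_of(address)
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_spoof("admin@mil.kr"))  # False: exact trusted domain
print(looks_like_spoof("admin@mi1.kr"))  # True: one-character lookalike (hypothetical)
```

Similarity scoring of this kind is only one heuristic; production filters typically pair it with allow-lists and the authentication checks sketched in the mitigation section below.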
Expert Response & Recommended Mitigations
- Organizations must increase awareness about how AI can be misused, especially among defense, media, and civil society actors.
- Cybersecurity firms recommend strict procedures for verifying official identity documents sent over email, and close scrutiny of unexpected requests or draft documents.
- Email filtering, anti-phishing tools, and regular staff training are essential; a simplified header-check sketch that such filtering might include follows this list.
- AI providers (like OpenAI) are urged to strengthen their detection/safeguard systems, especially to make it harder to bypass restrictions via prompt rephrasing.
- Governments may need to regulate generative-AI misuse, define clear ethical and legal boundaries, and enforce penalties for violations.
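As one concrete piece of the filtering mentioned above, a gateway or client-side script can inspect the Authentication-Results header that the receiving mail server adds after evaluating SPF, DKIM, and DMARC. The sketch below is deliberately simplified (real headers vary by provider and need more robust parsing), and the sender address and subject line are hypothetical, modeled loosely on the lure described earlier.

```python
import email
from email.message import Message

def auth_failures(msg: Message) -> list[str]:
    """Return which of SPF/DKIM/DMARC did not pass, based on the
    Authentication-Results header added by the receiving server."""
    header = msg.get("Authentication-Results", "").lower()
    return [m for m in ("spf", "dkim", "dmarc") if f"{m}=pass" not in header]

# Minimal example built from a hypothetical raw message.
raw = (
    "Authentication-Results: mx.example.org; spf=fail; dkim=none; dmarc=fail\r\n"
    "From: ID Office <issuance@mil.kr.example>\r\n"
    "Subject: ID issuance for military-affiliated officials\r\n"
    "\r\n"
    "Please review the attached draft.\r\n"
)
msg = email.message_from_string(raw)
print(auth_failures(msg))  # ['spf', 'dkim', 'dmarc'] -> quarantine or warn the user
```

Messages that fail these checks and carry archive attachments would be prime candidates for quarantine and manual review.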
What We Know & What Remains Unclear
Known:
- The group Kimsuky is linked to the North Korean regime and known for intelligence-gathering operations targeting South Korea and other countries.
- The campaign is real and documented by Genians Security Center (South Korea).
Less clear:
- Number of actual victims affected (how many clicked or had malware installed).
- Full technical details of how the image generation was integrated, such as which version of ChatGPT or which image-generation model was used (Cybernews).
Bigger Picture: The Rise of AI-Powered State-Sponsored Threats
This incident is part of a rising trend: state-sponsored hacking groups are increasingly using generative AI tools to forge identities, write fake résumés, craft phishing content, develop malware and scripts, and more.
AI lowers the barrier to entry for such attacks: even actors without deep graphic-design or forgery skills can generate realistic content. Prompt-engineering loopholes and the inability of current safeguards to catch every malicious intent remain serious risk points.
Conclusion
The use of ChatGPT to produce deepfake military IDs is a wake-up call that generative AI can be weaponised in real, high-stakes espionage operations. As the technology becomes more accessible, both providers and institutions must raise their defenses, refine detection methods, and educate those at risk. The line between legitimate and malicious use of AI is becoming thinner – understanding when that line is crossed will be essential for preserving digital security and trust.