Government-backed hacking groups are increasingly using artificial intelligence to speed up and tailor their attacks, according to CrowdStrike’s 2025 threat-hunting report. The security firm said AI is now a core part of cyber operations, enabling attackers to conduct reconnaissance, evaluate vulnerabilities, and generate convincing phishing messages.

CrowdStrike found that adversaries are also using AI to “automate tasks and improve their tools,” boosting the efficiency of both state-sponsored and criminal campaigns. The Iran-linked group Charming Kitten “likely” used AI to write phishing messages targeting U.S. and European organizations in 2024. Another actor, dubbed Reconnaissance Spider, “almost certainly” used AI to translate a phishing lure into Ukrainian, inadvertently leaving the model’s boilerplate response text in the finished message.
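The Reconnaissance Spider slip, model boilerplate left behind in the final lure, is the kind of artifact defenders can screen inbound mail for. A minimal sketch of such a check (the phrase list and function name here are illustrative assumptions, not taken from CrowdStrike’s report):

```python
# Hypothetical sketch: flag leftover LLM boilerplate phrases in an email
# body. The phrase list is illustrative and would need tuning in practice.

LLM_BOILERPLATE = (
    "certainly! here is",
    "as an ai language model",
    "i hope this helps",
    "here is the translation",
)

def find_llm_boilerplate(body: str) -> list[str]:
    """Return boilerplate phrases found in the body (case-insensitive)."""
    lowered = body.lower()
    return [phrase for phrase in LLM_BOILERPLATE if phrase in lowered]

# Example: a lure where the attacker pasted the model's reply verbatim.
lure = "Here is the translation you requested:\n\nShanovnyi kolego, ..."
print(find_llm_boilerplate(lure))  # ['here is the translation']
```

A string-match filter like this catches only careless operators; it is a cheap signal to feed into broader phishing detection, not a standalone control.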

North Korea’s Famous Chollima, also tracked as UNC5267, used generative AI to sustain “an exceptionally high operational tempo” of more than 320 intrusions in a year, the report said. Known for orchestrating fraudulent IT-worker schemes that channel stolen funds to Pyongyang, the group applied AI to draft résumés, manage job applications, and conceal identities during interviews. CrowdStrike said the group was “interweav[ing] GenAI-powered tools that automate and optimize workflows at every stage of the hiring and employment process.”

AI systems themselves have also become a prime target. “Threat actors are using organizations’ AI tools as initial access vectors to execute diverse post-exploitation operations,” CrowdStrike said, citing the April exploitation of a flaw in Langflow, an AI workflow development platform. Attackers reportedly used the bug to infiltrate networks, hijack accounts, and deploy malware.

The report warns that as companies integrate AI into daily operations, their attack surface is widening. “As organizations continue adopting AI tools,” CrowdStrike said, “the attack surface will continue expanding, and trusted AI tools will emerge as the next insider threat.”