Cybersecurity researchers have revealed that large language models (LLMs) can generate thousands of new variants of existing malware, helping it evade detection. By obfuscating malicious JavaScript code, this AI-driven technique undermines traditional malware detection systems.

Researchers at Palo Alto Networks Unit 42 discovered that while LLMs struggle to create malware from scratch, they can easily rewrite existing malicious scripts, making them harder to identify. This transformation uses natural-looking code modifications like renaming variables, splitting strings, and inserting junk code.
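To make that concrete, here is a minimal, harmless sketch (not drawn from the Unit 42 samples) of what such a rewrite can look like. Both functions below behave identically, but the second has renamed identifiers, a split string literal, and inserted junk code, so it no longer matches patterns learned from the first.

```typescript
// Illustrative only: a benign helper before and after the kinds of
// rewrites described above. Behavior is unchanged; the surface form is not.

// Original
function fetchPayload(path: string): Promise<string> {
  return fetch("https://example.com" + path).then((r) => r.text());
}

// Rewritten variant: renamed identifiers, split string, junk code
function loadResourceData(resourcePath: string): Promise<string> {
  const retryBudget = 3; // junk code: declared but never used
  void retryBudget;
  const host = "https://" + "exam" + "ple.com"; // split string literal
  return fetch(host + resourcePath).then((response) => response.text());
}
```

A detector keyed on the original literal or identifier names would miss the variant, even though nothing about its behavior has changed.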

Mainstream LLM providers have increasingly enforced security measures to prevent their AI models from being used for cybercrime. In October 2024, for example, OpenAI reported blocking over 20 operations attempting to misuse its platform for malicious purposes. However, threat actors have simply turned to malicious AI tools like WormGPT to automate phishing attacks and create malware.

Unit 42’s study demonstrated the potential of LLMs to bypass machine learning-based malware classifiers. By rewriting malicious JavaScript samples, the researchers fooled classifiers such as Innocent Until Proven Guilty (IUPG) and PhishingJS into misclassifying the rewritten scripts.
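The study's exact pipeline isn't reproduced here, but the evasion idea can be sketched as a rewrite-and-rescore loop. In the sketch below, rewriteWithLLM and MalwareClassifier are hypothetical placeholders, not APIs from the Unit 42 research or from IUPG/PhishingJS.

```typescript
// Hypothetical sketch of an adversarial rewrite loop: keep asking an LLM
// for natural-looking rewrites until the target classifier's malicious
// score drops below a threshold, or the attempt budget runs out.

interface MalwareClassifier {
  // Probability in [0, 1] that the script is malicious.
  score(script: string): Promise<number>;
}

// Placeholder for an LLM call that returns a functionally equivalent rewrite.
declare function rewriteWithLLM(script: string): Promise<string>;

async function evadeClassifier(
  sample: string,
  classifier: MalwareClassifier,
  maxAttempts = 10,
  benignThreshold = 0.5,
): Promise<string | null> {
  let current = sample;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    current = await rewriteWithLLM(current);
    const malicious = await classifier.score(current);
    if (malicious < benignThreshold) {
      return current; // the classifier now treats this variant as benign
    }
  }
  return null; // evasion failed within the attempt budget
}
```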

Beyond malware, AI tools face other security challenges. Researchers at North Carolina State University uncovered a side-channel attack named TPUXtract targeting Google Edge Tensor Processing Units (TPUs). By capturing electromagnetic signals emitted by the chip, the attack reveals model hyperparameters and allows an attacker to reconstruct the AI model running on it. Although it requires physical access and costly equipment, the attack underscores the risks to proprietary AI technologies.

Additionally, AI systems like the Exploit Prediction Scoring System (EPSS) are vulnerable to adversarial manipulation. Cybersecurity firm Morphisec demonstrated how fabricated external signals, such as random social media mentions of an exploit or otherwise empty GitHub repositories purporting to host exploit code, could skew EPSS metrics and mislead vulnerability prioritization efforts.
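EPSS's real feature set and weights aren't reproduced here; the toy logistic model below, with invented weights, only illustrates why cheaply fabricated signals can move a score-driven prioritization system.

```typescript
// Toy illustration (not the actual EPSS model): a logistic score over two
// externally observable, easy-to-fake "evidence" signals.

function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// Invented weights, for illustration only.
const BIAS = -4.0;
const W_SOCIAL_MENTION = 1.5; // exploit mentioned on social media
const W_GITHUB_REPO = 2.5;    // public repo claims to host exploit code

function toyExploitScore(socialMention: boolean, githubRepo: boolean): number {
  const z =
    BIAS +
    (socialMention ? W_SOCIAL_MENTION : 0) +
    (githubRepo ? W_GITHUB_REPO : 0);
  return sigmoid(z);
}

console.log(toyExploitScore(false, false).toFixed(3)); // ~0.018: deprioritized
console.log(toyExploitScore(true, true).toFixed(3));   // 0.500: suddenly looks urgent
```

Because both signals can be planted with nothing more than a social media account and a GitHub login, an attacker can raise a vulnerability's apparent exploitation likelihood without ever publishing a working exploit.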

The rise of generative AI amplifies the scale and sophistication of cyber threats. However, the same technologies can enhance defenses by improving detection of malicious activity — such is the case with NordVPN’s AI phishing prevention tool called Sonar. This arms race highlights the need for continued AI development and robust security measures.

As AI develops, cyberattack techniques are evolving alongside it. For instance, the North Korean hacking group Sapphire Sleet recently stole over $10 million in cryptocurrency through an AI-enhanced social engineering campaign on LinkedIn.