AI Code Hallucinations Cause “Slopsquatting” Supply Chain Risk
Security researchers have found that AI coding assistants' tendency to hallucinate package names has created a new software supply chain risk, dubbed "slopsquatting."
The term, introduced by security expert Seth Larson, refers to a variation of typosquatting. While typosquatting depends on slight misspellings of legitimate packages, slopsquatting exploits entirely fabricated libraries that AI models often generate in their outputs.
A study conducted in March 2025 examined 576,000 code samples generated by models including ChatGPT-4, CodeLlama, DeepSeek, WizardCoder, and Mistral. The researchers discovered that around 20% of the packages these systems proposed were nonexistent. Even commercial models like ChatGPT-4 hallucinated packages in approximately 5% of the generated code.
The researchers documented over 200,000 distinct hallucinated package names. Notably, 58% of these recurred across multiple runs, indicating they were not isolated mistakes but consistent, predictable outputs. Threat actors can exploit this predictability by uploading harmful packages under these AI-hallucinated names to repositories like PyPI or npm.
The research also revealed that 38% of the fabricated packages bore resemblance to real ones, 13% were due to typographical errors, and 51% were completely invented.
Experts advise developers to manually verify every package suggested by AI tools. Using dependency scanners, lockfiles, and hash-based verification can also help prevent the inadvertent installation of harmful packages, and lowering the AI model's "temperature" setting may reduce the rate of hallucinations. A simple first line of defense is sketched below.
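As a minimal illustration of that advice, the sketch below checks whether an AI-suggested package name actually exists on PyPI before it is added to a project, using PyPI's public JSON API (`https://pypi.org/pypi/<name>/json`). The script and its function names are hypothetical examples, not tooling from the study.

```python
# check_deps.py - minimal sketch: verify that AI-suggested package names
# exist on PyPI before installing them. Illustrative example only.
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows the package, False on a 404 response."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (rate limits, outages) need a human look


if __name__ == "__main__":
    # Usage: python check_deps.py requests flask some-suggested-pkg
    for pkg in sys.argv[1:]:
        status = "found" if package_exists_on_pypi(pkg) else "NOT FOUND - verify manually"
        print(f"{pkg}: {status}")
```

Note that mere existence is not proof of safety: an attacker may already have registered a hallucinated name, which is exactly the slopsquatting scenario. Such a check should therefore be paired with lockfiles and hash pinning (for example, `pip install --require-hashes -r requirements.txt`) and a review of the package's maintainer and history.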
In general, developers ought to regard all AI-generated code as untrusted until it has been reviewed and tested in a secure, isolated setting. As the industry increasingly adopts AI for enhanced productivity, it’s essential to recognize its risks — particularly in software supply chains.