Cybersecurity Researcher Uncovers Exposed AI Database Containing Disturbing Content
Security researcher Jeremiah Fowler recently discovered an unsecured database belonging to GenNomis by AI-NOMIS, a South Korean AI company. The database, which was neither password-protected nor encrypted, contained more than 93,000 images and JSON files totaling 47.8 GB.
“In a limited sample of the exposed records, I saw numerous pornographic images, including what appeared to be disturbing AI-generated portrayals of very young people,” Fowler said.
The database also held logs detailing command prompts and links to the images, but Fowler found no personally identifiable information (PII).
“This was my first look behind the scenes of an AI image generator,” Fowler said. “It was a wake-up call for how this technology could potentially be abused by users, and how developers must do more to protect themselves and others.”
Fowler reported the exposure to GenNomis but received no acknowledgment; the database was nonetheless secured shortly afterward.
“I immediately sent a responsible disclosure notice to GenNomis and AI-NOMIS, and the database was restricted from public access and no longer accessible,” he said.
However, it remains unclear whether the exposed content was part of GenNomis’ official platform or uploaded by users.
GenNomis, which provides AI tools for face swapping and creating adult content, has guidelines banning illegal material. Yet Fowler pointed out, “Despite the fact that I saw numerous images that would be classified as prohibited and potentially illegal content, it is not known if those images were available to users or if the accounts were suspended.”
This incident is part of a larger conversation about AI image generation and its potential for abuse.
“Any service that provides the ability to face-swap images or bodies using AI without an individual’s knowledge and consent poses serious privacy, ethical, and legal risks,” Fowler cautioned.
Law enforcement agencies worldwide are taking action. Fowler highlighted, “In early March 2025, Australian Federal Police arrested 2 men as part of an international law-enforcement effort… resulting in the apprehension of 23 other suspects.”
Fowler also stressed the need for stricter safeguards in AI image platforms to prevent misuse.
“My advice to any AI service provider would be to first be aware of what users are doing, and then limit what they can do when it comes to illegal or questionable content,” Fowler recommended.