Cybersecurity Researcher, Jeremiah Fowler, discovered and reported to vpnMentor a non-password-protected database that contained just under 100,000 records belonging to GenNomis by AI-NOMIS, an AI company based in South Korea that provides face-swapping and “Nudify” adult content services, as well as a marketplace where images can be bought or sold.

The publicly exposed database was not password-protected or encrypted. It contained 93,485 images and .json files with a total size of 47.8 GB. The name of the database and its internal files indicated they belonged to South Korean AI company GenNomis by AI-NOMIS. In a limited sample of the exposed records, I saw numerous pornographic images, including what appeared to be disturbing AI-generated portrayals of very young people.

The database also included .json files that logged command prompts and links to the images they generated. Although I did not see any PII or user data, this was my first look behind the scenes of an AI image generator. It was a wake-up call for how this technology could potentially be abused by users, and how developers must do more to protect themselves and others. This data breach opens a larger conversation on the entire industry of unrestricted image generation.
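For illustration only, a prompt log of this kind might be structured roughly as shown below. Every field name and value in this sketch is my own assumption for a hypothetical record and does not reproduce any data from the exposed files:

```python
import json

# Hypothetical example of a prompt-log record; the field names and values
# are invented for illustration and do not reproduce any exposed data.
sample_record = '''
{
  "prompt": "<user-supplied text prompt>",
  "style": "Realistic",
  "created_at": "2025-01-15T09:30:00Z",
  "image_url": "https://storage.example.com/outputs/abc123.png"
}
'''

record = json.loads(sample_record)
print(record["prompt"], "->", record["image_url"])
```

A record like this pairs the text a user typed with a link to the generated output, which is why exposing such logs reveals both user intent and the resulting imagery.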

I immediately sent a responsible disclosure notice to GenNomis and AI-NOMIS, and the database was restricted from public access and is no longer accessible. I did not receive any reply or acknowledgement of my notice. Although the records belonged to GenNomis by AI-NOMIS, it is not known if the database was owned and managed directly by them or by a third-party contractor. It is also not known how long the database was exposed before I discovered it or if anyone else may have gained access to it. Only an internal forensic audit could identify additional access or potentially suspicious activity.

GenNomis is an AI-powered image generation platform that enables users to transform text descriptions into unrestricted images, create AI personas, turn images into videos, face-swap images, remove backgrounds, and more. Based on the files I saw in a limited sample, nearly all of the images were explicit and depicted adult content. The GenNomis platform supports over 45 distinct art styles, including Realistic, Anime, Cartoon, Vintage, and Cyberpunk, allowing users to tailor their image creations to specific aesthetic preferences. GenNomis also offers a Marketplace, where users can buy and sell images labeled as artwork.

There are numerous AI image generators offering to create pornographic images from text prompts, and there is no shortage of explicit images online for the AI models to pull from. Any service that provides the ability to face-swap images or bodies using AI without an individual’s knowledge and consent poses serious privacy, ethical, and legal risks. These explicit and sexual images can be misused for extortion, reputation damage, and revenge purposes.

This type of image manipulation is commonly referred to as “nudify” or “deepfake pornography”. These images can be highly realistic, and it may be humiliating for individuals to be portrayed in such a way without their consent. Non-consensual deepfake content has become a significant concern in the digital age of AI-generated images. It is estimated that 96% of all deepfakes online are pornographic, and 99% of these involve women who did not consent to their likeness being used in such a manner.

It should be noted that the Face Swap folder disappeared before I sent the responsible disclosure notice and was no longer listed in the database. Several days later the websites of both GenNomis and AI-NOMIS went offline and the database was deleted.

I am not saying these individuals did not give their consent when using the GenNomis platform, nor am I saying these individuals are at risk of extortion or harassment. I am only providing a real-world risk scenario of the broader landscape of AI-generated explicit images and the potential risks they could pose.

In a perfect world, AI providers should have strict guardrails and protections in place to prevent misuse. Developers should implement a series of detection systems that flag and block attempts to generate explicit deepfake content — particularly when it involves images of underage children or non-consenting individuals. Services that allow users to generate images semi-anonymously without any type of identity verification or watermarking technology are providing an open invitation for misuse.
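As a purely illustrative sketch, a first-pass prompt filter could look something like the Python below. The keyword patterns, function names, and overall approach are my own assumptions, not any vendor's actual implementation, and real moderation pipelines would combine trained classifiers, output-image scanning, and human review rather than relying on simple keyword matching:

```python
import re

# Hypothetical, simplified prompt guardrail: reject prompts that contain
# terms suggesting minors or non-consent. Real moderation systems use
# trained classifiers and image-level checks, not keyword lists alone.
BLOCKED_PATTERNS = [
    r"\b(child|minor|underage|teen)\b",
    r"\bwithout (her|his|their) consent\b",
]

def should_block(prompt: str) -> bool:
    """Return True if the prompt matches any blocked pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    test_prompt = "face swap a photo without her consent"
    if should_block(test_prompt):
        print("Prompt rejected and logged for review.")
    else:
        print("Prompt passed the first-pass filter.")
```

Even a crude filter like this, layered in front of the model, forces abusive requests to be logged and reviewed instead of silently fulfilled.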

It feels like we are in the Wild West of regulating AI-generated images and content, and stronger detection mechanisms and strict verification requirements are essential. Identifying perpetrators and holding them accountable for the content they create should be made easier, allowing service providers to remove harmful content quickly. My advice to any AI service provider would be to first be aware of what users are doing, and then limit what they can do when it comes to illegal or questionable content. I also recommend providers have a system in place to delete potentially infringing content from their servers or storage network.
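To make that last recommendation more concrete, here is a minimal, hypothetical sketch of how a provider might compare stored files against a blocklist of hashes for content already confirmed as infringing and flag matches for deletion. Every name below is assumed for illustration, and production systems generally use perceptual hashing (so that re-encoded copies still match) rather than the exact SHA-256 comparison shown here:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests for content already confirmed
# as infringing; in practice these would come from a vetted hash-sharing
# program, and matched files would enter a takedown/reporting workflow.
KNOWN_INFRINGING_HASHES = {
    "0" * 64,  # placeholder digest, for illustration only
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_for_deletion(storage_dir: Path) -> list[Path]:
    """Return files whose hashes match the known-infringing blocklist."""
    return [
        file for file in storage_dir.rglob("*")
        if file.is_file() and sha256_of_file(file) in KNOWN_INFRINGING_HASHES
    ]
```

The point of the sketch is simply that removal can be automated once a provider maintains a list of known-bad content, rather than waiting for manual complaints.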

In this database, I saw numerous files depicting what appeared to be AI-generated explicit images of children and images of celebrities portrayed as children, including Ariana Grande, the Kardashians, Beyoncé, Michelle Obama, Kristen Stewart, and others. As an ethical researcher, I never download or screenshot illicit and potentially illegal images. This is only the second time in my decade-long career as a security researcher that I have seen these types of images publicly exposed in a database. In the previous case, I reported my findings to the FBI and the cloud hosting provider, and that database was finally restricted several months later.

The good news is that law enforcement agencies around the world are waking up to the threats AI-generated content poses regarding child abuse materials and criminal activities. In early March 2025, as I was writing this report, Australian Federal Police arrested two men as part of an international law-enforcement effort spearheaded by authorities in Denmark. The operation, dubbed Operation Cumberland, included Europol and law enforcement agencies from 18 additional nations and resulted in the apprehension of 23 other suspects. All individuals face charges related to the alleged creation and distribution of AI-generated child sexual abuse material (CSAM). In October 2024, a South Korean court handed down a ten-year prison sentence to the perpetrator of a deepfake sex crime. In March 2025, a teacher in the US was arrested for using artificial intelligence to create fake pornographic videos of his students.

According to the GenNomis usage guidelines, explicit images of children and any other illegal content are strictly prohibited on the platform, at least on paper. The guidelines also state that posting such content will result in immediate account termination and potential legal action. Despite the fact that I saw numerous images that would be classified as prohibited and potentially illegal content, it is not known if those images were available to other users or if the offending accounts were suspended. However, these images appeared to be generated using the GenNomis platform and were stored inside the database that was publicly exposed.

Sadly, there have been numerous cases where individuals and young people have taken their own lives over sextortion attempts. I would recommend that anyone who receives threats or identifies that their image or likeness has been used without their consent contact law enforcement and share all relevant details of the attempt. There are ways to have images removed online and hopefully identify individuals engaged in harassment and sextortion attempts.

In the United States, the bipartisan “Take It Down Act” aims to criminalize the distribution of non-consensual intimate images, including those generated by AI (as of early 2025, the bill has passed the Senate and is awaiting action in the House of Representatives). Being a victim of AI-generated content in this manner means suffering a major violation of personal privacy, which can feel humiliating. Fortunately, bringing those who commit this type of criminal behavior to justice is becoming more common with the advancement of law enforcement technologies.

If you or someone you know is considering harming themselves, please reach out to a suicide prevention hotline or agency in your region and seek help.

I imply no wrongdoing by GenNomis, AI-NOMIS, or any contractors, affiliates, or related entities. I do not claim that internal, customer, or user data was ever at imminent risk. The hypothetical data-risk scenarios I have presented in this report are strictly and exclusively for educational purposes and do not reflect, suggest, or imply any actual compromise of data integrity or illegal activities. This report should not be construed as an assessment of, or a commentary on any organization’s specific practices, systems, or security measures.

As an ethical security researcher, I do not download the data I discover. I only take a limited number of screenshots as necessary and solely for verification purposes. I do not conduct any activities beyond identifying the security vulnerability and notifying the relevant parties. I disclaim any and all liability for any and all actions that may be taken as a result of this disclosure. I publish my findings to raise awareness of issues of data security and privacy. My aim is to encourage organizations to proactively safeguard sensitive information against unauthorized access.