A cybersecurity researcher discovered an unprotected database containing 93,485 explicit AI-generated images from South Korean company GenNomis, including apparent child sexual abuse material and non-consensual deepfake pornography created using its face-swapping AI platform.
Cybersecurity researcher Jeremiah Fowler discovered an unprotected Amazon S3 database belonging to South Korean AI company GenNomis by AI-NOMIS on March 10, 2025. The database contained 93,485 images and JSON files totaling 47.8 GB, with no password protection or encryption. GenNomis operated an AI-powered image generation platform that let users create unrestricted images, face-swap photos, and generate explicit content through its 'Nudify' service.

The exposed database included numerous pornographic images, AI-generated explicit images of what appeared to be children, and images of celebrities portrayed as children, including Ariana Grande, the Kardashians, Beyoncé, and Michelle Obama. The database also contained JSON files logging user prompts and links to generated images, revealing disturbing requests such as 'Asian girl abused by uncle.' Face-swap folders contained ordinary photos of women, presumably intended for non-consensual deepfake creation.

Fowler sent a responsible disclosure notice on March 12, and the database was secured immediately, though the company never acknowledged his report. Both the GenNomis and AI-NOMIS websites subsequently went offline, and the database was deleted. Although GenNomis's user guidelines prohibited explicit images of children and other illegal content, warning of account termination, the prohibited material appeared to have been generated using the GenNomis platform itself.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
Human
Due to a decision or action made by humans
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.