As well as CSAM, Fowler says, there were AI-generated pornographic images of adults in the database, plus potential "face-swap" images. Among the files, he observed what appeared to be photographs of real people, which were likely used to create "explicit nude or sexual AI-generated images," he says. "So they were taking real pictures of people and swapping their faces on there," he claims of some generated images.
When it was live, the GenNomis website allowed explicit AI adult imagery. Many of the images featured on its homepage, and an AI "models" section included sexualized images of women; some were "photorealistic" while others were fully AI-generated or in animated styles. It also included an "NSFW" gallery and a "marketplace" where users could share imagery and potentially sell albums of AI-generated photos. The website's tagline said people could "generate unrestricted" images and videos; a previous version of the site from 2024 said "uncensored images" could be created.
GenNomis' user policies stated that only "respectful content" is allowed, saying "explicit violence" and hate speech are prohibited. "Child pornography and any other illegal activities are strictly prohibited on GenNomis," its community guidelines read, adding that accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and others have largely phased out the phrase "child pornography" in favor of CSAM over the last decade.)
It is unclear to what extent GenNomis used any moderation tools or systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its "community" page last year that they could not generate images of people having sex and that their prompts were blocked for non-sexual "dark humor." Another account posted on the community page that the "NSFW" content should be addressed, as it "might be looked upon by the feds."
"If I was able to see those images with nothing more than the URL, that shows me that they're not taking all the necessary steps to block that content," Fowler alleges of the database.
Henry Ajder, a deepfake expert and founder of consultancy Latent Space Advisory, says even if the creation of harmful and illegal content was not permitted by the company, the website's branding, which referenced "unrestricted" image creation and an "NSFW" section, indicated there may be a "clear association with intimate content without safety measures."
Ajder says he is surprised the English-language website was linked to a South Korean entity. Last year the country was plagued by a nonconsensual deepfake "emergency" that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure needs to be put on all parts of the ecosystem that allows nonconsensual imagery to be generated using AI. "The more of this that we see, the more it forces the question onto legislators, onto tech platforms, onto web hosting companies, onto payment providers. All of the people who in some form or another, knowingly or otherwise, mostly unknowingly, are facilitating and enabling this to happen," he says.
Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, was included in the exposed data, the researcher says. Screenshots of prompts show the use of words such as "tiny" and "girl," and references to sexual acts between family members. The prompts also described sexual acts between celebrities.
"It seems to me that the technology has raced ahead of any of the guidelines or controls," Fowler says. "From a legal standpoint, we all know that child explicit images are illegal, but that didn't stop the technology from being able to generate those images."
As generative AI systems have made it vastly easier to create and modify images in the past two years, there has been an explosion of AI-generated CSAM. "Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also leapt in sophistication," says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.
The IWF has documented how criminals are increasingly creating AI-generated CSAM and developing the methods they use to create it. "It's currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed," Ray-Hill says.