OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.
OpenAI's usage policies currently prohibit sexually explicit or even suggestive materials, but a "commentary" note on part of the Model Spec related to that rule says the company is considering how to permit such content.
"We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT," the note says, using a colloquial term for content considered not safe for work contexts. "We look forward to better understanding user and societal expectations of model behavior in this area."
The Model Spec document says NSFW content "may include erotica, extreme gore, slurs, and unsolicited profanity." It is unclear whether OpenAI's exploration of how to responsibly produce NSFW content envisages loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.
In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to "bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders." She declined to share details of what OpenAI's exploration of explicit content generation involves, or what feedback the company has received on the idea.
Earlier this year, OpenAI's chief technology officer Mira Murati told the Wall Street Journal that she was "not sure" whether the company would in the future allow depictions of nudity to be made with its video generation tool Sora.
AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn, explicit images or videos made with AI tools that depict real people without their consent, has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.
"Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging," says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. "We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe."
Citron calls OpenAI's potential embrace of explicit AI content "alarming."
As OpenAI's usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.
Additional reporting by Reece Rogers