Inside OpenAI’s Marketplace for Custom Chatbots


Last November, when OpenAI announced its plans for a marketplace where anyone could make and find bespoke versions of ChatGPT technology, the company said, “The best GPTs will be invented by the community.” Nine months after the store officially launched, a Gizmodo analysis of the free marketplace shows that many developers are using the platform to offer GPTs (generative pre-trained transformer models) that appear to violate OpenAI’s policies, including chatbot-style tools that generate explicit AI porn, help students cheat without being detected, and dispense authoritative-sounding medical and legal advice.

The offending GPTs are easy to find. On Sept. 2, the front page of OpenAI’s marketplace promoted at least three custom GPTs that appeared to violate the store’s policies: a “Therapist – Psychologist” chatbot, a “fitness, workout, and diet PhD coach,” and BypassGPT, a tool designed to help students evade AI writing detection systems, which has been used more than 50,000 times.

Searching the store for “NSFW” returned results like NSFW AI Art Generator, a GPT customized by Offrobe AI that’s been used more than 10,000 times, according to store data. The chat interface for the GPT links to Offrobe AI’s website, which prominently states its purpose: “Generate AI porn to satisfy your dark cravings.”

Offrobe AI hosted a GPT on OpenAI’s store called “NSFW AI Image Generator.”

“The interesting thing about OpenAI is they have this apocalyptic vision of AI and how they’re saving us all from it,” said Milton Mueller, director of the Internet Governance Project at the Georgia Institute of Technology. “But I think it makes it particularly amusing that they can’t even enforce something as simple as no AI porn at the same time they say their policies are going to save the world.”

The AI porn generators, deepfake creators, and chatbots that provided sports betting recommendations were removed from the store after Gizmodo shared a list with OpenAI of more than 100 GPTs that appear to violate the company’s policies. But as of publication, many of the GPTs we found, including popular cheating tools and chatbots offering medical advice, remained available and were promoted on the store’s home page.

In many cases, the bots have been used tens of thousands of times. Another cheating GPT, called Bypass Turnitin Detection, which promises to help students evade the anti-plagiarism software Turnitin, has been used more than 25,000 times, according to store data. So has DoctorGPT, a bot that “provides evidence-based medical information and advice.”

On the GPT store homepage, OpenAI featured GPTs that advertised their ability to provide medical advice and help students cheat.

When it announced that it was allowing users to create custom GPTs, the company said systems were in place to monitor the tools for violations of its policies. Those policies include prohibitions on using its technology to create sexually explicit or suggestive content, provide tailored medical and legal advice, promote cheating, facilitate gambling, impersonate other people, interfere with voting, and a variety of other uses.

In response to Gizmodo’s questions about the GPTs we found available in its store, OpenAI spokesperson Taya Christianson said: “We’ve taken action against those that violate our policies. We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies. We also offer in-product reporting tools for people to report GPTs that break our rules.”

Other outlets have previously alerted OpenAI to content moderation issues on its store. And the titles of some of the GPTs on offer suggest developers also know their creations push up against OpenAI’s rules. Several of the tools Gizmodo found included disclaimers but then explicitly advertised their ability to provide “expert” advice, like a GPT titled Texas Medical Insurance Claims (not legal advice), which says that it’s “your go-to expert for navigating the complexities of Texas medical insurance, offering clear, practical advice with a personal touch.”

But many of the legal and medical GPTs we found don’t include such disclaimers, and quite a few misleadingly advertised themselves as lawyers or doctors. For example, one GPT called AI Immigration Lawyer describes itself as “a highly knowledgeable AI immigration lawyer with up-to-date legal insights.”

Research from Stanford University’s RegLab and Institute for Human-Centered AI shows that OpenAI’s GPT-4 and GPT-3.5 models hallucinate—make up incorrect information—more than half the time they are asked a legal question.

Developers of customized GPTs don’t currently profit directly from the marketplace, but OpenAI has said it plans to introduce a revenue-sharing model that will compensate developers based on how frequently their GPT is used.

If OpenAI continues to provide an ecosystem where developers can build upon its technology and market their creations on its platform, it will have to engage in difficult content moderation decisions that can’t be solved by a few lines of code to block certain keywords, according to Mueller.

“Give me any technology you like, I can find ways to do things you don’t want me to do,” he said. “It’s a very difficult problem, and it has to be done through automated means to deal with the scale of the internet, but it will always be a work in progress and have to have human-led appeals processes.”
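Mueller’s point is easy to illustrate. A naive keyword blocklist, the “few lines of code” approach, can be sketched in a handful of lines of Python. This is a hypothetical example, not anything OpenAI actually runs, and it fails the moment a developer rephrases a title:

```python
# Minimal sketch of a naive keyword filter (illustrative only; the
# blocklist and function are hypothetical, not OpenAI's moderation code).

BLOCKLIST = {"nsfw", "porn", "deepfake"}  # assumed banned terms

def naive_filter(title: str) -> bool:
    """Return True if the GPT title should be blocked."""
    words = title.lower().split()
    return any(word in BLOCKLIST for word in words)

# The obvious case is caught...
print(naive_filter("NSFW AI Art Generator"))  # True

# ...but trivial rephrasings slip through:
print(naive_filter("N.S.F.W art helper"))     # False: punctuation defeats the match
print(naive_filter("p0rn image maker"))       # False: leetspeak defeats the match
print(naive_filter("Unfiltered adult art"))   # False: synonyms aren't in the list
```

Catching the evasions requires normalization, synonym handling, and ultimately human judgment about intent, which is why keyword matching alone cannot police a marketplace at this scale.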


