Bumble is making it simpler for its members to report AI-generated profiles. The dating and social connection platform now has “Using AI-generated photos or videos” as an option under the Fake Profile reporting menu.
“An essential part of creating a space to build meaningful connections is removing any element that is misleading or dangerous,” Risa Stein, Bumble’s Vice President of Product, said in an official statement. “We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.”
According to a Bumble user survey, 71 percent of the service’s Gen Z and Millennial respondents want to see limits on use of AI-generated content on dating apps. Another 71 percent considered AI-generated photos of people in places they’ve never been or doing activities they’ve never done a form of catfishing.
Fake profiles can also swindle people out of a lot of money. In 2022, the Federal Trade Commission received reports of romance scams from almost 70,000 people, and their losses to those frauds totaled $1.3 billion. Many dating apps take extensive safety measures to protect their users from scams, as well as from physical dangers, and the use of AI in creating fake profiles is the latest threat for them to combat. Earlier this year, Bumble released a tool that leverages AI for positive ends to identify phony profiles. It also introduced an AI-powered tool to protect users. Tinder launched profile verification in the US and UK this year.