Google starts broadly removing explicit deepfakes from search results


Today, Google announced new measures to combat the rapidly increasing spread of AI-generated non-consensual explicit deepfakes in its search results.

Because of “a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent,” Google said that it consulted with “experts and victim-survivors” to make some “significant updates” to its widely used search engine to “further protect people.”

Specifically, Google made it easier for targets of fake explicit images—which experts have said are overwhelmingly women—to report and remove deepfakes that surface in search results. Additionally, Google took steps to downrank explicit deepfakes “to keep this type of content from appearing high up in Search results,” the world’s leading search engine said.

Victims of deepfake pornography have previously criticized Google for not being more proactive in its fight against deepfakes in search results. Surfacing images and reporting each one is a "time- and energy-draining process" and "constant battle," Kaitlyn Siragusa, a Twitch streamer with an explicit OnlyFans account who is frequently targeted by deepfakes, told Bloomberg last year.

In response, Google has worked to “make the process easier,” partly by “helping people address this issue at scale.” Now, when a victim submits a removal request, “Google’s systems will also aim to filter all explicit results on similar searches about them,” Google’s blog said. And once a deepfake is “successfully removed,” Google “will scan for—and remove—any duplicates of that image that we find,” the blog said.
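Google hasn't said how that duplicate scan works under the hood. A common public technique for finding near-duplicates of a known image at scale is perceptual hashing, where visually similar images map to similar bit strings. The Python sketch below illustrates that general idea with a difference hash (dHash); the function names, the 8x8 hash size, and the Hamming-distance threshold are illustrative assumptions, not anything Google has disclosed.

```python
# A minimal perceptual-hashing sketch (difference hash). This is a common
# public technique for near-duplicate image detection, NOT Google's
# disclosed method; all names and thresholds here are illustrative.
from PIL import Image


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Fingerprint an image: one bit per horizontal brightness gradient."""
    # Shrink to (hash_size + 1) x hash_size grayscale pixels so each row
    # yields hash_size left/right comparisons.
    small = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left < right else 0)
    return bits


def is_near_duplicate(hash_a: int, hash_b: int, threshold: int = 5) -> bool:
    """Treat two images as duplicates if their hashes differ in few bits."""
    return bin(hash_a ^ hash_b).count("1") <= threshold
```

Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized, recompressed, or lightly edited, which is what makes matching reposted copies possible at all.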

Google’s efforts to downrank harmful fake content have also expanded, the tech giant said. To help individuals targeted by deepfakes, Google will now “lower explicit fake content for” searches that include people’s names. According to Google, this step alone has “reduced exposure to explicit image results on these types of queries by over 70 percent.”

However, Google still seems resistant to downranking general searches that might lead people to harmful content. A quick Google search confirms that general searches with keywords like “celebrity nude deepfake” point searchers to popular destinations where they can search for non-consensual intimate images of celebrities or request images of less famous people.

For victims, the bottom line is that problematic links will still appear in Google’s search results for anyone willing to keep scrolling or anyone intentionally searching for “deepfakes.” The only step Google has taken recently to downrank top deepfake sites like Fan-Topia or MrDeepFakes is a promise to demote “sites that have received a high volume of removals for fake explicit imagery.”

It's currently unclear what Google considers a "high volume," and Google declined Ars' request to comment on whether these sites would eventually be downranked. Instead, a Google spokesperson told Ars that "if we receive a high volume of successful removals from a specific website under this policy, we will use that as a ranking signal and demote the site in question for queries where the site might surface."
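Google hasn't published what that ranking signal looks like in practice. Purely as a hypothetical sketch of the concept the spokesperson describes, the Python snippet below demotes a site's ranking score once its count of successful removals crosses a threshold; the threshold and demotion factor are invented for illustration.

```python
# Hypothetical sketch of a removal-volume demotion signal. Google has not
# published its formula; the threshold and factor below are invented.
from collections import Counter

REMOVAL_THRESHOLD = 100  # assumed "high volume" cutoff; the real value is unknown
DEMOTION_FACTOR = 0.2    # assumed multiplier applied to demoted sites

successful_removals: Counter = Counter()  # site -> count of upheld takedowns


def record_removal(site: str) -> None:
    """Log one successful removal under the fake-explicit-imagery policy."""
    successful_removals[site] += 1


def adjusted_score(site: str, base_score: float) -> float:
    """Demote a site's query-ranking score once removals pass the threshold."""
    if successful_removals[site] >= REMOVAL_THRESHOLD:
        return base_score * DEMOTION_FACTOR
    return base_score
```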

For now, the spokesperson said, Google is focused on downranking "queries that include the names of individuals," which "have the highest potential for individual harm." More queries will be downranked in the coming months, the spokesperson said. Meanwhile, Google's blog noted that the company continues to tackle the "technical challenge for search engines" of differentiating between "explicit content that's real and consensual (like an actor's nude scenes)" and "explicit fake content (like deepfakes featuring said actor)."

“This is an ongoing effort, and we have additional improvements coming over the next few months to address a broader range of queries,” Google’s spokesperson told Ars.

Deepfake trauma “never ends”

In its blog, Google said that “these efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”

But many deepfake victims have said that dedicating hours or even months to removing harmful content offers no assurance that the images won't resurface. Most recently, one victim, Sabrina Javellana, told The New York Times that even after her home state of Florida passed a law against deepfakes, the fake images kept spreading online.

She’s given up on trying to get the images removed anywhere, telling The Times, “It just never ends. I just have to accept it.”

According to US Representative Joseph Morelle (D-NY), it will take a federal law against deepfakes to deter more bad actors from harassing and terrorizing women with deepfake porn. He has introduced one such bill, the Preventing Deepfakes of Intimate Images Act, which would criminalize creating deepfakes. It currently has 59 sponsors in the House and bipartisan support in the Senate, Morelle said this week on a panel about the harms of deepfakes that Ars attended.

Morelle said he’d spoken to victims of deepfakes, including teenagers, and decided that “a national ban and a national set of both criminal and civil remedies makes the most sense” to combat the problem with “urgency.”

“A patchwork of different state and local jurisdictions with different rules” would be “really hard to follow” for both victims and perpetrators trying to understand what’s legal, Morelle said, whereas federal laws that impose a liability and criminal penalty would likely have “the greatest impact.”

Victims suffer mental, physical, emotional, and financial harms every day, Morelle said. And as co-panelist Andrea Powell pointed out, there is currently no justice for survivors, and therefore no healing, during a period of "prolific and catastrophic increase in this abuse."


