To address the influx of explicit “deepfake” images, Google will make it easier for people to remove AI-generated nudes of themselves from Search, the company said in a blog post on Wednesday.
Deepfake images and videos use AI to superimpose one person’s face onto another person’s body. Sometimes the results are humorous, like superimposing Ron Swanson’s face and voice from Parks and Recreation onto Wednesday Addams from The Addams Family. Other times they’re harmful, as when the technology was used to falsely portray Ukrainian President Volodymyr Zelenskyy telling his troops to surrender to Russian forces.
When the technology hit the internet a few years ago, people quickly began using it to create sexually explicit imagery of celebrities. As deepfake tools have become easier to access, explicit fake imagery of everyday people, not just public figures, has proliferated. Google’s changes are intended to help victims remove that imagery.
Now, when someone successfully requests the removal of a nonconsensual explicit deepfake, Google says its systems will also aim to filter explicit results from other searches about that person. And when the system removes one explicit image, Google will aim to scrub any duplicates of that image from Search results as well. It should be noted that removing an explicit AI-generated deepfake from Search doesn’t mean it has been removed from the internet entirely.
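Google hasn’t said how its systems identify duplicates of a flagged image. One common technique for matching near-identical images at scale is perceptual hashing; the Python sketch below is purely illustrative, and the hash size and distance threshold are assumptions, not anything Google has disclosed.

```python
# Illustrative sketch only: Google has not disclosed its duplicate-detection
# method. Perceptual (average) hashing is one standard way to match
# near-identical images. Requires Pillow (pip install Pillow).
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size, grayscale, threshold each pixel on the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)  # 1 bit per pixel: above/below mean
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def is_near_duplicate(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Treat images whose hashes differ by <= max_distance bits as duplicates."""
    return hamming_distance(average_hash(path_a), average_hash(path_b)) <= max_distance
```

The appeal of this kind of approach is that a small Hamming distance tolerates re-encoding, resizing, and minor edits while still distinguishing genuinely different images.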
Google declined a request for further comment.
As AI technology has become more accessible, reaching mainstream attention with the launch of ChatGPT in late 2022, problems have followed: a rise in misinformation and cybercrime, plus the significant power AI systems require to run, which carries environmental costs. On the visual side, fake generated nudes have surged. Faked images once required time and Photoshop skills to create; now explicit imagery of both fictional characters and real people can be generated in seconds. There have already been cases of people generating child sexual abuse material with AI and using the technology to create deepfake nudes of their classmates.
Even as Wall Street continues to reward AI companies with massive investments (though some of that enthusiasm is now tempering), governments around the world are struggling to regulate the transformative technology as it evolves rapidly.
Google also says it’s improving its ranking systems to protect against harmful content. Explicit fake content will be demoted in Search, and when you search for a person’s name, Google will boost high-quality nonexplicit content, like news articles, when it’s available. Google says this update will reduce exposure to explicit results by 70%. The company acknowledged the challenge of distinguishing legitimate nudity, such as an actor’s scene in a movie, from illegitimate deepfakes of that same actor. Google says it will do this in part by checking whether a site has received a high volume of removal requests for fake explicit imagery; if so, that site will be deemed low quality.
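Google hasn’t published how the removal-request signal feeds into ranking. As a rough sketch of the idea, a site-level penalty might look something like the following; the data fields, thresholds, and penalty multiplier are all hypothetical.

```python
# Hypothetical sketch: the blog post says sites with many removal requests
# for fake explicit imagery will be treated as low quality. Every field
# name and threshold here is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class SiteSignals:
    host: str
    approved_removals: int  # approved takedowns for fake explicit imagery
    indexed_pages: int

def demotion_multiplier(site: SiteSignals,
                        removal_threshold: int = 100,
                        ratio_threshold: float = 0.01) -> float:
    """Return a multiplier applied to the site's ranking score (< 1.0 demotes)."""
    if site.indexed_pages == 0:
        return 1.0
    ratio = site.approved_removals / site.indexed_pages
    if site.approved_removals >= removal_threshold or ratio >= ratio_threshold:
        return 0.1  # heavy demotion for sites flagged as low quality
    return 1.0

# Example: a site with 250 approved removals across 5,000 pages gets demoted.
print(demotion_multiplier(SiteSignals("example.com", 250, 5000)))  # 0.1
```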
If you’d like to remove an AI-generated explicit deepfake from Search, follow this link.
Note that Google says a request must meet three requirements to be approved for removal:
- The person must be identifiable in the imagery.
- The imagery in question must be fake and must falsely depict the person nude or in a sexually explicit situation.
- The imagery must be distributed without the person’s consent.
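For illustration only, the three published criteria amount to an all-of check, as in the sketch below; the field names are hypothetical, and Google’s actual review process is not publicly specified.

```python
# Illustrative only: the three published removal criteria as a simple check.
# Field names are hypothetical; Google's actual review is not public.
from dataclasses import dataclass

@dataclass
class RemovalRequest:
    person_identifiable: bool          # the person is identifiable in the imagery
    imagery_is_fake: bool              # falsely depicts them nude or sexually explicit
    distributed_without_consent: bool  # shared without the person's consent

def meets_removal_criteria(req: RemovalRequest) -> bool:
    """All three requirements must hold for a request to qualify."""
    return (req.person_identifiable
            and req.imagery_is_fake
            and req.distributed_without_consent)
```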
Google says it will review all requests, potentially asking for more information if needed, and will respond to each one, either notifying the person of the action taken or explaining why the request doesn’t meet the company’s requirements.