Google’s ‘Woke’ Image Generator Shows the Limitations of AI

Google has admitted that its Gemini AI model “missed the mark” after a flurry of criticism about what many perceived as “anti-white bias.” Numerous users reported that the system was producing images of people of diverse ethnicities and genders even when it was historically inaccurate to do so. The company said Thursday it would “pause” the ability to generate images of people until it could roll out a fix.

When prompted to create an image of Vikings, Gemini showed exclusively Black people in traditional Viking garb. A “founding fathers” request returned Indigenous people in colonial outfits; another result depicted George Washington as Black. When asked to produce an image of a pope, the system showed only people of ethnicities other than white. In some cases, Gemini said it could not produce any image at all of historical figures like Abraham Lincoln, Julius Caesar, and Galileo.

Many right-wing commentators have jumped on the issue to suggest this is further evidence of an anti-white bias among Big Tech, with entrepreneur Mike Solana writing that “Google’s AI is an anti-white lunatic.”

But the situation mostly highlights that generative AI systems are just not very smart.

“I think it is just lousy software,” Gary Marcus, an emeritus professor of psychology and neural science at New York University and an AI entrepreneur, wrote on Wednesday on Substack.

Google launched its Gemini AI model two months ago as a rival to OpenAI's dominant GPT model, which powers ChatGPT. Last week Google rolled out a major update with the limited release of Gemini Pro 1.5, which can handle vast amounts of audio, text, and video input.

Gemini also generated other historically inaccurate images, such as a depiction of the Apollo 11 crew that featured a woman and a Black man.

On Wednesday, Google admitted its system was not working properly.

“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, a senior director of product management at Google’s Gemini Experiences, told WIRED in an emailed statement. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Krawczyk explained the situation further in a post on X: “We design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts (images of a person walking a dog are universal!) Historical contexts have more nuance to them and we will further tune to accommodate that.”

He also responded to some critics directly, providing screenshots of his own interactions with Gemini that suggested the errors were not universal.

But Gemini's missteps were quickly seized on by anti-woke crusaders online, who variously claimed that Google was "racist" or "infected with the woke mind virus."




