Google has admitted that its Gemini AI model "missed the mark" after a flurry of criticism about what many perceived as "anti-white bias." Numerous users reported that the system was producing images of people of diverse ethnicities and genders even when it was historically inaccurate to do so. The company said Thursday it would "pause" the ability to generate images of people until it could roll out a fix.
When prompted to create an image of Vikings, Gemini showed exclusively Black people in traditional Viking garb. A "founding fathers" request returned Indigenous people in colonial outfits; another result depicted George Washington as Black. When asked to produce an image of a pope, the system showed only people of ethnicities other than white. In some cases, Gemini said it could not produce any image at all of historical figures like Abraham Lincoln, Julius Caesar, and Galileo.
Many right-wing commentators have jumped on the issue to suggest this is further evidence of an anti-white bias among Big Tech, with entrepreneur Mike Solana writing that "Google's AI is an anti-white lunatic."
But the situation mostly highlights that generative AI systems are just not very smart.
"I think it is just lousy software," Gary Marcus, an emeritus professor of psychology and neural science at New York University and an AI entrepreneur, wrote on Wednesday on Substack.
Google launched its Gemini AI model two months ago as a rival to the dominant GPT model from OpenAI, which powers ChatGPT. Last week Google rolled out a major update with the limited release of Gemini Pro 1.5, which can process vast amounts of audio, text, and video input.
Gemini also created images that were historically wrong, such as one depicting the Apollo 11 crew that featured a woman and a Black man.
On Wednesday, Google admitted its system was not working properly.
"We're working to improve these kinds of depictions immediately," Jack Krawczyk, a senior director of product management at Google's Gemini Experiences, told WIRED in an emailed statement. "Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."
Krawczyk explained the situation further in a post on X: "We design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts (images of a person walking a dog are universal!) Historical contexts have more nuance to them and we will further tune to accommodate that."
He also responded to some critics directly, posting screenshots of his own interactions with Gemini that suggested the errors were not universal.
But the issues Gemini produced were quickly leveraged by anti-woke crusaders online, who claimed variously that Google was "racist" or "infected with the woke mind virus."