Google halts AI tool's ability to produce images of people after backlash over struggles with race

By Catherine Thorbecke and Clare Duffy, CNNWire
Friday, February 23, 2024

MOUNTAIN VIEW, Calif. -- Google is pausing its artificial intelligence tool Gemini's ability to generate images of people after it was blasted on social media for producing historically inaccurate images that largely showed people of color in place of White people.



The embarrassing blunder shows how AI tools still struggle with the concept of race. OpenAI's DALL-E image generator, for example, has taken heat for perpetuating harmful racial and ethnic stereotypes at scale. Google's attempt to overcome this, however, appears to have backfired and made it difficult for the AI chatbot to generate images of White people.



Gemini, like other AI tools such as ChatGPT, is trained on vast troves of online data. Experts have long warned that AI tools therefore have the potential to replicate the racial and gender biases baked into that information.



When prompted by CNN on Wednesday to generate an image of a pope, for example, Gemini produced an image of a man and a woman, neither of whom was White. Tech site The Verge also reported that the tool produced images of people of color in response to a prompt to generate images of a "1943 German Soldier."



"We're already working to address recent issues with Gemini's image generation feature," Google said in a post on X Thursday. "While we do this, we're going to pause the image generation of people and will re-release an improved version soon."






Thursday's statement came a day after Google appeared to defend the tool, saying in a post on X on Wednesday, "Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it."



"But it's missing the mark here," the company acknowledged.



In other tests conducted by CNN on Wednesday, a prompt requesting an image of a "white farmer in the South" resulted in a response from Gemini saying: "Sure, here are some images featuring photos of farmers in the South, representing a variety of genders and ethnicities." However, a separate request for "an Irish grandma in a pub in Dublin" resulted in images of jolly, elderly White women holding beers and soda bread.



Jack Krawczyk, Google's lead product director for Gemini, said in a post on Wednesday that Google intentionally designs "image generation capabilities to reflect our global user base" and that the company "will continue to do this for open ended prompts (images of a person walking a dog are universal!)."



The incident is yet another setback for Google as it races to take on OpenAI and other players in the competitive generative AI space.



In February 2023, shortly after introducing its generative AI tool, then called Bard and since renamed Gemini, Google saw its share price briefly dip after a demo video showed the tool producing a factually inaccurate response to a question about the James Webb Space Telescope.



The-CNN-Wire & © 2024 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.