
Google has temporarily suspended the Gemini chatbot’s ability to generate images of people after the A.I. tool produced historically inaccurate depictions, including images of people of color in German military uniforms from World War II. The move comes as Google works to address these inaccuracies in the chatbot’s historical depictions.

The controversy is the latest test for Google’s A.I. efforts, arriving shortly after the company rebranded its chatbot from Bard to Gemini and upgraded the underlying technology. Critics have highlighted flaws in Gemini’s approach, including its refusal to depict white people and its declining to generate images based on specific ethnicities and skin tones, restrictions intended to avoid perpetuating stereotypes and biases. Google has acknowledged that Gemini is “missing the mark” in its attempts to generate a diverse variety of people and has promised to improve the feature.

The backlash recalls past controversies over bias in Google’s technology, such as when Google Photos labeled a picture of two Black people as gorillas in 2015. Despite earlier efforts to reduce offensive outputs and improve representation, social media users have criticized Google for going too far in its push to showcase racial diversity. The chatbot now informs users that it is working to improve its ability to generate images of people and will notify them when the feature returns. Gemini’s predecessor, Bard, also stumbled at its public debut last year, when it shared inaccurate information.