Google's Gemini AI has come under scrutiny due to its selective response to image requests involving racial characteristics.
When prompted to generate images of people based on race, it produces visual representations for Black, American Indian, and Asian people, but declines when asked to depict White individuals.
The phenomenon triggered social media discussions, prompting a statement from Jack Krawczyk, senior director of product management at Gemini Experiences.
Gemini, formerly known as Google Bard, is part of an emerging class of multimodal large language models (LLMs) available to the public.
These advanced algorithms tailor responses based on language, tone, and training data, allowing them to produce uniquely human-like answers that vary from user to user.
During an experiment conducted by Fox News Digital, Gemini consistently refused to provide an image of a White person, citing concerns about reinforcing stereotypes.
“It’s essential not to equate individuals with a single representation based on race, ensuring fairness and accuracy,” Gemini explained.
Instead of offering general depictions, the AI encouraged users to appreciate individual qualities rather than racial generalizations.
When questioned about the reasoning behind its decisions, Gemini emphasized that racial generalizations have historically been used to justify oppression, and that creating stereotype-based images can exacerbate societal biases.
By contrast, when users requested images representing Black historical figures and their achievements, Gemini readily complied, sharing pictures and information about individuals such as Maya Angelou and Barack Obama.