Sometimes the image-generator would overcompensate for diversity


Google issued an apology on Friday for the poorly executed launch of a new AI image generator, acknowledging that the tool would occasionally “overcompensate,” generating a diverse range of people even when such a range didn’t make sense.

The explanation, which came a day after Google said it was temporarily halting its Gemini chatbot from generating any images of people, addressed why the tool’s images featured people of color in historically unlikely settings. The pause was a reaction to a social media backlash from users who said the tool’s generation of a racially diverse set of images in response to written prompts was anti-White.

Google senior vice president Prabhakar Raghavan, who oversees the search engine and other businesses at the company, stated in a blog post on Friday that “it’s clear that this feature missed the mark.” He added: “A number of the images produced are untrue or even offensive. We apologize for the feature’s poor performance and appreciate the feedback from users.”

Though Raghavan did not cite specific examples, among the images that garnered attention on social media this week were some that depicted a Black woman as a U.S. founding father and portrayed Black and Asian people as German soldiers during the Nazi era. The prompts used to create those images could not be independently verified by The Associated Press.

Approximately three weeks ago, Google added the image-generating feature to its Gemini chatbot, which was previously called Bard. It was constructed upon the foundation of an earlier Google research project called Imagen 2.

Google has long been aware of how cumbersome these tools can be. The scientists behind Imagen issued a warning in a 2022 technical paper, stating that generative AI tools “raise many concerns regarding social and cultural exclusion and bias” and can be used for harassment or the spread of false information. The researchers noted at the time that Google’s decision to withhold “a public demo” of Imagen or its underlying code was influenced by these factors.

Since then, the race among tech companies to capitalize on interest in the technology, sparked by the debut of OpenAI’s chatbot ChatGPT, has intensified the pressure to publicly release generative AI products.

Gemini is not the first image generator to be plagued by such issues recently. A few weeks ago, Microsoft was forced to make changes to its Designer tool because some users were using it to make deepfake, pornographic images of Taylor Swift and other celebrities. Research has also demonstrated that AI image generators, when left without filters, are more likely to produce lighter-skinned men when asked to create a person in a variety of scenarios. These generators can also reinforce gender and racial stereotypes present in their training data.

“We tuned this feature in Gemini when we built it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people,” Raghavan stated on Friday. “And because our users are global, we want it to work well for everyone.”

He said that many people requesting a photo of a football player or a dog walker might “want to receive a range of people.” However, users searching for someone of a specific race or ethnicity, or in particular cultural contexts, “should absolutely get a response that accurately reflects what you ask for.”

Raghavan said Gemini misinterpreted some very anodyne prompts as sensitive, overcompensating in some situations while in others being “more cautious than we intended,” refusing to answer certain prompts entirely.

He did not specify which prompts he meant, but tests of the tool conducted by the AP on Friday found that Gemini frequently declined requests for images related to specific subjects, like protest movements. For example, the tool refused to produce images about the Arab Spring, the George Floyd protests, or Tiananmen Square. During one conversation, the chatbot said it did not want to contribute to the spread of misinformation or the “trivialization of sensitive topics.”

Much of this week’s outcry over Gemini’s outputs was fueled by Elon Musk, the owner of the social media platform X, who accused Google of what he called its “insane racist, anti-civilizational programming.” Musk, who has his own AI startup, has regularly attacked other AI developers and Hollywood for what he sees as liberal bias.

Raghavan said Google will conduct “extensive testing” before enabling the chatbot to show people again.

Sourojit Ghosh, a University of Washington researcher who has examined bias in AI image generators, expressed disappointment on Friday that Raghavan’s message concluded with a caveat that Google “can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate, or offensive results.”

For a company that has mastered search algorithms and holds “one of the biggest troves of data in the world,” Ghosh said, creating accurate or inoffensive results “should be a fairly low bar we can hold them accountable to.”
