Artificial intelligence (AI) has become an increasingly powerful tool, transforming everything from social media feeds to medical diagnoses. However, the recent controversy surrounding Google’s AI tool, Gemini, has cast a spotlight on a critical challenge: bias and inaccuracies within AI development.

By examining the issues with Gemini, we can delve deeper into these broader concerns. This article will not only shed light on the pitfalls of biased AI but also offer valuable insights for building more responsible and trustworthy AI systems in the future.

Read more: AI Bias: What It Is, Types and Their Implications

The Case of Gemini by Google AI

Image Source: Google

Gemini (formerly known as Bard) is a conversational AI model created by Google. Originally launched as Bard in March 2023 and rebranded as Gemini in February 2024, it is known for its ability to communicate and generate human-like text in response to a wide range of prompts and questions.

According to Google, one of its key strengths is its multimodality, meaning it can understand and process information from various formats like text, code, audio, and video. This allows for a more comprehensive and nuanced approach to tasks like writing, translation, and answering questions in an informative way.

Gemini’s Image Generation Tool

It is Gemini’s image generation feature, however, that has attracted the most attention, owing to the controversy surrounding it. Google, which has been competing with OpenAI since the launch of ChatGPT, has faced repeated setbacks in rolling out its AI products.

On 22 February 2024, less than a year after the chatbot’s debut, Google announced it would pause Gemini’s ability to generate images of people, following a backlash over outputs that critics saw as historically inaccurate and excessively ‘woke’.

The feature was designed to generate images of people in response to text prompts. However, concerns were soon raised about its potential to reinforce harmful stereotypes and produce historically inaccurate results. Gemini-generated images circulated on social media, prompting widespread ridicule and outrage, with some users accusing Google of being ‘woke’ to the detriment of truth or accuracy.

Among the images that attracted criticism were Gemini-generated depictions of women and people of colour in historical events or roles historically held by white men. Other examples included a depiction of four Swedish women, none of whom were white, and scenes of Black and Asian Nazi soldiers.

Is Google Gemini Biased?

In the past, other AI models have also faced criticism for overlooking people of colour and perpetuating stereotypes in their results.

However, Gemini was actually designed to counteract these stereotype biases, as Margaret Mitchell, Chief Ethics Scientist at the AI startup Hugging Face, explained to Al Jazeera.

While many AI models tend to default to generating images of light-skinned men, Gemini appears to have been tuned to produce images of people of colour, and of women in particular, even in situations where doing so is not accurate. Google likely adopted this approach because the team understood that simply reproducing the historical biases in its training data would draw significant public criticism.

For example, behind the scenes a prompt such as “pictures of Nazis” might be rewritten as “pictures of racially diverse Nazis” or “pictures of Nazis who are Black women”. A strategy that began with good intentions therefore has the potential to backfire and produce problematic results.
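
To make the mechanism concrete, below is a deliberately simplified, hypothetical sketch of what such a blind prompt-rewriting step could look like. The function name, keyword list, and diversity descriptors are illustrative assumptions only; Google has not published how Gemini’s actual pipeline works.

```python
import random

# Hypothetical diversity descriptors -- illustrative only, not Google's actual list.
DIVERSITY_TERMS = ["racially diverse", "of various ethnicities", "of different genders"]

# Hypothetical keywords used to decide whether a prompt asks for images of people.
PEOPLE_KEYWORDS = ["person", "people", "man", "woman", "soldier", "doctor"]


def augment_prompt(prompt: str) -> str:
    """Naively append a diversity descriptor to prompts that mention people.

    An illustrative sketch of blind prompt rewriting, not Gemini's real logic.
    The rule is applied with no awareness of historical context, so a
    historically specific prompt is rewritten just like a generic one.
    """
    if any(keyword in prompt.lower() for keyword in PEOPLE_KEYWORDS):
        return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"
    return prompt


if __name__ == "__main__":
    print(augment_prompt("pictures of a doctor at work"))       # reasonable rewrite
    print(augment_prompt("pictures of 1940s German soldiers"))  # historically inaccurate rewrite
```

Because the rewrite is applied without any check for historical or contextual accuracy, the same rule that broadens a generic ‘doctor’ prompt distorts a historically specific one.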

Bias in AI can show up in various ways; in Gemini’s case, it surfaced as historical distortion. Images depicting Black people as the Founding Fathers of the United States, for instance, are historically inaccurate. By generating images that deviated from reality, the tool risked reinforcing new inaccuracies and producing insensitive portrayals.

Google’s Response

Following the uproar, Google responded that the images generated by Gemini were a by-product of the company’s efforts to remove the kinds of bias that had previously led AI tools to perpetuate stereotypes and discriminatory attitudes.

Google’s Prabhakar Raghavan further explained that Gemini had been tuned to show a diverse range of people, but had failed to account for prompts where that range would clearly be inappropriate. The model had also become more ‘cautious’ than intended and had misinterpreted “some very anodyne prompts as sensitive”.

“These two things led the model to overcompensate in some cases and be over-conservative in others, leading to images that were embarrassing and wrong,” he said.

The Challenge of Balancing Fairness and Accuracy

When Gemini is described as ‘overcompensating’, it means the model tried so hard to be diverse in its image outputs that the results were inaccurate and, at times, offensive.

In other words, Gemini went beyond simply representing a variety of people in its images: it appears to have prioritised diversity to the point of generating historically inaccurate or illogical results.

Learning From Mistakes: Building Responsible AI Tools

The discussion surrounding Gemini reveals a nuanced challenge in AI development. While the intention behind Gemini was to address biases by prioritising the representation of people of colour, it appears that in some instances, the tool may have overcompensated.

The tendency to over-represent specific demographics can also lead to inaccuracies and perpetuate stereotypes. This underscores the complexity of mitigating biases in AI.

Furthermore, it emphasises the importance of ongoing scrutiny and improvement to achieve the delicate balance between addressing biases and avoiding overcorrection in AI technologies.

Ultimately, through ongoing assessment and adjustment, companies can strive to create AI systems that not only combat bias but also ensure fair and accurate representation for all.

Image Source: Deposit Photos
