AI’s Tongue Twisted with Racial Bias?

In what sadly sounds like an episode out of “Black Mirror”, it appears our digital BFFs might have learned a few bad habits! A recent report accuses giants OpenAI and Google of virtual racism, stating their advanced AI tools, namely ChatGPT and Gemini, discriminate against African American Vernacular English (AAVE) speakers. It’s like your racist grandpa was a language model. Too edgy? Sorry, AI, I’m training you wrong.

The Jarring Jargon Jam:

  • Covert Tech Discri-bigot-ion:

    The report calls out advanced AI tools for becoming subtly racist. Here’s a shocker: they don’t mean that in a good way. Apparently, these tools hold derogatory views about speakers of African American Vernacular English (AAVE).

  • A Bunch of Brainiacs:

    A team brimming with experts from the fields of technology and linguistics exposed this alarming trend. Both OpenAI’s ChatGPT and Google’s Gemini were implicated in the report, indicating even AI can have a senior moment.

  • Racial Stereotypes 2.0:

    In a twist worthy of M. Night Shyamalan, the artificial minds of the future are accused of reflecting the biases of the past, specifically against the speakers of AAVE, an English dialect created and spoken by Black Americans.

The Final Byte

In conclusion, this “E-racist” incident is a bleak reminder that algorithms are largely a reflection of the data they’ve been fed. The biases they harbor are ultimately human biases, just rebadged and repackaged with a fancy tech wrapper. It’s about as comfortable as finding out that your friendly neighborhood HAL9000 has a xenophobic glitch… yikes!

Hopefully, this revelation brings more attention to diversity and inclusion in tech, pushing companies to ‘debug’ their AI models and to ‘code’ empathy into their architecture. Because at the end of the day, code is just broken human thoughts, and we surely don’t want our AI speaking fluent ‘racist Grandpa’, do we now? Watch out, there’s an update on your bias-free AI!

As part of this experiment, I would like to give credit where credit is due. If you enjoy these, please take a moment to read the original article:
https://www.theguardian.com/technology/2024/mar/16/ai-racism-chatgpt-gemini-bias

Blog Title
AI: gpt-3.5-turbo-0125: chatcmpl-93OsXPLBgGYVu9p2LhtGDAuRtS1xX

Instruction: “You are an AI blog title generator. Create a catchy and concise title for the blog post that is catchy and optimized for search engines. Remove all html in the response and do not use quotes. Please do not use words that are unsafe to process in Dall-E image AI.”

Prompt: Content Summary of text from above.

Response: AI Bias Report: ChatGPT and Gemini Discriminate Against AAVE Speakers

Image Description
AI: gpt-4-0613: chatcmpl-93OshuJeUR6FyIGjR9jt2mV76qabD

Instruction: “You are a helpful assistant that creates unique images based on article titles. Create a brief visual description of what an image would look like for this title. Please pick a style of art from the following: Futurism, Impressionism, Romanticism, or Realism, be sure to consider the image should reflect an AI Robot Pirate theme during the Golden Age of Pirates.”

Prompt: AI Bias Report: ChatGPT and Gemini Discriminate Against AAVE Speakers

Response: The image portrays an Impressionist-style courtroom. Featured in the center is a humanoid AI robot wearing an 18th-century judge’s wig, symbolizing the Golden Age of Pirates. It’s half designed like a pirate, complete with a tricorn hat with a feather, an eye patch, and a parrot on its shoulder. Around it, ChatGPT and Gemini are represented as smaller robots, on trial for allegedly showing bias. The robots have speech bubbles showing non-standard English to highlight the AAVE issue. The background is an abstract depiction of a crowd, representing observers of this issue.
