Commercial AI chatbots exhibit racial bias toward speakers of African American English, even while expressing superficially positive sentiments about African Americans.
Valentin Hofmann of the Allen Institute for AI, a non-profit research organization, highlighted the covert racism found in large language models, including OpenAI's GPT-4 and GPT-3.5, in a social media post. According to Hofmann, these models are more likely to recommend death sentences for defendants who speak African American English.
The study uncovered this hidden bias across various versions of large language models, which currently power widely-used commercial chatbots.
AI chatbots found to use racist stereotypes even after anti-racism training