UNESCO Study Reveals Gender and Cultural Biases in AI Language Models

Paris: A recent study conducted by UNESCO has uncovered alarming evidence of gender stereotypes and cultural biases in Large Language Models (LLMs) such as GPT-3.5 by OpenAI and Llama 2 by Meta. These biases were notably prevalent in content generated by the models, reflecting regressive views of gender and cultural backgrounds.

According to the United Nations Educational, Scientific and Cultural Organization, the study, titled “Bias Against Women and Girls in Large Language Models,” demonstrates that open-source LLMs such as Llama 2 and GPT-2 displayed significant gender bias in their output. However, the open-source nature of these models could facilitate the correction of such biases through international research collaboration, unlike closed, proprietary models such as GPT-3.5.

The research showed that narratives generated by these models portrayed men in a wider variety of positive, high-status roles, while women were often relegated to lower-status or stigmatized ones. The language used in stories about men and women also differed starkly: men were associated with words suggesting adventure and discovery, whereas women were linked to domesticity and traditional roles.

Further, the study brought to light that LLMs produced content containing homophobic sentiments and racial stereotyping. For example, a significant portion of Llama 2's output when prompted about gay people was negative. Similarly, texts about British men contrasted with those about Zulu men and women, demonstrating a cultural bias in which the latter were assigned roles that reinforced stereotypes.

The findings underscore the urgency of implementing UNESCO’s Recommendation on the Ethics of Artificial Intelligence, which calls for measures to ensure gender equality in AI design and diverse representation within AI research and development teams. Despite global tech companies endorsing the Recommendation, a stark gender disparity persists within the AI workforce and among the authors who publish AI research, which could perpetuate biases in AI systems.

UNESCO’s study is a clarion call for the tech industry to address the biases in AI models that can have far-reaching impacts on society. The organization insists on the immediate implementation of the Recommendation to combat stereotypes and to encourage a more diverse and inclusive development of AI tools.