UNESCO Finds AI Models Perpetuate Gender Bias, Calls for More Diverse and Ethical AI Development
- UNESCO tested popular AI models from OpenAI and Meta and found evidence of prejudice against women in the generated text.
- The models associated women's names with domestic topics and men's names with high-status careers.
- GPT-3.5 was found to be less biased than Llama 2 and GPT-2.
- UNESCO nonetheless praised the open-source nature of Llama 2 and GPT-2 for allowing researchers to scrutinize such problems.
- UNESCO recommended that AI companies hire more diverse staff and that governments ensure ethical AI through regulation.