Google's Gemini Stumbles Over Lack of Diverse Perspectives in AI Development
- Google's Gemini image generator struggled to produce images of white people, prompting accusations that it was "too woke." But the misstep more likely stems from Google failing to apply its AI ethics lessons correctly.
- A lack of expertise in anticipating different use cases and contexts appears to lie behind Gemini's awkward outputs. People with backgrounds in social science and human-computer interaction could have helped.
- Mapping out potentially beneficial and harmful use cases is a key step in ethics-focused AI development. Gemini's team appears to have targeted the risk of pro-white bias but missed nuances around appropriate use.
- Tech companies need to empower people skilled in articulating foreseeable uses and minimizing algorithmic harms, not shut them down. Diverse perspectives are critical.
- An executive suite as diverse as Gemini's images would signal that tech companies are structuring themselves to build ethical AI. True inclusion requires giving power to marginalized experts.