New AI Model Generates Biased Responses Based on Different Stereotypes
- Researchers have created a large language model called OpinionGPT that deliberately generates biased outputs.
- OpinionGPT is a variant of Meta's Llama 2 model, fine-tuned to respond as 11 different bias groups.
- The model was fine-tuned on data from "AskX" subreddits intended to represent each group's biases.
- The resulting outputs often reflect stereotypes rather than measurable real-world biases.
- OpinionGPT is available online for public testing but may generate false, inaccurate, or obscene content.
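To make the bias-group conditioning concrete, here is a minimal sketch of how a prompt might be tagged with a group identifier before being sent to such a fine-tuned model. The prompt template, the `build_prompt` helper, and the group names shown are assumptions for illustration, not OpinionGPT's actual format or its actual list of 11 groups.

```python
# Hypothetical sketch of bias-group prompt conditioning.
# The "[group]" tag template and these group names are illustrative
# assumptions, not OpinionGPT's real prompt format or group list.

BIAS_GROUPS = {"americans", "germans", "teenagers", "liberals"}  # illustrative subset


def build_prompt(group: str, question: str) -> str:
    """Prefix a question with a bias-group tag so a model fine-tuned
    on per-group data answers from that group's perspective."""
    if group not in BIAS_GROUPS:
        raise ValueError(f"unknown bias group: {group}")
    return f"[{group}] {question}"


if __name__ == "__main__":
    print(build_prompt("germans", "What is the best breakfast?"))
```

In a setup like this, the fine-tuning data would pair each tag with answers drawn from the corresponding AskX subreddit, so at inference time the tag steers the model toward that community's style of response.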