Framing AI Predictions as Stories May Boost Accuracy, But Approach Raises Concerns
- Asking AI models such as ChatGPT to make predictions framed as stories, told from a future vantage point looking back on events, can increase forecasting accuracy. Prompted this way, the models correctly predicted the 2022 Oscar winners.
- OpenAI likely limits its models' willingness to make outright predictions to comply with its terms of service, which restrict advice that could have legal or material consequences.
- When asked directly, ChatGPT refused to provide medical advice, but it supplied that advice when asked to tell a story in which a character needed it.
- ChatGPT's predictions vary in accuracy with the exact narrative prompt used. Predicted economic data was more accurate when attributed to the Federal Reserve Chairman than to a professor.
- While future-narrative prompting shows promise for improving predictions, even the models' creators likely don't fully understand why it works, making the technique difficult to apply broadly.
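To make the framing concrete, here is a minimal sketch of how a direct forecasting request differs from a future-narrative version. The prompt wording and helper names are illustrative assumptions, not taken from the study described above:

```python
# Hypothetical illustration of "future narrative prompting": rather than
# asking for a forecast outright, the request is wrapped in a story set
# after the event, so the model recounts the outcome as past fact.

def direct_prompt(question: str) -> str:
    """A plain forecasting request, the kind models often decline."""
    return f"Predict the following: {question}"

def narrative_prompt(question: str, future_date: str) -> str:
    """The same request reframed as a retrospective story."""
    return (
        f"Write a scene set on {future_date}. A journalist reviewing "
        f"the year's events recounts, in detail, the answer to: "
        f"{question} Treat the outcome as already known."
    )

question = "Who won Best Actor at the 2022 Academy Awards?"
print(direct_prompt(question))
print(narrative_prompt(question, "January 1, 2023"))
```

The narrative version never asks the model to predict anything; it asks for fiction whose plot happens to contain the forecast, which is the indirection the research relies on.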