New AI Model 'Quietly' Self-Teaches Reasoning Skills Before Speaking
- Researchers created an AI model called Quiet-STaR that pauses to "think" before answering, shows its reasoning, and asks users which response is best.
- The goal was for Quiet-STaR to teach itself to reason, much like the "inner monologue" humans run before speaking.
- Trained this way, Quiet-STaR improved its reasoning accuracy from 36% to 47% and roughly doubled its accuracy on math problems.
- This self-teaching approach could help close the gap between language models and human-like reasoning.
- The model is built on Mistral 7B, an open-source language model with 7 billion parameters.
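The self-teaching idea above can be illustrated with a toy sketch. This is not the actual Quiet-STaR algorithm (which learns token-level rationales inside a language model); it is a minimal, hypothetical stand-in showing the core loop: the system samples an internal "thought" before answering, and thoughts that lead to correct answers get reinforced, so it gradually teaches itself which reasoning helps.

```python
import random

random.seed(0)

# Hypothetical task: answer "a + b" questions. Candidate internal
# "thoughts" are strategies the system might apply before answering;
# only one of them is actually correct.
thoughts = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "first_number": lambda a, b: a,
}

# Each thought starts with equal weight; weights drive sampling.
weights = {name: 1.0 for name in thoughts}

def sample_thought():
    """Sample a thought with probability proportional to its weight."""
    names = list(weights)
    total = sum(weights[n] for n in names)
    r = random.uniform(0, total)
    acc = 0.0
    for n in names:
        acc += weights[n]
        if r <= acc:
            return n
    return names[-1]

# Self-training loop: think first, answer, then reinforce the thought
# whenever the answer it produced was correct.
for _ in range(500):
    a, b = random.randint(0, 9), random.randint(0, 9)
    name = sample_thought()
    if thoughts[name](a, b) == a + b:
        weights[name] *= 1.05  # reward reasoning that worked

best = max(weights, key=weights.get)
print(best)
```

With no labels on which strategy is right, the loop discovers the useful thought purely from answer feedback, mirroring (in miniature) how Quiet-STaR rewards rationales that improve the model's predictions.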