Apple's On-Device AI Can Understand References Like Humans, Outperforms OpenAI's GPT-4
- Apple researchers describe an on-device AI model, ReALM, that can resolve references across different contexts, including screen content, conversation history, and background processes.
- ReALM aims to make voice assistants like Siri smarter and more useful by resolving references more accurately.
- In benchmarks, ReALM matches or exceeds the performance of OpenAI's GPT-4 despite having far fewer parameters.
- ReALM "substantially outperforms" GPT-4 on domain-specific user utterances.
- Because it runs fully on-device, ReALM achieves this without compromising performance, making it well suited for practical use.