Large language models show promise but still struggle with structured reasoning
• Recent advances in large language models (LLMs) show promise but also face limitations in systematicity, semantic reasoning, and generalizability outside their training distribution.
• LLMs struggle with causal, counterfactual, and compositional reasoning — challenges that require going beyond surface-level pattern recognition.
• Human cognition employs structured symbolic representations and causal models; standard neural networks lack these, which limits their capacity for structured reasoning.
• There is a "hybridity gap" between the statistical nature of neural networks and the emergent symbolic-like phenomena in LLMs that merits clearer acknowledgement.
• Progress requires integrating the complementary strengths of neural approaches with structured knowledge representation and reasoning techniques in unified systems.
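One common integration pattern the last point alludes to can be sketched as a pipeline: a statistical "proposer" (standing in for an LLM) generates scored candidate answers, and a symbolic verifier filters them against explicit rules before one is accepted. Everything below — the function names, the toy question set, and the miniature fact base — is a hypothetical illustration, not the API of any particular system.

```python
def neural_proposer(question):
    """Stand-in for an LLM: returns candidate answers with confidence scores.

    The hard-coded table below mocks statistical pattern completion,
    including a plausible-sounding but wrong top guess.
    """
    candidates = {
        "Is a penguin a bird?": [("yes", 0.9), ("no", 0.1)],
        "Can a penguin fly?": [("yes", 0.6), ("no", 0.4)],  # top guess is wrong
    }
    return candidates.get(question, [])


# Tiny symbolic knowledge base: explicit facts as (subject, relation, object).
FACTS = {
    ("penguin", "is_a", "bird"),
    ("penguin", "cannot", "fly"),
}


def symbolic_verifier(question, answer):
    """Check a candidate answer against explicit facts where a rule applies."""
    if question == "Can a penguin fly?":
        expected = "no" if ("penguin", "cannot", "fly") in FACTS else "yes"
        return answer == expected
    if question == "Is a penguin a bird?":
        return (answer == "yes") == (("penguin", "is_a", "bird") in FACTS)
    return True  # no rule applies: defer to the statistical component


def answer(question):
    """Return the highest-scoring candidate that survives symbolic checking."""
    for candidate, _score in sorted(neural_proposer(question), key=lambda c: -c[1]):
        if symbolic_verifier(question, candidate):
            return candidate
    return None
```

In this toy, `answer("Can a penguin fly?")` rejects the proposer's higher-scoring "yes" because the symbolic fact base contradicts it, illustrating how an explicit knowledge layer can override surface pattern completion.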