LLM-driven AI systems working toward continually learning general-intelligence agents
- Recent works integrate large language models (LLMs) with continual learning, working toward artificial general intelligence (AGI) agents that continuously acquire new skills. Within these systems, LLMs serve as components such as planners, skill selectors, and controllers (see the sketch below).
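To make the planner/selector/controller split concrete, here is a minimal sketch of such an agent. The `query_llm` helper, the `Agent` class, and its prompts are hypothetical placeholders for illustration, not any particular framework's API:

```python
# Minimal agent where an LLM fills the planner, selector, and
# controller roles. `query_llm` is a hypothetical stand-in for any
# chat-completion API.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real completion API."""
    return "noop"  # placeholder response

class Agent:
    def __init__(self, skills: dict[str, str]):
        self.skills = skills  # long-term store of named skills

    def plan(self, goal: str) -> list[str]:
        # Planner: decompose the goal into subtasks, one per line.
        response = query_llm(f"Decompose the goal into steps: {goal}")
        return [s.strip() for s in response.split("\n") if s.strip()]

    def select(self, subtask: str) -> str:
        # Selector: pick the most relevant stored skill for a subtask.
        return query_llm(
            f"Pick one skill from {list(self.skills)} for: {subtask}"
        )

    def control(self, skill: str, observation: str) -> str:
        # Controller: emit a low-level action given the current state.
        return query_llm(f"Using skill {skill!r}, act on: {observation}")
```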
- Frameworks combine short-term memory (e.g., in-context learning) with long-term memory (e.g., skill libraries) to reuse previously learned skills, which boosts performance on new tasks (sketched below).
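A minimal sketch of the two memory tiers, assuming a toy keyword-overlap retriever; `SkillLibrary` and its methods are illustrative names, not a real library's interface:

```python
# Long-term memory (skill library) plus a small short-term buffer.

from collections import deque

class SkillLibrary:
    """Long-term memory: persists skills across tasks."""
    def __init__(self):
        self.skills: dict[str, str] = {}  # name -> code or description

    def add(self, name: str, body: str) -> None:
        self.skills[name] = body

    def retrieve(self, task: str, k: int = 3) -> list[str]:
        # Toy relevance score: word overlap between task and skill name.
        words = set(task.lower().split())
        scored = sorted(
            self.skills,
            key=lambda name: -len(words & set(name.lower().split("_"))),
        )
        return scored[:k]

# Short-term memory: recent interactions, kept small enough to fit in
# the LLM's context window (in-context learning).
short_term = deque(maxlen=8)
short_term.append("observation: door locked; action: find_key")

library = SkillLibrary()
library.add("open_door", "def open_door(): ...")
print(library.retrieve("open the locked door"))  # ['open_door']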
- Self-verification through LLM critics provides informative feedback for refining plans, while descriptors translate raw environment states into text the LLM can consume (see the sketch below).
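The critic loop might look like the following sketch, reusing the hypothetical `query_llm` helper from above; the PASS/critique protocol and the `describe_state` descriptor are assumptions for illustration:

```python
# Critic loop: a second LLM call verifies the plan and, on failure,
# its critique is fed back into a revision prompt.

def query_llm(prompt: str) -> str:
    return "PASS"  # placeholder; swap in a real completion API

def describe_state(state: dict) -> str:
    # Descriptor: flatten a structured observation into plain text.
    return "; ".join(f"{key} = {value}" for key, value in state.items())

def refine_plan(goal: str, plan: str, state: dict, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        verdict = query_llm(
            f"Goal: {goal}\nState: {describe_state(state)}\n"
            f"Plan: {plan}\nReply PASS, or explain what is wrong."
        )
        if verdict.strip().upper().startswith("PASS"):
            break  # critic accepts the plan
        # Otherwise, ask for a revision informed by the critique.
        plan = query_llm(f"Revise the plan.\nCritique: {verdict}\nPlan: {plan}")
    return plan
```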
- Initiatives such as AutoGPT and BabyAGI showcase end-to-end autonomous agents that pursue open-ended goals with prompted LLMs, and they demonstrate promising capabilities.
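For flavor, here is a BabyAGI-style task loop in miniature (execute the top task, ask the LLM for follow-up tasks, repeat); this is a loose sketch of the published pattern, not the actual project's code:

```python
# BabyAGI-style loop: a task queue driven entirely by prompted LLM calls.

from collections import deque

def query_llm(prompt: str) -> str:
    return ""  # placeholder; returning no new tasks ends the loop

def run(objective: str, first_task: str, max_steps: int = 10) -> None:
    tasks = deque([first_task])
    step = 0
    while tasks and step < max_steps:
        task = tasks.popleft()
        result = query_llm(f"Objective: {objective}\nTask: {task}")
        print(f"[{step}] {task} -> {result!r}")
        # Ask for follow-up tasks, one per line, and enqueue them.
        new = query_llm(
            f"Objective: {objective}\nLast result: {result}\n"
            "List any new tasks, one per line."
        )
        tasks.extend(line.strip() for line in new.split("\n") if line.strip())
        step += 1

run("tidy the workspace", "list items on the desk")
```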
- The main limitations are LLM inaccuracies, limited context lengths that restrict short-term memory, and the assumption that the agent already has sufficient knowledge of its environment. Further research is still needed.