Poster
in
Workshop: Programmatic Representations for Agent Learning

Sketch-Plan-Generalize: Learning and Planning with Neuro-Symbolic Programmatic Representations for Inductive Spatial Concepts

Namasivayam Kalithasan · Sachit Sachdeva · Himanshu Gaurav Singh · Vishal Bindal · Arnav Tuli · Gurarmaan Panjeta · Harsh Vora · Divyanshu Agarwal · Rohan Paul · Parag Singla


Abstract:

Effective human-robot collaboration requires the ability to learn personalized concepts from a limited number of demonstrations, while exhibiting inductive generalization, hierarchical composition, and adaptability to novel constraints. Existing approaches that use the code generation capabilities of pre-trained large (vision) language models, as well as purely neural models, generalize poorly to a priori unseen complex concepts. Neuro-symbolic methods offer a promising alternative by searching in program space, but struggle in large program spaces because demonstrations cannot effectively guide the search. Our key insight is to factor inductive concept learning as: (i) Sketch: detecting and inferring a coarse signature of a new concept, (ii) Plan: performing an MCTS search over grounded action sequences guided by human demonstrations, and (iii) Generalize: abstracting grounded plans into inductive programs. Our pipeline facilitates generalization and modular re-use, enabling continual concept learning. Our approach combines the code generation ability of large language models (LLMs) with grounded neural representations, resulting in neuro-symbolic programs that show stronger inductive generalization on the task of constructing complex structures vis-à-vis LLM-only and purely neural approaches. Further, we demonstrate reasoning and planning capabilities with learned concepts for embodied instruction following.
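The three-stage factoring above can be sketched as follows. This is a minimal, hypothetical illustration of the Sketch-Plan-Generalize decomposition on a toy "tower" concept; all function names and data structures are assumptions for exposition, not the authors' implementation, and the MCTS plan search is replaced here by reading actions directly off a single demonstration.

```python
# Hypothetical sketch of the Sketch-Plan-Generalize factoring on a
# toy block-tower concept. Not the authors' actual pipeline.

def sketch(demo):
    """Sketch: infer a coarse signature (concept name, parameter)
    for the new concept from a demonstration."""
    return {"name": "tower", "n": len(demo)}

def plan(signature, demo):
    """Plan: find a grounded action sequence reproducing the demo.
    (The paper uses demonstration-guided MCTS; for brevity we read
    the actions straight off the demonstration.)"""
    return [("place", block, ("on", below))
            for below, block in zip([None] + demo, demo)]

def generalize(signature, grounded_plan):
    """Generalize: abstract the grounded plan into an inductive
    program parameterized by n, enabling re-use at unseen sizes."""
    def program(n):
        blocks = [f"b{i}" for i in range(n)]
        return [("place", b, ("on", below))
                for below, b in zip([None] + blocks, blocks)]
    return program

# Learn from a 3-block demonstration, then generalize to 5 blocks.
demo = ["b0", "b1", "b2"]
sig = sketch(demo)
grounded = plan(sig, demo)
tower = generalize(sig, grounded)
print(len(tower(5)))  # inductive generalization beyond the demo length
```

The point of the factoring is that the expensive search (Plan) happens only over grounded actions for one instance, while Generalize lifts the result into a reusable inductive program that composes with other learned concepts.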
