Trajectory Stitching via Compositional Trajectory Generation. The proposed method, CompDiffuser, generates long-horizon plans by compositionally sampling a sequence of coherent short trajectory segments, despite being trained only on short-horizon data.
Effective trajectory stitching for long-horizon planning is a significant challenge in robotic decision-making. While diffusion models have shown promise in planning, they are limited to solving tasks similar to those seen in their training data. We propose CompDiffuser, a novel generative approach that can solve new tasks by learning to compositionally stitch together shorter trajectory chunks from previously seen tasks. Our key insight is to model the trajectory distribution by subdividing it into overlapping chunks and learning their conditional relationships through a single bidirectional diffusion model. This allows information to propagate between segments during generation, ensuring physically consistent connections. We conduct experiments on benchmark tasks of varying difficulty, covering different environment sizes, agent state dimensions, and training data qualities, and show that CompDiffuser significantly outperforms existing methods.
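To make the compositional sampling idea concrete, here is a minimal toy sketch in Python/NumPy. It is not the paper's implementation: `denoise_chunk` is a hypothetical stand-in for the learned bidirectional denoiser, and the chunk length, noise schedule, and stitching rule are illustrative assumptions. It shows the core mechanic: all chunks are denoised jointly, and each reverse step conditions a chunk on its still-noisy neighbors so information propagates in both directions before the chunks are stitched into one goal-conditioned trajectory.

```python
# Toy sketch of compositional chunk-wise sampling (NumPy, 2D states).
# All names and the noise schedule are illustrative assumptions.
import numpy as np

H, K, DIM, STEPS = 16, 4, 2, 50   # chunk length, #chunks, state dim, reverse steps
rng = np.random.default_rng(0)
start, goal = np.zeros(DIM), np.full(DIM, 10.0)

def denoise_chunk(chunk, left, right, t):
    """Hypothetical stand-in for the learned denoiser: one reverse step for a
    chunk, conditioned on its (still noisy) neighbors so information flows in
    both directions. A real model would be a trained network."""
    a = left[-1] if left is not None else chunk[0]
    b = right[0] if right is not None else chunk[-1]
    target = np.linspace(a, b, len(chunk))          # toy "consistent" chunk
    return chunk + (target - chunk) / (t + 1)       # small step toward it

# Start all K chunks from Gaussian noise and denoise them jointly.
chunks = [rng.normal(size=(H, DIM)) for _ in range(K)]
for t in reversed(range(STEPS)):
    chunks = [denoise_chunk(c,
                            chunks[i - 1] if i > 0 else None,
                            chunks[i + 1] if i < K - 1 else None, t)
              for i, c in enumerate(chunks)]
    chunks[0][0], chunks[-1][-1] = start, goal      # inpaint start & goal

# Stitch: merge overlapping endpoints so segments connect consistently.
plan = chunks[0]
for c in chunks[1:]:
    c[0] = 0.5 * (plan[-1] + c[0])                  # reconcile the shared state
    plan = np.vstack([plan[:-1], c])

print(plan.shape)   # (K*H - (K-1), DIM) after merging overlaps
```

Because every chunk sees its neighbors at the same noise level throughout denoising, the segments co-adapt rather than being generated independently and glued together afterward.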
Figure: An unseen long-horizon evaluation task (top-down view) with start marked by a circle and goal by a star, example short demonstrations, and the synthesized plan in top-down view.
We introduce CompDiffuser, a generative trajectory stitching method that leverages the compositionality of diffusion models. We propose a noise-conditioned score function formulation that enables autoregressive sampling from multiple short-horizon trajectory diffusion models, whose outputs are then stitched into a longer-horizon goal-conditioned trajectory. Our method demonstrates effective trajectory stitching, as evidenced by extensive experiments on tasks of varying difficulty, covering different environment sizes, planning state dimensions, trajectory types, and training data qualities.
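As a rough illustration of what "noise-conditioned" means here, the PyTorch sketch below trains a chunk denoiser whose conditioning inputs, the neighboring chunks, are themselves noised to the same diffusion step; this is what allows the sampler to feed it neighbors that are still being denoised. `ChunkDenoiser`, the MLP architecture, and the linear noise schedule are all assumptions for illustration, not the paper's actual model.

```python
# Toy sketch of noise-conditioned training for a chunk denoiser (PyTorch).
# Architecture, schedule, and data fields are illustrative assumptions.
import torch
import torch.nn as nn

H, DIM, T = 16, 2, 100   # chunk length, state dim, diffusion steps

class ChunkDenoiser(nn.Module):
    """Predicts the noise added to a chunk, given noisy neighbors and step t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * H * DIM + 1, 256), nn.ReLU(), nn.Linear(256, H * DIM)
        )

    def forward(self, x_t, left_t, right_t, t):
        inp = torch.cat([x_t.flatten(1), left_t.flatten(1),
                         right_t.flatten(1), t.float()[:, None] / T], dim=1)
        return self.net(inp).view(-1, H, DIM)

model = ChunkDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random "short-horizon" data split into
# overlapping (left, center, right) chunks.
left, center, right = (torch.randn(8, H, DIM) for _ in range(3))
t = torch.randint(0, T, (8,))
alpha = 1.0 - t.float().view(-1, 1, 1) / T          # toy linear schedule
eps = torch.randn_like(center)
center_t = alpha.sqrt() * center + (1 - alpha).sqrt() * eps

# Key point: the conditioning neighbors are noised to the SAME step t,
# matching what the model will see during compositional sampling.
noisy = lambda x: alpha.sqrt() * x + (1 - alpha).sqrt() * torch.randn_like(x)
loss = ((model(center_t, noisy(left), noisy(right), t) - eps) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

At sampling time, the same network can then be applied autoregressively across chunk positions, since the distribution of its conditioning inputs (noisy neighbors at step t) matches what it saw during training.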
To be updated soon.