The vision–language–action (VLA) paradigm has enabled powerful robotic control by leveraging vision–language models, but its reliance on large-scale, high-quality robot data limits its generalization. Generative world models offer a promising alternative for general-purpose embodied AI, yet a critical gap remains between their pixel-level plans and physically executable actions. To bridge this gap, we propose the Tool-Centric Inverse Dynamics Model (TC-IDM). By focusing on the tool's imagined trajectory synthesized by the world model, TC-IDM establishes a robust intermediate representation that connects visual planning to physical control. TC-IDM extracts the tool's point-cloud trajectories from generated videos via segmentation and 3D motion estimation. To accommodate diverse tool attributes, our architecture employs decoupled action heads that project these planned trajectories into 6-DoF end-effector motions and corresponding control signals. This 'plan-and-translate' paradigm not only supports a wide range of end-effectors but also significantly improves viewpoint invariance, and it generalizes strongly to long-horizon and out-of-distribution tasks, including interaction with deformable objects. In real-world evaluations, the world model paired with TC-IDM achieves an average success rate of 61.11%, with 77.7% on simple tasks and 38.46% on zero-shot deformable-object tasks, substantially outperforming end-to-end VLA-style baselines and other IDMs.
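To make the plan-and-translate pipeline concrete, below is a minimal, illustrative Python/NumPy sketch. It is not the paper's implementation: the tool point clouds are assumed to be already segmented and lifted to 3D per generated frame, the per-step tool motion is estimated here with a standard Kabsch (SVD-based) rigid alignment as a stand-in for the paper's 3D motion estimation, and pose_head / control_head are hypothetical placeholders for the decoupled action heads.

# Minimal sketch of the 'plan-and-translate' idea, using only NumPy.
# Assumptions (not from the paper): per-frame tool point clouds are given,
# Kabsch alignment stands in for 3D motion estimation, and the two heads
# are simple placeholders for the learned decoupled action heads.
import numpy as np


def estimate_rigid_motion(src: np.ndarray, dst: np.ndarray):
    """Kabsch alignment: rotation R and translation t with dst ~= R @ src + t."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t


def pose_head(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Stand-in 6-DoF head: translation plus axis-angle rotation as one action vector."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        axis_angle = np.zeros(3)
    else:
        axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        axis_angle = angle * axis / (2.0 * np.sin(angle))
    return np.concatenate([t, axis_angle])


def control_head(prev_pts: np.ndarray, next_pts: np.ndarray) -> float:
    """Stand-in control head: e.g. close the gripper if the tool point cloud contracts."""
    spread = lambda p: np.linalg.norm(p - p.mean(0), axis=1).mean()
    return 1.0 if spread(next_pts) < 0.95 * spread(prev_pts) else 0.0


def plan_and_translate(tool_clouds: list) -> list:
    """Translate an imagined tool trajectory into per-step end-effector actions."""
    actions = []
    for prev_pts, next_pts in zip(tool_clouds[:-1], tool_clouds[1:]):
        R, t = estimate_rigid_motion(prev_pts, next_pts)
        actions.append({
            "ee_delta": pose_head(R, t),                   # 6-DoF end-effector motion
            "control": control_head(prev_pts, next_pts),   # gripper / tool signal
        })
    return actions


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(256, 3))               # synthetic segmented tool points
    shifted = cloud + np.array([0.02, 0.0, 0.05])   # imagined motion: small translation
    print(plan_and_translate([cloud, shifted])[0]["ee_delta"])

In this sketch the heads are hand-written mappings; in the actual method they are learned modules conditioned on tool attributes, which is what allows the same planned trajectory to drive different end-effectors.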
Achieves sub-4cm precision in dart placement tasks, demonstrating superior plan-to-execution fidelity over VLA baselines.
Zero-shot transfer to unseen cameras (Apple Pro, D435i) and varying lighting conditions without any fine-tuning.
Successfully handles non-rigid objects (e.g., cloth removal) with a 38.46% success rate, despite being trained exclusively on rigid-object data.
Executes complex multi-stage tasks, such as 6-step hoodie folding, in a zero-shot manner without intermediate replanning.
Seamlessly transfers motion priors from single-arm Franka to dual-arm UR5 systems for dynamic tasks like xylophone playing.
Enables zero-shot retargeting to various multi-fingered hands (BrainCo, Inspire) via embodiment-agnostic motion representations.
@article{pixels2world2026,
  title={Pixels to World: Grounding Video Generation for Executable Zero-shot Robot Motion},
  author={Your Name and Co-Author},
  journal={arXiv preprint arXiv:2400.xxxxx},
  year={2026}
}