Events

Guiding Long-Horizon Task and Motion Planning with Vision Language Models

An LLM seminar session on the paper "Guiding Long-Horizon Task and Motion Planning with Vision Language Models" by researchers at MIT and NVIDIA Research.
[Event poster listing the presenter, title, time, and place of the event.]

Title: Guiding Long-Horizon Task and Motion Planning with Vision Language Models

Presenter: Wenshuai Zhao

Abstract: Vision-Language Models (VLMs) can generate plausible high-level plans when prompted with a goal, the context, an image of the scene, and any planning constraints. However, there is no guarantee that the predicted actions are geometrically and kinematically feasible for a particular robot embodiment. As a result, many prerequisite steps such as opening drawers to access objects are often omitted in their plans. Robot task and motion planners can generate motion trajectories that respect the geometric feasibility of actions and insert physically necessary actions, but do not scale to everyday problems that require common-sense knowledge and involve large state spaces comprising many variables. The authors propose VLM-TAMP, a hierarchical planning algorithm that leverages a VLM to generate both semantically meaningful and horizon-reducing intermediate subgoals that guide a task and motion planner. When a subgoal or action cannot be refined, the VLM is queried again for replanning. They evaluate VLM-TAMP on kitchen tasks where a robot must accomplish cooking goals that require performing 30-50 actions in sequence and interacting with up to 21 objects. VLM-TAMP substantially outperforms baselines that rigidly and independently execute VLM-generated action sequences, both in terms of success rates (50 to 100% versus 0%) and average task completion percentage (72 to 100% versus 15 to 45%).
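
The sketch below is a minimal, illustrative reading of the hierarchical loop described in the abstract: a VLM proposes intermediate subgoals, a TAMP planner tries to refine each one into a feasible trajectory, and the VLM is re-queried with feedback when refinement fails. All names and interfaces here (vlm_propose_subgoals, tamp_refine, Subgoal, Plan) are hypothetical stand-ins, not the authors' actual implementation or API.

```python
"""Illustrative sketch of a VLM-guided TAMP loop, as described in the abstract.

All interfaces below are hypothetical placeholders, not the paper's code.
"""

from dataclasses import dataclass, field


@dataclass
class Subgoal:
    """A semantically meaningful, horizon-reducing intermediate state."""
    description: str


@dataclass
class Plan:
    """Motion-level trajectory segments accumulated across refined subgoals."""
    segments: list = field(default_factory=list)


def vlm_propose_subgoals(goal, scene_image, context, feedback=None):
    """Hypothetical VLM query returning a sequence of Subgoal objects.

    `feedback` carries information about a subgoal that could not be refined,
    so the VLM can replan around it.
    """
    raise NotImplementedError("stand-in for a VLM prompt + response parsing step")


def tamp_refine(subgoal, state):
    """Hypothetical TAMP call.

    Returns (trajectory, new_state) if the subgoal is geometrically and
    kinematically feasible from `state`, otherwise (None, state).
    """
    raise NotImplementedError("stand-in for a task and motion planner")


def vlm_tamp(goal, scene_image, context, initial_state, max_replans=5):
    """Outer loop: the VLM proposes subgoals, TAMP refines each in turn,
    and a failed refinement triggers a VLM replanning query with feedback."""
    plan = Plan()
    state = initial_state
    feedback = None

    for _ in range(max_replans):
        subgoals = vlm_propose_subgoals(goal, scene_image, context, feedback)
        for sg in subgoals:
            trajectory, state = tamp_refine(sg, state)
            if trajectory is None:
                # Refinement failed: ask the VLM to replan from this point.
                feedback = f"could not refine subgoal: {sg.description}"
                break
            plan.segments.append(trajectory)
        else:
            return plan  # every subgoal was refined successfully
    return plan  # best-effort partial plan after exhausting replan attempts
```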

Paper link:

Disclaimer: The presenter is not one of the paper's authors.

LLM seminar