GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration

Bibliographic Details
Title: GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration
Authors: Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Robotics, Computer Science - Computation and Language, Computer Science - Computer Vision and Pattern Recognition
Description: We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), to facilitate one-shot visual teaching for robotic manipulation. This system analyzes videos of humans performing tasks and outputs executable robot programs that incorporate insights into affordances. The process begins with GPT-4V analyzing the videos to obtain textual explanations of environmental and action details. A GPT-4-based task planner then encodes these details into a symbolic task plan. Subsequently, vision systems spatially and temporally ground the task plan in the videos. Objects are identified using an open-vocabulary object detector, and hand-object interactions are analyzed to pinpoint moments of grasping and releasing. This spatiotemporal grounding allows for the gathering of affordance information (e.g., grasp types, waypoints, and body postures) critical for robot execution. Experiments across various scenarios demonstrate the method's efficacy in enabling real robots to perform tasks from human demonstrations in a one-shot manner. Meanwhile, quantitative tests have revealed instances of hallucination in GPT-4V, highlighting the importance of incorporating human supervision within the pipeline. The prompts for GPT-4V/GPT-4 are available at this project page.
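The description outlines a three-stage pipeline: a VLM turns the demonstration video into text, an LLM planner turns that text into a symbolic task plan, and vision systems ground each step spatially and temporally. Below is a minimal Python sketch of that structure only; every name is hypothetical, and the model-dependent stages are stubbed with canned values where the paper relies on GPT-4V/GPT-4 and the authors' published prompts.

```python
# Hypothetical sketch of the pipeline's data flow; not the authors' code.
from dataclasses import dataclass

@dataclass
class GroundedStep:
    action: str            # symbolic action, e.g. "grasp(juice_box)"
    bbox: tuple | None     # object box from an open-vocabulary detector
    frame: int | None      # video frame of the grasp/release moment

def describe_video(frames) -> str:
    """Stage 1 (stubbed): GPT-4V converts the demonstration video into a
    textual account of the environment and the human's actions."""
    return "A person grasps the juice box on the table and places it on the tray."

def plan_tasks(description: str) -> list[str]:
    """Stage 2 (stubbed): a GPT-4-based planner encodes the description
    into a symbolic task plan, one robot action per step."""
    return ["grasp(juice_box)", "move_hand(tray)", "release(juice_box)"]

def ground_step(step: str, frames) -> GroundedStep:
    """Stage 3 (stubbed): vision systems ground each step spatially
    (open-vocabulary object detection) and temporally (hand-object
    interaction analysis pinpointing grasp and release moments), from
    which affordance data such as grasp type and waypoints are gathered."""
    return GroundedStep(action=step, bbox=(0, 0, 10, 10), frame=0)

def teach_from_video(frames) -> list[GroundedStep]:
    """One-shot visual teaching: demonstration video in, grounded plan out.
    The paper reports hallucinations in GPT-4V, so the intermediate plan
    should be reviewed by a human before robot execution."""
    plan = plan_tasks(describe_video(frames))
    return [ground_step(step, frames) for step in plan]

if __name__ == "__main__":
    for step in teach_from_video(frames=[]):
        print(step)
```

The stubs mark exactly where human supervision fits in the authors' account: between planning and grounding, before anything reaches the robot.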
Comment: 9 pages, 12 figures, 2 tables. Last updated on May 6th, 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2311.12015
Accession Number: edsarx.2311.12015
Database: arXiv