AMP

Bibliographic Details
Title: AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control
Authors: Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa
Source: ACM Transactions on Graphics. 40:1-20
Publication Information: Association for Computing Machinery (ACM), 2021.
Publication Year: 2021
Subject Terms: FOS: Computer and information sciences, Computer Science - Machine Learning, Computer science, cs.LG, 02 engineering and technology, Motion (physics), Machine Learning (cs.LG), Computer Science - Graphics, Match moving, cs.GR, 0202 electrical engineering, electronic engineering, information engineering, Reinforcement learning, Leverage (statistics), Computer vision, Computer animation, ComputingMethodologies_COMPUTERGRAPHICS, Obstacle course, 020207 software engineering, Computer Graphics and Computer-Aided Design, Graphics (cs.GR), Task (computing), Character (mathematics), 020201 artificial intelligence & image processing, Artificial intelligence
Description: Synthesizing graceful and life-like behaviors for physically simulated characters has been a fundamental challenge in computer animation. Data-driven methods that leverage motion tracking are a prominent class of techniques for producing high fidelity motions for a wide range of behaviors. However, the effectiveness of these tracking-based methods often hinges on carefully designed objective functions, and when applied to large and diverse motion datasets, these methods require significant additional machinery to select the appropriate motion for the character to track in a given scenario. In this work, we propose to obviate the need to manually design imitation objectives and mechanisms for motion selection by utilizing a fully automated approach based on adversarial imitation learning. High-level task objectives that the character should perform can be specified by relatively simple reward functions, while the low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips, without any explicit clip selection or sequencing. For example, a character traversing an obstacle course might utilize a task-reward that only considers forward progress, while the dataset contains clips of relevant behaviors such as running, jumping, and rolling. These motion clips are used to train an adversarial motion prior, which specifies style-rewards for training the character through reinforcement learning (RL). The adversarial RL procedure automatically selects which motion to perform, dynamically interpolating and generalizing from the dataset. Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips. Composition of disparate skills emerges automatically from the motion prior, without requiring a high-level motion planner or other task-specific annotations of the motion clips. We demonstrate the effectiveness of our framework on a diverse cast of complex simulated characters and a challenging suite of motor control tasks.
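The description above outlines a reward structure in which a learned adversarial motion prior supplies a style reward that is combined with a simple task reward during RL training. The Python sketch below illustrates that idea only; it is not the authors' implementation, and all names (MotionDiscriminator, style_reward, combined_reward, the weights w_task and w_style) and the specific discriminator-to-reward mapping are assumptions made for illustration.

# Hypothetical sketch of a discriminator-based style reward combined with a
# simple task reward, following the high-level description in the abstract.
# Names, network sizes, and the reward mapping are illustrative assumptions.
import torch
import torch.nn as nn

class MotionDiscriminator(nn.Module):
    """Scores state transitions (s, s') as reference-like vs. policy-generated."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        # Concatenate the transition and return a scalar score per sample.
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)

def style_reward(disc: MotionDiscriminator, s: torch.Tensor,
                 s_next: torch.Tensor) -> torch.Tensor:
    """Map the discriminator score to a reward (one common GAN-style mapping)."""
    with torch.no_grad():
        score = torch.sigmoid(disc(s, s_next))
    return -torch.log(torch.clamp(1.0 - score, min=1e-4))

def combined_reward(task_r: torch.Tensor, style_r: torch.Tensor,
                    w_task: float = 0.5, w_style: float = 0.5) -> torch.Tensor:
    """Weighted sum of the task objective and the learned style reward."""
    return w_task * task_r + w_style * style_r

# Example usage with random placeholder data.
obs_dim = 64
disc = MotionDiscriminator(obs_dim)
s, s_next = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
task_r = torch.randn(8)  # e.g., forward progress on an obstacle course
r = combined_reward(task_r, style_reward(disc, s, s_next))
print(r.shape)  # torch.Size([8])

In the full procedure described in the abstract, the discriminator itself is trained to distinguish transitions drawn from the motion-clip dataset from those produced by the policy, and the policy is optimized against the combined reward; the sketch above omits both training loops.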
File Description: application/pdf
ISSN: 1557-7368
0730-0301
Access URL: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e8917a4f117c9e74b9a4d1e900244b04
https://doi.org/10.1145/3450626.3459670
Rights: OPEN
Accession Number: edsair.doi.dedup.....e8917a4f117c9e74b9a4d1e900244b04
Database: OpenAIRE