Binding Touch to Everything: Learning Unified Multimodal Tactile Representations

Bibliographic Details
Title: Binding Touch to Everything: Learning Unified Multimodal Tactile Representations
Authors: Yang, Fengyu; Feng, Chao; Chen, Ziyang; Park, Hyoungseob; Wang, Daniel; Dou, Yiming; Zeng, Ziyao; Chen, Xien; Gangopadhyay, Rit; Owens, Andrew; Wong, Alex
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics
Description: The ability to associate touch with other modalities has huge implications for humans and computational systems. However, multimodal learning with touch remains challenging due to the expensive data collection process and non-standardized sensor outputs. We introduce UniTouch, a unified tactile model for vision-based touch sensors connected to multiple modalities, including vision, language, and sound. We achieve this by aligning our UniTouch embeddings to pretrained image embeddings already associated with a variety of other modalities. We further propose learnable sensor-specific tokens, allowing the model to learn from a set of heterogeneous tactile sensors, all at the same time. UniTouch is capable of conducting various touch sensing tasks in the zero-shot setting, from robot grasping prediction to touch image question answering. To the best of our knowledge, UniTouch is the first to demonstrate such capabilities. Project page: https://cfeng16.github.io/UniTouch/ (See the illustrative sketch following this record.)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2401.18084
Accession Number: edsarx.2401.18084
Database: arXiv
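
The abstract describes two mechanisms: aligning tactile embeddings to a frozen, pretrained image embedding space (so touch inherits that space's existing links to language, sound, and other modalities), and learnable sensor-specific tokens that let one model handle heterogeneous tactile sensors. Below is a minimal PyTorch sketch of that general recipe, not the authors' implementation; the toy backbone, all dimensions, and the names (`TactileEncoder`, `alignment_loss`) are illustrative assumptions.

```python
# Illustrative sketch only: contrastive alignment of a tactile encoder to a
# frozen image embedding space, with learnable per-sensor tokens. Everything
# here (backbone, dims, names) is an assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TactileEncoder(nn.Module):
    def __init__(self, embed_dim=512, num_sensors=4):
        super().__init__()
        # Toy convolutional backbone standing in for a real tactile encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # One learnable token per sensor type, so heterogeneous vision-based
        # touch sensors can share a single model.
        self.sensor_tokens = nn.Embedding(num_sensors, embed_dim)

    def forward(self, touch_img, sensor_id):
        feat = self.backbone(touch_img) + self.sensor_tokens(sensor_id)
        return F.normalize(feat, dim=-1)

def alignment_loss(touch_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE: pull each touch embedding toward the frozen image
    embedding of the same scene, push it away from the other pairs."""
    logits = touch_emb @ image_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random stand-ins for a batch of paired touch/image data.
encoder = TactileEncoder()
touch = torch.randn(8, 3, 224, 224)            # tactile sensor images
sensor_ids = torch.randint(0, 4, (8,))         # which sensor captured each one
image_emb = F.normalize(torch.randn(8, 512), dim=-1)  # frozen image embeddings
loss = alignment_loss(encoder(touch, sensor_ids), image_emb)
loss.backward()
```

Because the image encoder stays frozen, gradients only update the tactile side; touch embeddings land in the pretrained space and can be compared zero-shot against text or audio embeddings already aligned to it.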