Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval

Bibliographic Details
Title: Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval
Authors: Xu, Peng, Yin, Qiyue, Huang, Yongye, Song, Yi-Zhe, Ma, Zhanyu, Wang, Liang, Xiang, Tao, Kleijn, W. Bastiaan, Guo, Jun
Publication Year: 2017
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketches and photos. Compared with the pixel-perfect depictions in photos, sketches are highly abstract, iconic renderings of the real world. Therefore, matching sketches and photos directly using low-level visual cues is insufficient, since a common low-level subspace that traverses the two modalities semantically is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research.
Comment: Accepted by Neurocomputing
Document Type: Working Paper
Access URL: http://arxiv.org/abs/1705.09888
Accession Number: edsarx.1705.09888
Database: arXiv
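
To illustrate the kind of cross-modal subspace learning the abstract refers to, below is a minimal sketch using canonical correlation analysis (CCA), a classic method in this family. This is not the paper's actual pipeline or data; the feature matrices, dimensions, and the rank_photos helper are hypothetical placeholders.

```python
# Minimal, illustrative cross-modal subspace learning via CCA (assumption:
# paired sketch/photo feature vectors; random data stands in for real features).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs, sketch_dim, photo_dim, subspace_dim = 200, 128, 256, 32

# Placeholder descriptors standing in for real sketch/photo features.
sketch_feats = rng.normal(size=(n_pairs, sketch_dim))
photo_feats = rng.normal(size=(n_pairs, photo_dim))

# Learn a shared subspace that maximises correlation between the two views.
cca = CCA(n_components=subspace_dim)
cca.fit(sketch_feats, photo_feats)
sketch_proj, photo_proj = cca.transform(sketch_feats, photo_feats)

def rank_photos(query_proj, gallery_proj):
    # Rank gallery photos by cosine similarity to a query sketch in the subspace.
    q = query_proj / np.linalg.norm(query_proj)
    g = gallery_proj / np.linalg.norm(gallery_proj, axis=1, keepdims=True)
    return np.argsort(-g @ q)

print(rank_photos(sketch_proj[0], photo_proj)[:5])
```

With paired training data, the same fit/transform/rank pattern applies to the other subspace learning methods the abstract mentions; only the projection step changes.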