Active Learning at the ImageNet Scale

Bibliographic Details
Title: Active Learning at the ImageNet Scale
Authors: Emam, Zeyad Ali Sami, Chu, Hong-Min, Chiang, Ping-Yeh, Czaja, Wojciech, Leapman, Richard, Goldblum, Micah, Goldstein, Tom
Publication Year: 2021
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
Description: Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) can achieve better performance when trained on this labeled subset. AL is especially impactful in industrial-scale settings where data labeling costs are high and practitioners use every tool at their disposal to improve model performance. The recent success of self-supervised pretraining (SSP) highlights the importance of harnessing abundant unlabeled data to boost model performance. By combining AL with SSP, we can make use of unlabeled data while simultaneously labeling and training on particularly informative samples. In this work, we study a combination of AL and SSP on ImageNet. We find that performance on small toy datasets -- the typical benchmark setting in the literature -- is not representative of performance on ImageNet due to the class-imbalanced samples selected by an active learner. Among the existing baselines we test, popular AL algorithms across a variety of small- and large-scale settings fail to outperform random sampling. To remedy the class-imbalance problem, we propose Balanced Selection (BASE), a simple, scalable AL algorithm that consistently outperforms random sampling by selecting more balanced samples for annotation than existing methods. Our code is available at: https://github.com/zeyademam/active_learning
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2111.12880
Accession Number: edsarx.2111.12880
Database: arXiv
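The class-balancing idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' BASE implementation: the function name, the greedy "least-represented class first" heuristic, and the use of model pseudo-labels plus an uncertainty score are all assumptions made here for clarity.

```python
import numpy as np

def balanced_selection(pseudo_labels, uncertainty, labeled_counts, budget, num_classes):
    """Greedily pick `budget` unlabeled samples for annotation, preferring
    uncertain samples while keeping the per-class label counts balanced.

    pseudo_labels : predicted class for each sample in the unlabeled pool
    uncertainty   : per-sample uncertainty score (higher = more informative)
    labeled_counts: current number of labeled samples per class
    """
    counts = labeled_counts.copy()
    selected = []
    # Rank pool samples by uncertainty, most uncertain first.
    order = np.argsort(-uncertainty)
    # Bucket the ranked candidates by their predicted (pseudo) class.
    by_class = {c: [i for i in order if pseudo_labels[i] == c]
                for c in range(num_classes)}
    for _ in range(budget):
        # Target the currently least-represented class that still has candidates.
        available = [c for c in range(num_classes) if by_class[c]]
        if not available:
            break
        c = min(available, key=lambda k: counts[k])
        i = by_class[c].pop(0)  # most uncertain remaining sample of that class
        selected.append(i)
        counts[c] += 1
    return selected
```

The key property, as the abstract argues, is that the batch sent for annotation stays balanced across classes even when the uncertainty ranking alone would concentrate on a few over-represented classes.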