DOCTOR: A Simple Method for Detecting Misclassification Errors

Bibliographic Details
Title: DOCTOR: A Simple Method for Detecting Misclassification Errors
Authors: Granese, Federica, Romanelli, Marco, Gorla, Daniele, Palamidessi, Catuscia, Piantanida, Pablo
Publication Year: 2021
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
Description: Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, which has led to their widespread use in real-world applications, including situations where DNNs are deployed as "black boxes". A promising approach to securing their use is to accept decisions that are likely to be correct while discarding the others. In this work, we propose DOCTOR, a simple method that identifies whether the prediction of a DNN classifier should (or should not) be trusted, so that the prediction can consequently be accepted or rejected. Two scenarios are investigated: Totally Black Box (TBB), where only the soft-predictions are available, and Partially Black Box (PBB), where gradient propagation to perform input pre-processing is allowed. Empirically, we show that DOCTOR outperforms all state-of-the-art methods on various well-known image and sentiment-analysis datasets. In particular, we observe a reduction of up to $4\%$ in the false rejection rate (FRR) in the PBB scenario. DOCTOR can be applied to any pre-trained model; it requires no prior information about the underlying dataset and is as simple as the simplest available methods in the literature.
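The TBB scenario described above relies only on the classifier's soft-predictions. The following is a minimal illustrative sketch of such a soft-prediction-based accept/reject rule; the particular statistic (the complement of the sum of squared softmax probabilities) and the threshold gamma are assumptions chosen here for illustration, not a verbatim reproduction of the paper's implementation.

    import numpy as np

    def reject_prediction(softmax_probs, gamma=1.0):
        """Soft-prediction-based rejection rule (illustrative sketch only).

        softmax_probs: 1-D array of class probabilities output by the classifier.
        gamma: rejection threshold (assumed hyper-parameter, tuned on held-out data).

        Returns True if the prediction should be rejected (i.e., not trusted).
        """
        p = np.asarray(softmax_probs, dtype=float)
        g_hat = np.sum(p ** 2)           # concentration of the soft-prediction
        score = (1.0 - g_hat) / g_hat    # large when the prediction is uncertain
        return score > gamma

    # Example usage: a confident prediction is accepted, an uncertain one rejected.
    print(reject_prediction([0.95, 0.03, 0.02], gamma=1.0))  # False -> accept
    print(reject_prediction([0.40, 0.35, 0.25], gamma=1.0))  # True  -> reject

In the PBB scenario, gradient propagation through the model would additionally be used to pre-process the input before computing such a score; that step is not shown here.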
Comment: This paper has been accepted to appear as a spotlight in the Proceedings of the 2021 Conference on Neural Information Processing Systems (NeurIPS 2021), December 6-14, 2021, Virtual Event
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2106.02395
Accession Number: edsarx.2106.02395
Database: arXiv