Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks

Bibliographic Details
Title: Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks
Authors: Peng, Jie; Li, Weiyu; Ling, Qing
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
Description: Robustness to malicious attacks is of paramount importance for distributed learning. Existing works often consider the classical Byzantine attack model, which assumes that some workers can send arbitrarily malicious messages to the server and disturb the aggregation steps of the distributed learning process. To defend against such worst-case Byzantine attacks, various robust aggregators have been proven effective and markedly superior to the often-used mean aggregator. In this paper, we show that robust aggregators are too conservative for a class of weak but practical malicious attacks, known as label poisoning attacks, in which the sample labels of some workers are poisoned. Surprisingly, we are able to show that the mean aggregator is more robust than the state-of-the-art robust aggregators in theory, given that the distributed data are sufficiently heterogeneous. In fact, the learning error of the mean aggregator is proven to be order-optimal. Experimental results corroborate our theoretical findings, demonstrating the superiority of the mean aggregator under label poisoning attacks.
Comment: Accepted by IJCAI 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.13647
Accession Number: edsarx.2404.13647
Database: arXiv
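The contrast drawn in the abstract can be illustrated with a toy sketch. The setup below is purely hypothetical and not taken from the paper: honest workers report heterogeneous gradient estimates around a true value, label poisoning roughly negates a minority of estimates (a bounded perturbation, unlike arbitrary Byzantine messages), and the server compares plain mean aggregation against a robust coordinate-wise median.

```python
# Hypothetical sketch: mean vs. a robust (median) aggregator under a
# label-poisoning-style attack. All names and numbers are illustrative.
import random
import statistics

random.seed(0)

# Honest workers estimate a scalar gradient near the true value 1.0;
# heterogeneous local data spreads their estimates out.
honest = [1.0 + random.uniform(-0.8, 0.8) for _ in range(8)]

# Label poisoning flips training labels, which for many losses roughly
# negates the poisoned workers' gradients -- weaker than an arbitrary attack.
poisoned = [-g for g in honest[:2]]

grads = honest + poisoned

# Server-side aggregation rules.
mean_agg = sum(grads) / len(grads)          # simple averaging
median_agg = statistics.median(grads)       # a common robust aggregator

print(f"mean aggregator:   {mean_agg:.3f}")
print(f"median aggregator: {median_agg:.3f}")
```

In this toy run both aggregators stay on the correct side of zero; the paper's point is subtler: under high data heterogeneity, discarding "outlying" honest workers can cost a robust aggregator more accuracy than the bounded poisoned contributions cost the mean.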