Data Poisoning and Leakage Analysis in Federated Learning

Bibliographic Details
Title: Data Poisoning and Leakage Analysis in Federated Learning
Authors: Wei, Wenqi; Huang, Tiansheng; Yahn, Zachary; Singhal, Anoop; Loper, Margaret; Liu, Ling
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
Description: Data poisoning and leakage risks impede the massive deployment of federated learning in the real world. This chapter reveals the truths and pitfalls of understanding two dominating threats: training data privacy intrusion and training data poisoning. We first investigate the training data privacy threat and present our observations on when and how training data may be leaked during the course of federated training. One promising defense strategy is to perturb the raw gradient update by adding controlled randomized noise prior to sharing during each round of federated learning. We discuss the importance of determining the proper amount of randomized noise and the proper location to add such noise for effective mitigation of gradient leakage threats against training data privacy. We then review and compare different training data poisoning threats and analyze why and when such data-poisoning-induced model Trojan attacks may cause detrimental damage to the performance of the global model. We categorize and compare representative poisoning attacks and the effectiveness of their mitigation techniques, delivering an in-depth understanding of the negative impact of data poisoning. Finally, we demonstrate the potential of dynamic model perturbation in simultaneously ensuring privacy protection, poisoning resilience, and model performance. The chapter concludes with a discussion of additional risk factors in federated learning, including the negative impact of skewness, data and algorithmic biases, as well as misinformation in training data. Powered by empirical evidence, our analytical study offers transformative insights into effective privacy protection and security assurance strategies in attack-resilient federated learning.
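The gradient-perturbation defense summarized above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the clipping bound, and the noise scale below are illustrative assumptions showing the general idea of clipping a client's raw gradient update and adding controlled Gaussian noise before it is shared with the federated-learning server.

```python
# Minimal sketch (assumed parameters, not the chapter's method): each client
# clips its raw gradient update and adds controlled Gaussian noise before
# sharing, so the shared update reveals less about the raw training data.
import numpy as np


def perturb_update(gradient: np.ndarray,
                   clip_norm: float = 1.0,
                   noise_sigma: float = 0.1,
                   rng: np.random.Generator | None = None) -> np.ndarray:
    """Clip the update to a bounded L2 norm, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, noise_sigma * clip_norm, size=gradient.shape)
    return clipped + noise


def aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Server-side FedAvg-style mean of the perturbed client updates."""
    return np.mean(np.stack(updates), axis=0)
```

In this sketch, the noise scale (`noise_sigma`) and where the noise is injected (here, per-client before sharing) stand in for the "proper amount" and "proper location" questions the chapter analyzes.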
Comment: Chapter of Handbook of Trustworthy Federated Learning
Document Type: Working Paper
DOI: 10.1007/978-3-031-58923-2_3
Access URL: http://arxiv.org/abs/2409.13004
Accession Number: edsarx.2409.13004
Database: arXiv