Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation

Bibliographic Details
Title: Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation
Authors: Xu, Heng, Zhu, Tianqing, Zhang, Lefeng, Zhou, Wanlei, Yu, Philip S.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Cryptography and Security, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Machine Learning
Description: Federated learning is a promising privacy-preserving paradigm for distributed machine learning. In this context, there is sometimes a need for a specialized process called machine unlearning, which is required when the effect of specific training samples must be removed from a learned model for privacy, security, usability, and/or legislative reasons. However, problems arise when current centralized unlearning methods are applied in a federated setting where the server aims to remove all information about a class from the global model. Centralized unlearning usually focuses on simple models or is premised on the ability to access all training data at a central node, yet under the federated learning paradigm training data cannot be accessed on the server, which conflicts with the requirements of the centralized unlearning process. Additionally, accessing clients' data incurs high computation and communication costs, especially in scenarios involving numerous clients or complex global models. To address these concerns, we propose a more effective and efficient federated unlearning scheme based on the concept of model explanation. Model explanation seeks to understand deep networks at the level of individual channel importance, and this understanding can be used to determine which model channels are critical for the classes that need to be unlearned. We select the channels of an already-trained model that are most influential for the data to be unlearned and fine-tune only those channels to remove the contribution of that data. In this way, we simultaneously avoid substantial computation and communication costs and ensure that the unlearned model maintains good performance. Experiments with different training models on various datasets demonstrate the effectiveness of the proposed approach. (See the illustrative sketch following this record.)
Comment: Accepted by IEEE Transactions on Big Data
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.12516
Accession Number: edsarx.2406.12516
Database: arXiv
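
As a rough illustration of the abstract's selective-update idea, the sketch below scores the output channels of one convolutional layer by their mean activation on the data to be forgotten, then fine-tunes only the top-scoring channels while freezing all other parameters. This is a plausible stand-in, not the paper's method: the activation-based score substitutes for the authors' model-explanation-based importance measure, and all names here (score_channels, selective_finetune, forget_loader, retain_loader, top_k) are assumptions made for illustration.

    import torch
    import torch.nn as nn

    def score_channels(model, layer, forget_loader):
        # Mean absolute activation of each output channel of `layer` on the
        # forget-class data; a simple stand-in for an explanation-based score.
        scores = torch.zeros(layer.out_channels)
        batches = 0
        def hook(module, inputs, output):
            nonlocal batches
            scores.add_(output.abs().mean(dim=(0, 2, 3)).detach())
            batches += 1
        handle = layer.register_forward_hook(hook)
        model.eval()
        with torch.no_grad():
            for x, _ in forget_loader:
                model(x)
        handle.remove()
        return scores / max(batches, 1)

    def selective_finetune(model, layer, scores, retain_loader,
                           top_k=8, epochs=1, lr=1e-3):
        # Freeze every parameter, then allow gradient updates only through
        # the top-k most influential channels of `layer`, fine-tuning on
        # the data that should be retained.
        for p in model.parameters():
            p.requires_grad_(False)
        layer.weight.requires_grad_(True)
        mask = torch.zeros_like(layer.weight)
        mask[scores.topk(top_k).indices] = 1.0
        # Zero out gradients for all non-selected channels.
        layer.weight.register_hook(lambda g: g * mask)
        opt = torch.optim.SGD([layer.weight], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in retain_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

In a federated deployment, the scoring and fine-tuning would presumably run on clients over their local data, with the server aggregating only the updated channel parameters; the paper itself should be consulted for the actual protocol.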