Fed2A: Federated Learning Mechanism in Asynchronous and Adaptive Modes
Published in Electronics, 2022
Recommended citation: S Liu, Q Chen, and L You, "Fed2A: Federated Learning Mechanism in Asynchronous and Adaptive Modes", Electronics, 11(9):1393, Apr 2022, doi: 10.3390/electronics11091393. https://www.mdpi.com/2079-9292/11/9/1393
Abstract: Driven by emerging technologies such as edge computing and the Internet of Things (IoT), recent years have witnessed rapid growth in distributed data processing. Federated Learning (FL), a decentralized learning paradigm that unifies massive numbers of devices to train a global model without compromising privacy, is drawing attention from both academia and industry. However, the performance degradation of FL in heterogeneous and asynchronous environments hinders its wide application in areas such as autonomous driving and assistive healthcare. Motivated by this, we propose a novel mechanism called Fed2A: Federated learning mechanism in Asynchronous and Adaptive Modes. Fed2A supports FL by (1) allowing clients and the collaborator to work separately and asynchronously, (2) uploading shallow and deep layers of deep neural networks (DNNs) adaptively, and (3) aggregating local parameters by jointly weighting the freshness of information and the representational consistency of model layers. The effectiveness and efficiency of Fed2A are evaluated on three standard datasets, i.e., FMNIST, CIFAR-10, and GermanTS. Compared with the best performance among three baselines, i.e., FedAvg, FedProx, and FedAsync, Fed2A reduces communication cost by over 77%, and improves model accuracy and learning speed by over 19% and 76%, respectively.
Keywords: Federated Learning, Asynchronous Federated Learning, Adaptive Uploading, Adaptive Aggregation
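To make the aggregation idea in the abstract more concrete, the sketch below shows a staleness- and consistency-weighted, layer-wise aggregation step in PyTorch-style Python. It is only an illustrative approximation under assumed weighting rules (exponential staleness decay and inverse layer drift); the exact freshness and consistency formulas, as well as the adaptive layer-uploading policy, are defined in the paper itself. The function name `aggregate_fed2a` and its parameters are hypothetical.

```python
import copy
import torch


def aggregate_fed2a(global_state, client_updates, current_round,
                    staleness_decay=0.5, consistency_eps=1e-8):
    """Illustrative asynchronous aggregation sketch (not the paper's exact rule).

    global_state   : dict of layer name -> float tensor (current global model).
    client_updates : list of (state_dict, round_trained) pairs; a client may
                     upload only a subset of layers (adaptive uploading).
    current_round  : the collaborator's current round, used to measure staleness.
    """
    new_state = copy.deepcopy(global_state)
    for layer_name, global_param in global_state.items():
        weighted_sum = torch.zeros_like(global_param)
        total_weight = 0.0
        for state, round_trained in client_updates:
            if layer_name not in state:
                continue  # this client skipped uploading this layer
            # Freshness: older (staler) updates contribute less.
            staleness = current_round - round_trained
            freshness_w = staleness_decay ** staleness
            # Consistency: layers that drift further from the global layer
            # contribute less (assumed inverse-distance weighting).
            drift = (state[layer_name] - global_param).float().norm().item()
            consistency_w = 1.0 / (drift + consistency_eps)
            w = freshness_w * consistency_w
            weighted_sum += w * state[layer_name]
            total_weight += w
        if total_weight > 0:
            new_state[layer_name] = weighted_sum / total_weight
    return new_state
```

Because each client's upload is tagged with the round it was trained on, the collaborator can apply this step whenever updates arrive, without waiting for all clients, which is the asynchronous behavior described in point (1) of the abstract.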