FedRSM: Representational-Similarity-Based Secured Model Uploading for Federated Learning
Published in IEEE International Conference on Trust, Security and Privacy in Computing and Communications, 2023
Recommended citation: G. Chen, S. Liu, X. Yang, T. Wang, L. You, and F. Xia, "FedRSM: Representational-Similarity-Based Secured Model Uploading for Federated Learning," IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pp. 189-196, Nov. 2023, doi: 10.1109/TrustCom60117.2023.00046. https://ieeexplore.ieee.org/abstract/document/10538768
Abstract: As a novel learning paradigm, federated learning (FL) aims to protect privacy by avoiding the transfer of raw data between distributed clients and central servers. However, recent research demonstrates the vulnerability of FL to gradient-based privacy attacks, in which gradients intercepted by a malicious adversary can leak training data. Current defense methods suffer from performance drops, weak privacy guarantees, and high communication costs. Motivated by this, we propose FedRSM, a representational-similarity-based secured model uploading scheme for federated learning. FedRSM splits Deep Neural Networks (DNNs) into layers, calculates a Representational Dissimilarity Vector (RDV) for each layer, measures the similarity between the local RDV and the global RDV of each model layer, and constructs a secured local model to upload based on Representational Consistency Alteration (RCA). Evaluation results show that FedRSM improves testing accuracy by up to 2%, significantly reduces communication costs, and prevents data leakage across models of different complexities.
Keywords: Federated Learning, Privacy Protection, Secured Model Uploading
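The abstract's layer-wise RDV comparison can be illustrated with a small sketch. The paper's exact RDV construction is not given here, so the sketch assumes the common representational-similarity-analysis convention: for a layer's activations on a batch of probe samples, the RDV is the flattened upper triangle of the pairwise-distance matrix, and local/global RDVs are compared with cosine similarity. Function names (`rdv`, `rdv_similarity`) and the probe data are illustrative, not from the paper.

```python
import numpy as np

def rdv(activations: np.ndarray) -> np.ndarray:
    """Representational Dissimilarity Vector for one layer: pairwise
    Euclidean distances between sample representations, flattened to the
    strict upper triangle. (Assumed construction; the paper's exact
    definition may differ.)"""
    n = activations.shape[0]
    diffs = activations[:, None, :] - activations[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)   # n x n dissimilarity matrix
    iu = np.triu_indices(n, k=1)
    return dists[iu]                         # vector of length n(n-1)/2

def rdv_similarity(local_rdv: np.ndarray, global_rdv: np.ndarray) -> float:
    """Cosine similarity between a layer's local and global RDVs."""
    denom = np.linalg.norm(local_rdv) * np.linalg.norm(global_rdv) + 1e-12
    return float(local_rdv @ global_rdv / denom)

# Toy usage: activations of 5 probe samples (8 features) at one layer.
rng = np.random.default_rng(0)
local_act = rng.normal(size=(5, 8))
global_act = local_act + 0.01 * rng.normal(size=(5, 8))  # near-identical model
sim = rdv_similarity(rdv(local_act), rdv(global_act))    # close to 1.0
```

A per-layer similarity score like `sim` could then drive a decision about which layers to perturb or upload, in the spirit of the RCA step the abstract describes.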