In: AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 30 November 2023, Copenhagen, Denmark. 2023, pp. 55-65.
We investigate the concept of utility-preserving federated learning (UPFL) in the context of deep neural networks. We theoretically prove and experimentally validate that UPFL achieves the same accuracy as centralized training, independent of the data distribution across the clients. We demonstrate that UPFL can take full advantage of the momentum and weight decay techniques, just as centralized training does, but it incurs substantial communication overhead. Ordinary federated learning, on the other hand, offers much higher communication efficiency, but it can only partially benefit from these techniques to improve utility. Given this trade-off, we propose a method called weighted gradient accumulation that gains more of the benefit of momentum and weight decay, akin to UPFL, while providing practical communication efficiency similar to ordinary federated learning.
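The abstract does not spell out the algorithm, so the following is only a minimal sketch of one way weighted gradient accumulation could be combined with server-side momentum and weight decay; it is not the authors' method. It assumes clients send raw gradients together with per-client weights (e.g., local sample counts), and all function names and hyperparameters are illustrative.

```python
import numpy as np

def server_update(w, client_grads, client_weights, state,
                  lr=0.1, momentum=0.9, weight_decay=1e-4):
    """One hypothetical server step: accumulate client gradients with
    per-client weights, then apply momentum and weight decay on the
    aggregate, mimicking a centralized momentum-SGD update."""
    # Normalize the per-client weights (e.g., local sample counts).
    weights = np.asarray(client_weights, dtype=float)
    weights = weights / weights.sum()
    # Weighted accumulation of the client gradients.
    g = sum(wi * gi for wi, gi in zip(weights, client_grads))
    # Weight decay applied once on the aggregated gradient.
    g = g + weight_decay * w
    # Momentum buffer kept at the server, as in centralized training.
    state["m"] = momentum * state.get("m", np.zeros_like(w)) + g
    return w - lr * state["m"], state

# Toy usage: two clients, a single scalar parameter.
w, state = np.array([1.0]), {}
grads = [np.array([0.4]), np.array([0.2])]
w, state = server_update(w, grads, client_weights=[100, 300], state=state)
```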