Intrinsic Gradient Compression for Scalable and Efficient Federated Learning

Published: 27 Mar 2022, Last Modified: 05 May 2023, FL4NLP@ACL2022
Keywords: federated, nlp, vision, learning, compression, intrinsic dimension
TL;DR: We propose a set of algorithms for efficient and scalable federated learning that leverage the observation that deep neural networks have low intrinsic dimension.
Abstract: Federated learning is a rapidly growing area of research, holding the promise of privacy-preserving distributed training on edge devices. The largest barrier to wider adoption of federated learning is the communication cost of model updates, which is accentuated by the fact that many edge devices are bandwidth-constrained. At the same time, within the machine learning theory community, a separate line of research has emerged around optimizing networks within a subspace of the full space of all parameters. The dimension of the smallest subspace for which these methods still yield strong results is called the intrinsic dimension. In this work, we prove a general correspondence between the notions of intrinsic dimension and gradient compressibility, and we show that a family of low-bandwidth federated learning algorithms, which we call intrinsic gradient compression algorithms, naturally emerges from this correspondence. Finally, we conduct large-scale NLP experiments using transformer models with over 100M parameters (GPT-2 and BERT), and show that our method significantly outperforms the state-of-the-art in gradient compression.
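To make the subspace-training idea behind the abstract concrete, below is a minimal sketch, not the authors' released implementation. It reparameterizes all D model parameters as theta = theta0 + A d, where A is a fixed random D-by-k projection and only the k-dimensional vector d is trained. All specifics here (the dimensions D and k, the 1/sqrt(D) scaling of A, and the toy least-squares objective standing in for a real model loss) are illustrative assumptions.

```python
# Sketch of low-dimensional subspace training (the "intrinsic dimension" idea):
# theta = theta0 + A @ d, with A a fixed random projection and d the only
# trainable tensor. In a federated setting, a client would communicate just
# the k-dimensional update to d instead of all D parameters.
import torch

torch.manual_seed(0)

D, k = 10_000, 50                       # full and intrinsic dimensions (illustrative)
theta0 = torch.randn(D) * 0.01          # frozen initial parameters
A = torch.randn(D, k) / (D ** 0.5)      # fixed random projection (assumed scaling)
d = torch.zeros(k, requires_grad=True)  # the only trainable tensor

# Toy least-squares objective standing in for a real model's loss.
X = torch.randn(256, D)
w_true = torch.randn(D) * 0.01
y = X @ w_true

opt = torch.optim.SGD([d], lr=0.5)
for step in range(200):
    theta = theta0 + A @ d              # reconstruct full parameters on the fly
    loss = ((X @ theta - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()                     # gradients flow only into d (k numbers)
    opt.step()

print(f"final loss: {loss.item():.4f}; floats per update: {k} vs. {D}")
```

In this view, each round only the k entries of d (or their gradient) need to cross the network, a D/k reduction; the fixed projection A could in principle be regenerated on each device from a shared random seed rather than transmitted, though that is an assumption about the setup rather than a detail stated in the abstract.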