Exponential weight averaging as damped harmonic motion

Published: 19 Jun 2023, Last Modified: 25 Jul 2023, Venue: Frontiers4LCD
Keywords: weight averaging, optimization, exponential moving average, physically based modeling
TL;DR: The exponential moving average of neural network weights over training can be generalized to classical damped harmonic motion, opening up new intuition and tunable parameters for first-order stochastic optimizers.
Abstract: The exponential moving average (EMA) is a commonly used statistic for providing stable estimates of stochastic quantities in deep learning optimization. Recently, EMA has seen considerable use in generative models, where it is computed with respect to the model weights and significantly improves the stability of the inference model during and after training. While the practice of weight averaging at the end of training is well studied and known to improve estimates of local optima, the benefits of EMA over the course of training are less understood. In this paper, we derive an explicit connection between EMA and a damped harmonic system between two particles, where one particle (the EMA weights) is drawn to the other (the model weights) via an idealized zero-length spring. We then leverage this physical analogy to analyze the effectiveness of EMA, and propose an improved training algorithm, which we call BELAY. Finally, we demonstrate theoretically and empirically several advantages enjoyed by BELAY over standard EMA.
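For context, the snippet below is a minimal sketch of the standard EMA weight update that the abstract builds on; the decay rate `beta`, the `ema_update` helper, and the reading of `1 - beta` as a spring-like restoring coefficient are illustrative assumptions, and the paper's BELAY algorithm itself is not reproduced here.

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, beta=0.999):
    """One EMA step: pull the EMA weights toward the model weights.

    In the paper's physical analogy, the EMA weights act like a particle
    attached to the model weights by an idealized zero-length spring,
    with (1 - beta) playing the role of the restoring coefficient.
    """
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.lerp_(p, 1.0 - beta)  # ema_p <- beta*ema_p + (1-beta)*p

# Usage sketch: maintain an EMA copy of the model alongside training.
model = torch.nn.Linear(10, 1)
ema_model = copy.deepcopy(model)
for step in range(100):
    # ... optimizer step on `model` would go here ...
    ema_update(ema_model, model, beta=0.999)
```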
Submission Number: 113