Two Complementary Perspectives to Continual Learning: Ask Not Only What to Optimize, But Also How

Published: 03 Apr 2024, Last Modified: 03 Apr 2024
Venue: 1st CLAI Unconf
Keywords: continual learning, stability gap, gradient projection, constrained optimization
TL;DR: We propose that continual learning should focus not only on improving the optimization objective, but also on the way this objective is optimized.
Abstract: Recent years have seen considerable progress in the continual training of deep neural networks, predominantly thanks to approaches that add replay or regularization terms to the loss function to approximate the joint loss over all tasks so far. However, we show that even with a perfect approximation to the joint loss, these approaches still suffer from temporary but substantial forgetting when starting to train on a new task. Motivated by this 'stability gap', we propose that continual learning strategies should focus not only on the optimization objective, but also on the way this objective is optimized. While some continual learning work does alter the optimization trajectory (e.g., using gradient projection techniques), that line of research is positioned as an alternative to improving the optimization objective, whereas we argue it should be complementary. To evaluate the merits of our proposition, we plan to combine replay-approximated joint objectives with gradient projection-based optimization routines and test whether the addition of the latter provides benefits in terms of (1) alleviating the stability gap, (2) increasing learning efficiency, and (3) improving the final learning outcome.
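To illustrate the kind of combination the abstract describes, below is a minimal, hypothetical sketch (not the authors' implementation) of a single training step that pairs a replay-approximated joint objective with an A-GEM-style gradient projection in PyTorch. Names such as `model`, `optimizer`, `new_batch`, and `replay_batch` are illustrative placeholders, and the specific projection rule is one possible choice of gradient projection technique.

```python
# Hypothetical sketch: replay-approximated joint loss + A-GEM-style gradient projection.
import torch
import torch.nn.functional as F


def flat_grad(model):
    """Concatenate all parameter gradients into one flat vector."""
    return torch.cat([p.grad.view(-1) for p in model.parameters() if p.grad is not None])


def assign_grad(model, flat):
    """Write a flat gradient vector back into the model's .grad fields."""
    offset = 0
    for p in model.parameters():
        if p.grad is not None:
            n = p.grad.numel()
            p.grad.copy_(flat[offset:offset + n].view_as(p.grad))
            offset += n


def combined_step(model, optimizer, new_batch, replay_batch, replay_weight=1.0):
    x_new, y_new = new_batch
    x_old, y_old = replay_batch

    # (1) Replay-approximated joint objective: loss on new data plus loss on replayed data.
    optimizer.zero_grad()
    joint_loss = F.cross_entropy(model(x_new), y_new) \
        + replay_weight * F.cross_entropy(model(x_old), y_old)
    joint_loss.backward()
    g = flat_grad(model)

    # (2) Reference gradient on replayed (old-task) data, used as the projection constraint.
    optimizer.zero_grad()
    F.cross_entropy(model(x_old), y_old).backward()
    g_ref = flat_grad(model)

    # (3) A-GEM-style projection: if the joint gradient conflicts with the old-task
    # gradient (negative dot product), project it onto the constraint boundary.
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / (g_ref.norm() ** 2 + 1e-12)) * g_ref

    assign_grad(model, g)
    optimizer.step()
    return joint_loss.item()
```

In this sketch, step (1) corresponds to improving *what* is optimized (the replay-approximated joint loss), while steps (2)-(3) correspond to changing *how* it is optimized (constraining the update direction), reflecting the complementary roles argued for in the abstract.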
Submission Number: 2