Examining Changes in Internal Representations of Continual Learning Models Through Tensor Decomposition

Published: 03 Apr 2024, Last Modified: 03 Apr 2024, 1st CLAI Unconf
Keywords: Catastrophic Forgetting, Continual Learning, Tensor Decomposition
TL;DR: We propose a tensor-decomposition-based study of learning and forgetting dynamics across multiple importance-based continual learning methods and model architectures.
Abstract: Continual learning (CL) has spurred the development of several methods aimed at consolidating previous knowledge across sequential learning. Yet, the evaluation of these methods has primarily focused on final outputs, such as changes in the accuracy of predicted classes, overlooking representational forgetting within the model. In this paper, we propose a novel representation-based evaluation framework for CL models. This approach gathers internal representations throughout the continual learning process and formulates three-dimensional tensors, formed by stacking representations, such as layer activations, generated from several inputs and model snapshots taken over the course of learning. By conducting tensor component analysis (TCA), we aim to uncover meaningful patterns in how the internal representations evolve, highlighting the merits or shortcomings of the examined CL strategies. We plan to conduct our analyses across different model architectures and importance-based continual learning strategies, with a curated task selection, allowing us to assess whether any observed patterns are consistently replicable.
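The sketch below illustrates, under stated assumptions, the kind of analysis the abstract describes: stacking layer activations from a fixed probe set across model snapshots into a three-dimensional tensor and decomposing it with CP/TCA. The dimensions, the random placeholder activations, and the use of the `tensorly` library's `parafac` are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical sizes: N probe inputs, D activation units, T model snapshots
# captured across the continual-learning sequence.
n_inputs, n_units, n_snapshots = 128, 256, 10

# Placeholder activations for a fixed probe set at each snapshot; in practice
# these would be collected via forward hooks on a chosen layer of the model.
activations = [np.random.randn(n_inputs, n_units) for _ in range(n_snapshots)]

# Stack into a 3-D tensor of shape (inputs, units, snapshots).
tensor = tl.tensor(np.stack(activations, axis=-1))

# CP decomposition (tensor component analysis) with a chosen rank: each
# component yields an input factor, a unit factor, and a temporal factor
# whose trajectory across snapshots can be inspected for drift or forgetting.
weights, factors = parafac(tensor, rank=5, normalize_factors=True)
input_factors, unit_factors, time_factors = factors
```

The temporal factors (`time_factors`) would then serve as the object of comparison across CL strategies and architectures, e.g., checking whether components associated with earlier tasks decay after later tasks are learned.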
Submission Number: 8