Adaptive Hyperparameter Optimization for Continual Learning Scenarios

Published: 03 Apr 2024, Last Modified: 03 Apr 2024 · 1st CLAI Unconf
Keywords: Hyperparameter Optimization, Continual Learning
TL;DR: This paper explores the pivotal role of HPO in continual learning. It introduces a novel, adaptive hyperparameter update strategy tailored to streaming data, improving optimization efficiency and effectiveness.
Abstract: Hyperparameter selection in continual learning scenarios is a challenging and underexplored problem, especially in practical non-stationary environments. Traditional approaches, such as grid searches with held-out validation data from all tasks, are unrealistic for building accurate lifelong learning systems. This paper explores the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning hyperparameters according to the complexity of the task at hand. We therefore propose leveraging the sequential nature of task learning to improve the efficiency of hyperparameter optimization. Using functional analysis of variance (fANOVA)-based techniques, we identify the most important hyperparameters and aim to show empirically that their importance remains consistent across tasks in the sequence. This approach, agnostic to the continual scenario and strategy, allows us to speed up hyperparameter optimization continually across tasks. We believe our findings can contribute to advancing continual learning methodologies towards more efficient, robust, and adaptable models for real-world applications.
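The core idea sketched in the abstract — estimate per-hyperparameter importance on early tasks, then restrict tuning on later tasks to the dominant ones — can be illustrated with a minimal toy example. This is a hedged sketch, not the paper's actual method: it uses random-forest feature importances as a rough stand-in for fANOVA's variance decomposition, and the objective `task_loss`, the hyperparameter names, and the 0.1 importance threshold are all hypothetical.

```python
# Hypothetical sketch: estimate hyperparameter importance on the first task,
# then restrict the search space for later tasks in the sequence.
# Random-forest feature importances serve as a rough proxy for fANOVA.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def task_loss(lr, wd):
    # Toy validation loss: dominated by the learning rate (lr), so lr
    # should come out as the most important hyperparameter.
    return (np.log10(lr) + 2.0) ** 2 + 0.05 * wd + rng.normal(0, 0.01)

# Random search over two hyperparameters on the first task.
configs = np.column_stack([
    10 ** rng.uniform(-4, 0, 200),  # learning rate, log-uniform
    rng.uniform(0.0, 0.1, 200),     # weight decay, uniform
])
losses = np.array([task_loss(lr, wd) for lr, wd in configs])

# Fit a surrogate model and read off per-hyperparameter importances.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(configs, losses)
importance = dict(zip(["lr", "wd"], surrogate.feature_importances_))
print("importances:", importance)

# On subsequent tasks, re-tune only the dominant hyperparameter(s) and
# freeze the rest at their best first-task values (threshold is arbitrary).
to_tune = [name for name, imp in importance.items() if imp > 0.1]
print("hyperparameters to re-tune on later tasks:", to_tune)
```

Because the toy loss depends almost entirely on the learning rate, the surrogate attributes nearly all variance to `lr`, and later tasks would only re-tune that one hyperparameter, shrinking the search space.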
Submission Number: 6