Poster in Affinity Workshop: LatinX in AI
Dynamic Adapter Routing in Continual Learning of Language Models
Vladimir Araujo · Marie-Francine Moens · Tinne Tuytelaars
Parameter-efficient fine-tuning (PEFT) techniques are increasingly applied to pre-trained language models (PLMs) for continual learning (CL). Typically, these methods train a PEFT module for each new task and use similarity-based selection to route modules during inference. However, they suffer from two key issues: interference with previously trained modules and suboptimal module composition. We introduce L2R, a strategy that isolates the training of new PEFT modules to ensure task-specific specialization. L2R then learns to combine these modules by training a router network that leverages a small memory of prior-task examples. We evaluate our approach in two CL setups across various benchmarks. Our findings show that L2R effectively composes PEFT modules, resulting in improved generalization and performance compared to other methods.
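The routing idea in the abstract can be illustrated with a minimal sketch: per-task adapters are trained in isolation and frozen, and a router network scores them from the input representation to produce mixture weights for composing their outputs. This is a toy NumPy illustration under assumptions of the abstract's description, not the paper's implementation; LoRA-style low-rank adapters, the linear softmax router, and all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_tasks = 8, 2, 3  # hidden size, adapter rank, number of tasks (toy values)

# Frozen per-task adapters, each trained in isolation on its own task
# (assumed LoRA-style low-rank updates: h -> h + (h @ A) @ B).
adapters = [
    (rng.standard_normal((d, r)) * 0.1, rng.standard_normal((r, d)) * 0.1)
    for _ in range(n_tasks)
]

# Router: a linear layer scoring each adapter from the input representation.
# In the paper's setting it would be trained on a small memory of prior-task
# examples; here its weights are just random for illustration.
W_router = rng.standard_normal((d, n_tasks)) * 0.1


def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()


def route_and_combine(h):
    """Weight each frozen adapter's output by the router's softmax scores."""
    weights = softmax(h @ W_router)                    # shape (n_tasks,)
    outputs = [h + (h @ A) @ B for A, B in adapters]   # adapter-modified states
    combined = sum(w * o for w, o in zip(weights, outputs))
    return combined, weights


h = rng.standard_normal(d)       # a hidden representation for one input
combined, weights = route_and_combine(h)
```

At inference no task identity is needed: the router's soft weights decide how much each specialist adapter contributes, which is the composition behavior the abstract contrasts with hard similarity-based selection.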