continual-learning

Continual-T0: progressively instructing 50+ tasks to language models without forgetting

This is a paper I presented at Bang Liu’s research group meeting on 2022-06-06. You can view the slides I used here. Continual-T0 (CT0) extends T0 by progressively fine-tuning it on 8 new language generation tasks, while rehearsing on a replay buffer containing 1% of the original training data to preserve prior performance. The result is a model that retains nearly all of its performance on previously learned tasks while also learning the new ones. In addition, CT0 maintains the original T0’s zero-shot performance on unseen tasks (which is a big deal, because those tasks could not appear in the replay buffer), and it extends T0’s instruction compositionality to even more unseen tasks.
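To make the rehearsal idea concrete, here is a minimal sketch (not the paper’s actual code) of how a ~1% replay buffer of earlier tasks might be mixed into the training stream for each new task; the function name, the `fine_tune` placeholder, and the sequential loop are assumptions for illustration only.

```python
import random


def build_rehearsal_mix(new_task_examples, old_task_examples,
                        replay_fraction=0.01, seed=0):
    """Mix a small replay buffer of old-task data into the new task's training set.

    Sketch of the rehearsal strategy described above: while fine-tuning on each
    new task, keep roughly 1% of previously seen training examples in the
    stream so earlier skills are not forgotten.
    """
    rng = random.Random(seed)
    buffer_size = max(1, int(replay_fraction * len(old_task_examples)))
    replay_buffer = rng.sample(old_task_examples, buffer_size)
    mixed = new_task_examples + replay_buffer
    rng.shuffle(mixed)
    return mixed


# Hypothetical usage: fine-tune on each new task in sequence, always
# rehearsing a 1% sample of everything seen so far.
#
# seen_so_far = list(t0_training_examples)
# for task in new_tasks:
#     train_set = build_rehearsal_mix(task.examples, seen_so_far)
#     fine_tune(model, train_set)  # placeholder for the actual training call
#     seen_so_far.extend(task.examples)
```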
Read more