Continual learning using multi-view task conditional neural networks
File(s) 2005.05080v3.pdf (912.34 KB)
Working paper
Author(s)
Li, Honglin
Barnaghi, Payam
Enshaeifar, Shirin
Ganz, Frieder
Type
Working Paper
Abstract
Conventional deep learning models have limited capacity for learning multiple
tasks sequentially. The tendency to forget previously learned tasks in
continual learning is known as catastrophic forgetting or interference. When
the input data or the learning goal changes, a continual model learns and
adapts to the new status; however, it will not remember or recognise revisits
to previous states. This causes performance degradation and repeated
re-training when changes in the data or goals occur periodically or recur
irregularly. Such changes in goals or data are referred to as new tasks in a
continual learning model. Most continual learning methods assume a task-known
setup, in which the task identities are known to the learning model in
advance. We propose Multi-view Task Conditional Neural Networks (Mv-TCNN),
which do not require the reoccurring tasks to be known in advance. We evaluate
our model on standard benchmarks (MNIST, CIFAR10, and CIFAR100) and on a
real-world dataset collected in a remote healthcare monitoring study (the TIHM
dataset). The proposed model outperforms state-of-the-art solutions in
continual learning and in adapting to new tasks that are not defined in
advance.
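The catastrophic forgetting described in the abstract can be illustrated with a minimal sketch (this is not the Mv-TCNN model, just a toy demonstration of the phenomenon): a single logistic neuron trained sequentially on two conflicting tasks loses its accuracy on the first task once it adapts to the second.

```python
import math
import random

rng = random.Random(0)

def make_task(flip):
    """200 points in [-1, 1]; task A labels x > 0 as 1, task B reverses it."""
    xs = [rng.uniform(-1, 1) for _ in range(200)]
    ys = [1.0 if x > 0 else 0.0 for x in xs]
    if flip:
        ys = [1.0 - y for y in ys]
    return xs, ys

def train(w, b, xs, ys, lr=0.5, epochs=200):
    """Full-batch gradient descent on logistic loss for one scalar weight."""
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
            gw += (p - y) * x / n
            gb += (p - y) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def accuracy(w, b, xs, ys):
    return sum((w * x + b > 0) == (y > 0.5) for x, y in zip(xs, ys)) / len(xs)

xa, ya = make_task(flip=False)   # task A
xb, yb = make_task(flip=True)    # task B (conflicting decision rule)

w, b = 0.0, 0.0
w, b = train(w, b, xa, ya)
acc_a_before = accuracy(w, b, xa, ya)  # high after learning task A

w, b = train(w, b, xb, yb)             # continue training on task B only
acc_a_after = accuracy(w, b, xa, ya)   # task A has been "forgotten"
```

Because the two tasks require opposite signs of the same weight, sequential training on task B overwrites what was learned for task A; `acc_a_before` is near perfect while `acc_a_after` collapses. Task-conditional approaches such as the one proposed here avoid this by conditioning the network's computation on an identified task.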
Date Issued
2020-07-13
Citation
2020
Publisher
arXiv
Copyright Statement
© 2020 The Author(s)
Identifier
http://arxiv.org/abs/2005.05080v3
Subjects
cs.LG
stat.ML
Notes
10 pages
Publication Status
Published