Multiple task transfer learning with small sample sizes
Saha, Budhaditya, Gupta, Sunil, Phung, Dinh and Venkatesh, Svetha 2016, 'Multiple task transfer learning with small sample sizes', Knowledge and Information Systems, vol. 46, no. 2, pp. 315-342, doi: 10.1007/s10115-015-0821-z.
Prognosis, such as predicting mortality, is common in medicine. When only small numbers of samples are available, as in rare medical conditions, the task is challenging. We propose a framework for classification from small samples. Conceptually, our solution is a hybrid of multi-task and transfer learning: it employs data samples from source tasks as in transfer learning, but considers all tasks together as in multi-task learning. Each task is modelled jointly with other related tasks by directly augmenting its training data with data from those tasks. The degree of augmentation depends on task relatedness, which is estimated directly from the data. We apply the model to three diverse real-world data sets (healthcare data, handwritten digit data and face data) and show that our method outperforms several state-of-the-art multi-task learning baselines. We also extend the model to online multi-task learning, where the model parameters are incrementally updated as new data or new tasks arrive. The novelty of our method lies in offering a hybrid multi-task/transfer learning model that exploits sharing across tasks both at the data level and through joint parameter learning.
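The abstract does not specify the base learner or the exact relatedness estimator, so the following is only a minimal sketch of the data-augmentation idea it describes. It assumes a logistic-regression learner and a hypothetical accuracy-based relatedness proxy (the function `estimate_relatedness` is an illustrative stand-in, not the paper's estimator): each task's training set is augmented with every other task's data, with source samples down-weighted by estimated relatedness.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_relatedness(X_t, y_t, X_s, y_s):
    """Hypothetical proxy for task relatedness: accuracy of a model
    trained on the source task, evaluated on the target task's sample."""
    src = LogisticRegression(max_iter=1000).fit(X_s, y_s)
    acc = src.score(X_t, y_t)
    # Rescale so that chance level (0.5) or worse maps to weight 0.
    return max(0.0, 2.0 * acc - 1.0)

def fit_augmented_task(task_id, tasks):
    """Fit one task's classifier on its own data plus the data of every
    other task, with each source sample weighted by relatedness."""
    X_t, y_t = tasks[task_id]
    Xs, ys, ws = [X_t], [y_t], [np.ones(len(y_t))]
    for s, (X_s, y_s) in enumerate(tasks):
        if s == task_id:
            continue
        rho = estimate_relatedness(X_t, y_t, X_s, y_s)
        Xs.append(X_s)
        ys.append(y_s)
        ws.append(np.full(len(y_s), rho))
    X, y, w = np.vstack(Xs), np.concatenate(ys), np.concatenate(ws)
    return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)

# Usage: three synthetic binary tasks with small samples; the first two
# are closely related, the third is shifted further away.
rng = np.random.default_rng(0)
tasks = []
for shift in (0.0, 0.2, 1.5):
    X = rng.normal(loc=shift, size=(30, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    tasks.append((X, y))
clf = fit_augmented_task(0, tasks)
print(clf.score(*tasks[0]))
```

Under this reading, unrelated sources receive weight near zero and contribute little, while related sources effectively enlarge the small training set. The online extension mentioned in the abstract would correspond to re-estimating the weights and incrementally updating the parameters as new data or tasks arrive; the incremental update rule itself is given in the paper, not reproduced here.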