Multiple task transfer learning with small sample sizes

Saha, Budhaditya, Gupta, Sunil, Phung, Dinh and Venkatesh, Svetha 2016, Multiple task transfer learning with small sample sizes, Knowledge and information systems, vol. 46, no. 2, pp. 315-342, doi: 10.1007/s10115-015-0821-z.


Title Multiple task transfer learning with small sample sizes
Author(s) Saha, Budhaditya (ORCID: orcid.org/0000-0001-8011-6801)
Gupta, Sunil (ORCID: orcid.org/0000-0002-3308-1930)
Phung, Dinh (ORCID: orcid.org/0000-0002-9977-8247)
Venkatesh, Svetha (ORCID: orcid.org/0000-0001-8675-6631)
Journal name Knowledge and information systems
Volume number 46
Issue number 2
Start page 315
End page 342
Total pages 28
Publisher Springer
Place of publication Berlin, Germany
Publication date 2016-02
ISSN 0219-1377 (print)
0219-3116 (electronic)
Keyword(s) Science & Technology
Technology
Computer Science, Artificial Intelligence
Computer Science, Information Systems
Computer Science
Multi-task
Transfer learning
Optimization
Healthcare
Data mining
Statistical analysis
Adaptation
Summary Prognosis, such as predicting mortality, is common in medicine. When confronted with small numbers of samples, as in rare medical conditions, the task is challenging. We propose a classification framework for data with small sample sizes. Conceptually, our solution is a hybrid of multi-task and transfer learning: it employs data samples from source tasks as in transfer learning, but considers all tasks together as in multi-task learning. Each task is modelled jointly with other related tasks by directly augmenting its data with data from those tasks. The degree of augmentation depends on task relatedness and is estimated directly from the data. We apply the model to three diverse real-world data sets (healthcare data, handwritten digit data and face data) and show that our method outperforms several state-of-the-art multi-task learning baselines. We also extend the model to online multi-task learning, where the model parameters are incrementally updated given new data or new tasks. The novelty of our method lies in offering a hybrid multi-task/transfer learning model that exploits sharing across tasks at the data level alongside joint parameter learning.
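The summary describes the approach only conceptually. As a rough illustration (not the authors' algorithm), the Python sketch below implements data-level sharing of this flavour, assuming logistic-regression base learners, tasks that share one feature space and label set, and cosine similarity between independently fitted weight vectors as a hypothetical proxy for the paper's data-driven relatedness estimate.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_augmented_tasks(tasks, relatedness_floor=0.0):
    """Fit one classifier per task, augmenting each task's training set
    with samples from the other tasks, weighted by estimated relatedness.

    tasks: list of (X, y) pairs, one per task, sharing one feature space.
    Returns a list of fitted classifiers, one per task.
    """
    # Step 1: fit an independent model per task; its weight vector is
    # used only to score relatedness between tasks.
    solo = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in tasks]
    W = np.vstack([m.coef_.ravel() for m in solo])

    # Step 2: cosine similarity between task weight vectors, clipped to
    # [relatedness_floor, 1] so sample weights stay non-negative.
    norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
    R = np.clip((W @ W.T) / (norms * norms.T), relatedness_floor, 1.0)

    # Step 3: refit each task on the pooled data, with every sample from
    # task s weighted by the relatedness R[t, s] (own-task weight is 1).
    X_aug = np.vstack([X for X, _ in tasks])
    y_aug = np.concatenate([y for _, y in tasks])
    models = []
    for t in range(len(tasks)):
        w_aug = np.concatenate(
            [np.full(len(y), R[t, s]) for s, (_, y) in enumerate(tasks)]
        )
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_aug, y_aug, sample_weight=w_aug)
        models.append(clf)
    return models

In this sketch the relatedness matrix plays the role the paper assigns to its data-driven estimate: unrelated source tasks receive near-zero sample weights and contribute little, while closely related tasks effectively enlarge a target task's small training set.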
Language eng
DOI 10.1007/s10115-015-0821-z
Field of Research 080109 Pattern Recognition and Data Mining
0801 Artificial Intelligence and Image Processing
Socio Economic Objective 970108 Expanding Knowledge in the Information and Computing Sciences
HERDC Research category C1 Refereed article in a scholarly journal
ERA Research output type C Journal article
Copyright notice ©2016, Springer
Persistent URL http://hdl.handle.net/10536/DRO/DU:30076876

Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Citation counts: Web of Science: 1; Scopus: 0
Access statistics: 196 abstract views, 1 file download
Created: Mon, 07 Mar 2016, 17:58:31 EST

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.