Flexible transfer learning framework for Bayesian optimisation

Joy, Tinu Theckel, Rana, Santu, Gupta, Sunil Kumar and Venkatesh, Svetha 2016, Flexible transfer learning framework for Bayesian optimisation. In Bailey, James, Khan, Latifur, Washio, Takashi, Dobbie, Gillian, Huang, Joshua Zhexue and Wang, Ruili (eds), Advances in knowledge discovery and data mining: 20th Pacific-Asia Conference, PAKDD 2016, Auckland, New Zealand, April 19-22, 2016, proceedings, part I, Springer, Berlin, Germany, pp. 102-114, doi: 10.1007/978-3-319-31753-3_9.

Title Flexible transfer learning framework for Bayesian optimisation
Author(s) Joy, Tinu Theckel
Rana, Santu (ORCID: orcid.org/0000-0003-2247-850X)
Gupta, Sunil Kumar (ORCID: orcid.org/0000-0002-3308-1930)
Venkatesh, Svetha (ORCID: orcid.org/0000-0001-8675-6631)
Title of book Advances in knowledge discovery and data mining: 20th Pacific-Asia Conference, PAKDD 2016 Auckland, New Zealand, April 19-22, 2016 proceedings, part I
Editor(s) Bailey, James
Khan, Latifur
Washio, Takashi
Dobbie, Gillian
Huang, Joshua Zhexue
Wang, Ruili
Publication date 2016
Series Lecture notes in artificial intelligence; v.9651
Chapter number 9
Total chapters 47
Start page 102
End page 114
Total pages 13
Publisher Springer
Place of Publication Berlin, Germany
Summary Bayesian optimisation is an efficient technique for optimising functions that are expensive to evaluate. In this paper, we propose a novel framework that transfers knowledge from a completed source optimisation task to a new target task in order to overcome the cold-start problem. We model source data as noisy observations of the target function, with the noise level estimated from the data in a Bayesian setting. This enables flexible knowledge transfer across tasks with differing relatedness, addressing a limitation of existing methods. We evaluate the framework on tuning the hyperparameters of two machine learning algorithms. Treating a fraction of the training data as the source task and the full dataset as the target task, we show that our method finds the best hyperparameters in the least time compared with both the state-of-the-art transfer methods and a no-transfer baseline.
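The core idea in the abstract, pooling a completed source task's observations into the target surrogate as extra, noisier observations, can be illustrated with a short sketch. The Python code below is not the authors' implementation: it pools source and target points in a scikit-learn Gaussian process, assigns the source points an inflated per-point noise term, and selects that noise level by maximising the GP log marginal likelihood over a small grid, as a simplified stand-in for the paper's Bayesian treatment of the source noise. The function names, noise grid, and toy data are illustrative assumptions.

    # Sketch of transfer-aware Bayesian optimisation: source observations are
    # treated as noisy observations of the target function (names illustrative).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def fit_transfer_gp(X_src, y_src, X_tgt, y_tgt, noise_grid=(0.01, 0.1, 1.0, 10.0)):
        """Fit a GP on pooled data, giving source points an extra noise variance
        chosen by maximising the log marginal likelihood over noise_grid."""
        X = np.vstack([X_src, X_tgt])
        y = np.concatenate([y_src, y_tgt])
        best_gp, best_ll = None, -np.inf
        for s2 in noise_grid:
            # per-point noise: inflated for source, near-noiseless for target
            alpha = np.concatenate([np.full(len(X_src), s2),
                                    np.full(len(X_tgt), 1e-6)])
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=alpha,
                                          normalize_y=True).fit(X, y)
            if gp.log_marginal_likelihood_value_ > best_ll:
                best_gp, best_ll = gp, gp.log_marginal_likelihood_value_
        return best_gp

    def expected_improvement(gp, X_cand, y_best):
        """Standard expected-improvement acquisition for minimisation."""
        mu, sd = gp.predict(X_cand, return_std=True)
        sd = np.maximum(sd, 1e-9)
        z = (y_best - mu) / sd
        return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

    # Toy usage: scarce target data plus a shifted source task; pick the next
    # hyperparameter setting to evaluate on the target task.
    rng = np.random.default_rng(0)
    X_src = rng.uniform(0, 1, (20, 1)); y_src = np.sin(6 * X_src).ravel() + 0.3
    X_tgt = rng.uniform(0, 1, (3, 1));  y_tgt = np.sin(6 * X_tgt).ravel()
    gp = fit_transfer_gp(X_src, y_src, X_tgt, y_tgt)
    X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
    x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y_tgt.min()))]

When the source task is closely related, the selected source noise stays small and the source points shape the surrogate strongly; when it is unrelated, a large noise variance effectively discounts them, which is the flexibility the abstract describes.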
ISBN 9783319317533
ISSN 0302-9743
Language eng
DOI 10.1007/978-3-319-31753-3_9
Field of Research 080109 Pattern Recognition and Data Mining
Socio Economic Objective 970108 Expanding Knowledge in the Information and Computing Sciences
HERDC Research category B1 Book chapter
ERA Research output type B Book chapter
Copyright notice ©2016, Springer
Persistent URL http://hdl.handle.net/10536/DRO/DU:30083252

Document type: Book Chapter
Collection: Centre for Pattern Recognition and Data Analytics

Citation counts: Web of Science: cited 0 times; Scopus: cited 1 time
Access statistics: 192 abstract views, 4 file downloads
Created: Thu, 05 May 2016, 12:45:39 EST
