Multiple kernel learning with data augmentation

Nguyen, Khanh, Le, Trung, Nguyen, Tien Vu, Nguyen, Tu Dinh and Phung, Quoc-Dinh 2016, Multiple kernel learning with data augmentation, in ACML 2016 : JMLR: Workshop and Conference Proceedings, MIT Press, Cambridge, MA, pp. 49-64.

Title Multiple kernel learning with data augmentation
Author(s) Nguyen, Khanh
Le, Trung (ORCID: orcid.org/0000-0002-7070-8093)
Nguyen, Tien Vu (ORCID: orcid.org/0000-0002-9977-8247)
Nguyen, Tu Dinh
Phung, Quoc-Dinh
Conference name Asian Conference on Machine Learning (2016 : Hamilton, New Zealand)
Conference location Hamilton, New Zealand
Conference dates 2016/11/16 - 2016/11/18
Title of proceedings ACML 2016 : JMLR: Workshop and Conference Proceedings
Publication date 2016
Start page 49
End page 64
Total pages 16
Publisher MIT Press
Place of publication Cambridge, MA
Summary © 2016 K. Nguyen, T. Le, V. Nguyen, T.D. Nguyen & D. Phung. The motivations of the multiple kernel learning (MKL) approach are to increase the expressive capacity of kernels and to avoid an expensive grid search over a wide spectrum of kernels. A large body of work has been proposed to improve MKL in terms of computational cost and sparsity of the solution. However, these studies still either require an expensive grid search over the model parameters or scale unsatisfactorily with the number of kernels and training samples. In this paper, we address these issues by conjoining MKL, the stochastic gradient descent (SGD) framework, and a data augmentation technique. The pathway of our proposed method is developed as follows. We first develop a maximum a posteriori (MAP) view of MKL under a probabilistic setting, described by a graphical model. This view allows us to develop a data augmentation technique that makes inference for the optimal parameters feasible, as opposed to the traditional approach of training MKL via convex optimization techniques. As a result, we can use the standard SGD framework to learn the weight matrix and extend the model to support online learning. We validate our method on several benchmark datasets in both batch and online settings. The experimental results show that our proposed method can learn the parameters in a principled way, eliminating the expensive grid search while gaining a significant computational speedup compared with state-of-the-art baselines.
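To make the high-level recipe in the summary concrete, the following minimal Python sketch shows generic SGD-based multiple kernel learning: a convex combination of base Gram matrices is learned jointly with dual-style coefficients under a hinge loss, with the kernel weights kept on the simplex via a softmax parameterization. This is only an illustrative sketch under assumed toy data and made-up hyperparameters; it does not implement the paper's MAP formulation or its data augmentation scheme.

# Illustrative sketch only: a generic SGD loop for multiple kernel
# learning (hinge loss, softmax-parameterized kernel weights). This is
# NOT the authors' data-augmentation algorithm; the dataset, kernel
# bank, and hyperparameters below are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1}.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)

def rbf_gram(X, gamma):
    # Full RBF Gram matrix exp(-gamma * ||x_i - x_j||^2).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# A small bank of base kernels: two RBF bandwidths and a linear kernel.
grams = [rbf_gram(X, 0.1), rbf_gram(X, 1.0), X @ X.T]
M, n = len(grams), len(y)

alpha = np.zeros(n)   # dual-style coefficients
eta = np.zeros(M)     # unconstrained weights; softmax maps them to the simplex
lr, lam, epochs = 0.1, 1e-3, 20

for _ in range(epochs):
    for i in rng.permutation(n):
        mu = np.exp(eta) / np.exp(eta).sum()            # simplex kernel weights
        k_i = sum(m * G[i] for m, G in zip(mu, grams))  # combined kernel row
        margin = y[i] * (alpha @ k_i)
        if margin < 1:
            # Hinge-loss subgradient step on alpha and the kernel weights.
            g_alpha = -y[i] * k_i + lam * alpha
            g_mu = np.array([-y[i] * (alpha @ G[i]) for G in grams])
            g_eta = mu * (g_mu - mu @ g_mu)             # softmax chain rule
            alpha -= lr * g_alpha
            eta -= lr * g_eta
        else:
            alpha -= lr * lam * alpha                   # regularizer only

mu = np.exp(eta) / np.exp(eta).sum()
K = sum(m * G for m, G in zip(mu, grams))
print("kernel weights:", np.round(mu, 3),
      "train accuracy:", np.mean(np.sign(K @ alpha) == y))

The softmax reparameterization is one simple way to keep the learned kernel combination non-negative and normalized without a projection step; the paper's probabilistic MAP treatment arrives at its parameter updates differently.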
ISSN 1532-4435 (print)
1533-7928 (electronic)
Language eng
Indigenous content off
Field of Research 08 Information and Computing Sciences
17 Psychology and Cognitive Sciences
HERDC Research category E1 Full written paper - refereed
Copyright notice ©2016, K. Nguyen, T. Le, V. Nguyen, T.D. Nguyen & D. Phung
Free to Read? Yes
Persistent URL http://hdl.handle.net/10536/DRO/DU:30129571

Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Citation counts Web of Science: 0; Scopus: 4
Created: Thu, 05 Sep 2019, 12:16:11 EST

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.