Approximation vector machines for large-scale online learning
Version 2: 2024-06-05, 11:50
Version 1: 2018-01-17, 20:54
journal contribution
Posted on 2024-06-05, 11:50; authored by T. Le, T. D. Nguyen, V. Nguyen, D. Phung
One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that simultaneously encourages sparsity and safeguards against the risk of compromised performance. In an online setting, when an incoming instance arrives, we approximate it by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis for common loss functions, including Hinge, smooth Hinge, and Logistic (i.e., for the classification task) and ℓ1, ℓ2, and ϵ-insensitive (i.e., for the regression task), to characterize the gap between the approximation and optimal solutions. This gap crucially depends on two key factors: the frequency of approximation (i.e., how frequently the approximation operation takes place) and the predefined threshold. We conducted extensive experiments for classification and regression tasks in batch and online modes using several benchmark datasets. The quantitative results show that the proposed AVM obtains predictive performance comparable to current state-of-the-art methods while achieving significant computational speed-up owing to its ability to bound the model size.
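To make the approximation step concrete, the sketch below illustrates the core idea from the abstract in Python: an incoming instance is mapped onto an already-stored point when one lies within the predefined threshold, so the number of stored vectors is bounded. The names (`approximate_instance`, `core_points`, `delta`) and the use of Euclidean distance are illustrative assumptions, not the paper's implementation, which operates in the kernel-induced feature space and couples this step with an online gradient update.

```python
import numpy as np

def approximate_instance(x, core_points, delta):
    """Map incoming instance x onto an existing core point within
    distance delta, or admit x as a new core point otherwise.

    x           : np.ndarray feature vector of the incoming instance
    core_points : list of np.ndarray vectors the model currently stores
    delta       : predefined approximation threshold (hypothetical name)
    """
    if core_points:
        dists = [np.linalg.norm(x - c) for c in core_points]
        j = int(np.argmin(dists))
        if dists[j] <= delta:
            # A sufficiently close neighbor exists: reuse it, so the
            # model size (number of stored vectors) does not grow.
            return core_points[j]
    # No neighbor within delta: x itself becomes a new core point.
    core_points.append(x)
    return x
```

In an online loop, the returned vector would stand in for the raw instance in whatever update rule the learner applies; the trade-off the paper analyzes is that a larger threshold shrinks the model faster but widens the gap between the approximation and the optimal solution.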
History
Journal
Journal of Machine Learning Research
Volume
18
Pagination
1-55
Location
Cambridge, Mass.
ISSN
1532-4435
eISSN
1533-7928
Language
eng
Publication classification
C Journal article, C1 Refereed article in a scholarly journal
Copyright notice
2017, Trung Le, Tu Dinh Nguyen, Vu Nguyen, and Dinh Phung.