Deakin University

File(s) under permanent embargo

A gradient-based metric learning algorithm for k-NN classifiers

conference contribution
posted on 2010-12-01, 00:00 authored by Nayyar Zaidi, D M G Squire, D Suter
Nearest Neighbor (NN) classification and regression techniques are, despite their simplicity, amongst the most widely applied and well-studied techniques for pattern recognition in machine learning. A drawback, however, is their assumption that a suitable metric is available for measuring distances to the k nearest neighbors. It has been shown that a k-NN classifier with a suitable distance metric can outperform more sophisticated alternatives such as Support Vector Machines and Gaussian Process classifiers. For this reason, much recent research on k-NN methods has focused on metric learning, i.e. finding an optimized metric. In this paper we propose a simple gradient-based algorithm for metric learning. We discuss in detail the motivations behind metric learning, i.e. error minimization and margin maximization. Our formulation differs from the prevalent metric learning techniques, whose goal is to maximize the classifier's margin: our proposed technique (MEGM) instead finds an optimal metric by directly minimizing the mean square error. Our technique not only results in greatly improved k-NN performance, but also outperforms competing metric learning techniques. Promising results are reported on major UCI machine learning datasets. © 2010 Springer-Verlag.
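
The abstract describes MEGM only at a high level: a distance metric is adjusted by gradient descent so as to directly minimize the mean square error of a nearest-neighbor predictor. A minimal sketch of that idea follows, assuming a diagonal (feature-weighting) metric and a soft, distance-weighted neighbor rule for differentiability; this is not the authors' MEGM code, and the function names, softmax smoothing, and learning-rate/step defaults are all illustrative assumptions.

    import numpy as np

    def soft_nn_predict(X, y, w):
        # Leave-one-out soft nearest-neighbor prediction under the weighted
        # squared distance d_ij = sum_k w_k * (x_ik - x_jk)^2.
        s = (X[:, None, :] - X[None, :, :]) ** 2        # (n, n, d) per-feature gaps
        d = s @ w                                        # (n, n) pairwise distances
        np.fill_diagonal(d, np.inf)                      # a point never votes for itself
        p = np.exp(-(d - d.min(axis=1, keepdims=True)))  # row-shifted for stability
        p /= p.sum(axis=1, keepdims=True)                # soft neighbor weights
        return p @ y, p, s

    def mse_and_gradient(X, y, w):
        # Analytic gradient of the mean square error with respect to w.
        y_hat, p, s = soft_nn_predict(X, y, w)
        r = y_hat - y
        # d y_hat_i / d w_k = -sum_j p_ij y_j s_ijk + y_hat_i * sum_j p_ij s_ijk
        g = (-np.einsum('ij,j,ijk->ik', p, y, s)
             + y_hat[:, None] * np.einsum('ij,ijk->ik', p, s))
        return np.mean(r ** 2), (2.0 / len(y)) * (r @ g)

    def learn_metric(X, y, lr=0.05, steps=200):
        # Projected gradient descent, starting from the plain Euclidean metric.
        w = np.ones(X.shape[1])
        for _ in range(steps):
            _, grad = mse_and_gradient(X, y, w)
            w = np.maximum(w - lr * grad, 0.0)           # keep weights non-negative
        return w

Once w has been learned, a standard k-NN classifier simply measures distances with d(x, x') = sum_k w_k (x_k - x'_k)^2 in place of the plain Euclidean distance.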

History

Volume

6464 LNAI

Pagination

194-203

ISSN

0302-9743

eISSN

1611-3349

ISBN-13

9783642174315

ISBN-10

3642174310

Publication classification

E1.1 Full written paper - refereed

Title of proceedings

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
