
Supervised distance metric learning through maximization of the Jeffrey divergence

(2017) PATTERN RECOGNITION 64, p. 215–225
Abstract
Over the past decades, distance metric learning has attracted a lot of interest in machine learning and related fields. In this work, we propose an optimization framework for distance metric learning via linear transformations by maximizing the Jeffrey divergence between two multivariate Gaussian distributions derived from local pairwise constraints. In our method, the distance metric is trained on positive and negative difference spaces, which are built from the neighborhood of each training instance, so that the local discriminative information is preserved. We show how to solve this problem with a closed-form solution rather than using tedious optimization procedures. The solution is easy to implement, and tractable for large-scale problems. Experimental results are presented for both a linear and a kernelized version of the proposed method for k nearest neighbors classification. We obtain classification accuracies superior to the state-of-the-art distance metric learning methods in several cases while being competitive in others.
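To make the idea in the abstract concrete, the following is a rough sketch (not the authors' reference implementation) of the pipeline it describes: build positive and negative difference spaces from each point's same-class and different-class neighbors, fit a zero-mean Gaussian (covariance) to each, and obtain the linear transformation in closed form via a generalized eigendecomposition, keeping the directions that contribute most to the Jeffrey divergence. The function name, the `k` parameter, and the regularization constant are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def jeffrey_metric(X, y, k=3, dim=None):
    """Hypothetical sketch: learn a linear transform whose rows project
    the data so that the Jeffrey divergence between the Gaussians fitted
    to same-class and different-class neighbor differences is large."""
    n, d = X.shape
    dim = dim or d
    pos, neg = [], []
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        order = np.argsort(dists)[1:]          # skip the point itself
        same = order[y[order] == y[i]][:k]     # k nearest same-class neighbors
        diff = order[y[order] != y[i]][:k]     # k nearest other-class neighbors
        pos.extend(X[j] - X[i] for j in same)  # positive difference space
        neg.extend(X[j] - X[i] for j in diff)  # negative difference space
    # Covariances of the two difference spaces, lightly regularized
    S = np.cov(np.asarray(pos).T) + 1e-6 * np.eye(d)
    D = np.cov(np.asarray(neg).T) + 1e-6 * np.eye(d)
    # Closed form: generalized eigenproblem D v = lambda S v. The Jeffrey
    # divergence of the projected Gaussians decomposes per direction as
    # (lambda + 1/lambda)/2 - 1, so keep the largest lambda + 1/lambda.
    w, V = eigh(D, S)
    score = w + 1.0 / w
    idx = np.argsort(score)[::-1][:dim]
    return V[:, idx].T                          # rows are projection directions
```

The learned matrix `L` defines a Mahalanobis-type distance `||L(x - x')||`, which can be plugged directly into a k-nearest-neighbors classifier; no iterative optimization is needed, which is what makes the approach tractable at scale.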
Keywords
Distance metric learning, Nearest neighbor, Linear transformation, Jeffrey divergence, Equivalence constraints, Component analysis, Classification, Algorithms, Selection, Kernel


Citation

MLA
Nguyen Cong, Bac, Carlos Morell, and Bernard De Baets. “Supervised Distance Metric Learning Through Maximization of the Jeffrey Divergence.” PATTERN RECOGNITION 64 (2017): 215–225. Print.
APA
Nguyen Cong, B., Morell, C., & De Baets, B. (2017). Supervised distance metric learning through maximization of the Jeffrey divergence. PATTERN RECOGNITION, 64, 215–225.
Chicago author-date
Nguyen Cong, Bac, Carlos Morell, and Bernard De Baets. 2017. “Supervised Distance Metric Learning Through Maximization of the Jeffrey Divergence.” Pattern Recognition 64: 215–225.
Vancouver
1.
Nguyen Cong B, Morell C, De Baets B. Supervised distance metric learning through maximization of the Jeffrey divergence. PATTERN RECOGNITION. 2017;64:215–25.
IEEE
[1]
B. Nguyen Cong, C. Morell, and B. De Baets, “Supervised distance metric learning through maximization of the Jeffrey divergence,” PATTERN RECOGNITION, vol. 64, pp. 215–225, 2017.
@article{8516928,
  abstract     = {Over the past decades, distance metric learning has attracted a lot of interest in machine learning and related fields. In this work, we propose an optimization framework for distance metric learning via linear transformations by maximizing the Jeffrey divergence between two multivariate Gaussian distributions derived from local pairwise constraints. In our method, the distance metric is trained on positive and negative difference spaces, which are built from the neighborhood of each training instance, so that the local discriminative information is preserved. We show how to solve this problem with a closed-form solution rather than using tedious optimization procedures. The solution is easy to implement, and tractable for large-scale problems. Experimental results are presented for both a linear and a kernelized version of the proposed method for k nearest neighbors classification. We obtain classification accuracies superior to the state-of-the-art distance metric learning methods in several cases while being competitive in others.},
  author       = {Nguyen Cong, Bac and Morell, Carlos and De Baets, Bernard},
  issn         = {0031-3203},
  journal      = {PATTERN RECOGNITION},
  keywords     = {Distance metric learning,Nearest neighbor,Linear transformation,Jeffrey divergence,EQUIVALENCE CONSTRAINTS,COMPONENT ANALYSIS,CLASSIFICATION,ALGORITHMS,SELECTION,KERNEL},
  language     = {eng},
  pages        = {215--225},
  title        = {Supervised distance metric learning through maximization of the Jeffrey divergence},
  url          = {http://dx.doi.org/10.1016/j.patcog.2016.11.010},
  volume       = {64},
  year         = {2017},
}
