
Scalable large-margin distance metric learning using stochastic gradient descent

(2020) IEEE TRANSACTIONS ON CYBERNETICS. 50(3). p.1072-1083
Author
Bac Nguyen Cong, Carlos Morell and Bernard De Baets
Abstract
The key to the success of many machine learning and pattern recognition algorithms lies in how distances between the input data are computed. In this paper, we propose a large-margin-based approach, called large-margin distance metric learning (LMDML), for learning a Mahalanobis distance metric. LMDML employs the principle of margin maximization to learn the distance metric with the goal of improving k-nearest-neighbor classification. The main challenge of distance metric learning is the positive semidefiniteness constraint on the Mahalanobis matrix. Semidefinite programming is commonly used to enforce this constraint, but it becomes computationally intractable on large-scale data sets. To overcome this limitation, we develop an efficient algorithm based on stochastic gradient descent. Our algorithm avoids computing the full gradient and ensures that the learned matrix remains within the positive semidefinite cone after each iteration. Extensive experiments show that the proposed algorithm scales to large data sets and outperforms other state-of-the-art distance metric learning approaches in both classification accuracy and training time.
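
To make the setting concrete, the following is a minimal sketch in Python/NumPy of the generic technique the abstract describes: stochastic gradient descent over large-margin triplet constraints on a Mahalanobis matrix, with a projection back onto the PSD cone after each update. This is an illustration, not the authors' LMDML algorithm; all names here are hypothetical, and the eigenvalue-clipping projection is exactly the kind of full-spectrum computation the paper's method is designed to avoid.

import numpy as np

# Illustrative sketch only (not the paper's LMDML algorithm): projected
# stochastic gradient descent for a PSD Mahalanobis matrix M under a
# large-margin triplet hinge loss.

def mahalanobis_sq(M, x, y):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return d @ M @ d

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues. Note: this full eigendecomposition is the expensive step
    the paper's algorithm avoids; it is only a stand-in here."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

def sgd_metric_learning(triplets, dim, lr=0.01, margin=1.0, epochs=10, seed=0):
    """Each triplet (x, x_pos, x_neg) requires x to be closer to x_pos than
    to x_neg by at least `margin` under the learned metric."""
    rng = np.random.default_rng(seed)
    M = np.eye(dim)  # start from the Euclidean metric
    for _ in range(epochs):
        for i in rng.permutation(len(triplets)):
            x, xp, xn = triplets[i]
            viol = margin + mahalanobis_sq(M, x, xp) - mahalanobis_sq(M, x, xn)
            if viol > 0:  # hinge active: stochastic step on this triplet only
                dp, dn = x - xp, x - xn
                grad = np.outer(dp, dp) - np.outer(dn, dn)
                M = project_psd(M - lr * grad)
    return M

In the standard large-margin nearest-neighbor setup the abstract refers to, triplets are typically built by pairing each training point with a same-class neighbor and a nearby different-class point; the hinge loss is active only when a margin constraint is violated, which keeps the stochastic updates cheap.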
Keywords
Eigenvalue, large-margin nearest neighbor, metric learning, positive semidefinite (PSD) matrix, stochastic gradient descent (SGD)

Downloads

  • (...).pdf | full text (Published version) | UGent only | PDF | 1.18 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Nguyen Cong, Bac, et al. “Scalable Large-Margin Distance Metric Learning Using Stochastic Gradient Descent.” IEEE TRANSACTIONS ON CYBERNETICS, vol. 50, no. 3, 2020, pp. 1072–83, doi:10.1109/tcyb.2018.2881417.
APA
Nguyen Cong, B., Morell, C., & De Baets, B. (2020). Scalable large-margin distance metric learning using stochastic gradient descent. IEEE TRANSACTIONS ON CYBERNETICS, 50(3), 1072–1083. https://doi.org/10.1109/tcyb.2018.2881417
Chicago author-date
Nguyen Cong, Bac, Carlos Morell, and Bernard De Baets. 2020. “Scalable Large-Margin Distance Metric Learning Using Stochastic Gradient Descent.” IEEE TRANSACTIONS ON CYBERNETICS 50 (3): 1072–83. https://doi.org/10.1109/tcyb.2018.2881417.
Chicago author-date (all authors)
Nguyen Cong, Bac, Carlos Morell, and Bernard De Baets. 2020. “Scalable Large-Margin Distance Metric Learning Using Stochastic Gradient Descent.” IEEE TRANSACTIONS ON CYBERNETICS 50 (3): 1072–1083. doi:10.1109/tcyb.2018.2881417.
Vancouver
1. Nguyen Cong B, Morell C, De Baets B. Scalable large-margin distance metric learning using stochastic gradient descent. IEEE TRANSACTIONS ON CYBERNETICS. 2020;50(3):1072–83.
IEEE
[1] B. Nguyen Cong, C. Morell, and B. De Baets, “Scalable large-margin distance metric learning using stochastic gradient descent,” IEEE TRANSACTIONS ON CYBERNETICS, vol. 50, no. 3, pp. 1072–1083, 2020.
@article{8585899,
  author       = {{Nguyen Cong, Bac and Morell, Carlos and De Baets, Bernard}},
  issn         = {{2168-2267}},
  journal      = {{IEEE TRANSACTIONS ON CYBERNETICS}},
  keywords     = {{EIGENVALUE,Large-margin nearest neighbor,metric learning,positive semidefinite (PSD) matrix,stochastic gradient descent (SGD)}},
  language     = {{eng}},
  number       = {{3}},
  pages        = {{1072--1083}},
  title        = {{Scalable large-margin distance metric learning using stochastic gradient descent}},
  url          = {{https://doi.org/10.1109/tcyb.2018.2881417}},
  volume       = {{50}},
  year         = {{2020}},
}
