
Scalable large-margin distance metric learning using stochastic gradient descent

Abstract
The key to the success of many machine learning and pattern recognition algorithms is how distances between input data are computed. In this paper, we propose a large-margin-based approach, called large-margin distance metric learning (LMDML), for learning a Mahalanobis distance metric. LMDML employs the principle of margin maximization to learn the distance metric with the goal of improving k-nearest-neighbor classification. The main challenge of distance metric learning is the positive semidefiniteness constraint on the Mahalanobis matrix. Semidefinite programming is commonly used to enforce this constraint, but it becomes computationally intractable on large-scale data sets. To overcome this limitation, we develop an efficient algorithm based on stochastic gradient descent. Our algorithm avoids computing the full gradient and ensures that the learned matrix remains within the positive semidefinite cone after each iteration. Extensive experiments show that the proposed algorithm scales to large data sets and outperforms other state-of-the-art distance metric learning approaches in terms of both classification accuracy and training time.
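
The abstract outlines the computational core: per-sample stochastic gradient steps on the Mahalanobis matrix M, with M kept inside the positive semidefinite (PSD) cone after every update. The Python sketch below illustrates that general scheme under common assumptions: a triplet hinge loss max(0, margin + d_M(x, p) - d_M(x, n)) with d_M(a, b) = (a - b)^T M (a - b), and projection onto the PSD cone by eigenvalue clipping. The function names, hyperparameters, and loss form are illustrative rather than taken from the paper, whose contribution is precisely a cheaper update than the naive O(d^3) projection shown here.

import numpy as np

def psd_project(M):
    # Project a symmetric matrix onto the PSD cone by clipping
    # negative eigenvalues at zero (naive O(d^3) projection).
    M = (M + M.T) / 2.0                      # guard against asymmetry drift
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T    # V diag(max(w, 0)) V^T

def sgd_metric_learning(triplets, dim, lr=0.01, margin=1.0, epochs=10, seed=0):
    # Learn a Mahalanobis matrix M from (anchor, similar, dissimilar)
    # triplets with a hinge (large-margin) loss, one triplet per step.
    rng = np.random.default_rng(seed)
    M = np.eye(dim)                          # start from the Euclidean metric
    for _ in range(epochs):
        for i in rng.permutation(len(triplets)):
            x, p, n = triplets[i]
            dp, dn = x - p, x - n
            # The hinge term is active when the dissimilar point n is not
            # at least `margin` farther from x than the similar point p.
            if margin + dp @ M @ dp - dn @ M @ dn > 0:
                # Gradient of the active term w.r.t. M is the rank-two
                # matrix dp dp^T - dn dn^T; no full-gradient pass needed.
                M -= lr * (np.outer(dp, dp) - np.outer(dn, dn))
                M = psd_project(M)           # stay inside the PSD cone
    return M

In a typical use, one would build triplets (x_i, x_j, x_k) from labeled data, with x_j sharing the class of x_i and x_k from a different class, and call sgd_metric_learning(triplets, dim=X.shape[1]). The per-step eigendecomposition in psd_project is exactly what makes naive projected SGD expensive in high dimensions, which motivates the more efficient algorithm the paper develops.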

Downloads

  • (...).pdf: full text | UGent only | PDF | 1.18 MB

Citation


Chicago
Nguyen Cong, Bac, Carlos Morell, and Bernard De Baets. 2019. “Scalable Large-margin Distance Metric Learning Using Stochastic Gradient Descent.” IEEE Transactions on Cybernetics.
APA
Nguyen Cong, B., Morell, C., & De Baets, B. (2019). Scalable large-margin distance metric learning using stochastic gradient descent. IEEE TRANSACTIONS ON CYBERNETICS.
Vancouver
1. Nguyen Cong B, Morell C, De Baets B. Scalable large-margin distance metric learning using stochastic gradient descent. IEEE TRANSACTIONS ON CYBERNETICS. 2019;
MLA
Nguyen Cong, Bac, Carlos Morell, and Bernard De Baets. “Scalable Large-margin Distance Metric Learning Using Stochastic Gradient Descent.” IEEE TRANSACTIONS ON CYBERNETICS (2019): n. pag. Print.
@article{8585899,
  abstract     = {The key to the success of many machine learning and pattern recognition algorithms is how distances between input data are computed. In this paper, we propose a large-margin-based approach, called large-margin distance metric learning (LMDML), for learning a Mahalanobis distance metric. LMDML employs the principle of margin maximization to learn the distance metric with the goal of improving k-nearest-neighbor classification. The main challenge of distance metric learning is the positive semidefiniteness constraint on the Mahalanobis matrix. Semidefinite programming is commonly used to enforce this constraint, but it becomes computationally intractable on large-scale data sets. To overcome this limitation, we develop an efficient algorithm based on stochastic gradient descent. Our algorithm avoids computing the full gradient and ensures that the learned matrix remains within the positive semidefinite cone after each iteration. Extensive experiments show that the proposed algorithm scales to large data sets and outperforms other state-of-the-art distance metric learning approaches in terms of both classification accuracy and training time.},
  author       = {Nguyen Cong, Bac and Morell, Carlos and De Baets, Bernard},
  issn         = {2168-2267},
  journal      = {IEEE TRANSACTIONS ON CYBERNETICS},
  language     = {eng},
  title        = {Scalable large-margin distance metric learning using stochastic gradient descent},
  url          = {http://dx.doi.org/10.1109/tcyb.2018.2881417},
  year         = {2019},
}
