
Parallel one-versus-rest SVM training on the GPU

Abstract
Linear SVMs are a popular choice of binary classifier. It is often necessary to train many different classifiers on a multiclass dataset in a one-versus-rest fashion, and to do so for several values of the regularization constant. We propose to harness GPU parallelism by training as many classifiers as possible at the same time. We optimize the primal L2-loss SVM objective using the conjugate gradient method with an adapted backtracking line search strategy. We compared our approach to liblinear and achieved speedups of up to 17 times on our available hardware.
Keywords
support vector machine, SVM, GPU
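
The abstract describes batching: all one-versus-rest classifiers (and all regularization settings) share the same data matrix, so the primal L2-loss objective, its gradient, and the line search can be evaluated for every classifier at once as dense matrix operations, which map naturally onto a GPU. Below is a minimal NumPy sketch of that batching idea. It is not the paper's implementation: it uses plain gradient descent with a vectorized Armijo backtracking search in place of the paper's conjugate gradient updates, and all function and variable names are illustrative.

```python
import numpy as np

def objective(W, X, Y, C):
    """Primal L2-loss SVM objective, one value per classifier, shape (K,).

    W: (d, K) weight vectors, one column per one-vs-rest classifier.
    X: (n, d) shared data matrix.  Y: (n, K) labels in {-1, +1}.
    """
    slack = np.maximum(0.0, 1.0 - Y * (X @ W))            # (n, K) hinge slack
    return 0.5 * np.einsum('dk,dk->k', W, W) + C * np.einsum('nk,nk->k', slack, slack)

def gradient(W, X, Y, C):
    """Gradient of the L2-loss objective for all K classifiers at once, (d, K)."""
    slack = np.maximum(0.0, 1.0 - Y * (X @ W))
    return W - 2.0 * C * (X.T @ (Y * slack))

def train_ovr(X, labels, C=1.0, iters=200):
    """Train all one-vs-rest classifiers simultaneously (gradient-descent sketch)."""
    n, d = X.shape
    classes = np.unique(labels)
    K = classes.size
    # One +1/-1 label column per class: the one-vs-rest encoding.
    Y = np.where(labels[:, None] == classes[None, :], 1.0, -1.0)
    W = np.zeros((d, K))
    for _ in range(iters):
        g = gradient(W, X, Y, C)
        f0 = objective(W, X, Y, C)
        t = np.ones(K)                                    # per-classifier step sizes
        # Backtracking line search, vectorized over classifiers: halve the step
        # only for the columns whose Armijo condition is not yet satisfied.
        for _ in range(30):
            f_new = objective(W - t * g, X, Y, C)
            shrink = f_new > f0 - 0.5 * t * np.sum(g * g, axis=0)
            if not shrink.any():
                break
            t[shrink] *= 0.5
        W = W - t * g
    return W, classes
```

The key point is that every expensive step (`X @ W`, `X.T @ (...)`, and the line-search objective evaluations) is a single dense operation over all classifiers; in the paper these run on the GPU, and the columns of `W` could equally be (class, C-value) pairs to cover several regularization constants in the same batch.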

Downloads

  • biglearning2012 submission 6.pdf — full text, open access, PDF, 208.77 KB

Citation


MLA
Dieleman, Sander, et al. “Parallel One-versus-Rest SVM Training on the GPU.” Big Learning : Algorithms, Systems, and Tools, Proceedings, edited by Sameer Singh et al., 2012.
APA
Dieleman, S., van den Oord, A., & Schrauwen, B. (2012). Parallel one-versus-rest SVM training on the GPU. In S. Singh, J. Duchi, Y. Low, & J. Gonzalez (Eds.), Big Learning : Algorithms, Systems, and Tools, Proceedings.
Chicago author-date
Dieleman, Sander, Aäron van den Oord, and Benjamin Schrauwen. 2012. “Parallel One-versus-Rest SVM Training on the GPU.” In Big Learning : Algorithms, Systems, and Tools, Proceedings, edited by Sameer Singh, John Duchi, Yucheng Low, and Joseph Gonzalez.
Vancouver
1. Dieleman S, van den Oord A, Schrauwen B. Parallel one-versus-rest SVM training on the GPU. In: Singh S, Duchi J, Low Y, Gonzalez J, editors. Big Learning : Algorithms, Systems, and Tools, Proceedings. 2012.
IEEE
[1] S. Dieleman, A. van den Oord, and B. Schrauwen, “Parallel one-versus-rest SVM training on the GPU,” in Big Learning : Algorithms, Systems, and Tools, Proceedings, Lake Tahoe, Nevada, USA, 2012.
@inproceedings{3118534,
  abstract     = {{Linear SVMs are a popular choice of binary classifier. It is often necessary to train many different classifiers on a multiclass dataset in a one-versus-rest fashion, and this for several values of the regularization constant. We propose to harness GPU parallelism by training as many classifiers as possible at the same time. We optimize the primal L2-loss SVM objective using the conjugate gradient method, with an adapted backtracking line search strategy. We compared our approach to liblinear and achieved speedups of up to 17 times on our available hardware.}},
  author       = {{Dieleman, Sander and van den Oord, Aäron and Schrauwen, Benjamin}},
  booktitle    = {{Big Learning : Algorithms, Systems, and Tools, Proceedings}},
  editor       = {{Singh, Sameer and Duchi, John and Low, Yucheng and Gonzalez, Joseph}},
  keywords     = {{support vector machine, SVM, GPU}},
  language     = {{eng}},
  location     = {{Lake Tahoe, Nevada, USA}},
  pages        = {{5}},
  title        = {{Parallel one-versus-rest SVM training on the GPU}},
  url          = {{http://www.biglearn.org/2012/files/papers/biglearning2012_submission_6.pdf}},
  year         = {{2012}},
}