
Flexible performant GEMM kernels on GPUs

Thomas Faingnaert (UGent), Tim Besard (UGent), and Bjorn De Sutter (UGent)
Abstract
General Matrix Multiplication (GEMM) kernels take centre stage in high-performance computing and machine learning. Recent NVIDIA GPUs include GEMM accelerators, such as NVIDIA’s Tensor Cores. Their exploitation is hampered by the two-language problem: it requires either low-level programming, which implies low programmer productivity, or the use of libraries that offer only a limited set of components. Because rephrasing algorithms in terms of established components often introduces overhead, the libraries’ lack of flexibility limits the freedom to explore new algorithms. Researchers using GEMMs hence cannot enjoy programming productivity, high performance, and research flexibility at once. In this paper we solve this problem. We present three sets of abstractions and interfaces to program GEMMs within the scientific Julia programming language. The interfaces and abstractions are co-designed for researchers’ needs and Julia’s features to achieve sufficient separation of concerns and the flexibility to easily extend basic GEMMs in many different ways without paying a performance price. Comparing our GEMMs to the state-of-the-art libraries cuBLAS and CUTLASS, we demonstrate that our performance is in the same ballpark as that of the libraries, and in some cases even exceeds it, without having to write a single line of code in CUDA C++ or assembly, and without facing flexibility limitations.
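To give a concrete impression of the kind of code at stake, the sketch below shows how a mixed-precision GEMM is typically invoked from Julia through the CUDA.jl package, which dispatches to cuBLAS, one of the baseline libraries in the paper. This is an illustrative sketch only: it uses standard CUDA.jl and LinearAlgebra names rather than the paper's own abstractions, and it assumes the mixed Float16/Float32 case is routed through cuBLAS's mixed-precision GEMM path.

using CUDA, LinearAlgebra

m, n, k = 2048, 2048, 2048
A = CuArray(rand(Float16, m, k))   # FP16 inputs are eligible for Tensor Cores
B = CuArray(rand(Float16, k, n))
C = CUDA.zeros(Float32, m, n)      # FP32 accumulator, the usual mixed-precision setup

mul!(C, A, B)                      # computes C = A*B in place; CUDA.jl routes CuArray GEMMs to cuBLAS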
Keywords
Computational Theory and Mathematics, Hardware and Architecture, Signal Processing, Libraries, Kernel, Graphics processing units, Codes, Programming, Instruction sets, Productivity, Matrix multiplication, graphics processors, high-level programming languages, TENSOR CONTRACTION

Downloads

  • main.pdf: full text (Accepted manuscript) | open access | PDF | 6.96 MB
  • (...).pdf: full text (Published version) | UGent only | PDF | 2.53 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Faingnaert, Thomas, et al. “Flexible Performant GEMM Kernels on GPUs.” IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, vol. 33, no. 9, 2022, pp. 2230–48, doi:10.1109/tpds.2021.3136457.
APA
Faingnaert, T., Besard, T., & De Sutter, B. (2022). Flexible performant GEMM kernels on GPUs. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 33(9), 2230–2248. https://doi.org/10.1109/tpds.2021.3136457
Chicago author-date
Faingnaert, Thomas, Tim Besard, and Bjorn De Sutter. 2022. “Flexible Performant GEMM Kernels on GPUs.” IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS 33 (9): 2230–48. https://doi.org/10.1109/tpds.2021.3136457.
Chicago author-date (all authors)
Faingnaert, Thomas, Tim Besard, and Bjorn De Sutter. 2022. “Flexible Performant GEMM Kernels on GPUs.” IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS 33 (9): 2230–2248. doi:10.1109/tpds.2021.3136457.
Vancouver
1. Faingnaert T, Besard T, De Sutter B. Flexible performant GEMM kernels on GPUs. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS. 2022;33(9):2230–48.
IEEE
[1] T. Faingnaert, T. Besard, and B. De Sutter, “Flexible performant GEMM kernels on GPUs,” IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, vol. 33, no. 9, pp. 2230–2248, 2022.
@article{8741713,
  author       = {{Faingnaert, Thomas and Besard, Tim and De Sutter, Bjorn}},
  issn         = {{1045-9219}},
  journal      = {{IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS}},
  keywords     = {{Computational Theory and Mathematics,Hardware and Architecture,Signal Processing,Libraries,Kernel,Graphics processing units,Codes,Programming,Instruction sets,Productivity,Matrix multiplication,graphics processors,high-level programming languages,TENSOR CONTRACTION}},
  language     = {{eng}},
  number       = {{9}},
  pages        = {{2230--2248}},
  title        = {{Flexible performant GEMM kernels on GPUs}},
  url          = {{https://doi.org/10.1109/tpds.2021.3136457}},
  volume       = {{33}},
  year         = {{2022}},
}
