STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU
- Author
- Bingyao Li, Yueqi Wang, Tianyu Wang, Lieven Eeckhout (UGent), Jun Yang, Aamer Jaleel and Xulong Tang
- Organization
- Abstract
- NVIDIA's Multi-Instance GPU (MIG) technology enables partitioning GPU computing power and memory into separate hardware instances, providing complete isolation including compute resources, caches, and memory. However, prior work identifies that MIG does not partition the last-level TLB (i.e., L3 TLB), which remains shared among all instances. To enhance TLB reach, NVIDIA GPUs reorganized the TLB structure with 16 sub-entries in each L3 TLB entry that have a one-to-one mapping to the address translations for 16 pages of size 64KB located within the same 1MB aligned range. Our comprehensive investigation of address translation efficiency in MIG identifies two main issues caused by L3 TLB sharing interference: (i) it results in performance degradation for co-running applications, and (ii) TLB sub-entries are not fully utilized before eviction. Based on this observation, we propose STAR to improve the utilization of TLB sub-entries through dynamic sharing of TLB entries across multiple base addresses. STAR evaluates TLB entries based on their sub-entry utilization to optimize address translation storage, dynamically adjusting between a shared and non-shared state to cater to current demand. We show that STAR improves overall performance by an average of 28.7% across various multi-tenant workloads.
- Keywords
- multi-instance GPU, sub-entry TLB, PAGE, MEMORY
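The abstract describes each L3 TLB entry as covering a 1MB-aligned virtual range with 16 sub-entries, one per 64KB page in that range, and STAR as letting an entry dynamically serve more than one base address when sub-entries would otherwise go unused. The C sketch below only illustrates that organization as described in the abstract; the field names, the two-base "shared" mode, and the lookup logic are illustrative assumptions, not the paper's actual hardware design.

```c
/*
 * Illustrative sketch of a sub-entry L3 TLB entry: a 1MB-aligned region tag
 * plus 16 sub-entries (one per 64KB page). The shared mode with a second
 * base tag is an assumption modeled loosely on the abstract's description.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SUBENTRIES_PER_ENTRY 16   /* 1MB region / 64KB page size */

typedef struct {
    uint64_t base_tag[2];   /* 1MB-aligned region tags; [1] used only when shared (assumed) */
    uint16_t valid_mask[2]; /* one valid bit per 64KB sub-entry, per base */
    uint64_t ppn[2][SUBENTRIES_PER_ENTRY]; /* translation for each sub-entry */
    bool     shared;        /* STAR-style state: entry serves two bases */
} l3_tlb_entry_t;

/* Split a virtual address into (1MB region tag, 64KB sub-entry index). */
static inline uint64_t region_tag(uint64_t va) { return va >> 20; }
static inline unsigned sub_index(uint64_t va)  { return (unsigned)(va >> 16) & 0xF; }

/* Look up a translation; returns true on hit and writes the PPN. */
static bool tlb_lookup(const l3_tlb_entry_t *e, uint64_t va, uint64_t *ppn_out)
{
    unsigned idx = sub_index(va);
    int ways = e->shared ? 2 : 1;   /* shared entries track two base tags */
    for (int b = 0; b < ways; b++) {
        if (e->base_tag[b] == region_tag(va) && (e->valid_mask[b] & (1u << idx))) {
            *ppn_out = e->ppn[b][idx];
            return true;
        }
    }
    return false;
}

int main(void)
{
    /* Populate one sub-entry and probe it. */
    l3_tlb_entry_t e = { .shared = false };
    uint64_t va = 0x7f1230000ULL;
    e.base_tag[0]   = region_tag(va);
    e.valid_mask[0] = (uint16_t)(1u << sub_index(va));
    e.ppn[0][sub_index(va)] = 0xABCD;

    uint64_t ppn = 0;
    printf("hit=%d ppn=0x%llx\n",
           tlb_lookup(&e, va, &ppn), (unsigned long long)ppn);
    return 0;
}
```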
Downloads
- MICRO2024-STAR.pdf: full text (Accepted manuscript) | open access | 1.21 MB
- (...).pdf: full text (Published version) | UGent only | 562.94 KB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-01JX2MSJXKVPAVG0JG7BNQQ239
- MLA
- Li, Bingyao, et al. “STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU.” 2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO, IEEE COMPUTER SOC, 2024, pp. 309–23, doi:10.1109/MICRO61859.2024.00031.
- APA
- Li, B., Wang, Y., Wang, T., Eeckhout, L., Yang, J., Jaleel, A., & Tang, X. (2024). STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU. 2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO, 309–323. https://doi.org/10.1109/MICRO61859.2024.00031
- Chicago author-date
- Li, Bingyao, Yueqi Wang, Tianyu Wang, Lieven Eeckhout, Jun Yang, Aamer Jaleel, and Xulong Tang. 2024. “STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU.” In 2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO, 309–23. LOS ALAMITOS: IEEE COMPUTER SOC. https://doi.org/10.1109/MICRO61859.2024.00031.
- Chicago author-date (all authors)
- Li, Bingyao, Yueqi Wang, Tianyu Wang, Lieven Eeckhout, Jun Yang, Aamer Jaleel, and Xulong Tang. 2024. “STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU.” In 2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO, 309–323. LOS ALAMITOS: IEEE COMPUTER SOC. doi:10.1109/MICRO61859.2024.00031.
- Vancouver
- 1. Li B, Wang Y, Wang T, Eeckhout L, Yang J, Jaleel A, et al. STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU. In: 2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO. LOS ALAMITOS: IEEE COMPUTER SOC; 2024. p. 309–23.
- IEEE
- [1] B. Li et al., “STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU,” in 2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO, Austin, TX, 2024, pp. 309–323.
@inproceedings{01JX2MSJXKVPAVG0JG7BNQQ239,
abstract = {{NVIDIA's Multi-Instance GPU (MIG) technology enables partitioning GPU computing power and memory into separate hardware instances, providing complete isolation including compute resources, caches, and memory. However, prior work identifies that MIG does not partition the last-level TLB (i.e., L3 TLB), which remains shared among all instances. To enhance TLB reach, NVIDIA GPUs reorganized the TLB structure with 16 sub-entries in each L3 TLB entry that have a one-to-one mapping to the address translations for 16 pages of size 64KB located within the same 1MB aligned range. Our comprehensive investigation of address translation efficiency in MIG identifies two main issues caused by L3 TLB sharing interference: (i) it results in performance degradation for co-running applications, and (ii) TLB sub-entries are not fully utilized before eviction. Based on this observation, we propose STAR to improve the utilization of TLB sub-entries through dynamic sharing of TLB entries across multiple base addresses. STAR evaluates TLB entries based on their sub-entry utilization to optimize address translation storage, dynamically adjusting between a shared and non-shared state to cater to current demand. We show that STAR improves overall performance by an average of 28.7% across various multi-tenant workloads.}},
author = {{Li, Bingyao and Wang, Yueqi and Wang, Tianyu and Eeckhout, Lieven and Yang, Jun and Jaleel, Aamer and Tang, Xulong}},
booktitle = {{2024 57TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO}},
isbn = {{9798350350586}},
issn = {{1072-4451}},
keywords = {{multi-instance GPU,sub-entry TLB,PAGE,MEMORY}},
language = {{eng}},
location = {{Austin, TX}},
pages = {{309--323}},
publisher = {{IEEE COMPUTER SOC}},
title = {{STAR : Sub-Entry Sharing-Aware TLB for Multi-Instance GPU}},
url = {{http://doi.org/10.1109/MICRO61859.2024.00031}},
year = {{2024}},
}