
A multi-agent Q-learning-based framework for achieving fairness in HTTP adaptive streaming

Stefano Petrangeli (UGent), Maxim Claeys (UGent), Jeroen Famaey (UGent), Filip De Turck (UGent) and Steven Latré (UGent)
Abstract
HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for Over-The-Top video streaming. In HAS, each video is temporally segmented and stored in different quality levels. Quality selection heuristics, deployed at the video player, allow dynamically requesting the most appropriate quality level based on the current network conditions. Today's heuristics are deterministic and static, and thus not able to perform well under highly dynamic network conditions. Moreover, in a multi-client scenario, issues concerning fairness among clients arise, meaning that different clients negatively influence each other as they compete for the same bandwidth. In this article, we propose a Reinforcement Learning-based quality selection algorithm able to achieve fairness in a multi-client setting. A key element of this approach is a coordination proxy in charge of facilitating the coordination among clients. The strength of this approach is three-fold. First, the algorithm is able to learn and adapt its policy depending on network conditions, unlike current HAS heuristics. Second, fairness is achieved without explicit communication among agents and thus no significant overhead is introduced into the network. Third, no modifications to the standard HAS architecture are required. By evaluating this novel approach through simulations, under mutable network conditions and in several multi-client scenarios, we are able to show how the proposed approach can improve system fairness up to 60% compared to current HAS heuristics.
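The mechanism the abstract describes can be illustrated with a minimal sketch: a tabular Q-learning agent at each client picks a quality level, and its reward blends its own quality with a fairness term derived from an aggregate signal that a coordination proxy would supply. All names (`QualityAgent`, `fairness_signal`, the reward weighting) are hypothetical illustrations, not the paper's actual algorithm or parameters.

```python
import random

class QualityAgent:
    """Illustrative tabular Q-learning agent for quality selection.

    State: a discretized bandwidth level. Action: a quality-level index.
    This is a sketch of the general technique, not the paper's method.
    """

    def __init__(self, n_qualities, n_bw_states, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_qualities = n_qualities
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q-table indexed by (bandwidth_state, quality_action)
        self.q = [[0.0] * n_qualities for _ in range(n_bw_states)]

    def select(self, bw_state):
        # Epsilon-greedy exploration over quality levels
        if random.random() < self.epsilon:
            return random.randrange(self.n_qualities)
        row = self.q[bw_state]
        return max(range(self.n_qualities), key=row.__getitem__)

    def update(self, bw_state, action, reward, next_bw_state):
        # Standard one-step Q-learning update
        best_next = max(self.q[next_bw_state])
        td_target = reward + self.gamma * best_next
        self.q[bw_state][action] += self.alpha * (td_target - self.q[bw_state][action])


def reward(own_quality, fairness_signal, weight=2.0):
    # Trade off the client's own quality against its deviation from the
    # proxy-reported average quality; the second term is the fairness
    # incentive (weighting is an arbitrary choice for this sketch).
    return own_quality - weight * abs(own_quality - fairness_signal)
```

With a large fairness weight, an agent converges toward the quality level closest to the proxy's aggregate signal rather than greedily maximizing its own quality — which is the intuition behind fairness without explicit inter-agent communication: each client only ever sees the proxy's scalar signal.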
Keywords
IBCN

Downloads

  • (...).pdf — full text | UGent only | PDF | 399.44 KB
  • (...).pdf — full text | UGent only | PDF | 505.07 KB

Citation


MLA
Petrangeli, Stefano, Maxim Claeys, Jeroen Famaey, et al. “A Multi-agent Q-learning-based Framework for Achieving Fairness in HTTP Adaptive Streaming.” IEEE IFIP Network Operations and Management Symposium. 2014. 1–9. Print.
APA
Petrangeli, S., Claeys, M., Famaey, J., De Turck, F., & Latré, S. (2014). A multi-agent Q-learning-based framework for achieving fairness in HTTP adaptive streaming. IEEE IFIP Network Operations and Management Symposium (pp. 1–9). Presented at the 14th IEEE/IFIP Network Operations and Management Symposium (NOMS).
Chicago author-date
Petrangeli, Stefano, Maxim Claeys, Jeroen Famaey, Filip De Turck, and Steven Latré. 2014. “A Multi-agent Q-learning-based Framework for Achieving Fairness in HTTP Adaptive Streaming.” In IEEE IFIP Network Operations and Management Symposium, 1–9.
Chicago author-date (all authors)
Petrangeli, Stefano, Maxim Claeys, Jeroen Famaey, Filip De Turck, and Steven Latré. 2014. “A Multi-agent Q-learning-based Framework for Achieving Fairness in HTTP Adaptive Streaming.” In IEEE IFIP Network Operations and Management Symposium, 1–9.
Vancouver
1. Petrangeli S, Claeys M, Famaey J, De Turck F, Latré S. A multi-agent Q-learning-based framework for achieving fairness in HTTP adaptive streaming. IEEE IFIP Network Operations and Management Symposium. 2014. p. 1–9.
IEEE
[1] S. Petrangeli, M. Claeys, J. Famaey, F. De Turck, and S. Latré, “A multi-agent Q-learning-based framework for achieving fairness in HTTP adaptive streaming,” in IEEE IFIP Network Operations and Management Symposium, Krakow, Poland, 2014, pp. 1–9.
@inproceedings{4402534,
  abstract     = {HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for Over-The-Top video streaming. In HAS, each video is temporally segmented and stored in different quality levels. Quality selection heuristics, deployed at the video player, allow dynamically requesting the most appropriate quality level based on the current network conditions. Today's heuristics are deterministic and static, and thus not able to perform well under highly dynamic network conditions. Moreover, in a multi-client scenario, issues concerning fairness among clients arise, meaning that different clients negatively influence each other as they compete for the same bandwidth. In this article, we propose a Reinforcement Learning-based quality selection algorithm able to achieve fairness in a multi-client setting. A key element of this approach is a coordination proxy in charge of facilitating the coordination among clients. The strength of this approach is three-fold. First, the algorithm is able to learn and adapt its policy depending on network conditions, unlike current HAS heuristics. Second, fairness is achieved without explicit communication among agents and thus no significant overhead is introduced into the network. Third, no modifications to the standard HAS architecture are required. By evaluating this novel approach through simulations, under mutable network conditions and in several multi-client scenarios, we are able to show how the proposed approach can improve system fairness up to 60% compared to current HAS heuristics.},
  author       = {Petrangeli, Stefano and Claeys, Maxim and Famaey, Jeroen and De Turck, Filip and Latré, Steven},
  booktitle    = {IEEE IFIP Network Operations and Management Symposium},
  isbn         = {9781479909131},
  issn         = {1542-1201},
  keywords     = {IBCN},
  language     = {eng},
  location     = {Krakow, Poland},
  pages        = {1--9},
  title        = {A multi-agent {Q}-learning-based framework for achieving fairness in {HTTP} adaptive streaming},
  year         = {2014},
}
