
Contextual bandit learning-based viewport prediction for 360 video

Joris Heyse (UGent) , Maria Torres Vega (UGent) , Femke De Backere (UGent) and Filip De Turck (UGent)
Abstract
Accurately predicting where the user of a Virtual Reality (VR) application will be looking in the near future improves the perceived quality of services such as adaptive tile-based streaming or personalized online training. However, because of the unpredictability and dissimilarity of user behavior, this remains a significant challenge. In this work, we propose to use reinforcement learning, in particular contextual bandits, to solve this problem. The proposed solution tackles the prediction in two stages: (1) detection of movement; (2) prediction of direction. In order to prove its potential for VR services, the method was deployed on an adaptive tile-based VR streaming testbed and benchmarked against a 3D trajectory extrapolation approach. Our results showed a significant improvement in prediction error compared to the benchmark. This reduced prediction error also resulted in an enhancement of the perceived video quality.
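The two-stage idea from the abstract can be illustrated with a minimal epsilon-greedy contextual bandit. This is a hypothetical sketch, not the authors' implementation: the context encoding (a discretized recent-head-motion state), the arm set of candidate viewport directions, and the epsilon-greedy policy are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Candidate arms: predicted viewport movement directions (assumed set).
ARMS = ["none", "left", "right", "up", "down"]

class ContextualBandit:
    """Epsilon-greedy contextual bandit over discrete contexts (sketch)."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        # Per-(context, arm) running statistics: pull count and mean reward.
        self.counts = defaultdict(int)
        self.values = defaultdict(float)

    def select(self, context):
        """Pick a direction: explore with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.choice(ARMS)
        return max(ARMS, key=lambda a: self.values[(context, a)])

    def update(self, context, arm, reward):
        """Incremental mean-reward update for the chosen (context, arm) pair.

        The reward would come from comparing the predicted direction with
        the user's actual head movement in the next interval.
        """
        key = (context, arm)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

In a streaming pipeline, stage (1) would first decide whether any movement is expected at all; only then would a bandit like the one above be consulted for stage (2), the direction.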
Keywords
learning (artificial intelligence), quality of service, video signal processing, video streaming, virtual reality, visual perception, contextual bandit learning-based viewport prediction, 360 video, Virtual Reality application, reinforcement learning, VR services, adaptive tile-based VR streaming testbed, prediction error, perceived video quality, 3D trajectory extrapolation, quality of services, Adaptive 360 video streaming, contextual bandit, VR, Information system, Information systems applications, Multimedia information systems, Multimedia streaming, Human computer interaction (HCI), Interaction paradigms, Virtual reality

Downloads

  • (...).pdf — full text (Published version) | UGent only | PDF | 1.07 MB
  • 7527 i.pdf — full text (Accepted manuscript) | open access | PDF | 510.36 KB

Citation

Please use this url to cite or link to this publication:

MLA
Heyse, Joris, et al. “Contextual Bandit Learning-Based Viewport Prediction for 360 Video.” 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, 2019, pp. 972–73.
APA
Heyse, J., Torres Vega, M., De Backere, F., & De Turck, F. (2019). Contextual bandit learning-based viewport prediction for 360 video. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 972–973). New York: IEEE.
Chicago author-date
Heyse, Joris, Maria Torres Vega, Femke De Backere, and Filip De Turck. 2019. “Contextual Bandit Learning-Based Viewport Prediction for 360 Video.” In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 972–73. New York: IEEE.
Chicago author-date (all authors)
Heyse, Joris, Maria Torres Vega, Femke De Backere, and Filip De Turck. 2019. “Contextual Bandit Learning-Based Viewport Prediction for 360 Video.” In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 972–973. New York: IEEE.
Vancouver
1.
Heyse J, Torres Vega M, De Backere F, De Turck F. Contextual bandit learning-based viewport prediction for 360 video. In: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). New York: IEEE; 2019. p. 972–3.
IEEE
[1]
J. Heyse, M. Torres Vega, F. De Backere, and F. De Turck, “Contextual bandit learning-based viewport prediction for 360 video,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 972–973.
@inproceedings{8627946,
  abstract     = {Accurately predicting where the user of a Virtual Reality (VR) application will be looking in the near future improves the perceived quality of services such as adaptive tile-based streaming or personalized online training. However, because of the unpredictability and dissimilarity of user behavior, this remains a significant challenge. In this work, we propose to use reinforcement learning, in particular contextual bandits, to solve this problem. The proposed solution tackles the prediction in two stages: (1) detection of movement; (2) prediction of direction. In order to prove its potential for VR services, the method was deployed on an adaptive tile-based VR streaming testbed and benchmarked against a 3D trajectory extrapolation approach. Our results showed a significant improvement in prediction error compared to the benchmark. This reduced prediction error also resulted in an enhancement of the perceived video quality.},
  author       = {Heyse, Joris and Torres Vega, Maria and De Backere, Femke and De Turck, Filip},
  booktitle    = {2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
  isbn         = {9781728113777},
  issn         = {2642-5254},
  keywords     = {learning (artificial intelligence),quality of service,video signal processing,video streaming,virtual reality,visual perception,contextual bandit learning-based viewport prediction,360 video,Virtual Reality application,reinforcement learning,VR services,adaptive tile-based VR streaming testbed,prediction error,perceived video quality,3D trajectory extrapolation,quality of services,Adaptive 360 video streaming,contextual bandit,VR,Information system,Information systems applications,Multimedia information systems,Multimedia streaming,Human computer interaction (HCI),Interaction paradigms,Virtual reality},
  language     = {eng},
  location     = {Osaka, Japan},
  pages        = {972--973},
  publisher    = {IEEE},
  title        = {Contextual bandit learning-based viewport prediction for 360 video},
  url          = {http://dx.doi.org/10.1109/VR.2019.8797830},
  year         = {2019},
}
