Overview of MV-HEVC prediction structures for light field video
- Author
- Vasileios Avramelos, Glenn Van Wallendael (UGent) and Peter Lambert (UGent)
- Abstract
- Light field video is a promising technology for delivering the required six-degrees-of-freedom for natural content in virtual reality. Already existing multi-view coding (MVC) and multi-view plus depth (MVD) formats, such as MV-HEVC and 3D-HEVC, are the most conventional light field video coding solutions since they can compress video sequences captured simultaneously from multiple camera angles. 3D-HEVC treats a single view as a video sequence and the other sub-aperture views as gray-scale disparity (depth) maps. On the other hand, MV-HEVC treats each view as a separate video sequence, which allows the use of motion compensated algorithms similar to HEVC. While MV-HEVC and 3D-HEVC provide similar results, MV-HEVC does not require any disparity maps to be readily available, and it has a more straightforward implementation since it only uses syntax elements rather than additional prediction tools for inter-view prediction. However, there are many degrees of freedom in choosing an appropriate structure and it is currently still unknown which one is optimal for a given set of application requirements. In this work, various prediction structures for MV-HEVC are implemented and tested. The findings reveal the trade-off between compression gains, distortion and random access capabilities in MV-HEVC light field video coding. The results give an overview of the best-performing solutions developed in the context of this work, and prediction structure algorithms proposed in state-of-the-art literature. This overview provides a useful benchmark for future development of light field video coding solutions.
- Keywords
- Light field video coding, multi-view video coding, MV-HEVC, prediction structures
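
To make the prediction-structure trade-off described in the abstract more tangible, the sketch below is a hypothetical illustration, not code from the paper or from the MV-HEVC reference software. It models an MV-HEVC-style inter-view prediction structure as a directed graph of view dependencies and compares, for two simple layouts, the worst-case number of extra views that must be decoded before a target view becomes available, a rough proxy for the random-access cost mentioned in the abstract. The helper names (`sequential_structure`, `central_structure`, `views_needed`) are assumptions introduced only for this example.

```python
# Hypothetical illustration: in MV-HEVC, inter-view prediction is expressed
# purely through reference picture lists (syntax), so a prediction structure
# can be summarised as a directed graph: view -> views it may reference.

from collections import deque


def sequential_structure(num_views):
    """Each view references its left neighbour; view 0 is the base view."""
    return {v: ([] if v == 0 else [v - 1]) for v in range(num_views)}


def central_structure(num_views):
    """All views reference a single central base view (star topology)."""
    base = num_views // 2
    return {v: ([] if v == base else [base]) for v in range(num_views)}


def views_needed(structure, target):
    """Count the views that must be decoded before `target` is available,
    i.e. the transitive closure of its inter-view references."""
    needed, queue = set(), deque([target])
    while queue:
        view = queue.popleft()
        for ref in structure[view]:
            if ref not in needed:
                needed.add(ref)
                queue.append(ref)
    return len(needed)


if __name__ == "__main__":
    n = 9  # e.g. one row of a 9x9 grid of sub-aperture views
    for name, build in [("sequential", sequential_structure),
                        ("central", central_structure)]:
        structure = build(n)
        worst = max(views_needed(structure, v) for v in range(n))
        print(f"{name:10s} worst-case extra views to decode: {worst}")
```

Under these assumptions, the star-shaped layout keeps random access cheap (at most one extra view to decode) but offers each dependent view only the distant central view as an inter-view reference, whereas the sequential chain provides closer references, which typically favours compression, at the cost of decoding up to n-1 views. This is the kind of trade-off between compression gain, distortion, and random access that the paper's evaluated structures navigate.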
Downloads
- Avramelos V accepted version.pdf: full text (Accepted manuscript), open access, 993.53 KB
- published.pdf: full text (Published version), open access, 1.09 MB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-8627768
- MLA
- Avramelos, Vasileios, et al. “Overview of MV-HEVC Prediction Structures for Light Field Video.” Applications of Digital Image Processing XLII, edited by Andrew G. Tescher and Touradj Ebrahimi, vol. 11137, SPIE, 2019, pp. 301–09, doi:10.1117/12.2529137.
- APA
- Avramelos, V., Van Wallendael, G., & Lambert, P. (2019). Overview of MV-HEVC prediction structures for light field video. In A. G. Tescher & T. Ebrahimi (Eds.), Applications of Digital Image Processing XLII (Vol. 11137, pp. 301–309). SPIE. https://doi.org/10.1117/12.2529137
- Chicago author-date
- Avramelos, Vasileios, Glenn Van Wallendael, and Peter Lambert. 2019. “Overview of MV-HEVC Prediction Structures for Light Field Video.” In Applications of Digital Image Processing XLII, edited by Andrew G. Tescher and Touradj Ebrahimi, 11137:301–9. SPIE. https://doi.org/10.1117/12.2529137.
- Chicago author-date (all authors)
- Avramelos, Vasileios, Glenn Van Wallendael, and Peter Lambert. 2019. “Overview of MV-HEVC Prediction Structures for Light Field Video.” In Applications of Digital Image Processing XLII, edited by Andrew G. Tescher and Touradj Ebrahimi, 11137:301–309. SPIE. doi:10.1117/12.2529137.
- Vancouver
- 1. Avramelos V, Van Wallendael G, Lambert P. Overview of MV-HEVC prediction structures for light field video. In: Tescher AG, Ebrahimi T, editors. Applications of Digital Image Processing XLII. SPIE; 2019. p. 301–9.
- IEEE
- [1] V. Avramelos, G. Van Wallendael, and P. Lambert, “Overview of MV-HEVC prediction structures for light field video,” in Applications of Digital Image Processing XLII, San Diego, USA, 2019, vol. 11137, pp. 301–309.
@inproceedings{8627768,
  abstract     = {{Light field video is a promising technology for delivering the required six-degrees-of-freedom for natural content in virtual reality. Already existing multi-view coding (MVC) and multi-view plus depth (MVD) formats, such as MV-HEVC and 3D-HEVC, are the most conventional light field video coding solutions since they can compress video sequences captured simultaneously from multiple camera angles. 3D-HEVC treats a single view as a video sequence and the other sub-aperture views as gray-scale disparity (depth) maps. On the other hand, MV-HEVC treats each view as a separate video sequence, which allows the use of motion compensated algorithms similar to HEVC. While MV-HEVC and 3D-HEVC provide similar results, MV-HEVC does not require any disparity maps to be readily available, and it has a more straightforward implementation since it only uses syntax elements rather than additional prediction tools for inter-view prediction. However, there are many degrees of freedom in choosing an appropriate structure and it is currently still unknown which one is optimal for a given set of application requirements. In this work, various prediction structures for MV-HEVC are implemented and tested. The findings reveal the trade-off between compression gains, distortion and random access capabilities in MV-HEVC light field video coding. The results give an overview of the best-performing solutions developed in the context of this work, and prediction structure algorithms proposed in state-of-the-art literature. This overview provides a useful benchmark for future development of light field video coding solutions.}},
  articleno    = {{111371F}},
  author       = {{Avramelos, Vasileios and Van Wallendael, Glenn and Lambert, Peter}},
  booktitle    = {{Applications of Digital Image Processing XLII}},
  editor       = {{Tescher, Andrew G. and Ebrahimi, Touradj}},
  isbn         = {{9781510629677}},
  issn         = {{0277-786X}},
  keywords     = {{Light field video coding, multi-view video coding, MV-HEVC, prediction structures}},
  language     = {{eng}},
  location     = {{San Diego, USA}},
  pages        = {{111371F:301--111371F:309}},
  publisher    = {{SPIE}},
  title        = {{Overview of MV-HEVC prediction structures for light field video}},
  url          = {{http://doi.org/10.1117/12.2529137}},
  volume       = {{11137}},
  year         = {{2019}},
}