Ghent University Academic Bibliography


Content-aware objective video quality assessment

Benhur Ortiz Jaramillo UGent, Jorge Niño Castañeda UGent, Ljiljana Platisa UGent and Wilfried Philips UGent (2016) JOURNAL OF ELECTRONIC IMAGING. 25(1). p.1-16
abstract
Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features into the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using those activity levels as parameters of anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal-to-noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its non-content-aware baseline.
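As a rough illustration of the idea described in the abstract, the sketch below combines a simple distortion measure (PSNR) with content-activity indexes. The helper names are hypothetical and the spatial/temporal information measures (SI/TI from ITU-T P.910) stand in for the paper's 105 content-related indexes; the linear combination fitted to subjective scores is an illustrative assumption, not the authors' anthropomorphic distortion models.

import numpy as np
from scipy import ndimage

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between two grayscale frames."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def spatial_information(frame):
    """SI (ITU-T P.910): std. dev. of the Sobel-filtered frame."""
    f = frame.astype(np.float64)
    gx = ndimage.sobel(f, axis=1)
    gy = ndimage.sobel(f, axis=0)
    return np.std(np.hypot(gx, gy))

def temporal_information(prev_frame, frame):
    """TI (ITU-T P.910): std. dev. of the frame difference."""
    return np.std(frame.astype(np.float64) - prev_frame.astype(np.float64))

def content_aware_features(ref_frames, dist_frames):
    """Mean PSNR of the distorted video plus spatial/temporal activity of the reference."""
    psnrs = [psnr(r, d) for r, d in zip(ref_frames, dist_frames)]
    si = max(spatial_information(f) for f in ref_frames)
    ti = max(temporal_information(p, f) for p, f in zip(ref_frames, ref_frames[1:]))
    return np.array([np.mean(psnrs), si, ti])

def fit_content_aware_vqm(X, y):
    """Least-squares fit of subjective scores (MOS) on [PSNR, SI, TI, 1]."""
    coeffs, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return coeffs  # predict PVQ for new videos via np.c_[X_new, 1] @ coeffs

Checking the correlation between such a content-modulated prediction and the recorded subjective scores would mirror, in a very simplified form, the kind of evaluation the paper reports across its five public databases.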
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-7051349
author
Benhur Ortiz Jaramillo, Jorge Niño Castañeda, Ljiljana Platisa and Wilfried Philips
organization
year
2016
type
journalArticle (original)
publication status
published
subject
keyword
user-perceived video quality, video quality assessment, Spatial activity, video content analysis, temporal activity, VISUAL SPEED PERCEPTION, FRAME RATE, MODEL, QUANTIZATION
journal title
JOURNAL OF ELECTRONIC IMAGING
editor
Gaurav Sharma and Karen Egiazarian
volume
25
issue
1
article number
013011
pages
1 - 16
Web of Science type
Article
Web of Science id
000375930700012
JCR category
ENGINEERING, ELECTRICAL & ELECTRONIC
JCR impact factor
0.754 (2016)
JCR rank
212/260 (2016)
JCR quartile
4 (2016)
ISSN
1017-9909
DOI
10.1117/1.JEI.25.1.013011
project
PANORAMA
language
English
UGent publication?
yes
classification
A1
copyright statement
I have retained and own the full copyright for this publication
id
7051349
handle
http://hdl.handle.net/1854/LU-7051349
alternative location
http://electronicimaging.spiedigitallibrary.org/article.aspx?articleid=2484436
date created
2016-01-25 18:34:07
date last changed
2017-09-15 14:03:49
@article{7051349,
  abstract     = {Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features into the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using those activity levels as parameters of anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal-to-noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20\% relative to its non-content-aware baseline.},
  articleno    = {013011},
  author       = {Ortiz Jaramillo, Benhur and Ni{\~n}o Casta{\~n}eda, Jorge and Platisa, Ljiljana and Philips, Wilfried},
  editor       = {Sharma, Gaurav and Egiazarian, Karen},
  issn         = {1017-9909},
  journal      = {JOURNAL OF ELECTRONIC IMAGING},
  keyword      = {user-perceived video quality,video quality assessment,Spatial activity,video content analysis,temporal activity,VISUAL SPEED PERCEPTION,FRAME RATE,MODEL,QUANTIZATION},
  language     = {eng},
  number       = {1},
  pages        = {013011:1--013011:16},
  title        = {Content-aware objective video quality assessment},
  url          = {http://dx.doi.org/10.1117/1.JEI.25.1.013011},
  volume       = {25},
  year         = {2016},
}

Chicago
Ortiz Jaramillo, Benhur, Jorge Niño Castañeda, Ljiljana Platisa, and Wilfried Philips. 2016. “Content-aware Objective Video Quality Assessment.” Ed. Gaurav Sharma and Karen Egiazarian. Journal of Electronic Imaging 25 (1): 1–16.
APA
Ortiz Jaramillo, B., Niño Castañeda, J., Platisa, L., & Philips, W. (2016). Content-aware objective video quality assessment. (G. Sharma & K. Egiazarian, Eds.) JOURNAL OF ELECTRONIC IMAGING, 25(1), 1–16.
Vancouver
1. Ortiz Jaramillo B, Niño Castañeda J, Platisa L, Philips W. Content-aware objective video quality assessment. Sharma G, Egiazarian K, editors. JOURNAL OF ELECTRONIC IMAGING. 2016;25(1):1–16.
MLA
Ortiz Jaramillo, Benhur, Jorge Niño Castañeda, Ljiljana Platisa, et al. “Content-aware Objective Video Quality Assessment.” Ed. Gaurav Sharma & Karen Egiazarian. JOURNAL OF ELECTRONIC IMAGING 25.1 (2016): 1–16. Print.