
Improving relevant subjective testing for validation : comparing machine learning algorithms for finding similarities in VQA datasets using objective measures

Author
Ahmed Aldahdooh, Enrico Masala, Glenn Van Wallendael, Peter Lambert, Marcus Barkowsky
Organization
Abstract
Subjective quality assessment is a necessary activity to validate objective measures or to assess the performance of innovative video processing technologies. However, designing and performing comprehensive tests requires expertise and a large effort, especially for the execution part. In this work we propose a methodology that, given a set of processed video sequences prepared by video quality experts, attempts to reduce the number of subjective tests by selecting a subset of minimum size which is expected to yield the same conclusions as the larger set. To this end, we combine information coming from different types of objective quality metrics with clustering and machine learning algorithms that perform the actual selection, thereby reducing the required subjective assessment effort while trying to preserve the variety of content and conditions needed to ensure the validity of the conclusions. Experiments are conducted on one of the largest publicly available subjectively annotated video sequence datasets. As a performance criterion, we chose the validation criteria for video quality measurement algorithms established by the International Telecommunication Union.
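The paragraph above describes the selection pipeline only at a high level. Purely as an illustration of the general idea (not the authors' implementation), the following Python sketch standardizes per-sequence objective metric scores, clusters them with k-means, and keeps the sequence nearest each centroid as a representative for subjective testing; the metric set, the subset size, and the toy data are assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative sketch only: not the method evaluated in the paper.
rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per processed video sequence (PVS),
# one column per objective quality metric score (e.g., PSNR, SSIM, VMAF).
n_sequences, n_metrics = 200, 3
features = rng.normal(size=(n_sequences, n_metrics))

# Standardize so that no single metric dominates the distance computation.
scaled = StandardScaler().fit_transform(features)

# Cluster the sequences; the subset size (20) is an assumed subjective-test budget.
subset_size = 20
kmeans = KMeans(n_clusters=subset_size, n_init=10, random_state=0).fit(scaled)

# Keep the sequence closest to each cluster centroid as its representative.
selected = []
for c in range(subset_size):
    members = np.where(kmeans.labels_ == c)[0]
    dists = np.linalg.norm(scaled[members] - kmeans.cluster_centers_[c], axis=1)
    selected.append(int(members[np.argmin(dists)]))

print(sorted(selected))  # indices of the PVSs retained for subjective assessment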
Keywords
QUALITY LISTENING TESTS, VIDEO, FRAMEWORK, Subjective testing, Subset selection, Statistical analysis, Video quality assessment, Clustering

Downloads

  • (...).pdf — full text | UGent only | PDF | 2.44 MB

Citation

Please use this url to cite or link to this publication:

MLA
Aldahdooh, Ahmed, et al. “Improving Relevant Subjective Testing for Validation : Comparing Machine Learning Algorithms for Finding Similarities in VQA Datasets Using Objective Measures.” SIGNAL PROCESSING-IMAGE COMMUNICATION, vol. 74, Elsevier Science Bv, 2019, pp. 32–41, doi:10.1016/j.image.2019.01.004.
APA
Aldahdooh, A., Masala, E., Van Wallendael, G., Lambert, P., & Barkowsky, M. (2019). Improving relevant subjective testing for validation : comparing machine learning algorithms for finding similarities in VQA datasets using objective measures. SIGNAL PROCESSING-IMAGE COMMUNICATION, 74, 32–41. https://doi.org/10.1016/j.image.2019.01.004
Chicago author-date
Aldahdooh, Ahmed, Enrico Masala, Glenn Van Wallendael, Peter Lambert, and Marcus Barkowsky. 2019. “Improving Relevant Subjective Testing for Validation : Comparing Machine Learning Algorithms for Finding Similarities in VQA Datasets Using Objective Measures.” SIGNAL PROCESSING-IMAGE COMMUNICATION 74: 32–41. https://doi.org/10.1016/j.image.2019.01.004.
Chicago author-date (all authors)
Aldahdooh, Ahmed, Enrico Masala, Glenn Van Wallendael, Peter Lambert, and Marcus Barkowsky. 2019. “Improving Relevant Subjective Testing for Validation : Comparing Machine Learning Algorithms for Finding Similarities in VQA Datasets Using Objective Measures.” SIGNAL PROCESSING-IMAGE COMMUNICATION 74: 32–41. doi:10.1016/j.image.2019.01.004.
Vancouver
1. Aldahdooh A, Masala E, Van Wallendael G, Lambert P, Barkowsky M. Improving relevant subjective testing for validation : comparing machine learning algorithms for finding similarities in VQA datasets using objective measures. SIGNAL PROCESSING-IMAGE COMMUNICATION. 2019;74:32–41.
IEEE
[1] A. Aldahdooh, E. Masala, G. Van Wallendael, P. Lambert, and M. Barkowsky, “Improving relevant subjective testing for validation : comparing machine learning algorithms for finding similarities in VQA datasets using objective measures,” SIGNAL PROCESSING-IMAGE COMMUNICATION, vol. 74, pp. 32–41, 2019.
@article{8618302,
  abstract     = {{Subjective quality assessment is a necessary activity to validate objective measures or to assess the performance of innovative video processing technologies. However, designing and performing comprehensive tests requires expertise and a large effort especially for the execution part. In this work we propose a methodology that, given a set of processed video sequences prepared by video quality experts, attempts to reduce the number of subjective tests by selecting a subset with minimum size which is expected to yield the same conclusions of the larger set. To this aim, we combine information coming from different types of objective quality metrics with clustering and machine learning algorithms that perform the actual selection, therefore reducing the required subjective assessment effort while trying to preserve the variety of content and conditions needed to ensure the validity of the conclusions. Experiments are conducted on one of the largest publicly available subjectively annotated video sequence dataset. As performance criterion, we chose the validation criteria for video quality measurement algorithms established by the International Telecommunication Union.}},
  author       = {{Aldahdooh, Ahmed and Masala, Enrico and Van Wallendael, Glenn and Lambert, Peter and Barkowsky, Marcus}},
  issn         = {{0923-5965}},
  journal      = {{SIGNAL PROCESSING-IMAGE COMMUNICATION}},
  keywords     = {{QUALITY LISTENING TESTS,VIDEO,FRAMEWORK,Subjective testing,Subset selection,Statistical analysis,Video quality assessment,Clustering}},
  language     = {{eng}},
  pages        = {{32--41}},
  publisher    = {{Elsevier Science Bv}},
  title        = {{Improving relevant subjective testing for validation : comparing machine learning algorithms for finding similarities in VQA datasets using objective measures}},
  url          = {{http://dx.doi.org/10.1016/j.image.2019.01.004}},
  volume       = {{74}},
  year         = {{2019}},
}
