Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement
2 files | 21.43 MB
Author
Esla Timothy Anzaku, Hyesoo Hong, Jin-Woo Park, Wonjun Yang, Kangmin Kim, JongBum Won, Deshika Vinoshani Kumari Herath, Arnout Van Messem, and Wesley De Neve
Abstract
Large-scale datasets for single-label multi-class classification, such as ImageNet-1k, have been instrumental in advancing deep learning and computer vision. However, a critical and often understudied aspect is the comprehensive quality assessment of these datasets, especially regarding potential multi-label annotation errors. In this paper, we introduce a lightweight, user-friendly, and scalable framework that synergizes human and machine intelligence for efficient dataset validation and quality enhancement. We term this novel framework Multilabelfy. Central to Multilabelfy is an adaptable web-based platform that systematically guides annotators through the re-evaluation process, effectively leveraging human-machine interactions to enhance dataset quality. By using Multilabelfy on the ImageNetV2 dataset, we found that approximately 47.88% of the images contained at least two labels, underscoring the need for more rigorous assessments of such influential datasets. Furthermore, our analysis showed a negative correlation between the number of potential labels per image and model top-1 accuracy, illuminating a crucial factor in model evaluation and selection. Our open-source framework, Multilabelfy, offers a convenient, lightweight solution for dataset enhancement, emphasizing multi-label proportions. This study tackles major challenges in dataset integrity and provides key insights into model performance evaluation. Moreover, it underscores the advantages of integrating human expertise with machine capabilities to produce more robust models and trustworthy data development.
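The abstract's two quantitative findings, the roughly 47.88% multi-label share and the negative relationship between the number of potential labels per image and top-1 accuracy, reduce to simple aggregate statistics once per-image annotations are available. The following Python sketch is not taken from the paper or from the Multilabelfy code base; the record format, field names, and toy values are assumptions made purely for illustration of how such numbers could be computed.

from collections import defaultdict
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-image records (field names and values are illustrative only):
# "labels" is the set of classes annotators judged plausible for the image, and
# "top1_correct" flags whether the evaluated model's top-1 prediction was accepted.
records = [
    {"image": "img_0001.jpg", "labels": {"notebook", "laptop"},       "top1_correct": True},
    {"image": "img_0002.jpg", "labels": {"sea snake"},                "top1_correct": True},
    {"image": "img_0003.jpg", "labels": {"desk", "monitor", "mouse"}, "top1_correct": False},
]

# (a) Share of images with at least two plausible labels
# (the paper reports roughly 47.88% for ImageNetV2).
multi_label_share = sum(len(r["labels"]) >= 2 for r in records) / len(records)
print(f"images with >= 2 labels: {multi_label_share:.2%}")

# (b) Top-1 accuracy per label-count group, then the correlation between
# label count and accuracy (the paper reports a negative relationship).
by_count = defaultdict(list)
for r in records:
    by_count[len(r["labels"])].append(r["top1_correct"])

counts = sorted(by_count)
accuracies = [sum(by_count[c]) / len(by_count[c]) for c in counts]
print("label count -> top-1 accuracy:", dict(zip(counts, accuracies)))
print("Pearson r:", correlation(counts, accuracies))

Whether a prediction counts as correct against the original single label or against the expanded label set is a separate evaluation choice; the sketch simply treats the correctness flag as given.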
Keywords
Computer Vision, Dataset Quality Enhancement, Dataset Validation, Human-Computer Interaction, Multi-label Annotation

Downloads

  • DS766 acc.pdf: full text | open access | PDF | 9.72 MB
  • (...).pdf: full text | UGent only | PDF | 11.71 MB

Citation

MLA
Anzaku, Esla Timothy, et al. “Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement.” INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, vol. 14531, SPRINGER INTERNATIONAL PUBLISHING AG, 2024, pp. 295–309, doi:10.1007/978-3-031-53827-8_27.
APA
Anzaku, E. T., Hong, H., Park, J.-W., Yang, W., Kim, K., Won, J., … De Neve, W. (2024). Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement. INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 14531, 295–309. https://doi.org/10.1007/978-3-031-53827-8_27
Chicago author-date
Anzaku, Esla Timothy, Hyesoo Hong, Jin-Woo Park, Wonjun Yang, Kangmin Kim, JongBum Won, Deshika Vinoshani Kumari Herath, Arnout Van Messem, and Wesley De Neve. 2024. “Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement.” In INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 14531:295–309. CHAM: SPRINGER INTERNATIONAL PUBLISHING AG. https://doi.org/10.1007/978-3-031-53827-8_27.
Chicago author-date (all authors)
Anzaku, Esla Timothy, Hyesoo Hong, Jin-Woo Park, Wonjun Yang, Kangmin Kim, JongBum Won, Deshika Vinoshani Kumari Herath, Arnout Van Messem, and Wesley De Neve. 2024. “Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement.” In INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 14531:295–309. CHAM: SPRINGER INTERNATIONAL PUBLISHING AG. doi:10.1007/978-3-031-53827-8_27.
Vancouver
1. Anzaku ET, Hong H, Park J-W, Yang W, Kim K, Won J, et al. Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement. In: INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I. CHAM: SPRINGER INTERNATIONAL PUBLISHING AG; 2024. p. 295–309.
IEEE
[1] E. T. Anzaku et al., “Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement,” in INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, Daegu, SOUTH KOREA, 2024, vol. 14531, pp. 295–309.
@inproceedings{01HYMW54DMFKKAFFG31565EDT7,
  abstract     = {{Large-scale datasets for single-label multi-class classification, such as ImageNet-1k, have been instrumental in advancing deep learning and computer vision. However, a critical and often understudied aspect is the comprehensive quality assessment of these datasets, especially regarding potential multi-label annotation errors. In this paper, we introduce a lightweight, user-friendly, and scalable framework that synergizes human and machine intelligence for efficient dataset validation and quality enhancement. We term this novel framework Multilabelfy. Central to Multilabelfy is an adaptable web-based platform that systematically guides annotators through the re-evaluation process, effectively leveraging human-machine interactions to enhance dataset quality. By using Multilabelfy on the ImageNetV2 dataset, we found that approximately 47.88% of the images contained at least two labels, underscoring the need for more rigorous assessments of such influential datasets. Furthermore, our analysis showed a negative correlation between the number of potential labels per image and model top-1 accuracy, illuminating a crucial factor in model evaluation and selection. Our open-source framework, Multilabelfy, offers a convenient, lightweight solution for dataset enhancement, emphasizing multi-label proportions. This study tackles major challenges in dataset integrity and provides key insights into model performance evaluation. Moreover, it underscores the advantages of integrating human expertise with machine capabilities to produce more robust models and trustworthy data development.}},
  author       = {{Anzaku, Esla Timothy and Hong, Hyesoo and Park, Jin-Woo and Yang, Wonjun and Kim, Kangmin and Won, JongBum and Herath, Deshika Vinoshani Kumari and Van Messem, Arnout and De Neve, Wesley}},
  booktitle    = {{INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I}},
  isbn         = {{978-3-031-53826-1}},
  issn         = {{0302-9743}},
  keywords     = {{Computer Vision,Dataset Quality Enhancement,Dataset Validation,Human-Computer Interaction,Multi-label Annotation}},
  language     = {{eng}},
  location     = {{Daegu, SOUTH KOREA}},
  pages        = {{295--309}},
  publisher    = {{SPRINGER INTERNATIONAL PUBLISHING AG}},
  title        = {{Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement}},
  url          = {{http://doi.org/10.1007/978-3-031-53827-8_27}},
  volume       = {{14531}},
  year         = {{2024}},
}
