- Author
- Bart Baesens, Amy Adams, Rodrigo Pacheco-Ruiz, Ann-Sophie Baesens and Seppe vanden Broucke (UGent)
- Abstract
- We research how deep learning convolutional neural networks can be used to automatically classify the unique data set of black-and-white naval ship images from the Wright and Logan photographic collection held by the National Museum of the Royal Navy. We contrast various deep learning methods: pretrained models such as ConvNeXt, ResNet and EfficientNet, as well as ConvMixer. We also thoroughly investigate the impact of data preprocessing and externally obtained images on model performance. Finally, we research how the estimated models can be made transparent using visually appealing interpretability techniques such as Grad-CAM. We find that ConvNeXt has the best performance for our data set, achieving an accuracy of 79.62% for 0-notch classification and an impressive 94.86% for 1-notch classification. The results indicate the importance of appropriate image preprocessing: image segmentation combined with soft augmentation contributes significantly to model performance. We consider this research to be original in several respects. Notably, it distinguishes itself through the uniqueness of the acquired data set. Its distinctiveness also extends to the analytical modeling pipeline, which encompasses a comprehensive range of modeling steps, including data preprocessing (incorporating external data, image segmentation, and image augmentation) and the use of deep learning techniques such as ConvNeXt, ResNet, EfficientNet, and ConvMixer. Furthermore, the research employs explanatory tools such as Grad-CAM to enhance model interpretability and usability. We believe the proposed methodology offers considerable potential for documenting historic image collections.
- Keywords
- General Engineering, General Materials Science, General Computer Science, Electrical and Electronic Engineering, Convolutional neural networks, deep learning, explainability, digitised archives, image classification, royal navy, CLASSIFICATION, IMAGES
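To make the pipeline described in the abstract above more concrete, the block below is a minimal, hypothetical sketch of one such workflow: fine-tuning a pretrained ConvNeXt-Tiny from torchvision on a folder of labelled ship photographs and then computing a Grad-CAM heatmap by hand via a forward hook. This is not the authors' code; the folder name "train", the class count, the augmentation choices and all hyperparameters are assumptions, and a recent torchvision (0.13 or later) is assumed for the pretrained-weights enum.

# Minimal, hypothetical sketch (not the authors' code): fine-tune a pretrained
# ConvNeXt-Tiny from torchvision on an image-folder dataset and compute a
# Grad-CAM heatmap by hand. Paths, class count and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 10                      # hypothetical number of ship classes
device = "cuda" if torch.cuda.is_available() else "cpu"

# "Soft" augmentation: mild crops/flips that keep the ship silhouette intact.
train_tfms = transforms.Compose([
    transforms.Resize((232, 232)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("train", transform=train_tfms)   # hypothetical folder layout
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Pretrained backbone; only the final linear layer is replaced for our classes.
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, NUM_CLASSES)
model = model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                           # placeholder epoch count
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Grad-CAM by hand: weight each channel of the last feature map by the pooled
# gradient of the predicted class score, sum over channels, apply ReLU, normalise.
feature_maps = {}

def save_features(module, inputs, output):
    output.retain_grad()              # keep the gradient of this activation
    feature_maps["act"] = output

model.features[-1].register_forward_hook(save_features)

model.eval()
sample, _ = train_ds[0]
input_tensor = sample.unsqueeze(0).to(device)
scores = model(input_tensor)
pred_class = scores.argmax(dim=1).item()
scores[0, pred_class].backward()

act = feature_maps["act"]                                   # [1, C, H, W] feature map
weights = act.grad.mean(dim=(2, 3), keepdim=True)           # channel importance
heatmap = torch.relu((weights * act).sum(dim=1)).squeeze()  # [H, W] class-activation map
heatmap = heatmap / (heatmap.max() + 1e-8)                  # scaled to [0, 1] for overlay

Resizing the heatmap to the input resolution and overlaying it on the photograph then yields the kind of visual explanation of the classifier's focus that the abstract refers to.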
Downloads
- Explainable Deep Learning to Classify Royal Navy Ships.pdf | full text (Published version) | open access | 11.36 MB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-01HJB02T4WMT586WGCJFB2G5S7
- MLA
- Baesens, Bart, et al. “Explainable Deep Learning to Classify Royal Navy Ships.” IEEE ACCESS, vol. 12, 2024, pp. 1774–85, doi:10.1109/access.2023.3346061.
- APA
- Baesens, B., Adams, A., Pacheco-Ruiz, R., Baesens, A.-S., & vanden Broucke, S. (2024). Explainable deep learning to classify royal navy ships. IEEE ACCESS, 12, 1774–1785. https://doi.org/10.1109/access.2023.3346061
- Chicago author-date
- Baesens, Bart, Amy Adams, Rodrigo Pacheco-Ruiz, Ann-Sophie Baesens, and Seppe vanden Broucke. 2024. “Explainable Deep Learning to Classify Royal Navy Ships.” IEEE ACCESS 12: 1774–85. https://doi.org/10.1109/access.2023.3346061.
- Chicago author-date (all authors)
- Baesens, Bart, Amy Adams, Rodrigo Pacheco-Ruiz, Ann-Sophie Baesens, and Seppe vanden Broucke. 2024. “Explainable Deep Learning to Classify Royal Navy Ships.” IEEE ACCESS 12: 1774–1785. doi:10.1109/access.2023.3346061.
- Vancouver
- 1. Baesens B, Adams A, Pacheco-Ruiz R, Baesens A-S, vanden Broucke S. Explainable deep learning to classify royal navy ships. IEEE ACCESS. 2024;12:1774–85.
- IEEE
- [1] B. Baesens, A. Adams, R. Pacheco-Ruiz, A.-S. Baesens, and S. vanden Broucke, “Explainable deep learning to classify royal navy ships,” IEEE ACCESS, vol. 12, pp. 1774–1785, 2024.
- BibTeX
@article{01HJB02T4WMT586WGCJFB2G5S7,
  author   = {Baesens, Bart and Adams, Amy and Pacheco-Ruiz, Rodrigo and Baesens, Ann-Sophie and vanden Broucke, Seppe},
  title    = {Explainable deep learning to classify royal navy ships},
  journal  = {IEEE ACCESS},
  year     = {2024},
  volume   = {12},
  pages    = {1774--1785},
  issn     = {2169-3536},
  language = {eng},
  url      = {http://doi.org/10.1109/access.2023.3346061},
}