
Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility

Ivana Shopovska (UGent), Ljubomir Jovanov (UGent) and Wilfried Philips (UGent)
(2019) Sensors. 19(17). p.1-21
Abstract
Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirmed the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative than a regular RGB camera to a human driver in challenging visibility conditions. The main novelty of this paper is the idea to rely on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on pedestrian appearance, which is not guaranteed with methods that focus on low-level image quality metrics.
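The dual-objective training described in the abstract lends itself to a simple illustration. Below is a minimal sketch (not the authors' released code) of how the two loss terms might be combined in a PyTorch-style training step; the names fusion_net, detector.loss, and the weight lambda_det are hypothetical placeholders, and plain L1 distance stands in for whichever RGB similarity metric the paper actually uses.

import torch
import torch.nn.functional as F

def fusion_training_step(fusion_net, detector, rgb, thermal, boxes,
                         lambda_det=0.1):
    """One training step for a visible/thermal fusion network (sketch).

    rgb:     (B, 3, H, W) visible-light input
    thermal: (B, 1, H, W) co-registered thermal input
    boxes:   ground-truth pedestrian annotations for the auxiliary detector
    """
    # Fuse the two modalities into an RGB-like output image.
    fused = fusion_net(torch.cat([rgb, thermal], dim=1))  # (B, 3, H, W)

    # Objective 1: similarity to the RGB input, so the fused result keeps
    # a natural truecolor appearance (L1 stands in for the paper's metric).
    sim_loss = F.l1_loss(fused, rgb)

    # Objective 2: auxiliary pedestrian detection error on the fused image,
    # encouraging human-relevant thermal detail to be blended in.
    # detector.loss is a hypothetical API standing in for a detection head.
    det_loss = detector.loss(fused, boxes)

    total = sim_loss + lambda_det * det_loss
    total.backward()
    return total.item()

In such a setup, lambda_det would control the trade-off between keeping the output close to the visible input and injecting the thermal cues the detector needs to localize pedestrians.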
Keywords
image fusion, visible, infrared, ADAS, pedestrian detection, deep learning

Downloads

  • sensors-19-03727-v2.pdf (full text | open access | PDF | 7.46 MB)

Citation

Please use this URL to cite or link to this publication:

MLA
Shopovska, Ivana, Ljubomir Jovanov, and Wilfried Philips. “Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility.” Sensors 19.17 (2019): 1–21. Print.
APA
Shopovska, I., Jovanov, L., & Philips, W. (2019). Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility. Sensors, 19(17), 1–21.
Chicago author-date
Shopovska, Ivana, Ljubomir Jovanov, and Wilfried Philips. 2019. “Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility.” Sensors 19 (17): 1–21.
Vancouver
1. Shopovska I, Jovanov L, Philips W. Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility. Sensors. Multidisciplinary Digital Publishing Institute; 2019;19(17):1–21.
IEEE
[1] I. Shopovska, L. Jovanov, and W. Philips, “Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility,” Sensors, vol. 19, no. 17, pp. 1–21, 2019.
BibTeX
@article{8626238,
  abstract     = {Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirmed the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative than a regular RGB camera to a human driver in challenging visibility conditions. The main novelty of this paper is the idea to rely on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on pedestrian appearance, which is not guaranteed with methods that focus on low-level image quality metrics.},
  articleno    = {3727},
  author       = {Shopovska, Ivana and Jovanov, Ljubomir and Philips, Wilfried},
  issn         = {1424-8220},
  journal      = {Sensors},
  keywords     = {image fusion,visible,infrared,ADAS,pedestrian detection,deep learning},
  language     = {eng},
  number       = {17},
  pages        = {3727:1--3727:21},
  publisher    = {Multidisciplinary Digital Publishing Institute},
  title        = {Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility},
  url          = {http://dx.doi.org/10.3390/s19173727},
  volume       = {19},
  year         = {2019},
}
