
Fail-safe human detection for drones using a multi-modal curriculum learning approach

Abstract
Drones are currently being explored for safety-critical applications in which human agents are expected to operate in their vicinity. In such applications, robust people avoidance must be provided by fusing a number of sensing modalities in order to avoid collisions. Currently, however, people detection systems used on drones are based solely on standard cameras, aside from an emerging number of works discussing the fusion of imaging and event-based cameras. Radar-based systems, on the other hand, provide the utmost robustness to environmental conditions but do not provide complete information on their own, and have mainly been investigated in automotive contexts, not for drones. In order to enable the fusion of radars with both event-based and standard cameras, we present KUL-UAVSAFE, a first-of-its-kind dataset for the study of safety-critical people detection by drones. In addition, we propose a baseline CNN architecture with cross-fusion highways and introduce a curriculum learning strategy for multi-modal data termed SAUL, which greatly enhances the robustness of the system to hard RGB failures and provides a significant gain of 15% in peak F-1 score compared to the use of BlackIn, previously proposed for cross-fusion networks. We demonstrate the real-time performance and feasibility of the approach by implementing the system on an edge-computing unit. We release our dataset and additional material on the project home page.
Keywords
Artificial Intelligence, Control and Optimization, Computer Science Applications, Computer Vision and Pattern Recognition, Mechanical Engineering, Human-Computer Interaction, Biomedical Engineering, Control and Systems Engineering
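
The exact SAUL curriculum and cross-fusion architecture are described in the paper itself, not on this page. Purely as a rough illustration of the kind of training the abstract refers to, the following minimal PyTorch sketch trains a three-branch (RGB, event, radar) fusion network while gradually raising the probability of blanking the RGB input, so the network learns to fall back on the other modalities when RGB fails. All module names, channel counts, and the blackout schedule are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch only: a three-branch fusion network with a curriculum
# that gradually increases the probability of simulated RGB failures.
# Module names, channel counts, and the schedule are assumptions, not the
# paper's SAUL/BlackIn implementations.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small per-modality encoder (placeholder for the paper's backbones)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class CrossFusionNet(nn.Module):
    """Concatenation-based fusion head; inputs are assumed pre-aligned
    to a common spatial grid (a stand-in for cross-fusion highways)."""
    def __init__(self):
        super().__init__()
        self.rgb, self.event, self.radar = Branch(3), Branch(2), Branch(1)
        self.head = nn.Conv2d(3 * 32, 1, 1)  # per-pixel detection logits
    def forward(self, rgb, event, radar):
        feats = torch.cat([self.rgb(rgb), self.event(event), self.radar(radar)], dim=1)
        return self.head(feats)

def rgb_blackout_prob(epoch, n_epochs, p_max=0.5):
    """Curriculum: start from clean RGB, ramp up simulated RGB failures."""
    return p_max * min(1.0, epoch / (0.5 * n_epochs))

def train_step(model, batch, epoch, n_epochs):
    rgb, event, radar, target = batch
    if torch.rand(()) < rgb_blackout_prob(epoch, n_epochs):
        rgb = torch.zeros_like(rgb)  # simulate a hard RGB failure
    logits = model(rgb, event, radar)
    return nn.functional.binary_cross_entropy_with_logits(logits, target)

By contrast, a BlackIn-style baseline would apply modality blanking at a fixed rate from the first epoch; the curriculum variant differs only in making that rate a function of training progress.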

Downloads

  • 8014 acc.pdf | full text (Accepted manuscript) | open access | PDF | 3.01 MB
  • (...).pdf | full text (Published version) | UGent only | PDF | 2.38 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Safa, Ali, et al. “Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach.” IEEE ROBOTICS AND AUTOMATION LETTERS, vol. 7, no. 1, 2022, pp. 303–10, doi:10.1109/lra.2021.3125450.
APA
Safa, A., Verbelen, T., Ocket, I., Bourdoux, A., Catthoor, F., & Gielen, G. G. E. (2022). Fail-safe human detection for drones using a multi-modal curriculum learning approach. IEEE ROBOTICS AND AUTOMATION LETTERS, 7(1), 303–310. https://doi.org/10.1109/lra.2021.3125450
Chicago author-date
Safa, Ali, Tim Verbelen, Ilja Ocket, Andre Bourdoux, Francky Catthoor, and Georges G. E. Gielen. 2022. “Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach.” IEEE ROBOTICS AND AUTOMATION LETTERS 7 (1): 303–10. https://doi.org/10.1109/lra.2021.3125450.
Chicago author-date (all authors)
Safa, Ali, Tim Verbelen, Ilja Ocket, Andre Bourdoux, Francky Catthoor, and Georges G. E. Gielen. 2022. “Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach.” IEEE ROBOTICS AND AUTOMATION LETTERS 7 (1): 303–310. doi:10.1109/lra.2021.3125450.
Vancouver
1. Safa A, Verbelen T, Ocket I, Bourdoux A, Catthoor F, Gielen GGE. Fail-safe human detection for drones using a multi-modal curriculum learning approach. IEEE ROBOTICS AND AUTOMATION LETTERS. 2022;7(1):303–10.
IEEE
[1] A. Safa, T. Verbelen, I. Ocket, A. Bourdoux, F. Catthoor, and G. G. E. Gielen, “Fail-safe human detection for drones using a multi-modal curriculum learning approach,” IEEE ROBOTICS AND AUTOMATION LETTERS, vol. 7, no. 1, pp. 303–310, 2022.
@article{8727285,
  abstract     = {{Drones are currently being explored for safety-critical applications in which human agents are expected to operate in their vicinity. In such applications, robust people avoidance must be provided by fusing a number of sensing modalities in order to avoid collisions. Currently, however, people detection systems used on drones are based solely on standard cameras, aside from an emerging number of works discussing the fusion of imaging and event-based cameras. Radar-based systems, on the other hand, provide the utmost robustness to environmental conditions but do not provide complete information on their own, and have mainly been investigated in automotive contexts, not for drones. In order to enable the fusion of radars with both event-based and standard cameras, we present KUL-UAVSAFE, a first-of-its-kind dataset for the study of safety-critical people detection by drones. In addition, we propose a baseline CNN architecture with cross-fusion highways and introduce a curriculum learning strategy for multi-modal data termed SAUL, which greatly enhances the robustness of the system to hard RGB failures and provides a significant gain of 15% in peak F-1 score compared to the use of BlackIn, previously proposed for cross-fusion networks. We demonstrate the real-time performance and feasibility of the approach by implementing the system on an edge-computing unit. We release our dataset and additional material on the project home page.}},
  author       = {{Safa, Ali and Verbelen, Tim and Ocket, Ilja and Bourdoux, Andre and Catthoor, Francky and Gielen, Georges G. E.}},
  issn         = {{2377-3766}},
  journal      = {{IEEE ROBOTICS AND AUTOMATION LETTERS}},
  keywords     = {{Artificial Intelligence,Control and Optimization,Computer Science Applications,Computer Vision and Pattern Recognition,Mechanical Engineering,Human-Computer Interaction,Biomedical Engineering,Control and Systems Engineering}},
  language     = {{eng}},
  number       = {{1}},
  pages        = {{303--310}},
  title        = {{Fail-safe human detection for drones using a multi-modal curriculum learning approach}},
  url          = {{http://dx.doi.org/10.1109/lra.2021.3125450}},
  volume       = {{7}},
  year         = {{2022}},
}
