
Indoor human activity recognition using high-dimensional sensors and deep neural networks

Baptist Vandersmissen (UGent) , Nicolas Knudde (UGent) , Azarakhsh Jalalvand (UGent) , Ivo Couckuyt (UGent) , Tom Dhaene (UGent) and Wesley De Neve (UGent)
(2020) NEURAL COMPUTING & APPLICATIONS. 32(16). p.12295-12309
Abstract
Many smart home applications rely on indoor human activity recognition. This challenge is currently primarily tackled by employing video camera sensors. However, the use of such sensors is characterized by fundamental technical deficiencies in an indoor environment, often also resulting in a breach of privacy. In contrast, a radar sensor resolves most of these flaws and maintains privacy in particular. In this paper, we investigate a novel approach toward automatic indoor human activity recognition, feeding high-dimensional radar and video camera sensor data into several deep neural networks. Furthermore, we explore the efficacy of sensor fusion to provide a solution in less than ideal circumstances. We validate our approach on two newly constructed and published data sets that consist of 2347 and 1505 samples distributed over six different types of gestures and events, respectively. From our analysis, we can conclude that, when considering a radar sensor, it is optimal to make use of a three-dimensional convolutional neural network that takes as input sequential range-Doppler maps. This model achieves 12.22% and 2.97% error rate on the gestures and the events data set, respectively. A pretrained residual network is employed to deal with the video camera sensor data and obtains 1.67% and 3.00% error rate on the same data sets. We show that there exists a clear benefit in combining both sensors to enable activity recognition in the case of less than ideal circumstances.
Keywords
MICRO-DOPPLER, RADAR, Activity recognition, Deep neural networks, High-dimensional sensors, Sensor fusion
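
As a rough illustration of the kind of model the abstract describes (a three-dimensional convolutional neural network over sequential range-Doppler maps), the following is a minimal sketch only. It is not the authors' exact architecture; the input dimensions, channel counts, and layer sizes are illustrative assumptions, and PyTorch is assumed as the framework.

# Minimal sketch of a 3D CNN classifying a sequence of range-Doppler maps
# into one of six activity classes. Shapes and layer sizes are assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn

class RangeDoppler3DCNN(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Convolve jointly over time (frames) and the range/Doppler axes.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),  # collapse remaining time/range/Doppler dims
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        # x: (batch, 1, frames, range_bins, doppler_bins)
        return self.classifier(self.features(x))

# Example: a batch of 4 clips, each 16 frames of 64x64 range-Doppler maps.
if __name__ == "__main__":
    model = RangeDoppler3DCNN()
    dummy = torch.randn(4, 1, 16, 64, 64)
    print(model(dummy).shape)  # torch.Size([4, 6])

The sketch only shows the input convention (a single-channel volume of stacked range-Doppler frames) and the overall 3D-convolution-then-classify structure; the paper should be consulted for the actual layer configuration and training details.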

Downloads

  • (...).pdf — full text (Accepted manuscript) | UGent only | PDF | 2.63 MB
  • (...).pdf — full text (Published version) | UGent only | PDF | 1.05 MB

Citation

Please use this url to cite or link to this publication:

MLA
Vandersmissen, Baptist, et al. “Indoor Human Activity Recognition Using High-Dimensional Sensors and Deep Neural Networks.” NEURAL COMPUTING & APPLICATIONS, vol. 32, no. 16, 2020, pp. 12295–309, doi:10.1007/s00521-019-04408-1.
APA
Vandersmissen, B., Knudde, N., Jalalvand, A., Couckuyt, I., Dhaene, T., & De Neve, W. (2020). Indoor human activity recognition using high-dimensional sensors and deep neural networks. NEURAL COMPUTING & APPLICATIONS, 32(16), 12295–12309. https://doi.org/10.1007/s00521-019-04408-1
Chicago author-date
Vandersmissen, Baptist, Nicolas Knudde, Azarakhsh Jalalvand, Ivo Couckuyt, Tom Dhaene, and Wesley De Neve. 2020. “Indoor Human Activity Recognition Using High-Dimensional Sensors and Deep Neural Networks.” NEURAL COMPUTING & APPLICATIONS 32 (16): 12295–309. https://doi.org/10.1007/s00521-019-04408-1.
Chicago author-date (all authors)
Vandersmissen, Baptist, Nicolas Knudde, Azarakhsh Jalalvand, Ivo Couckuyt, Tom Dhaene, and Wesley De Neve. 2020. “Indoor Human Activity Recognition Using High-Dimensional Sensors and Deep Neural Networks.” NEURAL COMPUTING & APPLICATIONS 32 (16): 12295–12309. doi:10.1007/s00521-019-04408-1.
Vancouver
1. Vandersmissen B, Knudde N, Jalalvand A, Couckuyt I, Dhaene T, De Neve W. Indoor human activity recognition using high-dimensional sensors and deep neural networks. NEURAL COMPUTING & APPLICATIONS. 2020;32(16):12295–309.
IEEE
[1] B. Vandersmissen, N. Knudde, A. Jalalvand, I. Couckuyt, T. Dhaene, and W. De Neve, “Indoor human activity recognition using high-dimensional sensors and deep neural networks,” NEURAL COMPUTING & APPLICATIONS, vol. 32, no. 16, pp. 12295–12309, 2020.
@article{8673047,
  abstract     = {Many smart home applications rely on indoor human activity recognition. This challenge is currently primarily tackled by employing video camera sensors. However, the use of such sensors is characterized by fundamental technical deficiencies in an indoor environment, often also resulting in a breach of privacy. In contrast, a radar sensor resolves most of these flaws and maintains privacy in particular. In this paper, we investigate a novel approach toward automatic indoor human activity recognition, feeding high-dimensional radar and video camera sensor data into several deep neural networks. Furthermore, we explore the efficacy of sensor fusion to provide a solution in less than ideal circumstances. We validate our approach on two newly constructed and published data sets that consist of 2347 and 1505 samples distributed over six different types of gestures and events, respectively. From our analysis, we can conclude that, when considering a radar sensor, it is optimal to make use of a three-dimensional convolutional neural network that takes as input sequential range-Doppler maps. This model achieves 12.22% and 2.97% error rate on the gestures and the events data set, respectively. A pretrained residual network is employed to deal with the video camera sensor data and obtains 1.67% and 3.00% error rate on the same data sets. We show that there exists a clear benefit in combining both sensors to enable activity recognition in the case of less than ideal circumstances.},
  author       = {Vandersmissen, Baptist and Knudde, Nicolas and Jalalvand, Azarakhsh and Couckuyt, Ivo and Dhaene, Tom and De Neve, Wesley},
  issn         = {0941-0643},
  journal      = {NEURAL COMPUTING & APPLICATIONS},
  keywords     = {MICRO-DOPPLER,RADAR,Activity recognition,Deep neural networks,High-dimensional sensors,Sensor fusion},
  language     = {eng},
  number       = {16},
  pages        = {12295--12309},
  title        = {Indoor human activity recognition using high-dimensional sensors and deep neural networks},
  url          = {http://dx.doi.org/10.1007/s00521-019-04408-1},
  volume       = {32},
  year         = {2020},
}
