
Multi-fidelity deep neural networks for adaptive inference in the internet of multimedia things

Sam Leroux (UGent) , Steven Bohez (UGent) , Elias De Coninck, Pieter Van Molle (UGent) , Bert Vankeirsbilck (UGent) , Tim Verbelen (UGent) , Pieter Simoens (UGent) and Bart Dhoedt (UGent)
Abstract
Internet of Things (IoT) infrastructures increasingly rely on multimedia sensors to provide information about the environment. Deep neural networks (DNNs) can extract knowledge from this audiovisual data, but they typically require large amounts of resources (processing power, memory and energy). If all limitations of the execution environment are known beforehand, we can design neural networks under these constraints. An IoT setting, however, is a very heterogeneous environment where the constraints can change rapidly. We propose a technique that allows us to deploy a variety of different networks at runtime, each with a specific complexity-accuracy trade-off, but without having to store each network independently. We train a sequence of networks of increasing size and constrain each network to contain the parameters of all smaller networks in the sequence. We only need to store the largest network to be able to deploy each of the smaller networks. We experimentally validate our approach on different benchmark datasets for image recognition and conclude that we can build networks that support multiple trade-offs between accuracy and computational cost. (C) 2019 Elsevier B.V. All rights reserved.
Keywords
IoT, Deep neural networks, Resource efficient inference
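
The core idea summarized in the abstract (nesting the parameters of every smaller network inside the largest one, so that only the largest network has to be stored) can be illustrated with a minimal sketch. The code below is not the authors' implementation; the NestedLinear layer, the two-layer MLP and the hidden widths are illustrative assumptions. It only shows how several networks of increasing width can share slices of a single stored weight tensor and how one of them can be selected at inference time.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NestedLinear(nn.Module):
    # Linear layer whose smaller variants reuse the top-left block of the full
    # weight matrix, so only the parameters of the largest network are stored.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x, in_used=None, out_used=None):
        in_used = in_used or self.weight.shape[1]
        out_used = out_used or self.weight.shape[0]
        w = self.weight[:out_used, :in_used]   # shared slice of the stored matrix
        b = self.bias[:out_used]
        return F.linear(x[..., :in_used], w, b)

class MultiFidelityMLP(nn.Module):
    # Two-layer MLP; 'widths' lists the hidden sizes of the nested networks,
    # smallest first. Choosing a width at run time trades accuracy for cost.
    def __init__(self, in_dim=784, widths=(64, 128, 256), num_classes=10):
        super().__init__()
        self.widths = widths
        self.fc1 = NestedLinear(in_dim, max(widths))
        self.fc2 = NestedLinear(max(widths), num_classes)

    def forward(self, x, fidelity=-1):
        h = self.widths[fidelity]              # pick one of the nested widths
        x = F.relu(self.fc1(x, out_used=h))
        return self.fc2(x, in_used=h)

# Only the full-width parameters exist; every fidelity level reuses them.
model = MultiFidelityMLP()
x = torch.randn(1, 784)
logits_small = model(x, fidelity=0)    # cheapest nested network
logits_full = model(x, fidelity=-1)    # full-width network

Training would have to keep every nested width accurate while sharing parameters; the abstract describes training the networks in order of increasing size under the constraint that each one contains the parameters of all smaller ones.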

Downloads

  • (...).pdf: full text | UGent only | PDF | 692.74 KB
  • 7412 i.pdf: full text | open access | PDF | 397.52 KB

Citation

Please use this url to cite or link to this publication:

MLA
Leroux, Sam, et al. “Multi-Fidelity Deep Neural Networks for Adaptive Inference in the Internet of Multimedia Things.” FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, vol. 97, Elsevier Science Bv, 2019, pp. 355–60, doi:10.1016/j.future.2019.03.001.
APA
Leroux, S., Bohez, S., De Coninck, E., Van Molle, P., Vankeirsbilck, B., Verbelen, T., … Dhoedt, B. (2019). Multi-fidelity deep neural networks for adaptive inference in the internet of multimedia things. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 97, 355–360. https://doi.org/10.1016/j.future.2019.03.001
Chicago author-date
Leroux, Sam, Steven Bohez, Elias De Coninck, Pieter Van Molle, Bert Vankeirsbilck, Tim Verbelen, Pieter Simoens, and Bart Dhoedt. 2019. “Multi-Fidelity Deep Neural Networks for Adaptive Inference in the Internet of Multimedia Things.” FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE 97: 355–60. https://doi.org/10.1016/j.future.2019.03.001.
Chicago author-date (all authors)
Leroux, Sam, Steven Bohez, Elias De Coninck, Pieter Van Molle, Bert Vankeirsbilck, Tim Verbelen, Pieter Simoens, and Bart Dhoedt. 2019. “Multi-Fidelity Deep Neural Networks for Adaptive Inference in the Internet of Multimedia Things.” FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE 97: 355–360. doi:10.1016/j.future.2019.03.001.
Vancouver
1. Leroux S, Bohez S, De Coninck E, Van Molle P, Vankeirsbilck B, Verbelen T, et al. Multi-fidelity deep neural networks for adaptive inference in the internet of multimedia things. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE. 2019;97:355–60.
IEEE
[1] S. Leroux et al., “Multi-fidelity deep neural networks for adaptive inference in the internet of multimedia things,” FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, vol. 97, pp. 355–360, 2019.
BibTeX
@article{8619122,
  abstract     = {{Internet of Things (IoT) infrastructures are more and more relying on multimedia sensors to provide information about the environment. Deep neural networks (DNNs) could extract knowledge from this audiovisual data but they typically require large amounts of resources (processing power, memory and energy). If all limitations of the execution environment are known beforehand, we can design neural networks under these constraints. An IoT setting however is a very heterogeneous environment where the constraints can change rapidly. We propose a technique allowing us to deploy a variety of different networks at runtime, each with a specific complexity-accuracy trade-off but without having to store each network independently. We train a sequence of networks of increasing size and constrain each network to contain the parameters of all smaller networks in the sequence. We only need to store the largest network to be able to deploy each of the smaller networks. We experimentally validate our approach on different benchmark datasets for image recognition and conclude that we can build networks that support multiple trade-offs between accuracy and computational cost. (C) 2019 Elsevier B.V. All rights reserved.}},
  author       = {{Leroux, Sam and Bohez, Steven and De Coninck, Elias and Van Molle, Pieter and Vankeirsbilck, Bert and Verbelen, Tim and Simoens, Pieter and Dhoedt, Bart}},
  issn         = {{0167-739X}},
  journal      = {{FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE}},
  keywords     = {{IoT,Deep neural networks,Resource efficient inference}},
  language     = {{eng}},
  pages        = {{355--360}},
  publisher    = {{Elsevier Science Bv}},
  title        = {{Multi-fidelity deep neural networks for adaptive inference in the internet of multimedia things}},
  url          = {{http://doi.org/10.1016/j.future.2019.03.001}},
  volume       = {{97}},
  year         = {{2019}},
}
