
Resource-constrained classification using a cascade of neural network layers

Sam Leroux (UGent), Steven Bohez (UGent), Tim Verbelen (UGent), Bert Vankeirsbilck (UGent), Pieter Simoens (UGent) and Bart Dhoedt (UGent)
Abstract
Deep neural networks are the state-of-the-art technique for a wide variety of classification problems. Although deeper networks are able to make more accurate classifications, the value brought by an additional hidden layer diminishes rapidly. Even shallow networks achieve relatively good results on various classification problems; only for a small subset of the samples do the deeper layers make a significant difference. We describe an architecture in which only the samples that cannot be classified with sufficient confidence by a shallow network have to be processed by the deeper layers. Instead of training a network with one output layer at the end, we train several output layers, one for each hidden layer. When an output layer is sufficiently confident in its result, we stop propagating at that layer and the deeper layers need not be evaluated. The choice of a threshold confidence value allows us to trade off accuracy and speed.

Applied in the Internet-of-Things (IoT) context, this approach makes it possible to distribute the layers of a neural network between low-powered devices and powerful servers in the cloud: the remote layers are needed only when the local layers are unable to make an accurate classification. Such an architecture adds the intelligence of a deep neural network to resource-constrained devices such as sensor nodes and other IoT devices.

We evaluated our approach on the MNIST and CIFAR10 datasets. On MNIST we retain the same accuracy at half the computational cost; on the more difficult CIFAR10 we obtained a relative speed-up of 33% at a marginal increase in error rate, from 15.3% to 15.8%.
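The mechanism the abstract describes, an output (exit) layer attached to each hidden layer with forward propagation stopping at the first exit whose confidence clears a threshold, can be sketched roughly as below. This is a minimal PyTorch illustration under stated assumptions, not the paper's exact architecture: the fully connected layer sizes, the use of the top softmax probability as the confidence measure, and the 0.9 threshold are all illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeNet(nn.Module):
    """Sketch of a cascade with one output (exit) head per hidden layer."""
    def __init__(self, in_dim=784, hidden_dim=256, n_classes=10, n_layers=3):
        super().__init__()
        self.hidden_layers = nn.ModuleList()
        self.exits = nn.ModuleList()  # one classifier head per hidden layer
        dim = in_dim
        for _ in range(n_layers):
            self.hidden_layers.append(nn.Linear(dim, hidden_dim))
            self.exits.append(nn.Linear(hidden_dim, n_classes))
            dim = hidden_dim

    def forward(self, x, threshold=0.9):
        # Propagate layer by layer; return at the first exit whose top
        # softmax probability reaches the threshold, so the deeper layers
        # are evaluated only for the hard samples. Single-sample inference
        # is assumed here for simplicity; at training time each exit head
        # would receive its own classification loss.
        for layer, exit_head in zip(self.hidden_layers, self.exits):
            x = F.relu(layer(x))
            probs = F.softmax(exit_head(x), dim=-1)
            confidence, prediction = probs.max(dim=-1)
            if confidence.item() >= threshold:
                return prediction, confidence  # early exit
        return prediction, confidence  # fell through to the deepest exit

net = CascadeNet()
pred, conf = net(torch.randn(1, 784), threshold=0.9)

Lowering the threshold lets more samples exit early (faster, potentially less accurate), while raising it routes more samples through the deeper layers; this is the accuracy/speed trade-off the abstract mentions. In the IoT setting, the first few iterations of the loop could run on the device and the remaining layers on a remote server.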
Keywords
IBCN

Downloads

  • 6479 i.pdf: full text | open access | PDF | 922.76 KB
  • (...).pdf: full text | UGent only | PDF | 2.18 MB

Citation


MLA
Leroux, Sam, et al. “Resource-Constrained Classification Using a Cascade of Neural Network Layers.” IEEE International Joint Conference on Neural Networks (IJCNN), 2015, pp. 1–7.
APA
Leroux, S., Bohez, S., Verbelen, T., Vankeirsbilck, B., Simoens, P., & Dhoedt, B. (2015). Resource-constrained classification using a cascade of neural network layers. IEEE International Joint Conference on Neural Networks (IJCNN), 1–7.
Chicago author-date
Leroux, Sam, Steven Bohez, Tim Verbelen, Bert Vankeirsbilck, Pieter Simoens, and Bart Dhoedt. 2015. “Resource-Constrained Classification Using a Cascade of Neural Network Layers.” In IEEE International Joint Conference on Neural Networks (IJCNN), 1–7.
Vancouver
1. Leroux S, Bohez S, Verbelen T, Vankeirsbilck B, Simoens P, Dhoedt B. Resource-constrained classification using a cascade of neural network layers. In: IEEE International Joint Conference on Neural Networks (IJCNN). 2015. p. 1–7.
IEEE
[1] S. Leroux, S. Bohez, T. Verbelen, B. Vankeirsbilck, P. Simoens, and B. Dhoedt, “Resource-constrained classification using a cascade of neural network layers,” in IEEE International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 2015, pp. 1–7.
@inproceedings{7017811,
  abstract     = {{Deep neural networks are the state-of-the-art technique for a wide variety of classification problems. Although deeper networks are able to make more accurate classifications, the value brought by an additional hidden layer diminishes rapidly. Even shallow networks achieve relatively good results on various classification problems; only for a small subset of the samples do the deeper layers make a significant difference. We describe an architecture in which only the samples that cannot be classified with sufficient confidence by a shallow network have to be processed by the deeper layers. Instead of training a network with one output layer at the end, we train several output layers, one for each hidden layer. When an output layer is sufficiently confident in its result, we stop propagating at that layer and the deeper layers need not be evaluated. The choice of a threshold confidence value allows us to trade off accuracy and speed.

Applied in the Internet-of-Things (IoT) context, this approach makes it possible to distribute the layers of a neural network between low-powered devices and powerful servers in the cloud: the remote layers are needed only when the local layers are unable to make an accurate classification. Such an architecture adds the intelligence of a deep neural network to resource-constrained devices such as sensor nodes and other IoT devices.

We evaluated our approach on the MNIST and CIFAR10 datasets. On MNIST we retain the same accuracy at half the computational cost; on the more difficult CIFAR10 we obtained a relative speed-up of 33% at a marginal increase in error rate, from 15.3% to 15.8%.}},
  author       = {{Leroux, Sam and Bohez, Steven and Verbelen, Tim and Vankeirsbilck, Bert and Simoens, Pieter and Dhoedt, Bart}},
  booktitle    = {{IEEE International Joint Conference on Neural Networks (IJCNN)}},
  isbn         = {{978-1-4799-1959-8}},
  issn         = {{2161-4393}},
  keywords     = {{IBCN}},
  language     = {{eng}},
  location     = {{Killarney, Ireland}},
  pages        = {{1--7}},
  title        = {{Resource-constrained classification using a cascade of neural network layers}},
  year         = {{2015}},
}
